Similar articles
20 similar articles found (search time: 31 ms)
1.
Musically trained and untrained listeners were required to listen to 27 musical excerpts and to group those that conveyed a similar emotional meaning (Experiment 1). The groupings were transformed into a matrix of emotional dissimilarity that was analysed through multidimensional scaling methods (MDS). A 3-dimensional space was found to provide a good fit of the data, with arousal and emotional valence as the primary dimensions. Experiments 2 and 3 confirmed the consistency of this 3-dimensional space using excerpts of only 1 second duration. The overall findings indicate that emotional responses to music are very stable within and between participants, and are weakly influenced by musical expertise and excerpt duration. These findings are discussed in light of a cognitive account of musical emotion.
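The MDS step described in this abstract can be illustrated in a few lines. This is not the authors' analysis code, just a minimal sketch of classical (Torgerson) MDS, assuming a symmetric dissimilarity matrix of the kind built from listeners' groupings; the 4-item toy matrix below is invented:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed items in k dimensions
    from a symmetric dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the k largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy dissimilarities among 4 "excerpts": two pairs of similar items.
d = np.array([[0., 1., 5., 5.],
              [1., 0., 5., 5.],
              [5., 5., 0., 1.],
              [5., 5., 1., 0.]])
coords = classical_mds(d, k=2)
within  = np.linalg.norm(coords[0] - coords[1])
between = np.linalg.norm(coords[0] - coords[2])
print(within < between)  # → True: similar excerpts land closer together
```

With real grouping data, the axes of the recovered space are then interpreted post hoc (here, as arousal and valence).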

2.
A perceptual performance paradigm was designed to disentangle the timing variations in music performance that are due to perceptual compensation, motor control, and musical communication. First, pianists perceptually adjusted the interonset intervals of three excerpts so that they sounded regular. These adjustments deviated systematically from regularity, highlighting two sources of perceptual biases in time perception: rhythmic grouping and a psychoacoustic intensity effect. Then the participants performed the excerpts on the piano in the same regular way. The intensity effect disappeared, and some variations due to motor constraints were observed in relation to rhythmic groups. Finally, the participants performed the excerpts musically. Variations due to musical communication involved additional group-final lengthening that reflected the hierarchical grouping structure of the excerpts. These results underline the central role of grouping in musical time perception and production.

3.
In the present study, the gating paradigm was used to measure how much of a musical excerpt needs to be heard to support judgments of familiarity and of emotionality. Nonmusicians heard segments of increasing duration (250, 500, 1,000 msec, etc.). The stimuli were segments from familiar and unfamiliar musical excerpts in Experiment 1 and from very moving and emotionally neutral musical excerpts in Experiment 2. Participants judged how familiar (Experiment 1) or how moving (Experiment 2) the excerpt was to them. Results show that a feeling of familiarity can be triggered by 500-msec segments, and that the distinction between moving and neutral can be made for 250-msec segments. This finding extends the observation of fast-acting cognitive and emotional processes from face and voice perception to music perception.
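The gating manipulation itself is simple to sketch: successively longer onset-aligned segments are cut from the same excerpt. This is an illustration only (the sample rate and duration set are assumptions, not taken from the study):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def gates(signal, durations_ms=(250, 500, 1000, 2000)):
    """Return successively longer onset-aligned segments ('gates')
    of a mono signal, as in the gating paradigm."""
    return [signal[: int(SR * ms / 1000)] for ms in durations_ms]

excerpt = np.zeros(SR * 5)              # placeholder 5-s excerpt
segments = gates(excerpt)
print([len(s) / SR for s in segments])  # → [0.25, 0.5, 1.0, 2.0]
```

Each listener would then judge every gate in turn, and the shortest gate yielding a reliable judgment estimates how quickly the relevant information accrues.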

4.
To investigate the mechanism of audiovisual processing of musical emotion and how it is affected by emotion type and musical training, this study used videos of music performances expressing happiness or sadness, and compared musicians and non-musicians on the speed, accuracy, and intensity of their emotion ratings under three conditions: auditory-only, visual-only, and audiovisual. The results showed that: (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition; (2) non-musicians rated sadness more accurately than musicians, but rated happiness less accurately. These findings suggest that the audiovisual integration advantage in musical emotion processing exists only relative to the visual-only channel; that non-musicians are more sensitive to changes in visual emotional cues, whereas musicians rely more on their musical experience; and that adding congruent visual emotional information to music performances may help listeners without musical training.

5.
In tennis, the non-verbal behaviours shown after a rally may indicate the affective state of players. The purpose of the present study was to assess whether (a) the point outcome, (b) the duration of video-excerpts, and (c) the tennis expertise of the participants would influence the recognition rates of the affective state. To that end, 115 participants were shown non-verbal behaviour of tennis players after a point and asked to rate whether the player had just won or lost a point. The results indicate that the recognition rates were higher for lost than for won points. Moreover, participants who were members of a tennis club had a higher recognition rate. Finally, there was no difference in the recognition rate regarding the duration of video excerpts. The findings point to a negativity bias and the bio-cultural framework in relation to the recognition of affective states associated with non-verbal behaviour.

6.
The way people with various degrees of musical training integrate timbre, melodic contour, rhythm, and pitch information in an overall pleasantness judgment for musical excerpts was investigated. The theoretical and methodological framework of the study was the functional theory of cognition. In 2 experiments, participants were asked to attribute an overall pleasantness value to combinations of these factors. In Experiment 1, timbre, contour, rhythm, and overall pitch were manipulated. In Experiment 2, timbre and theme (a pattern of pitch and rhythm) were manipulated. Both experiments showed that in judging the pleasantness of musical combinations, participants apply a simple, additive rule in which the weight attributed to one element does not depend on the value of the other elements. Very few differences in regard to the combination rule were observed between participants with and without musical training. These results are discussed in reference to the controversy over pitch and rhythm interaction.

7.
Using recent regional brain activation/emotion models as a theoretical framework, we examined whether the pattern of regional EEG activity distinguished emotions induced by musical excerpts which were known to vary in affective valence (i.e., positive vs. negative) and intensity (i.e., intense vs. calm) in a group of undergraduates. We found that the pattern of asymmetrical frontal EEG activity distinguished valence of the musical excerpts. Subjects exhibited greater relative left frontal EEG activity to joy and happy musical excerpts and greater relative right frontal EEG activity to fear and sad musical excerpts. We also found that, although the pattern of frontal EEG asymmetry did not distinguish the intensity of the emotions, the pattern of overall frontal EEG activity did, with the amount of frontal activity decreasing from fear to joy to happy to sad excerpts. These data appear to be the first to distinguish valence and intensity of musical emotions on frontal electrocortical measures.
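Frontal asymmetry in this literature is typically scored from alpha-band power at homologous left/right electrode sites. A minimal sketch of that score, assuming alpha power has already been extracted (the F3/F4 values here are invented for illustration):

```python
import numpy as np

def asymmetry_index(left_alpha, right_alpha):
    """Frontal asymmetry score: ln(right) - ln(left) alpha power.
    Alpha power is inversely related to cortical activation, so a
    positive score indicates relatively greater LEFT frontal activity."""
    return np.log(right_alpha) - np.log(left_alpha)

# Hypothetical alpha power (uV^2) at F3 (left) and F4 (right)
score = asymmetry_index(4.0, 6.0)
print(score > 0)  # → True: the pattern reported here for joyful/happy excerpts
```

Comparing this score across excerpt types is what lets valence be read off the asymmetry pattern while overall frontal power tracks intensity.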

8.
Infants can detect information specifying affect in infant- and adult-directed speech, familiar and unfamiliar facial expressions, and in point-light displays of facial expressions. We examined 3-, 5-, 7-, and 9-month-olds' discrimination of musical excerpts judged by adults and preschoolers as happy and sad. In Experiment 1, using an infant-controlled habituation procedure, 3-, 5-, 7-, and 9-month-olds heard three musical excerpts that were rated as either happy or sad. Following habituation, infants were presented with two new musical excerpts from the other affect group. Nine-month-olds discriminated the musical excerpts rated as affectively different. Five- and seven-month-olds discriminated the happy and sad excerpts when they were habituated to sad excerpts but not when they were habituated to happy excerpts. Three-month-olds showed no evidence of discriminating the sad and happy excerpts. In Experiment 2, 5-, 7-, and 9-month-olds were presented with two new musical excerpts from the same affective group as the habituation excerpts. At no age did infants discriminate these novel, yet affectively similar, musical excerpts. In Experiment 3, we examined 5-, 7-, and 9-month-olds' discrimination of individual excerpts rated as affectively similar. Only the 9-month-olds discriminated the affectively similar individual excerpts. Results are discussed in terms of infants' ability to discriminate affect across a variety of events and its relevance for later social-communicative development.

9.
B. H. Repp, Cognition, 1992, 44(3), 241-281
To determine whether structural factors interact with the perception of musical time, musically literate listeners were presented repeatedly with eight-bar musical excerpts, realized with physically regular timing on an electronic piano. On each trial, one or two randomly chosen time intervals were lengthened by a small amount, and listeners had to detect these lengthenings while following the score. The resulting detection accuracy profile across all positions in each musical excerpt showed pronounced dips in places where lengthening would typically occur in an expressive (temporally modulated) performance. False alarm percentages indicated that certain tones seemed longer a priori, and these were among the ones whose actual lengthening was easiest to detect. The detection accuracy and false alarm profiles were significantly correlated with each other and with the temporal microstructure of expert performances, as measured from sound recordings by famous artists. Thus the detection task apparently tapped into listeners' musical thought and revealed their expectations about the temporal microstructure of music performance. These expectations, like the timing patterns of actual performances, derive from the cognitive representation of musical structure, as cued by a variety of systemic factors (grouping, meter, harmonic progression) and their acoustic correlates. No simple psychoacoustic explanation of the detection accuracy profiles was evident. The results suggest that the perception of musical time is not veridical but "warped" by the structural representation. This warping may provide a natural basis for performance evaluation: expected timing patterns sound more or less regular, unexpected ones irregular. Parallels to language performance and perception are noted.
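The correlation between detectability dips and expressive lengthening reported above can be illustrated with toy per-position profiles (the numbers below are invented, not Repp's data):

```python
import numpy as np

# Hypothetical per-position profiles across an excerpt:
hit_rate      = np.array([0.90, 0.85, 0.60, 0.90, 0.88, 0.55, 0.90, 0.50])  # detection accuracy
expert_timing = np.array([1.00, 1.00, 1.30, 1.00, 1.00, 1.35, 1.00, 1.40])  # relative interonset interval

# Low detectability (1 - hit rate) should track the positions
# where expert performers lengthen expressively.
r = np.corrcoef(1 - hit_rate, expert_timing)[0, 1]
print(r > 0.9)  # → True for these toy profiles
```

In the study, profiles like these were computed from listeners' responses and from measurements of commercial recordings, and then correlated position by position.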

10.
Do children use the same properties as adults in determining whether music sounds happy or sad? We addressed this question with a set of 32 excerpts (16 happy and 16 sad) taken from pre-existing music. The tempo (i.e. the number of beats per minute) and the mode (i.e. the specific subset of pitches used to write a given musical excerpt) of these excerpts were modified independently and jointly in order to measure their effects on happy-sad judgments. Adults and children from 3 to 8 years old were required to judge whether the excerpts were happy or sad. The results show that, like adults, 6- to 8-year-old children are affected by mode and tempo manipulations. In contrast, 5-year-olds' responses are only affected by a change of tempo. The youngest children (3- to 4-year-olds) failed to distinguish the happy from the sad tone of the music above chance. The results indicate that tempo is mastered earlier than mode to infer the emotional tone conveyed by music.

11.
Older adults, compared to younger adults, are more likely to attend to pleasant situations and avoid unpleasant ones. Yet, it is unclear whether such a phenomenon may be generalized to musical emotions. In this study, we investigated whether there is an age-related difference in how musical emotions are experienced and how positive and negative music influences attention performances in a target identification task. Thirty-one young and twenty-eight older adults were presented with 40 musical excerpts conveying happiness, peacefulness, sadness, and threat. While listening to music, participants were asked to rate their feelings and monitor each excerpt for the occurrence of an auditory target. Compared to younger adults, older adults reported experiencing weaker emotional activation when listening to threatening music and showed higher level of liking for happy music. Correct reaction times (RTs) for target identification were longer for threatening than for happy music in older adults but not in younger adults. This suggests that older adults benefit from a positive musical context and can regulate emotion elicited by negative music by decreasing attention towards it (and therefore towards the auditory target).

12.
The notion that the melody (i.e., pitch structure) of familiar music is more recognizable than its accompanying rhythm (i.e., temporal structure) was examined with the same set of nameable musical excerpts in three experiments. In Experiment 1, the excerpts were modified so as to keep either their original pitch variations, whereas durations were set to isochrony (melodic condition), or their original temporal pattern while played on a single constant pitch (rhythmic condition). The subjects, who were selected without regard to musical training, were found to name more tunes and to rate their feeling of knowing the musical excerpts far higher in the melodic condition than in the rhythmic condition. These results were replicated in Experiment 2, wherein the melodic and rhythmic patterns of the musical excerpts were interchanged to create chimeric mismatched tunes. The difference in saliency of the melodic pattern and the rhythmic pattern also emerged with a music-title-verification task in Experiment 3, hence discarding response selection as the main source of the discrepancy. The lesser effectiveness of rhythmic structure appears to be related to its lesser encoding distinctiveness relative to melodic structure. In general, rhythm was found to be a poor cue for the musical representations that are stored in long-term memory. Nevertheless, in all three experiments, the most effective cue for music identification involved the proper combination of pitches and durations. Therefore, the optimal code of access to long-term memory for music resides in a combination of rhythm and melody, of which the latter would be the most informative.

13.
To investigate the audiovisual conflict effect in musical emotion, the dominant processing channel under conflict, and the influence of musical experience, this study used videos of music performances and compared musicians and non-musicians on the speed, accuracy, and intensity of their emotion ratings under congruent and incongruent audiovisual conditions. The results showed that: (1) emotion ratings were more accurate and more intense in the congruent condition; (2) in the incongruent condition, participants based their emotion judgments mainly on the auditory emotional cues; (3) non-musicians relied on visual emotional cues more than musicians did. These findings indicate that incongruent emotional information across channels hinders the processing of musical emotion, that audition is the dominant channel under audiovisual emotional conflict, and that musical experience reduces the interference of the conflict effect for musicians.

14.
In comparison with other modalities, the recognition of emotion in music has received little attention. An unexplored question is whether and how emotion recognition in music changes as a function of ageing. In the present study, healthy adults aged between 17 and 84 years (N=114) judged the magnitude to which a set of musical excerpts (Vieillard et al., 2008) expressed happiness, peacefulness, sadness and fear/threat. The results revealed emotion-specific age-related changes: advancing age was associated with a gradual decrease in responsiveness to sad and scary music from middle age onwards, whereas the recognition of happiness and peacefulness, both positive emotional qualities, remained stable from young adulthood to older age. Additionally, the number of years of music training was associated with more accurate categorisation of the musical emotions examined here. We argue that these findings are consistent with two accounts on how ageing might influence the recognition of emotions: motivational changes towards positivity and, to a lesser extent, selective neuropsychological decline.

15.
Humans are extremely good at detecting anomalies in sensory input. For example, while listening to a piece of Western-style music, an anomalous key change or an out-of-key pitch is readily apparent, even to the non-musician. In this paper we investigate differences between musical experts and non-experts during musical anomaly detection. Specifically, we analyzed the electroencephalograms (EEG) of five expert cello players and five non-musicians while they listened to excerpts of J.S. Bach’s Prelude from Cello Suite No. 1. All subjects were familiar with the piece, though experts also had extensive experience playing the piece. Subjects were told that anomalous musical events (AMEs) could occur at random within the excerpts of the piece and were told to report the number of AMEs after each excerpt. Furthermore, subjects were instructed to remain still while listening to the excerpts and their lack of movement was verified via visual and EEG monitoring. Experts had significantly better behavioral performance (i.e. correctly reporting AME counts) than non-experts, though both groups had mean accuracies greater than 80%. These group differences were also reflected in the EEG correlates of key-change detection post-stimulus, with experts showing more significant, greater magnitude, longer periods of, and earlier peaks in condition-discriminating EEG activity than novices. Using the timing of the maximum discriminating neural correlates, we performed source reconstruction and compared significant differences between cellists and non-musicians. We found significant differences that included a slightly right lateralized motor and frontal source distribution. The right lateralized motor activation is consistent with the cortical representation of the left hand – i.e. the hand a cellist would use, while playing, to generate the anomalous key-changes. 
In general, these results suggest that sensory anomalies detected by experts may in fact be partially a result of embodied cognition, with a model of the action for generating the anomaly playing a role in its detection.

16.
Pigeons were exposed to a procedure under which five pecks on one response key (the observing key) changed the schedule on a second key (the food key) from a mixed schedule to a multiple schedule for 25 sec. In Experiment I, a random-ratio 50 schedule alternated with extinction. The duration of the random-ratio 50 schedule component was varied between 1.25 and 320 sec, and extinction was scheduled for a varying time, ranging from the duration of the random-ratio 50 to four times that value. Each set of values was scheduled for a block of sessions. Before observing-key pecks were allowed at each set of parameter values, the pigeons were exposed to a condition where the mixed and multiple schedule alternated every 10 min, and observing-key pecks were not permitted. Rates of pecking on the observing key were high for all values of random-ratio component durations except 1.25 sec. Experiment II was conducted with the random-ratio component duration equal to 40 sec, and the random-ratio schedule was varied from random-ratio 50 to 100, 200, and 400. Observing-key pecking rates were high for all values of the random-ratio schedule except random-ratio 400. In both experiments, observing response rates were relatively little affected, suggesting that neither schedule component duration nor schedule value is a strong determinant of observing responses.
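A random-ratio schedule like the RR 50 used here reinforces each response with a fixed probability (1/ratio), so the number of pecks required for each reinforcement is geometrically distributed with mean equal to the ratio. A small simulation sketch (not from the study; the trial count is arbitrary):

```python
import random

random.seed(1)  # fixed seed so the simulation is reproducible

def pecks_to_food(ratio):
    """Simulate one reinforcement under a random-ratio schedule:
    each peck pays off with probability 1/ratio, so the mean
    requirement is `ratio` pecks (geometric distribution)."""
    n = 0
    while True:
        n += 1
        if random.random() < 1 / ratio:
            return n

mean_pecks = sum(pecks_to_food(50) for _ in range(20000)) / 20000
print(abs(mean_pecks - 50) < 5)  # sample mean close to the nominal ratio
```

The high variance of this distribution is what distinguishes random-ratio from fixed-ratio schedules, where exactly `ratio` responses are required every time.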

17.
In two experiments, we examined the effect of timing conditions on the magnitude of gender differences in performance on the Mental Rotations Test (MRT). In Experiment 1, each of 196 females and 119 males was administered the MRT via a Microsoft PowerPoint presentation in one of five timing conditions (15, 20, 25, 30, and 40 sec). The participants were exposed to each MRT item for the period specified in the assigned timing condition. Experiment 2 was conducted to address flaws found in Experiment 1. Accordingly, each of 105 females and 105 males was individually administered the task in one of three timing conditions (15 sec, 25 sec, or unlimited duration). The results of both experiments showed that the magnitude of gender differences was similar across timing conditions when a conventional scoring method was used. An analysis of guessing behavior generally indicated that men tend to show little effect of timing conditions, whereas women's propensity to guess increases when they are given more time to respond. In general, the results supported an interpretation of gender differences on the MRT that relies on the joint operation of performance factors and level of spatial ability.

18.
Three experiments were conducted in order to validate 56 musical excerpts that conveyed four intended emotions (happiness, sadness, threat and peacefulness). In Experiment 1, the musical clips were rated in terms of how clearly the intended emotion was portrayed, and for valence and arousal. In Experiment 2, a gating paradigm was used to evaluate the time course of emotion recognition. In Experiment 3, a dissimilarity judgement task and multidimensional scaling analysis were used to probe emotional content with no emotional labels. The results showed that emotions are easily recognised and discriminated on the basis of valence and arousal and with relative immediacy. Happy and sad excerpts were identified after the presentation of fewer than three musical events. With no labelling, emotion discrimination remained highly accurate and could be mapped on energetic and tense dimensions. The present study provides suitable musical material for research on emotions.

19.
This study investigated the interaction between sampling behavior and preference formation underlying subjective decision making for like and dislike decisions. Two-alternative forced-choice tasks were used with closely-matched musical excerpts and the participants were free to listen and re-listen, i.e. to sample and resample each excerpt, until they reached a decision. We predicted that for decisions involving resampling, a sampling bias would be observed before the moment of conscious decision for the like decision only. The results indeed showed a gradually increasing sampling bias favouring the choice (73%) before the moment of overt response for like decisions. Such a bias was absent for dislike decisions. Furthermore, the participants reported stronger relative preferences for like decisions as compared to dislike decisions. This study demonstrated distinct differences in preference formation between like and dislike decisions, both in the implicit orienting/sampling processes prior to the conscious decision and in the subjective evaluation afterwards.

20.
The present study adapted a paradigm used in visual perception by Biederman, Glass, and Stacy (1973) and analyzed the influence of a coherent global context on the detection and recognition of musical target excerpts. Global coherence was modified by segmenting minuets into chunks of four, two, or one bar. These chunks were either reordered (Experiments 1, 3, 4, 5) or transposed to different keys (Experiment 2). The results indicate that target detection is influenced only by a reorganization on a very local level (i.e. chunks of one bar). Context incoherence did not influence the recognition of the real targets, but rendered the rejection of wrong target excerpts (foils) more difficult. The present findings revealed only a weak effect of global context on target identification and only for extremely modified structures.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)