Similar Articles
Found 20 similar articles (search time: 31 ms).
1.
Using recent regional brain activation/emotion models as a theoretical framework, we examined whether the pattern of regional EEG activity distinguished emotions induced by musical excerpts which were known to vary in affective valence (i.e., positive vs. negative) and intensity (i.e., intense vs. calm) in a group of undergraduates. We found that the pattern of asymmetrical frontal EEG activity distinguished valence of the musical excerpts. Subjects exhibited greater relative left frontal EEG activity to joy and happy musical excerpts and greater relative right frontal EEG activity to fear and sad musical excerpts. We also found that, although the pattern of frontal EEG asymmetry did not distinguish the intensity of the emotions, the pattern of overall frontal EEG activity did, with the amount of frontal activity decreasing from fear to joy to happy to sad excerpts. These data appear to be the first to distinguish valence and intensity of musical emotions on frontal electrocortical measures.
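For illustration, frontal alpha asymmetry scores of the kind reported here are conventionally computed as the difference of log alpha power between homologous frontal electrodes, with alpha power read as inversely related to activation. The following is a minimal sketch, not the authors' pipeline; the F3/F4 channel pair, the 8–13 Hz band, the sampling rate, and the random data are all assumptions.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density within the alpha band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_asymmetry(left_ch, right_ch, fs):
    """ln(right) - ln(left) alpha power: higher values are conventionally
    read as greater relative LEFT frontal activity, because alpha power is
    inversely related to cortical activation."""
    return np.log(alpha_power(right_ch, fs)) - np.log(alpha_power(left_ch, fs))

# Hypothetical data: 30 s of two frontal channels (F3, F4) sampled at 256 Hz.
fs = 256
rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal((2, 30 * fs))
print(frontal_asymmetry(f3, f4, fs))
```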

2.
Relative desynchronization (ERD) and synchronization (ERS) of the 8–10 Hz and 10–12 Hz alpha frequency bands elicited by music were studied in ten musically untrained right-handed subjects. The subjects listened to two five-minute musical excerpts representing two different musical genres, popular and classical, presented both forward and backward. ERD/ERS of the two alpha frequency bands was examined during the first four minutes of stimulus presentation using one-minute time windows. The results demonstrated that both the 8–10 Hz and the 10–12 Hz frequency bands exhibited reactivity to musical stimuli. The responses of the 8–10 Hz and 10–12 Hz alpha frequency bands were dissimilar, dynamic, and dependent on both time and stimulation type. The dynamics of these changes over time may explain some discrepancies among earlier EEG studies of music listening.
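As a rough sketch of how such ERD/ERS percentages are typically derived, the code below follows the common Pfurtscheller-style band-power convention (percentage power change in a band relative to a reference interval), not necessarily this study's exact procedure; the sampling rate, filter settings, and data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(x, fs, low, high, order=4):
    """Band-pass filter, then average squared amplitude (band power)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return np.mean(filtfilt(b, a, x) ** 2)

def erd_ers_percent(reference, test, fs, low, high):
    """Percentage band-power change relative to a reference interval.
    Negative values = desynchronization (ERD), positive = synchronization (ERS)."""
    r = band_power(reference, fs, low, high)
    a = band_power(test, fs, low, high)
    return (a - r) / r * 100.0

# Hypothetical one-minute windows of one EEG channel sampled at 250 Hz.
fs = 250
rng = np.random.default_rng(1)
rest, listening = rng.standard_normal((2, 60 * fs))
print(erd_ers_percent(rest, listening, fs, 8.0, 10.0))   # lower alpha band
print(erd_ers_percent(rest, listening, fs, 10.0, 12.0))  # upper alpha band
```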

3.
Can listeners distinguish unfamiliar performers playing the same piece on the same instrument? Professional performers recorded two expressive and two inexpressive interpretations of a short organ piece. Nonmusicians and musicians listened to these recordings and grouped together excerpts they thought had been played by the same performer. Both musicians and nonmusicians performed significantly above chance. Expressive interpretations were sorted more accurately than inexpressive ones, indicating that musical individuality is communicated more efficiently through expressive performances. Furthermore, individual performers' consistency and distinctiveness with respect to expressive patterns were shown to be excellent predictors of categorisation accuracy. Categorisation accuracy was superior for prize-winning performers compared to non-winners, suggesting a link between performer competence and the communication of musical individuality. Finally, results indicate that temporal information is sufficient to enable performer recognition, a finding that has broader implications for research on the detection of identity cues.

4.
This study examined the effect of listening to a newly learned musical piece on subsequent motor retention of the piece. Thirty-six non-musicians were trained to play an unfamiliar melody on a piano keyboard. Next, they were randomly assigned to participate in three follow-up listening sessions over 1 week. Subjects who, during their listening sessions, listened to the same initial piece showed significant improvements in motor memory and retention of the piece despite the absence of physical practice. These improvements included increased pitch accuracy, timing accuracy, and dynamic intensity of key pressing. Similar improvements, though to a lesser degree, were observed in subjects who, during their listening sessions, were distracted by another task. Control subjects, who after learning the piece had listened to nonmusical sounds, showed impaired motor retention of the piece at 1 week after the initial acquisition day. These results imply that motor sequences can be established in motor memory without direct access to motor-related information. In addition, the study revealed that the listening-induced improvements did not generalize to the learning of a new musical piece composed of the same notes as the initial piece, limiting the effects to musical motor sequences that are already part of the individual's motor repertoire.

5.
Procedural skills such as riding a bicycle and playing a musical instrument play a central role in daily life. Such skills are learned gradually and are retained throughout life. The present study investigated 1-year retention of procedural skill in a version of the widely used serial reaction time task (SRTT) in young and older motor-skill experts and older controls in two experiments. The young experts were college-age piano and action video-game players, and the older experts were piano players. Previous studies have reported sequence-specific skill retention in the SRTT as long as 2 weeks but not at 1 year. Results indicated that both young and older experts and older non-experts revealed sequence-specific skill retention after 1 year with some evidence that general motor skill was retained as well. These findings are consistent with theoretical accounts of procedural skill learning such as the procedural reinstatement theory as well as with previous studies of retention of other motor skills.

6.
This study examined how expert abacus operators process imagery. Without imagery instructions, a digit series was presented auditorily as one whole number (WHL list) or as separate digits (SEP list). RT from offset of the probe to onset of the response was measured. The main findings were as follows: experts showed no difference in RT between the two lists, while significant differences occurred in non-experts; non-experts' RT increased with probed position, while experts' RT was flat if the series size was within their image capacity; experts' RT increased with probed position when the series size exceeded their image capacity, but its rate of increase was smaller than that of non-experts; and the smaller the image capacity, the steeper the slope of the RT function. It was concluded that experts spontaneously encode the digit series into an imaged abacus, while non-experts encode it verbally; that experts directly access the probed position within their image but serially process the verbally coded overflow portion; and that non-experts search the digit series serially.

7.
A repeated listening procedure was designed to monitor changes in listeners' appreciation of thematic categories in musical compositions. Subjects listened to a recorded musical composition. Passages selected from the composition were then played in pairs, and listeners rated their similarity. The similarity data were submitted to INDSCAL, a multidimensional scaling procedure, which located the passages in an n-dimensional space. This procedure was repeated in three separate sessions, so that changes in the perceived musical structure could be observed. In Study 1, subjects heard Liszt's Sonata in B minor, and target passages were Theme A, Theme B, and three variations of each theme. While extrathematic dimensions dominated early acquaintance, a theme dimension emerged in the second and third sessions. Musicians gave higher weight to the theme dimension than did nonmusicians, and theme was the only dimension for experts on this sonata. Musicians were also more accurate in a final classification test, but only after repeated listening. The effect of repeated exposure on transfer to new theme exemplars was considered in Study 2. It is hoped this work will foster more naturalistic approaches to musical cognition.
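INDSCAL fits listener-specific weights on shared dimensions, which common libraries do not implement; as a simplified stand-in, the sketch below runs plain metric MDS on group-averaged similarity ratings to place passages in a 2-D space. The number of passages, the rating scale, and the data are assumptions.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical similarity ratings (0-10) for 8 passages, averaged over
# listeners. INDSCAL would additionally estimate per-listener dimension
# weights; scikit-learn offers only plain metric MDS, used here as a
# simplified stand-in.
rng = np.random.default_rng(2)
sim = rng.uniform(0, 10, size=(8, 8))
sim = (sim + sim.T) / 2            # symmetrize the rating matrix
np.fill_diagonal(sim, 10.0)        # a passage is maximally similar to itself

dissim = sim.max() - sim           # convert similarity to dissimilarity
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim) # passages located in a 2-D space
print(coords)
```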

8.
Singing in the Brain: Independence of Lyrics and Tunes
Why is vocal music the oldest and still the most popular form of music? Very possibly because vocal music involves an intimate combination of speech and music, two of the most specific, high-level skills of human beings. The issue we address is whether people listening to a song treat the linguistic and musical components separately or integrate them within a single percept. Event-related potentials were recorded while musicians listened to excerpts from operas sung a cappella. Excerpts ended with semantically congruous or incongruous words sung either in or out of key. Results clearly demonstrated the independence of lyrics and tunes: an additive model of semantic- and harmonic-violation processing predicted the data extremely well. These results are consistent with a modular organization of the human cognitive system and open new perspectives in the search for the similarities and differences between language and music processing.
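The additivity claim can be illustrated with a toy test: under an additive model, the semantic and harmonic violation effects sum, so the interaction term in a 2 x 2 analysis should be negligible. The sketch below uses ordinary least squares for brevity (a real ERP analysis would use a repeated-measures model); the condition labels, effect sizes, and data are all invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical mean ERP amplitudes (one value per subject and condition) in a
# 2 x 2 design: semantic congruity x harmonic congruity.
rng = np.random.default_rng(3)
n = 20
rows = []
for sem in ("congruous", "incongruous"):
    for harm in ("in_key", "out_of_key"):
        amp = rng.normal(loc=-2.0 * (sem == "incongruous")
                             - 1.5 * (harm == "out_of_key"),
                         scale=1.0, size=n)
        rows += [{"semantic": sem, "harmonic": harm, "amplitude": a} for a in amp]

df = pd.DataFrame(rows)
model = smf.ols("amplitude ~ C(semantic) * C(harmonic)", data=df).fit()
# Additivity predicts a non-significant C(semantic):C(harmonic) interaction.
print(sm.stats.anova_lm(model, typ=2))
```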

9.
10.
The aim of this study was to investigate the visual behaviour of expert and non-expert ski athletes during an alpine slalom. Fourteen non-expert and five expert slalom skiers completed an alpine slalom course on an indoor ski slope while wearing a head-mounted eye-tracking device. Experts completed the slalom clearly faster than non-experts, but no significant difference was found in the timing and position of turn initiation. Although both groups already looked at future obstacles approximately 0.5 s before passing the upcoming pole, the higher speed of experts implied that they shifted gaze spatially earlier in the bend than non-experts. Furthermore, experts focussed more on the second-next pole, while non-expert slalom skiers looked more at the snow surface immediately in front of their body. No difference was found in fixation frequency, average fixation duration, or quiet eye duration between the groups. These results suggest that experts focus on the timing of their actions, while non-experts still need to pay attention to the execution of these actions. They also suggest that ski trainers should instruct both non-experts and experts to focus on the next pole and shift their gaze to the second-next pole shortly before reaching it. Based on the current study, it seems inadvisable to instruct slalom skiers to look several poles ahead during the actual slalom. However, future research should test whether these results still hold on a real outdoor slope with multiple vertical gates.
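Fixation-count and fixation-duration measures like those reported here are typically derived with a dispersion-based fixation detector. Below is a much-simplified I-DT sketch, not the study's algorithm; the dispersion and duration thresholds, sampling rate, and gaze data are all assumptions.

```python
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.100):
    """Simplified I-DT detector: a fixation is a run of samples whose
    combined x/y dispersion stays below max_dispersion (deg) for at least
    min_duration (s). Returns a list of fixation durations in seconds."""
    fixations, start = [], 0
    for end in range(len(t)):
        wx, wy = x[start:end + 1], y[start:end + 1]
        dispersion = (wx.max() - wx.min()) + (wy.max() - wy.min())
        if dispersion > max_dispersion:
            if t[end - 1] - t[start] >= min_duration:
                fixations.append(t[end - 1] - t[start])
            start = end
    if t[-1] - t[start] >= min_duration:
        fixations.append(t[-1] - t[start])
    return fixations

# Hypothetical 60 Hz gaze trace in degrees of visual angle: ten half-second
# dwell locations with small measurement noise.
rng = np.random.default_rng(6)
t = np.arange(0, 5, 1 / 60)
x = np.repeat(rng.uniform(0, 30, 10), 30)[:t.size] + rng.normal(0, 0.05, t.size)
y = np.repeat(rng.uniform(0, 20, 10), 30)[:t.size] + rng.normal(0, 0.05, t.size)
durations = detect_fixations(x, y, t)
print(len(durations), np.mean(durations))  # fixation count, mean duration
```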

11.
In two experiments, event-related brain potentials (ERPs) were recorded from 13 scalp locations while subjects read sentences containing a syntactically or a semantically anomalous word. The position (sentence-embedded vs sentence-final) and word class (open vs closed) of the syntactic anomalies were manipulated. In both experiments, semantically anomalous words elicited an enhanced N400 component. Syntactically anomalous closed class words elicited a widely distributed late positive wave (P600) regardless of the word's position and a smaller negative-going effect that was largest over anterior sites when the anomaly occurred in sentence-final position. The response to syntactically anomalous open class words revealed striking qualitative individual differences: These words elicited a P600 response in the majority of subjects and an N400 response in others. The proportion of subjects exhibiting the N400 response was greater when the anomaly occurred in sentence-final position. These results are interpreted in the context of prior findings, and implications for the hypothesis that syntactic and semantic anomalies elicit distinct brain potentials are discussed.
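N400 and P600 effects of this kind are conventionally quantified as the mean ERP amplitude within a latency window after the critical word. A minimal sketch of that measurement follows; the 300–500 ms and 500–800 ms windows are conventional choices rather than necessarily those of the study, and the data are invented.

```python
import numpy as np

def mean_amplitude(erp, times, window):
    """Average ERP amplitude (microvolts) inside a latency window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Hypothetical averaged ERP for one electrode: a 1 s epoch sampled at 500 Hz,
# time-locked to the onset of the critical word.
fs = 500
times = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(4)
erp = rng.standard_normal(times.size)

n400 = mean_amplitude(erp, times, (0.300, 0.500))  # N400: more negative for anomalies
p600 = mean_amplitude(erp, times, (0.500, 0.800))  # P600: more positive for anomalies
print(n400, p600)
```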

12.
Infants can detect information specifying affect in infant- and adult-directed speech, familiar and unfamiliar facial expressions, and in point-light displays of facial expressions. We examined 3-, 5-, 7-, and 9-month-olds' discrimination of musical excerpts judged by adults and preschoolers as happy and sad. In Experiment 1, using an infant-controlled habituation procedure, 3-, 5-, 7-, and 9-month-olds heard three musical excerpts that were rated as either happy or sad. Following habituation, infants were presented with two new musical excerpts from the other affect group. Nine-month-olds discriminated the musical excerpts rated as affectively different. Five- and seven-month-olds discriminated the happy and sad excerpts when they were habituated to sad excerpts but not when they were habituated to happy excerpts. Three-month-olds showed no evidence of discriminating the sad and happy excerpts. In Experiment 2, 5-, 7-, and 9-month-olds were presented with two new musical excerpts from the same affective group as the habituation excerpts. At no age did infants discriminate these novel, yet affectively similar, musical excerpts. In Experiment 3, we examined 5-, 7-, and 9-month-olds' discrimination of individual excerpts rated as affectively similar. Only the 9-month-olds discriminated the affectively similar individual excerpts. Results are discussed in terms of infants' ability to discriminate affect across a variety of events and its relevance for later social-communicative development.

13.
Older adults, compared to younger adults, are more likely to attend to pleasant situations and avoid unpleasant ones. Yet it is unclear whether this phenomenon generalizes to musical emotions. In this study, we investigated whether there is an age-related difference in how musical emotions are experienced and how positive and negative music influences attention performance in a target-identification task. Thirty-one young and twenty-eight older adults were presented with 40 musical excerpts conveying happiness, peacefulness, sadness, and threat. While listening to the music, participants were asked to rate their feelings and monitor each excerpt for the occurrence of an auditory target. Compared to younger adults, older adults reported experiencing weaker emotional activation when listening to threatening music and showed a higher level of liking for happy music. Correct reaction times (RTs) for target identification were longer for threatening than for happy music in older adults but not in younger adults. This suggests that older adults benefit from a positive musical context and can regulate emotion elicited by negative music by decreasing attention towards it (and therefore towards the auditory target).

14.
Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while they appreciated an audiovisual musical ensemble performance, drawing on evidence that the melody part dominates auditory attention when listening to multipart music containing different melodic lines, and on the joint-attention account of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece; (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions, which play a spotlight role, mediated performer-audience visual interaction; and (3) musical part (melody or accompaniment) strongly influenced the total duration of audience gazes, while the spotlight effect of gaze was limited to just after the singers' gaze shifts.

15.
The achievement of mastery in playing a composition on a musical instrument typically requires numerous repetitions and corrections guided by the keys and notation of the piece. Nevertheless, differences in the interpretation of the same piece by highly skilled musicians seem to be recognizable. The present study investigated differences within and between skilled flute players in their finger and body movements while playing the same piece several times on the same and on different days. Six semiprofessional and four professional musicians played an excerpt of Mozart's Flute Concerto No. 2 several times on three different days. Finger and body movements were recorded by 3D motion capture and analyzed with linear and nonlinear classification approaches. The findings showed that the discrete and continuous movement-timing data correctly identified individuals with up to 100% accuracy from their finger movements and up to 94% from their body movements. These robust examples of identifiable individual movement patterns contradict the prevailing models of small, economical finger movements favored in the didactic literature for woodwind players and question traditional recommendations for the teaching of motor skills.
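Performer identification from movement-timing features can be framed as a standard supervised-classification problem. The linear SVM below is a stand-in, since the paper describes its classifiers only as "linear and nonlinear"; the feature counts, class structure, and data are all invented. With ten balanced classes, chance accuracy is 10%.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per performance, columns are
# movement-timing features (e.g., inter-keystroke intervals); labels
# identify the ten players.
rng = np.random.default_rng(5)
n_players, n_trials, n_features = 10, 12, 30
X = np.vstack([rng.normal(loc=p, scale=1.0, size=(n_trials, n_features))
               for p in range(n_players)])
y = np.repeat(np.arange(n_players), n_trials)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated identification accuracy
print(scores.mean())
```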

16.
Shifts in the psychophysical study of sound sensation reinforced the changing status of musical expertise in the nineteenth century. The Carl Stumpf-Wilhelm Wundt debate about tone-differentiation experimentation narrowed the conception of hearing. For Stumpf, "music consciousness" (Musikbewusstsein) granted experimental subjects exceptional insight into sound sensation. This belief reflected a cultural reevaluation of listening, exemplified in music critic Eduard Hanslick decrying the scourge of the city: the piano playing of the neighbors. Stumpf's and Hanslick's defenses of subjective musical expertise, both inside the laboratory and on the city streets, reveal the increasingly divergent conceptions of hearing and listening.

17.
This study was designed to determine the effect of Quran listening without its musical tone (Tartil) on the mental health of personnel in Zahedan University of Medical Sciences, southeast of Iran. The results showed significant differences between the test and control groups in their mean mental health scores after Quran listening (P = 0.037). No significant gender differences in the test group before and after intervention were found (P = 0.806). These results suggest that Quran listening could be recommended by psychologists for improving mental health and achieving greater calm.

18.
This study was designed to investigate how emotion category, characterized by distinct musical structures (happiness, sadness, threat), and expressiveness (mechanical, expressive) may influence overt and covert behavioral judgments and physiological responses in musically trained and untrained listeners. Mechanical and expressive versions of happy, sad, and scary excerpts were presented while physiological measures were recorded. Participants rated the intensity of the emotion they felt. In addition, they monitored the excerpts for the presence of brief breath sounds. Results showed that the emotion categories were rated higher in the expressive than in the mechanical versions and that this effect was larger in musicians. Moreover, expressive excerpts were found to increase skin conductance level more than mechanical ones, independently of their arousal value, and to slow down response times in the breath-detection task relative to the mechanical versions, suggesting enhanced capture of attention by expressiveness. Altogether, the results support the key role of the performer's expression in the listener's emotional response to music.

19.
The notion that the melody (i.e., pitch structure) of familiar music is more recognizable than its accompanying rhythm (i.e., temporal structure) was examined with the same set of nameable musical excerpts in three experiments. In Experiment 1, the excerpts were modified so as to keep either their original pitch variations with durations set to isochrony (melodic condition), or their original temporal pattern played on a single constant pitch (rhythmic condition). The subjects, who were selected without regard to musical training, named more tunes and rated their feeling of knowing the musical excerpts far higher in the melodic condition than in the rhythmic condition. These results were replicated in Experiment 2, wherein the melodic and rhythmic patterns of the musical excerpts were interchanged to create chimeric, mismatched tunes. The difference in salience between the melodic and rhythmic patterns also emerged in a music-title-verification task in Experiment 3, ruling out response selection as the main source of the discrepancy. The lesser effectiveness of rhythmic structure appears to be related to its lesser encoding distinctiveness relative to melodic structure. In general, rhythm was found to be a poor cue for the musical representations stored in long-term memory. Nevertheless, in all three experiments, the most effective cue for music identification involved the proper combination of pitches and durations. Therefore, the optimal code of access to long-term memory for music resides in a combination of rhythm and melody, of which the latter is the more informative.

20.
Current research has suggested that musical stimuli are processed in the right hemisphere except in musicians, in whom there is an increased involvement of the left hemisphere. The present study hypothesized that the more musical training persons receive, the more they will rely on an analytic/left-hemispheric processing strategy. The subjects were 10 faculty and 10 student musicians, and 10 faculty and 10 student nonmusicians. All subjects listened to a series of melodies (some recurring and some not) and excerpts (some actual and some not) in one ear and, after a rest, to a different series of melodies in the other ear. The task was to identify recurring vs. nonrecurring melodies and actual vs. nonactual excerpts. For student musicians, there was a right-ear/left-hemispheric advantage for melody recognition, while for student nonmusicians, the situation was the reverse. Neither faculty group showed any ear preference. There were no significant differences for excerpt recognition. Two possible explanations of the faculty performance were discussed in terms of physical maturation and a functionally more integrated hemispheric approach to the task.
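Ear advantages in listening tasks of this kind are often summarized with a laterality index computed from per-ear hit counts, positive for a right-ear (left-hemisphere) advantage. A minimal sketch under that convention follows; the counts are invented and this is not necessarily the scoring used in the study.

```python
def ear_advantage(right_hits, left_hits):
    """Laterality index (R - L) / (R + L): positive values indicate a
    right-ear (left-hemisphere) advantage, negative a left-ear advantage."""
    total = right_hits + left_hits
    return (right_hits - left_hits) / total if total else 0.0

# Hypothetical melody-recognition hit counts per ear for one listener.
print(ear_advantage(right_hits=18, left_hits=12))  # > 0: right-ear advantage
```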
