Similar Literature
20 similar documents found.
1.
An Empirical Study of the Effects of Musical Tempo and Mode on College Students' Emotions   (Cited by 1: 0 self-citations, 1 by others)
Ninety-two undergraduates served as participants and eight edited musical excerpts as materials in an examination of the effects of mode and tempo on college students' emotions. The results showed that the tempo of the excerpts had a highly significant effect on students' emotions, whereas the main effect of mode was not significant. Slow excerpts readily induced negative emotions such as sorrow, grief, pain, irritability, and resentment, while fast excerpts mostly produced positive emotions such as pleasure and excitement. The interaction between tempo and mode was highly significant: tempo differed significantly at both levels of mode, especially in the major mode, whereas mode differed significantly only at the slow tempo and not at the fast tempo.

2.
Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.

3.
Relative desynchronization (ERD) and synchronization (ERS) of the 8–10 Hz and 10–12 Hz alpha frequency bands elicited by music were studied in ten musically untrained right-handed subjects. The subjects listened to two five-minute musical excerpts representing two different musical genres, popular and classical, presented both forward and backward. ERD/ERS of the two alpha frequency bands was examined during the first four minutes of stimulus presentation using one-minute time windows. The results demonstrated that both the 8–10 Hz and the 10–12 Hz frequency bands exhibited reactivity to musical stimuli. The responses of the 8–10 Hz and 10–12 Hz alpha frequency bands were dissimilar, dynamic, and dependent on both time and stimulation type. The dynamics of these changes over time may explain some discrepancies in earlier EEG studies of music listening.
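
For reference, %ERD/ERS is conventionally quantified as the percentage change in band power relative to a baseline interval (Pfurtscheller's method). Below is a minimal NumPy/SciPy sketch of that computation, using random arrays as stand-ins for real EEG epochs; the sampling rate and window length are assumptions, and sign conventions vary across papers.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power spectral density of signal x within [lo, hi] Hz (Welch estimate)."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    band = (f >= lo) & (f <= hi)
    return pxx[band].mean()

def erd_ers_percent(reference, active, fs, lo, hi):
    """%ERD/ERS = (A - R) / R * 100, where R is band power in a reference
    (baseline) interval and A is band power during stimulation. Negative
    values mark desynchronization (ERD), positive values synchronization
    (ERS), under one common convention."""
    r = band_power(reference, fs, lo, hi)
    a = band_power(active, fs, lo, hi)
    return (a - r) / r * 100.0

# Hypothetical one-minute EEG epochs sampled at 256 Hz (random stand-ins).
fs = 256
rng = np.random.default_rng(0)
baseline = rng.standard_normal(60 * fs)
listening = rng.standard_normal(60 * fs)
for lo, hi in [(8, 10), (10, 12)]:  # the two alpha sub-bands from the study
    print(f"{lo}-{hi} Hz: {erd_ers_percent(baseline, listening, fs, lo, hi):+.1f}%")
```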

4.
Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M age = 12.7 years, SD = 2.6) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below that of children with normal hearing (98%). The CI group relied primarily on tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy from sad music, reflecting a very different hearing strategy than that of their normal hearing peers. The slower reaction times of children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies than normal hearing children to process emotion in music.
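
The "well above chance" claim can be checked with an exact binomial test against the 50% guessing rate of a two-alternative forced choice. A sketch under the simplifying assumption that one listener's 96 trials are pooled (84 of 96 correct corresponds to the 87.5% figure in the abstract):

```python
from scipy.stats import binomtest

# 96 two-alternative trials; 87.5% correct = 84 hits; chance = 0.5.
result = binomtest(84, n=96, p=0.5, alternative="greater")
print(result.pvalue)  # vanishingly small: performance is well above chance
```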

5.
Judgement of emotion conveyed by music is determined notably by mode (major-minor) and tempo (fast-slow). This suggestion was examined using the same set of equitone melodies in two experiments. Melodies were presented to nonmusicians who were required to judge whether the melodies sounded “happy” or “sad” on a 10-point scale. In order to assess the specific and relative contributions of mode and tempo to these emotional judgements, the melodies were manipulated so that the only varying characteristic was either the mode or the tempo in two “isolated” conditions. In two further conditions, mode and tempo manipulations were combined so that mode and tempo either converged towards the same emotion (Convergent condition) or suggested opposite emotions (Divergent condition). The results confirm that both mode and tempo determine the “happy-sad” judgements in isolation, with tempo being the more salient, even when tempo salience was adjusted. The findings further support the view that, in music, structural features that are emotionally meaningful are easy to isolate, and that music is an effective and reliable medium for studying emotions.

6.
Current research has suggested that musical stimuli are processed in the right hemisphere except in musicians, in whom there is an increased involvement of the left hemisphere. The present study hypothesized that the more musical training persons receive, the more they will rely on an analytic/left-hemispheric processing strategy. The subjects were 10 faculty and 10 student musicians, and 10 faculty and 10 student nonmusicians. All subjects listened to a series of melodies (some recurring and some not) and excerpts (some actual and some not) in one ear and, after a rest, to a different series of melodies in the other ear. The task was to identify recurring vs. nonrecurring melodies and actual vs. nonactual excerpts. For student musicians, there was a right-ear/left-hemispheric advantage for melody recognition, while for student nonmusicians, the situation was the reverse. Neither faculty group showed any ear preference. There were no significant differences for excerpt recognition. Two possible explanations of the faculty performance were discussed in terms of physical maturation and a functionally more integrated hemispheric approach to the task.

7.
A melody’s identity is determined by relations between consecutive tones in terms of pitch and duration, whereas surface features (i.e., pitch level or key, tempo, and timbre) are irrelevant. Although surface features of highly familiar recordings are encoded into memory, little is known about listeners’ mental representations of melodies heard once or twice. It is also unknown whether musical pitch is represented additively or interactively with temporal information. In two experiments, listeners heard unfamiliar melodies twice in an initial exposure phase. In a subsequent test phase, they heard the same (old) melodies interspersed with new melodies. Some of the old melodies were shifted in key, tempo, or key and tempo. Listeners’ task was to rate how well they recognized each melody from the exposure phase while ignoring changes in key and tempo. Recognition ratings were higher for old melodies that stayed the same compared to those that were shifted in key or tempo, and detrimental effects of key and tempo changes were additive in between-subjects (Experiment 1) and within-subjects (Experiment 2) designs. The results confirm that surface features are remembered for melodies heard only twice. They also imply that key and tempo are processed and stored independently.

8.
Two experiments explore the validity of conceptualizing musical beats as auditory structural features and the potential for increases in tempo to lead to greater sympathetic arousal, measured using skin conductance. In the first experiment, fast- and slow-paced rock and classical music excerpts were compared to silence. As expected, skin conductance response (SCR) frequency was greater during music processing than during silence. Skin conductance level (SCL) data showed that fast-paced music elicits greater activation than slow-paced music. Genre significantly interacted with tempo in SCR frequency, with faster tempo increasing activation for classical music and decreasing it for rock music. A second experiment was conducted to explore the possibility that the presumed familiarity of the genre led to this interaction. Although further evidence was found for conceptualizing musical beat onsets as auditory structure, the familiarity explanation was not supported.

9.
Rhythm (a pattern of onset times and duration of sounds) and melody (a pattern of sound pitches) were studied in 22 children and adolescents several years after temporal lobectomy for intractable epilepsy. Left and right lobectomy groups discriminated rhythms equally well, but the right lobectomy group was poorer at discriminating melodies. Children and adolescents with right lobectomy, but not those with left temporal lobectomy, had higher melody scores with increasing age. Rhythm but not melody was related to memory for the right lobectomy group. In neither group was melody related to age at onset of non-febrile seizures, time from surgery to music tests, or the linear amount of temporal lobe resection. Pitch and melodic contour show different patterns of lateralization after temporal lobectomy in childhood or adolescence.

10.
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients > .89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, ‘indirect’ loudness change, derived from the difference in loudness at the beginning and end points of the continuous response, was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
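
The 'indirect' loudness change described here reduces to two simple computations: a Pearson correlation of the continuous rating against time (the linearity check) and the difference between the final and initial loudness values. A minimal sketch with fabricated ratings standing in for a real up-ramp trial:

```python
import numpy as np

def indirect_loudness_change(t, loudness):
    """Return (Pearson r of loudness vs. time, end-minus-start loudness).
    The correlation is the linearity check; the difference is the
    'indirect' loudness change described in the abstract."""
    r = np.corrcoef(t, loudness)[0, 1]
    return r, loudness[-1] - loudness[0]

# Fabricated up-ramp trial: ratings drift upward over a 12 s melody.
t = np.linspace(0.0, 12.0, 120)
rng = np.random.default_rng(1)
ratings = 2.0 + 0.5 * t + rng.normal(scale=0.3, size=t.size)
r, change = indirect_loudness_change(t, ratings)
print(f"r = {r:.2f}, indirect loudness change = {change:+.2f}")
```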

11.
Neonates (M age = 16 days) born to depressed and non-depressed mothers were randomly assigned to hear an audiotaped lullaby of instrumental music with vocals or without vocals. Neonatal EEG and EKG were recorded for 2 min (baseline) of silence and for 2 min of one or the other music presentation. Neonates of non-depressed mothers showed greater relative right frontal EEG asymmetry to both types of music, suggesting a withdrawal response. Neonates of depressed mothers, on the other hand, showed greater relative left frontal EEG asymmetry to the instrumental-without-vocals segment, suggesting an approach response, and greater relative right frontal EEG asymmetry to the instrumental-with-vocals segment, suggesting a withdrawal response. Heart rate decelerations occurred following music onset for both groups of infants; however, compared to infants of non-depressed mothers, infants of depressed mothers showed a delayed heart rate deceleration, suggesting slower processing and/or delayed attention. These findings suggest that neonates of depressed and non-depressed mothers show different EKG and EEG responses to instrumental music with versus without vocals.
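
The abstract does not state how asymmetry was scored, but a standard convention in the frontal-asymmetry literature is ln(right alpha power) minus ln(left alpha power); because alpha power is inversely related to activation, higher scores indicate relatively greater left frontal activation (approach). A hedged sketch of that assumed convention, with hypothetical power values:

```python
import numpy as np

def frontal_asymmetry(alpha_right, alpha_left):
    """ln(right) - ln(left) frontal alpha power. Alpha power is inversely
    related to activation, so positive scores mean relatively greater LEFT
    frontal activation (approach) and negative scores relatively greater
    RIGHT frontal activation (withdrawal). This convention is an assumption;
    the abstract does not give the exact formula used."""
    return np.log(alpha_right) - np.log(alpha_left)

# Hypothetical alpha power at right (F4) and left (F3) frontal sites.
print(frontal_asymmetry(4.2, 3.6))  # positive: pattern read as 'approach'
```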

12.
Parieto-occipital EEG alpha was recorded bilaterally while 20 high- and 20 low-hypnotizable women performed one left-hemisphere and one right-hemisphere task of low difficulty and two other comparable tasks of high difficulty. Every task was performed twice, once with eyes open and once with eyes closed. All subjects were right-handed. The tasks were originally selected to be of high and low difficulty, and subjective ratings of task difficulty were also collected. The integrated alpha amplitude and the alpha ratio ((R − L)/(R + L)) were the dependent variables. The highly hypnotizable women showed significantly higher alpha amplitude in the eyes-closed condition than the low scorers; this difference disappeared during task performance and in the eyes-open condition. The left-hemisphere tasks showed lower alpha amplitude in both hemispheres than right-hemisphere tasks and baseline. Right-hemisphere alpha amplitude was lower than left in all experimental conditions. Tasks of high and low difficulty produced different hemispheric behavior on right and left tasks: in the low-difficulty condition there were no changes between baseline, right tasks, and left tasks, whereas under high difficulty there was a decrease in alpha amplitude in the right hemisphere and an even more marked decrease in the left hemisphere during left tasks. The pattern of task effects for ratio scores was the same as for alpha amplitude; however, unlike the analysis of alpha scores, the ratio analysis revealed a hypnotizability × task-difficulty interaction. The highly hypnotizable women showed a less negative alpha ratio during a task of low difficulty than during tasks of high difficulty; the reverse was true for the low-hypnotizable women. Finally, the highly hypnotizable subjects reported less subjective difficulty during performance than the low scorers.
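
The alpha ratio (R − L)/(R + L) used as a dependent variable here is straightforward to compute; a small sketch with hypothetical integrated amplitudes:

```python
import numpy as np

def alpha_ratio(right_alpha, left_alpha):
    """Laterality ratio (R - L) / (R + L) on integrated alpha amplitudes.
    Because alpha amplitude falls with cortical activation, more negative
    values indicate relatively greater right-hemisphere engagement."""
    r = np.asarray(right_alpha, dtype=float)
    l = np.asarray(left_alpha, dtype=float)
    return (r - l) / (r + l)

# Hypothetical integrated amplitudes (microvolts) for three task epochs.
print(alpha_ratio([18.0, 15.5, 20.1], [22.0, 19.0, 21.5]))
```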

13.
The aim was to investigate differences by sex and music expertise in performance of a manual proprioceptive skill. Active left-hand finger-movement discrimination of differences in string height was examined in a position similar to cello playing. Men and women who were experienced cellists or nonmusicians made active string-depression movements and then made absolute judgments about which of five string positions had been presented. Although no main effect was significant, the analysis yielded a sex × musicianship crossover interaction (F(1,51) = 8.4, p = .006) wherein the female cellists performed better than the female nonmusicians, and the reverse occurred for males. These significant differences in active movement discrimination across sex and musicianship may be important in further understanding focal hand dystonia, a disorder in which the interaction of sex and expertise is observed as a strong preponderance in experienced male musicians.
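
The reported crossover interaction comes from a standard two-way between-subjects ANOVA. A sketch of that analysis with statsmodels, using fabricated scores shaped to reproduce the crossover pattern (the real data are not given in the abstract):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical discrimination scores arranged to show a crossover:
# female cellists > female nonmusicians, and the reverse for males.
df = pd.DataFrame({
    "score":    [80, 82, 78, 65, 63, 67, 64, 66, 62, 79, 81, 77],
    "sex":      ["F"] * 6 + ["M"] * 6,
    "musician": (["yes"] * 3 + ["no"] * 3) * 2,
})
model = ols("score ~ C(sex) * C(musician)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(sex):C(musician) row tests the interaction
```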

14.
Two experiments addressed the influences of harmonic relations, melody location, and relative frequency height on the perceptual organization of multivoiced music. In Experiment 1, listeners detected pitch changes in multivoiced piano music. Harmonically related pitch changes and those in the middle-frequency range were least noticeable. All pitch changes were noticeable in the high-frequency voice containing the melody (the most important voice), suggesting that melody can dominate harmonic relations. However, the presence of upper partials in the piano timbre used may have accounted for the harmonic effects. Experiment 2 employed pure sine tones, and replicated the effects of Experiment 1. In addition, the influence of the high-frequency melody on the noticeability of harmonically related pitches was lessened by the presence of a second melody. These findings suggest that harmonic, melodic, and relative frequency height relationships among voices interact in the perceptual organization of multivoiced music.

15.
欧阳玥  肖鑫  戴志强 《心理科学》2012,35(5):1071-1076
Discriminating changes in beat tempo is an important component of music cognition. By manipulating three variables, the magnitude of tempo change (15%, 10%, 8%, 5%, 2%), the direction of change (earlier vs. later), and the meter type (duple vs. triple), this study compared the ability of music-major and non-music-major college students to perceive changes in beat tempo. The results showed that people are more sensitive to earlier than to later changes, and this perceptual advantage is more pronounced in triple meter. In addition, the music group discriminated tempo changes significantly better than the non-music group, and this advantage was likewise more pronounced in triple meter.

16.
This study explores oscillatory brain activity by means of event-related synchronization and desynchronization (%ERS/ERD) of EEG activity during the use of phonological and orthographic-morphological spelling strategies in L2 (English) and L1 (German) in native German-speaking children. EEG was recorded while 33 children worked on a task requiring either phonological or orthographic-morphological spelling strategies. L2 processing elicited more theta %ERS than L1 processing (particularly at bilateral frontal and right posterior parietal sites), which might suggest a stronger involvement of semantic encoding and retrieval of the less familiar L2. The highest level of theta %ERS was revealed for the orthographic-morphological strategy in L2, which might indicate a more intense way of lexical retrieval compared to the phonological strategy in L2 and the orthographic-morphological strategy in L1. Analyses moreover revealed that phonological processing (both in L1 and L2) was associated with comparatively strong left-hemispheric %ERD in the upper alpha frequency band.

17.
The present study used a temporal bisection task to investigate whether music affects time estimation differently from a matched auditory neutral stimulus, and whether the emotional valence of the musical stimuli (i.e., sad vs. happy music) modulates this effect. The results showed that, compared to sine wave control music, music presented in a major (happy) or a minor (sad) key shifted the bisection function toward the right, thus increasing the bisection point value (point of subjective equality). This indicates that the duration of a melody is judged shorter than that of a non-melodic control stimulus, thus confirming that “time flies” when we listen to music. Nevertheless, sensitivity to time was similar for all the auditory stimuli. Furthermore, the temporal bisection functions did not differ as a function of musical mode.
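
In a temporal bisection task, the bisection point (point of subjective equality) is the probe duration at which "long" responses reach 50%, typically read off a fitted psychometric function. A sketch that fits a logistic function to hypothetical response proportions; a rightward shift (larger PSE) corresponds to the "time flies" result reported above:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, pse, slope):
    """Proportion of 'long' responses as a function of probe duration d (s)."""
    return 1.0 / (1.0 + np.exp(-(d - pse) / slope))

# Hypothetical proportions of 'long' responses at seven probe durations.
durations = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
p_long = np.array([0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.97])

(pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[1.0, 0.2])
print(f"bisection point (PSE) = {pse:.3f} s")
# A rightward shift of the whole function (larger PSE) means stimuli are
# judged shorter, i.e. the 'time flies' effect reported for music.
```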

18.
The hypothesis that melodies are recognized at moments when they exhibit a distinctive musical pattern was tested. In a melody recognition experiment, point-of-recognition (POR) data were gathered from 32 listeners (16 musicians and 16 nonmusicians) judging 120 melodies. A series of models of melody recognition was developed, resulting from a stepwise multiple regression of two classes of information relating to melodic familiarity and melodic distinctiveness. Melodic distinctiveness measures were assembled through statistical analyses of over 15,000 Western themes and melodies. A significant model, explaining 85% of the variance, entered measures primarily of timing distinctiveness and pitch distinctiveness, but not familiarity, as predictors of POR. Differences between nonmusician and musician models suggest a processing shift from momentary to accumulated information with increased exposure to music. Supplemental materials for this article may be downloaded from http://mc.psychonomic-journals.org/content/supplemental.
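
Stepwise multiple regression of the kind described enters predictors greedily by significance. A generic forward-selection sketch (not the authors' exact procedure) with fabricated distinctiveness and familiarity predictors:

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, names, alpha_enter=0.05):
    """Greedy forward selection: repeatedly add the candidate predictor with
    the smallest p-value, stopping when none clears alpha_enter. A generic
    sketch of a stepwise procedure, not the authors' exact algorithm."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        pvals = {}
        for j in remaining:
            model = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
            pvals[j] = model.pvalues[-1]  # p-value of the newly added term
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]

# Fabricated data: point of recognition driven by the two distinctiveness
# measures, with familiarity as an uninformative candidate (hypothetical).
rng = np.random.default_rng(2)
n = 120
timing = rng.standard_normal(n)
pitch = rng.standard_normal(n)
familiarity = rng.standard_normal(n)
por = 5.0 - 1.5 * timing - 1.0 * pitch + rng.normal(scale=0.5, size=n)
X = np.column_stack([timing, pitch, familiarity])
print(forward_stepwise(X, por, ["timing", "pitch", "familiarity"]))
```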

19.
This paper examines infants’ ability to perceive various aspects of musical material that are significant in music in general and in Western European music in particular: contour, intervals, exact pitches, diatonic structure, and rhythm. For the most part, infants focus on relational aspects of melodies, synthesizing global representations from local details. They encode the contour of a melody across variations in exact pitches and intervals. They extract information about pitch direction from the smallest musically relevant pitch change in Western music, the semitone. Under certain conditions, infants detect interval changes in the context of transposed sequences, their performance showing enhancement for sequences that conform to Western musical structure. Infants have difficulty retaining exact pitches except for sets of pitches that embody important musical relations. In the temporal domain, they group the elements of auditory sequences on the basis of similarity and they extract the temporal structure of a melody across variations in tempo.

20.
In the present study, the EEG was recorded from the scalp of musicians while mentally active in their field. Analytic, creative and memory processes of the brain were observable using a special electrophysiological method called DC-potential recording. Music students listened to a sequence of four notes and subsequently were either to reverse the sequence (task 1 = analytic) or to compose a new continuation (task 2 = creative). In task 3, the initial segment of a well-known melody was presented and had to be continued (memory task). All tasks had to be solved mentally (imagery). In tasks 1 and 2, either tonal or atonal sequences were presented.

During processing, the analytic task elicited the highest brain activity. It involved mainly parieto-temporal areas of both hemispheres, with a tendency for the left hemisphere to dominate. The memory task produced predominant activity over the right hemisphere. The creative task caused the lowest brain activation and elicited an unexpected lateralisation to the left, though we expected creativity to be a right-hemispheric, holistic-synthetic phenomenon.

Comparing listening with processing of the perceived music, we found a significant shift from a non-significant right-hemispheric to a non-significant left-hemispheric predominance (except with the memory task). This indicates that musicians do not lateralise to the left hemisphere per se when listening to music. Whether one finds a left-hemispheric lateralisation in listening tasks or a right-hemispheric one probably depends on the amount of simultaneous analytic-sequential processing the musician undertakes.
