Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Participants rated the perceived happiness, brightness, awkwardness, pitch velocity, and tempo change of ascending and descending musical scales in four modes (natural, melodic, and harmonic minor modes and the major mode). Only minor differences between ratings of natural, harmonic, or melodic minor scales or between ratings of parallel and relative major scales were found. Ascending scales were rated as happier, brighter, and more accelerating than were descending scales; ascending minor scales were rated as faster and more awkward than were descending minor scales. Musical keys in each mode were compared, and significant differences were found. Musical keys that started on a higher pitch were rated as happier, brighter, and faster and as speeding up more than were keys that started on a lower pitch. The data were consistent with previous findings and suggest that pitch and direction (contour), rather than mode or key, influence listeners' judgments of musical stimuli.

2.
Musically trained and untrained participants provided magnitude estimates of the size of melodic intervals. Each interval was formed by a sequence of two pitches that differed by between 50 cents (one half of a semitone) and 2,400 cents (two octaves) and was presented in a high or a low pitch register and in an ascending or a descending direction. Estimates were larger for intervals in the high pitch register than for those in the low pitch register and for descending intervals than for ascending intervals. Ascending intervals were perceived as larger than descending intervals when presented in a high pitch register, but descending intervals were perceived as larger than ascending intervals when presented in a low pitch register. For intervals up to an octave in size, differentiation of intervals was greater for trained listeners than for untrained listeners. We discuss the implications for psychophysical pitch scales and models of music perception.
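The cent measure used in this abstract is a logarithmic unit: 100 cents per equal-tempered semitone, 1,200 per octave. A minimal sketch of the conversion (the frequencies below are illustrative choices, not the study's stimuli):

```python
import math

def interval_cents(f1_hz: float, f2_hz: float) -> float:
    """Signed size of the interval from f1 to f2 in cents
    (100 cents = 1 semitone, 1,200 cents = 1 octave)."""
    return 1200.0 * math.log2(f2_hz / f1_hz)

# An ascending octave from A4 (440 Hz) spans 1,200 cents;
# two octaves span 2,400 cents, the largest interval in the study.
print(interval_cents(440.0, 880.0))    # 1200.0
print(interval_cents(440.0, 1760.0))   # 2400.0
```

A descending interval simply comes out negative, which matches the ascending/descending distinction the study manipulates.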

3.
Music provides a useful domain in which to study how the different attributes of complex multidimensional stimuli are processed both separately and in combination. Much research has been devoted to addressing how the dimensions of pitch and time are co-processed in music listening tasks. Neuropsychological studies have provided evidence for a certain degree of independence between pitch and temporal processing, although there are also many experimental reports favouring interactive models of pitch and temporal processing. Here we extended these investigations by examining the processing of pitch and temporal structures when music is presented in the visual modality (i.e. in the form of music notation). In two experiments, musician subjects were briefly presented with visual musical stimuli containing both pitch and temporal information, and they were subsequently required to recall both the pitch and temporal information. In Experiment 1, we documented that concurrent, unattended pitch and rhythmic auditory interference stimuli disrupted the recall of pitch, but not time. In Experiment 2, we showed that manipulating the tonal structure of the visually presented stimuli affected the recall of pitch, but not time. On the other hand, manipulating the metrical properties of the visual stimuli affected recall of time, and of pitch to a certain extent. Taken together, these results suggest that the processing of pitch is constrained by the processing of time, but the processing of time is not affected by the processing of pitch. These results support neither strong independence nor interactive models of pitch and temporal processing, but they suggest that the processing of time can occur independently of the processing of pitch when performing a written recall task.

4.
In two experiments, we examined the effect of intensity and intensity change on judgements of pitch differences or interval size. In Experiment 1, 39 musically untrained participants rated the size of the interval spanned by two pitches within individual gliding tones. Tones were presented at high intensity, low intensity, looming intensity (up-ramp), and fading intensity (down-ramp) and glided between two pitches spanning either 6 or 7 semitones (a tritone or a perfect fifth interval). The pitch shift occurred in either ascending or descending directions. Experiment 2 repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across two discrete tones (i.e., a melodic interval). Results indicated that participants were sensitive to the differences in interval size presented: Ratings were significantly higher when two pitches differed by 7 semitones than when they differed by 6 semitones. However, ratings were also dependent on whether the interval was high or low in intensity, whether it increased or decreased in intensity across the two pitches, and whether the interval was ascending or descending in pitch. Such influences illustrate that the perception of pitch relations does not always adhere to a logarithmic function as implied by their musical labels, but that identical intervals are perceived as substantially different in size depending on other attributes of the sound source.
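The logarithmic function implied by musical labels means that a named interval corresponds to a fixed frequency ratio, not a fixed difference in Hz. A quick sketch in equal temperament (the register choices below are illustrative assumptions):

```python
def transpose(f_hz: float, semitones: float) -> float:
    """Frequency after moving by the given number of equal-tempered semitones."""
    return f_hz * 2.0 ** (semitones / 12.0)

# The same 7-semitone interval (a perfect fifth) spans a much larger
# Hz difference in a high register than in a low one, yet the frequency
# ratio, and hence the musical label, is identical.
for start in (110.0, 440.0):          # A2 and A4
    fifth = transpose(start, 7)       # 7 semitones up
    print(round(fifth - start, 1), round(fifth / start, 4))
```

This is the sense in which ratings that vary with register or intensity depart from the nominally "identical" interval size.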

6.
Arousal and valence (pleasantness) are considered primary dimensions of emotion. However, the degree to which these dimensions interact in emotional processing across sensory modalities is poorly understood. We addressed this issue by applying a crossmodal priming paradigm in which auditory primes (Romantic piano solo music) varying in arousal and/or pleasantness were sequentially paired with visual targets (IAPS pictures). In Experiment 1, the emotion spaces of 120 primes and 120 targets were explored separately in addition to the effects of musical training and gender. Thirty-two participants rated their felt pleasantness and arousal in response to primes and targets on equivalent rating scales as well as their familiarity with the stimuli. Musical training was associated with elevated familiarity ratings for high-arousing music and a trend for elevated arousal ratings, especially in response to unpleasant musical stimuli. Males reported higher arousal than females for pleasant visual stimuli. In Experiment 2, 40 nonmusicians rated their felt arousal and pleasantness in response to 20 visual targets after listening to 80 musical primes. Arousal associated with the musical primes modulated felt arousal in response to visual targets, yet no such transfer of pleasantness was observed between the two modalities. Experiment 3 sought to rule out the possibility of any order effect of the subjective ratings, and responses of 14 nonmusicians replicated the results of Experiment 2. This study demonstrates the effectiveness of the crossmodal priming paradigm in basic research on musical emotions.

7.
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients > .89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change derived from the difference in loudness at the beginning and end points of the continuous response was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.

8.
It has recently been demonstrated that the reported tastes/flavours of food/beverages can be modulated by means of external visual and auditory stimuli such as typeface, shapes, and music. The present study was designed to assess the role of the emotional valence of the product-extrinsic stimuli in such crossmodal modulations of taste. Participants evaluated samples of mixed fruit juice whilst simultaneously being presented with auditory or visual stimuli having either positive or negative valence. The soundtracks had been harmonised with either consonant (positive valence) or dissonant (negative valence) musical intervals. The visual stimuli consisted of images of emotional faces from the International Affective Picture System (IAPS) with valence ratings matched to the soundtracks. Each juice sample was rated on two computer-based scales: one anchored with the words sour and sweet, the other requiring hedonic ratings. Participants who tasted the juice sample while presented with the positively-valenced stimuli rated the juice as tasting sweeter than did those presented with the negatively-valenced stimuli, regardless of whether the stimuli were visual or auditory. These results suggest that the emotional valence of food-extrinsic stimuli can play a role in shaping food flavour evaluation and liking.

9.
Absolute pitch is a finely tuned pitch-perception ability: listeners who possess it can name a heard pitch without reference to a standard tone (A4). This study examined the relationship between absolute pitch and musical-syntax processing by comparing participants with and without absolute pitch on their perception of the basic rules of musical syntax and on their ability to segment musical syntactic structure. The results showed that participants with absolute pitch perceived the basic rules of musical syntax at a higher level than the control group, and that this perceptual advantage also extended to their segmentation of musical syntactic structure. These findings indicate that possessors of absolute pitch can not only name pitches in isolation but also show an advantage in processing the pitch relations of tonal music.

10.
This study was aimed at examining whether pitch height and pitch change are mentally represented along spatial axes. A series of experiments explored, for isolated tones and 2-note intervals, the occurrence of effects analogous to the spatial numerical association of response codes (SNARC) effect. Response device orientation (horizontal vs. vertical), task, and musical expertise of the participants were manipulated. The pitch of isolated tones triggered the automatic activation of a vertical axis independently of musical expertise, but the contour of melodic intervals did not. By contrast, automatic associations with the horizontal axis seemed linked to music training for pitch and, to a lesser extent, for intervals. These results, discussed in the light of studies on number representation, provide a new example of the effects of musical expertise on music cognition.

11.
To what extent do infants represent the absolute pitches of complex auditory stimuli? Two experiments with 8-month-old infants examined the use of absolute and relative pitch cues in a tone-sequence statistical learning task. The results suggest that, given unsegmented stimuli that do not conform to the rules of musical composition, infants are more likely to track patterns of absolute pitches than of relative pitches. A third experiment tested adults with or without musical training on the same statistical learning tasks used in the infant experiments. Unlike the infants, adult listeners relied primarily on relative pitch cues. These results suggest a shift from an initial focus on absolute pitch to the eventual dominance of relative pitch, which, it is argued, is more useful for both music and speech processing.

12.
In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were bigger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.

13.
This paper examines infants' ability to perceive various aspects of musical material that are significant in music in general and in Western European music in particular: contour, intervals, exact pitches, diatonic structure, and rhythm. For the most part, infants focus on relational aspects of melodies, synthesizing global representations from local details. They encode the contour of a melody across variations in exact pitches and intervals. They extract information about pitch direction from the smallest musically relevant pitch change in Western music, the semitone. Under certain conditions, infants detect interval changes in the context of transposed sequences, their performance showing enhancement for sequences that conform to Western musical structure. Infants have difficulty retaining exact pitches except for sets of pitches that embody important musical relations. In the temporal domain, they group the elements of auditory sequences on the basis of similarity and they extract the temporal structure of a melody across variations in tempo.

14.
The ability to make accurate audiovisual synchrony judgments is affected by the "complexity" of the stimuli: We are much better at making judgments when matching single beeps or flashes as opposed to video recordings of speech or music. In the present study, we investigated whether the predictability of sequences affects whether participants report that auditory and visual sequences appear to be temporally coincident. When we reduced their ability to predict both the next pitch in the sequence and the temporal pattern, we found that participants were increasingly likely to report that the audiovisual sequences were synchronous. However, when we manipulated pitch and temporal predictability independently, the same effect did not occur. By altering the temporal density (items per second) of the sequences, we further determined that the predictability effect occurred only in temporally dense sequences: If the sequences were slow, participants' responses did not change as a function of predictability. We propose that reduced predictability affects synchrony judgments by reducing the effective pitch and temporal acuity in perception of the sequences.

15.
To explore the mechanisms of audiovisual music-emotion processing and how they are affected by emotion type and musical training background, this study used videos of musical performances expressing happiness or sadness, and compared musician and non-musician participants on the speed, accuracy, and intensity of their emotion ratings under three conditions: auditory only, visual only, and audiovisual. The results showed that (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition; and (2) non-musicians rated sadness more accurately than musicians but rated happiness less accurately. These findings suggest that the audiovisual integration advantage in music-emotion processing holds only relative to the visual-only channel; non-musicians are more sensitive to changes in visual emotional information, whereas musicians rely more on their musical experience. Adding congruent visual emotional information to musical performances may therefore help listeners without musical training.

16.
Pitch is an important dimension in both music and speech. Congenital amusia is a disorder of musical pitch processing. Investigating how individuals with amusia process musical and speech pitch helps reveal whether music and speech share specific cognitive and neural mechanisms for pitch processing. Existing findings indicate that individuals with amusia are impaired in musical pitch processing, and that this pitch deficit affects speech pitch processing to some degree. Moreover, a tone-language background does not compensate for the pitch deficit in amusia. These findings support the resource-sharing framework, according to which music and language share specific cognitive and neural mechanisms (Patel, 2003, 2008, in press), and may to some extent inform the clinical treatment of aphasia.

17.
Five decades of research have shown clear links between exposure to violent visual media and subsequent aggression; however, there has been little research that directly compares the effects of exposure to violent visual versus auditory media, or which has experimentally tested the effect of violent song lyrics with musical 'tone' held constant. In the current study, 194 participants heard music either with or without lyrics, and with or without a violent music video, and were then given the chance to aggress using the hot sauce paradigm. Musical tone was held constant across groups, and a fifth (control) group had no media exposure at all. Experimental groups, on average, were significantly more aggressive than controls. The strongest effect was elicited by exposure to violent lyrics, regardless of whether violent imagery accompanied the music, and regardless of various person-based characteristics. Implications for theories of media violence and models of aggression are discussed.

18.
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moment after birth, newborns prefer their native language, recognize their mother's voice, and show a greater responsiveness to lullabies presented during pregnancy. Yet, the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of speech stimuli periodicity, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12–72 hours. The sample was divided into two groups according to their prenatal musical exposure (29 daily musically exposed; 31 not-daily musically exposed). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimuli sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimuli periodicity was quantified as the FFR spectral amplitude at the stimulus F0. Data revealed that newborns exposed daily to music exhibit larger spectral amplitudes at F0 as compared to not-daily musically-exposed newborns, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates the tuning to human speech fundamental frequency, which may support early language processing and acquisition.

Research Highlights

  • Frequency-following responses to speech were collected from a sample of neonates prenatally exposed to music daily and compared to neonates not exposed to music daily.
  • Neonates who experienced daily prenatal music exposure exhibit enhanced frequency-following responses to the periodicity of speech sounds.
  • Prenatal music exposure is associated with a fine-tuned encoding of human speech fundamental frequency, which may facilitate early language processing and acquisition.
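The periodicity measure described above, spectral amplitude at the stimulus F0, can be sketched as a single-bin discrete Fourier transform. The sampling rate, window length, and synthetic tone below are illustrative assumptions, not the study's recording parameters:

```python
import math

def spectral_amplitude(signal, fs_hz, target_hz):
    """Single-sided amplitude of the DFT bin nearest target_hz."""
    n = len(signal)
    k = round(target_hz * n / fs_hz)   # nearest DFT bin index
    re = sum(x * math.cos(2.0 * math.pi * k * t / n) for t, x in enumerate(signal))
    im = sum(x * math.sin(2.0 * math.pi * k * t / n) for t, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 16000                    # sampling rate in Hz (illustrative)
n = fs                        # 1-s window, so the 113 Hz bin falls exactly on F0
f0 = 113.0                    # stimulus fundamental, as reported in the study
tone = [0.8 * math.sin(2.0 * math.pi * f0 * t / fs) for t in range(n)]
print(round(spectral_amplitude(tone, fs, f0), 3))   # recovers the 0.8 amplitude
```

A larger value at F0 for one group than another is, in essence, what the reported group difference in neural periodicity encoding quantifies.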

19.
This paper studied music in 14 children and adolescents with Williams-Beuren syndrome (WBS), a multi-system neurodevelopmental disorder, and 14 age-matched controls. Five aspects of music were tested. There were two tests of core music domains, pitch discrimination and rhythm discrimination. There were two tests of musical expressiveness, melodic imagery and phrasing. There was one test of musical interpretation, the ability to identify the emotional resonance of a musical excerpt. Music scores were analyzed by means of logistic regressions that modeled outcome (higher or lower music scores) as a function of group membership (WBS or Control) and cognitive age. Compared to age peers, children with WBS had similar levels of musical expressiveness, but were less able to discriminate pitch and rhythm, or to attach a semantic interpretation to emotion in music. Music skill did not vary with cognitive age. Musical strength in individuals with WBS involves not so much formal analytic skill in pitch and rhythm discrimination as a strong engagement with music as a means of expression, play, and, perhaps, improvisation.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号