Similar Documents
1.
Although much research has explored emotional responses to music using single musical elements, none has explored the interactive effects of mode, texture, and tempo in a single experiment. To this end, a 2 (mode: major vs. minor) × 2 (texture: nonharmonized vs. harmonized) × 3 (tempo: 72, 108, 144 beats per min) within-participants experimental design was employed, in which 177 college students rated four brief musical phrases on continuous happy-sad scales. Major keys, nonharmonized melodies, and faster tempos were associated with happier responses, whereas their respective opposites were associated with sadder responses. These effects were also interactive, such that the typically positive association between tempo and happiness was inverted among minor, nonharmonized phrases. Some of these effects were moderated by participants' gender and amount of musical experience. A principal components analysis of responses to the stimuli revealed one negatively and one positively valenced factor of emotional musical stimuli.
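As a rough illustration only (not taken from the study), the sketch below enumerates the 12 cells of such a 2 × 2 × 3 within-participants design and converts each tempo from beats per minute to its inter-beat interval; all names and values are assumptions made for the example.

```python
# Minimal sketch: enumerate the 12 cells of a 2 (mode) x 2 (texture) x 3 (tempo)
# within-participants design and convert each tempo to seconds per beat.
from itertools import product

modes = ["major", "minor"]
textures = ["nonharmonized", "harmonized"]
tempos_bpm = [72, 108, 144]

for mode, texture, bpm in product(modes, textures, tempos_bpm):
    inter_beat_s = 60.0 / bpm  # e.g. 72 BPM -> 0.833 s per beat
    print(f"{mode:5s} | {texture:13s} | {bpm:3d} BPM | {inter_beat_s:.3f} s/beat")
```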

2.
In two experiments, we examined the effect of intensity and intensity change on judgements of pitch differences or interval size. In Experiment 1, 39 musically untrained participants rated the size of the interval spanned by two pitches within individual gliding tones. Tones were presented at high intensity, low intensity, looming intensity (up-ramp), and fading intensity (down-ramp) and glided between two pitches spanning either 6 or 7 semitones (a tritone or a perfect fifth interval). The pitch shift occurred in either ascending or descending directions. Experiment 2 repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across two discrete tones (i.e., a melodic interval). Results indicated that participants were sensitive to the differences in interval size presented: Ratings were significantly higher when two pitches differed by 7 semitones than when they differed by 6 semitones. However, ratings were also dependent on whether the interval was high or low in intensity, whether it increased or decreased in intensity across the two pitches, and whether the interval was ascending or descending in pitch. Such influences illustrate that the perception of pitch relations does not always adhere to a logarithmic function as implied by their musical labels, but that identical intervals are perceived as substantially different in size depending on other attributes of the sound source.
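The logarithmic function referred to here is the standard equal-tempered definition of interval size, under which the distance in semitones between two frequencies f1 and f2 is 12·log2(f2/f1), so a tritone spans 6 semitones and a perfect fifth spans 7. A minimal check, with illustrative frequencies that are not the study's stimuli:

```python
# Interval size in equal temperament: semitones = 12 * log2(f2 / f1).
# The frequencies below are illustrative only, not the experimental stimuli.
import math

def semitones(f1_hz: float, f2_hz: float) -> float:
    return 12.0 * math.log2(f2_hz / f1_hz)

print(semitones(440.0, 440.0 * 2 ** (6 / 12)))  # tritone        -> 6.0
print(semitones(440.0, 440.0 * 2 ** (7 / 12)))  # perfect fifth  -> 7.0
print(semitones(440.0, 660.0))                  # just fifth     -> ~7.02
```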

3.
Musically trained and untrained participants provided magnitude estimates of the size of melodic intervals. Each interval was formed by a sequence of two pitches that differed by between 50 cents (one half of a semitone) and 2,400 cents (two octaves) and was presented in a high or a low pitch register and in an ascending or a descending direction. Estimates were larger for intervals in the high pitch register than for those in the low pitch register and for descending intervals than for ascending intervals. Ascending intervals were perceived as larger than descending intervals when presented in a high pitch register, but descending intervals were perceived as larger than ascending intervals when presented in a low pitch register. For intervals up to an octave in size, differentiation of intervals was greater for trained listeners than for untrained listeners. We discuss the implications for psychophysical pitch scales and models of music perception.
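For reference, cents follow the standard definition 1200·log2(f2/f1), so 100 cents is one equal-tempered semitone, 50 cents is half a semitone, and 2,400 cents is two octaves. A short check with made-up frequencies:

```python
# Cents between two frequencies: cents = 1200 * log2(f2 / f1).
# Example values are illustrative and not taken from the study.
import math

def cents(f1_hz: float, f2_hz: float) -> float:
    return 1200.0 * math.log2(f2_hz / f1_hz)

print(cents(440.0, 440.0 * 2 ** (50 / 1200)))  # half a semitone -> 50.0
print(cents(440.0, 440.0 * 4.0))               # two octaves     -> 2400.0
```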

4.
Absolute pitch is a refined pitch-perception ability: listeners who possess it can name the pitches they hear without reference to a standard tone (A4). This study explored the relationship between absolute pitch and musical-syntax processing by comparing listeners with and without absolute pitch on their perception of the basic rules of musical syntax and on their ability to segment musical syntactic structure. The results showed that listeners with absolute pitch perceived the basic rules of musical syntax better than the control group, and this perceptual advantage also extended to their segmentation of musical syntactic structure. These findings indicate that listeners with absolute pitch can not only name pitches in isolation but also show an advantage in processing pitch relations in tonal music.

5.
In two experiments we addressed the roles of temporal and pitch structures in judgments of melodic phrases. Musical excerpts were rated on how good or complete a phrase they made. In Experiment 1, trials in the temporal condition retained the original temporal pattern but were equitonal; trials in the pitch condition retained the original pitch pattern but were equitemporal; and trials in the melody condition contained both temporal and pitch patterns. In Experiment 2, one pattern (pitch or temporal) was shifted in phase and recombined with the other pattern to create the pitch and temporal conditions. In the melody condition, both patterns were shifted together. In both experiments, ratings in the temporal and pitch conditions were uncorrelated, and the melody condition ratings were accurately predicted by a linear combination of the pitch and temporal condition ratings. These results were consistent across musicians with varying levels of experience.

6.
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients > .89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change, derived from the difference in loudness at the beginning and end points of the continuous response, was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
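A minimal sketch of the 'indirect' loudness-change measure described above, the difference between the final and initial points of a continuous rating, together with the loudness-versus-time correlation; the rating series below is invented for illustration and is not the study's data:

```python
# Hedged sketch: 'indirect' loudness change = final rating - initial rating,
# plus the linear correlation between the continuous rating and time.
# The rating series is synthetic, roughly mimicking an up-ramp trial.
import numpy as np

time_s = np.linspace(0.0, 6.4, 65)  # 6.4-s trial sampled at ~10 Hz
ratings = 20.0 + 8.0 * time_s + np.random.normal(0.0, 1.5, time_s.size)

indirect_change = ratings[-1] - ratings[0]
r = np.corrcoef(ratings, time_s)[0, 1]

print(f"indirect loudness change = {indirect_change:.1f}")
print(f"loudness-time correlation r = {r:.2f}")
```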

7.
This paper studied music in 14 children and adolescents with Williams-Beuren syndrome (WBS), a multi-system neurodevelopmental disorder, and 14 age-matched controls. Five aspects of music were tested. There were two tests of core music domains, pitch discrimination and rhythm discrimination. There were two tests of musical expressiveness, melodic imagery and phrasing. There was one test of musical interpretation, the ability to identify the emotional resonance of a musical excerpt. Music scores were analyzed by means of logistic regressions that modeled outcome (higher or lower music scores) as a function of group membership (WBS or Control) and cognitive age. Compared to age peers, children with WBS had similar levels of musical expressiveness, but were less able to discriminate pitch and rhythm, or to attach a semantic interpretation to emotion in music. Music skill did not vary with cognitive age. Musical strength in individuals with WBS involves not so much formal analytic skill in pitch and rhythm discrimination as a strong engagement with music as a means of expression, play, and, perhaps, improvisation.
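A hedged sketch of the kind of logistic regression described above, modelling a binary outcome (higher vs. lower music score) from group membership and cognitive age; the encoding, variable names, and data are assumptions made for the example, not the study's analysis or data:

```python
# Illustrative only: logistic regression of a binary music-score outcome on
# group membership (WBS = 1, Control = 0) and cognitive age.
# All values below are invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

group_wbs     = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
cognitive_age = np.array([5, 6, 7, 8, 9, 10, 5, 6, 7, 8, 9, 10], dtype=float)
high_score    = np.array([0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1])

X = np.column_stack([group_wbs, cognitive_age])
model = LogisticRegression().fit(X, high_score)

print("coefficients (group, cognitive age):", model.coef_[0])
print("intercept:", model.intercept_[0])
```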

8.
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies have explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done with ecologically valid stimuli or with non-Western music. The present study investigated implicit learning of modal melodic features in North Indian classical music in a realistic and ecologically valid way. It employed a cross-grammar design, using melodic materials from two modes (rāgas) that use the same scale. Findings indicated that Western participants unfamiliar with Indian music incidentally learned to identify distinctive features of each mode. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware of whether they were right in their responses; that is, they possessed explicit judgment knowledge. Altogether, our findings show incidental learning in a realistic, ecologically valid context after only a very short exposure, and they provide evidence that incidental learning constitutes a powerful mechanism that plays a fundamental role in musical acquisition.

9.
The present study tested quantified predictors based on the bottom-up principles of Narmour's (1990) implication-realization model of melodic expectancy against continuity ratings collected for a tone that followed a two-tone melodic beginning. Twenty-four subjects (12 musically trained, 12 untrained) were presented with each of eight melodic intervals, two successive tones which they were asked to consider as the beginning of a melody. On each trial, a melodic interval was followed by a third tone, one of the 25 chromatic notes within the range one octave below to one octave above the second tone of the interval. The subjects were asked to rate how well the third tone continued the melody. A series of regression analyses was performed on the continuation ratings, and a final model to account for the variance in the ratings is proposed. Support was found for three of Narmour's principles and a modified version of a fourth. Support was also found for predictor variables based on the pitch organization of tonal harmonic music. No significant differences between the levels of musical training were encountered.

10.
Arousal and valence (pleasantness) are considered primary dimensions of emotion. However, the degree to which these dimensions interact in emotional processing across sensory modalities is poorly understood. We addressed this issue by applying a crossmodal priming paradigm in which auditory primes (Romantic piano solo music) varying in arousal and/or pleasantness were sequentially paired with visual targets (IAPS pictures). In Experiment 1, the emotion spaces of 120 primes and 120 targets were explored separately in addition to the effects of musical training and gender. Thirty-two participants rated their felt pleasantness and arousal in response to primes and targets on equivalent rating scales as well as their familiarity with the stimuli. Musical training was associated with elevated familiarity ratings for high-arousing music and a trend for elevated arousal ratings, especially in response to unpleasant musical stimuli. Males reported higher arousal than females for pleasant visual stimuli. In Experiment 2, 40 nonmusicians rated their felt arousal and pleasantness in response to 20 visual targets after listening to 80 musical primes. Arousal associated with the musical primes modulated felt arousal in response to visual targets, yet no such transfer of pleasantness was observed between the two modalities. Experiment 3 sought to rule out the possibility of any order effect of the subjective ratings, and responses of 14 nonmusicians replicated results of Experiment 2. This study demonstrates the effectiveness of the crossmodal priming paradigm in basic research on musical emotions.

11.
The effects of harmony and rhythm on expectancy formation were studied in two experiments. In both studies, we generated musical passages consisting of a melodic line accompanied by four harmonic (chord) events. These sequences varied in their harmonic content, the rhythmic periodicity of the three context chords prior to the final chord, and the ending time of the final chord itself. In Experiment 1, listeners provided ratings for how well the final chord in a chord sequence fit their expectations for what was to come next; analyses revealed subtle changes in ratings as a function of both harmonic and rhythmic variation. Experiment 2 extended these results; listeners made a speeded reaction time judgment on whether the final chord of a sequence belonged with its set of context chords. Analysis of the reaction time data suggested that harmonic and rhythmic variation also influenced the speed of musical processing. These results are interpreted with reference to current models of music cognition, and they highlight the need for rhythmical weighting factors within the psychological representation of tonal/pitch information.

12.
We present the results of a study testing the often-theorized role of musical expectations in inducing listeners' emotions in a live flute concert experiment with 50 participants. Using an audience response system developed for this purpose, we measured subjective experience and peripheral psychophysiological changes continuously. To confirm the existence of the link between expectation and emotion, we used a threefold approach. (1) On the basis of an information-theoretic cognitive model, melodic pitch expectations were predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous rating scale was used by half of the audience to measure their experience of unexpectedness toward the music heard. (3) Emotional reactions were measured using a multicomponent approach: subjective feeling (valence and arousal rated continuously by the other half of the audience members), expressive behavior (facial EMG), and peripheral arousal (the latter two being measured in all 50 participants). Results confirmed the predicted relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and emotional reactions (psychologically and physiologically). Musical structures leading to expectation reactions were manifested in emotional reactions at different emotion component levels (increases in subjective arousal and autonomic nervous system activations). These results emphasize the role of musical structure in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music.
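The abstract does not name the model, but information-theoretic accounts of melodic expectation typically quantify the unexpectedness of a note as its information content, -log2 of its conditional probability given the preceding context; high information content corresponds to a violated expectation. A toy sketch with a hypothetical bigram pitch model, not the authors' implementation:

```python
# Toy sketch: information content IC(note | previous note) = -log2 P(note | prev),
# with probabilities estimated from bigram counts over a tiny invented melody.
# High IC flags notes that violate the expectations built up by the context.
import math
from collections import Counter, defaultdict

melody = ["C4", "D4", "E4", "C4", "D4", "E4", "F4", "E4", "D4", "C4"]

bigram_counts = Counter(zip(melody[:-1], melody[1:]))
context_totals = defaultdict(int)
for (prev, _), n in bigram_counts.items():
    context_totals[prev] += n

for prev, note in zip(melody[:-1], melody[1:]):
    p = bigram_counts[(prev, note)] / context_totals[prev]
    print(f"{prev} -> {note}: IC = {-math.log2(p):.2f} bits")
```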

13.
The notion that the melody (i.e., pitch structure) of familiar music is more recognizable than its accompanying rhythm (i.e., temporal structure) was examined with the same set of nameable musical excerpts in three experiments. In Experiment 1, the excerpts were modified to keep either their original pitch variations with all durations set to isochrony (melodic condition) or their original temporal pattern played on a single constant pitch (rhythmic condition). The subjects, who were selected without regard to musical training, were found to name more tunes and to rate their feeling of knowing the musical excerpts far higher in the melodic condition than in the rhythmic condition. These results were replicated in Experiment 2, wherein the melodic and rhythmic patterns of the musical excerpts were interchanged to create chimeric mismatched tunes. The difference in saliency of the melodic pattern and the rhythmic pattern also emerged with a music-title-verification task in Experiment 3, thereby ruling out response selection as the main source of the discrepancy. The lesser effectiveness of rhythmic structure appears to be related to its lesser encoding distinctiveness relative to melodic structure. In general, rhythm was found to be a poor cue for the musical representations that are stored in long-term memory. Nevertheless, in all three experiments, the most effective cue for music identification involved the proper combination of pitches and durations. Therefore, the optimal code of access to long-term memory for music resides in a combination of rhythm and melody, of which the latter is the more informative.

14.
In two experiments we explored how the dimensions of pitch and time contribute to the perception and production of musical sequences. We tested how dimensional diversity (the number of unique categories in each dimension) affects how pitch and time combine. In Experiment 1, 18 musically trained participants rated the complexity of sequences varying only in their diversity in pitch or time; a separate group of 18 pianists reproduced these sequences after listening to them without practice. Overall, sequences with more diversity were perceived as more complex, but pitch diversity influenced ratings more strongly than temporal diversity. Further, although participants perceived sequences with high levels of pitch diversity as more complex, errors were more common in the sequences with higher diversity in time. Sequences in Experiment 2 exhibited diversity in both pitch and time; diversity levels were a subset of those tested in Experiment 1. Again diversity affected complexity ratings and errors, but there were no statistical interactions between dimensions. Nonetheless, pitch diversity was the primary factor in determining perceived complexity, and again temporal errors occurred more often than pitch errors. Additionally, diversity in one dimension influenced error rates in the other dimension in that both error types were more frequent relative to Experiment 1. These results suggest that although pitch and time do not interact directly, they are nevertheless not processed in an informationally encapsulated manner. The findings also align with a dimensional salience hypothesis, in which pitch is prioritised in the processing of typical Western musical sequences.

15.
Responsiveness of musically trained and untrained adults to pitch-distributional information in melodic contexts was assessed. In Experiment 1, melodic contexts were pure-tone sequences, generated from either a diatonic or one of four nondiatonic tonesets, in which pitch-distributional information was manipulated by variation of the relative frequency of occurrence of tones from the toneset. Both the assignment of relative frequency of occurrence to tones and the construction of the (fixed) temporal order of tones within the sequences contravened the conventions of Western tonal music. A probe-tone technique was employed. Each presentation of a sequence was followed by a probe tone, one of the 12 chromatic notes within the octave. Listeners rated the goodness of musical fit of the probe tone to the sequence. Probe-tone ratings were significantly related to frequency of occurrence of the probe tone in the sequence for both trained and untrained listeners. In addition, probe-tone ratings decreased as the pitch distance between the probe tone and the final tone of the sequence increased. For musically trained listeners, probe-tone ratings for diatonic sequences tended also to reflect the influence of an internalized tonal schema. Experiment 2 demonstrated that the temporal location of tones in the sequences could not alone account for the effect of frequency of occurrence in Experiment 1. Experiment 3 tested musically untrained listeners under the conditions of Experiment 1, with the exception that the temporal order of tones in each sequence was randomized across trials. The effect of frequency of occurrence found in Experiment 1 was replicated and strengthened.

16.
Four experiments investigated the perception of tonal structure in polytonal music. The experiments used musical excerpts in which the upper stave of the music suggested a different key than the lower stave. In Experiment 1, listeners rated the goodness of fit of probe tones following an excerpt from Dubois's Circus. Results suggested that listeners were sensitive to two keys, and weighted them according to their perceived importance within the excerpt. Experiment 2 confirmed that music within each stave reliably conveyed key structure on its own. In Experiment 3, listeners rated probe tones following an excerpt from Milhaud's Sonata No. 1 for Piano, in which different keys were conveyed in widely separate pitch registers. Ratings were collected across three octaves. Listeners did not associate each key with a specific register. Rather, ratings for all three octave registers reflected only the key associated with the upper stave. Experiment 4 confirmed that the music within each stave reliably conveyed key structure on its own. It is suggested that when one key predominates in a polytonal context, other keys may not contribute to the overall perceived tonal structure. The influence of long-term knowledge and immediate context on the perception of tonal structure in polytonal music is discussed.
