Similar Documents
20 similar documents found (search time: 15 ms)
1.
Quinn S, Watt R. Perception, 2006, 35(2): 267-280
Tempo is one factor frequently associated with the expressive nature of a piece of music. Composers often indicate the tempo of a piece through numerical markings (beats per minute) and subjective terms (adagio, allegro). Three studies assessed whether listeners were able to make consistent judgments about tempo that varied from piece to piece. Listeners heard short extracts of Scottish music played at a range of tempi and were asked to make a two-alternative forced choice of "too fast" or "too slow" for each extract. For each study, responses were plotted as the proportion of "too fast" responses as a function of tempo for each piece, and cumulative normal curves were fitted to each data set. The point where these curves cross 0.5 is the tempo at which the music sounds right to listeners, referred to as the optimal tempo. The results of each study show that listeners are capable of making consistent tempo judgments and that the optimal tempo varies across extracts. The results also revealed that rhythm plays a role, though not the only one, in tempo judgments.
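The analysis described in this abstract (a cumulative normal fitted to two-alternative forced-choice responses, with the 0.5 crossing taken as the optimal tempo) can be sketched as follows. This is a hypothetical illustration with invented response data, not the authors' analysis code.

```python
# Sketch: fit a cumulative normal to the proportion of "too fast"
# responses as a function of tempo; the tempo at which the fitted
# curve crosses 0.5 is the "optimal tempo". Data are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_normal(tempo, mu, sigma):
    """Predicted proportion of 'too fast' responses at a given tempo."""
    return norm.cdf(tempo, loc=mu, scale=sigma)

# Invented example data: tempi in beats per minute and the observed
# proportion of listeners judging each tempo "too fast".
tempi = np.array([60, 80, 100, 120, 140, 160])
p_too_fast = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.97])

(mu, sigma), _ = curve_fit(cum_normal, tempi, p_too_fast, p0=[110, 20])
# mu is where the fitted curve crosses 0.5, i.e., the optimal tempo.
print(f"optimal tempo ~ {mu:.1f} beats per minute")
```

With a separate fit per musical extract, comparing the resulting `mu` values shows how the optimal tempo varies from piece to piece.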

2.
In four experiments, listeners’ sensitivity to combinations of pitch and duration was investigated. Experiments 1–3 involved “textures” of notes, which were created by repeatedly sounding one of two notes (e.g., C4 quarter note; D4 eighth note), so that each note had an equal chance of occurring at any point within a texture. Experiment 1 showed that if a texture change was effected by introducing a pitch or duration that was not in the initial texture, the change was perceived by both attentive and distracted listeners. If a texture change was effected by combining the pitch of one note with the duration of the other note in the initial texture, and vice versa, it was perceived only if the listeners were attentive. Sensitivity to pitch/duration combinations was poorer when the pitch difference between component notes of textures was increased (Experiment 2), but it was better when the difference in duration between component notes was increased (Experiment 3). In Experiment 4, listeners’ sensitivity to combinations of pitch pattern and durational pattern in brief sequences was examined. Listeners were sensitive to the manner in which parameter patterns were combined when they were attentive, but not when they were distracted. The results are discussed in view of feature-integration theory and its application to music cognition.

3.
A melody’s identity is determined by relations between consecutive tones in terms of pitch and duration, whereas surface features (i.e., pitch level or key, tempo, and timbre) are irrelevant. Although surface features of highly familiar recordings are encoded into memory, little is known about listeners’ mental representations of melodies heard once or twice. It is also unknown whether musical pitch is represented additively or interactively with temporal information. In two experiments, listeners heard unfamiliar melodies twice in an initial exposure phase. In a subsequent test phase, they heard the same (old) melodies interspersed with new melodies. Some of the old melodies were shifted in key, tempo, or key and tempo. Listeners’ task was to rate how well they recognized each melody from the exposure phase while ignoring changes in key and tempo. Recognition ratings were higher for old melodies that stayed the same compared to those that were shifted in key or tempo, and detrimental effects of key and tempo changes were additive in between-subjects (Experiment 1) and within-subjects (Experiment 2) designs. The results confirm that surface features are remembered for melodies heard only twice. They also imply that key and tempo are processed and stored independently.

4.
We examined emotional responding to music after mood induction. On each trial, listeners heard a 30-s music excerpt and rated how much they liked it, whether it sounded happy or sad, and how familiar it was. When the excerpts sounded unambiguously happy or sad (Experiment 1), the typical preference for happy-sounding music was eliminated after inducing a sad mood. When the excerpts sounded ambiguous with respect to happiness and sadness (Experiment 2), listeners perceived more sadness after inducing a sad mood. Sad moods had no influence on familiarity ratings (Experiments 1 and 2). These findings imply that "misery loves company." Listeners in a sad mood fail to show the typical preference for happy-sounding music, and they perceive more sadness in music that is ambiguous with respect to mood.

5.
Linguistic background has been identified as important in the perception of pitch, particularly between tonal and nontonal languages. A link between native language and the perception of musical pitch has also been established. This pilot study examined the perception of pitch by listeners from tonal and nontonal linguistic cultures in which two different styles of music originate. Listeners were 10 individuals born in China, who ranged in age from 25 to 37 years and had spent an average of 30 months in the USA, and 10 individuals born on the Indian subcontinent, who ranged in age from 22 to 31 years and had spent an average of 13 months in the USA. Listeners from both groups participated in two conditions: one involved listening to a selection of music characteristic of the individual's culture (China, pentatonic scale; Indian subcontinent, microtones), and the other involved no music. All listeners in each condition performed two voice pitch-matching tasks. One task involved matching the lowest and highest pitch of tape-recorded voices to a note on an electronic keyboard; the other involved matching the voice pitch of tape-recorded orally read words to a note on the keyboard. There were no differences between the two linguistic groups. Methodological limitations preclude generalization but provide a basis for further research.

6.
Musically trained listeners compared a notated melody presented visually and a comparison melody presented auditorily, and judged whether they were exactly the same or not, with respect to relative pitch. Listeners who had absolute pitch showed the poorest performance for melodies transposed to different pitch levels from the notated melodies, whereas they exhibited the highest performance for untransposed melodies. By comparison, the performance of melody recognition by listeners who did not have absolute pitch was not influenced by the actual pitch level at which melodies were played. These results suggest that absolute-pitch listeners tend to rely on absolute pitch even in recognizing transposed melodies, for which the absolute-pitch strategy is not useful.

8.
Musically knowledgeable listeners heard auditory patterns based on sets of six (Study 1) or eight tones (Study 2). In the first study, listeners ordered events from patterns generated by hierarchical rule trees and possessing different pitch-space and time structures: one type (nondistance nested) was more likely to produce auditory streaming than the other (distance nested). In the second study, different listeners reconstructed pitch intervals contained in one of eight patterns. Patterns differed in (1) levels of pitch distance, (2) levels of pattern contour (two), and (3) rate (two). In both studies, fast patterns with many large pitch distances were more difficult to recollect. Listeners in the second study tended to “telescope” pitch distances. Most difficult were rapid sequences with large pitch intervals combined into a changing contour (nondistance nested); these patterns streamed. A third study replicated the effects of differences in pitch distance observed in Study 2. Results were interpreted in terms of a rhythmic theory of memory.

9.
Four experiments investigated the perception of tonal structure in polytonal music. The experiments used musical excerpts in which the upper stave of the music suggested a different key than the lower stave. In Experiment 1, listeners rated the goodness of fit of probe tones following an excerpt from Dubois's Circus. Results suggested that listeners were sensitive to two keys, and weighted them according to their perceived importance within the excerpt. Experiment 2 confirmed that music within each stave reliably conveyed key structure on its own. In Experiment 3, listeners rated probe tones following an excerpt from Milhaud's Sonata No. 1 for Piano, in which different keys were conveyed in widely separate pitch registers. Ratings were collected across three octaves. Listeners did not associate each key with a specific register. Rather, ratings for all three octave registers reflected only the key associated with the upper stave. Experiment 4 confirmed that the music within each stave reliably conveyed key structure on its own. It is suggested that when one key predominates in a polytonal context, other keys may not contribute to the overall perceived tonal structure. The influence of long-term knowledge and immediate context on the perception of tonal structure in polytonal music is discussed.

10.
Good pitch memory is widespread
Here we show that good pitch memory is widespread among adults with no musical training. We tested unselected college students on their memory for the pitch level of instrumental soundtracks from familiar television programs. Participants heard 5-s excerpts either at the original pitch level or shifted upward or downward by 1 or 2 semitones. They successfully identified the original pitch levels. Other participants who heard comparable excerpts from unfamiliar recordings could not do so. These findings reveal that ordinary listeners retain fine-grained information about pitch level over extended periods. Adults' reportedly poor memory for pitch is likely to be a by-product of their inability to name isolated pitches.

11.
Two experiments were conducted to examine the effect of absolute-pitch possession on relative-pitch processing. Listeners attempted to identify melodic intervals ranging from a semitone to an octave with different reference tones. Listeners with absolute pitch showed a decline in performance when the reference was an out-of-tune C, an out-of-tune E, or F#, relative to when the reference was C. In contrast, listeners who had no absolute pitch maintained relatively high performance in all reference conditions. These results suggest that absolute-pitch listeners are weak in relative-pitch processing and tend to rely on absolute pitch in relative-pitch tasks.

12.
Three experiments showed that dynamic frequency change influenced loudness. Listeners heard tones that had concurrent frequency and intensity change and tracked loudness while ignoring pitch. Dynamic frequency change significantly influenced loudness. A control experiment showed that the effect depended on dynamic change and was opposite that predicted by static equal-loudness contours. In a third experiment, listeners heard white-noise intensity change in one ear and harmonic frequency change in the other and tracked the loudness of the noise while ignoring the harmonic tone. Findings suggest that the dynamic interaction of pitch and loudness occurs centrally in the auditory system; is an analytic process; has evolved to take advantage of naturally occurring covariation of frequency and intensity; and reflects a shortcoming of traditional static models of loudness perception in a dynamic natural setting.

13.
Two experiments examined the effects of repetition on listeners' emotional response to music. Listeners heard recordings of orchestral music that contained a large section repeated twice. The music had a symmetric phrase structure (same-length phrases) in Experiment 1 and an asymmetric phrase structure (different-length phrases) in Experiment 2, hypothesized to alter the predictability of the musical repetition. Continuous measures of arousal and valence were compared across music containing identical repetition, variation (related structure), or contrast (unrelated structure). Listeners' emotional arousal ratings differed most for contrasting music, moderately for variations, and least for repeated musical segments. A computational model for the detection of repeated musical segments was applied to the listeners' emotional responses. In both experiments, the model detected the locations of phrase boundaries better from the emotional responses than from performed tempo or physical intensity. These findings indicate the importance of repetition in listeners' emotional response to music and in the perceptual segmentation of musical structure.

14.
The acquisition of the hierarchy of tonal stabilities in music is investigated in children of elementary school age. Listeners judge how good short tone sequences sound as melodies. The ratings show a pattern of increasing differentiation of the pitches in an octave range. The youngest listeners distinguish between scale and nonscale tones; older listeners distinguish between the tonic triad tones and other scale components. A group of adult listeners show octave equivalence and temporal asymmetries, with a preference for sequences ending on the more stable tones within the hierarchy. Pitch height effects do not interact with the age of the listener. These results are discussed in terms of the primacy of physical variables, novice-expert differences, and general cognitive principles governing the acquisition and development of internal representations of pitch relationships.

15.
How can we understand the uses of music in daily life? Music is a universal phenomenon but with significant interindividual and cultural variability. Listeners’ gender and cultural background may influence how and why music is used in daily life. This paper reports the first investigation of a holistic framework and a new measure of music functions (RESPECT-music) across genders and six diverse cultural samples (students from Germany, Kenya, Mexico, New Zealand, Philippines, and Turkey). Two dimensions underlie the mental representation of music functions. First, music can be used for contemplation or affective functions. Second, music can serve intrapersonal, social, and sociocultural functions. Results reveal that gender differences occur for affective functions, indicating that female listeners use music more for affective functions, i.e., emotional expression, dancing, and cultural identity. Country differences are moderate for social functions (values, social bonding, dancing) and strongest for sociocultural functions (cultural identity, family bonding, political attitudes). Cultural values, such as individualism–collectivism and secularism–traditionalism, can help explain cross-cultural differences in the uses of music. Listeners from more collectivistic cultures use music more frequently for expressing values and cultural identity. Listeners from more secular and individualistic cultures like to dance more. Listeners from more traditional cultures use music more for expressing values and cultural identity, and they bond more frequently with their families over music. The two dimensions of musical functions seem systematically underpinned by listeners’ gender and cultural background. We discuss the uses of music as behavioral expressions of affective and contemplative as well as personal, social, and sociocultural aspects in terms of affect proneness and cultural values.

16.
Due to extensive variability in the phonetic realizations of words, there may be few or no proximal spectro-temporal cues that identify a word’s onset or even its presence. Dilley and Pitt (2010) showed that the rate of context speech, distal from a to-be-recognized word, can have a sizeable effect on whether or not a word is perceived. This investigation considered whether there is a distinct role for distal rhythm in the disappearing word effect. Listeners heard sentences that had a grammatical interpretation with or without a critical function word (FW) and transcribed what they heard (e.g., are in Jill got quite mad when she heard there are birds can be removed and Jill got quite mad when she heard their birds is still grammatical). Consistent with a perceptual grouping hypothesis, participants were more likely to report critical FWs when distal rhythm (repeating ternary or binary pitch patterns) matched the rhythm in the FW-containing region than when it did not. Notably, effects of distal rhythm and distal rate were additive. Results demonstrate a novel effect of distal rhythm on the amount of lexical material listeners hear, highlighting the importance of distal timing information and providing new constraints for models of spoken word recognition.

17.
We examined a variety of real-time responses evoked by a single piece of music, the organ Duetto BWV 805 by J. S. Bach. The primary data came from a concurrent probe-tone method in which the probe tone is sounded continuously with the music. Listeners judged how well the probe tone fit with the music at each point in time. The process was repeated for all probe tones of the chromatic scale. A self-organizing map (SOM) (Kohonen, 1997, Self-Organizing Maps; Berlin: Springer) was used to represent the developing and changing sense of key reflected in these judgments. The SOM was trained on the probe-tone profiles for the 24 major and minor keys (Krumhansl and Kessler, 1982, Psychological Review, 89, 334-368). Projecting the concurrent probe-tone data onto the map showed changes both in the perceived keys and in their strengths. Two dynamic models of tonality induction were tested. Model 1 is based on pitch-class distributions. Model 2 is based on tone-transition distributions; it tested the idea that the order of tones might provide additional information about tonality. Both models contained dynamic components for characterizing pitch strength and creating pitch-memory representations. Both models produced results closely matching those of the concurrent probe-tone data. Finally, real-time judgments of tension were measured. Tension correlated with distance away from the predominant key in the direction of keys built on the dominant and supertonic tones, and also correlated with dissonance.
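A pitch-class-distribution model of the kind this abstract calls "Model 1" is commonly implemented in the spirit of the Krumhansl-Schmuckler key-finding algorithm: correlate a passage's pitch-class distribution with the Krumhansl-Kessler probe-tone profiles rotated to each of the 24 keys. The sketch below is a minimal static version (it omits the dynamic pitch-memory components the abstract describes), and the input distribution is invented for illustration.

```python
# Sketch of static key-finding from a pitch-class distribution:
# correlate the distribution with the 24 rotated probe-tone profiles
# and pick the best-matching key. Not the authors' model code.
import numpy as np

# Krumhansl & Kessler (1982) probe-tone profiles, indexed from the tonic.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(pc_dist):
    """Return (key_name, correlation) for the best of the 24 keys."""
    best = None
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            rotated = np.roll(profile, tonic)  # profile of key with this tonic
            r = np.corrcoef(pc_dist, rotated)[0, 1]
            if best is None or r > best[1]:
                best = (f"{NAMES[tonic]} {mode}", r)
    return best

# Invented input: total durations of each pitch class in a C-major-like passage.
dist = np.array([8, 0, 4, 0, 5, 3, 0, 6, 0, 4, 0, 2], dtype=float)
key, r = estimate_key(dist)
print(key, round(r, 2))
```

A dynamic variant, as in the study, would recompute the distribution over a sliding, decaying window so that the estimated key and its strength change through time.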

18.
Music presents information both sequentially, in the form of musical phrases, and simultaneously, in the form of chord structure. The ability to abstract musical structure presented sequentially and simultaneously was investigated using modified versions of the Bransford and Franks’ (1971) paradigm. Listeners heard subsets of musical ideas. The abstraction hypothesis predicted (1) false recognition of novel instances of the abstracted musical idea, (2) increasing confidence of “recognition” as recognition items approximate the complete musical idea, and (3) correct rejection of “noncases,” which deviate from the acquired musical structure. Experiment 1 investigated sequential abstraction by using four-phrase folk melodies as musical ideas. Predictions 1 and 3 were confirmed, but the false recognition rate decreased as the number of phrases increased. Listeners were sensitive to improper combinations of phrases and to novel melodies different from melodies presented during acquisition. Experiment 2 investigated simultaneous abstraction using four-voice Bach chorales as musical ideas. Listeners spontaneously integrated chorale subsets into holistic musical ideas. Musically trained listeners were better than naive listeners at identifying noncases.

19.
Previous studies have examined verbal rather than vocal aspects of irony. The present study considers how vocal features may cue listeners to one form of irony: sarcasm. Speakers were recorded reading sentences in three conditions (nonsarcasm, spontaneous sarcasm, posed sarcasm), with the resulting utterances filtered to remove verbal content. Listeners (n = 127) then rated these filtered utterances on amount of sarcasm. Results indicated that listeners were able to discriminate posed sarcasm from nonsarcasm but not spontaneous sarcasm from nonsarcasm. An analysis of the vocal features of the utterances as determined by perceptual coding indicated that a slower tempo, greater intensity, and a lower pitch level were significant indicators of sarcasm.

20.
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号