Similar articles
20 similar articles found (search time: 156 ms)
1.
Pitch perception is fundamental to melody in music and prosody in speech. Unlike many animals, the vast majority of human adults store melodic information primarily in terms of relative not absolute pitch, and readily recognize a melody whether rendered in a high or a low pitch range. We show that at 6 months infants are also primarily relative pitch processors. Infants familiarized with a melody for 7 days preferred, on the eighth day, to listen to a novel melody in comparison to the familiarized one, regardless of whether the melodies at test were presented at the same pitch as during familiarization or transposed up or down by a perfect fifth (7/12th of an octave) or a tritone (1/2 octave). On the other hand, infants showed no preference for a transposed over original-pitch version of the familiarized melody, indicating that either they did not remember the absolute pitch, or it was not as salient to them as the relative pitch.
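The transposition manipulation described above has a simple arithmetic form: shifting every note by a constant number of semitones changes the absolute pitches but leaves the interval pattern (relative pitch) intact. A minimal sketch, assuming pitches coded as MIDI note numbers (the melody values here are illustrative, not the study's stimuli):

```python
# Illustration only: relative pitch is the sequence of successive intervals,
# which is invariant under transposition (adding a constant to every note).

def intervals(melody):
    """Successive pitch intervals in semitones (the relative-pitch code)."""
    return [b - a for a, b in zip(melody, melody[1:])]

def transpose(melody, semitones):
    """Shift every note by the same number of semitones."""
    return [note + semitones for note in melody]

melody = [60, 62, 64, 65, 67]       # C4 D4 E4 F4 G4 (MIDI note numbers)
up_fifth = transpose(melody, 7)     # perfect fifth = 7 semitones = 7/12 octave
up_tritone = transpose(melody, 6)   # tritone = 6 semitones = 1/2 octave

# Absolute pitches differ, but the relative-pitch code is identical:
assert intervals(melody) == intervals(up_fifth) == intervals(up_tritone)
```

This is why a listener who stores only the interval sequence recognizes the melody equally well at any pitch level.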

2.
Musically trained listeners compared a notated melody presented visually and a comparison melody presented auditorily, and judged whether they were exactly the same or not, with respect to relative pitch. Listeners who had absolute pitch showed the poorest performance for melodies transposed to different pitch levels from the notated melodies, whereas they exhibited the highest performance for untransposed melodies. By comparison, the performance of melody recognition by listeners who did not have absolute pitch was not influenced by the actual pitch level at which melodies were played. These results suggest that absolute-pitch listeners tend to rely on absolute pitch even in recognizing transposed melodies, for which the absolute-pitch strategy is not useful.

3.
Absolute pitch is a special ability to name musical pitches. A review of the relationship between absolute pitch and music processing shows that absolute-pitch possessors have processing advantages for pitch, intervals, and melody, but are at a disadvantage in processing relative pitch. In addition, compared with non-possessors, absolute-pitch possessors show distinctive characteristics in both brain structure and function. Future research should further clarify how musical training affects music processing in absolute-pitch possessors.

5.
The hypothesis that melodies are recognized at moments when they exhibit a distinctive musical pattern was tested. In a melody recognition experiment, point-of-recognition (POR) data were gathered from 32 listeners (16 musicians and 16 nonmusicians) judging 120 melodies. A series of models of melody recognition were developed, resulting from a stepwise multiple regression of two classes of information relating to melodic familiarity and melodic distinctiveness. Melodic distinctiveness measures were assembled through statistical analyses of over 15,000 Western themes and melodies. A significant model, explaining 85% of the variance, entered measures primarily of timing distinctiveness and pitch distinctiveness, but excluding familiarity, as predictors of POR. Differences between nonmusician and musician models suggest a processing shift from momentary to accumulated information with increased exposure to music. Supplemental materials for this article may be downloaded from http://mc.psychonomic-journals.org/content/supplemental.

6.
The authors explore priming effects of pitch repetition in music in 3 experiments. Musically untrained participants heard a short melody and sang the last pitch of the melody as quickly as possible. Each experiment manipulated (a) whether or not the tone to be sung (target) was heard earlier in the melody (primed) and (b) the prime-target distance (measured in events). Experiment 1 used variable-length melodies, whereas Experiments 2 and 3 used fixed-length melodies. Experiment 3 changed the timbre of the target tone. In all experiments, fast-responding participants produced repeated tones faster than nonrepeated tones, and this repetition benefit decreased as prime-target distances increased. All participants produced expected tonic endings faster than less expected nontonic endings. Repetition and tonal priming effects are compared with harmonic priming effects in music and with repetition priming effects in language.

7.
If the notes of two melodies whose pitch ranges do not overlap are interleaved in time so that successive tones come from the different melodies, the resulting sequence of tones is perceptually divided into groups that correspond to the two melodies. Such “melodic fission” demonstrates perceptual grouping based on pitch alone, and has been used extensively in music. Experiment I showed that the identification of interleaved pairs of familiar melodies is possible if their pitch ranges do not overlap, but difficult otherwise. A short-term recognition-memory paradigm (Expt II) showed that interleaving a “background” melody with an unfamiliar melody interferes with same-different judgments regardless of the separation of their pitch ranges, but that range separation attenuates the interference effect. When pitch ranges overlap, listeners can overcome the interference effect and recognize a familiar target melody if the target is prespecified, thereby permitting them to search actively for it (Expt III). But familiarity or prespecification of the interleaved background melody appears not to reduce its interfering effects on same-different judgments concerning unfamiliar target melodies (Expt IV).

8.
Absolute pitch is an acute pitch-perception ability: people who possess it can name a heard pitch without reference to a standard tone (A4). This study examined the relationship between absolute pitch and musical-syntax processing by comparing participants with and without absolute pitch on their perception of the basic rules of musical syntax and on their ability to parse musical syntactic structure. The results showed that absolute-pitch participants perceived the basic rules of musical syntax better than controls, and this perceptual advantage extended to their parsing of musical syntactic structure. These findings indicate that absolute-pitch possessors can not only name pitches in isolation but also show an advantage in processing the pitch relations of tonal music.

9.
Pitch can be conceptualized as a bidimensional quantity, reflecting both the overall pitch level of a tone (tone height) and its position in the octave (tone chroma). Though such a conceptualization has been well supported for perception of a single tone, it has been argued that the dimension of tone chroma is irrelevant in melodic perception. In the current study, melodies were subjected to structural transformations designed to evaluate the effects of interval magnitude, contour, tone height, and tone chroma. In two transformations, the component tones of a melody were displaced by octave intervals, either preserving or violating the pattern of changes in pitch direction (melodic contour). Replicating previous work, when contour was violated perception of the melody was severely disrupted. In contrast, when contour was preserved the melodies were identified as accurately as the untransformed melodies. In other transformations, a variety of forms of contour information were preserved, while eliminating information for absolute pitch and interval magnitude. The level of performance on all such transformations fell between the levels observed in the other two conditions. These results suggest that the bidimensional model of pitch is applicable to recognition of melodies as well as single tones. Moreover, the results argue that contour, as well as interval magnitude, is providing essential information for melodic perception.
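The bidimensional pitch representation described above takes a simple arithmetic form when pitches are coded as MIDI note numbers (an assumption of this sketch, not the study's own notation): chroma is the position within the octave and height is the octave index, so an octave displacement changes height while leaving chroma fixed.

```python
# Illustration of tone height vs. tone chroma under MIDI note-number coding.

def chroma(note):
    """Position within the octave: 0 = C, 1 = C#, ..., 11 = B."""
    return note % 12

def height(note):
    """Octave index (tone height)."""
    return note // 12

# C4 (60) and C5 (72) share chroma but differ in height by one octave,
# which is exactly the octave-displacement manipulation in the study:
assert chroma(60) == chroma(72) == 0
assert height(72) == height(60) + 1
```

Displacing individual tones by octaves this way preserves chroma, while the melodic contour (the up/down pattern of successive notes) may be preserved or violated depending on which direction each tone is displaced.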

10.
What is the involvement of what we know in what we perceive? In this article, the contribution of melodic schema-based processes to the perceptual organization of tone sequences is examined. Two unfamiliar six-tone melodies, one of which was interleaved with distractor tones, were presented successively to listeners who were required to decide whether the melodies were identical or different. In one condition, the comparison melody was presented after the mixed sequence: a target melody interleaved with distractor tones. In another condition, it was presented beforehand, so that the listeners had precise knowledge about the melody to be extracted from the mixture. In the latter condition, recognition performance was better and a bias toward same responses was reduced, as compared with the former condition. A third condition, in which the comparison melody presented beforehand was transposed up in frequency, revealed that whereas the performance improvement was explained in part by absolute pitch or frequency priming, relative pitch representation (interval and/or contour structure) may also have played a role. Differences in performance as a function of mean frequency separation between target and distractor sequences, when listeners did or did not have prior knowledge about the target melody, argue for a functional distinction between primitive and schema-based processes in auditory scene analysis.

11.
Indexical effects refer to the influence of surface variability of the to-be-remembered items, such as different voices speaking the same words or different timbres (musical instruments) playing the same melodies, on recognition memory performance. The nature of timbre effects in melody recognition was investigated in two experiments. Experiment 1 showed that melodies that remained in the same timbre from study to test were discriminated better than melodies presented in a previously studied but different, or unstudied timbre at test. Timbre effects are attributed solely to instance-specific matching, rather than timbre-specific familiarity. In Experiment 2, when a previously unstudied timbre was similar to the original timbre and it played the melodies at test, performance was comparable to the condition when the exact same timbre was repeated at test. The use of a similar timbre at test enabled the listener to discriminate old from new melodies reliably. Overall, our data suggest that timbre-specific information is encoded and stored in long-term memory. Analogous indexical effects arising from timbre (nonmusical) and voice (nonlexical) attributes in music and speech processing respectively are implied and discussed.

12.
This paper examines infants’ ability to perceive various aspects of musical material that are significant in music in general and in Western European music in particular: contour, intervals, exact pitches, diatonic structure, and rhythm. For the most part, infants focus on relational aspects of melodies, synthesizing global representations from local details. They encode the contour of a melody across variations in exact pitches and intervals. They extract information about pitch direction from the smallest musically relevant pitch change in Western music, the semitone. Under certain conditions, infants detect interval changes in the context of transposed sequences, their performance showing enhancement for sequences that conform to Western musical structure. Infants have difficulty retaining exact pitches except for sets of pitches that embody important musical relations. In the temporal domain, they group the elements of auditory sequences on the basis of similarity and they extract the temporal structure of a melody across variations in tempo.

13.
The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.

14.
People easily recognize a familiar melody in a previously unheard key, but they also retain some key-specific information. Does the recognition of a transposed melody depend on either pitch distance or harmonic distance from the initially learned instances? Previous research has shown a stronger effect of pitch closeness than of harmonic similarity, but did not directly test for an additional effect of the latter variable. In the present experiment, we familiarized participants with a simple eight-note melody in two different keys (C and D) and then tested their ability to discriminate the target melody from foils in other keys. The transpositions included were to the keys of C# (close in pitch height, but harmonically distant), G (more distant in pitch, but harmonically close), and F# (more distant in pitch and harmonically distant). Across participants, the transpositions to F# and G were either higher or lower than the initially trained melodies, so that their average pitch distances from C and D were equated. A signal detection theory analysis confirmed that discriminability (d′) was better for targets and foils that were close in pitch distance to the studied exemplars. Harmonic similarity had no effect on discriminability, but it did affect response bias (c), in that harmonic similarity to the studied exemplars increased both hits and false alarms. Thus, both pitch distance and harmonic distance affect the recognition of transposed melodies, but with dissociable effects on discrimination and response bias.
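The signal detection quantities the abstract reports (discriminability d′ and criterion c) can be computed from hit and false-alarm counts under the standard equal-variance Gaussian model. A generic sketch with made-up counts, not the study's analysis code:

```python
# Equal-variance signal detection: d' = z(H) - z(F), c = -(z(H) + z(F)) / 2.
from statistics import NormalDist

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from raw response counts."""
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f), -(z(h) + z(f)) / 2
```

With these definitions, a manipulation that raises hits and false alarms together (as harmonic similarity did in the study) shifts the criterion c toward more liberal responding while leaving d′ largely unchanged, which is the dissociation the abstract describes.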

15.
Tonal structure is musical organization on the basis of pitch, in which pitches vary in importance and rate of occurrence according to their relationship to a tonal center. Experiment 1 evaluated the maximum key-profile correlation (MKC), a product of Krumhansl and Schmuckler’s key-finding algorithm (Krumhansl, 1990), as a measure of tonal structure. The MKC is the maximum correlation coefficient between the pitch class distribution in a musical sample and key profiles, which indicate the stability of pitches with respect to particular tonal centers. The MKC values of melodies correlated strongly with listeners’ ratings of tonal structure. To measure the influence of the temporal order of pitches on perceived tonal structure, three measures (fifth span, semitone span, and pitch contour) taken from previous studies of melody perception were also correlated with tonal structure ratings. None of the temporal measures correlated as strongly or as consistently with tonal structure ratings as did the MKC, nor did combining them with the MKC improve prediction of tonal structure ratings. In Experiment 2, the MKC did not correlate with recognition memory of melodies. However, melodies with very low MKC values were recognized less accurately than melodies with very high MKC values. Although it does not incorporate temporal, rhythmic, or harmonic factors that may influence perceived tonal structure, the MKC can be interpreted as a measure of tonal structure, at least for brief melodies.

16.
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.

17.
A global property (i.e., pitch set) of a melody appears to serve as a primary cue for key identification. Previous studies have proposed specific local properties in a melody (e.g., the augmented fourth, the perfect fifth, etc.) that may function as further cues. However, the role of the latter in key identification is controversial. The present study was designed to investigate what kinds of local properties, if any, function as reliable cues for key identification. Listeners were asked to identify keys for 450 melodies that consisted of the same pitch set, but which differed in sequential constraints. Using multiple discriminant analyses, we evaluated relative contributions of as many kinds of local properties as possible (e.g., single intervals, single pitch classes in each sequential position, etc.). The results showed that, except for the pitch class of the final tone, for which interpretation should be taken cautiously, none of the specific local properties examined contributed significantly to key identification. This finding suggests that, contrary to prior findings, key identification is derived from unidentified properties other than the specific local properties.

18.
Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.

19.
One critical step in the processing of complex auditory information (i.e., language and music) involves organizing such information into hierarchical units, such as phrases. In this study, musically trained and untrained listeners' recognition memory for short, naturalistic melodies varying in their phrase structure was tested. For musically trained subjects, memory for information preceding a phrase boundary was disrupted and memory for information subsequent to a phrase boundary was enhanced relative to memory in similar temporal locations for excerpts not containing a phrase boundary. Musically untrained listeners, in contrast, showed no such differences as a function of the phrasing of the melody. These findings conform with previous results in both psycholinguistics and musical cognition and suggest that the phrase serves as a functional unit in musical processing, guiding the parsing of musical sequences during perception, along with the structuring of memory for musical passages.

20.
A melody’s identity is determined by relations between consecutive tones in terms of pitch and duration, whereas surface features (i.e., pitch level or key, tempo, and timbre) are irrelevant. Although surface features of highly familiar recordings are encoded into memory, little is known about listeners’ mental representations of melodies heard once or twice. It is also unknown whether musical pitch is represented additively or interactively with temporal information. In two experiments, listeners heard unfamiliar melodies twice in an initial exposure phase. In a subsequent test phase, they heard the same (old) melodies interspersed with new melodies. Some of the old melodies were shifted in key, tempo, or key and tempo. Listeners’ task was to rate how well they recognized each melody from the exposure phase while ignoring changes in key and tempo. Recognition ratings were higher for old melodies that stayed the same compared to those that were shifted in key or tempo, and detrimental effects of key and tempo changes were additive in between-subjects (Experiment 1) and within-subjects (Experiment 2) designs. The results confirm that surface features are remembered for melodies heard only twice. They also imply that key and tempo are processed and stored independently.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号