Similar Articles
20 similar articles found (search time: 31 ms)
1.
A melody’s identity is determined by relations between consecutive tones in terms of pitch and duration, whereas surface features (i.e., pitch level or key, tempo, and timbre) are irrelevant. Although surface features of highly familiar recordings are encoded into memory, little is known about listeners’ mental representations of melodies heard once or twice. It is also unknown whether musical pitch is represented additively or interactively with temporal information. In two experiments, listeners heard unfamiliar melodies twice in an initial exposure phase. In a subsequent test phase, they heard the same (old) melodies interspersed with new melodies. Some of the old melodies were shifted in key, tempo, or both key and tempo. Listeners’ task was to rate how well they recognized each melody from the exposure phase while ignoring changes in key and tempo. Recognition ratings were higher for old melodies that stayed the same than for those that were shifted in key or tempo, and the detrimental effects of key and tempo changes were additive in both between-subjects (Experiment 1) and within-subjects (Experiment 2) designs. The results confirm that surface features are remembered for melodies heard only twice. They also imply that key and tempo are processed and stored independently.
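The additivity claim can be made concrete with a toy calculation: under an additive model, the recognition decrement for a combined key-plus-tempo shift equals the sum of the two single-shift decrements. The rating values below are hypothetical and only the additive structure, not the numbers, comes from the abstract:

```python
# Hypothetical mean recognition ratings on a 1-7 scale (illustrative values).
same = 5.2         # old melody, same key and tempo
key_shift = 4.6    # old melody, key shifted
tempo_shift = 4.8  # old melody, tempo shifted

key_cost = same - key_shift
tempo_cost = same - tempo_shift

# Additivity predicts the combined shift costs the sum of the single-shift costs.
both_shift_predicted = same - (key_cost + tempo_cost)
print(round(both_shift_predicted, 2))  # 4.2 under the additive model
```

An interactive representation would instead predict a combined-shift rating reliably above or below this additive prediction.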

2.
Pitch perception is fundamental to melody in music and prosody in speech. Unlike many animals, the vast majority of human adults store melodic information primarily in terms of relative not absolute pitch, and readily recognize a melody whether rendered in a high or a low pitch range. We show that at 6 months infants are also primarily relative pitch processors. Infants familiarized with a melody for 7 days preferred, on the eighth day, to listen to a novel melody in comparison to the familiarized one, regardless of whether the melodies at test were presented at the same pitch as during familiarization or transposed up or down by a perfect fifth (7/12th of an octave) or a tritone (1/2 octave). On the other hand, infants showed no preference for a transposed over original-pitch version of the familiarized melody, indicating that either they did not remember the absolute pitch, or it was not as salient to them as the relative pitch.
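The transposition sizes above map onto equal-tempered frequency ratios: each semitone multiplies frequency by 2^(1/12), so a perfect fifth (7 semitones) is 2^(7/12) ≈ 1.498 and a tritone (6 semitones, half an octave) is exactly √2. A minimal sketch; the 440 Hz reference tone is illustrative, not from the study:

```python
def transpose(freq_hz, semitones):
    """Shift a frequency up (positive) or down (negative) by equal-tempered semitones."""
    return freq_hz * 2 ** (semitones / 12)

a4 = 440.0
print(round(transpose(a4, 7), 2))   # perfect fifth up: 659.26 Hz
print(round(transpose(a4, 6), 2))   # tritone up: 622.25 Hz
print(transpose(a4, 12))            # full octave up: 880.0 Hz
```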

3.
Various surface features (timbre, tempo, and pitch) influence melody recognition memory, but the effects of articulation format, if any, were previously unknown; the present study examined them for the first time. In Experiment 1, melodies that remained in the same articulation format, or appeared in a different but similar format, from study to test were recognized better than melodies presented in a distinct format at test. A similar articulation format was sufficient to induce matching processes that enhance recognition. Experiment 2 revealed that melodies rated as perceptually dissimilar on the basis of the location of the articulation mismatch did not impair recognition performance, suggesting an important boundary condition for articulation format effects on recognition memory: the matching of the memory trace and recognition probe may depend more on the overall proportion, rather than the temporal location, of the mismatch. The present findings are discussed in terms of a global matching advantage hypothesis.
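One way to operationalize this boundary condition, that matching tracks the overall proportion of articulation mismatch rather than its temporal location, is to score mismatch position by position. This is a sketch of our own, not the authors' measure, and the 'L'/'S' encoding for legato versus staccato notes is an assumption:

```python
def mismatch_proportion(study, test):
    """Fraction of note positions whose articulation marks differ between study and test versions."""
    if len(study) != len(test):
        raise ValueError("melodies must have the same number of notes")
    diffs = sum(a != b for a, b in zip(study, test))
    return diffs / len(study)

# Equal overall mismatch (2 of 8 notes) at different temporal locations:
print(mismatch_proportion("LLLLLLSS", "LLLLLLLL"))  # mismatch at the end: 0.25
print(mismatch_proportion("SSLLLLLL", "LLLLLLLL"))  # mismatch at the start: 0.25
```

Under the global matching account, the two test melodies above should impair recognition equally, because the proportion, not the location, of mismatch drives the trace-probe match.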

4.
Indexical effects refer to the influence of surface variability of the to-be-remembered items, such as different voices speaking the same words or different timbres (musical instruments) playing the same melodies, on recognition memory performance. The nature of timbre effects in melody recognition was investigated in two experiments. Experiment 1 showed that melodies that remained in the same timbre from study to test were discriminated better than melodies presented at test in a previously studied but different timbre, or in an unstudied timbre. Timbre effects are attributed solely to instance-specific matching, rather than timbre-specific familiarity. In Experiment 2, when a previously unstudied timbre that was similar to the original timbre played the melodies at test, performance was comparable to the condition in which the exact same timbre was repeated at test. The use of a similar timbre at test enabled listeners to discriminate old from new melodies reliably. Overall, our data suggest that timbre-specific information is encoded and stored in long-term memory. Analogous indexical effects arising from timbre (nonmusical) and voice (nonlexical) attributes in music and speech processing, respectively, are implied and discussed.

5.
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.

6.
In three experiments, the effects of exposure to melodies on their subsequent liking and recognition were explored. In each experiment, the subjects first listened to a set of familiar and unfamiliar melodies in a study phase. In the subsequent test phase, the melodies were repeated, along with a set of distractors matched in familiarity. Half the subjects were required to rate their liking of each melody, and half had to identify the melodies they had heard earlier in the study phase. Repetition of the studied melodies was found to increase liking of the unfamiliar melodies in the affect task and to most benefit detection of the familiar melodies in the recognition task (Experiments 1, 2, and 3). These memory effects faded at different study-to-test delays in the affect and recognition tasks, with the latter showing the more persistent effects (Experiment 2). Both study-to-test changes in melody timbre and manipulation of study tasks had a marked impact on recognition but little influence on liking judgments (Experiment 3). Thus, all manipulated variables dissociated the memory effects in the two tasks. The results are consistent with the view that memory effects in the affect and recognition tasks reflect implicit and explicit forms of memory, respectively. Some of the results are, however, at variance with the literature on implicit and explicit memory in the auditory domain. The attribution of these differences to the use of musical material is discussed.

7.
This paper examines infants’ ability to perceive various aspects of musical material that are significant in music in general and in Western European music in particular: contour, intervals, exact pitches, diatonic structure, and rhythm. For the most part, infants focus on relational aspects of melodies, synthesizing global representations from local details. They encode the contour of a melody across variations in exact pitches and intervals. They extract information about pitch direction from the smallest musically relevant pitch change in Western music, the semitone. Under certain conditions, infants detect interval changes in the context of transposed sequences, their performance showing enhancement for sequences that conform to Western musical structure. Infants have difficulty retaining exact pitches except for sets of pitches that embody important musical relations. In the temporal domain, they group the elements of auditory sequences on the basis of similarity and they extract the temporal structure of a melody across variations in tempo.

8.
We report evidence that long-term memory retains absolute (accurate) features of perceptual events. Specifically, we show that memory for music seems to preserve the absolute tempo of the musical performance. In Experiment 1, 46 subjects sang two different popular songs from memory, and their tempos were compared with recorded versions of the songs. Seventy-two percent of the productions on two consecutive trials came within 8% of the actual tempo, demonstrating accuracy near the perceptual threshold (just-noticeable difference, JND) for tempo. In Experiment 2, a control experiment, we found that folk songs lacking a tempo standard generally show large variability in tempo; this counters the argument that memory for the tempo of remembered songs is driven by articulatory constraints. The relevance of the present findings to theories of perceptual memory and memory for music is discussed.
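The 8% criterion reduces to a simple relative-deviation check against the recorded reference tempo. The tempo values below are hypothetical; only the tolerance figure comes from the abstract:

```python
def within_tolerance(produced_bpm, reference_bpm, tolerance=0.08):
    """True if the produced tempo deviates from the reference by no more than the given proportion."""
    return abs(produced_bpm - reference_bpm) / reference_bpm <= tolerance

print(within_tolerance(100.0, 96.0))  # ~4.2% deviation -> True
print(within_tolerance(110.0, 96.0))  # ~14.6% deviation -> False
```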

9.
Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor, and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate, and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not show the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized "feeling of familiarity" than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information, and (3) the formation of an episodic memory for a melody depends on either the overall familiarity of the stimulus or the availability of a verbal label.

10.
Infants 7 to 8.5 months of age were tested for their discrimination of timbre or sound quality differences in the context of variable exemplars. They were familiarized with a set of complex tones with specified spectral structure; members of the set varied in fundamental frequency, intensity, or duration. Infants were then tested for their detection of tones that contrasted in spectral structure but were similar in other respects. They successfully differentiated the two spectral structures in the context of these variations, indicating that they can classify tonal stimuli on the basis of timbre. When the stimuli were organized into arbitrary categories, infants were unable to differentiate these categories, indicating that their performance with nonarbitrary categories was not attributable to memorization of the familiarized set.

11.
Unlike the visual stimuli used in most object identification experiments, melodies are organized temporally rather than spatially. Therefore, they may be particularly sensitive to manipulations of the order in which information is revealed. Two experiments examined whether the initial elements of a melody are differentially important for identification. Initial exposures to impoverished versions of a melody significantly decreased subsequent identification, especially when the early exposures did not include the initial notes of the melody. Analyses of the initial notes indicated that they are differentially important for melody identification because they help the listener detect the overall structure of the melody. Confusion errors tended to be songs that either were drawn from the same genre or shared similar phrasing. These data indicate that conceptual processing influences melody identification, that phrase-level information is used to organize melodies in semantic memory, and that phrase-level information is required to effectively search semantic memory.

12.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract.

13.
Musically trained listeners compared a notated melody presented visually and a comparison melody presented auditorily, and judged whether they were exactly the same or not, with respect to relative pitch. Listeners who had absolute pitch showed the poorest performance for melodies transposed to different pitch levels from the notated melodies, whereas they exhibited the highest performance for untransposed melodies. By comparison, the performance of melody recognition by listeners who did not have absolute pitch was not influenced by the actual pitch level at which melodies were played. These results suggest that absolute-pitch listeners tend to rely on absolute pitch even in recognizing transposed melodies, for which the absolute-pitch strategy is not useful.

14.
Two experiments examined whether the memory representation for songs consists of independent or integrated components (melody and text). Subjects heard a serial presentation of excerpts from largely unfamiliar folksongs, followed by a recognition test. The test required subjects to recognize songs, melodies, or texts and consisted of five types of items: (a) exact songs heard in the presentation; (b) new songs; (c) old tunes with new words; (d) new tunes with old words; and (e) old tunes with old words of a different song from the same presentation (‘mismatch songs’). Experiment 1 supported the integration hypothesis: Subjects' recognition of components was higher in exact songs (a) than in songs with familiar but mismatched components (e). Melody recognition, in particular, was near chance unless the original words were present. Experiment 2 showed that this integration of melody and text occurred also across different performance renditions of a song and that it could not be eliminated by voluntary attention to the melody.

16.
Recognition memory for previously novel melodies was tested in three experiments in which subjects used remember and know responses to report experiences of recollection, or of familiarity in the absence of recollection, for each melody they recognized. Some of the melodies were taken from Polish folk songs and presented vocally, but without the words. Others were taken from obscure pieces of classical music, presented as single-line melodies. Prior to the test, the melodies were repeated for varying numbers of study trials. Repetition of the Polish melodies increased both remember and know responses, while repetition of classical melodies increased remember but not know responses. When subjects were instructed to report guesses, guess responses were inversely related to remember and know responses, and there were more guesses to lures than to targets. These findings establish that remembering and knowing are fully independent functionally and, by the same token, provide further evidence against the idea that response exclusivity causes increases in remembering to force decreases in knowing. The findings also suggest that simultaneous increases in remembering and knowing occurred because the Polish melodies came from a genre with which the subjects had relatively little previous experience.

17.
We investigated the effects of different encoding tasks, and of manipulations of two supposedly surface parameters of music, on implicit and explicit memory for tunes. In two experiments, participants were first asked either to categorize the instrument or to judge the familiarity of 40 unfamiliar short tunes. Subsequently, participants were asked to give explicit and implicit memory ratings for a list of 80 tunes, 40 of which had been heard previously. Half of the 40 previously heard tunes differed in timbre (Experiment 1) or tempo (Experiment 2) from the first exposure. A third experiment compared similarity ratings of the tunes that varied in timbre or tempo. Analysis of variance (ANOVA) results suggest, first, that the encoding task made no difference for either memory mode. Second, timbre and tempo changes both impaired explicit memory, whereas a tempo change additionally worsened implicit tune recognition. Results are discussed in the context of implicit memory for nonsemantic materials and possible differences between timbre and tempo in musical representations.

18.
How do perceivers apply knowledge to instances they have never experienced before? On one hand, listeners might use idealized representations that do not contain specific details. On the other, they might recognize and process information based on more detailed memory representations. The current study examined the latter possibility with respect to musical meter perception, previously thought to be computed from highly idealized (isochronous) internal representations. In six experiments, listeners heard sets of metrically ambiguous melodies. Each melody was played in a simultaneous musical context with unambiguous metrical cues (3/4 or 6/8). Cross-melody similarity was manipulated by pairing certain cues (timbre, i.e., musical instrument, and motif content, 2- to 6-note patterns) with each meter, or by distributing cues across meters. After multiple exposures, listeners heard each melody without context and judged metrical continuations (all experiments) or familiarity (Experiments 5-6). Responses were assessed for "metrical restoration": the tendency to make metrical judgments that fit the melody's previously heard metrical context. Cross-melody similarity affected the presence and degree of metrical restoration, and timbre affected familiarity. The results suggest that metrical processing may be based on fairly detailed representations rather than idealized isochronous pulses, and is somewhat dissociated from familiarity judgments. Implications for theories of meter perception are discussed.

19.
Quinn PC, Yahr J, Kuhn A, Slater AM, Pascalis O. Perception, 2002, 31(9): 1109-1121
Six experiments based on visual preference procedures were conducted to examine gender categorization of female versus male faces by infants aged 3 to 4 months. In experiment 1, infants familiarized with male faces preferred a female face over a novel male face, but infants familiarized with female faces divided their attention between a male face and a novel female face. Experiment 2 demonstrated that these asymmetrical categorization results were likely due to a spontaneous preference for females. Experiments 3 and 4 showed that the preference for females was based on processing of the internal facial features in their upright orientation, and not the result of external hair cues or higher-contrast internal facial features. While experiments 1 through 4 were conducted with infants reared with female primary caregivers, experiment 5 provided evidence that infants reared with male primary caregivers tend to show a spontaneous preference for males. Experiment 6 showed that infants reared with female primary caregivers displayed recognition memory for individual females, but not males. These results suggest that representation of information about human faces by young infants may be influenced by the gender of the primary caregiver.

20.
In two experiments, we investigated how auditory–motor learning influences performers’ memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory–motor (normal performance), and weakly coupled auditory–motor (performing along with auditory recordings). Pianists’ recognition of the learned melodies was better following auditory-only or auditory–motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory–motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory–motor learning. These findings suggest that motor learning can aid performers’ auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号