Similar Articles
1.
Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not demonstrate the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized "feeling of familiarity" than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information, and (3) the formation of an episodic memory for a melody depends on either the overall familiarity of the stimulus or the availability of a verbal label.

2.
Participants heard music snippets of varying melodic and instrumental familiarity paired with animal-name titles. They then recalled the target when given either the melody or the title as a cue, or they gave feeling-of-knowing (FOK) ratings for the name. In general, recall for titles was better than recall for melodies, and recall was enhanced with increasing melodic familiarity of both the cues and the targets. Accuracy of FOK ratings, but not magnitude, also increased with increasing familiarity. Although similar ratings were given after melody and title cues, accuracy was better with title cues. Finally, knowledge of the real titles of the familiar music enhanced recall but had, by and large, no effect on the FOK ratings.

3.
Pitch can be conceptualized as a bidimensional quantity, reflecting both the overall pitch level of a tone (tone height) and its position in the octave (tone chroma). Though such a conceptualization is well supported for the perception of a single tone, it has been argued that the dimension of tone chroma is irrelevant in melodic perception. In the current study, melodies were subjected to structural transformations designed to evaluate the effects of interval magnitude, contour, tone height, and tone chroma. In two transformations, the component tones of a melody were displaced by octave intervals, either preserving or violating the pattern of changes in pitch direction (melodic contour). Replicating previous work, perception of the melody was severely disrupted when contour was violated. In contrast, when contour was preserved, the melodies were identified as accurately as the untransformed melodies. In other transformations, a variety of forms of contour information were preserved while information about absolute pitch and interval magnitude was eliminated. The level of performance on all such transformations fell between the levels observed in the other two conditions. These results suggest that the bidimensional model of pitch applies to the recognition of melodies as well as single tones. Moreover, the results argue that contour, as well as interval magnitude, provides essential information for melodic perception.
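As a rough illustration, here is a minimal Python sketch of the octave-displacement idea described above: each tone keeps its chroma (pitch class) but may move to another octave, and the result either preserves or violates the original contour. The MIDI-number representation, the helper names, and the one-octave shift range are illustrative assumptions, not the stimuli used in the study.

```python
# A minimal sketch of octave displacement: every tone keeps its chroma but may
# shift up or down an octave; the result may preserve or violate the contour.
import random

def contour(melody):
    """Sign of each note-to-note pitch change: +1 up, -1 down, 0 repeat."""
    return [(b > a) - (b < a) for a, b in zip(melody, melody[1:])]

def octave_scramble(melody, preserve_contour=True, max_tries=1000):
    """Shift each tone by -1, 0, or +1 octaves; optionally keep the contour."""
    for _ in range(max_tries):
        candidate = [note + 12 * random.choice((-1, 0, 1)) for note in melody]
        if not preserve_contour or contour(candidate) == contour(melody):
            return candidate
    raise RuntimeError("no contour-preserving displacement found")

melody = [60, 62, 64, 62, 67]           # MIDI note numbers: C4 D4 E4 D4 G4
print(contour(melody))                   # [1, 1, -1, 1]
print(octave_scramble(melody))           # chroma kept, contour preserved
print(octave_scramble(melody, False))    # chroma kept, contour usually violated
```

Rejection sampling suffices here because the all-zero shift always preserves the contour, so a contour-preserving candidate is guaranteed to exist.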

4.
If the notes of two melodies whose pitch ranges do not overlap are interleaved in time so that successive tones come from different melodies, the resulting sequence of tones is perceptually divided into groups that correspond to the two melodies. Such “melodic fission” demonstrates perceptual grouping based on pitch alone, and has been used extensively in music. Experiment I showed that the identification of interleaved pairs of familiar melodies is possible if their pitch ranges do not overlap, but difficult otherwise. A short-term recognition-memory paradigm (Expt II) showed that interleaving a “background” melody with an unfamiliar melody interferes with same-different judgments regardless of the separation of their pitch ranges, but that range separation attenuates the interference effect. When pitch ranges overlap, listeners can overcome the interference effect and recognize a familiar target melody if the target is prespecified, thereby permitting them to search actively for it (Expt III). But familiarity or prespecification of the interleaved background melody appears not to reduce its interfering effects on same-different judgments concerning unfamiliar target melodies (Expt IV).
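A minimal sketch of the interleaving manipulation itself, assuming melodies are given as lists of MIDI note numbers; the representation and the `interleave` helper are illustrative, not the materials used in the experiments.

```python
# Interleave two melodies so that successive tones alternate between them.
from itertools import chain, zip_longest

def interleave(melody_a, melody_b):
    """Return a single tone sequence A1, B1, A2, B2, ... from two melodies."""
    pairs = zip_longest(melody_a, melody_b)
    return [note for note in chain.from_iterable(pairs) if note is not None]

# Non-overlapping pitch ranges: melody_a stays an octave above melody_b,
# the condition under which the two streams are heard as separate melodies.
melody_a = [72, 74, 76, 77, 79]   # C5 D5 E5 F5 G5
melody_b = [60, 62, 64, 65, 67]   # C4 D4 E4 F4 G4
print(interleave(melody_a, melody_b))
# [72, 60, 74, 62, 76, 64, 77, 65, 79, 67]
```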

5.
Six highly familiar melodies were submitted to three transformations: a reduction and two rhythmic group transformations. These three transformations offered the opportunity to compare the roles of various means of melody recognition: melodic contour, harmonic structure, and local surface cues. If melody recognition relies on melodic contour, an original melody would be easier to recognise after a rhythmic group transformation than after reduction, because the rhythmic group transformation, but not the reduction, preserves the melodic contour. If melody recognition depends on the harmonic structure, an original melody would be easier to recognise after reduction than after a rhythmic group transformation, because the reduction, but not the rhythmic group transformation, respects the underlying harmonic structure. The results of two experiments, one with children and one with adults, showed that recognition was better for the rhythmic group transformation, but only when local surface cues were preserved, a result predicted by neither the melodic contour hypothesis nor the harmonic structure hypothesis. The results support the cue abstraction hypothesis, which suggests that melody recognition relies on certain surface cues that are abstracted during listening and then memorised. Recognition performance and speed of recognition served as dependent variables.

6.
What is the involvement of what we know in what we perceive? In this article, the contribution of melodic schema-based processes to the perceptual organization of tone sequences is examined. Two unfamiliar six-tone melodies, one of which was interleaved with distractor tones, were presented successively to listeners who were required to decide whether the melodies were identical or different. In one condition, the comparison melody was presented after the mixed sequence: a target melody interleaved with distractor tones. In another condition, it was presented beforehand, so that the listeners had precise knowledge about the melody to be extracted from the mixture. In the latter condition, recognition performance was better and a bias toward same responses was reduced, as compared with the former condition. A third condition, in which the comparison melody presented beforehand was transposed up in frequency, revealed that whereas the performance improvement was explained in part by absolute pitch or frequency priming, relative pitch representation (interval and/or contour structure) may also have played a role. Differences in performance as a function of mean frequency separation between target and distractor sequences, when listeners did or did not have prior knowledge about the target melody, argue for a functional distinction between primitive and schema-based processes in auditory scene analysis.

7.
Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.

8.
Three experiments were conducted to study motor programs used by expert singers to produce short tonal melodies. Each experiment involved a response-priming procedure in which singers prepared to sing a primary melody but on 50% of trials had to switch and sing a different (secondary) melody instead. In Experiment 1, secondary melodies in the same key as the primary melody were easier to produce than secondary melodies in a different key. Experiment 2 showed that it was the initial note rather than key per se that affected production of secondary melodies. In Experiment 3, secondary melodies involving exact transpositions were easier to sing than secondary melodies with a different contour than the primary melody. Also, switches between the keys of C and G were easier than those between C and E. Taken together, these results suggest that the initial note of a melody may be the most important element in the motor program, that key is represented in a hierarchical form, and that melodic contour is represented as a series of exact semitone offsets.

9.
This study presents a probabilistic model of melody perception, which infers the key of a melody and also judges the probability of the melody itself. The model uses Bayesian reasoning: for any "surface" pattern and underlying "structure," we can infer the structure maximizing P(structure | surface) based on knowledge of P(surface, structure). The probability of the surface can then be calculated as ∑ P(surface, structure), summed over all structures. In this case, the surface is a pattern of notes; the structure is a key. A generative model is proposed, based on three principles: (a) melodies tend to remain within a narrow pitch range; (b) note-to-note intervals within a melody tend to be small; and (c) notes tend to conform to a distribution (or key profile) that depends on the key. The model is tested in three ways. First, it is tested on its ability to identify the keys of a set of folksong melodies. Second, it is tested on a melodic expectation task in which it must judge the probability of different notes occurring given a prior context; these judgments are compared with perception data from a melodic expectation experiment. Finally, the model is tested on its ability to detect incorrect notes in melodies by assigning them lower probabilities than the original versions.
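As a rough illustration of the Bayesian formulation, the Python sketch below implements only principle (c), the key-profile principle, with a uniform prior over the twelve major keys. The profile values, function names, and the restriction to major keys are illustrative assumptions; they are not the parameters or implementation reported in the article.

```python
# Minimal Bayesian key inference from a note pattern, using only the
# key-profile principle (c). Profile values are illustrative placeholders.
from math import exp, log

# Hypothetical major-key profile: P(pitch class | key), indexed by the interval
# in semitones between the pitch class and the tonic.
MAJOR_PROFILE = [0.18, 0.01, 0.12, 0.01, 0.14, 0.11,
                 0.02, 0.19, 0.02, 0.10, 0.02, 0.08]

def log_likelihood(pitch_classes, tonic):
    """log P(surface | key) for the major key with the given tonic (0-11)."""
    return sum(log(MAJOR_PROFILE[(pc - tonic) % 12]) for pc in pitch_classes)

def infer_key(pitch_classes):
    """Structure maximizing P(structure | surface), with a uniform key prior."""
    return max(range(12), key=lambda tonic: log_likelihood(pitch_classes, tonic))

def surface_probability(pitch_classes):
    """P(surface) = sum over keys of P(surface, key), uniform prior over keys."""
    return sum(exp(log_likelihood(pitch_classes, t)) / 12 for t in range(12))

melody = [0, 2, 4, 5, 7, 4, 0]        # a C-major-like fragment as pitch classes
print(infer_key(melody))               # 0, i.e. C, under these toy profiles
print(surface_probability(melody))     # probability of the note pattern itself
```

Under the same assumptions, the expectation test amounts to comparing surface_probability(context + [note]) across candidate continuations of a prior context.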

10.
Seven experiments explored the time course of recognition of brief novel melodies. In a continuous-running-memory task, subjects recognized melodic transpositions following delays up to 2.0 min. The delays were either empty or filled with other melodies. Test items included exact transpositions (T), same-contour lures (SC) with altered pitch intervals, and different-contour lures (DC). DCs differed from Ts in the pattern of ups and downs of pitch. With this design, we assessed subjects’ discrimination of detailed changes in pitch intervals (T/SC discrimination) as well as their discrimination of contour changes (T/DC). We used both artificial and “real” melodies. Artificial melodies differed in conformity to a musical key, being tonal or atonal. After empty delays, T/DC discrimination was superior to T/SC discrimination. Surprisingly, after filled delays, T/SC discrimination was superior to T/DC. When only filled delays were tested, T/SC discrimination did not decline over the longest delays. T/DC performance declined more than did T/SC performance across both empty and filled delays. Tonality was an important factor only for T/SC discrimination after filled delays. T/DC performance was better with rhythmically intact folk melodies than with artificial isochronous melodies. Although T/SC performance improved over filled delays, it did not overtake T/DC performance. These results suggest that (1) contour and pitch-interval information make different contributions to recognition, with contour dominating performance after brief empty delays and pitch intervals dominating after longer filled delays; (2) a coherent tonality facilitates the encoding of pitch-interval patterns of melodies; and (3) the rich melodic-rhythmic contours of real melodies facilitate T/DC discrimination. These results are discussed in terms of automatic and controlled processing of melodic information.
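A minimal sketch of the three test-item types, assuming melodies are lists of MIDI note numbers. The particular way these lures alter intervals or contour is an illustrative choice, not the construction procedure used for the actual stimuli.

```python
# Build T, SC, and DC test items from a study melody given as MIDI numbers.
def transpose(melody, semitones):
    """Exact transposition (T): every note shifted by the same interval."""
    return [note + semitones for note in melody]

def same_contour_lure(melody, semitones):
    """Same-contour lure (SC): ups and downs preserved, interval sizes changed."""
    lure = [melody[0] + semitones]
    for a, b in zip(melody, melody[1:]):
        step = b - a
        # Widen each interval by one semitone without changing its direction.
        step += 1 if step > 0 else -1 if step < 0 else 0
        lure.append(lure[-1] + step)
    return lure

def different_contour_lure(melody, semitones):
    """Different-contour lure (DC): at least one pitch direction reversed."""
    lure = transpose(melody, semitones)
    lure[1] = lure[0] - (melody[1] - melody[0])  # invert the first interval
    return lure                                   # (assumes no initial repeat)

melody = [60, 64, 62, 67, 65]
print(transpose(melody, 5))               # T:  same intervals, new pitch level
print(same_contour_lure(melody, 5))       # SC: same ups/downs, wider intervals
print(different_contour_lure(melody, 5))  # DC: first interval now goes down
```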

11.
In three experiments, the effects of exposure to melodies on their subsequent liking and recognition were explored. In each experiment, the subjects first listened to a set of familiar and unfamiliar melodies in a study phase. In the subsequent test phase, the melodies were repeated, along with a set of distractors matched in familiarity. Half the subjects were required to rate their liking of each melody, and half had to identify the melodies they had heard earlier in the study phase. Repetition of the studied melodies was found to increase liking of the unfamiliar melodies in the affect task and to most benefit detection of the familiar melodies in the recognition task (Experiments 1, 2, and 3). These memory effects faded over different study-test delays in the affect and recognition tasks, with the latter showing the more persistent effects (Experiment 2). Both study-to-test changes in melody timbre and manipulation of study tasks had a marked impact on recognition and little influence on liking judgments (Experiment 3). Thus, all manipulated variables dissociated the memory effects in the two tasks. The results are consistent with the view that memory effects in the affect and recognition tasks reflect implicit and explicit forms of memory, respectively. Some of the results are, however, at variance with the literature on implicit and explicit memory in the auditory domain. Attribution of these differences to the use of musical material is discussed.

12.
Pitch perception is fundamental to melody in music and prosody in speech. Unlike many animals, the vast majority of human adults store melodic information primarily in terms of relative, not absolute, pitch, and readily recognize a melody whether rendered in a high or a low pitch range. We show that at 6 months infants are also primarily relative-pitch processors. Infants familiarized with a melody for 7 days preferred, on the eighth day, to listen to a novel melody rather than the familiarized one, regardless of whether the melodies at test were presented at the same pitch as during familiarization or transposed up or down by a perfect fifth (7/12 of an octave) or a tritone (1/2 octave). On the other hand, infants showed no preference for a transposed over an original-pitch version of the familiarized melody, indicating either that they did not remember the absolute pitch or that it was not as salient to them as the relative pitch.

13.
We investigated the effect of level-of-processing manipulations on "remember" and "know" responses in episodic melody recognition (Experiments 1 and 2) and how this effect is modulated by item familiarity (Experiment 2). In Experiment 1, participants performed 2 conceptual and 2 perceptual orienting tasks while listening to familiar melodies: judging the mood, continuing the tune, tracing the pitch contour, and counting long notes. The conceptual mood task led to higher d' rates for "remember" but not "know" responses. In Experiment 2, participants either judged the mood or counted long notes of tunes with high and low familiarity. A level-of-processing effect emerged again in participants' "remember" d' rates regardless of melody familiarity. Results are discussed within the distinctive processing framework.

14.
Indexical effects refer to the influence of surface variability in the to-be-remembered items, such as different voices speaking the same words or different timbres (musical instruments) playing the same melodies, on recognition memory performance. The nature of timbre effects in melody recognition was investigated in two experiments. Experiment 1 showed that melodies that remained in the same timbre from study to test were discriminated better than melodies presented at test in a previously studied but different timbre, or in an unstudied timbre. Timbre effects are attributed solely to instance-specific matching rather than to timbre-specific familiarity. In Experiment 2, when the melodies were played at test in a previously unstudied timbre that was similar to the original timbre, performance was comparable to the condition in which the exact same timbre was repeated at test. The use of a similar timbre at test enabled listeners to discriminate old from new melodies reliably. Overall, our data suggest that timbre-specific information is encoded and stored in long-term memory. Analogous indexical effects arising from timbre (nonmusical) and voice (nonlexical) attributes in music and speech processing, respectively, are implied and discussed.

15.
Children's perception of scale and contour in melodies was investigated in five studies. Experimental tasks included judging transposed renditions of melodies (Studies 1 and 3), discriminating between transposed renditions of a melody (Study 2), judging contour-preserving transformations of melodies (Study 4), and judging the similarity to a familiar target melody of transformations preserving rhythm or rhythm and contour (Study 5). The first and second studies showed that young children detect key transposition changes even in familiar melodies, and that they perceive similarity over key transpositions even in unfamiliar melodies. Young children are also sensitive to melodic contour over transformations that preserve it (Study 5), yet they distinguish spontaneously between melodies with the same contour and different intervals (Study 4). The key distance effect reported in the literature did not occur in the tasks of this investigation (Studies 1 and 3), and it may be apparent only for melodies shorter or more impoverished than those used here.

16.
We explored the differences between metamemory judgments for the titles and melodies of instrumental music and those for songs with lyrics. Participants were given melody or title cues and asked to provide the corresponding titles or melodies, or feeling-of-knowing (FOK) ratings. FOK ratings were higher but less accurate for titles with melody cues than vice versa, but only in instrumental music, replicating previous findings. In a series of seven experiments, we ruled out style, instrumentation, and strategy differences as explanations for this asymmetry. A mediating role of lyrics between the title and the melody in songs was also ruled out. What emerged as the main explanation was the degree of familiarity with the musical pieces, which was manipulated either episodically or semantically, and within this context, lyrics appeared to serve as an additional source of familiarity. Results are discussed within the framework of the Interactive Theory of how FOK judgments are made.

17.
Rhythm (a pattern of onset times and duration of sounds) and melody (a pattern of sound pitches) were studied in 22 children and adolescents several years after temporal lobectomy for intractable epilepsy. Left and right lobectomy groups discriminated rhythms equally well, but the right lobectomy group was poorer at discriminating melodies. Children and adolescents with right lobectomy, but not those with left temporal lobectomy, had higher melody scores with increasing age. Rhythm but not melody was related to memory for the right lobectomy group. In neither group was melody related to age at onset of non-febrile seizures, time from surgery to music tests, or the linear amount of temporal lobe resection. Pitch and melodic contour show different patterns of lateralization after temporal lobectomy in childhood or adolescence.

18.
The perceptual restoration of musical sounds was investigated in 5 experiments with Samuel's (1981a) discrimination methodology. Restoration in familiar melodies was compared to phonemic restoration in Experiment 1. In the remaining experiments, we examined the effect of expectations (generated by familiarity, predictability, and musical schemata) on musical restoration. We investigated restoration in melodies by comparing familiar and unfamiliar melodies (Experiment 2), as well as unfamiliar melodies varying in tonal and rhythmic predictability (Experiment 3). Expectations based on both familiarity and predictability were found to reduce restoration at the melodic level. Restoration at the submelodic level was investigated with scales and chords in Experiments 4 and 5. At this level, key-based expectations were found to increase restoration. Implications for music perception, as well as similarities between restoration in music and speech, are discussed.

19.
Musically trained listeners compared a notated melody presented visually with a comparison melody presented auditorily and judged whether the two were exactly the same with respect to relative pitch. Listeners who had absolute pitch showed the poorest performance for melodies transposed to pitch levels different from the notated melodies, whereas they exhibited the highest performance for untransposed melodies. By comparison, the melody recognition performance of listeners who did not have absolute pitch was not influenced by the actual pitch level at which the melodies were played. These results suggest that absolute-pitch listeners tend to rely on absolute pitch even when recognizing transposed melodies, for which the absolute-pitch strategy is not useful.
