Similar Articles
20 similar articles found (search time: 31 ms)
1.
Nonmusicians remember vocal melodies (i.e., sung to la la) better than instrumental melodies. If greater exposure to the voice contributes to those effects, then long-term experience with instrumental timbres should elicit instrument-specific advantages. Here we evaluate this hypothesis by comparing pianists with other musicians and nonmusicians. We also evaluate the possibility that absolute pitch (AP), which involves exceptional memory for isolated pitches, influences melodic memory. Participants heard 24 melodies played in four timbres (voice, piano, banjo, marimba) and were subsequently required to distinguish the melodies heard previously from 24 novel melodies presented in the same timbres. Musicians performed better than nonmusicians, but both groups showed a comparable memory advantage for vocal melodies. Moreover, pianists performed no better on melodies played on piano than on other instruments, and AP musicians performed no differently than non-AP musicians. The findings confirm the robust nature of the voice advantage and rule out explanations based on familiarity, practice, and motor representations.

2.
The hypothesis that melodies are recognized at moments when they exhibit a distinctive musical pattern was tested. In a melody recognition experiment, point-of-recognition (POR) data were gathered from 32 listeners (16 musicians and 16 nonmusicians) judging 120 melodies. A series of models of melody recognition were developed, resulting from a stepwise multiple regression of two classes of information relating to melodic familiarity and melodic distinctiveness. Melodic distinctiveness measures were assembled through statistical analyses of over 15,000 Western themes and melodies. A significant model, explaining 85% of the variance, entered measures primarily of timing distinctiveness and pitch distinctiveness, but excluding familiarity, as predictors of POR. Differences between nonmusician and musician models suggest a processing shift from momentary to accumulated information with increased exposure to music. Supplemental materials for this article may be downloaded from http://mc.psychonomic-journals.org/content/supplemental.
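The model-building procedure described in this abstract can be sketched as a greedy forward-stepwise regression. The code below is a minimal illustration, not the authors' actual analysis: the predictor names and the synthetic data are hypothetical stand-ins for the timing-distinctiveness, pitch-distinctiveness, and familiarity measures.

```python
import numpy as np

def forward_stepwise(X, y, names, r2_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2, stopping when no candidate adds at least r2_gain."""
    def r_squared(cols):
        # Ordinary least squares with an intercept column.
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    selected, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
    while remaining:
        score, c = max((r_squared(selected + [c]), c) for c in remaining)
        if score - best_r2 < r2_gain:
            break
        best_r2 = score
        selected.append(c)
        remaining.remove(c)
    return [names[c] for c in selected], best_r2

# Hypothetical predictors; only two actually drive the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(scale=0.1, size=200)
names = ["timing_distinctiveness", "familiarity",
         "pitch_distinctiveness", "tempo"]
chosen, r2_final = forward_stepwise(X, y, names)
```

On this synthetic data the procedure selects the two informative predictors and excludes the uninformative ones, mirroring the reported pattern in which distinctiveness measures entered the model while familiarity did not.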

3.
We investigated the effect of level-of-processing manipulations on "remember" and "know" responses in episodic melody recognition (Experiments 1 and 2) and how this effect is modulated by item familiarity (Experiment 2). In Experiment 1, participants performed 2 conceptual and 2 perceptual orienting tasks while listening to familiar melodies: judging the mood, continuing the tune, tracing the pitch contour, and counting long notes. The conceptual mood task led to higher d' rates for "remember" but not "know" responses. In Experiment 2, participants either judged the mood or counted long notes of tunes with high and low familiarity. A level-of-processing effect emerged again in participants' "remember" d' rates regardless of melody familiarity. Results are discussed within the distinctive processing framework.

4.
If the notes of two melodies whose pitch ranges do not overlap are interleaved in time so that successive tones come from the different melodies, the resulting sequence of tones is perceptually divided into groups that correspond to the two melodies. Such “melodic fission” demonstrates perceptual grouping based on pitch alone, and has been used extensively in music. Experiment I showed that the identification of interleaved pairs of familiar melodies is possible if their pitch ranges do not overlap, but difficult otherwise. A short-term recognition-memory paradigm (Expt II) showed that interleaving a “background” melody with an unfamiliar melody interferes with same-different judgments regardless of the separation of their pitch ranges, but that range separation attenuates the interference effect. When pitch ranges overlap, listeners can overcome the interference effect and recognize a familiar target melody if the target is prespecified, thereby permitting them to search actively for it (Expt III). But familiarity or prespecification of the interleaved background melody appears not to reduce its interfering effects on same-different judgments concerning unfamiliar target melodies (Expt IV).

5.
The skill of recognizing musical structures
In three experiments, musicians and nonmusicians were compared in their ability to discriminate musical chords. Pairs of chords sharing all notes in common or having different notes were played in succession. Some pairs of chords differed in timbre independent of their musical structures because they were played on different instruments. Musicians outperformed nonmusicians only in recognizing the same chord played on different instruments. Both groups could discriminate between instrument timbres, although musicians did slightly better than nonmusicians. In contrast, with chord structures not conforming to the rules of tonal harmony, musicians and nonmusicians performed equally poorly in recognizing identical chords played on different instruments. Signal detection analysis showed that musicians and nonmusicians set similar criteria for these judgments. Musicians' superiority reflects greater sensitivity to familiar diatonic chords. These results are taken as evidence that musicians develop perceptual and cognitive skills specific to the lawful musical structures encountered in their culture's music. Nonmusicians who lacked this knowledge based their judgments on the acoustical properties of the chords.
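Several abstracts in this list rely on signal detection measures such as sensitivity (d′) and the response criterion. As a minimal sketch (not code from any of these studies), the standard computation from raw hit and false-alarm counts looks like this; the log-linear correction and the example counts are illustrative assumptions.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from raw counts, with a
    log-linear correction to avoid infinite z-scores at 0% or 100%."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative counts: 40/50 old items recognized, 10/50 new items
# falsely "recognized".
d_prime, criterion = sdt_measures(40, 10, 10, 40)
```

Similar criteria across groups, as reported for musicians and nonmusicians above, would show up as comparable values of c even where d′ differs.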

6.
Two experiments were performed to examine musicians' and nonmusicians' electroencephalographic (EEG) responses to changes in major dimensions (tempo, melody, and key) of classical music. In Exp. 1, 12 nonmusicians' and 12 musicians' EEGs during melody and tempo changes in classical music showed more alpha desynchronization in the left hemisphere (F3) than in the right for changes in tempo. For melody, the nonmusicians showed more right-sided (F4) than left-sided activation, and musicians showed no left-right differences. In Exp. 2, 18 musicians' and 18 nonmusicians' EEG after a key change in classical music showed that distant key changes elicited more right frontal (F4) than left alpha desynchronization. Musicians showed more reaction to key changes than nonmusicians, and instructions to attend to key changes had no significant effect. Classical music, given its well-defined structure, offers a unique set of stimuli to study the brain. Results support the concept of hierarchical modularity in music processing that may be automatic.

7.
This is the first reported research that explores the feeling of knowing (FOK) for musical stimuli. Subjects attempted to recall melodies and titles of musical pieces, made FOK ratings when recall failed, and then had a recognition test. With instrumental music (Experiment 1), more titles were recalled when melodies were given as cues than vice versa. With songs whose lyrics were not presented (Experiment 2), however, more melodies were recalled than were titles. For nonrecalled items, although the overall levels of recognition did not differ, FOK ratings were higher for titles than for melodies in Experiment 1, and the opposite pattern occurred in Experiment 2. In both experiments, the FOK ratings predicted melody recognition more accurately than they did title recognition.

8.
This investigation explores the relationship between liking ratings and recognition performance for obscure classical and Russian music melodies. Past studies have explored if awareness of stimulus presentation affects the mere exposure effect (MEE) (Bornstein, 1989; Bornstein & D'Agostino, 1992). We investigate if the type of awareness (i.e., "remembering" an actual occurrence of hearing the melody vs. "knowing" that one is familiar with the melody in a more general, less contextualised sense; Tulving, 1985) affects the MEE. In Experiment 1, we administered the liking test and the recognition test within the same test block and found that liking ratings for "remembered" melodies were on average higher than those for melodies that were merely "known". Although the recognition data replicate the findings of Gardiner, Kaminska, Dixon, and Java (1996) whereby remember and know responses to a given stimulus react differently to repetition from one to three trials, liking data did not vary with the degree of exposure. In addition, our adoption of the recognition category "guess" resulted in a pattern of results that is not only different from previous studies but illustrates the importance of judged, instead of true, old/new status in determining liking. The basic findings of Experiment 1 were replicated in Experiment 2 in which the liking test was conducted prior to the recognition test. Implications of these findings for theories of MEE are discussed.

9.
Two experiments demonstrated the way in which musicians and nonmusicians process realistic music encountered for the first time. A set of tunes whose members were related to each other by a number of specific musical relationships was constructed. In Experiment 1, subjects gave similarity judgments of all pairs of tunes, which were analyzed by the ADDTREE clustering program. Musicians and nonmusicians gave essentially equivalent results: Tunes with different rhythms were rated as being very dissimilar, whereas tunes identical except for being in a major versus a minor mode were rated as being highly similar. In Experiment 2, subjects learned to identify the tunes, and their errors formed a confusion matrix. The matrix was submitted to a clustering analysis. Results from the two experiments corresponded better for the nonmusicians than for the musicians. Musicians presumably exceed nonmusicians in the ability to categorize music in multiple ways, but even nonmusicians extract considerable information from newly heard music.

10.
Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not demonstrate the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized "feeling of familiarity" than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information, and (3) the formation of an episodic memory for a melody depends either on the overall familiarity of the stimulus or the availability of a verbal label.

11.
In three experiments, the effects of exposure to melodies on their subsequent liking and recognition were explored. In each experiment, the subjects first listened to a set of familiar and unfamiliar melodies in a study phase. In the subsequent test phase, the melodies were repeated, along with a set of distractors matched in familiarity. Half the subjects were required to rate their liking of each melody, and half had to identify the melodies they had heard earlier in the study phase. Repetition of the studied melodies was found to increase liking of the unfamiliar melodies in the affect task and to most benefit detection of the familiar melodies in the recognition task (Experiments 1, 2, and 3). These memory effects were found to fade at different delays between study and test in the affect and recognition tasks, with the recognition task showing the most persistent effects (Experiment 2). Both study-to-test changes in melody timbre and manipulation of study tasks had a marked impact on recognition but little influence on liking judgments (Experiment 3). Thus, all the manipulated variables were found to dissociate the memory effects in the two tasks. The results are consistent with the view that memory effects in the affect and recognition tasks pertain to the implicit and explicit forms of memory, respectively. Some of the results are, however, at variance with the literature on implicit and explicit memory in the auditory domain. Attribution of these differences to the use of musical material is discussed.

12.
Current research has suggested that musical stimuli are processed in the right hemisphere except in musicians, in whom there is an increased involvement of the left hemisphere. The present study hypothesized that the more musical training persons receive, the more they will rely on an analytic/left-hemispheric processing strategy. The subjects were 10 faculty and 10 student musicians, and 10 faculty and 10 student nonmusicians. All subjects listened to a series of melodies (some recurring and some not) and excerpts (some actual and some not) in one ear and, after a rest, to a different series of melodies in the other ear. The task was to identify recurring vs. nonrecurring melodies and actual vs. nonactual excerpts. For student musicians, there was a right-ear/left-hemispheric advantage for melody recognition, while for student nonmusicians, the situation was the reverse. Neither faculty group showed any ear preference. There were no significant differences for excerpt recognition. Two possible explanations of the faculty performance were discussed in terms of physical maturation and a functionally more integrated hemispheric approach to the task.

13.
The present study reexamines the hypothesis that there exist emotional attributions specific to simple musical elements. In Experiment 1, groups of participants with varying musical expertise rated the emotional meaning of four natural intervals heard as two harmonic sine waves. In Experiment 2, the higher tone of each interval was raised an octave above that used in Experiment 1, while the lower tone was held constant. Attributions for each interval were positively correlated from one experimental session to the next, even though the intervals differed in terms of their component pitches. Musicians gave the most reliable attributions of meaning. In a third experiment, participants rated the emotional meaning of various unfamiliar ethnic melodies using expressions that described the intervals' meanings, based on the results of Experiments 1 and 2. There were distinct profiles of emotional meanings for each melody, and these coincided with the meanings of the intervals that constituted the surface and deep structure of each melody. The intervallic structures (i.e., the main intervals of the tunes) and the respective chords for each melody were also presented aurally, and participants' ratings showed similar emotional profiles for these when compared with those of the melodies themselves.

14.
Participants heard music snippets of varying melodic and instrumental familiarity paired with animal-name titles. They then recalled the target when given either the melody or the title as a cue, or they gave feeling-of-knowing (FOK) ratings for the name. In general, recall for titles was better than it was for melodies, and recall was enhanced with increasing melodic familiarity of both the cues and the targets. Accuracy of FOK ratings, but not magnitude, also increased with increasing familiarity. Although similar ratings were given after melody and title cues, accuracy was better with title cues. Finally, knowledge of the real titles of the familiar music enhanced recall but had, by and large, no effect on the FOK ratings.

15.
Musicians and nonmusicians indicated whether a two-note probe following a tonally structured melody occurred in the melody. The critical probes were taken from one of three locations in the melody: the two notes (1) ending the first phrase, (2) straddling the phrase boundary, and (3) beginning the second phrase. As predicted, the probe that straddled the phrase boundary was more difficult to recognize than either of the within-phrase probes. These findings suggest that knowledge of harmonic structure influences perceptual organization of melodies in ways analogous to the influence of clause relations on the perceptual organization of sentences. They also provide evidence that training plays an important role in refining listeners’ sensitivity to harmonic variables.

16.
Indexical effects refer to the influence of surface variability of the to-be-remembered items, such as different voices speaking the same words or different timbres (musical instruments) playing the same melodies, on recognition memory performance. The nature of timbre effects in melody recognition was investigated in two experiments. Experiment 1 showed that melodies that remained in the same timbre from study to test were discriminated better than melodies presented at test in a previously studied but different timbre, or in an unstudied timbre. These timbre effects are attributed solely to instance-specific matching, rather than to timbre-specific familiarity. In Experiment 2, when the melodies at test were played in a previously unstudied timbre that was similar to the original, performance was comparable to the condition in which the exact same timbre was repeated at test. The use of a similar timbre at test enabled the listener to discriminate old from new melodies reliably. Overall, our data suggest that timbre-specific information is encoded and stored in long-term memory. Analogous indexical effects arising from timbre (nonmusical) and voice (nonlexical) attributes in music and speech processing, respectively, are implied and discussed.

17.
The perceptual restoration of musical sounds was investigated in 5 experiments with Samuel's (1981a) discrimination methodology. Restoration in familiar melodies was compared to phonemic restoration in Experiment 1. In the remaining experiments, we examined the effect of expectations (generated by familiarity, predictability, and musical schemata) on musical restoration. We investigated restoration in melodies by comparing familiar and unfamiliar melodies (Experiment 2), as well as unfamiliar melodies varying in tonal and rhythmic predictability (Experiment 3). Expectations based on both familiarity and predictability were found to reduce restoration at the melodic level. Restoration at the submelodic level was investigated with scales and chords in Experiments 4 and 5. At this level, key-based expectations were found to increase restoration. Implications for music perception, as well as similarities between restoration in music and speech, are discussed.

18.
This study investigated the effects of stimulus modality, standard duration, sex, and laterality in duration discrimination by musicians and nonmusicians. Seventeen musicians (M age = 24.1 yr.) and 22 nonmusicians (M age = 26.8 yr.) participated. Auditory (1,000 Hz) and tactile (250 Hz) sinusoidal suprathreshold stimuli with varying durations were used. The standard durations tested were 0.5 and 3.0 sec. Participants discriminated comparison stimuli which had durations slightly longer and shorter than the standard durations. Difference limens were found by the method of limits and converted to Weber fractions based on the standard durations. Musicians had lower, i.e., better, Weber fractions than nonmusicians in the auditory modality, but there was no significant difference between musicians and nonmusicians in the tactile modality. Auditory discrimination was better than tactile discrimination. Discrimination improved when the standard duration was increased both for musicians and nonmusicians. These results support previous findings of superior auditory processing by musicians. Significant differences between discrimination in the millisecond and second ranges may be due to a deviation from Weber's law and the discontinuity of timing in different duration ranges reported in the literature.
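The conversion described in this abstract, from a difference limen to a Weber fraction, is simply division by the standard duration. A small sketch with made-up numbers (the actual limens are not reported in the abstract):

```python
def weber_fraction(difference_limen, standard_duration):
    """Weber fraction: the just-noticeable difference expressed as a
    proportion of the standard duration (lower = better discrimination)."""
    return difference_limen / standard_duration

# Hypothetical difference limens, in seconds, for the two standards used above.
wf_short = weber_fraction(0.060, 0.5)  # 0.12
wf_long = weber_fraction(0.240, 3.0)   # 0.08
```

A deviation from Weber's law, as the abstract mentions, would appear as systematically different fractions at the 0.5 s and 3.0 s standards rather than a constant ratio.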

19.
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

20.
Most Western music is tonal; that is, pitch organization can largely be described in terms of scales or keys. A considerable amount of research has been conducted on the role played by scale in perceiving notes and melodies. The present article points out a potentially important distinction between scale structure (the set of permitted pitch intervals between notes) and mode (the assignment of a special salience or centrality to particular notes within the scale structure). Four experiments are described that investigated the judgments of adult Western listeners for melodies that approximated scale structure to differing degrees but were random in other respects. We found that musicians and nonmusicians gave higher ratings of preference and adjudged musicality to melodies containing increased numbers of consecutive notes conforming to scale structure. A significant exception to this rule was the least scalar type of sequence, which received ratings as high as the fully scalar sequences. This exception occurred because subjects identified scale structure not only in groups of contiguous notes but also in groups of discontiguous notes that formed a coherent "stream" as long as the number of notes intervening corresponded to a standard temporal grouping, or meter, such as is commonly found in Western music.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号