Similar Articles
1.
The authors explore priming effects of pitch repetition in music in 3 experiments. Musically untrained participants heard a short melody and sang the last pitch of the melody as quickly as possible. Each experiment manipulated (a) whether or not the tone to be sung (target) was heard earlier in the melody (primed) and (b) the prime-target distance (measured in events). Experiment 1 used variable-length melodies, whereas Experiments 2 and 3 used fixed-length melodies. Experiment 3 changed the timbre of the target tone. In all experiments, fast-responding participants produced repeated tones faster than nonrepeated tones, and this repetition benefit decreased as prime-target distances increased. All participants produced expected tonic endings faster than less expected nontonic endings. Repetition and tonal priming effects are compared with harmonic priming effects in music and with repetition priming effects in language.

2.
Perceptual relationships between four-voice harmonic sequences and single voices were examined in three experiments. In Experiment 1, listeners rated the extent to which single voices were musically consistent with harmonic sequences. When harmonic sequences did not change key, judgments were influenced by three sources of congruency: melody (whether the single voice was the same as the soprano voice of the harmonic sequence), chord progression (whether the single voice could be harmonized to give rise to the chord progression of the harmonic sequence), and key structure (whether or not the single voice implied modulation). When key changes occurred, sensitivity to sources of congruency was reduced. In Experiment 2, another interpretation of the results was examined: that consistency ratings were based on congruency in well-formedness. Listeners provided well-formedness ratings of the single voices and harmonic sequences. A multiple regression analysis suggested that consistency ratings were based not merely on well-formedness but on congruency in melody, chord progression, and key structure. In Experiment 3, listeners rated the extent of modulation in harmonic sequences and in each voice of the sequences. Discrimination between modulation conditions was greater for single voices than for harmonic sequences, suggesting that abstraction of key from melody may occur without reference to implied harmony. A partially hierarchical system for processing melody, harmony, and key is proposed.

3.
Two experiments demonstrate positional variation in the relative detectability of, respectively, local temporal and dynamic perturbations in an isochronous and isodynamic sequence of melody tones, played on a computer-controlled piano. This variation may reflect listeners’ expectations of expressive performance microstructure (the top-down hypothesis), or it may be due to psychoacoustic (pitch-related) stimulus factors (the bottom-up hypothesis). Percent correct scores for increments in tone duration correlated significantly with the average timing profile of pianists’ expressive performances of the music, as predicted specifically by the top-down hypothesis. For intensity increments, the analogous perception-performance correlation was weak and the bottom-up factors of relative pitch height and/or direction of pitch change accounted for some of the perceptual variation. Subjects’ musical training increased overall detection accuracy but did not affect the positional variation in accuracy scores in either experiment. These results are consistent with the top-down hypothesis for timing, but they favor the bottom-up hypothesis for dynamics. The perception-performance correlation for timing may also be viewed as being due to complex stimulus properties such as tonal motion and tension/relaxation that influence performers and listeners in similar ways.

4.
If the notes of two melodies whose pitch ranges do not overlap are interleaved in time so that successive tones come from the different melodies, the resulting sequence of tones is perceptually divided into groups that correspond to the two melodies. Such “melodic fission” demonstrates perceptual grouping based on pitch alone, and has been used extensively in music. Experiment I showed that the identification of interleaved pairs of familiar melodies is possible if their pitch ranges do not overlap, but difficult otherwise. A short-term recognition-memory paradigm (Expt II) showed that interleaving a “background” melody with an unfamiliar melody interferes with same-different judgments regardless of the separation of their pitch ranges, but that range separation attenuates the interference effect. When pitch ranges overlap, listeners can overcome the interference effect and recognize a familiar target melody if the target is prespecified, thereby permitting them to search actively for it (Expt III). But familiarity or prespecification of the interleaved background melody appears not to reduce its interfering effects on same-different judgments concerning unfamiliar target melodies (Expt IV).

5.
Jackendoff, R., & Lerdahl, F. (2006). Cognition, 100(1), 33-72.
We explore the capacity for music in terms of five questions: (1) What cognitive structures are invoked by music? (2) What are the principles that create these structures? (3) How do listeners acquire these principles? (4) What pre-existing resources make such acquisition possible? (5) Which aspects of these resources are specific to music, and which are more general? We examine these issues by looking at the major components of musical organization: rhythm (an interaction of grouping and meter), tonal organization (the structure of melody and harmony), and affect (the interaction of music with emotion). Each domain reveals a combination of cognitively general phenomena, such as gestalt grouping principles, harmonic roughness, and stream segregation, with phenomena that appear special to music and language, such as metrical organization. These are subtly interwoven with a residue of components that are devoted specifically to music, such as the structure of tonal systems and the contours of melodic tension and relaxation that depend on tonality. In the domain of affect, these components are especially tangled, involving the interaction of such varied factors as general-purpose aesthetic framing, communication of affect by tone of voice, and the musically specific way that tonal pitch contours evoke patterns of posture and gesture.

6.
The notion that the melody (i.e., pitch structure) of familiar music is more recognizable than its accompanying rhythm (i.e., temporal structure) was examined with the same set of nameable musical excerpts in three experiments. In Experiment 1, the excerpts were modified so as to keep either their original pitch variations, whereas durations were set to isochrony (melodic condition), or their original temporal pattern while played on a single constant pitch (rhythmic condition). The subjects, who were selected without regard to musical training, were found to name more tunes and to rate their feeling of knowing the musical excerpts far higher in the melodic condition than in the rhythmic condition. These results were replicated in Experiment 2, wherein the melodic and rhythmic patterns of the musical excerpts were interchanged to create chimeric mismatched tunes. The difference in saliency of the melodic pattern and the rhythmic pattern also emerged with a music-title-verification task in Experiment 3, hence discarding response selection as the main source of the discrepancy. The lesser effectiveness of rhythmic structure appears to be related to its lesser encoding distinctiveness relative to melodic structure. In general, rhythm was found to be a poor cue for the musical representations that are stored in long-term memory. Nevertheless, in all three experiments, the most effective cue for music identification involved the proper combination of pitches and durations. Therefore, the optimal code of access to long-term memory for music resides in a combination of rhythm and melody, of which the latter would be the most informative.

7.
When listening to a melody, we are often able to anticipate not only what tonal intervals will occur next but also when in time these will appear. The experiments reported here were carried out to investigate what types of structural relations support the generation of temporal expectancies in the context of a melody recognition task. The strategy was to present subjects with a set of folk tunes in which temporal accents (i.e., notes with a prolonged duration) always occurred in the first half of a melody, so that expectancies, if generated, could carry over to an isochronous sequence of notes in the latter half of the melody. The ability to detect deviant pitch changes in the final variation as a function of rhythmic context was then evaluated. Accuracy and reaction time data from Experiment 1 indicated that expectancy formation jointly depends on an invariant periodicity of temporal accentuation and the attentional highlighting of certain melodic relations (i.e., phrase ending points). In Experiment 2, once these joint expectancies were generated, the temporal dimension had a greater facilitating effect upon melody recognition than did the melodic one. These results are discussed in terms of their implications for the perceptual processing of musical events.

8.
The purpose of this research was to investigate a set of factors that may influence the perceived rate of an auditory event. In a paired-comparison task, subjects were presented with a set of music-like patterns that differed in their relative number of contour changes and in the magnitude of pitch skips (Experiment 1) as well as in the compatibility of rhythmic accent structure with the arrangement of pitch relations (Experiment 2). Results indicated that, relative to their standard referents, comparison melodies were judged to unfold more slowly when they displayed more changes in pitch direction, greater pitch distances, and an incompatible rhythmic accent structure. These findings are suggested to stem from an imputed velocity hypothesis, in which people overgeneralize certain invariant relations that typically occur between melodic and temporal accent structure within Western music.

9.
What is the involvement of what we know in what we perceive? In this article, the contribution of melodic schema-based processes to the perceptual organization of tone sequences is examined. Two unfamiliar six-tone melodies, one of which was interleaved with distractor tones, were presented successively to listeners who were required to decide whether the melodies were identical or different. In one condition, the comparison melody was presented after the mixed sequence: a target melody interleaved with distractor tones. In another condition, it was presented beforehand, so that the listeners had precise knowledge about the melody to be extracted from the mixture. In the latter condition, recognition performance was better and a bias toward same responses was reduced, as compared with the former condition. A third condition, in which the comparison melody presented beforehand was transposed up in frequency, revealed that whereas the performance improvement was explained in part by absolute pitch or frequency priming, relative pitch representation (interval and/or contour structure) may also have played a role. Differences in performance as a function of mean frequency separation between target and distractor sequences, when listeners did or did not have prior knowledge about the target melody, argue for a functional distinction between primitive and schema-based processes in auditory scene analysis.

10.
Previous studies have shown that the effect of the Spatial Musical Association of Response Codes (SMARC) depends on various features, such as task conditions (whether pitch height is implicit or explicit), response dimension (horizontal vs. vertical), presence or absence of a reference tone, and former musical training of the participants. In the present study, we investigated the effects of pitch range and timbre: in particular, how timbre (piano vs. vocal) contributes to the horizontal and vertical SMARC effect in nonmusicians under varied pitch range conditions. Nonmusicians performed a timbre judgement task in which the pitch range was either small (6 or 8 semitone steps) or large (9 or 12 semitone steps) in a horizontal and a vertical response setting. For piano sounds, SMARC effects were observed in all conditions. For the vocal sounds, in contrast, SMARC effects depended on pitch range. We concluded that the occurrence of the SMARC effect, especially in horizontal response settings, depends on the interaction of the timbre (vocal and piano) and pitch range if vocal and instrumental sounds are combined in one experiment: the human voice enhances the attention, both to the vocal and the instrumental sounds.

11.
The present study tested whether coding of tone pitch relative to a referent contributes to the correspondence effect between the pitch height of an auditory stimulus and the location of a lateralized response. When left-right responses are mapped to high or low pitch tones, performance is better with the high-right/low-left mapping than with the opposite mapping, a phenomenon called the horizontal SMARC effect. However, when pitch height is task irrelevant, the horizontal SMARC effect occurs only for musicians. In Experiment 1, nonmusicians performed a pitch discrimination task, and the SMARC effect was evident regardless of whether a referent tone was presented. However, in Experiment 2, for a timbre-judgment task, nonmusicians showed a SMARC effect only when a referent tone was presented, whereas musicians showed a SMARC effect that did not interact with presence/absence of the referent. Dependence of the SMARC effect for nonmusicians on a reference tone was replicated in Experiment 3, in which judgments of the color of a visual stimulus were made in the presence of a concurrent high- or low-pitched pure tone. These results suggest that referential coding of pitch height is a key determinant for the horizontal SMARC effect when pitch height is irrelevant to the task.

12.
In two experiments, the perceptual similarity between a strong tonal melody and various transpositions was investigated using a paradigm in which listeners compared the perceptual similarity of a melody and its transposition with that of the same melody and another transposition. The paradigm has the advantage that it provides a direct judgment regarding the similarity of transposed melodies. The experimental results indicate that the perceptual similarity of a strong tonal melody and its transposition is mainly determined by two factors: (1) the distance on the height dimension between the original melody and its transposition (pitch distance), and (2) the distance between keys as inferred from the circle of fifths (key distance). The major part of the variance is explained by the factor pitch distance, whereas key distance explains only a small part.
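The key-distance factor in this abstract is conventionally operationalized as the number of steps between two keys on the circle of fifths. A minimal sketch of that measure, assuming major keys only (the function name and key spellings are ours, not the authors'):

```python
# Major keys ordered by ascending fifths; the circle wraps after 12 steps.
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def key_distance(key_a: str, key_b: str) -> int:
    """Smallest number of circle-of-fifths steps between two major keys."""
    i = CIRCLE_OF_FIFTHS.index(key_a)
    j = CIRCLE_OF_FIFTHS.index(key_b)
    d = abs(i - j)
    return min(d, 12 - d)  # take the shorter way around the circle
```

On this measure, C and G are adjacent (distance 1), while C and F# are maximally distant (distance 6), matching the intuition that harmonically close keys share more scale tones.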

13.
Pitch perception is fundamental to melody in music and prosody in speech. Unlike many animals, the vast majority of human adults store melodic information primarily in terms of relative, not absolute, pitch, and readily recognize a melody whether rendered in a high or a low pitch range. We show that at 6 months infants are also primarily relative pitch processors. Infants familiarized with a melody for 7 days preferred, on the eighth day, to listen to a novel melody in comparison to the familiarized one, regardless of whether the melodies at test were presented at the same pitch as during familiarization or transposed up or down by a perfect fifth (7/12th of an octave) or a tritone (1/2 octave). On the other hand, infants showed no preference for a transposed over original-pitch version of the familiarized melody, indicating that either they did not remember the absolute pitch, or it was not as salient to them as the relative pitch.
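The interval sizes above translate directly into frequency ratios under equal temperament, where an octave doubles frequency and each of its 12 semitones multiplies frequency by the same factor. A quick sketch of the arithmetic (the helper name is ours):

```python
def transpose_ratio(semitones: float) -> float:
    """Frequency ratio for a transposition by a number of equal-tempered semitones."""
    return 2.0 ** (semitones / 12.0)

# A perfect fifth is 7 semitones, i.e., 7/12 of an octave:
fifth = transpose_ratio(7)    # ~1.498, close to the just-intonation ratio 3:2
# A tritone is 6 semitones, i.e., exactly half an octave:
tritone = transpose_ratio(6)  # sqrt(2), ~1.414
```

So a melody around 440 Hz transposed up a perfect fifth lands near 659 Hz, while a tritone transposition lands near 622 Hz.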

14.
This paper examines infants’ ability to perceive various aspects of musical material that are significant in music in general and in Western European music in particular: contour, intervals, exact pitches, diatonic structure, and rhythm. For the most part, infants focus on relational aspects of melodies, synthesizing global representations from local details. They encode the contour of a melody across variations in exact pitches and intervals. They extract information about pitch direction from the smallest musically relevant pitch change in Western music, the semitone. Under certain conditions, infants detect interval changes in the context of transposed sequences, their performance showing enhancement for sequences that conform to Western musical structure. Infants have difficulty retaining exact pitches except for sets of pitches that embody important musical relations. In the temporal domain, they group the elements of auditory sequences on the basis of similarity and they extract the temporal structure of a melody across variations in tempo.

15.
Musicians and nonmusicians indicated whether a two-note probe following a tonally structured melody occurred in the melody. The critical probes were taken from one of three locations in the melody: the two notes (1) ending the first phrase, (2) straddling the phrase boundary, and (3) beginning the second phrase. As predicted, the probe that straddled the phrase boundary was more difficult to recognize than either of the within-phrase probes. These findings suggest that knowledge of harmonic structure influences perceptual organization of melodies in ways analogous to the influence of clause relations on the perceptual organization of sentences. They also provide evidence that training plays an important role in refining listeners’ sensitivity to harmonic variables.

16.
Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding.

17.
Martino, G., & Marks, L. E. (1999). Perception, 28(7), 903-923.
We tested the semantic coding hypothesis, which states that cross-modal interactions observed in speeded classification tasks arise after perceptual information is recoded into an abstract format common to perceptual and linguistic systems. Using a speeded classification task, we first confirmed the presence of congruence interactions between auditory pitch and visual lightness and observed Garner-type interference with nonlinguistic (perceptual) stimuli (low-frequency and high-frequency tones, black and white squares). Subsequently, we found that modifying the visual stimuli by (a) making them lexical (related words) or (b) reducing their compactness or figural 'goodness' altered congruence effects and Garner interference. The results are consistent with the semantic coding hypothesis, but only in part, and suggest the need for additional assumptions regarding the role of perceptual organization in cross-modal dimensional interactions.

18.
People easily recognize a familiar melody in a previously unheard key, but they also retain some key-specific information. Does the recognition of a transposed melody depend on either pitch distance or harmonic distance from the initially learned instances? Previous research has shown a stronger effect of pitch closeness than of harmonic similarity, but did not directly test for an additional effect of the latter variable. In the present experiment, we familiarized participants with a simple eight-note melody in two different keys (C and D) and then tested their ability to discriminate the target melody from foils in other keys. The transpositions included were to the keys of C# (close in pitch height, but harmonically distant), G (more distant in pitch, but harmonically close), and F# (more distant in pitch and harmonically distant). Across participants, the transpositions to F# and G were either higher or lower than the initially trained melodies, so that their average pitch distances from C and D were equated. A signal detection theory analysis confirmed that discriminability (d′) was better for targets and foils that were close in pitch distance to the studied exemplars. Harmonic similarity had no effect on discriminability, but it did affect response bias (c), in that harmonic similarity to the studied exemplars increased both hits and false alarms. Thus, both pitch distance and harmonic distance affect the recognition of transposed melodies, but with dissociable effects on discrimination and response bias.
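The signal detection indices this abstract reports, sensitivity d′ and criterion c, are standard functions of the hit and false-alarm rates. A minimal sketch of how they are computed (variable names are ours; this does not reproduce the authors' analysis pipeline):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Sensitivity d' and criterion c from hit and false-alarm rates (0 < rate < 1)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF (z-transform)
    d_prime = z(hit_rate) - z(fa_rate)       # separation of signal and noise distributions
    c = -0.5 * (z(hit_rate) + z(fa_rate))    # response bias; negative c = liberal responding
    return d_prime, c
```

This makes the dissociation in the abstract concrete: a manipulation that raises hits and false alarms together (as harmonic similarity did) shifts c without changing d′, whereas one that raises hits while lowering false alarms (as pitch closeness did) changes d′.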

19.
Absolute pitch is an acute pitch-perception ability: a listener who possesses it can name a heard pitch without reference to a standard tone (A4). This study examined the relationship between absolute pitch and the processing of musical syntax by comparing participants with and without absolute pitch on their perception of the basic rules of musical syntax and on their ability to segment musical syntactic structure. The results showed that participants with absolute pitch perceived the basic rules of musical syntax better than controls, and this perceptual advantage extended to their segmentation of musical syntactic structure. These findings indicate that listeners with absolute pitch can not only name pitches in isolation but also show an advantage in processing the pitch relations of tonal music.

20.
The perception of microtonal scales was investigated in a melodic identification task. In each trial, eight pure tones, equally spaced in log frequency in the vicinity of 700 Hz, were presented in one of nine different serial orders. There were two experiments, each with 108 trials (six scales [tone sets] × nine serial orders × two repetitions). In each experiment, 30 subjects, half of whom were musically trained, were asked to match each melody to one of 9 visual representations (frequency-time grids). In Experiment 1, the six scales were spaced at intervals of 25, 33, 50, 67, 100, and 133 cents (100 cents = 1 semitone ≈ 6% of frequency). Performance was worse for scale steps of 25 and 33 cents than it was for wider scale steps. There were no significant effects at other intervals, including the interval of 100 cents, implying that melodic pattern identification is unaffected by long-term experience of music in 12-tone equally tempered tuning (e.g., piano music). In Experiment 2, the six scales were spaced at smaller intervals, of 10, 20, 30, 40, 50, and 60 cents. Performance for the three narrower scale steps was worse than that for the three wider scale steps. For some orders, performance for the narrowest scale step (10 cents) did not exceed chance. The smallest practical scale step for short microtonal melodies in a pattern-identification task was estimated as being 10-20 cents for chance performance, and 30-40 cents for asymptotic performance.
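The cents arithmetic used throughout this abstract (1200 cents = one octave, so 100 cents = one equal-tempered semitone ≈ a 6% frequency increase) can be sketched as follows; the helper names are ours:

```python
import math

def cents_to_ratio(cents: float) -> float:
    """Frequency ratio corresponding to an interval in cents (1200 cents = one octave)."""
    return 2.0 ** (cents / 1200.0)

def ratio_to_cents(ratio: float) -> float:
    """Interval in cents between two frequencies, given their ratio."""
    return 1200.0 * math.log2(ratio)

# One semitone (100 cents) raises frequency by about 5.9%:
semitone_ratio = cents_to_ratio(100)  # ~1.0595
```

For example, a 25-cent step near 700 Hz, the narrowest spacing in Experiment 1, moves the tone by only about 10 Hz, which helps explain why such scales were hard to identify.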
