Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
A number of different cues allow listeners to perceive musical meter. Three experiments examined effects of melodic and temporal accents on perceived meter in excerpts from folk songs scored in 6/8 or 3/4 meter. Participants matched excerpts with 1 of 2 metrical drum accompaniments. Melodic accents included contour change, melodic leaps, registral extreme, melodic repetition, and harmonic rhythm. Two experiments with isochronous melodies showed that contour change and melodic repetition predicted judgments. For longer melodies in the 2nd experiment, variables predicted judgments best at the beginning of excerpts. The final experiment, with rhythmically varied melodies, showed that temporal accents, tempo, and contour change were the strongest predictors of meter. The authors' findings suggest that listeners combine multiple melodic and temporal features to perceive musical meter.

2.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract.

3.
Mental representations for musical meter (total citations: 5; self-citations: 0; cited by others: 5)
Investigations of the psychological representation for musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiates one meter from another and coincides with music-theoretic predictions of accent placement. Goodness-of-fit judgments for events presented in metrical contexts indicated a multileveled hierarchy of relative accent strength, with finer differentiation among hierarchical levels by musically experienced than inexperienced listeners. Memory confusions of temporal patterns in a discrimination task were characterized by the same hierarchy of inferred accent strength. These findings suggest mental representations for structural regularities underlying musical meter that influence perceiving, remembering, and composing music.

4.
Repp BH. Cognition, 2007, 102(3): 434-454
Music commonly induces the feeling of a regular beat (i.e., a metrical structure) in listeners. However, musicians can also intentionally impose a beat (i.e., a metrical interpretation) on a metrically ambiguous passage. The present study aimed to provide objective evidence for this little-studied mental ability. Participants were prompted with musical notation to adopt different metrical interpretations of a cyclically repeated isochronous 12-note melody while tapping in synchrony with specified target tones in the melody. The target tones either coincided with the imposed beat (on-beat tapping) or did not (off-beat tapping). An adaptive staircase method was employed to determine the fastest tempo at which each synchronization task could be performed. For each metrical interpretation, a significant advantage for on-beat over off-beat tapping was obtained, except in a condition in which participants, instead of synchronizing, were in control of the target tones. By showing that a self-imposed beat can affect sensorimotor synchronization, the present results provide objective evidence for endogenous perceptual organization of metrical sequences. It is hypothesized that metrical interpretation rests upon covert rhythmic action.
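The adaptive staircase procedure mentioned in this abstract can be sketched in a few lines. The version below is a generic illustration, not the study's actual protocol: tempo (inter-onset interval) is made faster after each success and slower after each failure, the step size is halved at every reversal, and the final reversal points are averaged as the threshold estimate. All names and parameter values are invented for illustration.

```python
def staircase_threshold(can_synchronize, start_ioi=500.0,
                        step=50.0, min_step=5.0, max_trials=100):
    """Estimate the shortest inter-onset interval (ms) at which
    can_synchronize(ioi) still succeeds, via a 1-up-1-down staircase."""
    ioi = start_ioi
    last_success = None
    reversals = []
    for _ in range(max_trials):
        success = can_synchronize(ioi)
        if last_success is not None and success != last_success:
            # Direction changed: record a reversal and shrink the step.
            reversals.append(ioi)
            step = max(step / 2.0, min_step)
        last_success = success
        ioi = ioi - step if success else ioi + step
    # Average the last few reversal points as the threshold estimate.
    tail = reversals[-4:] if len(reversals) >= 4 else reversals
    return sum(tail) / len(tail) if tail else ioi

# Simulated listener who can synchronize whenever the IOI is >= 240 ms:
estimate = staircase_threshold(lambda ioi: ioi >= 240.0)
```

With the simulated listener above, the estimate converges near the listener's true 240 ms limit.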

5.
We explored the differences between metamemory judgments for titles as well as for melodies of instrumental music and those for songs with lyrics. Participants were given melody or title cues and asked to provide the corresponding titles or melodies or feeling of knowing (FOK) ratings. FOK ratings were higher but less accurate for titles with melody cues than vice versa, but only in instrumental music, replicating previous findings. In a series of seven experiments, we ruled out style, instrumentation, and strategy differences as explanations for this asymmetry. A mediating role of lyrics between the title and the melody in songs was also ruled out. What emerged as the main explanation was the degree of familiarity with the musical pieces, which was manipulated either episodically or semantically, and within this context, lyrics appeared to serve as an additional source of familiarity. Results are discussed using the Interactive Theory of how FOK judgments are made.

6.
For listeners to recognize words, they must map temporally distributed phonetic feature cues onto higher order phonological representations. Three experiments are reported that were performed to examine what information listeners extract from assimilated segments (e.g., place-assimilated tokens of cone that resemble comb) and how they interpret it. Experiment 1 employed form priming to demonstrate that listeners activate the underlying form of CONE, but not of its neighbor (COMB). Experiment 2 employed phoneme monitoring to show that the same assimilated tokens facilitate the perception of postassimilation context. Together, the results of these two experiments suggest that listeners recover both the underlying place of the modified item and information about the subsequent item from the same modified segment. Experiment 3 replicated Experiment 1, using different postassimilation contexts to demonstrate that context effects do not reflect familiarity with a given assimilation process. The results are discussed in the context of general auditory grouping mechanisms.

7.
Pitch and time are two principal form-bearing dimensions in Western tonal music. Research on melody perception has shown that listeners develop expectations about "What" note is coming next and "When" in time it will occur. Our study used sequences of chords (i.e., simultaneously sounding notes) to investigate the influence of these expectations on chord processing (Experiments 1 and 4) and subjective judgments of completion (Experiments 2 and 3). Both tasks showed an influence of tonal relations and temporal regularities: expected events occurring at the expected moment were processed faster and led to higher completion judgments. However, pitch and time dimensions interacted only for completion judgments. The present outcome suggests that for chord perception the influence of pitch and time might depend on the required processing: a more global judgment favors interactive influences, whereas a task focused on local chord processing does not.

8.
Episodic recognition of novel and familiar melodies was examined by asking participants to make judgments about the recency and frequency of presentation of melodies over the course of two days of testing. For novel melodies, recency judgments were poor and participants often confused the number of presentations of a melody with its day of presentation; melodies heard frequently were judged as having been heard more recently than they actually were. For familiar melodies, recency judgments were much more accurate and the number of presentations of a melody helped rather than hindered performance. Frequency judgments were generally more accurate than recency judgments and did not demonstrate the same interaction with musical familiarity. Overall, these findings suggest that (1) episodic recognition of novel melodies is based more on a generalized "feeling of familiarity" than on a specific episodic memory, (2) frequency information contributes more strongly to this generalized memory than recency information, and (3) the formation of an episodic memory for a melody depends on either the overall familiarity of the stimulus or the availability of a verbal label.

9.
The perceptual restoration of musical sounds was investigated in 5 experiments with Samuel's (1981a) discrimination methodology. Restoration in familiar melodies was compared to phonemic restoration in Experiment 1. In the remaining experiments, we examined the effect of expectations (generated by familiarity, predictability, and musical schemata) on musical restoration. We investigated restoration in melodies by comparing familiar and unfamiliar melodies (Experiment 2), as well as unfamiliar melodies varying in tonal and rhythmic predictability (Experiment 3). Expectations based on both familiarity and predictability were found to reduce restoration at the melodic level. Restoration at the submelodic level was investigated with scales and chords in Experiments 4 and 5. At this level, key-based expectations were found to increase restoration. Implications for music perception, as well as similarities between restoration in music and speech, are discussed.

10.
If the notes of two melodies whose pitch ranges do not overlap are interleaved in time so that successive tones come from the different melodies, the resulting sequence of tones is perceptually divided into groups that correspond to the two melodies. Such “melodic fission” demonstrates perceptual grouping based on pitch alone, and has been used extensively in music. Experiment I showed that the identification of interleaved pairs of familiar melodies is possible if their pitch ranges do not overlap, but difficult otherwise. A short-term recognition-memory paradigm (Expt II) showed that interleaving a “background” melody with an unfamiliar melody interferes with same-different judgments regardless of the separation of their pitch ranges, but that range separation attenuates the interference effect. When pitch ranges overlap, listeners can overcome the interference effect and recognize a familiar target melody if the target is prespecified, thereby permitting them to search actively for it (Expt III). But familiarity or prespecification of the interleaved background melody appears not to reduce its interfering effects on same-different judgments concerning unfamiliar target melodies (Expt IV).
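The stimulus construction described in this abstract is simple to sketch: notes of two melodies are alternated in time, and a grouping-by-pitch rule can recover the originals only when their ranges do not overlap. The melodies and pitch boundary below are invented; pitches are MIDI note numbers.

```python
def interleave(melody_a, melody_b):
    """Alternate tones from two equal-length melodies into one sequence."""
    out = []
    for a, b in zip(melody_a, melody_b):
        out.extend([a, b])
    return out

def segregate_by_pitch(sequence, boundary):
    """Split one tone sequence into a low and a high stream,
    preserving temporal order within each stream."""
    low = [n for n in sequence if n < boundary]
    high = [n for n in sequence if n >= boundary]
    return low, high

high_melody = [72, 74, 76, 74]   # around C5
low_melody = [60, 62, 59, 60]    # around C4
mixed = interleave(high_melody, low_melody)
low, high = segregate_by_pitch(mixed, 66)
```

Because the two ranges do not overlap, a boundary between them recovers each melody exactly; with overlapping ranges no single boundary could, which mirrors the identification difficulty the abstract reports.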

11.
Four experiments investigated the influence of situational familiarity within a judgmental context on the process of credibility attribution. We predicted that high familiarity with a situation would lead to higher efficacy expectations for, and a more pronounced use of, verbal information when making judgments of credibility. Under low situational familiarity, judges were expected to experience higher efficacy expectations for, and a more pronounced use of, nonverbal information. In Experiments 1 through 4, participants under low or high situational familiarity saw a film in which nonverbal cues (fidgety vs. calm movements) and verbal content cues (low vs. high plausibility) were manipulated. As predicted, when familiarity was low, only the nonverbal cues influenced participants’ judgments of credibility. In contrast, participants in the high familiarity condition used only the verbal cues. Experiments 3 and 4 showed that efficacy expectations regarding verbal and nonverbal information, but not processing motivation, drive this familiarity effect.

12.
Influences of acculturation and musical sophistication on music perception were examined. Judgments for mistuning were obtained for Ss differing in musical sophistication who listened to a melody that was based on interval patterns from Western and Javanese musical scales. Less musically sophisticated Ss' judgments were better for Western than Javanese patterns. Musicians' thresholds did not differ across Western and Javanese patterns. Differences in judgments across scales are attributable to acculturation through listening exposure and musical sophistication gained through formal experience.

13.
Expression in musical performance is largely communicated by the manner in which a piece is played: interpretive aspects that supplement the written score. In piano performance, timing and amplitude are the principal parameters the performer can vary. We examined the way in which such variation serves to communicate emotion by manipulating timing and amplitude in performances of classical piano pieces. Over three experiments, listeners rated the emotional expressivity of performances and their manipulated versions. In Experiments 1 and 2, timing and amplitude information were covaried; judgments decreased monotonically with performance variability, demonstrating that the rank ordering of acoustical manipulations was captured by participants' responses. Further, participants' judgments formed an S-shaped (sigmoidal) function in which greater sensitivity was seen for musical manipulations in the middle of the range than at the extremes. In Experiment 3, timing and amplitude were manipulated independently; timing variation was found to provide more expressive information than did amplitude. Across all three experiments, listeners demonstrated sensitivity to the expressive cues we manipulated, with sensitivity increasing as a function of musical experience.
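The S-shaped response pattern this abstract reports can be illustrated with a logistic curve: ratings fall monotonically as the manipulation magnitude grows, and the steepest change (greatest sensitivity) sits in the middle of the range. The function and all parameter values below are an illustrative sketch, not fitted to the study's data.

```python
import math

def rating(variability, midpoint=1.0, slope=4.0, lo=1.0, hi=7.0):
    """Map a variability scaling factor onto a 1-7 expressivity rating
    via a decreasing logistic: high ratings at low variability,
    low ratings at high variability, steepest near the midpoint."""
    return lo + (hi - lo) / (1.0 + math.exp(slope * (variability - midpoint)))
```

A finite-difference check on this curve reproduces the qualitative finding: the rating change across a fixed interval is larger near the midpoint than at the extremes.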

14.
Musical tuning perception in infancy and adulthood was explored in three experiments. In Experiment 1, Western adults were tested in detection of randomly located mistunings in a melody based on musical interval patterns from native and nonnative musical scales. Subjects performed better in a Western major scale context than in either a Western augmented or a Javanese pelog scale context. Because the major scale is used frequently in Western music and, therefore, is more perceptually familiar than either the augmented scale or the pelog scale are, the adults’ pattern of performance is suggestive of musical acculturation. Experiments 2 and 3 were designed to explore the onset of culturally specific perceptual reorganization for music in the age period that has been found to be important in linguistically specific perceptual reorganization for speech. In Experiment 2, 1-year-olds had a pattern of performance similar to that of the adults, but 6-month-olds could not detect mistunings reliably better than chance. In Experiment 3, another group of 6-month-olds was tested, and a larger degree of mistuning was used so that floor effects might be avoided. These 6-month-olds performed better in the major and augmented scale contexts than in the pelog context, without a reliable performance difference between the major and augmented contexts. Comparison of the results obtained with 6-month-olds and 1-year-olds suggests that culturally specific perceptual reorganization for musical tuning begins to affect perception between these ages, but the 6-month-olds’ pattern of results considered alone is not as clear. The 6-month-olds’ better performance on the major and augmented interval patterns than on the pelog interval pattern is potentially attributable to either the 6-month-olds’ lesser perceptual acculturation than that of the 1-year-olds or perhaps to an innate predisposition for processing of music based on a single fundamental interval, in this case the semitone.

15.
Musical tuning perception in infancy and adulthood was explored in three experiments. In Experiment 1, Western adults were tested in detection of randomly located mistunings in a melody based on musical interval patterns from native and nonnative musical scales. Subjects performed better in a Western major scale context than in either a Western augmented or a Javanese pelog scale context. Because the major scale is used frequently in Western music and, therefore, is more perceptually familiar than either the augmented scale or the pelog scale are, the adults' pattern of performance is suggestive of musical acculturation. Experiments 2 and 3 were designed to explore the onset of culturally specific perceptual reorganization for music in the age period that has been found to be important in linguistically specific perceptual reorganization for speech. In Experiment 2, 1-year-olds had a pattern of performance similar to that of the adults, but 6-month-olds could not detect mistunings reliably better than chance. In Experiment 3, another group of 6-month-olds was tested, and a larger degree of mistuning was used so that floor effects might be avoided. These 6-month-olds performed better in the major and augmented scale contexts than in the pelog context, without a reliable performance difference between the major and augmented contexts. Comparison of the results obtained with 6-month-olds and 1-year-olds suggests that culturally specific perceptual reorganization for musical tuning begins to affect perception between these ages, but the 6-month-olds' pattern of results considered alone is not as clear. The 6-month-olds' better performance on the major and augmented interval patterns than on the pelog interval pattern is potentially attributable to either the 6-month-olds' lesser perceptual acculturation than that of the 1-year-olds or perhaps to an innate predisposition for processing of music based on a single fundamental interval, in this case the semitone.

16.
A classical experiment of auditory stream segregation is revisited, reconceptualising perceptual ambiguity in terms of affordances and musical engagement. Specifically, three experiments are reported that investigate how listeners’ perception of auditory sequences change dynamically depending on emotional context. The experiments show that listeners adapt their attention to higher or lower pitched streams (Experiments 1 and 2) and the degree of auditory stream integration or segregation (Experiment 3) in accordance with the presented emotional context. Participants with and without formal musical training show this influence, although to differing degrees (Experiment 2). Contributing evidence to the literature on interactions between emotion and cognition, these experiments demonstrate how emotion is an intrinsic part of music perception and not merely a product of the listening experience.

17.
Absolute pitch (AP) is the rare ability to name or produce an isolated musical note without the aid of a reference note. One skill thought to be unique to AP possessors is the ability to provide absolute intonation judgments (e.g., classifying an isolated note as “in-tune” or “out-of-tune”). Recent work has suggested that absolute intonation perception among AP possessors is not crystallized in a critical period of development, but is dynamically maintained by the listening environment, in which the vast majority of Western music is tuned to a specific cultural standard. Given that all listeners of Western music are constantly exposed to this specific cultural tuning standard, our experiments address whether absolute intonation perception extends beyond AP possessors. We demonstrate that non-AP listeners are able to accurately judge the intonation of completely isolated notes. Both musicians and nonmusicians showed evidence for absolute intonation recognition when listening to familiar timbres (piano and violin). When testing unfamiliar timbres (triangle and inverted sine waves), only musicians showed weak evidence of absolute intonation recognition (Experiment 2). Overall, these results highlight a previously unknown similarity between AP and non-AP possessors’ long-term musical note representations, including evidence of sensitivity to frequency.

18.
During speech perception, listeners make judgments about the phonological category of sounds by taking advantage of multiple acoustic cues for each phonological contrast. Perceptual experiments have shown that listeners weight these cues differently. How do listeners weight and combine acoustic cues to arrive at an overall estimate of the category for a speech sound? Here, we present several simulations using a mixture of Gaussians models that learn cue weights and combine cues on the basis of their distributional statistics. We show that a cue‐weighting metric in which cues receive weight as a function of their reliability at distinguishing phonological categories provides a good fit to the perceptual data obtained from human listeners, but only when these weights emerge through the dynamics of learning. These results suggest that cue weights can be readily extracted from the speech signal through unsupervised learning processes.
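The reliability-based cue weighting this abstract describes can be sketched directly. In the toy model below (an illustration in the spirit of the mixture-of-Gaussians account, not the paper's actual simulations), each cue is modeled as one Gaussian per phonological category, a cue's weight grows with how well its two category distributions are separated (a d'-like measure), and cues are combined as a weighted sum of per-cue log-likelihood ratios. All distribution parameters are invented.

```python
def dprime(mu_a, mu_b, sd):
    """Separation of the two category means in standard-deviation units."""
    return abs(mu_a - mu_b) / sd

def categorize(cue_values, cue_models):
    """Weighted log-odds decision for category A vs. B across cues.

    cue_models: one (mu_a, mu_b, sd) tuple per cue;
    cue_values: the observed value of each cue.
    Returns True if category A is favored."""
    weights = [dprime(mu_a, mu_b, sd) for mu_a, mu_b, sd in cue_models]
    total = sum(weights)
    score = 0.0
    for x, (mu_a, mu_b, sd), w in zip(cue_values, cue_models, weights):
        # Log-likelihood ratio of category A over B for this cue
        # (equal-variance Gaussians, so the normalizers cancel).
        llr = ((x - mu_b) ** 2 - (x - mu_a) ** 2) / (2 * sd ** 2)
        score += (w / total) * llr
    return score > 0

# Cue 1 is reliable (well-separated categories); cue 2 is not:
models = [(0.0, 2.0, 0.5), (0.0, 0.5, 1.0)]
```

With these invented parameters, an input whose reliable cue points to category A is classified as A even when the unreliable cue leans the other way, which is the qualitative behavior the weighting metric is meant to capture.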

19.
We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggest that perceived constancy across musical variation is a natural result of a reductionist mechanism for producing memory representations.

20.
The hypothesis that melodies are recognized at moments when they exhibit a distinctive musical pattern was tested. In a melody recognition experiment, point-of-recognition (POR) data were gathered from 32 listeners (16 musicians and 16 nonmusicians) judging 120 melodies. A series of models of melody recognition were developed, resulting from a stepwise multiple regression of two classes of information relating to melodic familiarity and melodic distinctiveness. Melodic distinctiveness measures were assembled through statistical analyses of over 15,000 Western themes and melodies. A significant model, explaining 85% of the variance, entered measures primarily of timing distinctiveness and pitch distinctiveness, but excluding familiarity, as predictors of POR. Differences between nonmusician and musician models suggest a processing shift from momentary to accumulated information with increased exposure to music. Supplemental materials for this article may be downloaded from http://mc.psychonomic-journals.org/content/supplemental.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号