Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Can skilled performers, such as artists or athletes, recognize the products of their own actions? We recorded 12 pianists playing 12 mostly unfamiliar musical excerpts, half of them on a silent keyboard. Several months later, we played these performances back and asked the pianists to use a 5-point scale to rate whether they thought they were the person playing each excerpt (1 = no, 5 = yes). They gave their own performances significantly higher ratings than any other pianist's performances. In two later follow-up tests, we presented edited performances from which differences in tempo, overall dynamic (i.e., intensity) level, and dynamic nuances had been removed. The pianists' ratings did not change significantly, which suggests that the remaining information (expressive timing and articulation) was sufficient for self-recognition. Absence of sound during recording had no significant effect. These results are best explained by the hypothesis that an observer's action system is most strongly activated during perception of self-produced actions.

2.
In 12 tasks, each including 10 repetitions, 6 skilled pianists performed or responded to a musical excerpt. In the first 6 tasks, expressive timing was required; in the last 6 tasks, metronomic timing. The pianists first played the music on a digital piano (Tasks 1 and 7), then played it without auditory feedback (Tasks 2 and 8), then tapped on a response key in synchrony with one of their own performances (Tasks 3 and 9), with an imagined performance (Tasks 4 and 10), with a computer-generated performance (Tasks 5 and 11), and with a computer-generated sequence of clicks (Tasks 6 and 12). The results demonstrated that pianists are capable of generating the expressive timing pattern of their performance in the absence of auditory and kinaesthetic (piano keyboard) feedback. They can also synchronize their finger taps quite well with expressively timed music or clicks (while imagining the music), although they tend to underestimate long interonset intervals and to compensate on the following tap. Expressive timing is thus shown to be generated from an internal representation of the music. In metronomic performance, residual expressive timing effects were evident. Those did not depend on auditory feedback, but they were much reduced or absent when kinaesthetic feedback from the piano keyboard was eliminated. Thus, they seemed to arise from the pianist's physical interaction with the instrument. Systematic timing patterns related to expressive timing were also observed in synchronization with a metronomic computer performance and even in synchronization with metronomic clicks. These results shed light on intentional and unintentional, structurally governed processes of timing control in music performance.
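The core comparison in studies like this one is between the expressive-timing profiles of two renditions. As a minimal sketch (the onset times below are invented for illustration, not data from the study), one can extract interonset intervals (IOIs) from two onset sequences and correlate the resulting timing profiles:

```python
# Sketch: compare the expressive-timing profiles of two renditions.
# Onset times (in seconds) are hypothetical, not the study's data.

def interonset_intervals(onsets):
    """Return the sequence of interonset intervals (IOIs)."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def pearson_r(x, y):
    """Pearson correlation between two equal-length timing profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two renditions of the same phrase, both slowing at the phrase end.
played = [0.00, 0.48, 0.95, 1.45, 2.05]   # expressive performance
tapped = [0.00, 0.50, 0.98, 1.46, 2.10]   # taps to an imagined performance

r = pearson_r(interonset_intervals(played), interonset_intervals(tapped))
print(round(r, 2))
```

A high correlation between the played and tapped IOI profiles would indicate, as the abstract argues, that the timing pattern is reproduced from an internal representation rather than from keyboard feedback.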

3.
Past research has suggested that the disruptive effect of altered auditory feedback depends on how structurally similar the sequence of feedback events is to the planned sequence of actions. Three experiments pursued one basis for similarity in musical keyboard performance: matches between sequential transitions in spatial targets for movements and the melodic contour of auditory feedback. Trained pianists and musically untrained persons produced simple tonal melodies on a keyboard while hearing feedback sequences that either matched the planned melody or were contour-preserving variations of that melody. Sequence production was disrupted among pianists when feedback events were serially shifted by one event, similarly for shifts of planned melodies and tonal variations but less so for shifts of atonal variations. Nonpianists were less likely to be disrupted by serial shifts of variations but showed similar disruption to pianists for shifts of the planned melody. Thus, transitional properties and tonal schemata may jointly determine perception-action similarity during musical sequence production, and the tendency to generalize from a planned sequence to variations of it may develop with the acquisition of skill.

4.
Musical tuning perception in infancy and adulthood was explored in three experiments. In Experiment 1, Western adults were tested in detection of randomly located mistunings in a melody based on musical interval patterns from native and nonnative musical scales. Subjects performed better in a Western major scale context than in either a Western augmented or a Javanese pelog scale context. Because the major scale is used frequently in Western music and, therefore, is more perceptually familiar than either the augmented scale or the pelog scale are, the adults' pattern of performance is suggestive of musical acculturation. Experiments 2 and 3 were designed to explore the onset of culturally specific perceptual reorganization for music in the age period that has been found to be important in linguistically specific perceptual reorganization for speech. In Experiment 2, 1-year-olds had a pattern of performance similar to that of the adults, but 6-month-olds could not detect mistunings reliably better than chance. In Experiment 3, another group of 6-month-olds was tested, and a larger degree of mistuning was used so that floor effects might be avoided. These 6-month-olds performed better in the major and augmented scale contexts than in the pelog context, without a reliable performance difference between the major and augmented contexts. Comparison of the results obtained with 6-month-olds and 1-year-olds suggests that culturally specific perceptual reorganization for musical tuning begins to affect perception between these ages, but the 6-month-olds' pattern of results considered alone is not as clear. The 6-month-olds' better performance on the major and augmented interval patterns than on the pelog interval pattern is potentially attributable to either the 6-month-olds' lesser perceptual acculturation than that of the 1-year-olds or perhaps to an innate predisposition for processing of music based on a single fundamental interval, in this case the semitone.

5.
The effects of harmony and rhythm on expectancy formation were studied in two experiments. In both studies, we generated musical passages consisting of a melodic line accompanied by four harmonic (chord) events. These sequences varied in their harmonic content, the rhythmic periodicity of the three context chords prior to the final chord, and the ending time of the final chord itself. In Experiment 1, listeners provided ratings for how well the final chord in a chord sequence fit their expectations for what was to come next; analyses revealed subtle changes in ratings as a function of both harmonic and rhythmic variation. Experiment 2 extended these results; listeners made a speeded reaction time judgment on whether the final chord of a sequence belonged with its set of context chords. Analysis of the reaction time data suggested that harmonic and rhythmic variation also influenced the speed of musical processing. These results are interpreted with reference to current models of music cognition, and they highlight the need for rhythmical weighting factors within the psychological representation of tonal/pitch information.

6.
Musical tuning perception in infancy and adulthood was explored in three experiments. In Experiment 1, Western adults were tested in detection of randomly located mistunings in a melody based on musical interval patterns from native and nonnative musical scales. Subjects performed better in a Western major scale context than in either a Western augmented or a Javanese pelog scale context. Because the major scale is used frequently in Western music and, therefore, is more perceptually familiar than either the augmented scale or the pelog scale are, the adults' pattern of performance is suggestive of musical acculturation. Experiments 2 and 3 were designed to explore the onset of culturally specific perceptual reorganization for music in the age period that has been found to be important in linguistically specific perceptual reorganization for speech. In Experiment 2, 1-year-olds had a pattern of performance similar to that of the adults, but 6-month-olds could not detect mistunings reliably better than chance. In Experiment 3, another group of 6-month-olds was tested, and a larger degree of mistuning was used so that floor effects might be avoided. These 6-month-olds performed better in the major and augmented scale contexts than in the pelog context, without a reliable performance difference between the major and augmented contexts. Comparison of the results obtained with 6-month-olds and 1-year-olds suggests that culturally specific perceptual reorganization for musical tuning begins to affect perception between these ages, but the 6-month-olds' pattern of results considered alone is not as clear. The 6-month-olds' better performance on the major and augmented interval patterns than on the pelog interval pattern is potentially attributable to either the 6-month-olds' lesser perceptual acculturation than that of the 1-year-olds or perhaps to an innate predisposition for processing of music based on a single fundamental interval, in this case the semitone.

7.
Mesz B, Trevisan MA, Sigman M. Perception, 2011, 40(2): 209-219
Zarlino, one of the most important music theorists of the sixteenth century, described the minor consonances as 'sweet' (dolci) and 'soft' (soavi) (Zarlino, 1558/1983, On the Modes, New Haven, CT: Yale University Press). Hector Berlioz, in his Treatise on Modern Instrumentation and Orchestration (London: Novello, 1855), speaks about the 'small acid-sweet voice' of the oboe. In line with this tradition of describing musical concepts in terms of taste words, recent empirical studies have found reliable associations between taste perception and low-level sound and musical parameters, such as pitch and phonetic features. Here we investigated whether taste words elicited consistent musical representations by asking trained musicians to improvise on the basis of the four canonical taste words: sweet, sour, bitter, and salty. Our results showed that, even in free improvisation, taste words elicited very reliable and consistent musical patterns: 'bitter' improvisations are low-pitched and legato (without interruption between notes), 'salty' improvisations are staccato (notes sharply detached from each other), 'sour' improvisations are high-pitched and dissonant, and 'sweet' improvisations are consonant, slow, and soft. Interestingly, projections of the improvisations onto musical space (a vector space defined by relevant musical parameters) revealed that improvisations based on different taste words were nearly orthogonal or opposite. Decoding methods could classify binary choices of improvisations (i.e., identify the eliciting taste word from the melody) with performance of around 80%, well above chance. In a second experiment we investigated the mapping from perception of music to taste words. Fifty-seven nonmusicians listened to a fraction of the improvisations, and they identified the taste word that had elicited each improvisation with high accuracy. Our results, furthermore, show that associations of taste and music go beyond basic sensory attributes into the domain of semantics, and open a new avenue of investigation to understand the origins of these consistent taste-music patterns.
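A binary decoding analysis of the kind reported above can be sketched very simply: represent each improvisation by a few melodic features and classify by nearest centroid. The feature values below are invented for illustration; the study's actual features and decoding method may well differ.

```python
# Sketch: nearest-centroid binary decoding of taste labels from
# melodic features. All feature values are hypothetical.

def centroid(rows):
    """Mean feature vector of a list of feature tuples."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_centroid(train, sample):
    """train: {label: [feature tuples]}; return the nearest label."""
    cents = {lab: centroid(rows) for lab, rows in train.items()}
    return min(cents, key=lambda lab: dist(cents[lab], sample))

# Features per improvisation: (mean MIDI pitch, mean note duration in s).
train = {
    "bitter": [(45, 0.90), (48, 1.00), (44, 0.80)],   # low-pitched, legato
    "sour":   [(80, 0.30), (84, 0.25), (78, 0.35)],   # high-pitched, short
}
print(nearest_centroid(train, (47, 0.85)))  # classified as "bitter"
```

With features as well separated as the abstract describes (nearly orthogonal improvisation patterns), even a classifier this simple can reach well-above-chance binary accuracy.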

8.
Repp BH. Cognition, 1992, 44(3): 241-281
To determine whether structural factors interact with the perception of musical time, musically literate listeners were presented repeatedly with eight-bar musical excerpts, realized with physically regular timing on an electronic piano. On each trial, one or two randomly chosen time intervals were lengthened by a small amount, and listeners tried to detect these lengthenings while following the printed score. The resulting detection accuracy profile across all positions in each musical excerpt showed pronounced dips in places where lengthening would typically occur in an expressive (temporally modulated) performance. False alarm percentages indicated that certain tones seemed longer a priori, and these were among the ones whose actual lengthening was easiest to detect. The detection accuracy and false alarm profiles were significantly correlated with each other and with the temporal microstructure of expert performances, as measured from sound recordings by famous artists. Thus the detection task apparently tapped into listeners' musical thought and revealed their expectations about the temporal microstructure of music performance. These expectations, like the timing patterns of actual performances, derive from the cognitive representation of musical structure, as cued by a variety of systemic factors (grouping, meter, harmonic progression) and their acoustic correlates. No simple psychoacoustic explanation of the detection accuracy profiles was evident. The results suggest that the perception of musical time is not veridical but "warped" by the structural representation. This warping may provide a natural basis for performance evaluation: expected timing patterns sound more or less regular, unexpected ones irregular. Parallels to language performance and perception are noted.

9.
In the present study, the gating paradigm was used to measure how much of a musical excerpt needs to be heard to support judgments of familiarity and of emotionality. Nonmusicians heard segments of increasing duration (250, 500, 1,000 msec, etc.). The stimuli were segments from familiar and unfamiliar musical excerpts in Experiment 1 and from very moving and emotionally neutral musical excerpts in Experiment 2. Participants judged how familiar (Experiment 1) or how moving (Experiment 2) the excerpt was to them. Results show that a feeling of familiarity can be triggered by 500-msec segments, and that the distinction between moving and neutral can be made for 250-msec segments. This finding extends the observation of fast-acting cognitive and emotional processes from face and voice perception to music perception.

10.
The detectability of a deviation from metronomic timing—of a small local increment in interonset interval (IOI) duration—in a musical excerpt is subject to positional biases, or "timing expectations," that are closely related to the expressive timing (sequence of IOI durations) typically produced by musicians in performance (Repp, 1992b, 1998c, 1998d). Experiment 1 replicated this finding with some changes in procedure and showed that the perception-performance correlation is not the result of formal musical training or availability of a musical score. Experiments 2 and 3 used a synchronization task to examine the hypothesis that participants' perceptual timing expectations are due to systematic modulations in the period of a mental timekeeper that also controls perceptual-motor coordination. Indeed, there was systematic variation in the asynchronies between taps and metronomically timed musical event onsets, and this variation was correlated both with the variations in IOI increment detectability (Experiment 1) and with the typical expressive timing pattern in performance. When the music contained local IOI increments (Experiment 2), they were almost perfectly compensated for on the next tap, regardless of their detectability in Experiment 1, which suggests a perceptual-motor feedback mechanism that is sensitive to subthreshold timing deviations. Overall, the results suggest that aspects of perceived musical structure influence the predictions of mental timekeeping mechanisms, thereby creating a subliminal warping of experienced time.
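The one-tap compensation finding can be illustrated with a first-order linear phase-correction model of tapping, a standard idealization in the synchronization literature. The gain value and onset times below are illustrative, not fitted to the study's data.

```python
# Sketch: first-order phase correction in sensorimotor synchronization.
# alpha is the correction gain (alpha = 1 means full correction).

def synchronize(stimulus_onsets, period, alpha=1.0):
    """Return tap times produced by linear phase correction."""
    taps = [stimulus_onsets[0]]            # first tap assumed on time
    for s in stimulus_onsets[:-1]:
        asynchrony = taps[-1] - s          # tap time minus stimulus onset
        taps.append(taps[-1] + period - alpha * asynchrony)
    return taps

# Metronomic stimulus (period 0.5 s) with the 3rd IOI lengthened by 100 ms.
period = 0.5
onsets = [0.0, 0.5, 1.0, 1.6, 2.1, 2.6]
taps = synchronize(onsets, period)
asyncs = [round(t - s, 3) for t, s in zip(taps, onsets)]
print(asyncs)  # → [0.0, 0.0, 0.0, -0.1, 0.0, 0.0]
```

The tap following the lengthened interval arrives 100 ms early (one negative asynchrony), and with full correction the error is gone by the next tap: exactly the "almost perfect compensation on the next tap" pattern the abstract describes.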

11.
We investigated the effects of selective attention and musical training on the processing of harmonic expectations. In Experiment 1, participants with and without musical training were required to respond to the contour of melodies as they were presented with chord progressions that were highly expected, slightly unexpected, or extremely unexpected. Reaction time and accuracy results showed that when attention was focused on the melody, musically trained participants were still sensitive to different harmonic expectations, whereas participants with no musical training were undifferentiated across expectation conditions. In Experiment 2, participants were required to listen holistically to the entire chord progression and to rate their preference for each chord progression. Results from preference ratings showed that all the participants, with or without musical training, were sensitive to manipulations of harmonic expectations. Experiments 3 and 4 showed that changing the speed of presentation of chord progressions did not affect the pattern of results. The four experiments together highlight the importance of attentional focus in musical training, especially as it relates to the processing of harmonic expectations.

12.
Two pianists and one percussionist performed a number of notated rhythms on the piano and on the side drum or the bongo drum. The tape recordings of the performances were analyzed for durations and amplitudes by an analyzer for monophonic sound sequences. Several characteristic deviations from the norms implied by the musical notation appeared. The recordings were used as stimuli in experiments on rhythm experience described elsewhere.

13.
When listening to a melody, we are often able to anticipate not only what tonal intervals will occur next but also when in time these will appear. The experiments reported here were carried out to investigate what types of structural relations support the generation of temporal expectancies in the context of a melody recognition task. The strategy was to present subjects with a set of folk tunes in which temporal accents (i.e., notes with a prolonged duration) always occurred in the first half of a melody, so that expectancies, if generated, could carry over to an isochronous sequence of notes in the latter half of the melody. The ability to detect deviant pitch changes in the final variation as a function of rhythmic context was then evaluated. Accuracy and reaction time data from Experiment 1 indicated that expectancy formation jointly depends on an invariant periodicity of temporal accentuation and the attentional highlighting of certain melodic relations (i.e., phrase ending points). In Experiment 2, once these joint expectancies were generated, the temporal dimension had a greater facilitating effect upon melody recognition than did the melodic one. These results are discussed in terms of their implications for the perceptual processing of musical events.

14.
Music presents information both sequentially, in the form of musical phrases, and simultaneously, in the form of chord structure. The ability to abstract musical structure presented sequentially and simultaneously was investigated using modified versions of the Bransford and Franks (1971) paradigm. Listeners heard subsets of musical ideas. The abstraction hypothesis predicted (1) false recognition of novel instances of the abstracted musical idea, (2) increasing "recognition" confidence as recognition items approximate the complete musical idea, and (3) correct rejection of "noncases," which deviate from the acquired musical structure. Experiment 1 investigated sequential abstraction by using four-phrase folk melodies as musical ideas. Predictions 1 and 3 were confirmed, but the false recognition rate decreased as the number of phrases increased. Listeners were sensitive to improper combinations of phrases and to novel melodies different from melodies presented during acquisition. Experiment 2 investigated simultaneous abstraction using four-voice Bach chorales as musical ideas. Listeners spontaneously integrated chorale subsets into holistic musical ideas. Musically trained listeners were better than naive listeners at identifying noncases.

15.
In two experiments, the performance of listeners with different amounts of musical training (high skill, low skill) was examined in a two-alternative forced choice time-detection task involving simple five-cycle acoustic sequences. In each of a series of trials, all listeners determined which of two pattern cycles contained a small time change. Sequence context was also varied (regular vs. irregular timing). In Experiment 1, in which context was manipulated as a between-subjects variable, high-skill listeners performed significantly better than low-skill listeners only with regular patterns. In Experiment 2, in which context was manipulated as a within-subjects variable, the only significant source of variance was pattern context: All listeners were better at detecting time changes in regular than in irregular patterns. The results are considered in light of several hypotheses, including the expectancy/contrast model (Jones & Boltz, 1989).
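Sensitivity in a two-alternative forced choice (2AFC) task of this kind is conventionally summarized as d'. A minimal sketch of the standard conversion, assuming an unbiased observer (the trial counts below are invented, not the study's data):

```python
# Sketch: proportion correct -> d' for an unbiased 2AFC observer.
from statistics import NormalDist

def dprime_2afc(n_correct, n_trials):
    """d' = sqrt(2) * z(proportion correct) for unbiased 2AFC."""
    pc = n_correct / n_trials
    return 2 ** 0.5 * NormalDist().inv_cdf(pc)

# e.g. a listener correct on 42 of 50 regular-context trials
print(round(dprime_2afc(42, 50), 2))
```

Chance performance (25 of 50) maps to d' = 0, so group differences like the high-skill advantage on regular patterns can be expressed on a common sensitivity scale.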

16.
Expression in musical performance is largely communicated by the manner in which a piece is played: interpretive aspects that supplement the written score. In piano performance, timing and amplitude are the principal parameters the performer can vary. We examined the way in which such variation serves to communicate emotion by manipulating timing and amplitude in performances of classical piano pieces. Over three experiments, listeners rated the emotional expressivity of performances and their manipulated versions. In Experiments 1 and 2, timing and amplitude information were covaried; judgments were monotonically decreasing with performance variability, demonstrating that the rank ordering of acoustical manipulations was captured by participants' responses. Further, participants' judgments formed an S-shaped (sigmoidal) function in which greater sensitivity was seen for musical manipulations in the middle of the range than at the extremes. In Experiment 3, timing and amplitude were manipulated independently; timing variation was found to provide more expressive information than did amplitude. Across all three experiments, listeners demonstrated sensitivity to the expressive cues we manipulated, with sensitivity increasing as a function of musical experience.

17.
Continuous noise is played in many open-plan offices to partially mask ambient sounds, in particular background speech, with the aim of reducing both the detrimental impact of background sounds on cognitive performance and the subjectively perceived disturbance. Our experiments explored whether background music can achieve the same effects. Besides collecting subjective rating data, we tested cognitive performance using verbal serial recall. This is the standard task for exploring verbal short-term memory, which is central to human information processing. Either staccato music, legato music or continuous noise was superimposed on office noise. In Experiment 1 (N = 30), only continuous noise reduced the detrimental impact of office noise significantly. Legato music did not qualify in this respect although it did not diminish cognitive performance when presented in isolation in Experiment 2 (N = 20). Subjective ratings in both experiments revealed that most participants would prefer legato music to continuous noise in office environments. Copyright © 2008 John Wiley & Sons, Ltd.

18.
The detectability of a deviation from metronomic timing--of a small local increment in interonset interval (IOI) duration--in a musical excerpt is subject to positional biases, or "timing expectations," that are closely related to the expressive timing (sequence of IOI durations) typically produced by musicians in performance (Repp, 1992b, 1998c, 1998d). Experiment 1 replicated this finding with some changes in procedure and showed that the perception-performance correlation is not the result of formal musical training or availability of a musical score. Experiments 2 and 3 used a synchronization task to examine the hypothesis that participants' perceptual timing expectations are due to systematic modulations in the period of a mental timekeeper that also controls perceptual-motor coordination. Indeed, there was systematic variation in the asynchronies between taps and metronomically timed musical event onsets, and this variation was correlated both with the variations in IOI increment detectability (Experiment 1) and with the typical expressive timing pattern in performance. When the music contained local IOI increments (Experiment 2), they were almost perfectly compensated for on the next tap, regardless of their detectability in Experiment 1, which suggests a perceptual-motor feedback mechanism that is sensitive to subthreshold timing deviations. Overall, the results suggest that aspects of perceived musical structure influence the predictions of mental timekeeping mechanisms, thereby creating a subliminal warping of experienced time.

19.
The temporal coordination of hand and foot actions in piano performance is an interesting instance of highly practiced, perceptually guided complex motor behavior. To gain some insight into the nature of this coordination, ten pianists were asked to play two excerpts from the piano literature that required repeated use of the damper pedal to connect successive chords. Each excerpt was played at three prescribed tempos on a Yamaha Disklavier and was recorded in MIDI format. The question of interest was whether and how changes in tempo would affect the timing of pedal releases and depressions within the periods defined by successive manual chord onsets. Theoretical possibilities ranged from absolute invariance (variable phase relationships) to relative invariance of pedal timing (constant phase relationships). The results show that, typically, the timing of pedal actions is neither absolutely nor relatively invariant: As the tempo increases, both pedal releases and depressions usually occur a little sooner and pedal changes (release-depression sequences) are executed a little more quickly, but these effects are proportionally smaller than the changes in manual (and pedal) period duration. Since this may be due to unequal changes in peripheral hand and foot kinematics with tempo, it remains possible that there is invariance of either kind at the level of central motor commands. However, it is the peripheral timing that produces the acoustic consequences musicians try to achieve.
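The absolute-versus-relative invariance question comes down to where a pedal event falls within the inter-chord period. A minimal sketch of that measure (the MIDI times below are hypothetical, not the study's data):

```python
# Sketch: relative phase of a pedal event within the inter-chord period.
# Phase 0.0 = at the current chord onset, 1.0 = at the next chord onset.

def relative_phase(event_time, chord_onset, next_chord_onset):
    """Position of an event within the period between two chord onsets."""
    return (event_time - chord_onset) / (next_chord_onset - chord_onset)

# Hypothetical MIDI times (s): chords at 0.0 and 0.8, pedal change at 0.12.
print(relative_phase(0.12, 0.0, 0.8))  # ≈ 0.15
```

Relative invariance would mean this phase stays constant across tempos; absolute invariance would mean the raw latency (here 120 ms) stays constant. The study's finding sits between the two.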

20.
Musically trained and untrained listeners were required to listen to 27 musical excerpts and to group those that conveyed a similar emotional meaning (Experiment 1). The groupings were transformed into a matrix of emotional dissimilarity that was analysed through multidimensional scaling methods (MDS). A 3-dimensional space was found to provide a good fit of the data, with arousal and emotional valence as the primary dimensions. Experiments 2 and 3 confirmed the consistency of this 3-dimensional space using excerpts of only 1 second duration. The overall findings indicate that emotional responses to music are very stable within and between participants, and are weakly influenced by musical expertise and excerpt duration. These findings are discussed in light of a cognitive account of musical emotion.
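The step from free groupings to a dissimilarity matrix (the input to MDS) is straightforward to sketch: for each pair of excerpts, count the fraction of listeners who did not place them in the same group. The groupings below are invented; the study used 27 excerpts and its own aggregation procedure.

```python
# Sketch: build an emotional-dissimilarity matrix from listeners'
# free groupings of excerpts. Groupings here are hypothetical.

def dissimilarity(groupings, items):
    """Pairwise fraction of listeners who did NOT group i with j."""
    n = len(groupings)
    d = {}
    for i in items:
        for j in items:
            together = sum(
                any(i in g and j in g for g in listener)
                for listener in groupings
            )
            d[(i, j)] = 1 - together / n
    return d

items = ["A", "B", "C", "D"]
groupings = [                      # one list of groups per listener
    [["A", "B"], ["C", "D"]],
    [["A", "B", "C"], ["D"]],
    [["A", "B"], ["C"], ["D"]],
]
print(dissimilarity(groupings, items)[("A", "B")])  # all 3 grouped A with B
```

The resulting matrix is symmetric with zeros on the diagonal and can be passed directly to a metric or non-metric MDS routine to recover the arousal/valence space described above.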


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号