Similar articles
Found 20 similar articles (search took 437 ms).
1.
Five experiments explored whether fluency in musical sequence production relies on matches between the contents of auditory feedback and the planned outcomes of actions. Participants performed short melodies from memory on a keyboard while musical pitches that sounded in synchrony with each keypress (feedback contents) were altered. Results indicated that altering pitch contents can disrupt production, but only when altered pitches form a sequence that is structurally similar to the planned sequence. These experiments also addressed the role of musical skill: Experiments 1 and 3 included trained pianists; other experiments included participants with little or no musical training. Results were similar across both groups with respect to the disruptive effects of auditory feedback manipulations. These results support the idea that a common hierarchical representation guides sequences of actions and the perception of event sequences and that this coordination is not acquired from learned associations formed by musical skill acquisition.

2.
When subjects are asked to tap in synchrony to a regular sequence of stimulus events (e.g., clicks), performance is not perfect in that, usually, an anticipation of the tap is observed. The present study examines the influence of temporally displaced auditory feedback on the size of this anticipatory error. Whereas earlier studies have shown that this asynchrony exhibits a linear increase in size as a function of an increasing delay in such additional auditory feedback, this study compared the impact of shifting feedback forward in time (i.e., feedback presented before the tap) with that of delayed auditory feedback. Results showed that the impact of feedback displacement on the amount of asynchrony differed for positive and negative displacements. Delayed feedback led to an increase in asynchrony, whereas negative displacements had (almost) no effect. This finding is related to a model assuming that the various feedback components arising from the tap (tactile, kinesthetic, auditory) are integrated to form one central representation, and that the timing of this central representation arises from a linear combination of the components involved.
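The linear-combination model described in this abstract lends itself to a toy sketch. The weights and the assumption that tactile and kinesthetic feedback arrive at the tap time are illustrative, not values from the study; the point is only that a weighted linear combination of feedback components shifts the central representation of the tap in the direction of the auditory displacement.

```python
# Toy sketch of the linear-combination model: the perceived time of a tap
# is a weighted average of its feedback components (tactile, kinesthetic,
# auditory). All weights are hypothetical, chosen for illustration only.

def perceived_tap_time(tap_time_ms, auditory_shift_ms,
                       w_tactile=0.4, w_kinesthetic=0.3, w_auditory=0.3):
    """Central representation of the tap's time, in ms."""
    tactile = tap_time_ms                          # felt at the tap itself
    kinesthetic = tap_time_ms                      # likewise assumed on time
    auditory = tap_time_ms + auditory_shift_ms     # displaced feedback tone
    return (w_tactile * tactile
            + w_kinesthetic * kinesthetic
            + w_auditory * auditory)

# Delayed feedback (+50 ms) pushes the central representation later, so a
# subject who aligns that representation with the click must tap earlier,
# increasing the anticipatory asynchrony; advanced feedback pulls it earlier.
delayed = perceived_tap_time(0.0, 50.0)
advanced = perceived_tap_time(0.0, -50.0)
```

Note that this symmetric sketch would predict equal and opposite effects of positive and negative displacements, whereas the study found (almost) no effect of negative displacements; capturing that asymmetry would require, e.g., displacement-dependent weights.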

3.
Past research has suggested that the disruptive effect of altered auditory feedback depends on how structurally similar the sequence of feedback events is to the planned sequence of actions. Three experiments pursued one basis for similarity in musical keyboard performance: matches between sequential transitions in spatial targets for movements and the melodic contour of auditory feedback. Trained pianists and musically untrained persons produced simple tonal melodies on a keyboard while hearing feedback sequences that either matched the planned melody or were contour-preserving variations of that melody. Sequence production was disrupted among pianists when feedback events were serially shifted by one event, similarly for shifts of planned melodies and tonal variations but less so for shifts of atonal variations. Nonpianists were less likely to be disrupted by serial shifts of variations but showed similar disruption to pianists for shifts of the planned melody. Thus, transitional properties and tonal schemata may jointly determine perception-action similarity during musical sequence production, and the tendency to generalize from a planned sequence to variations of it may develop with the acquisition of skill.

4.
Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions: normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control.

5.
Mapping musical thought to musical performance
Expressive timing methods are described that map pianists' musical thoughts to sounded performance. In Experiment 1, 6 pianists performed the same musical excerpt on a computer-monitored keyboard. Each performance contained 3 expressive timing patterns: chord asynchronies, rubato patterns, and overlaps (staccato and legato). Each pattern was strongest in experienced pianists' performances and decreased when pianists attempted to play unmusically. In Experiment 2, pianists performed another musical excerpt and notated their musical intentions on an unedited score. The notated interpretations correlated with the presence of the 3 methods: The notated melody preceded other events in chords (chord asynchrony); events notated as phrase boundaries showed the greatest tempo changes (rubato); and the notated melody showed the most consistent amount of overlap between adjacent events (staccato and legato). These results suggest that the mapping of musical thought to musical action is rule-governed and that the same rules can produce different interpretations.

6.
Gesture-speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual-motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture-speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending, McNeill's original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF than under NO DAF (i.e., an increased coupling effect), (b) gesture and speech variably entrain to the external auditory delay, as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., an entrainment effect), and (c) the coupling effect and the entrainment effect are co-dependent. We suggest, therefore, that gesture-speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

7.
The purpose of this study was to investigate the influence of contingent auditory feedback on the development of infant reaching. Eleven full-term infants were observed biweekly from the age of 10 weeks to 16 weeks, and their arm kinematics were recorded. Auditory feedback that was contingent on arm kinematics was provided in the form of: (a) the mother's voice; and (b) musical tones. Results showed that providing auditory feedback (mother's voice or musical tones): (i) increased the amplitude of exploratory arm movements before the onset of reaching; and (ii) increased the number of reaches at the onset of reaching. These results show that infants are able to use contingent auditory feedback to explore the relevant possibilities for action that are subsequently shaped into goal-directed movements.

8.
In 12 tasks, each including 10 repetitions, 6 skilled pianists performed or responded to a musical excerpt. In the first 6 tasks, expressive timing was required; in the last 6 tasks, metronomic timing. The pianists first played the music on a digital piano (Tasks 1 and 7), then played it without auditory feedback (Tasks 2 and 8), then tapped on a response key in synchrony with one of their own performances (Tasks 3 and 9), with an imagined performance (Tasks 4 and 10), with a computer-generated performance (Tasks 5 and 11), and with a computer-generated sequence of clicks (Tasks 6 and 12). The results demonstrated that pianists are capable of generating the expressive timing pattern of their performance in the absence of auditory and kinaesthetic (piano keyboard) feedback. They can also synchronize their finger taps quite well with expressively timed music or clicks (while imagining the music), although they tend to underestimate long interonset intervals and to compensate on the following tap. Expressive timing is thus shown to be generated from an internal representation of the music. In metronomic performance, residual expressive timing effects were evident. Those did not depend on auditory feedback, but they were much reduced or absent when kinaesthetic feedback from the piano keyboard was eliminated. Thus, they seemed to arise from the pianist's physical interaction with the instrument. Systematic timing patterns related to expressive timing were also observed in synchronization with a metronomic computer performance and even in synchronization with metronomic clicks. These results shed light on intentional and unintentional, structurally governed processes of timing control in music performance.

9.
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which the visual speaker's identity could correspond or not correspond to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented near synchrony or with a slight auditory lag. Moreover, when a noncorresponding face was presented with the voice, recognition accuracy suffered, but only at near synchrony to slight auditory lag. These results provide the first evidence for a temporal window for AVI in person recognition, extending from approximately 100 ms auditory lead to 300 ms auditory lag.
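The reported integration window can be expressed as a simple predicate. The window bounds (roughly 100 ms auditory lead to 300 ms auditory lag) are taken from the abstract; the function itself and its sign convention are a hypothetical illustration, not part of the study's analysis.

```python
# Sketch of the audiovisual integration (AVI) window for person
# recognition reported above. Offsets are audio onset minus video onset,
# so negative values mean the audio leads.

AUDITORY_LEAD_LIMIT_MS = -100   # audio up to ~100 ms before video
AUDITORY_LAG_LIMIT_MS = 300     # audio up to ~300 ms after video

def within_avi_window(audio_onset_ms, video_onset_ms):
    """True if the audio-video offset falls inside the AVI window."""
    offset = audio_onset_ms - video_onset_ms
    return AUDITORY_LEAD_LIMIT_MS <= offset <= AUDITORY_LAG_LIMIT_MS

within_avi_window(0, 0)     # synchronous: inside the window
within_avi_window(400, 0)   # 400 ms auditory lag: outside
within_avi_window(0, 200)   # 200 ms auditory lead: outside
```

The asymmetry of the window (more tolerance for auditory lag than lead) mirrors the asymmetry commonly reported for audiovisual speech.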

10.
The present study investigated to what extent sensorimotor synchronization is related to (i) musical specialization, (ii) perceptual discrimination, and (iii) movement trajectory. To this end, musicians with different musical expertise (drummers, professional pianists, amateur pianists, and singers) and non-musicians performed auditory and visual synchronization tasks and a cross-modal temporal discrimination task. During auditory synchronization, drummers performed less variably than amateur pianists, singers, and non-musicians. In the cross-modal discrimination task, drummers showed superior discrimination abilities, which were correlated with synchronization variability as well as with movement trajectory. These data suggest that (i) the type of specialized musical instrument affects synchronization abilities and that (ii) synchronization accuracy is related to perceptual discrimination abilities as well as to (iii) movement trajectory. Since synchronization variability in particular was affected by musical expertise, the present data imply that the instrument practiced shapes the accuracy of timekeeping mechanisms.
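Synchronization variability, the measure on which drummers outperformed the other groups, is conventionally computed as the standard deviation of tap-to-pacing-signal asynchronies. The sketch below illustrates that computation; the tap and pacing times are invented for illustration and are not data from the study.

```python
# Illustrative computation of synchronization variability: the standard
# deviation of signed tap-minus-stimulus asynchronies. Negative
# asynchronies reflect the typical anticipation tendency.
import statistics

def asynchronies(tap_times_ms, pacing_times_ms):
    """Signed tap-minus-stimulus offsets; negative = anticipation."""
    return [t - p for t, p in zip(tap_times_ms, pacing_times_ms)]

def sync_variability(tap_times_ms, pacing_times_ms):
    """Standard deviation of the asynchronies (lower = more stable)."""
    return statistics.stdev(asynchronies(tap_times_ms, pacing_times_ms))

pacing = [500, 1000, 1500, 2000]          # isochronous pacing signal
steady = [480, 985, 1478, 1982]           # consistent ~-18 ms anticipation
variable = [460, 1010, 1455, 2005]        # similar mean, larger spread
assert sync_variability(steady, pacing) < sync_variability(variable, pacing)
```

Note that a tapper can have a large mean asynchrony yet low variability; the study's group differences concern the spread, not the mean.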

12.
Research on the effects of context and task on learning and memory has included approaches that emphasize processes during learning (e.g., Craik & Tulving, 1975) and approaches that emphasize a match of conditions during learning with conditions during a later test of memory (e.g., Morris, Bransford, & Franks, 1977; Proteau, 1992; Tulving & Thomson, 1973). We investigated the effects of auditory context on learning and retrieval in three experiments on memorized music performance (a form of serial recall). Auditory feedback (presence or absence) was manipulated while pianists learned musical pieces from notation and when they later played the pieces from memory. Auditory feedback during learning significantly improved later recall. However, auditory feedback at test did not significantly affect recall, nor was there an interaction between conditions at learning and test. Auditory feedback in music performance appears to be a contextual factor that affects learning but is relatively independent of retrieval conditions.

13.
Recent literature suggests that voice recognition runs in parallel to face recognition, which predicts that voices should prime faces and faces should prime voices. A traditional associative priming paradigm was used in two studies to explore within-modality and cross-modality priming. In the within-modality condition, where both prime and target were faces, analysis indicated the expected associative priming effect: The familiarity decision to the target celebrity was made more quickly when preceded by a semantically related prime celebrity than when preceded by an unrelated prime celebrity. In the cross-modality condition, where a voice prime preceded a face target, analysis indicated no associative priming when a 3-s stimulus onset asynchrony (SOA) was used. However, when a longer SOA was used, providing time for robust recognition of the prime, significant cross-modality priming emerged. These data are explored within the context of a unified account of face and voice recognition in which voice processing is weaker than face processing.

14.
This article reports research employing a quantitative approach to investigate the specific cognitive processes adopted and the musical abilities required during musical improvisation. Two questionnaires were used: the Improvisation Processes Questionnaire and the Improvisation Abilities Questionnaire. Participants were 76 adult musicians, each with at least two years' improvisation experience. Factor analysis extracted five dimensions for the Improvisation Processes Questionnaire (anticipation, emotive communication, flow, feedback, and use of repertoire) and two dimensions for the Improvisation Abilities Questionnaire (musical practice and basic skills). Data for each of the 5 + 2 factors were subjected to ANOVA, considering the influence of three concurrent variables (instrument played, being or not being skilled at several instruments, and kind of music preferred for performances). Results revealed a significant interaction between instrument played and the dimension basic skills, and between being or not being skilled at several instruments and the dimension flow. Significant Pearson correlations were found between flow and anticipation, flow and musical practice, anticipation and basic skills, repertoire and emotive communication, repertoire and feedback, and musical practice and basic skills. The interactions between the factors and the importance of the dimensions are discussed, also considering how an improviser can improve performance levels.

15.
The present study shows that playing a particular musical instrument influences tuning preference. Violinists (n = 7), pianists (n = 7), and nonmusicians (n = 10) were required to adjust three notes (E, A, and B) in computer-generated, eight-tone ascending and descending diatonic scales of C major. The results indicated that (1) violinists set the three tones closer to Pythagorean intonation than do pianists (p < .01), (2) pianists' settings fit closest to equal-tempered intonation (p < .01), and (3) nonmusicians do not show any preference for a specific intonation model. These findings are consistent with the view that tuning preference is determined by musical experience more than by characteristics of the auditory system. The relevance of these results to theories of cultural conditioning and assessment of tonal perception is discussed.

16.
It is widely believed that the well-adjusted individual has an integrated, coherent and autonomous ‘core self’ or ‘ego identity’. In this paper it is argued that a ‘multi-voiced’ or ‘dialogical self’ provides a better model. In this model the self has no central core; rather, it is the product of alternative and often opposing narrative voices. Each voice has its own life story; each competes with other voices for dominance in thought and action; and each is constituted by a different set of affectively-charged attachments: to people, events, objects and our own bodies. It is argued that by exploring these attachments the dominant narrative voices of the self may be identified. A semi-structured interview protocol, the Personality Web, is introduced as a method for studying the dialogical self. In phase 1, 24 attachments are elicited in four categories: people (6), events (6), places and objects (8), and orientations to body parts (4). During interviewing, the history and meaning of each attachment is explored. In phase 2, participants were asked to group their attachments by strength of association into clusters, and multidimensional scaling was used to map the individual's ‘web’ of attachments. Using a combination of qualitative and quantitative methods, the strategy of clustering attachments was shown to be successful as a means for empirically examining the dialogical self. Two case studies of midlife adults are described to illustrate the arguments and methods proposed. Copyright © 2000 John Wiley & Sons, Ltd.

17.
Bakhtin's dialogical (not dialectical) philosophy of the everyday, double-voiced prosaic and poetic discourse of asymmetrically interrelated, embodied selves, each answerable to others and the world, found liberating wisdom in modern novelizing texts, notably those of Rabelais and Dostoevsky, with the Chalcedonian Christ prototype as background. He suggests how language is used in Christian contexts by attending to different voices in confessional utterances that may include God's voice/an interlocutory infinite “third”—heard in and through others’ voices—without collapsing perspectival pluralism into relativism. Current work on comparative theology, contrasted with old-style comparative religion, echoes his insights.

18.
The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels, whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men’s and women’s voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than were lowered-pitch voices, but only for women’s voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women’s voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input.

19.
Can skilled performers, such as artists or athletes, recognize the products of their own actions? We recorded 12 pianists playing 12 mostly unfamiliar musical excerpts, half of them on a silent keyboard. Several months later, we played these performances back and asked the pianists to use a 5-point scale to rate whether they thought they were the person playing each excerpt (1 = no, 5 = yes). They gave their own performances significantly higher ratings than any other pianist's performances. In two later follow-up tests, we presented edited performances from which differences in tempo, overall dynamic (i.e., intensity) level, and dynamic nuances had been removed. The pianists' ratings did not change significantly, which suggests that the remaining information (expressive timing and articulation) was sufficient for self-recognition. Absence of sound during recording had no significant effect. These results are best explained by the hypothesis that an observer's action system is most strongly activated during perception of self-produced actions.

20.
We investigated source misattributions in the DRM false memory paradigm (Deese, 1959; Roediger & McDermott, 1995). Subjects studied words in one of two voices, manipulated between lists (pure-voice lists) or within a list (mixed-voice lists), and were subsequently given a recognition test with voice-attribution judgements. Experiments 1 and 2 used visual tests. With pure-voice lists (Experiment 1), subjects frequently attributed related lures to the corresponding study voice, despite having the option to not respond. Further, these erroneous attributions remained high with mixed-voice lists (Experiment 2). Thus, even when their related lists were not associated with a particular voice, subjects misattributed the lures to one of the voices. Attributions for studied items were fairly accurate in both cases. Experiments 3 and 4 used auditory tests. With pure-voice lists (Experiment 3), subjects frequently attributed related lures and studied items to the corresponding study voice, regardless of the test voice. In contrast, with mixed-voice lists (Experiment 4), subjects frequently attributed related lures and studied items to the corresponding test voice, regardless of the study voice. These findings indicate that source attributions can be sensitive to voice information provided either at study or at test, even though this information is irrelevant for related lures.
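The structure of the DRM manipulation described above can be sketched compactly: every studied word is an associate of a critical lure that is never presented, and each study list is spoken in one voice (pure-voice) or in alternating voices (mixed-voice). The specific words and voice labels below are a standard textbook example, not the study's materials.

```python
# Sketch of a DRM study list with a voice manipulation. The critical lure
# ("sleep") is never studied, yet is frequently falsely recognized and,
# per the experiments above, misattributed to a study or test voice.
drm_list = {
    "critical_lure": "sleep",
    "studied": ["bed", "rest", "awake", "tired", "dream", "snooze"],
}

def assign_voices(words, voices, mixed):
    """Pure-voice: one voice throughout; mixed-voice: alternating voices."""
    if mixed:
        return [(w, voices[i % len(voices)]) for i, w in enumerate(words)]
    return [(w, voices[0]) for w in words]

pure = assign_voices(drm_list["studied"], ["female", "male"], mixed=False)
mixed = assign_voices(drm_list["studied"], ["female", "male"], mixed=True)
```

The key point of the design is that the lure, never having been spoken at all, has no true source, so any voice attribution to it is necessarily a misattribution.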
