Similar articles
20 similar articles found (search time: 31 ms)
1.
More than 50 years after the appearance of the motor theory of speech perception, it is timely to evaluate its three main claims that (1) speech processing is special, (2) perceiving speech is perceiving gestures, and (3) the motor system is recruited for perceiving speech. We argue that to the extent that it can be evaluated, the first claim is likely false. As for the second claim, we review findings that support it and argue that although each of these findings may be explained by alternative accounts, the claim provides a single coherent account. As for the third claim, we review findings in the literature that support it at different levels of generality and argue that the claim anticipated a theme that has become widespread in cognitive science.

2.
Kerzel and Bekkering (2000) found perceptuomotor compatibility effects between spoken syllables and visible speech gestures and interpreted them as evidence in favor of the distinctive claim of the motor theory of speech perception that the motor system is recruited for perceiving speech. We present three experiments aimed at testing this interpretation. In Experiment 1, we replicated the original findings by Kerzel and Bekkering but with audible syllables. In Experiments 2 and 3, we tested the results of Experiment 1 under more stringent conditions, with different materials and different experimental designs. In all of our experiments, we found the same result: Perceiving syllables affects uttering syllables. The result is consistent both with the results of a number of other behavioral and neural studies related to speech and with more general findings of perceptuomotor interactions. Taken together, these studies provide evidence in support of the motor theory claim that the motor system is recruited for perceiving speech.

3.
This study looks at whether there is a relationship between mother and infant gesture production. Specifically, it addresses the extent of articulation in the maternal gesture repertoire and how closely it supports the infant production of gestures. Eight Spanish mothers and their 1‐ and 2‐year‐old babies were studied during 1 year of observations. Maternal and child verbal production, gestures and actions were recorded at their homes on five occasions while performing daily routines. Results indicated that mother and child deictic gestures (pointing and instrumental) and representational gestures (symbolic and social) were very similar at each age group and did not decline across groups. Overall, deictic gestures were more frequent than representational gestures. Maternal adaptation to developmental changes is specific for gesturing but not for acting. Maternal and child speech were related positively to mother and child pointing and representational gestures, and negatively to mother and child instrumental gestures. Mother and child instrumental gestures were positively related to action production, after maternal and child speech was partialled out. Thus, language plays an important role for dyadic communicative activities (gesture–gesture relations) but not for dyadic motor activities (gesture–action relations). Finally, a comparison of the growth curves across sessions showed a closer correspondence for mother–child deictic gestures than for representational gestures. Overall, the results point to the existence of an articulated maternal gesture input that closely supports the child gesture production.

4.
Do the gestures that speakers produce while talking significantly benefit listeners' comprehension of the message? This question has been the topic of many research studies over the past 35 years, and there has been little consensus. The present meta-analysis examined the effect sizes from 63 samples in which listeners' understanding of a message was compared when speech was presented alone versus when speech was presented with gestures. It was found that across samples, gestures do provide a significant, moderate benefit to communication. Furthermore, the magnitude of this effect is moderated by 3 factors. First, effects of gesture differ as a function of gesture topic, such that gestures that depict motor actions are more communicative than those that depict abstract topics. Second, effects of gesture on communication are larger when the gestures are not completely redundant with the accompanying speech; effects are smaller when there is more overlap between the information conveyed in the 2 modalities. Third, the size of the effect of gesture is dependent on the age of the listeners, such that children benefit more from gestures than do adults. Remaining questions for future research are highlighted.
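To make the pooling step concrete, the following is a minimal, hypothetical sketch of inverse-variance weighting, a standard way of combining effect sizes from independent samples in a meta-analysis; the function name, effect sizes, and moderator values are invented for illustration and are not the data or analysis of the study described above.

import numpy as np

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance weighted mean effect size and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    d = np.asarray(effects, dtype=float)
    return np.sum(w * d) / np.sum(w), 1.0 / np.sum(w)

# Hypothetical per-sample effect sizes (e.g., Hedges' g) and their variances
effects = np.array([0.45, 0.30, 0.62, 0.10, 0.55])
variances = np.array([0.02, 0.04, 0.03, 0.05, 0.02])
age_group = np.array(["child", "adult", "child", "adult", "child"])

overall, var = pooled_effect(effects, variances)
print(f"overall g = {overall:.2f} (SE {np.sqrt(var):.2f})")

# One simple way to examine a moderator such as listener age:
# pool separately within each subgroup and compare the pooled means.
for group in ("child", "adult"):
    m, v = pooled_effect(effects[age_group == group], variances[age_group == group])
    print(f"{group}: g = {m:.2f} (SE {np.sqrt(v):.2f})")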

5.
Gesture–speech synchrony re‐stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed‐loop re‐afferent feedback to maintain synchrony with speech. In the current pre‐registered within‐subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending McNeill's original results, we obtain evidence that (a) gesture‐speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) that gesture and speech variably entrain to the external auditory delay as indicated by a consistent shift in gesture‐speech synchrony offsets (i.e., entrainment effect), and (c) that the coupling effect and the entrainment effect are co‐dependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.
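A gesture–speech synchrony offset of the kind referred to above is often estimated as the lag at which a gesture time series (e.g., hand speed from motion tracking) and a speech time series (e.g., the amplitude envelope) are maximally cross-correlated. The sketch below is a minimal, hypothetical illustration of that idea, not the preregistered analysis of the study; all names and parameters are assumptions.

import numpy as np

def synchrony_offset(gesture, speech, sample_rate_hz, max_lag_s=0.5):
    """Lag (in seconds) at which the gesture series best aligns with the speech series."""
    # z-score both series so the correlation is scale-free
    g = (gesture - np.mean(gesture)) / np.std(gesture)
    s = (speech - np.mean(speech)) / np.std(speech)
    max_lag = int(max_lag_s * sample_rate_hz)
    lags = np.arange(-max_lag, max_lag + 1)
    # correlation of g[i] with s[i + k] for each candidate lag k
    corr = [np.mean(g[max(0, -k):len(g) - max(0, k)] *
                    s[max(0, k):len(s) - max(0, -k)]) for k in lags]
    # a positive value means the gesture series leads the speech series (under this convention)
    return lags[int(np.argmax(corr))] / sample_rate_hz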

6.
Speech directed towards young children ("motherese") is subject to consistent systematic modifications. Recent research suggests that gesture directed towards young children is similarly modified (gesturese). It has been suggested that gesturese supports speech, thereby scaffolding communicative development (the facilitative interactional theory). Alternatively, maternal gestural modification may be a consequence of the semantic simplicity of interaction with infants (the interactional artefact theory). The gesture patterns of 12 English mothers were observed with their 20-month-old infants while engaged in two tasks, free play and a counting task, designed to differentially tap into scaffolding. Gestures accounted for 29% of total maternal communicative behaviour. English mothers employed mainly concrete deictic gestures (e.g. pointing) that supported speech by disambiguating and emphasizing the verbal utterance. Maternal gesture rate and informational gesture-speech relationship were consistent across tasks, supporting the interactional artefact theory. This distinctive pattern of gesture use for the English mothers was similar to that reported for American and Italian mothers, providing support for universality. Child-directed gestures are not redundant in relation to child-directed speech but rather both are used by mothers to support their communicative acts with infants.

7.
Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context.

8.
Arm movements can influence language comprehension much as semantics can influence arm movement planning. Arm movement itself can be used as a linguistic signal. We reviewed neurophysiological and behavioural evidence that manual gestures and vocal language share the same control system. Studies of the premotor cortex in primates, including humans, and in particular of the so-called "mirror system", suggest the existence of a dual hand/mouth motor command system involved in ingestion activities. This may be the platform on which a combined manual and vocal communication system was constructed. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing transitive hand actions, and manual actions play an important role in the development of speech, from the babbling stage onwards. Behavioural data also show reciprocal influence between word and symbolic gestures. Neuroimaging and repetitive transcranial magnetic stimulation (rTMS) data suggest that the system governing both speech and gesture is located in Broca's area. In general, the presented data support the hypothesis that the hand motor-control system is involved in higher order cognition.

9.
We propose a theory of how the speech gesture determines change in a functionally relevant variable of vocal tract state (e.g., constriction degree). A core postulate of the theory is that the gesture determines how the variable evolves in time independent of any executive timekeeper. That is, the theory involves intrinsic timing of speech gestures. We compare the theory against others in which an executive timekeeper determines change in vocal tract state. Theories that employ an executive timekeeper have been proposed to correct for disparities between theoretically predicted and experimentally observed velocity profiles. Such theories of extrinsic timing make the gesture a nonautonomous dynamical system. For a nonautonomous dynamical system, the change in state depends not just on the state but also on time. We show that this nonautonomous extension makes surprisingly weak kinematic predictions both qualitatively and quantitatively. We propose instead that the gesture is a theoretically simpler nonlinear autonomous dynamical system. For the proposed nonlinear autonomous dynamical system, the change in state depends nonlinearly on the state and does not depend on time. This new theory provides formal expression to the notion of intrinsic timing. Furthermore, it predicts experimentally observed relations among kinematic variables.
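The autonomous/nonautonomous distinction at the heart of this proposal can be stated compactly. The schematic equations below are a gloss on the abstract, not the paper's specific model (whose nonlinear form is not given here); the damped mass-spring equation is the familiar linear point-attractor gesture model often used as a reference point in this literature.

\begin{align}
  \dot{x} &= f(x)   && \text{autonomous: the change in state depends only on the state } x,\\
  \dot{x} &= f(x,t) && \text{nonautonomous: the change in state also depends on time } t.
\end{align}

For comparison, the linear point-attractor (damped mass-spring) gesture model,
\begin{equation}
  m\ddot{x} + b\dot{x} + k\,(x - x_{0}) = 0,
\end{equation}
is autonomous but linear; the disparity between its predicted velocity profiles and observed ones is what extrinsic-timing corrections were meant to repair, whereas the abstract instead proposes a nonlinear autonomous system \(\dot{x} = f(x)\).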

10.
In accord with a proposed innate link between speech perception and production (e.g., motor theory), this study provides compelling evidence for the inhibition of stuttering events in people who stutter prior to the initiation of the intended speech act, via both the perception and the production of speech gestures. Stuttering frequency during reading was reduced in 10 adults who stutter by approximately 40% in three of four experimental conditions: (1) following passive audiovisual presentation (i.e., viewing and hearing) of another person producing pseudostuttering (stutter-like syllabic repetitions) and following active shadowing of both (2) pseudostuttered and (3) fluent speech. Stuttering was not inhibited during reading following passive audiovisual presentation of fluent speech. Syllabic repetitions can inhibit stuttering both when produced and when perceived, and we suggest that these elementary stuttering forms may serve as compensatory speech gestures for releasing involuntary stuttering blocks by engaging mirror neuronal systems that are predisposed for fluent gestural imitation.

11.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

12.
Spontaneous gesture frequently accompanies speech. The question is why. In these studies, we tested two non‐mutually exclusive possibilities. First, speakers may gesture simply because they see others gesture and learn from this model to move their hands as they talk. We tested this hypothesis by examining spontaneous communication in congenitally blind children and adolescents. Second, speakers may gesture because they recognize that gestures can be useful to the listener. We tested this hypothesis by examining whether speakers gesture even when communicating with a blind listener who is unable to profit from the information that the hands convey. We found that congenitally blind speakers, who had never seen gestures, nevertheless gestured as they spoke, conveying the same information and producing the same range of gesture forms as sighted speakers. Moreover, blind speakers gestured even when interacting with another blind individual who could not have benefited from the information contained in those gestures. These findings underscore the robustness of gesture in talk and suggest that the gestures that co‐occur with speech may serve a function for the speaker as well as for the listener.

13.
Language can be understood as an embodied system, expressible as gestures. Perception of these gestures depends on the "mirror system," first discovered in monkeys, in which the same neural elements respond both when the animal makes a movement and when it perceives the same movement made by others. This system allows gestures to be understood in terms of how they are produced, as in the so-called motor theory of speech perception. I argue that human speech evolved from manual gestures, with vocal gestures being gradually incorporated into the mirror system in the course of hominin evolution. Speech may have become the dominant mode only with the emergence of Homo sapiens some 170,000 years ago, although language as a relatively complex syntactic system probably emerged over the past 2 million years, initially as a predominantly manual system. Despite the present-day dominance of speech, manual gestures accompany speech, and visuomanual forms of language persist in signed languages of the deaf, in handwriting, and even in such forms as texting.

14.
Visible embodiment: Gestures as simulated action
Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.

15.
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory; instead, iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We found compensatory use of gesture only in the people with aphasia, whereas the people without language impairment made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.

16.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures' and speech's integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants' speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
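Multifractal detrended fluctuation analysis, mentioned above, extends ordinary (monofractal) DFA by replacing the plain root-mean-square average over windows with q-order averages. The sketch below shows only the core monofractal idea of local detrending followed by log-log scaling; it is an illustrative assumption, not the study's analysis pipeline, and all names and example series are hypothetical.

import numpy as np

def dfa_exponent(x, scales):
    """Estimate the DFA scaling exponent of a 1-D time series x."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrate the mean-centred series
    fluctuations = []
    for s in scales:
        rms_per_window = []
        for v in range(len(profile) // s):
            segment = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, segment, 1), t)   # local linear detrending
            rms_per_window.append(np.sqrt(np.mean((segment - trend) ** 2)))
        # MFDFA would take q-order means here instead of the plain mean
        fluctuations.append(np.mean(rms_per_window))
    # scaling exponent = slope of log F(s) against log s
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# Hypothetical usage: compare the scaling of a gesture series (e.g., hand speed)
# with that of a speech series (e.g., amplitude envelope)
rng = np.random.default_rng(0)
gesture, speech = rng.standard_normal(4096), rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
print(dfa_exponent(gesture, scales), dfa_exponent(speech, scales))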

17.
Intentional and attentional dynamics of speech-hand coordination
Interest is rapidly growing in the hypothesis that natural language emerged from a more primitive set of linguistic acts based primarily on manual activity and hand gestures. Increasingly, researchers are investigating how hemispheric asymmetries are related to attentional and manual asymmetries (i.e., handedness). Both speech perception and production have origins in the dynamical generative movements of the vocal tract known as articulatory gestures. Thus, the notion of a "gesture" can be extended to both hand movements and speech articulation. The generative actions of the hands and vocal tract can therefore provide a basis for the (direct) perception of linguistic acts. Such gestures are best described using the methods of dynamical systems analysis since both perception and production can be described using the same commensurate language. Experiments were conducted using a phase transition paradigm to examine the coordination of speech-hand gestures in both left- and right-handed individuals. Results address coordination (in-phase vs. anti-phase), hand (left vs. right), lateralization (left vs. right hemisphere), focus of attention (speech vs. tapping), and how dynamical constraints provide a foundation for human communicative acts. Predictions from the asymmetric HKB equation confirm the attentional basis of functional asymmetry. Of significance is a new understanding of the role of perceived synchrony (p-centres) during intentional cases of gestural coordination.
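The asymmetric HKB model referred to above is usually written as a relative-phase equation; the following is the standard form from the coordination-dynamics literature, reproduced here as background rather than taken from the abstract itself.

\begin{equation}
  \dot{\phi} \;=\; \Delta\omega \;-\; a\,\sin\phi \;-\; 2b\,\sin 2\phi ,
\end{equation}

where \(\phi\) is the relative phase between the two gestural components (here, speech and finger tapping), \(\Delta\omega\) is the difference between their natural frequencies (the asymmetry, or broken-symmetry, term), and the ratio \(b/a\) governs the relative stability of in-phase (\(\phi = 0\)) and anti-phase (\(\phi = \pi\)) coordination. As \(b/a\) shrinks with increasing movement rate, anti-phase coordination loses stability first, which is what the phase transition paradigm exploits.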

18.
Neural correlates of bimodal speech and gesture comprehension
The present study examined the neural correlates of speech and hand gesture comprehension in a naturalistic context. Fifteen participants watched audiovisual segments of speech and gesture while event-related potentials (ERPs) were recorded to the speech. Gesture influenced the ERPs to the speech. Specifically, there was a right-lateralized N400 effect, reflecting semantic integration, when gestures mismatched versus matched the speech. In addition, early sensory components in bilateral occipital and frontal sites differentiated speech accompanied by matching versus non-matching gestures. These results suggest that hand gestures may be integrated with speech at early and late stages of language processing.

19.
The gestures that spontaneously occur in communicative contexts have been shown to offer insight into a child’s thoughts. The information gesture conveys about what is on a child’s mind will, of course, only be accessible to a communication partner if that partner can interpret gesture. Adults were asked to observe a series of children who participated ‘live’ in a set of conservation tasks and gestured spontaneously while performing the tasks. Adults were able to glean substantive information from the children’s gestures, information that was not found anywhere in their speech. ‘Gesture-reading’ did, however, have a cost – if gesture conveyed different information from speech, it hindered the listener’s ability to identify the message in speech. Thus, ordinary listeners can and do extract information from a child’s gestures, even gestures that are unedited and fleeting.

20.
The design of effective communications depends upon an adequate model of the communication process. The traditional model is that speech conveys semantic information and bodily movement conveys information about emotion and interpersonal attitudes. But McNeill (2000) argues that this model is fundamentally wrong and that some bodily movements, namely spontaneous hand movements generated during talk (iconic gestures), are integral to semantic communication. But can we increase the effectiveness of communication using this new theory? Focusing on advertising, we found that advertisements in which the message was split between speech and iconic gesture (possible on TV) were significantly more effective than advertisements in which meaning resided purely in speech or language (radio/newspaper). We also found that the significant differences in communicative effectiveness were maintained across five consecutive trials. We compared the communicative power of professionally made TV advertisements in which a spoken message was accompanied either by iconic gestures or by pictorial images, and found the iconic gestures to be more effective. We hypothesized that iconic gestures are so effective because they illustrate and isolate just the core semantic properties of a product. This research suggests that TV advertisements can be made more effective by incorporating iconic gestures with exactly the right temporal and semantic properties.
