Similar Literature
1.
Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movement, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in the posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp), regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture.

2.
The present study investigates hand choice in iconic gestures that accompany speech. In 10 right-handed subjects, gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.

3.
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech–gesture integration processes.
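To make the additive-versus-interactive contrast concrete: in a crossed two-factor design, effects are additive when the congruency cost is the same under both memory loads, and interactive when the cost changes with load. Below is a minimal Python sketch of such a 2x2 analysis on simulated response times; the data, effect sizes, and variable names are illustrative assumptions, not the authors' materials or code.

```python
# A minimal sketch (not the authors' analysis) of testing an additive vs.
# interactive pattern: a 2x2 ANOVA on response times with discourse
# congruency and concurrent WM load as factors. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 40  # hypothetical trials per cell

rows = []
for congruency in ("congruent", "incongruent"):
    for load in ("low", "high"):
        rt = 650.0                                  # hypothetical baseline RT (ms)
        rt += 60.0 if congruency == "incongruent" else 0.0
        rt += 40.0 if load == "high" else 0.0
        # An interactive pattern (like the visuo-spatial load result) adds an
        # extra cost only in the incongruent/high-load cell:
        rt += 50.0 if (congruency, load) == ("incongruent", "high") else 0.0
        rows += [{"congruency": congruency, "load": load,
                  "rt": rt + rng.normal(0.0, 80.0)} for _ in range(n)]

model = smf.ols("rt ~ C(congruency) * C(load)", data=pd.DataFrame(rows)).fit()
# A significant congruency:load term indicates an interactive (non-additive)
# pattern; its absence is consistent with purely additive effects.
print(anova_lm(model, typ=2))
```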

4.
The ubiquitous human practice of spontaneously gesturing while speaking demonstrates the embodiment, embeddedness, and sociality of cognition. The present essay takes gestural practice to be a paradigmatic example of a more general claim: human cognition is social insofar as our embedded, intelligent, and interacting bodies select and construct meaning in a way that is intersubjectively constrained and defeasible. Spontaneous co-speech gesture is markedly interesting because it at once confirms embodied aspects of linguistic meaning-making that formalist and linguistic turn-type philosophical approaches fail to appreciate, and it also foregrounds intersubjectivity as an inherent and inherently normative dimension of communicative action. Co-speech hand gestures, as linguistically meaningful speech acts, demonstrate both sedimentation and spontaneity (in the sense of Maurice Merleau-Ponty's dialectic of linguistic expression (2002)), or features of convention and nonconvention in a Gricean sense (1989). Yet neither pragmatic nor classic phenomenological approaches to communication can accommodate the practice of co-speech hand gesturing without some rehabilitation and reorientation. Pragmatic criteria of intersubjectivity, normativity, and rationality need to confront the non-propositional and nonverbal meaning-making of embodied encounters. Phenomenological treatments of expression and intersubjectivity must consider the normative nature of higher-order social practices like language use. Reciprocally critical exchanges between these traditions and gesture studies yield an improved philosophy that treats language as a multi-modal medium for collaborative meaning achievement. The proper paradigm for these discussions is found in enactive approaches to social cognition. Co-speech hand gestures are first and foremost emergent elements of social interaction, not the external whirring of an isolated internal consciousness. In contrast to current literature that frequently presents gestures as an uncontrollable bodily upsurge or an infallible imagistic phenomenon that drives and dances with verbal or "linguistic" convention (McNeill 1992, 2005), I suggest that we study gestures as dynamic, embodied, and shared tools for collaborative sense-making.

5.
李恒 (Li Heng), 《心理科学》 (Journal of Psychological Science), 2016, 39(5): 1080-1085
Early research on the psychological reality of spatiotemporal metaphor representation was widely criticized for being confined to spoken language; the recent rise of gesture and sign language research offers a new perspective and new evidence on this question. On the one hand, gesture can form spatiotemporal metaphors along all three spatial dimensions, a strong response to critics who charged conceptual metaphor theory with circular reasoning between the linguistic and conceptual levels. On the other hand, the distinctive spatial affordances of sign languages and the complexity of their cultural schemas yield more varied forms of spatiotemporal metaphor, providing richer typological evidence for the field. Future research should draw psychology, linguistics, ethnology, and other disciplines together to build a more general and more systematic theoretical framework that encompasses spoken language, gesture, and sign language at once.

6.
In previous analyses of the influence of language on cognition, speech has been the main channel examined. In studies conducted among Yucatec Mayas, efforts to determine the preferred frame of reference in use in this community have failed to reach an agreement (Bohnemeyer & Stolz, 2006; Levinson, 2003 vs. Le Guen, 2006, 2009). This paper argues for a multimodal analysis of language that encompasses gesture as well as speech, and shows that the preferred frame of reference in Yucatec Maya is only detectable through the analysis of co-speech gesture and not through speech alone. A series of experiments compares knowledge of the semantics of spatial terms, performance on nonlinguistic tasks and gestures produced by men and women. The results show a striking gender difference in the knowledge of the semantics of spatial terms, but an equal preference for a geocentric frame of reference in nonverbal tasks. In a localization task, participants used a variety of strategies in their speech, but they all exhibited a systematic preference for a geocentric frame of reference in their gestures.

7.
To address previous controversies over whether hand movements and gestures are linked to mental concepts or solely to the process of speaking, the present study investigates the neuropsychological functions of the entire spectrum of unimanual and bimanual hand movements and gestures, both when they accompany speech and when they are the only means of communication in the absence of speech. The results showed that overall hand movement activity, across all types of hand movements and gestures, remained constant with and without speaking. The analysis of the structure of hand movements showed that execution shifted from in-space hand movements with a phase structure in the condition without speech to more irregular on-body hand movements without a phase structure in the co-speech condition. The gestural analysis revealed that pantomime gestures increase under conditions without speech, whereas emotional motions and subject-oriented actions occur primarily during speaking. The present results provide evidence that overall hand movement activity does not differ between co-speech conditions and conditions without speech, but that the hands adopt different neuropsychological functions. We conclude that the hands primarily externalize mental concepts in conditions without speaking, but that their use shifts to more self-regulation and to endorsing verbal output with emotional connotations when they accompany speech.

8.
Differential activation levels of the two hemispheres due to hemispheric specialization for various linguistic processes might determine hand choice for co-speech gestures. To test this hypothesis, we compared hand choices for gesturing in 20 healthy right-handed participants during explanation of metaphorical vs. non-metaphorical meanings, on the assumption that metaphor explanation enhances the right hemisphere contribution to speech production. Hand choices were analyzed separately for: depictive gestures that imitate action ("character viewpoint gestures," [McNeill, D. (1992). Hand and mind. What gestures reveal about thought. Chicago: University of Chicago Press.]), depictive gestures that express motion, relative locations, and shape ("observer viewpoint gestures"), and "abstract deictic gestures." It was found that the right-hand over left-hand preference was significantly weaker in the metaphor condition than in the non-metaphor conditions for depictive gestures that imitated action. Findings suggest that the activation of the right hemisphere in the metaphor condition reduces the likelihood of left hemisphere generation of gestures that imitate action, thus attenuating the right-hand preference.
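The dependent measure in a design like this reduces to the proportion of right-hand gestures per condition and gesture type. The following Python sketch shows that tabulation on invented counts; it is a hypothetical illustration, not the study's data or analysis.

```python
# A hedged sketch of the tabulation this design implies: the proportion of
# right-hand gestures by condition and gesture type. Counts are invented.
import pandas as pd

counts = pd.DataFrame({
    "condition":    ["metaphor", "metaphor", "non-metaphor", "non-metaphor"],
    "gesture_type": ["character-viewpoint", "observer-viewpoint"] * 2,
    "right_hand":   [22, 30, 41, 33],   # hypothetical gesture counts
    "left_hand":    [18, 12, 9, 11],
})
counts["right_hand_preference"] = counts["right_hand"] / (
    counts["right_hand"] + counts["left_hand"]
)
# The prediction tested above: a weaker right-hand preference for
# character-viewpoint gestures in the metaphor condition.
print(counts[["condition", "gesture_type", "right_hand_preference"]])
```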

9.
Children can understand iconic co-speech gestures that characterize entities by age 3 (Stanfield et al. in J Child Lang 40(2):1–10, 2014; e.g., "I'm drinking" plus tilting a C-shaped hand to the mouth as if holding a glass). In this study, we ask whether children understand co-speech gestures that characterize events as early as they do those for entities, and if so, whether their understanding is influenced by the patterns of gesture production in their native language. We examined this question by studying native English-speaking 3- to 4-year-old children and adults as they completed an iconic co-speech gesture comprehension task involving motion events across two studies. Our results showed that children understood iconic co-speech gestures about events at age 4, marking comprehension of gestures about events one year later than gestures about entities. Our findings also showed that native gesture production patterns influenced children's comprehension of gestures characterizing such events, with better comprehension for gestures that follow language-specific patterns than for those that do not, particularly for manner of motion. Overall, these results highlight early emerging abilities in gesture comprehension about motion events.

10.
Recent research shows that co-speech gestures can influence gesturers' thought. This line of research suggests that the influence of gestures is so strong that it can wash out and reverse an effect of learning. We argue that these findings need a more robust and ecologically valid test, which we provide in this article. Our results support the claim that gestures not only reflect information in our mental representations, but can also influence gesturers' thought by adding action information to one's mental representation during problem solving (Tower of Hanoi). We show, however, that the effect of gestures on subsequent performance is not as strong as previously suggested. Contrary to what previous research indicates, gestures' facilitative effect through learning was not nullified by the potentially interfering effect of incompatible gestures on subsequent problem-solving performance. To conclude, using gestures during problem solving seems to provide more benefits than costs for task performance.
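For readers unfamiliar with the problem-solving task mentioned above: the Tower of Hanoi has a recursive optimal solution of 2^n - 1 moves for n disks. The sketch below illustrates the task itself; it is not part of the study's method.

```python
# A minimal recursive Tower of Hanoi solver, included only to illustrate
# the problem-solving task used in the study (not the study's procedure).
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the smaller disks
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack on top of it

moves: list = []
hanoi(3, "A", "C", "B", moves)
print(moves)                      # 7 moves; the optimum is 2**n - 1
assert len(moves) == 2**3 - 1
```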

11.
Co-speech gestures have been shown to interact with working memory (WM). However, no study has investigated whether there are individual differences in the effect of gestures on WM. Combining a novel gesture/no-gesture task and an operation span task, we examined differences in WM accuracy between individuals who gestured and individuals who did not, in relation to their WM capacity. Our results showed individual differences in the gesture effect on WM. Specifically, only individuals with low WM capacity showed reduced WM accuracy when they did not gesture. Individuals with low WM capacity who did gesture, as well as high-capacity individuals (whether or not they gestured), did not show the effect. Our findings show that the interaction between co-speech gestures and WM depends on an individual's WM capacity.

12.
Speech-associated gestures, Broca's area, and the human mirror system
Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas, because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas, because speech-associated gestures are goal-directed actions that are "mirrored"). We compared the functional connectivity of Broca's area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca's area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.
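The network analysis described here rests on a simple primitive: correlating a seed region's time series with other regions' time series, separately for each condition, and comparing the resulting coupling strengths. Below is a hedged numpy sketch of that primitive on simulated data; the ROI labels, condition names, and time series are assumptions for illustration, not the authors' pipeline.

```python
# A hedged sketch of seed-based functional connectivity: correlate a
# Broca's-area seed with other ROI time series within each condition.
# ROI labels, conditions, and data are simulated and hypothetical.
import numpy as np

rng = np.random.default_rng(1)
rois = ["broca", "stsp", "mtgp", "ipl"]            # illustrative ROI labels
conditions = ["gesture", "grooming", "no_hands"]
n_timepoints = 200

# Simulated ROI-by-time matrices, one per condition (stand-ins for fMRI).
data = {c: rng.normal(size=(len(rois), n_timepoints)) for c in conditions}

def seed_connectivity(ts: np.ndarray, seed: int) -> np.ndarray:
    """Pearson correlation of the seed row with every ROI row."""
    return np.corrcoef(ts)[seed]

for cond in conditions:
    r = seed_connectivity(data[cond], seed=rois.index("broca"))
    # The finding above predicts the weakest Broca-seed coupling in the
    # meaningful-gesture condition and the strongest without gestures.
    print(cond, {roi: round(float(v), 2) for roi, v in zip(rois, r)})
```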

13.
This study looks at whether there is a relationship between mother and infant gesture production. Specifically, it addresses the extent of articulation in the maternal gesture repertoire and how closely it supports the infant production of gestures. Eight Spanish mothers and their 1‐ and 2‐year‐old babies were studied during 1 year of observations. Maternal and child verbal production, gestures and actions were recorded at their homes on five occasions while performing daily routines. Results indicated that mother and child deictic gestures (pointing and instrumental) and representational gestures (symbolic and social) were very similar at each age group and did not decline across groups. Overall, deictic gestures were more frequent than representational gestures. Maternal adaptation to developmental changes is specific for gesturing but not for acting. Maternal and child speech were related positively to mother and child pointing and representational gestures, and negatively to mother and child instrumental gestures. Mother and child instrumental gestures were positively related to action production, after maternal and child speech was partialled out. Thus, language plays an important role for dyadic communicative activities (gesture–gesture relations) but not for dyadic motor activities (gesture–action relations). Finally, a comparison of the growth curves across sessions showed a closer correspondence for mother–child deictic gestures than for representational gestures. Overall, the results point to the existence of an articulated maternal gesture input that closely supports the child gesture production.

14.
Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities—to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under two conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.

15.
Neural correlates of bimodal speech and gesture comprehension
The present study examined the neural correlates of speech and hand gesture comprehension in a naturalistic context. Fifteen participants watched audiovisual segments of speech and gesture while event-related potentials (ERPs) were recorded time-locked to the speech. Gesture influenced these ERPs. Specifically, there was a right-lateralized N400 effect, reflecting semantic integration, when gestures mismatched versus matched the speech. In addition, early sensory components in bilateral occipital and frontal sites differentiated speech accompanied by matching versus non-matching gestures. These results suggest that hand gestures may be integrated with speech at early and late stages of language processing.
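An N400 effect like the one reported is typically quantified as the mean ERP amplitude in a roughly 300-500 ms post-stimulus window, compared between matching and mismatching trials. The Python sketch below illustrates that computation on simulated epochs; the sampling rate, amplitudes, and window are hypothetical, and a real analysis would use recorded EEG (e.g., with MNE-Python).

```python
# A hedged sketch of an N400 mean-amplitude comparison on simulated epochs
# of shape (trials, timepoints). All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
sfreq = 250                                  # hypothetical sampling rate (Hz)
times = np.arange(-0.2, 0.8, 1.0 / sfreq)    # epoch from -200 to 800 ms

def simulate_epochs(n_trials: int, n400_amp: float) -> np.ndarray:
    """Noise plus a negative-going deflection peaking near 400 ms."""
    bump = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return rng.normal(0.0, 2.0, size=(n_trials, times.size)) - bump

match = simulate_epochs(60, n400_amp=1.0)     # smaller N400 (gesture matches)
mismatch = simulate_epochs(60, n400_amp=3.0)  # larger N400 (gesture mismatches)

window = (times >= 0.3) & (times <= 0.5)      # 300-500 ms N400 window
effect = mismatch[:, window].mean() - match[:, window].mean()
print(f"N400 effect (mismatch - match): {effect:.2f} microvolts")
```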

16.
This article aims to show how the relationship between gestures and speech can help us understand the mechanisms of activity development. In a self-confrontation work-analysis setting involving two workers, we examine the repetition of a sweeping gesture made by one professional and repeated by the other. We argue that this type of repetition highlights mechanisms of activity development. Drawing on two extracts from a cross self-confrontation, we analyze the coordination between speech and gestures, and show that mismatches between gestures and speech make it possible to understand how the activity of these professionals develops in this particular context. In conclusion, we advance the hypothesis that this development may be one of the origins of the transformation of action in ordinary work situations.

17.
Language can be understood as an embodied system, expressible as gestures. Perception of these gestures depends on the "mirror system," first discovered in monkeys, in which the same neural elements respond both when the animal makes a movement and when it perceives the same movement made by others. This system allows gestures to be understood in terms of how they are produced, as in the so-called motor theory of speech perception. I argue that human speech evolved from manual gestures, with vocal gestures being gradually incorporated into the mirror system in the course of hominin evolution. Speech may have become the dominant mode only with the emergence of Homo sapiens some 170,000 years ago, although language as a relatively complex syntactic system probably emerged over the past 2 million years, initially as a predominantly manual system. Despite the present-day dominance of speech, manual gestures accompany speech, and visuomanual forms of language persist in signed languages of the deaf, in handwriting, and even in such forms as texting.

