Similar Documents
20 similar documents retrieved.
1.
Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities—to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under 2 conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.

2.
Gesture and early bilingual development
The relationship between speech and gestural proficiency was investigated longitudinally (from 2 years to 3 years 6 months, at 6-month intervals) in 5 French-English bilingual boys with varying proficiency in their 2 languages. Because of their different levels of proficiency in the 2 languages at the same age, these children's data were used to examine the relative contribution of language and cognitive development to gestural development. In terms of rate of gesture production, rate of gesture production with speech, and meaning of gesture and speech, the children used gestures much like adults from 2 years on. In contrast, the use of iconic and beat gestures showed differential development in the children's 2 languages as a function of mean length of utterance. These data suggest that the development of these kinds of gestures may be more closely linked to language development than other kinds (such as points). Reasons why this might be so are discussed.

3.
Arm movements can influence language comprehension much as semantics can influence arm movement planning. Arm movement itself can be used as a linguistic signal. We reviewed neurophysiological and behavioural evidence that manual gestures and vocal language share the same control system. Studies of primate premotor cortex and, in particular, of the so-called "mirror system", including its human counterpart, suggest the existence of a dual hand/mouth motor command system involved in ingestion activities. This may be the platform on which a combined manual and vocal communication system was constructed. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing transitive hand actions, and manual actions play an important role in the development of speech, from the babbling stage onwards. Behavioural data also show reciprocal influence between word and symbolic gestures. Neuroimaging and repetitive transcranial magnetic stimulation (rTMS) data suggest that the system governing both speech and gesture is located in Broca's area. In general, the presented data support the hypothesis that the hand motor-control system is involved in higher order cognition.

4.
Neural correlates of bimodal speech and gesture comprehension
The present study examined the neural correlates of speech and hand gesture comprehension in a naturalistic context. Fifteen participants watched audiovisual segments of speech and gesture while event-related potentials (ERPs) were recorded to the speech. Gesture influenced the ERPs to the speech. Specifically, there was a right-lateralized N400 effect, reflecting semantic integration, when gestures mismatched versus matched the speech. In addition, early sensory components in bilateral occipital and frontal sites differentiated speech accompanied by matching versus non-matching gestures. These results suggest that hand gestures may be integrated with speech at early and late stages of language processing.

5.
Speakers of many languages prefer allocentric frames of reference (FoRs) when talking about small-scale space, using words like “east” or “downhill.” Ethnographic work has suggested that this preference is also reflected in how such speakers gesture. Here, we investigate this possibility with a field experiment in Juchitán, Mexico. In Juchitán, a preferentially allocentric language (Isthmus Zapotec) coexists with a preferentially egocentric one (Spanish). Using a novel task, we elicited spontaneous co-speech gestures about small-scale motion events (e.g., toppling blocks) in Zapotec-dominant speakers and in balanced Zapotec-Spanish bilinguals. Consistent with prior claims, speakers’ spontaneous gestures reliably reflected either an egocentric or allocentric FoR. The use of the egocentric FoR was predicted—not by speakers’ dominant language or the language they used in the task—but by mastery of words for “right” and “left,” as well as by properties of the event they were describing. Additionally, use of the egocentric FoR in gesture predicted its use in a separate nonlinguistic memory task, suggesting a cohesive cognitive style. Our results show that the use of spatial FoRs in gesture is pervasive, systematic, and shaped by several factors. Spatial gestures, like other forms of spatial conceptualization, are thus best understood within broader ecologies of communication and cognition.

6.
Effective communication in aphasia depends not only on use of preserved linguistic capacities but also (and perhaps primarily) on the capacity to exploit alternative modalities of communication, such as gesture. To ascertain the capacity of aphasic patients to use gesture in their spontaneous communication, informally structured interviews were conducted with two Wernicke's aphasics and two Broca's aphasics, as well as with four normal controls. The performances of the patient groups were compared on the physical parameters of gesture, the points in the communication where gestures occurred, and several facets of the semantics and pragmatics of gesture. Generally speaking, the gestures of the aphasics closely paralleled their speech output: on most indices, the performance of the Wernicke's aphasics more closely resembled that of the normal controls. Wernicke's aphasics differed from normals in the clarity of their language and gestures: while individual linguistic units were often clear, the relation among units was not. In contrast, the Broca's aphasics equaled or surpassed the normal controls in the clarity of their communications. The results offer little support for the view that aphasic patients spontaneously enhance their communicative efficacy through the use of gesture; these findings can, however, be interpreted as evidence in favor of a “central organizer” which controls critical features of communication, irrespective of the modality of expression.

7.
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory; instead, iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We found compensatory use of gesture only in the people with aphasia, whereas the people without language impairment made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.

8.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

9.
This study looks at whether there is a relationship between mother and infant gesture production. Specifically, it addresses the extent of articulation in the maternal gesture repertoire and how closely it supports the infant production of gestures. Eight Spanish mothers and their 1- and 2-year-old babies were studied during 1 year of observations. Maternal and child verbal production, gestures and actions were recorded at their homes on five occasions while performing daily routines. Results indicated that mother and child deictic gestures (pointing and instrumental) and representational gestures (symbolic and social) were very similar at each age group and did not decline across groups. Overall, deictic gestures were more frequent than representational gestures. Maternal adaptation to developmental changes is specific for gesturing but not for acting. Maternal and child speech were related positively to mother and child pointing and representational gestures, and negatively to mother and child instrumental gestures. Mother and child instrumental gestures were positively related to action production, after maternal and child speech was partialled out. Thus, language plays an important role for dyadic communicative activities (gesture–gesture relations) but not for dyadic motor activities (gesture–action relations). Finally, a comparison of the growth curves across sessions showed a closer correspondence for mother–child deictic gestures than for representational gestures. Overall, the results point to the existence of an articulated maternal gesture input that closely supports the child gesture production.

10.
Visible embodiment: Gestures as simulated action
Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.

11.
The gestures that spontaneously occur in communicative contexts have been shown to offer insight into a child’s thoughts. The information gesture conveys about what is on a child’s mind will, of course, only be accessible to a communication partner if that partner can interpret gesture. Adults were asked to observe a series of children who participated ‘live’ in a set of conservation tasks and gestured spontaneously while performing the tasks. Adults were able to glean substantive information from the children’s gestures, information that was not found anywhere in their speech. ‘Gesture-reading’ did, however, have a cost – if gesture conveyed different information from speech, it hindered the listener’s ability to identify the message in speech. Thus, ordinary listeners can and do extract information from a child’s gestures, even gestures that are unedited and fleeting.

12.
Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context.

13.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding gestures’ and speech’s integration and synchronization. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants’ speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants’ task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.

14.
Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In the study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We concluded that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.

15.
I will first review cross-cultural research in the area of culture and cognition, with particular focus on the development of spatial concepts. I propose that the formulation best covering all empirical data is in terms of «cognitive style», i.e., spatial cognitive processes are universally available to all humans, but there are preferences for some spatial frames of reference over others. These cultural differences are under the influence of a number of eco-cultural variables. The second part will illustrate this general conclusion by research on the development of the «geocentric» frame of spatial reference, initially studied by Levinson (Space in language and cognition: explorations in cognitive diversity, Cambridge University Press, Cambridge, 2003). This is a cognitive style in which individuals choose to describe and represent small-scale tabletop space in terms of large-scale geographic dimensions. In Indonesia, India, Nepal and Switzerland, we explore the development with age of geocentric language as well as geocentric cognition, and the relationships between the two, as well as the environmental and socio-cultural variables that favor the use of this frame (Dasen and Mishra, Development of geocentric spatial language and cognition, Cambridge University Press, Cambridge, 2010).

16.
We investigated which reference frames are preferred when matching spatial language to the haptic domain. Sighted, low-vision, and blind participants were tested on a haptic sentence-verification task where participants had to haptically explore different configurations of a ball and a shoe and judge the relation between them. Results for the spatial relation "above", in the vertical plane, showed that various reference frames are available after haptic inspection of a configuration. Moreover, the pattern of results was similar for all three groups and resembled patterns found for the sighted on visual sentence-verification tasks. In contrast, when judging the spatial relation "in front", in the horizontal plane, the blind showed a markedly different response pattern. The sighted and low-vision participants did not show a clear preference for either the absolute/relative or the intrinsic reference frame when these frames were dissociated. The blind, on the other hand, showed a clear preference for the intrinsic reference frame. In the absence of a dominant cue, such as gravity in the vertical plane, the blind might emphasise the functional relationship between the objects owing to enhanced experience with haptic exploration of objects.

17.
This paper describes a study carried out at Varanasi on the development of geocentric spatial cognition with 4- to 14-year-old children of Hindi and Sanskrit medium schools. A number of tasks and procedures were used to assess the spatial frames of reference children used in describing and interpreting spatial displays. Analysis revealed that Sanskrit medium school children used more geocentric language and encoding than Hindi medium school children. The effect of age was significant only for encoding, not for language. Geocentric spatial cognition was significantly linked to fundamental spatial cognitive ability, as measured by the Story-Pictorial Embedded Figures Test and Block Designs Test. The stronger expression of geocentric language and geocentric encoding in Sanskrit than in Hindi medium school children suggests that the use of the ability can be sharpened by its practice and actualization in day-to-day life. The relationship between language and encoding was found to be of a moderate level, suggesting that geocentric cognition is not determined by language alone, but also by other factors present in children’s eco-cultural contexts.

18.
The way adults express manner and path components of a motion event varies across typologically different languages both in speech and co-speech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children's gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of co-speech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.

19.
Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

20.
Gestures and speech parallel or complement each other semantically and pragmatically. Previous studies have postulated a similarly close correlation between intonation and gesture. This study microanalyzed the gestures of three subjects frame by frame and matched the movements with direction of pitch changes for each syllable in 90 intonation groups. The speech was analyzed using spectrographs. Coordinating direction of pitch and manual gesture movements is an option available to speakers, but it is not biologically mandated.
