Similar Documents
20 similar documents found (search time: 31 ms)
1.
Talking and Thinking With Our Hands (Times cited: 1; self-citations: 0; citations by others: 1)
When people talk, they gesture. Typically, gesture is produced along with speech and forms a fully integrated system with that speech. However, under unusual circumstances, gesture can be produced on its own, without speech. In these instances, gesture must take over the full burden of communication usually shared by the two modalities. What happens to gesture in this very different context? One possibility is that there are no differences in the forms gesture takes with speech and without it—that gesture is gesture no matter what its function. But that is not what we find. When gesture is produced on its own and assumes the full burden of communication, it takes on a language-like form. In contrast, when gesture is produced in conjunction with speech and shares the burden of communication with that speech, it takes on an unsegmented, imagistic form, often conveying information not found in speech. As such, gesture sheds light on how people think and can even play a role in changing those thoughts. Gesture can thus be part of language or it can itself be language, altering its form to fit its function.

2.
Spontaneous gesture frequently accompanies speech. The question is why. In these studies, we tested two non‐mutually exclusive possibilities. First, speakers may gesture simply because they see others gesture and learn from this model to move their hands as they talk. We tested this hypothesis by examining spontaneous communication in congenitally blind children and adolescents. Second, speakers may gesture because they recognize that gestures can be useful to the listener. We tested this hypothesis by examining whether speakers gesture even when communicating with a blind listener who is unable to profit from the information that the hands convey. We found that congenitally blind speakers, who had never seen gestures, nevertheless gestured as they spoke, conveying the same information and producing the same range of gesture forms as sighted speakers. Moreover, blind speakers gestured even when interacting with another blind individual who could not have benefited from the information contained in those gestures. These findings underscore the robustness of gesture in talk and suggest that the gestures that co‐occur with speech may serve a function for the speaker as well as for the listener.

3.
In order to produce a coherent narrative, speakers must identify the characters in the tale so that listeners can figure out who is doing what to whom. This paper explores whether speakers use gesture, as well as speech, for this purpose. English speakers were shown vignettes of two stories and asked to retell the stories to an experimenter. Their speech and gestures were transcribed and coded for referent identification. A gesture was considered to identify a referent if it was produced in the same location as the previous gesture for that referent. We found that speakers frequently used gesture location to identify referents. Interestingly, however, they used gesture most often to identify referents that were also uniquely specified in speech. Lexical specificity in referential expressions in speech thus appears to go hand-in-hand with specification in referential expressions in gesture.

4.
When asked to explain their solutions to a problem, both adults and children gesture as they talk. These gestures at times convey information that is not conveyed in speech and thus reveal thoughts that are distinct from those revealed in speech. In this study, we use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech taken together can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well‐studied puzzle, known to be most efficiently solved by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture‐speech mismatches—explanations in which speech conveyed one path and gesture another—at these theoretically defined choice points than they produced at non‐choice points. Even when the participants did not solve the problem efficiently, gesture could be used to indicate where the participants were deciding between alternative paths. Gesture can, thus, serve as a useful adjunct to speech when attempting to discover cognitive processes in problem‐solving.
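For readers unfamiliar with the puzzle: the Tower of Hanoi asks you to move a stack of disks from one peg to another, moving one disk at a time and never placing a larger disk on a smaller one. The optimal solution decomposes into recursive subroutines, and the points where a subroutine begins are natural choice points of the kind examined in the study. The sketch below is the standard recursive solution in Python, offered purely as illustration; it is not drawn from the study's materials.

    def hanoi(n, source, target, spare, moves):
        # Move n disks from `source` to `target`, using `spare` as the extra peg.
        if n == 0:
            return
        # Subroutine 1: clear the n-1 smaller disks onto the spare peg.
        hanoi(n - 1, source, spare, target, moves)
        # Base move: the largest remaining disk goes straight to its goal peg.
        moves.append((source, target))
        # Subroutine 2: restack the n-1 smaller disks on top of it.
        hanoi(n - 1, spare, target, source, moves)

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves), moves)  # 7 moves, the minimum (2**n - 1 for n = 3)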

5.
Do the gestures that speakers produce while talking significantly benefit listeners' comprehension of the message? This question has been the topic of many research studies over the past 35 years, and there has been little consensus. The present meta-analysis examined the effect sizes from 63 samples in which listeners' understanding of a message was compared when speech was presented alone versus when speech was presented with gestures. It was found that across samples, gestures do provide a significant, moderate benefit to communication. Furthermore, the magnitude of this effect is moderated by 3 factors. First, effects of gesture differ as a function of gesture topic, such that gestures that depict motor actions are more communicative than those that depict abstract topics. Second, effects of gesture on communication are larger when the gestures are not completely redundant with the accompanying speech; effects are smaller when there is more overlap between the information conveyed in the 2 modalities. Third, the size of the effect of gesture is dependent on the age of the listeners, such that children benefit more from gestures than do adults. Remaining questions for future research are highlighted.
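As background on the method: a meta-analysis of this kind pools one effect size per sample (e.g., Cohen's d), typically weighting each by the inverse of its sampling variance so that larger, more precise samples count for more. Below is a minimal fixed-effect sketch in Python with made-up numbers; the paper's actual data and model (e.g., random-effects estimation, moderator analyses) may differ.

    # Inverse-variance (fixed-effect) pooling of effect sizes.
    # All values here are hypothetical, for illustration only.
    effect_sizes = [0.45, 0.60, 0.30]  # per-sample effect sizes (Cohen's d)
    variances = [0.02, 0.05, 0.01]     # per-sample sampling variances

    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate
    print(f"pooled d = {pooled:.3f}, SE = {pooled_se:.3f}")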

6.
Children can express thoughts in gesture that they do not express in speech—they produce gesture-speech mismatches. Moreover, children who produce mismatches on a given task are particularly ready to learn that task. Gesture, then, is a tool that researchers can use to predict who will profit from instruction. But is gesture also useful to adults who must decide how to instruct a particular child? We asked 8 adults to instruct 38 third- and fourth-grade children individually in a math problem. We found that the adults offered more variable instruction to children who produced mismatches than to children who produced no mismatches—more different types of instructional strategies and more instructions that contained two different strategies, one in speech and the other in gesture. The children thus appeared to be shaping their own learning environments just by moving their hands. Gesture not only reflects a child's understanding but can play a role in eliciting input that could shape that understanding. As such, it may be part of the mechanism of cognitive change.

7.
Teachers gesture when they teach, and those gestures do not always convey the same information as their speech. Gesture thus offers learners a second message. To determine whether learners take advantage of this offer, we gave 160 children in the third and fourth grades instruction in mathematical equivalence. Children were taught either one or two problem-solving strategies in speech accompanied by no gesture, gesture conveying the same strategy, or gesture conveying a different strategy. The children were likely to profit from instruction with gesture, but only when it conveyed a different strategy than speech did. Moreover, two strategies were effective in promoting learning only when the second strategy was taught in gesture, not speech. Gesture thus has an active hand in learning.

8.
We explored how speakers and listeners use hand gestures as a source of perceptual-motor information during naturalistic communication. After solving the Tower of Hanoi task either with real objects or on a computer, speakers explained the task to listeners. Speakers’ hand gestures, but not their speech, reflected properties of the particular objects and the actions that they had previously used to solve the task. Speakers who solved the problem with real objects used more grasping handshapes and produced more curved trajectories during the explanation. Listeners who observed explanations from speakers who had previously solved the problem with real objects subsequently treated computer objects more like real objects; their mouse trajectories revealed that they lifted the objects in conjunction with moving them sideways, and this behavior was related to the particular gestures that were observed. These findings demonstrate that hand gestures are a reliable source of perceptual-motor information during human communication.

9.
The gestures that spontaneously occur in communicative contexts have been shown to offer insight into a child’s thoughts. The information gesture conveys about what is on a child’s mind will, of course, only be accessible to a communication partner if that partner can interpret gesture. Adults were asked to observe a series of children who participated ‘live’ in a set of conservation tasks and gestured spontaneously while performing the tasks. Adults were able to glean substantive information from the children’s gestures, information that was not found anywhere in their speech. ‘Gesture-reading’ did, however, have a cost – if gesture conveyed different information from speech, it hindered the listener’s ability to identify the message in speech. Thus, ordinary listeners can and do extract information from a child’s gestures, even gestures that are unedited and fleeting.

10.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

11.
Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.

12.
The current study assessed the extent to which the use of referential prosody varies with communicative demand. Speaker–listener dyads completed a referential communication task during which speakers attempted to indicate one of two color swatches (one bright, one dark) to listeners. Speakers' bright sentences were reliably higher pitched than dark sentences for ambiguous (e.g., bright red versus dark red) but not unambiguous (e.g., bright red versus dark purple) trials, suggesting that speakers produced meaningful acoustic cues to brightness when the accompanying linguistic content was underspecified (e.g., “Can you get the red one?”). Listening partners reliably chose the correct corresponding swatch for ambiguous trials when lexical information was insufficient to identify the target, suggesting that listeners recruited prosody to resolve lexical ambiguity. Prosody can thus be conceptualized as a type of vocal gesture that can be recruited to resolve referential ambiguity when there is communicative demand to do so.

13.
An expressive disturbance of speech prosody has long been associated with idiopathic Parkinson's disease (PD), but little is known about the impact of dysprosody on vocal-prosodic communication from the perspective of listeners. Recordings of healthy adults (n=12) and adults with mild to moderate PD (n=21) were elicited in four speech contexts in which prosody serves a primary function in linguistic or emotive communication (phonemic stress, contrastive stress, sentence mode, and emotional prosody). Twenty independent listeners naive to the disease status of individual speakers then judged the intended meanings conveyed by prosody for tokens recorded in each condition. Findings indicated that PD speakers were less successful at communicating stress distinctions, especially words produced with contrastive stress, which were less identifiable to listeners. Listeners were also significantly less able to detect intended emotional qualities of Parkinsonian speech, especially for anger and disgust. Emotional expressions that were correctly recognized by listeners were consistently rated as less intense for the PD group. Utterances produced by PD speakers were frequently characterized as sounding sad or devoid of emotion entirely (neutral). Results argue that motor limitations on the vocal apparatus in PD produce serious and early negative repercussions on communication through prosody, which diminish the social-linguistic competence of Parkinsonian adults as judged by listeners.

14.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat' + point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.

15.
GESTURE, SPEECH, AND LEXICAL ACCESS (Times cited: 1; self-citations: 0; citations by others: 1)
In a within-subjects design that varied whether speakers were allowed to gesture and the difficulty of lexical access, speakers were videotaped as they described animated action cartoons to a listener. When speakers were permitted to gesture, they gestured more often during phrases with spatial content than during phrases with other content. Speech with spatial content was less fluent when speakers could not gesture than when they could gesture; speech with nonspatial content was not affected by gesture condition. Preventing gesturing increased the relative frequency of nonjuncture filled pauses in speech with spatial content, but not in speech with other content. Overall, the effects of preventing speakers from gesturing resembled those of increasing the difficulty of lexical access by other means, except that the effects of gesture restriction were specific to speech with spatial content. The findings support the hypothesis that gestural accompaniments to spontaneous speech can facilitate access to the mental lexicon.

16.
People gesture a great deal when speaking, and research has shown that listeners can interpret the information contained in gesture. The current research examines whether learners can also use co‐speech gesture to inform language learning. Specifically, we examine whether listeners can use information contained in an iconic gesture to assign meaning to a novel verb form. Two experiments demonstrate that adults and 2‐, 3‐, and 4‐year‐old children can infer the meaning of novel intransitive verbs from gestures when no other source of information is present. The findings support the idea that gesture might be a source of input available to language learners.

17.
Cognition and Instruction, 2013, 31(3): 201–219
Is the information that gesture provides about a child's understanding of a task accessible not only to experimenters who are trained in coding gesture but also to untrained observers? Twenty adults were asked to describe the reasoning of 12 different children, each videotaped responding to a Piagetian conservation task. Six of the children on the videotape produced gestures that conveyed the same information as their nonconserving spoken explanations, and 6 produced gestures that conveyed different information from their nonconserving spoken explanations. The adult observers displayed more uncertainty in their appraisals of children who produced different information in gesture and speech than in their appraisals of children who produced the same information in gesture and speech. Moreover, the adults were able to incorporate the information conveyed in the children's gestures into their own spoken appraisals of the children's reasoning. These data suggest that, even without training, adults form impressions of children's knowledge based not only on what children say with their mouths but also on what they say with their hands.

18.
Much evidence suggests that semantic characteristics of a message (e.g., the extent to which the message evokes thoughts of spatial or motor properties) and social characteristics of a speaking situation (e.g., whether there is a listener who can see the speaker) both influence how much speakers gesture. However, the Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, 2008) predicts that these effects should not be independent but should interact such that the effect of visibility is lessened when a message evokes strong thoughts of action. This study tested this claim by comparing the gesture rates produced by speakers as they described 24 nouns that vary in how strongly they evoke thoughts of action. Further, half of the words were described with visibility between speaker and listener blocked. The results demonstrated a significant interaction as predicted by the GSA framework.

19.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

20.
Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities—to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under 2 conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.
