Similar articles
20 similar articles retrieved.
1.
Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.
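To make the two strategies concrete, here is a minimal illustrative sketch (not taken from the study's materials; gear counts and direction labels are hypothetical). For a linear chain of meshed gears, a perceptual-motor strategy amounts to propagating the rotation gear by gear, while the parity strategy only checks whether the number of gears is odd or even.

```python
# Illustrative sketch only; not the study's stimuli or analysis code.

def simulate_last_gear(n_gears: int, first_direction: str) -> str:
    """Perceptual-motor analogue: propagate rotation one gear at a time."""
    direction = first_direction
    for _ in range(n_gears - 1):
        direction = "ccw" if direction == "cw" else "cw"  # meshed gears counter-rotate
    return direction

def parity_last_gear(n_gears: int, first_direction: str) -> str:
    """Abstract (parity) strategy: only the oddness of the chain length matters."""
    if n_gears % 2 == 1:  # odd number of gears -> last gear matches the first
        return first_direction
    return "ccw" if first_direction == "cw" else "cw"

if __name__ == "__main__":
    assert all(simulate_last_gear(n, "cw") == parity_last_gear(n, "cw") for n in range(2, 9))
    print(parity_last_gear(5, "cw"))  # -> "cw"
```

Both functions give the same answer; the difference lies in whether the chain is mentally stepped through or collapsed into a single parity check.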

2.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. For all listeners, gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded, but the enhancement was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed at gestures more often than native listeners did. However, only native, not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners may gaze at gestures more because it may be more challenging for them to resolve the degraded auditory cues and to couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might, in turn, hinder non-native listeners' use of the semantic information conveyed by gestures, relative to native listeners. Our results demonstrate that the degree of language experience affects overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.

3.
When people talk, they gesture. We show that gesture introduces action information into speakers' mental representations, which, in turn, affect subsequent performance. In Experiment 1, participants solved the Tower of Hanoi task (TOH1), explained (with gesture) how they solved it, and solved it again (TOH2). For all participants, the smallest disk in TOH1 was the lightest and could be lifted with one hand. For some participants (no-switch group), the disks in TOH2 were identical to those in TOH1. For others (switch group), the disk weights in TOH2 were reversed (so that the smallest disk was the heaviest and could not be lifted with one hand). The more the switch group's gestures depicted moving the smallest disk one-handed, the worse they performed on TOH2. This was not true for the no-switch group, nor for the switch group in Experiment 2, who skipped the explanation step and did not gesture. Gesturing grounds people's mental representations in action. When gestures are no longer compatible with the action constraints of a task, problem solving suffers.
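For readers unfamiliar with the task, the sketch below (not the study's materials) shows the standard recursive solution to the Tower of Hanoi: disks are moved one at a time, a larger disk may never be placed on a smaller one, and an n-disk tower requires 2^n - 1 moves.

```python
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Standard recursive Tower of Hanoi: move n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks out of the way
    moves.append((n, source, target))           # move disk n (1 = smallest disk)
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top

moves: list = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))   # 7, i.e., 2**3 - 1
print(moves[0])     # (1, 'A', 'C'): the smallest disk moves first
```

The study's manipulation concerned only the physical weight of that smallest disk, not the move sequence itself.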

4.
Groups might perform compensatory communicative actions if media characteristics impede a particular communication process. The present study examined this hypothesis for the process of information integration, which is a sub-task in collaborative problem-solving. Synchronous media characteristics support the information integration process, while asynchronous media characteristics impede it. The synchronicity of the medium was varied by manipulating parallelism and the immediacy of feedback within real-time, text-based computer-mediated communication. A Conversational Games Analysis (CGA) was performed to investigate the functional purposes of the task-oriented contributions. All groups successfully solved the problem. However, groups interacting in the asynchronous communication mode produced significantly more functional contributions aimed at mutual task-oriented understanding and clarification than the synchronous groups did. Moreover, members of asynchronous groups repeated unshared pieces of information more often. These differences are interpreted as communicative efforts that offset the hindering influence of asynchronous media characteristics on the process of information integration. Copyright © 2007 John Wiley & Sons, Ltd.

5.
People move their hands as they talk – they gesture. Gesturing is a robust phenomenon, found across cultures, ages, and tasks. Gesture is even found in individuals blind from birth. But what purpose, if any, does gesture serve? In this review, I begin by examining gesture when it stands on its own, substituting for speech and clearly serving a communicative function. When called upon to carry the full burden of communication, gesture assumes a language-like form, with structure at the word and sentence levels. However, when produced along with speech, gesture assumes a different form – it becomes imagistic and analog. Despite its form, the gesture that accompanies speech also communicates. Trained coders can glean substantive information from gesture – information that is not always identical to that gleaned from speech. Gesture can thus serve as a research tool, shedding light on speakers' unspoken thoughts. The controversial question is whether gesture conveys information to listeners who are not trained to read it. Do spontaneous gestures communicate to ordinary listeners? Or might they be produced only for speakers themselves? I suggest these are not mutually exclusive functions – gesture serves both as a tool for communication for listeners and as a tool for thinking for speakers.

6.
Spontaneous gesture frequently accompanies speech. The question is why. In these studies, we tested two non-mutually exclusive possibilities. First, speakers may gesture simply because they see others gesture and learn from this model to move their hands as they talk. We tested this hypothesis by examining spontaneous communication in congenitally blind children and adolescents. Second, speakers may gesture because they recognize that gestures can be useful to the listener. We tested this hypothesis by examining whether speakers gesture even when communicating with a blind listener who is unable to profit from the information that the hands convey. We found that congenitally blind speakers, who had never seen gestures, nevertheless gestured as they spoke, conveying the same information and producing the same range of gesture forms as sighted speakers. Moreover, blind speakers gestured even when interacting with another blind individual who could not have benefited from the information contained in those gestures. These findings underscore the robustness of gesture in talk and suggest that the gestures that co-occur with speech may serve a function for the speaker as well as for the listener.

7.
Memory for series of action phrases improves in listeners when speakers accompany each phrase with congruent gestures compared to when speakers stay still. Studies reveal that the listeners' motor system, at encoding, plays a crucial role in this enactment effect. We present two experiments on gesture observation, which explored the role of the listeners' motor system at recall. In both experiments, the participants listened to the phrases uttered by a speaker in two conditions. In the gesture condition, the speaker uttered the phrases with accompanying congruent gestures, and in the no-gesture condition, the speaker stayed still while uttering the phrases. The participants were then invited, in both conditions, to perform a motor task while recalling the phrases produced by the speaker. The results revealed that the memory advantage of observing gestures disappears if the listeners move their arms and hands at recall (the same motor effectors moved by the speaker; Experiment 1a), but not if they move their legs and feet (different motor effectors from those moved by the speaker; Experiment 1b). The results suggest that the listeners' motor system is involved not only during the encoding of action phrases uttered by a speaker but also when these phrases are recalled at retrieval.

8.
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture-speech "mismatches" on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture-speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word "three" and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number-word instruction. We used the Give-a-Number task to measure number knowledge in 47 children (Mage = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture-speech mismatches above their knower level. Children who were early in their number learning trajectories ("one-knowers" and "two-knowers") were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture-speech mismatches at pretest. The findings suggest that numerical gesture-speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number learning.

9.
Do the gestures that speakers produce while talking significantly benefit listeners' comprehension of the message? This question has been the topic of many research studies over the past 35 years, and there has been little consensus. The present meta-analysis examined the effect sizes from 63 samples in which listeners' understanding of a message was compared between speech presented alone and speech presented with gestures. It was found that, across samples, gestures do provide a significant, moderate benefit to communication. Furthermore, the magnitude of this effect is moderated by 3 factors. First, effects of gesture differ as a function of gesture topic, such that gestures depicting motor actions are more communicative than those depicting abstract topics. Second, effects of gesture on communication are larger when the gestures are not completely redundant with the accompanying speech; effects are smaller when there is more overlap between the information conveyed in the 2 modalities. Third, the size of the effect of gesture depends on the age of the listeners, such that children benefit more from gestures than adults do. Remaining questions for future research are highlighted.
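As a generic illustration of the kind of pooling that underlies such a meta-analytic claim (the effect sizes below are hypothetical, not the article's data), a random-effects average can be computed with the DerSimonian-Laird estimator:

```python
# Hypothetical numbers for illustration; this is not the authors' analysis code.
from typing import Dict, List

def random_effects_pool(effects: List[float], variances: List[float]) -> Dict[str, float]:
    """Pool per-sample effect sizes with the DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                                     # fixed-effect weights
    mean_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean_fixed) ** 2 for wi, yi in zip(w, effects))   # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                                   # between-sample variance
    w_star = [1.0 / (v + tau2) for v in variances]                       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return {"pooled": pooled, "se": se, "tau2": tau2}

print(random_effects_pool([0.20, 0.80, 0.45, 0.35], [0.04, 0.09, 0.02, 0.05]))
```

Moderator effects of the sort reported (gesture topic, redundancy, listener age) would then be tested by comparing pooled estimates across subgroups or with meta-regression.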

10.
Making children gesture brings out implicit knowledge and leads to learning
Speakers routinely gesture with their hands when they talk, and those gestures often convey information not found anywhere in their speech. This information is typically not consciously accessible, yet it provides an early sign that the speaker is ready to learn a particular task (S. Goldin-Meadow, 2003). In this sense, the unwitting gestures that speakers produce reveal their implicit knowledge. But what if a learner was forced to gesture? Would those elicited gestures also reveal implicit knowledge and, in so doing, enhance learning? To address these questions, the authors told children to gesture while explaining their solutions to novel math problems and examined the effect of this manipulation on the expression of implicit knowledge in gesture and on learning. The authors found that, when told to gesture, children who were unable to solve the math problems often added new and correct problem-solving strategies, expressed only in gesture, to their repertoires. The authors also found that when these children were given instruction on the math problems later, they were more likely to succeed on the problems than children told not to gesture. Telling children to gesture thus encourages them to convey previously unexpressed, implicit ideas, which, in turn, makes them receptive to instruction that leads to learning.

11.
Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Metzler in Science 171:701–703, 1971). We asked whether speakers gesture differently while describing their problems as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey "static only" information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.
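A Shepard-and-Metzler-style item asks whether two block forms are the same object seen from different angles. The sketch below (hypothetical coordinates, not the study's stimuli) shows the underlying spatial transformation: rotate one form and test whether its cubes land on the other's.

```python
# Hypothetical block coordinates for illustration only.
import numpy as np

def is_rotation_match(shape_a: np.ndarray, shape_b: np.ndarray, degrees: float) -> bool:
    """Rotate shape_a about the z-axis and test whether it coincides with shape_b."""
    theta = np.radians(degrees)
    rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
    rotated = shape_a @ rz.T
    # Compare as unordered sets of cube positions (rounding absorbs floating-point error).
    return {tuple(np.round(p, 6)) for p in rotated} == {tuple(np.round(p, 6)) for p in shape_b}

arm = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0]], dtype=float)           # an L-shaped arm of cubes
rotated_arm = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0], [-1, 2, 0]], dtype=float)  # the same arm turned 90 degrees
print(is_rotation_match(arm, rotated_arm, 90))  # True
```

In the study's terms, gestures that trace such a rotation convey dynamic information, whereas gestures that merely outline the arm's shape convey the static structure of the block form.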

12.
What happens when speakers try to "dodge" a question they would rather not answer by answering a different question? In 4 studies, we show that listeners can fail to detect dodges when speakers answer similar, but objectively incorrect, questions (the "artful dodge"), a detection failure that goes hand-in-hand with a failure to rate dodgers more negatively. We propose that dodges go undetected because listeners' attention is not usually directed toward a goal of dodge detection (i.e., Is this person answering the question?) but rather toward a goal of social evaluation (i.e., Do I like this person?). Listeners were not blind to all dodge attempts, however. Dodge detection increased when listeners' attention was diverted from social goals toward determining the relevance of the speaker's answers (Study 1), when speakers answered a question egregiously dissimilar to the one asked (Study 2), and when listeners' attention was directed to the question asked by keeping it visible during speakers' answers (Study 4). We also examined the interpersonal consequences of dodge attempts: When listeners were guided to detect dodges, they rated speakers more negatively (Study 2), and listeners rated speakers who answered a similar question fluently more positively than speakers who answered the actual question disfluently (Study 3). These results add to the literatures on both Gricean conversational norms and goal-directed attention. We discuss the practical implications of our findings in the contexts of interpersonal communication and public debates.

13.
The current study used a novel problem-solving task in which the solution could only be reached via interactions between members of dyads. The study aimed to systematically examine how nonverbal interactive behaviour was related to the cultural background of the dyads, the participant's role in the dyad (viz., instructor, problem solver) and task repetition. Twenty-one Australian dyads and 32 Chinese dyads performed the dyadic puzzle-solving task while their interactions were video-recorded. In each dyad, one instructor and one problem solver worked together to solve a seven-piece puzzle. Six trials, each comprising a different puzzle, were completed. Results indicate that the Australian instructors engaged in significantly more eye gazing and displayed more hand gestures but smiled less than the Chinese instructors. The Australian problem solvers maintained longer eye gazing, displayed more hand gestures and more echoing than their Chinese counterparts. Over trials, the Chinese instructors reduced their total talking time, hand gestures, nodding behaviour and smiling during self-talking more than the Australian instructors. Moreover, the problem solvers in the dyads from both countries significantly reduced their smiling across trials. The current study shows that nonverbal behaviours during dyadic interactions are related to one's cultural background, role in the task and task repetition.

14.
Chambers CG, San Juan V. Cognition, 2008, 108(1): 26-50
Recent studies have shown that listeners use verbs and other predicate terms to anticipate reference to semantic entities during real-time language comprehension. This process involves evaluating the denoted action against relevant properties of potential referents. The current study explored whether action-relevant properties are readily available to comprehension systems as a result of the embodied nature of linguistic and conceptual representations. In three experiments, eye movements were monitored as listeners followed instructions to move depicted objects on a computer screen. Critical instructions contained the verb return (e.g., Now return the block to area 3), which presupposes the previous displacement of its complement object, a property that is not reflected in perceptible or stable characteristics of objects. Experiment 1 demonstrated that predictions for previously displaced objects are generated upon hearing return, ruling out the possibility that anticipatory effects draw directly on static affordances in perceptual symbols. Experiment 2 used a referential communication task to evaluate how communicative relevance constrains the use of perceptually derived information. Results showed that listeners anticipate previously displaced objects as candidates upon hearing return only when their displacement was known to the speaker. Experiment 3 showed that the outcome of the original act of displacement further modulates referential predictions. The results show that the use of perceptually grounded information in language interpretation is subject to communicative constraints, even when language denotes physical actions performed on concrete objects.

15.
The gestures that spontaneously occur in communicative contexts have been shown to offer insight into a child's thoughts. The information gesture conveys about what is on a child's mind will, of course, only be accessible to a communication partner if that partner can interpret gesture. Adults were asked to observe a series of children who participated 'live' in a set of conservation tasks and gestured spontaneously while performing the tasks. Adults were able to glean substantive information from the children's gestures, information that was not found anywhere in their speech. 'Gesture-reading' did, however, have a cost – if gesture conveyed different information from speech, it hindered the listener's ability to identify the message in speech. Thus, ordinary listeners can and do extract information from a child's gestures, even gestures that are unedited and fleeting.

16.
In numerous experimental contexts, gesturing has been shown to lighten a speaker's cognitive load. However, in all of these experimental paradigms, the gestures have been directed to items in the "here-and-now." This study attempts to generalize gesture's ability to lighten cognitive load. We demonstrate here that gesturing continues to confer cognitive benefits when speakers talk about objects that are not present, and therefore cannot be directly indexed by gesture. These findings suggest that gesturing confers its benefits by more than simply tying abstract speech to the objects directly visible in the environment. Moreover, we show that the cognitive benefit conferred by gesturing is greater when novice learners produce gestures that add to the information expressed in speech than when they produce gestures that convey the same information as speech, suggesting that it is gesture's meaningfulness that gives it the ability to affect working memory load.

17.
Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In the study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We concluded that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.

18.
Gestures maintain spatial imagery.
Recent theories suggest alternatives to the commonly held belief that the sole role of gestures is to communicate meaning directly to listeners. Evidence suggests that gestures may serve a cognitive function for speakers, possibly acting as lexical primes. We observed that participants gestured more often when describing a picture from memory than when the picture was present and that gestures were not influenced by manipulating eye contact of a listener. We argue that spatial imagery serves a short-term memory function during lexical search and that gestures may help maintain spatial images. When spatial imagery is not necessary, as in conditions of direct visual stimulation, reliance on gestures is reduced or eliminated.

19.
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.

20.
Speakers of many languages prefer allocentric frames of reference (FoRs) when talking about small-scale space, using words like "east" or "downhill." Ethnographic work has suggested that this preference is also reflected in how such speakers gesture. Here, we investigate this possibility with a field experiment in Juchitán, Mexico. In Juchitán, a preferentially allocentric language (Isthmus Zapotec) coexists with a preferentially egocentric one (Spanish). Using a novel task, we elicited spontaneous co-speech gestures about small-scale motion events (e.g., toppling blocks) in Zapotec-dominant speakers and in balanced Zapotec-Spanish bilinguals. Consistent with prior claims, speakers' spontaneous gestures reliably reflected either an egocentric or allocentric FoR. The use of the egocentric FoR was predicted—not by speakers' dominant language or the language they used in the task—but by mastery of words for "right" and "left," as well as by properties of the event they were describing. Additionally, use of the egocentric FoR in gesture predicted its use in a separate nonlinguistic memory task, suggesting a cohesive cognitive style. Our results show that the use of spatial FoRs in gesture is pervasive, systematic, and shaped by several factors. Spatial gestures, like other forms of spatial conceptualization, are thus best understood within broader ecologies of communication and cognition.
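The contrast at issue can be made concrete with a small sketch (the direction terms and headings below are hypothetical, not the study's coding scheme): an egocentric description ("to my left") and an allocentric one ("to the west") pick out the same direction only relative to the speaker's heading.

```python
# Hypothetical direction terms for illustration; not the study's coding scheme.
CARDINALS = ["north", "east", "south", "west"]               # allocentric bearings at 0/90/180/270 degrees
EGO = {"ahead": 0, "right": 90, "behind": 180, "left": 270}  # egocentric offsets from the heading

def egocentric_to_allocentric(ego_term: str, facing: str) -> str:
    """Map an egocentric term onto a cardinal direction, given the speaker's heading."""
    bearing = (CARDINALS.index(facing) * 90 + EGO[ego_term]) % 360
    return CARDINALS[bearing // 90]

print(egocentric_to_allocentric("left", facing="north"))  # -> "west"
print(egocentric_to_allocentric("left", facing="south"))  # -> "east": same body side, different bearing
```

A speaker who gestures allocentrically keeps the bearing constant across facing directions, whereas an egocentric gesturer keeps the body side constant, which is why the two frames can be distinguished from spontaneous gesture.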
