Similar Articles
20 similar articles found (search time: 15 ms)
1.
The manual gestures that hearing children produce when explaining their answers to math problems predict whether they will profit from instruction in those problems. We ask here whether gesture plays a similar role in deaf children, whose primary communication system is in the manual modality. Forty ASL-signing deaf children explained their solutions to math problems and were then given instruction in those problems. Children who produced many gestures conveying different information from their signs (gesture-sign mismatches) were more likely to succeed after instruction than children who produced few, suggesting that mismatch can occur within-modality, and paving the way for using gesture-based teaching strategies with deaf learners.

2.
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture‐speech “mismatches” on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture‐speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word “three” and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number‐word instruction. We used the Give‐a‐Number task to measure number knowledge in 47 children (Mage = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture‐speech mismatches above their knower level. Children who were early in their number learning trajectories (“one‐knowers” and “two‐knowers”) were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture‐speech mismatches at pretest. 
The findings suggest that numerical gesture‐speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number‐learning.

3.
Gesturing makes learning last (cited 2 times: 0 self-citations, 2 by others)
Cook SW, Mitchell Z, Goldin-Meadow S. Cognition, 2008, 106(2): 1047-1058
The gestures children spontaneously produce when explaining a task predict whether they will subsequently learn that task. Why? Gesture might simply reflect a child's readiness to learn a particular task. Alternatively, gesture might itself play a role in learning the task. To investigate these alternatives, we experimentally manipulated children's gesture during instruction in a new mathematical concept. We found that requiring children to gesture while learning the new concept helped them retain the knowledge they had gained during instruction. In contrast, requiring children to speak, but not gesture, while learning the concept had no effect on solidifying learning. Gesturing can thus play a causal role in learning, perhaps by giving learners an alternative, embodied way of representing new ideas. We may be able to improve children's learning just by encouraging them to move their hands.

4.
When asked to explain their solutions to a problem, both adults and children gesture as they talk. These gestures at times convey information that is not conveyed in speech and thus reveal thoughts that are distinct from those revealed in speech. In this study, we use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech taken together can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well‐studied puzzle, known to be most efficiently solved by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture‐speech mismatches—explanations in which speech conveyed one path and gesture another—at these theoretically defined choice points than they produced at non‐choice points. Even when the participants did not solve the problem efficiently, gesture could be used to indicate where the participants were deciding between alternative paths. Gesture can thus serve as a useful adjunct to speech when attempting to discover cognitive processes in problem‐solving.

5.
Teachers gesture when they teach, and those gestures do not always convey the same information as their speech. Gesture thus offers learners a second message. To determine whether learners take advantage of this offer, we gave 160 children in the third and fourth grades instruction in mathematical equivalence. Children were taught either one or two problem-solving strategies in speech accompanied by no gesture, gesture conveying the same strategy, or gesture conveying a different strategy. The children were likely to profit from instruction with gesture, but only when it conveyed a different strategy than speech did. Moreover, two strategies were effective in promoting learning only when the second strategy was taught in gesture, not speech. Gesture thus has an active hand in learning.

6.
Cognition and Instruction, 2013, 31(3): 201-219
Is the information that gesture provides about a child's understanding of a task accessible not only to experimenters who are trained in coding gesture but also to untrained observers? Twenty adults were asked to describe the reasoning of 12 different children, each videotaped responding to a Piagetian conservation task. Six of the children on the videotape produced gestures that conveyed the same information as their nonconserving spoken explanations, and 6 produced gestures that conveyed different information from their nonconserving spoken explanations. The adult observers displayed more uncertainty in their appraisals of children who produced different information in gesture and speech than in their appraisals of children who produced the same information in gesture and speech. Moreover, the adults were able to incorporate the information conveyed in the children's gestures into their own spoken appraisals of the children's reasoning. These data suggest that, even without training, adults form impressions of children's knowledge based not only on what children say with their mouths but also on what they say with their hands.

7.
Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.

8.
Performing action has been found to have a greater impact on learning than observing action. Here we ask whether a particular type of action – the gestures that accompany talk – affects learning in a comparable way. We gave 158 6‐year‐old children instruction in a mental transformation task. Half the children were asked to produce a Move gesture relevant to the task; half were asked to produce a Point gesture. The children also observed the experimenter producing either a Move or Point gesture. Children who produced a Move gesture improved more than children who observed the Move gesture. Neither producing nor observing the Point gesture facilitated learning. Doing gesture promotes learning better than seeing gesture, as long as the gesture conveys information that could help solve the task.

9.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat'+point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.

10.
Extensive research shows that caregivers’ speech and gestures can scaffold children’s learning. This study examines whether caregivers increase the amount of spoken and gestural instruction when a task becomes difficult for children. We also examine whether increasing the amount of instruction containing both speech and gestures enhances children’s problem-solving. Ninety-three 3- to 4-year-old Chinese children and their caregivers participated in our study. The children tried to assemble two jigsaw puzzles (with 12 pieces in one and 20 in the other); each puzzle was attempted in three phases. The order in which the puzzles were to be solved was randomized. In Phases 1 and 3, the children tried to solve the puzzles alone. In Phase 2, the children received instruction from their caregivers. The children assembled a smaller proportion of the 20-piece puzzle than of the 12-piece one, suggesting that the 20-piece puzzle was more difficult than the 12-piece one. The caregivers produced more spoken and gestural instruction for the 20-piece than for the 12-piece puzzle. The proportion of the instruction employing both speech and gesture (+InstS+InstG) was significantly greater for the 20-piece puzzle than for the 12-piece puzzle. More importantly, the children who received more instruction with +InstS+InstG performed better in solving the 20-piece puzzle than those who received less instruction of the same type. Those who did not receive +InstS+InstG instruction performed less successfully in Phase 3. However, the facilitating effect of instruction with +InstS+InstG was not found with the 12-piece puzzle. Our findings suggest that adults should incorporate speech and gesture in their instruction as frequently as possible when teaching their children to perform a difficult task.

11.
On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that boys performed better than girls before training and that both boys and girls improved with training, whether they were given explicit instruction or just practice. Regardless of training condition, the more children gestured about moving the pieces when asked to explain how they solved the spatial transformation task, the better they performed on the task, with boys gesturing about movement significantly more (and performing better) than girls. Gesture thus provides useful information about children's spatial strategies, raising the possibility that gesture training may be particularly effective in improving children's mental rotation skills.

12.
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8‐ to 10‐year‐old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture–speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., “pet” + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., “bird” + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post‐test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture–speech integration in children overlaps with—but is broader than—the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.

13.
How our hands help us learn (cited 5 times: 0 self-citations, 5 by others)
When people talk they gesture, and those gestures often reflect thoughts not expressed in their words. In this sense, gesture and the speech it accompanies can mismatch. Gesture-speech 'mismatches' are found when learners are on the verge of making progress on a task - when they are ready to learn. Moreover, mismatches provide insight into the mental processes that characterize learners when in this transitional state. Gesture is not just handwaving - it reflects how we think. However, evidence is mounting that gesture goes beyond reflecting our thoughts and can have a hand in changing those thoughts. We consider two ways in which gesture could change the course of learning: indirectly by influencing learning environments or directly by influencing learners themselves.

14.
Making children gesture brings out implicit knowledge and leads to learning (cited 2 times: 0 self-citations, 2 by others)
Speakers routinely gesture with their hands when they talk, and those gestures often convey information not found anywhere in their speech. This information is typically not consciously accessible, yet it provides an early sign that the speaker is ready to learn a particular task (S. Goldin-Meadow, 2003). In this sense, the unwitting gestures that speakers produce reveal their implicit knowledge. But what if a learner was forced to gesture? Would those elicited gestures also reveal implicit knowledge and, in so doing, enhance learning? To address these questions, the authors told children to gesture while explaining their solutions to novel math problems and examined the effect of this manipulation on the expression of implicit knowledge in gesture and on learning. The authors found that, when told to gesture, children who were unable to solve the math problems often added new and correct problem-solving strategies, expressed only in gesture, to their repertoires. The authors also found that when these children were given instruction on the math problems later, they were more likely to succeed on the problems than children told not to gesture. Telling children to gesture thus encourages them to convey previously unexpressed, implicit ideas, which, in turn, makes them receptive to instruction that leads to learning.

15.
Children produce their first gestures before their first words, and their first gesture+word sentences before their first word+word sentences. These gestural accomplishments have been found not only to predate linguistic milestones, but also to predict them. Findings of this sort suggest that gesture itself might be playing a role in the language‐learning process. But what role does it play? Children's gestures could elicit from their mothers the kinds of words and sentences that the children need to hear in order to take their next linguistic step. We examined maternal responses to the gestures and speech that 10 children produced during the one‐word period. We found that all 10 mothers ‘translated’ their children's gestures into words, providing timely models for how one‐ and two‐word ideas can be expressed in English. Gesture thus offers a mechanism by which children can point out their thoughts to mothers, who then calibrate their speech to those thoughts, and potentially facilitate language‐learning.

16.
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre‐schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production) and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate ‘with their hands’ to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production.

17.
Cognitive Development, 1988, 3(4): 359-400
These studies explore children's conceptual knowledge as it is expressed through their verbal and gestural explanations of concepts. We build on previous work that has shown that children who produce a large proportion of gestures that do not match their verbal explanations are in transition with respect to the concept they are explaining. This gesture/speech mismatch has been called “discordance.” Previous work discovered this phenomenon with respect to 5- to 7-year-old children's explanations of conservation problems. Study 1 shows: (1) that older children (10 to 11 years old) exhibit gesture/speech discordance with respect to another concept, understanding the equivalence relationship in mathematical equations; and (2) that children who produce many discordant responses in their explanations of mathematical equivalence are more likely to benefit from instruction in the concept than are children who produce few such responses. Studies 2 and 3 explore the properties and usefulness of discordance as an index of transitional knowledge in a child's acquisition of mathematical equivalence. Under any circumstance in which new concepts are acquired, there exists a mental bridge connecting the old knowledge state to the new. The studies reported here suggest that the combination of gesture and speech may be an easily observable and significantly interpretable reflection of knowledge states, both static and in flux.

18.
Previous work has found that guiding problem‐solvers' movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner's movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children were taught movements that were either relevant or irrelevant to solving mathematical equivalence problems and were told to produce the movements on a series of problems before they received instruction in mathematical equivalence. Children in the relevant movement condition improved after instruction significantly more than children in the irrelevant movement condition, despite the fact that the children showed no improvement in their understanding of mathematical equivalence on a ratings task or on a paper‐and‐pencil test taken immediately after the movements but before instruction. Movements of the body can thus be used to sow the seeds of conceptual change. But those seeds do not necessarily come to fruition until after the learner has received explicit instruction in the concept, suggesting a “sleeper effect” of gesture on learning.

19.
A comparative study of chopstick-use movement patterns in 3- to 7-year-old children and adults (cited 1 time: 0 self-citations, 1 by others)
Lin Lei, Dong Qi, Sun Yanqing. Acta Psychologica Sinica, 2001, 34(3): 40-46
By comparing the types, characteristics, and usage rates of chopstick-use movement patterns in 3- to 7-year-old children and adults, this study offers a preliminary account of the features and developmental trends of these patterns. Results showed: (1) both children and adults exhibited eight chopstick-use movement patterns, which differed in the division of labor and coordination among the fingers, in the stability and adaptability with which tasks were completed, and in efficiency; (2) with age, individuals' chopstick-use patterns shifted toward more efficient types: the usage rate of high-efficiency patterns rose from 3.7% in the 3-year-old group to 50% in the adult group, while that of low-efficiency patterns fell from 59.3% to 10%.

20.
Gesture is an important nonverbal medium in verbal communication: it is closely intertwined with language interaction, and it also carries distinct communicative-cognitive characteristics. This article reviews the relationship between gesture and verbal communication, the relatively independent communicative features of gesture, and gestural communication in educational settings. Specifically: first, the joint expression of gesture and speech promotes language production as well as the comprehension, integration, and memory of language; second, gesture is to some extent independently communicative, and gesture-speech "mismatches" reflect changes in the information being communicated and in communicative cognition; finally, in educational settings, teachers' gestures can guide students' attention and clarify verbal information, while students' gestural communication helps promote the cognitive processes of learning. Future research should further explore how gesture influences the communicative function of language, the advantages and cognitive mechanisms of gestural communication during verbal interaction, the cognitive mechanisms underlying the efficiency of gestural communication in educational settings, and the influencing factors, general characteristics, and individual differences of gestural communication.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号