Similar Literature
20 similar articles found.
1.
We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) in the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with an iconic gesture, a spoken sentence, or a combination of the two on a computer screen, and they were instructed to select the photograph that best matched the message. The 3-year-olds did not integrate information in speech and gesture, but 5-year-olds and adults did. In Experiment 2, 3-year-old children were presented with the same speech and gesture as in Experiment 1, this time produced live by an experimenter. When presented live, 3-year-olds could integrate speech and gesture. We concluded that development of the integration ability is part of the broader developmental shift; however, live presentation facilitates the nascent integration ability in 3-year-olds.

2.
When asked to explain their solutions to a problem, both adults and children gesture as they talk. These gestures at times convey information that is not conveyed in speech and thus reveal thoughts distinct from those revealed in speech. In this study, we use the classic Tower of Hanoi puzzle to validate the claim that gesture and speech, taken together, can reflect the activation of two cognitive strategies within a single response. The Tower of Hanoi is a well-studied puzzle, known to be solved most efficiently by activating subroutines at theoretically defined choice points. When asked to explain how they solved the Tower of Hanoi puzzle, both adults and children produced significantly more gesture-speech mismatches—explanations in which speech conveyed one path and gesture another—at these theoretically defined choice points than at non-choice points. Even when participants did not solve the problem efficiently, gesture could be used to indicate where they were deciding between alternative paths. Gesture can thus serve as a useful adjunct to speech when attempting to discover the cognitive processes underlying problem solving.

3.
Gesture–speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed-feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued, from case studies in perceptual–motor pathology, that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending, McNeill's original results, we obtained evidence that (a) gesture–speech synchrony is more stable under DAF than under NO DAF (a coupling effect), (b) gesture and speech variably entrain to the external auditory delay, as indicated by a consistent shift in gesture–speech synchrony offsets (an entrainment effect), and (c) the coupling effect and the entrainment effect are co-dependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.
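As a rough illustration of the synchrony measure at stake here, the sketch below cross-correlates a motion-tracked gesture velocity trace with a speech amplitude envelope to estimate a gesture–speech offset. The signal names, the 100 Hz sampling rate, and the simulated data are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch: estimate a gesture-speech synchrony offset by
# cross-correlating a gesture velocity trace with the speech envelope.
# All names and the simulated data are illustrative assumptions.
import numpy as np

def synchrony_offset_ms(gesture_velocity, speech_envelope, fs=100):
    """Lag of maximal cross-correlation, in ms; positive values
    mean the gesture trace leads the speech envelope."""
    g = (gesture_velocity - gesture_velocity.mean()) / gesture_velocity.std()
    s = (speech_envelope - speech_envelope.mean()) / speech_envelope.std()
    xcorr = np.correlate(g, s, mode="full")      # correlation at every lag
    lags = np.arange(-len(s) + 1, len(g))        # lag of g relative to s
    best = lags[np.argmax(xcorr)]
    return -best / fs * 1000                     # negative lag = gesture leads

rng = np.random.default_rng(0)
fs = 100                                         # e.g., 100 Hz motion tracking
speech = np.convolve(rng.standard_normal(1000), np.ones(20) / 20, "same")
gesture = np.roll(speech, -15)                   # gesture leads by 150 ms
print(synchrony_offset_ms(gesture, speech, fs))  # ~150.0
```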

4.
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that differs from the information conveyed in speech. Children who produce these gesture-speech “mismatches” on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture-speech mismatches when identifying numbers at the cusp of their knowledge; for example, a child incorrectly labels a set of two objects with the word “three” while simultaneously holding up two fingers. These mismatches differ from previously studied mismatches (in which the information conveyed in gesture can be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number-word instruction. We used the Give-a-Number task to measure number knowledge in 47 children (mean age = 4.1 years, SD = 0.58), and the What's on this Card task to assess whether children produced gesture-speech mismatches above their knower level. Children who were early in their number-learning trajectories (“one-knowers” and “two-knowers”) were then randomly assigned, within knower level, to one of two training conditions: a Counting condition, in which children practiced counting objects, or an Enriched Number Talk condition, which combined counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of those sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture-speech mismatches at pretest. The findings suggest that numerical gesture-speech mismatches are a reliable signal that a child is ready to profit from rich number instruction, and they provide the first evidence that cardinal number gestures have a role to play in number learning.
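To make the shape of that result concrete, here is a minimal sketch of a logistic regression with a condition-by-mismatch interaction, controlling for counting ability, fit to simulated stand-in data. The variable names and effect sizes are assumptions; nothing here reproduces the study's actual analysis.

```python
# Minimal sketch: does training condition predict number-word learning
# only for children who produced gesture-speech mismatches at pretest?
# Simulated stand-in data; all names and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 47                                    # sample size from the abstract
mismatch = rng.integers(0, 2, n)          # mismatcher at pretest (0/1)
enriched = rng.integers(0, 2, n)          # Enriched Number Talk vs. Counting
counting = rng.standard_normal(n)         # counting-ability covariate
logit = -1.0 + 2.0 * mismatch * enriched + 0.3 * counting
learned = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack(
    [mismatch, enriched, mismatch * enriched, counting]))
fit = sm.Logit(learned, X).fit(disp=0)
print(fit.params)  # interaction term carries the "only if mismatch" pattern
```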

5.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gestures and speech. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures and coded participants' speech to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains, thereby extending the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives from different research areas (coordination dynamics, complexity science, cognitive psychology) provides novel understanding of cognitive concepts in general and of gesture–speech synchronization and task difficulty in particular.
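To give a feel for the complexity-matching measure, the sketch below implements detrended fluctuation analysis (DFA-1), the monofractal core of the multifractal DFA named above; the full method additionally varies a moment order q and compares the resulting multifractal spectra of gesture and speech. The series, scales, and names are simulated stand-ins, not the study's data.

```python
# Minimal sketch of detrended fluctuation analysis (DFA-1), the
# monofractal core of the multifractal method (MF-DFA) named above.
import numpy as np

def dfa(x, scales):
    """Fluctuation F(s) per window size s; the slope of
    log F(s) vs. log s estimates the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))               # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        windows = y[: n * s].reshape(n, s)
        t = np.arange(s)
        resid = []
        for w in windows:                       # detrend each window
            coef = np.polyfit(t, w, 1)
            resid.append(np.mean((w - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    return np.array(F)

rng = np.random.default_rng(1)
series = rng.standard_normal(4096)              # stand-in for a gesture series
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(series, scales)), 1)[0]
print(f"scaling exponent alpha ~ {alpha:.2f}")  # ~0.5 for white noise
```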

6.
7.
We report on a study investigating 3- to 5-year-old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that each contained two senses of a homonym; for example, bat (flying mammal) and bat (sports equipment). They were then asked to retell these stories to a second experimenter. The data were coded for the means children used during attempts at disambiguation: speech, gesture, or a combination of the two. The results indicated that the 3-year-old children rarely disambiguated the two senses, mainly using deictic pointing gestures during attempts at disambiguation. In contrast, the 4-year-old children attempted to disambiguate the two senses more often, using a larger proportion of iconic gestures than the other children. The 5-year-old children used fewer iconic gestures than the 4-year-olds but, unlike the 3-year-olds, were able to disambiguate the senses through the verbal channel. The results highlight the value of gesture to the development of children's language and communication skills.

8.
Cognition and Instruction, 2013, 31(3): 201-219
Is the information that gesture provides about a child's understanding of a task accessible not only to experimenters who are trained in coding gesture but also to untrained observers? Twenty adults were asked to describe the reasoning of 12 different children, each videotaped responding to a Piagetian conservation task. Six of the children on the videotape produced gestures that conveyed the same information as their nonconserving spoken explanations, and 6 produced gestures that conveyed different information from their nonconserving spoken explanations. The adult observers displayed more uncertainty in their appraisals of children who produced different information in gesture and speech than in their appraisals of children who produced the same information in gesture and speech. Moreover, the adults were able to incorporate the information conveyed in the children's gestures into their own spoken appraisals of the children's reasoning. These data suggest that, even without training, adults form impressions of children's knowledge based not only on what children say with their mouths but also on what they say with their hands.

9.
Co-thought gestures are understudied compared with co-speech gestures, yet they may provide insight into cognitive functions of gestures that are independent of speech processes. A recent study with adults showed that co-thought gesticulation occurred spontaneously during mental preparation for problem solving. Moreover, co-thought gesturing (whether spontaneous or instructed) during mental preparation was effective for subsequent solving of the Tower of Hanoi under conditions of high cognitive load (i.e., when visual working memory capacity was limited and when the task was more difficult). In this preregistered study (https://osf.io/dreks/), we investigated whether co-thought gestures would also spontaneously occur and aid problem-solving processes in children (N = 74; 8-12 years old) under high-load conditions. Although children also spontaneously used co-thought gestures during mental problem solving, this did not aid their subsequent performance when physically solving the problem. If these null results are on track, co-thought gesture effects may differ between adults and children.

10.
Children's gesture production precedes and predicts language development, but the pathways linking these domains are unclear. It is possible that gesture production assists in children's developing word comprehension, which in turn supports expressive vocabulary acquisition. The present study examines this mediation pathway in a population with variability in early communicative abilities: the younger siblings of children with autism spectrum disorder (high-risk infants, HR). Participants included 92 HR infants and 28 infants at low risk (LR) for ASD. A primary caregiver completed the MacArthur-Bates Communicative Development Inventory (Fenson et al., 1993) at 12, 14, and 18 months, and HR infants received a diagnostic evaluation for ASD at 36 months. Word comprehension at 14 months mediated the relationship between 12-month gesture and 18-month word production in LR and HR infants (ab = 0.263; p < 0.01). For LR infants and for HR infants with no diagnosis or language delay, gesture was strongly associated with word comprehension (as = 0.666, 0.646, 0.561; ps < 0.01). However, this relationship did not hold for infants later diagnosed with ASD (a = 0.073; p = 0.840). This finding adds to a growing literature suggesting that children with ASD learn language differently. Furthermore, this study provides an initial step toward testing the developmental pathways by which infants transition from early actions and gestures to expressive language.
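For readers unfamiliar with the indirect effect ab reported above, here is a minimal sketch of a product-of-coefficients mediation test with a percentile bootstrap, run on simulated stand-in data; the variable names and numbers are assumptions, not the study's data.

```python
# Minimal sketch: product-of-coefficients mediation (X -> M -> Y) with a
# percentile bootstrap for the indirect effect ab. Simulated stand-ins
# for 12-month gesture (X), 14-month comprehension (M), and 18-month
# word production (Y); all numbers are assumptions.
import numpy as np

def indirect_effect(x, m, y):
    ones = np.ones_like(x)
    # path a: regress M on X
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # path b: regress Y on M, controlling for X
    b = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(0)
n = 120
gesture = rng.standard_normal(n)
comprehension = 0.6 * gesture + rng.standard_normal(n)
production = 0.5 * comprehension + rng.standard_normal(n)

ab = indirect_effect(gesture, comprehension, production)
boot = []
for _ in range(2000):                     # percentile bootstrap
    i = rng.integers(0, n, n)
    boot.append(indirect_effect(gesture[i], comprehension[i], production[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ab = {ab:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```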

11.
Previous research has shown differences in monolingual and bilingual communication. We explored whether monolingual and bilingual pre-schoolers (N = 80) differ in their ability to understand others' iconic gestures (gesture perception) and produce intelligible iconic gestures themselves (gesture production), and how these two abilities are related to differences in parental iconic gesture frequency. In a gesture perception task, the experimenter replaced the last word of every sentence with an iconic gesture. The child was then asked to choose one of four pictures that matched the gesture as well as the sentence. In a gesture production task, children were asked to indicate ‘with their hands’ to a deaf puppet which objects to select. Finally, parental gesture frequency was measured while parents answered three different questions. In the iconic gesture perception task, monolingual and bilingual children did not differ. In contrast, bilinguals produced more intelligible gestures than their monolingual peers. Finally, bilingual children's parents gestured more while they spoke than monolingual children's parents. We suggest that bilinguals' heightened sensitivity to their interaction partner supports their ability to produce intelligible gestures and results in a bilingual advantage in iconic gesture production.

12.
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects (“cat”). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the same door-opening role in word learning for children with autism spectrum disorder (ASD) and Down syndrome (DS), who show delayed vocabulary development and who differ in the strength of gesture production. To answer this question, we observed 23 18-month-old TD children, 23 30-month-old children with ASD, and 23 30-month-old children with DS 5 times over a year during parent–child interactions. Children in all 3 groups initially expressed a greater proportion of referents uniquely in gesture than in speech. Many of these unique gestures subsequently entered children's spoken vocabularies within a year—a pattern that was slightly less robust for children with DS, whose word production was the most markedly delayed. These results indicate that gesture is as fundamental to vocabulary development for children with developmental disorders as it is for TD children.

13.
We examined the effects of three different training conditions, all of which involve the motor system, on kindergarteners' mental transformation skill. We focused on three main questions. First, we asked whether training that involves making a motor movement that is relevant to the mental transformation—either concretely through action (action training) or more abstractly through gestural movements that represent the action (move-gesture training)—resulted in greater gains than training using motor movements irrelevant to the mental transformation (point-gesture training). We tested children prior to training, immediately after training (posttest), and 1 week after training (retest), and we found greater improvement in mental transformation skill in both the action and move-gesture training conditions than in the point-gesture condition, at both posttest and retest. Second, we asked whether the total gain made by retest differed depending on the abstractness of the movement-relevant training (action vs. move-gesture), and we found that it did not. Finally, we asked whether the time course of improvement differed for the two movement-relevant conditions, and we found that it did—gains in the action condition were realized immediately at posttest, with no further gains at retest; gains in the move-gesture condition were realized throughout, with comparable gains from pretest-to-posttest and from posttest-to-retest. Training that involves movement, whether concrete or abstract, can thus benefit children's mental transformation skill. However, the benefits unfold differently over time—the benefits of concrete training unfold immediately after training (online learning); the benefits of more abstract training unfold in equal steps immediately after training (online learning) and during the intervening week with no additional training (offline learning). These findings have implications for the kinds of instruction that can best support spatial learning.

14.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.

15.
Functional magnetic resonance imaging (fMRI) was used to examine differences between children (9-12 years) and adults (21-31 years) in the distribution of brain activation during word processing. Orthographic, phonologic, semantic and syntactic tasks were used in both the auditory and visual modalities. Our two principal results were consistent with the hypothesis that development is characterized by increasing specialization. Our first analysis compared activation in children versus adults separately for each modality. Adults showed more activation than children in the unimodal visual areas of middle temporal gyrus and fusiform gyrus for processing written word forms and in the unimodal auditory areas of superior temporal gyrus for processing spoken word forms. Children showed more activation than adults for written word forms in posterior heteromodal regions (Wernicke's area), presumably for the integration of orthographic and phonologic word forms. Our second analysis compared activation in the visual versus auditory modality separately for children and adults. Children showed primarily overlap of activation in brain regions for the visual and auditory tasks. Adults showed selective activation in the unimodal auditory areas of superior temporal gyrus when processing spoken word forms and selective activation in the unimodal visual areas of middle temporal gyrus and fusiform gyrus when processing written word forms.

16.
Children produce their first gestures before their first words, and their first gesture+word sentences before their first word+word sentences. These gestural accomplishments have been found not only to predate linguistic milestones, but also to predict them. Findings of this sort suggest that gesture itself might be playing a role in the language-learning process. But what role does it play? Children's gestures could elicit from their mothers the kinds of words and sentences that the children need to hear in order to take their next linguistic step. We examined maternal responses to the gestures and speech that 10 children produced during the one-word period. We found that all 10 mothers ‘translated’ their children's gestures into words, providing timely models for how one- and two-word ideas can be expressed in English. Gesture thus offers a mechanism by which children can point out their thoughts to mothers, who then calibrate their speech to those thoughts, and potentially facilitate language-learning.

17.
18.
Adults differ in the extent to which they find spending money to be distressing; “tightwads” find spending money painful, and “spendthrifts” do not find spending painful enough. This affective dimension has been reliably measured in adults and predicts a variety of important financial behaviors and outcomes (e.g., saving behavior and credit scores). Although children's financial behavior has also received attention, feelings about spending have not been studied in children, as they have in adults. We measured the spendthrift–tightwad (ST–TW) construct in children for the first time, with a sample of 5- to 10-year-old children (N = 225). Children across the entire age range were able to reliably report on their affective responses to spending and saving, and children's ST–TW scores were related to parent reports of children's temperament and financial behavior. Further, children's ST–TW scores were predictive of whether they chose to save or spend money in the lab, even after controlling for age and how much they liked the offered items. Our novel findings—that children's feelings about spending and saving can be measured from an early age and relate to their behavior with money—are discussed with regard to theoretical and practical implications.

19.
Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early cues and then updated when more information arises. These studies have uniformly shown evidence for immediate integration for a variety of phonetic distinctions. We attempted to extend this to fricatives, a class of speech sounds which requires not only temporal integration of asynchronous cues (the frication, followed by the formant transitions 150–350 ms later), but also integration across different frequency bands and compensation for contextual factors like coarticulation. Eye movements in the visual world paradigm showed clear evidence for a memory buffer. Results were replicated in five experiments, ruling out methodological factors and tying the release of the buffer to the onset of the vowel. These findings support a general auditory account for speech by suggesting that the acoustic nature of particular speech sounds may have large effects on how they are processed. It also has major implications for theories of auditory and speech perception by raising the possibility of an encapsulated memory buffer in early auditory processing.

20.
Human languages typically employ a variety of spatial metaphors for time (e.g., “I'm looking forward to the weekend”). The metaphorical grounding of time in space is also evident in gesture. The gestures that are performed when talking about time bolster the view that people sometimes think about regions of time as if they were locations in space. However, almost nothing is known about the development of metaphorical gestures for time, despite keen interest in the origins of space–time metaphors. In this study, we examined the gestures that English-speaking 6-to-7-year-olds, 9-to-11-year-olds, 13-to-15-year-olds, and adults produced when talking about time. Participants were asked to explain the difference between pairs of temporal adverbs (e.g., “tomorrow” versus “yesterday”) and to use their hands while doing so. There was a gradual increase across age groups in the propensity to produce spatial metaphorical gestures when talking about time. However, even a substantial majority of 6-to-7-year-old children produced a spatial gesture on at least one occasion. Overall, participants produced fewer gestures in the sagittal (front-back) axis than in the lateral (left-right) axis, and this was particularly true for the youngest children and adolescents. Gestures that were incongruent with the prevailing norms of space–time mappings among English speakers (leftward and backward for past; rightward and forward for future) gradually decreased with increasing age. This was true for both the lateral and sagittal axis. This study highlights the importance of metaphoricity in children's understanding of time. It also suggests that, by 6 to 7 years of age, culturally determined representations of time have a strong influence on children's spatial metaphorical gestures.
