Similar Documents
20 similar documents found (search time: 31 ms)
1.
The present study investigated the degree to which an infant's use of simultaneous gesture–speech combinations during controlled social interactions predicts later language development. Nineteen infants participated in a declarative pointing task involving three different social conditions: two experimental conditions, (a) available, when the adult was visually attending to the infant but did not attend to the object of reference jointly with the child, and (b) unavailable, when the adult was visually attending to neither the infant nor the object; and (c) a baseline condition, when the adult jointly engaged with the infant's object of reference. At 12 months of age, measures of infants' speech-only productions, pointing-only gestures, and simultaneous pointing–speech combinations were obtained in each of the three social conditions. Each child's lexical and grammatical output was assessed at 18 months of age through parental report. Results revealed a significant interaction between social condition and type of communicative production. Specifically, only simultaneous pointing–speech combinations increased in frequency during the available condition compared to baseline, while no differences were found for speech-only and pointing-only productions. Moreover, simultaneous pointing–speech combinations in the available condition at 12 months positively correlated with lexical and grammatical development at 18 months of age. The ability to selectively use this multimodal communicative strategy to engage the adult in joint attention, by drawing his or her attention toward an unseen event or object, reveals 12-month-olds' clear understanding of referential cues that are relevant for language development. This strategy for successfully initiating and maintaining joint attention is related to language development because it increases learning opportunities arising from social interactions.
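The key predictive claim in this abstract is a simple longitudinal correlation: frequency of pointing–speech combinations in one condition at 12 months against parent-reported language scores at 18 months. A minimal sketch of that kind of test is given below; the variable names and values are invented placeholders, not the study's data.

```python
# Hedged sketch: correlating 12-month pointing-speech combination counts with
# 18-month vocabulary scores. All values below are invented placeholders.
import numpy as np
from scipy import stats

pointing_speech_12mo = np.array([3, 0, 5, 2, 7, 1, 4, 6, 2, 3, 0, 5, 8, 1, 2, 4, 3, 6, 2])
vocabulary_18mo = np.array([40, 12, 95, 30, 120, 20, 60, 88, 35, 52, 15, 70, 140, 25, 33, 64, 49, 90, 31])

r, p = stats.pearsonr(pointing_speech_12mo, vocabulary_18mo)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```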

2.
3.
We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3‐ and 5‐year‐old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3‐year‐olds, 5‐year‐olds, and adults were presented with an iconic gesture, a spoken sentence, or a combination of the two on a computer screen, and they were instructed to select a photograph that best matched the message. The 3‐year‐olds did not integrate information in speech and gesture, but 5‐year‐olds and adults did. In Experiment 2, 3‐year‐old children were presented with the same speech and gesture as in Experiment 1, but produced live by an experimenter. When presented live, 3‐year‐olds could integrate speech and gesture. We concluded that development of the integration ability is part of the broader developmental shift; however, live presentation facilitates the nascent integration ability in 3‐year‐olds.

4.
Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives. A key aspect of narrative development is tracking story referents, specifying who did what to whom. Adults track referents primarily in speech by introducing a story character with a noun phrase and then following the same referent with a pronoun—a strategy that presents challenges for young children. We ask whether young children can track story referents initially in communications that combine gesture and speech, using character viewpoint in gesture to introduce new story characters, before they are able to do so exclusively in speech using nouns followed by pronouns. Our analysis of 4- to 6-year-old children showed that children introduced new characters in gesture+speech combinations with character-viewpoint gestures at an earlier age than they conveyed the same referents exclusively in speech with nominal phrases followed by pronouns. The results show that children rely on viewpoint in gesture to convey who did what to whom as they take their first steps into narratives.

5.
Gesture–speech synchrony re‐stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual–motor pathology that hand gestures are a special kind of action that does not require closed‐loop re‐afferent feedback to maintain synchrony with speech. In the current pre‐registered within‐subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture–speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending, McNeill's original results, we obtain evidence that (a) gesture–speech synchrony is more stable under DAF versus NO DAF (i.e., an increased coupling effect), (b) gesture and speech variably entrain to the external auditory delay, as indicated by a consistent shift in gesture–speech synchrony offsets (i.e., an entrainment effect), and (c) the coupling effect and the entrainment effect are co‐dependent. We suggest, therefore, that gesture–speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.
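The synchrony offsets described here are time lags between motion-tracked hand movement and the speech signal. A rough sketch of how such an offset can be estimated with a cross-correlation follows; the signals, sampling rate, and the injected 150 ms lag are simulated placeholders, not the study's analysis pipeline.

```python
# Hedged sketch: estimate a gesture-speech offset by cross-correlating a
# motion-tracked hand-speed series with the speech amplitude envelope.
# Signals, sampling rate, and the injected 150 ms lag are placeholders.
import numpy as np

fs = 100                                           # Hz, assumed common sampling rate
rng = np.random.default_rng(0)
speech_env = np.convolve(rng.random(2000), np.ones(20) / 20, mode="same")
hand_speed = np.roll(speech_env, int(0.15 * fs))   # hand lags speech by ~150 ms

a = speech_env - speech_env.mean()
b = hand_speed - hand_speed.mean()
xcorr = np.correlate(b, a, mode="full")
lags = np.arange(-len(a) + 1, len(b))              # lag in samples
offset_ms = 1000 * lags[np.argmax(xcorr)] / fs     # positive: hand lags speech
print(f"estimated gesture-speech offset: {offset_ms:.0f} ms")
```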

6.
Speech directed towards young children ("motherese") is subject to consistent, systematic modifications. Recent research suggests that gesture directed towards young children is similarly modified ("gesturese"). It has been suggested that gesturese supports speech, thereby scaffolding communicative development (the facilitative interactional theory). Alternatively, maternal gestural modification may be a consequence of the semantic simplicity of interaction with infants (the interactional artefact theory). The gesture patterns of 12 English mothers were observed with their 20-month-old infants while engaged in two tasks, free play and a counting task, designed to differentially tap into scaffolding. Gestures accounted for 29% of total maternal communicative behaviour. English mothers employed mainly concrete deictic gestures (e.g., pointing) that supported speech by disambiguating and emphasizing the verbal utterance. Maternal gesture rate and the informational gesture-speech relationship were consistent across tasks, supporting the interactional artefact theory. This distinctive pattern of gesture use by the English mothers was similar to that reported for American and Italian mothers, providing support for universality. Child-directed gestures are not redundant in relation to child-directed speech; rather, both are used by mothers to support their communicative acts with infants.

7.
In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event-related potentials (ERPs) were recorded while participants viewed video clips of an actor uttering metaphorical expressions and producing bodily gestures that were congruent or incongruent with the metaphorical meaning of those expressions. This mode of stimulus presentation allows a more ecological approach to meaning integration. When ERPs were calculated using the gesture stroke as the time-locking event, gesture incongruity with the metaphorical expression modulated the amplitude of the N400 and of the late positive complex (LPC). This suggests that gestural and speech information are combined online to make sense of the interlocutor's linguistic production at an early stage of metaphor comprehension. Our data favor the idea that meaning construction is globally integrative and highly context-sensitive.
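Time-locking to the gesture stroke means cutting EEG epochs around each stroke onset, baseline-correcting them, and comparing average amplitude in the N400 window across congruent and incongruent trials. The sketch below illustrates that logic with a single simulated channel; the sampling rate, event times, and window bounds are placeholder assumptions, not the study's parameters.

```python
# Hedged sketch: epochs time-locked to gesture-stroke onsets and an N400-window
# comparison. The EEG data, sampling rate, and event times are simulated.
import numpy as np

fs = 500                                     # Hz (assumption)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(60 * fs)           # one channel, 60 s of placeholder data
stroke_onsets = np.arange(2, 58, 2.0)        # stroke onset times in seconds (placeholder)
congruent = np.tile([True, False], stroke_onsets.size // 2)

n_base = int(0.2 * fs)                       # 200 ms pre-stroke baseline
n_samp = int(1.0 * fs)                       # epoch: -200 to +800 ms

def epoch(onset_s):
    """One baseline-corrected epoch around a gesture-stroke onset."""
    i0 = int(round((onset_s - 0.2) * fs))
    seg = eeg[i0:i0 + n_samp].copy()
    return seg - seg[:n_base].mean()

epochs = np.array([epoch(t) for t in stroke_onsets])   # trials x samples
times = np.arange(n_samp) / fs - 0.2
n400_win = (times >= 0.3) & (times <= 0.5)

erp_diff = epochs[~congruent].mean(0) - epochs[congruent].mean(0)
print("N400-window difference (incongruent - congruent):", erp_diff[n400_win].mean())
```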

8.
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8‐ to 10‐year‐old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture–speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., “pet” + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., “bird” + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post‐test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture–speech integration in children overlaps with—but is broader than—the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.

9.
Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture–speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gestures and speech. In the current study, we applied three different perspectives to investigate gesture–speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants' speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal detrended fluctuation analysis was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture–speech synchronization in all three domains. We thereby extended the phenomenon of gesture–speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives, originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology), provides novel understanding about cognitive concepts in general and about gesture–speech synchronization and task difficulty in particular.
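Complexity matching is assessed here with multifractal detrended fluctuation analysis. The sketch below shows only the simpler, monofractal DFA core of that family of methods, applied to two placeholder series; it illustrates the fluctuation-scaling idea and is not the study's MFDFA implementation.

```python
# Hedged sketch: monofractal detrended fluctuation analysis (DFA), a simplified
# relative of the MFDFA used in the study. Input series are placeholders.
import numpy as np

def dfa(series, scales):
    """Return the DFA scaling exponent (slope of log F(s) vs. log s)."""
    profile = np.cumsum(series - np.mean(series))         # integrated series
    fluct = []
    for s in scales:
        rms = []
        for w in range(len(profile) // s):
            seg = profile[w * s:(w + 1) * s]
            x = np.arange(s)
            trend = np.polyval(np.polyfit(x, seg, 1), x)   # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
gesture_series = rng.standard_normal(4096)   # e.g., inter-gesture intervals (placeholder)
speech_series = rng.standard_normal(4096)    # e.g., inter-syllable intervals (placeholder)
scales = [16, 32, 64, 128, 256, 512]

# Complexity matching is often summarized as similarity of the two scaling exponents.
print(dfa(gesture_series, scales), dfa(speech_series, scales))
```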

10.
The gestures children produce predict the early stages of spoken language development. Here we ask whether gesture is a global predictor of language learning, or whether particular gestures predict particular language outcomes. We observed 52 children interacting with their caregivers at home, and found that gesture use at 18 months selectively predicted lexical versus syntactic skills at 42 months, even with early child speech controlled. Specifically, the number of different meanings conveyed in gesture at 18 months predicted vocabulary at 42 months, but the number of gesture+speech combinations did not. In contrast, the number of gesture+speech combinations, particularly those conveying sentence‐like ideas, produced at 18 months predicted sentence complexity at 42 months, but meanings conveyed in gesture did not. We can thus predict particular milestones in vocabulary and sentence complexity at 42 months by watching how children move their hands two years earlier.
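The phrase "even with early child speech controlled" refers to testing the gesture-outcome relation after removing the contribution of early speech. One simple way to illustrate that is a partial correlation, sketched below with simulated placeholder data; this is not the authors' statistical model.

```python
# Hedged sketch: gesture at 18 months predicting vocabulary at 42 months with
# early speech controlled, via a partial correlation. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 52
speech_18mo = rng.normal(size=n)                                  # covariate
gesture_meanings_18mo = 0.5 * speech_18mo + rng.normal(size=n)    # predictor
vocab_42mo = 0.6 * gesture_meanings_18mo + 0.3 * speech_18mo + rng.normal(size=n)

def residualize(y, x):
    """Residuals of y after removing a linear effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r, p = stats.pearsonr(residualize(gesture_meanings_18mo, speech_18mo),
                      residualize(vocab_42mo, speech_18mo))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```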

11.
Children differ in how quickly they reach linguistic milestones. Boys typically produce their first multi‐word sentences later than girls do. We ask here whether there are sex differences in children's gestures that precede, and presage, these sex differences in speech. To explore this question, we observed 22 girls and 18 boys every 4 months as they progressed from one‐word speech to multi‐word speech. We found that boys not only produced speech + speech (S+S) combinations ('drink juice') 3 months later than girls, but they also produced gesture + speech (G+S) combinations expressing the same types of semantic relations ('eat' + point at cookie) 3 months later than girls. Because G+S combinations are produced earlier than S+S combinations, children's gestures provide the first sign that boys are likely to lag behind girls in the onset of sentence constructions.

12.
In this study, we investigated exploration and language development, particularly whether preliminary object play mediates the role of exploration in gesture and speech production. We followed 27 infants, aged 8–17 months, and gathered data on the frequency of their exploration, their preliminary functional acts with single or multiple objects, and their communicative behaviors (e.g., gesturing and single-word utterances). The results of our path analysis indicated that exploration had a direct effect on single-object play, which, in turn, affected gesturing and advanced object play. Gesturing, as well as single- and multi-object play, affected speech production. These findings suggest that exploration is associated with language development. This association may be facilitated by object-play milestones in which infants recall an object's function, which strengthens their memory and representation skills. Further, recalling how caregivers use an object may encourage an infant's overall tendency to imitate, which is important for learning how to communicate with gestures and words.
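The mediation claim (exploration leading to single-object play, which in turn leads to gesturing) is the kind of indirect effect a path analysis estimates. A bare-bones product-of-coefficients sketch with simulated placeholder data follows; it stands in for, and is much simpler than, the path model actually fitted in the study.

```python
# Hedged sketch: a simple product-of-coefficients mediation test for
# exploration -> single-object play -> gesturing. Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n = 27
exploration = rng.normal(size=n)
object_play = 0.7 * exploration + rng.normal(scale=0.5, size=n)   # mediator
gesturing = 0.6 * object_play + rng.normal(scale=0.5, size=n)     # outcome

# Path a: mediator regressed on the predictor.
a = np.polyfit(exploration, object_play, 1)[0]

# Path b: outcome regressed on the mediator, controlling for the predictor.
X = np.column_stack([np.ones(n), object_play, exploration])
b = np.linalg.lstsq(X, gesturing, rcond=None)[0][1]

print(f"indirect (mediated) effect a*b = {a * b:.2f}")
```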

13.
14.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners did. However, only native, but not non‐native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze more at gestures because it may be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.
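The enhancement measure here is a difference in reaction-time cost: how much faster cued recall is with gesture than without, within each speech condition. A toy illustration follows; the condition means are invented numbers, not the study's results.

```python
# Hedged sketch of the gestural-enhancement measure: RT benefit of gesture
# within each speech condition. Condition means are invented placeholders.
rt_ms = {
    ("clear", "no_gesture"): 820.0,
    ("clear", "gesture"): 805.0,
    ("degraded", "no_gesture"): 1150.0,
    ("degraded", "gesture"): 990.0,
}

def enhancement(speech_condition):
    """Reaction-time benefit (ms) of gesture within one speech condition."""
    return rt_ms[(speech_condition, "no_gesture")] - rt_ms[(speech_condition, "gesture")]

print("clear:", enhancement("clear"), "ms")
print("degraded:", enhancement("degraded"), "ms")   # largest benefit when speech is degraded
```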

15.
Talking and Thinking With Our Hands
ABSTRACT— When people talk, they gesture. Typically, gesture is produced along with speech and forms a fully integrated system with that speech. However, under unusual circumstances, gesture can be produced on its own, without speech. In these instances, gesture must take over the full burden of communication usually shared by the two modalities. What happens to gesture in this very different context? One possibility is that there are no differences in the forms gesture takes with speech and without it—that gesture is gesture no matter what its function. But that is not what we find. When gesture is produced on its own and assumes the full burden of communication, it takes on a language-like form. In contrast, when gesture is produced in conjunction with speech and shares the burden of communication with that speech, it takes on an unsegmented, imagistic form, often conveying information not found in speech. As such, gesture sheds light on how people think and can even play a role in changing those thoughts. Gesture can thus be part of language or it can itself be language, altering its form to fit its function.

16.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat'+point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.

17.
Gaze alternation (GA) is considered a hallmark of pointing in human infants, a sign of intentionality underlying the gesture. GA has occasionally been observed in great apes, and reported only anecdotally in a few monkeys. Three squirrel monkeys that had previously learned to reach toward out-of-reach food in the presence of a human partner were videotaped while the latter visually attended to the food, a distractor object, or the ceiling. Frame-by-frame video analysis revealed that, especially when reaching toward the food, the monkeys rapidly and repeatedly switched between looking at the partner's face and the food. This type of GA suggests that the monkeys were communicating with the partner. However, the monkeys' behavior was not influenced by changes in the partner's focus of attention.

18.
The present study investigates whether knowledge about the intentional relationship between gesture and speech influences controlled processes when integrating the two modalities at comprehension. Thirty-five adults watched short videos of gesture and speech that conveyed semantically congruous and incongruous information. In half of the videos, participants were told that the two modalities were intentionally coupled (i.e., produced by the same communicator), and in the other half, they were told that the two modalities were not intentionally coupled (i.e., produced by different communicators). When participants knew that the same communicator produced the speech and gesture, there was a larger bilateral frontal and central N400 effect to words that were semantically incongruous versus congruous with gesture. However, when participants knew that different communicators produced the speech and gesture--that is, when gesture and speech were not intentionally meant to go together--the N400 effect was present only in right-hemisphere frontal regions. The results demonstrate that pragmatic knowledge about the intentional relationship between gesture and speech modulates controlled neural processes during the integration of the two modalities.

19.
GESTURE, SPEECH, AND LEXICAL ACCESS:
Abstract— In a within-subjects design that varied whether speakers were allowed to gesture and the difficulty of lexical access, speakers were videotaped as they described animated action cartoons to a listener. When speakers were permitted to gesture, they gestured more often during phrases with spatial content than during phrases with other content. Speech with spatial content was less fluent when speakers could not gesture than when they could; speech with nonspatial content was not affected by gesture condition. Preventing gesturing increased the relative frequency of nonjuncture filled pauses in speech with spatial content, but not in speech with other content. Overall, the effects of preventing speakers from gesturing resembled those of increasing the difficulty of lexical access by other means, except that the effects of gesture restriction were specific to speech with spatial content. The findings support the hypothesis that gestural accompaniments to spontaneous speech can facilitate access to the mental lexicon.

20.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co‐speech gesture), not without speech (silent gesture). We ask whether the cross‐linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three‐dimensional motion scenes. We found an effect of language on co‐speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.
