Similar documents
A total of 20 similar documents were retrieved.
1.
This study looks at whether there is a relationship between mother and infant gesture production. Specifically, it addresses the extent of articulation in the maternal gesture repertoire and how closely it supports the infant production of gestures. Eight Spanish mothers and their 1- and 2-year-old babies were studied during 1 year of observations. Maternal and child verbal production, gestures and actions were recorded at their homes on five occasions while performing daily routines. Results indicated that mother and child deictic gestures (pointing and instrumental) and representational gestures (symbolic and social) were very similar at each age group and did not decline across groups. Overall, deictic gestures were more frequent than representational gestures. Maternal adaptation to developmental changes is specific for gesturing but not for acting. Maternal and child speech were related positively to mother and child pointing and representational gestures, and negatively to mother and child instrumental gestures. Mother and child instrumental gestures were positively related to action production, after maternal and child speech was partialled out. Thus, language plays an important role for dyadic communicative activities (gesture–gesture relations) but not for dyadic motor activities (gesture–action relations). Finally, a comparison of the growth curves across sessions showed a closer correspondence for mother–child deictic gestures than for representational gestures. Overall, the results point to the existence of an articulated maternal gesture input that closely supports the child gesture production. Copyright © 2006 John Wiley & Sons, Ltd.

2.
Prosodic cues drive speech segmentation and guide syllable discrimination. However, less is known about the attentional mechanisms underlying an infant's ability to benefit from prosodic cues. This study investigated how 6- to 8-month-old Italian infants allocate their attention to strong vs. weak syllables after familiarization with four repeats of a single CV sequence with alternating strong and weak syllables (different syllables on each trial). In the discrimination test-phase, either the strong or the weak syllable was replaced by a pure tone matching the suprasegmental characteristics of the segmental syllable, i.e., duration, loudness and pitch, whereas the familiarized stimulus was presented as a control. By using an eye-tracker, attention deployment (fixation times) and cognitive resource allocation (pupil dilation) were measured under conditions of high and low saliency that corresponded to the strong and weak syllabic changes, respectively. Italian-learning infants were found to look longer and, as indexed by pupil dilation, to pay more attention to strong-syllable replacements than to weak-syllable replacements, relative to the control condition. These data offer insights into the strategies used by infants to deploy their attention towards segmental units guided by salient prosodic cues, like the stress pattern of syllables, during speech segmentation.

3.
Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives. A key aspect of narrative development is tracking story referents, specifying who did what to whom. Adults track referents primarily in speech by introducing a story character with a noun phrase and then following the same referent with a pronoun—a strategy that presents challenges for young children. We ask whether young children can track story referents initially in communications that combine gesture and speech by using character viewpoint in gesture to introduce new story characters, before they are able to do so exclusively in speech using nouns followed by pronouns. Our analysis of 4- to 6-year-old children showed that children introduced new characters in gesture+speech combinations with character viewpoint gestures at an earlier age than conveying the same referents exclusively in speech with the use of nominal phrases followed by pronouns. Results show that children rely on viewpoint in gesture to convey who did what to whom as they take their first steps into narratives.

4.
Gestures and speech parallel or complement each other semantically and pragmatically. Previous studies have postulated a similarly close correlation between intonation and gesture. This study microanalyzed the gestures of three subjects frame by frame and matched the movements with direction of pitch changes for each syllable in 90 intonation groups. The speech was analyzed using spectrographs. Coordinating direction of pitch and manual gesture movements is an option available to speakers, but it is not biologically mandated.

5.
We investigated how two cues to contrast—beat gesture and contrastive pitch accenting—affect comprehenders' cognitive load during processing of spoken referring expressions. In two visual-world experiments, we orthogonally manipulated the presence of these cues and their felicity, or fit, with the local (sentence-level) referential context in critical referring expressions while comprehenders' task-evoked pupillary responses (TEPRs) were examined. In Experiment 1, beat gesture and contrastive accenting always matched the referential context of filler referring expressions and were therefore relatively felicitous on the global (experiment) level, whereas in Experiment 2, beat gesture and contrastive accenting never fit the referential context of filler referring expressions and were therefore infelicitous on the global level. The results revealed that both beat gesture and contrastive accenting increased comprehenders' cognitive load. For beat gesture, this increase in cognitive load was driven by both local and global infelicity. For contrastive accenting, this increase in cognitive load was unaffected when cues were globally felicitous but exacerbated when cues were globally infelicitous. Together, these results suggest that comprehenders' cognitive resources are taxed by processing infelicitous use of beat gesture and contrastive accenting to convey contrast on both the local and global levels.

6.
Understanding the context for children's social learning and language acquisition requires consideration of caregivers' multi-modal (speech, gesture) messages. Though young children can interpret both manual and head gestures, little research has examined the communicative input that children receive via parents' head gestures. We longitudinally examined the frequency and communicative functions of mothers' head nodding and head shaking gestures during laboratory play sessions for 32 mother–child dyads, when the children were 14, 20, and 30 months of age. The majority of mothers produced head nods more frequently than head shakes. Both gestures contributed to mothers' verbal attempts at behavior regulation and dialog. Mothers' head nods primarily conveyed agreement with, and attentiveness to, children's utterances, and accompanied affirmative statements and yes/no questions. Mothers' head shakes primarily conveyed prohibitions and statements with negations. Changes over time appeared to reflect corresponding developmental changes in social and communicative dimensions of caregiver–child interaction. Directions for future research are discussed regarding the role of head gesture input in socialization and in supporting language development.

7.
Children produce their first gestures before their first words, and their first gesture+word sentences before their first word+word sentences. These gestural accomplishments have been found not only to predate linguistic milestones, but also to predict them. Findings of this sort suggest that gesture itself might be playing a role in the language-learning process. But what role does it play? Children's gestures could elicit from their mothers the kinds of words and sentences that the children need to hear in order to take their next linguistic step. We examined maternal responses to the gestures and speech that 10 children produced during the one-word period. We found that all 10 mothers 'translated' their children's gestures into words, providing timely models for how one- and two-word ideas can be expressed in English. Gesture thus offers a mechanism by which children can point out their thoughts to mothers, who then calibrate their speech to those thoughts, and potentially facilitate language-learning.

8.
《认知与教导》(Cognition and Instruction), 2013, 31(3): 201–219
Is the information that gesture provides about a child's understanding of a task accessible not only to experimenters who are trained in coding gesture but also to untrained observers? Twenty adults were asked to describe the reasoning of 12 different children, each videotaped responding to a Piagetian conservation task. Six of the children on the videotape produced gestures that conveyed the same information as their nonconserving spoken explanations, and 6 produced gestures that conveyed different information from their nonconserving spoken explanations. The adult observers displayed more uncertainty in their appraisals of children who produced different information in gesture and speech than in their appraisals of children who produced the same information in gesture and speech. Moreover, the adults were able to incorporate the information conveyed in the children's gestures into their own spoken appraisals of the children's reasoning. These data suggest that, even without training, adults form impressions of children's knowledge based not only on what children say with their mouths but also on what they say with their hands.

9.
People with aphasia use gestures not only to communicate relevant content but also to compensate for their verbal limitations. The Sketch Model (De Ruiter, 2000) assumes a flexible relationship between gesture and speech with the possibility of a compensatory use of the two modalities. In the successor of the Sketch Model, the AR-Sketch Model (De Ruiter, 2017), the relationship between iconic gestures and speech is no longer assumed to be flexible and compensatory, but instead iconic gestures are assumed to express information that is redundant to speech. In this study, we evaluated the contradictory predictions of the Sketch Model and the AR-Sketch Model using data collected from people with aphasia as well as a group of people without language impairment. We only found compensatory use of gesture in the people with aphasia, whereas the people without language impairments made very little compensatory use of gestures. Hence, the people with aphasia gestured according to the prediction of the Sketch Model, whereas the people without language impairment did not. We conclude that aphasia fundamentally changes the relationship of gesture and speech.

10.
In the early stages of word learning, children demonstrate considerable flexibility in the type of symbols they will accept as object labels. However, around the 2nd year, as children continue to gain language experience, they become focused on more conventional symbols (e.g., words) as opposed to less conventional symbols (e.g., gestures). During this period of symbolic narrowing, the degree to which children are able to learn other types of labels, such as arbitrary gestures, remains a topic of debate. Thus, the purpose of the current set of experiments was to determine whether a multimodal label (word + gesture) could facilitate 26-month-olds' ability to learn an arbitrary gestural label. We hypothesized that the multimodal label would exploit children's focus on words, thereby increasing their willingness to interpret the gestural label. To test this hypothesis, we conducted two experiments. In Experiment 1, 26-month-olds were trained with a multimodal label (word + gesture) and tested on their ability to map and generalize both the arbitrary gesture and the multimodal label to familiar and novel objects. In Experiment 2, 26-month-olds were trained and tested with only the gestural label. The findings revealed that 26-month-olds are able to map and generalize an arbitrary gesture when it is presented multimodally with a word, but not when it is presented in isolation. Furthermore, children's ability to learn the gestural labels was positively related to their reported productive vocabulary, providing additional evidence that children's focus on words actually helped, not hindered, their gesture learning.

11.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3–4 years) both when the toys are present and when absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch and intensity) of caregivers’ productions of 6529 toy labels. We found that unknown labels were spoken with significantly slower speaking rate, wider pitch and intensity range than known labels, especially in the first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used slower speaking rate and larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents louder with higher mean pitch when toys were present than when toys were absent. Crucially, caregivers’ mean pitch of unknown words and the degree of mean pitch modulation for unknown words relative to known words (pitch ratio) predicted children's immediate word learning and vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use slower speaking rate, wider pitch and intensity range when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly with larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.
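The pitch-ratio measure described in the abstract above (mean pitch of unknown-word tokens relative to known-word tokens) reduces to simple per-token summaries averaged within a caregiver. Below is a minimal Python sketch of that kind of computation; the `LabelToken` fields, the `speaking_rate` and `pitch_ratio` helpers, and the example values are invented for illustration and are not the ECOLANG annotations or the authors' analysis code.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class LabelToken:
    word: str
    known: bool        # known vs. unknown to the child (assumed annotation field)
    mean_f0_hz: float  # mean pitch of this token
    duration_s: float
    n_syllables: int

def speaking_rate(tok: LabelToken) -> float:
    """Syllables per second for one label token."""
    return tok.n_syllables / tok.duration_s

def pitch_ratio(tokens: list) -> float:
    """Mean pitch of unknown-word tokens divided by mean pitch of known-word tokens.
    Values above 1 indicate pitch raising for unknown words."""
    unknown = [t.mean_f0_hz for t in tokens if not t.known]
    known = [t.mean_f0_hz for t in tokens if t.known]
    return mean(unknown) / mean(known)

# Made-up tokens for one caregiver, for illustration only.
tokens = [
    LabelToken("ball",   known=True,  mean_f0_hz=210.0, duration_s=0.40, n_syllables=1),
    LabelToken("ball",   known=True,  mean_f0_hz=205.0, duration_s=0.38, n_syllables=1),
    LabelToken("abacus", known=False, mean_f0_hz=245.0, duration_s=0.95, n_syllables=3),
    LabelToken("abacus", known=False, mean_f0_hz=238.0, duration_s=0.90, n_syllables=3),
]
print("pitch ratio (unknown/known):", round(pitch_ratio(tokens), 2))
print("speaking rate, unknown words (syll/s):",
      round(mean(speaking_rate(t) for t in tokens if not t.known), 2))
```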

12.
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the same door-opening role in word learning for children with autism spectrum disorder (ASD) and Down syndrome (DS), who show delayed vocabulary development and who differ in the strength of gesture production. To answer this question, we observed 23 18-month-old TD children, 23 30-month-old children with ASD, and 23 30-month-old children with DS 5 times over a year during parent–child interactions. Children in all 3 groups initially expressed a greater proportion of referents uniquely in gesture than in speech. Many of these unique gestures subsequently entered children's spoken vocabularies within a year—a pattern that was slightly less robust for children with DS, whose word production was the most markedly delayed. These results indicate that gesture is as fundamental to vocabulary development for children with developmental disorders as it is for TD children.

13.
In order to produce a coherent narrative, speakers must identify the characters in the tale so that listeners can figure out who is doing what to whom. This paper explores whether speakers use gesture, as well as speech, for this purpose. English speakers were shown vignettes of two stories and asked to retell the stories to an experimenter. Their speech and gestures were transcribed and coded for referent identification. A gesture was considered to identify a referent if it was produced in the same location as the previous gesture for that referent. We found that speakers frequently used gesture location to identify referents. Interestingly, however, they used gesture most often to identify referents that were also uniquely specified in speech. Lexical specificity in referential expressions in speech thus appears to go hand-in-hand with specification in referential expressions in gesture.

14.
Variation in how frequently caregivers engage with their children is associated with variation in children's later language outcomes. One explanation for this link is that caregivers use both verbal behaviors, such as labels, and non-verbal behaviors, such as gestures, to help children establish reference to objects or events in the world. However, few studies have directly explored whether language outcomes are more strongly associated with referential behaviors that are expressed verbally, such as labels, or non-verbally, such as gestures, or whether both are equally predictive. Here, we observed caregivers from 42 Spanish-speaking families in the US engage with their 18-month-old children during 5-min lab-based, play sessions. Children's language processing speed and vocabulary size were assessed when children were 25 months. Bayesian model comparisons assessed the extent to which the frequencies of caregivers’ referential labels, referential gestures, or labels and gestures together, were more strongly associated with children's language outcomes than a model with caregiver total words, or overall talkativeness. The best-fitting models showed that children who heard more referential labels at 18 months were faster in language processing and had larger vocabularies at 25 months. Models including gestures, or labels and gestures together, showed weaker fits to the data. Caregivers’ total words predicted children's language processing speed, but predicted vocabulary size less well. These results suggest that the frequency with which caregivers of 18-month-old children use referential labels, more so than referential gestures, is a critical feature of caregiver verbal engagement that contributes to language processing development and vocabulary growth.

Research Highlights

  • We examined the frequency of referential communicative behaviors, via labels and/or gestures, produced by caregivers during a 5-min play interaction with their 18-month-old children.
  • We assessed predictive relations between labels, gestures, their combination, as well as total words spoken, and children's processing speed and vocabulary growth at 25 months.
  • Bayesian model comparisons showed that caregivers’ referential labels at 18 months best predicted both 25-month vocabulary measures, although total words also predicted later processing speed.
  • Frequent use of referential labels by caregivers, more so than referential gestures, is a critical feature of communicative behavior that supports children's later vocabulary learning.
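The model-comparison logic in the abstract above (asking whether referential labels, referential gestures, their combination, or total words best predict child outcomes) can be illustrated with nested regressions compared by an information criterion. The sketch below uses BIC over ordinary least-squares fits as a rough stand-in for the Bayesian model comparison the authors report; the data are simulated and the variable names are invented, so it is a sketch of the general technique rather than the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 42  # same number of dyads as in the study; the data themselves are simulated

# Simulated caregiver measures (hypothetical variable names).
labels   = rng.poisson(20, n).astype(float)   # referential labels per session
gestures = rng.poisson(8, n).astype(float)    # referential gestures per session
total    = labels * 6 + rng.poisson(80, n)    # total words, correlated with labels
vocab    = 5 * labels + rng.normal(0, 25, n)  # child vocabulary, driven by labels here

def bic(y, predictors):
    """BIC of an ordinary least-squares fit with an intercept (lower is better)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return X.shape[1] * np.log(len(y)) - 2 * loglik

models = {
    "labels only":       [labels],
    "gestures only":     [gestures],
    "labels + gestures": [labels, gestures],
    "total words only":  [total],
}
for name, preds in models.items():
    print(f"{name:18s} BIC = {bic(vocab, preds):7.1f}")
```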

15.
English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress as opposed to statistical words. This was interpreted as evidence that 11-month-olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11-month-olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non-initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.
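The "statistical cues" in this paradigm are standardly operationalized as transitional probabilities between adjacent syllables: syllable pairs inside a word recur reliably, while pairs spanning a word boundary do not. A minimal sketch of that computation follows; the mini-lexicon and the stream length are invented for illustration and are not the actual stimuli from this study or from Johnson and Jusczyk (2001).

```python
import random
from collections import Counter

# Hypothetical three-syllable "words" standing in for an artificial-language lexicon.
lexicon = ["bidaku", "padoti", "golabu"]

def syllabify(stream):
    """Split a CV stream into two-letter syllables (every syllable here is a CV pair)."""
    return [stream[i:i + 2] for i in range(0, len(stream), 2)]

# Build a familiarization stream by concatenating randomly chosen words.
random.seed(0)
stream = "".join(random.choice(lexicon) for _ in range(300))
sylls = syllabify(stream)

# Transitional probability TP(x -> y) = count(xy) / count(x).
pair_counts = Counter(zip(sylls, sylls[1:]))
first_counts = Counter(sylls[:-1])

def tp(x, y):
    return pair_counts[(x, y)] / first_counts[x] if first_counts[x] else 0.0

# Within-word transitions have TP near 1.0; transitions that span a word boundary
# are much lower. This is the distributional cue pitted against stress cues.
print("within word :", round(tp("bi", "da"), 2))   # "bi" -> "da" inside "bidaku"
print("across words:", round(tp("ku", "pa"), 2))   # "ku" ends a word, "pa" starts another
```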

16.
于文勃, 梁丹丹. 《心理科学进展》(Advances in Psychological Science), 2018, 26(10): 1765–1774
Words are the basic structural units of language, and segmenting the speech stream into words is a key step in language processing. Cues for word segmentation in spoken language come from three sources: phonology, semantics, and syntax. Phonological cues include probabilistic (distributional) information, phonotactic rules, and prosodic information; prosodic information in turn covers word stress, duration, and pitch. Individuals gradually master the use of these cues during the earliest stages of language exposure, and their use shows some specificity across different language backgrounds. Syntactic and semantic cues are higher-level cue mechanisms that operate mainly in the later stages of the word segmentation process. Future research should examine word segmentation cues in spoken language processing from the perspectives of lifespan language development and cross-language specificity.

17.
Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.

18.
Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory component P2. The same word tokens produced no ERP differences when participants listened to the discourse without view of the speaker. We conclude that beat gestures are integrated with speech early on in time and modulate sensory/phonological levels of processing. The present results support the possible role of beats as a highlighter, helping the listener to direct the focus of attention to important information and modulate the parsing of the speech stream.
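The effect reported above amounts to comparing word-locked ERP amplitudes between beat and no-beat conditions within specific latency windows (an early window before 100 ms and a window around the P2). Below is a minimal sketch of such a windowed comparison on simulated single-trial data; the sampling rate, window edges, effect size, and trial counts are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                               # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.6, 1 / fs)   # epoch from -100 to 600 ms around word onset

def make_trials(shift, n_trials=60):
    """Simulated single-trial ERPs (in microvolts) at one electrode; the beat
    condition carries a small positive shift after word onset."""
    base = rng.normal(0, 5, (n_trials, len(times)))
    return base + shift * (times > 0)

beat, no_beat = make_trials(1.5), make_trials(0.0)

def window_mean(trials, lo, hi):
    """Mean amplitude across trials within a latency window (seconds)."""
    mask = (times >= lo) & (times < hi)
    return float(trials[:, mask].mean())

for name, lo, hi in [("early (<100 ms)", 0.0, 0.1), ("P2 (150-275 ms)", 0.15, 0.275)]:
    diff = window_mean(beat, lo, hi) - window_mean(no_beat, lo, hi)
    print(f"{name}: beat minus no-beat = {diff:+.2f} microvolts")
```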

19.
Although the language we encounter is typically embedded in rich discourse contexts, many existing models of processing focus largely on phenomena that occur sentence-internally. Similarly, most work on children's language learning does not consider how information can accumulate as a discourse progresses. Research in pragmatics, however, points to ways in which each subsequent utterance provides new opportunities for listeners to infer speaker meaning. Such inferences allow the listener to build up a representation of the speakers' intended topic and more generally to identify relationships, structures, and messages that extend across multiple utterances. We address this issue by analyzing a video corpus of child–caregiver interactions. We use topic continuity as an index of discourse structure, examining how caregivers introduce and discuss objects across utterances. For the analysis, utterances are grouped into topical discourse sequences using three annotation strategies: raw annotations of speakers' referents, the output of a model that groups utterances based on those annotations, and the judgments of human coders. We analyze how the lexical, syntactic, and social properties of caregiver–child interaction change over the course of a sequence of topically related utterances. Our findings suggest that many cues used to signal topicality in adult discourse are also available in child-directed speech.

20.
Using Mandarin Chinese, a "tone language" in which the pitch contours of syllables differentiate words, the authors examined the acoustic modifications of infant-directed speech (IDS) at the syllable level to test 2 hypotheses: (a) the overall increase in pitch and intonation contour that occurs in IDS at the phrase level would not distort lexical pitch at the syllable level and (b) IDS provides exaggerated cues to lexical tones. Sixteen Mandarin-speaking mothers were recorded while addressing their infants and addressing an adult. The results indicate that IDS does not distort the acoustic cues that are essential to word meaning at the syllable level; evidence of exaggeration of the acoustic differences in IDS was observed, extending previous findings of phonetic exaggeration to the lexical level.
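Exaggeration of lexical-tone cues, as in the abstract above, can be quantified by asking whether the category-mean F0 contours of different tones are farther apart in IDS than in ADS while keeping their shapes. Below is a minimal sketch with invented contours for two Mandarin tones; the F0 values and the separation measure are illustrative assumptions, not the acoustic analysis the authors performed.

```python
import numpy as np

# Hypothetical F0 contours (Hz), sampled at 10 normalized time points per syllable.
# Values are invented to mimic Mandarin tone 1 (high level) and tone 4 (falling).
t = np.linspace(0, 1, 10)
ads = {  # adult-directed speech
    "tone1": 220 + 0 * t,
    "tone4": 240 - 60 * t,
}
ids = {  # infant-directed speech: higher overall, with a wider pitch excursion
    "tone1": 300 + 0 * t,
    "tone4": 340 - 120 * t,
}

def tone_separation(contours):
    """Mean Euclidean distance between the category-mean contours of all tone pairs."""
    names = list(contours)
    dists = [np.linalg.norm(contours[a] - contours[b])
             for i, a in enumerate(names) for b in names[i + 1:]]
    return float(np.mean(dists))

# Exaggeration shows up as greater separation between tone contours in IDS than in ADS,
# while the shape of each tone (level vs. falling) is preserved.
print("ADS tone separation:", round(tone_separation(ads), 1))
print("IDS tone separation:", round(tone_separation(ids), 1))
```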

