Similar Articles
20 similar articles found (search time: 31 ms)
1.
Caregivers use a range of verbal and nonverbal behaviours when responding to their infants. Previous studies have typically focused on the role of the caregiver in providing verbal responses, while communication is inherently multimodal (involving audio and visual information) and bidirectional (exchange of information between infant and caregiver). In this paper, we present a comprehensive study of caregivers' verbal, nonverbal, and multimodal responses to 10-month-old infants' vocalisations and gestures during free play. A new coding scheme was used to annotate 2036 infant vocalisations and gestures, of which 87.1% received a caregiver response. Most caregiver responses were verbal, but 39.7% of all responses were multimodal. We also examined whether different infant behaviours elicited different responses from caregivers. Infant bimodal (i.e., vocal-gestural combination) behaviours elicited high rates of verbal responses and high rates of multimodal responses, while infant gestures elicited high rates of nonverbal responses. We also found that the types of verbal and nonverbal responses differed as a function of infant behaviour. The results indicate that infants influence the rates and types of responses they receive from caregivers. When examining caregiver-child interactions, analysing caregivers' verbal responses alone fails to capture the multimodal richness and bidirectionality of early communication.

2.
Directedness and engagement during pre-verbal vocal communication play a major role in language development. What was their role in the evolution of language? This question invites us to examine these behaviours in chimpanzee vocal ontogeny. We collected observational data on infant (N = 15) and juvenile (N = 13) chimpanzees at Chimfunshi Wildlife Orphanage, Zambia. We examined the impact of age and vocalization type (grunts, whimpers, laughs and screams) on directed cues (gaze directedness and face directedness) and engagement (mutual face directedness) during vocal communication. We also assessed the impact of directed cues and engagement on social interactions by coding the behaviour of social partners before, during and after a vocalisation, and examining whether they contingently changed their behaviour in response to the vocalisation if it was directed or if engagement occurred. We found that face-directed vocalisations showed a general increase during ontogeny, and we observed call-type-dependent effects of age for mutual face directedness. Only face-directed vocalisations significantly predicted behavioural responses in social partners. We conclude that, like young humans, young chimpanzees routinely exhibit directed behaviours and engagement during vocal communication. This social competency improves during ontogeny and benefits individuals by increasing the chances of eliciting behavioural responses from social partners. Directedness and engagement likely provide a foundation for language phylogenetically, as well as ontogenetically.

Research Highlights

  • We show that directedness and engagement routinely occur during early chimpanzee vocalisations.
  • Directedness increases throughout chimpanzee vocal ontogeny, similar to human infants.
  • Directedness enhances social partner responsiveness, demonstrating a direct benefit to this style of communication.
  • Directedness and engagement could provide a route towards language phylogenetically as well as ontogenetically.

3.
Maternal postpartum depression (PPD) is a risk for disruption of mother–infant interaction. Infants of depressed mothers have been found to display less positive, more negative, and more neutral affect. Other studies have found that infants of mothers with PPD inhibit both positive and negative affect. In a sample of 28 infants of mothers with PPD and 52 infants of nonclinical mothers, we examined the role of PPD diagnosis and symptoms for infants' emotional variability, measured as facial expressions, vocal protest, and gaze using microanalysis, during a mother–infant face-to-face interaction. PPD symptoms and diagnosis were associated with (a) infants displaying fewer high negative, but more neutral/interest facial affect events, and (b) fewer gaze-off events. PPD diagnosis, but not symptoms, was associated with less infant vocal protest. The total duration (in seconds) of infant facial affective displays and gaze off was not related to PPD diagnosis or symptoms, suggesting that when infants of depressed mothers display high negative facial affect or gaze off, these expressions are more sustained, indicating a lower infant ability to calm down and re-engage, interpreted as a disturbance in self-regulation. The findings highlight the importance of examining not only durations but also frequencies, as the latter may inform infant emotional variability.

4.
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral and negative words that were spoken with a happy, neutral or sad tone of voice. In two separate tasks, participants judged word valence and ignored tone of voice or judged emotional tone of voice and ignored word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent as compared to when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as it does in one's native language.

6.
We investigated how the emotional valence of an action outcome influences the experience of control, in an intentional binding experiment. Voluntary actions were followed by emotionally positive or negative human vocalisations, or by neutral tones. We used mental chronometry to measure a retrospective component of sense of agency (SoA), triggered by the occurrence of the action outcome, and a prospective component, driven by the expectation that the outcome will occur. Positive outcomes enhanced the retrospective component of SoA, but only when both occurrence and the valence of the outcome were unexpected. When the valence of outcomes was blocked – and therefore predictable – we found a prospective component of SoA when neutral tones were expected but did not actually occur. This prospective binding was absent, and reversed, for positive and negative expected outcomes. Emotional expectation counteracts the prospective component of SoA, suggesting a distancing effect.

7.
This study explores the associations between electronic media exposure, age, and socioeconomic status (SES) in a longitudinal sample of 24 infants from English-speaking families. Leveraging Language ENvironment Analysis (LENA) technology, the study seeks to characterize the relation between electronic media exposure and parental and child vocal activity. We analyzed ecologically valid, daylong audio recordings collected in infants' homes when they were 6, 10, 14, 18, and 24 months old. SES was measured with the Hollingshead Index, and exposure to electronic media and adult and infant vocal activity were measured automatically with LENA. On average, the children in the sample were exposed to 58 min of electronic media daily. We found that electronic media exposure was negatively associated with SES and decreased with child age, but only amongst high-SES families. We also found that electronic media exposure negatively impacted concurrent adult and child vocal activity, irrespective of SES and infant age. The present findings are an important step forward in examining the role of demographic factors in exposure to electronic media and enhance our understanding of the mechanisms through which exposure to electronic media may impact linguistic development in infancy and beyond.

8.
Human language is a recombinant system that achieves its productivity through the combination of a limited set of sounds. Research investigating the evolutionary origin of this generative capacity has generally focused on the capacity of non-human animals to combine different types of discrete sounds to encode new meaning, with less emphasis on meaning-differentiating mechanisms achieved through potentially simpler temporal modifications within a sequence of repeated sounds. Here we show that pied babblers (Turdoides bicolor) generate two functionally distinct vocalisations composed of the same sound type, which can only be distinguished by the number of repeated elements. Specifically, babblers produce extended 'purrs' composed of, on average, around 17 element repetitions when drawing young offspring to a food source and truncated 'clucks' composed of a fixed number of 2–3 elements when collectively mediating imminent changes in foraging site. We propose that meaning-differentiating temporal structuring might be a much more widespread combinatorial mechanism than currently recognised and is likely of particular value for species with limited vocal repertoires in order to increase their communicative output.

9.
We examined 5-month-olds' responses to adult facial versus vocal displays of happy and sad expressions during face-to-face social interactions in three experiments. Infants interacted with adults in either happy-sad-happy or happy-happy-happy sequences. Across experiments, either facial expressions were present while presence/absence of vocal expressions was manipulated or visual access to facial expressions was blocked but vocal expressions were present throughout. Both visual attention and infant affect were recorded. Although infants looked more when vocal expressions were present, they smiled significantly more to happy than to sad facial expressions regardless of presence or absence of the voice. In contrast, infants showed no evidence of differential responding to voices when faces were obscured; their smiling and visual attention simply declined over time. These results extend findings from non-social contexts to social interactions and also indicate that infants may require facial expressions to be present to discriminate among adult vocal expressions of affect.

10.
This study compared vocal development in Korean- and English-learning infants and examined ambient-language effects focusing on predominant utterance shapes. Vocalization samples were obtained from 14 Korean-learning children and 14 English-learning children, who ranged in age from 9 to 21 months, in monolingual environments using day-long audio recordings. The analyzers, who were blind to participants' demographic information, identified utterance shapes to determine functional vocal repertoires through naturalistic listening simulating the caregiver's natural mode of listening. The results showed no cross-linguistic differences in the amount of vocal output or the proportion of canonical syllables. However, the infants from the two language backgrounds showed differences regarding the predominant canonical utterance shapes. The percentage of VCV utterances in Korean-learning children was higher than in English-learning children, while CV syllables predominated in the English-learning children. We speculate that the difference between the predominant utterance shapes of Korean- and English-learning children could be associated with differences in early lexical items typically acquired in the two language groups.

11.
Adults are highly proficient in understanding emotional signals from both facial and vocal cues, including when communicating across cultural boundaries. However, the developmental origin of this ability is poorly understood, and in particular, little is known about the ontogeny of differentiation of signals with the same valence. The studies reported here employed a habituation paradigm to test whether preverbal infants discriminate between non-linguistic vocal expressions of relief and triumph. Infants as young as 6 months who had habituated to relief or triumph showed significant discrimination of relief and triumph tokens at test (i.e. greater recovery to the unhabituated stimulus type), when exposed to tokens from a single individual (Study 1). Infants habituated to expressions from multiple individuals showed less consistent discrimination in that consistent discrimination was only found when infants were habituated to relief tokens (Study 2). Further, infants tested with tokens from individuals from different cultures showed dishabituation only when habituated to relief tokens and only at 10–12 months (Study 3). These findings suggest that discrimination between positive emotional expressions develops early and is modulated by learning. Further, infants' categorical representations of emotional expressions, like those of speech sounds, are influenced by speaker-specific information.

12.
Research on early signs of autism in social interactions often focuses on infants' motor behaviors; few studies have focused on speech characteristics. This study examines infant-directed speech of mothers of infants later diagnosed with autism (LDA; n = 12) or of typically developing infants (TD; n = 11) as well as infants' productions (13 LDA, 13 TD). Since LDA infants appear to behave differently in the first months of life, this can affect the functioning of dyadic interactions, especially the first vocal productions, which are sensitive to expressiveness and emotion sharing. We assumed that in the first 6 months of life, prosodic characteristics (mean duration, mean pitch, and intonative contour types) would differ in dyads with autism. We extracted infants' and mothers' vocal productions from family home movies and analyzed the mean duration and pitch as well as the pitch contours in interactive episodes. Results show that mothers of LDA infants use relatively shorter productions as compared to mothers talking to TD infants. LDA infants' productions are not different in duration or pitch, but they use less complex modulated productions (i.e., those with more than two melodic modulations) than do TD infants. Further studies should focus on developmental profiles in the first year, analyzing prosody monthly.

13.
This report extends a previous cross-cultural study of synchrony in mother-infant vocal interactions (Bornstein et al., 2015) to immigrant samples. Immigrant dyads from three cultures of origin (Japan, South Korea, South America) living in the same culture of destination (the United States) were compared to nonmigrant dyads in those same cultures of origin and to nonmigrant European American dyads living in the same culture of destination (the United States). This article highlights an underutilized analysis to assess synchrony in mother-infant interaction and extends cross-cultural research on mother-infant vocal interaction. Timing of onsets and offsets of maternal speech to infants and infant nondistress vocalizations were coded separately from 50-min recorded naturalistic observations of mothers and infants. Odds ratios were computed to analyze synchrony in mother-infant vocal interactions. Synchrony was analyzed in three ways: contingency of timed-event sequences, mean differences in contingency by acculturation level and within dyads, and coordination of responsiveness within dyads. Immigrant mothers were contingently responsive to their infants' vocalizations, but only Korean immigrant infants were contingently responsive to their mothers' vocalizations. For the Japanese and South American comparisons, immigrant mothers were more contingently responsive than their infants (but not robustly so for South American immigrants). For the Korean comparison, mean differences in contingent responsiveness were found among acculturation groups (culture of origin, immigrant, culture of destination), but not between mothers and infants. Immigrant dyads' mean levels of responsiveness did not differ. Immigrant mothers' and infants' levels of responsiveness were coordinated. Strengths and flexibility of the timed-event sequential analytic approach to assessing synchrony in mother-infant interactions are discussed, particularly for culturally diverse samples.
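The odds-ratio measure of vocal contingency described in this abstract can be sketched from a 2×2 table of timed-event counts. The function name and all counts below are illustrative assumptions for exposition, not values from the study.

```python
# Sketch: odds ratio for vocal contingency from a hypothetical 2x2 table.
# Rows: infant vocalizes within the response window (yes/no);
# columns: mother is vocalizing (yes/no). All counts are made up.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a: both vocalize, b: infant only, c: mother only, d: neither."""
    return (a * d) / (b * c)

# Hypothetical counts from one dyad's 50-min observation:
print(odds_ratio(40, 10, 20, 80))  # (40*80)/(10*20) = 16.0
```

An odds ratio above 1 would indicate that infant vocalizations are more likely during maternal speech than outside it, i.e. a contingent (synchronous) pattern; a small-sample correction (adding 0.5 to each cell) is commonly used when any cell is zero.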

14.
Verbal framing effects have been widely studied, but little is known about how people react to multiple framing cues in risk communication, where verbal messages are often accompanied by facial and vocal cues. We examined joint and differential effects of verbal, facial, and vocal framing on risk preference in hypothetical monetary and life–death situations. In the multiple framing condition with the factorial design (2 verbal frames × 2 vocal tones × 4 basic facial expressions × 2 task domains), each scenario was presented auditorily with a written message on a photo of the messenger's face. Compared with verbal framing effects resulting in preference reversal, multiple frames made risky choice more consistent and shifted risk preference without reversal. Moreover, a positive tone of voice increased risk-seeking preference in women. When the valence of facial and vocal cues was incongruent with verbal frame, verbal framing effects were significant. In contrast, when the affect cues were congruent with verbal frame, framing effects disappeared. These results suggest that verbal framing is given higher priority when other affect cues are incongruent. Further analysis revealed that participants were more risk-averse when positive affect cues (positive tone or facial expressions) were congruently paired with a positive verbal frame whereas participants were more risk-seeking when positive affect cues were incongruent with the verbal frame. In contrast, for negative affect cues, congruency promoted risk-seeking tendency whereas incongruency increased risk-aversion. Overall, the results show that facial and vocal cues interact with verbal framing and significantly affect risk communication. Copyright © 2016 John Wiley & Sons, Ltd.

15.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.

17.
This article presents data from four independent studies on the relationship between quantity of maternal vocal stimulation during naturalistic conditions and 3-month-old infants' cognitive processing, as assessed by the infants' differential vocal responsiveness (DVR) to their mother versus a female stranger. In two of the studies, the subjects were full-term American infants whose parents came from a wide socio-educational and ethnic background. In the third study, the subjects were low-risk preterm infants of White American parents. In the fourth study the subjects were full-term infants in Greece. The results from all four studies showed a curvilinear relationship between DVR and maternal vocal stimulation during naturalistic conditions. High DVR was associated with a mid-level amount of maternal vocal stimulation, whereas low DVR was associated with both least and most maternal vocal stimulation. These studies raise the question of possible adverse effects of social overstimulation on infant development.

18.
Fifteen-month-old infants detected a violation when an actor performed an action that did not match her preceding vocal cue: The infants looked reliably longer when the actor expressed a humorous vocal cue followed by a sweet action or expressed a sweet vocal cue followed by a humorous action, than when the vocal cue was followed by a matching action. The infants failed to detect the mismatch when one person expressed the vocal cue and another performed the action. The results suggest that by 15 months of age, infants are capable of distinguishing between two types of vocal cues and actions along the positive emotional spectrum: humor and sweetness. Furthermore, they match humorous vocal cues to humorous actions and sweet vocal cues to sweet actions only when the cues and actions are made by the same person.

19.
The vocalizations of eight infants with Down syndrome were recorded longitudinally in relation to different social and non-social contexts. The infants were observed biweekly from 8 to 24 weeks and monthly up to 40 weeks. At each visit the infants were presented with their mother, a female stranger, and a rattle puppet, each alternately active and passive. Each condition lasted 60 sec. The results showed that by 4 months of age, the infants produced different types of vocal sounds in relation to environmental contexts. They produced significantly more melodic (speechlike) sounds, vocalic (non-speechlike) sounds, and emotional (crying, laughing and fussing) sounds when facing people than objects. By 6 months of age, these utterances began to be distinguished between mother and female stranger and active and passive adults. However, within the communicative context the overall amount of vocalic (non-speechlike) sounds produced was larger than the amount of melodic (speechlike) sounds. It is suggested that this low output of melodic sounds in the overall vocal production of these infants may adversely affect the development of more appropriate vocal behaviour.

20.
Infants' prelinguistic vocalizations reliably organize vocal turn-taking with social partners, creating opportunities for learning to produce the sound patterns of the ambient language. This social feedback loop supporting early vocal learning is well-documented, but its developmental origins have yet to be addressed. When do infants learn that their non-cry vocalizations influence others? To test developmental changes in infant vocal learning, we assessed the vocalizations of 2- and 5-month-old infants in a still-face interaction with an unfamiliar adult. During the still-face, infants who have learned the social efficacy of vocalizing increase their babbling rate. In addition, to assess the expectations for social responsiveness that infants build from their everyday experience, we recorded caregiver responsiveness to their infants' vocalizations during unstructured play. During the still-face, only 5-month-old infants showed an increase in vocalizing (a vocal extinction burst), indicating that they had learned to expect adult responses to their vocalizations. Caregiver responsiveness predicted the magnitude of the vocal extinction burst for 5-month-olds. Because 5-month-olds show a vocal extinction burst with unfamiliar adults, they must have generalized the social efficacy of their vocalizations beyond their familiar caregiver. Caregiver responsiveness to infant vocalizations during unstructured play was similar for 2- and 5-month-olds. Infants thus learn the social efficacy of their vocalizations between 2 and 5 months of age. During this time, infants build associations between their own non-cry sounds and the reactions of adults, which allows learning of the instrumental value of vocalizing.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号