Similar literature
20 similar documents found (search time: 15 ms)
1.
Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4‐ to 8‐month‐old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye‐tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger, monolinguals (4 to 6.5 months) showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent speech, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.

2.
Early evidence of social referencing was examined in 5½-month-old infants. Infants were habituated to 2 films of moving toys, one toy eliciting a woman's positive emotional expression and the other eliciting a negative expression under conditions of bimodal (audiovisual) or unimodal visual (silent) speech. It was predicted that intersensory redundancy provided by audiovisual (but not available in unimodal visual) events would enhance detection of the relation between emotional expressions and the corresponding toy. Consistent with predictions, only infants who received bimodal, audiovisual events detected a change in the affect-object relations, showing increased looking during a switch test in which the toy-affect pairing was reversed. Moreover, in a subsequent live preference test, they preferentially touched the 3-dimensional toy previously paired with the positive expression. These findings suggest social referencing emerges by 5½ months in the context of intersensory redundancy provided by dynamic multimodal stimulation and that even 5½-month-old infants demonstrate preferences for 3-dimensional objects on the basis of affective information depicted in videotaped events.

3.
Infant perception often deals with audiovisual speech input and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

4.
Infants and adults are well able to match auditory and visual speech, but the cues on which they rely (viz. temporal, phonetic and energetic correspondence in the auditory and visual speech streams) may differ. Here we assessed the relative contribution of the different cues using sine-wave speech (SWS). Adults (N = 52) and infants (N = 34, ages ranged between 5 and 15 months) matched 2 trisyllabic speech sounds (‘kalisu’ and ‘mufapi’), either natural or SWS, with visual speech information. On each trial, adults saw two articulating faces and matched a sound to one of these, while infants were presented the same stimuli in a preferential looking paradigm. Adults’ performance was almost flawless with natural speech, but was significantly less accurate with SWS. In contrast, infants matched the sound to the articulating face equally well for natural speech and SWS. These results suggest that infants rely to a lesser extent on phonetic cues than adults do to match audio to visual speech. This is in line with the notion that the ability to extract phonetic information from the visual signal increases during development, and suggests that phonetic knowledge might not be the basis for early audiovisual correspondence detection in speech.

5.
Previous findings indicate that bilingual Catalan/Spanish‐learning infants attend more to the highly salient audiovisual redundancy cues normally available in a talker's mouth than do monolingual infants. Presumably, greater attention to such cues renders the challenge of learning two languages easier. Spanish and Catalan are, however, rhythmically and phonologically close languages. This raises the possibility that bilinguals only rely on redundant audiovisual cues when their languages are close. To test this possibility, we exposed 15‐month‐old and 4‐ to 6‐year‐old close‐language bilinguals (Spanish/Catalan) and distant‐language bilinguals (Spanish/“other”) to videos of a talker uttering Spanish or Catalan (native) and English (non‐native) monologues and recorded eye‐gaze to the talker's eyes and mouth. At both ages, the close‐language bilinguals attended more to the talker's mouth than the distant‐language bilinguals. This indicates that language proximity modulates selective attention to a talker's mouth during early childhood and suggests that reliance on the greater salience of audiovisual speech cues depends on the difficulty of the speech‐processing task.

6.
Previous studies have found that infants shift their attention from the eyes to the mouth of a talker when they enter the canonical babbling phase after 6 months of age. Here, we investigated whether this increased attentional focus on the mouth is mediated by audio‐visual synchrony and linguistic experience. To do so, we tracked eye gaze in 4‐, 6‐, 8‐, 10‐, and 12‐month‐old infants while they were exposed either to desynchronized native or desynchronized non‐native audiovisual fluent speech. Results indicated that, regardless of language, desynchronization disrupted the usual pattern of relative attention to the eyes and mouth found in response to synchronized speech at 10 months but not at any other age. These findings show that audio‐visual synchrony mediates selective attention to a talker's mouth just prior to the emergence of initial language expertise and that it declines in importance once infants become native‐language experts.

7.
This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and type and modality of distractors. Results for congruent AV distractors yielded an inverted U-shaped function with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech was reflecting reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and auditory perceptual, linguistic, and cognitive skills.

8.
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired (“unity assumption”). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely integrated with lipread speech if heard as speech than non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative “special” nature of speech, but rather reflect low-level stimulus differences.

9.
Research has demonstrated that young infants can detect a change in the tempo and the rhythm of an event when they experience the event bimodally (audiovisually), but not when they experience it unimodally (acoustically or visually). According to Bahrick and Lickliter (2000, 2002), intersensory redundancy available in bimodal, but not in unimodal, stimulation directs attention to the amodal properties of events in early development. Later in development, as infants become more experienced perceivers, attention becomes more flexible and can be directed toward amodal properties in unimodal and bimodal stimulation. The present study tested this developmental hypothesis by assessing the ability of older, more perceptually experienced infants to discriminate the tempo or rhythm of an event, using procedures identical to those in prior studies. The results indicated that older infants can detect a change in the rhythm and the tempo of an event following both bimodal (audiovisual) and unimodal (visual) stimulation. These results provide further support for the intersensory redundancy hypothesis and are consistent with a pattern of increasing specificity in perceptual development.

10.
Thirty second-grade primary school students, 24 fifth-grade primary school students and 29 first-year university students served as participants. Using the McGurk-effect paradigm, the study examined the characteristics and developmental trajectory of audiovisual (bimodal) speech perception in native speakers of Chinese. Participants in all three age groups were tested under both auditory-only and audiovisual conditions, and their task was to report aloud the stimulus they had heard. The results showed that (1) the native Chinese-speaking second graders, fifth graders and university students were all influenced by visual cues when processing monosyllables under natural listening conditions, exhibiting the McGurk effect; and (2) the degree to which the second graders, fifth graders and university students were influenced by visual speech, that is, the strength of the McGurk effect, did not differ significantly across groups and showed no developmental trend comparable to that reported for native English speakers. These results support the hypothesis that the McGurk effect is universal.

11.
We report a 53-year-old patient (AWF) who has an acquired deficit of audiovisual speech integration, characterized by a perceived temporal mismatch between speech sounds and the sight of moving lips. AWF was less accurate on an auditory digit span task with vision of a speaker's face as compared to a condition in which no visual information from the lower face was available. He was slower in matching words to pictures when he saw congruent lip movements compared to no lip movements or non-speech lip movements. Unlike normal controls, he showed no McGurk effect. We propose that multisensory binding of audiovisual language cues can be selectively disrupted.

12.
This study examined 4- to 10-month-old infants' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Experiment 1 established that infants of all ages could successfully discriminate between two different audiovisual rhythmic events. Experiment 2 showed that only 10-month-old infants detected a desynchronization of the auditory and visual components of a rhythmical event. Experiment 3 showed that 4- to 8-month-old infants could detect A-V desynchronization but only when the audiovisual event was nonrhythmic. These results show that initially in development infants attend to the overall temporal structure of rhythmic audiovisual events but that later in development they become capable of perceiving the embedded intersensory temporal synchrony relations as well.

13.
Thirty children aged 6 to 12 years, 30 adolescents aged 13 to 18 years and 30 adults aged 20 to 30 years participated. Using the McGurk-effect paradigm, the study examined the developmental trajectory of audiovisual speech perception in native speakers of Chinese. All participants were tested under both auditory-only and audiovisual conditions, and their task was to report aloud the stimulus they had heard. The results showed that (1) participants in all three age groups were influenced by visual cues when processing monosyllables under quiet listening conditions, exhibiting the McGurk effect; (2) the strength of the McGurk effect differed significantly across the three age groups, with the influence of visual speech increasing with age; and (3) after age 13, reliance on visual cues did not increase significantly under audiovisually congruent conditions, but the influence of visual speech continued to grow under audiovisually incongruent conditions.

14.
Prior research has demonstrated intersensory facilitation for perception of amodal properties of events such as tempo and rhythm in early development, supporting predictions of the Intersensory Redundancy Hypothesis (IRH). Specifically, infants discriminate amodal properties in bimodal, redundant stimulation but not in unimodal, nonredundant stimulation in early development, whereas later in development infants can detect amodal properties in both redundant and nonredundant stimulation. The present study tested a new prediction of the IRH: that effects of intersensory redundancy on attention and perceptual processing are most apparent in tasks of high difficulty relative to the skills of the perceiver. We assessed whether by increasing task difficulty, older infants would revert to patterns of intersensory facilitation shown by younger infants. Results confirmed our prediction and demonstrated that in difficult tempo discrimination tasks, 5‐month‐olds perform like 3‐month‐olds, showing intersensory facilitation for tempo discrimination. In contrast, in tasks of low and moderate difficulty, 5‐month‐olds discriminate tempo changes in both redundant audiovisual and nonredundant unimodal visual stimulation. These findings indicate that intersensory facilitation is most apparent for tasks of relatively high difficulty and may therefore persist across the lifespan.

15.
Human infants develop a variety of attentional mechanisms that allow them to extract relevant information from a cluttered multimodal world. We know that both social and nonsocial cues shift infants’ attention, but not how these cues differentially affect learning of multimodal events. Experiment 1 used social cues to direct 8- and 4-month-olds’ attention to two audiovisual events (i.e., animations of a cat or dog accompanied by particular sounds) while identical distractor events played in another location. Experiment 2 directed 8-month-olds’ attention with colorful flashes to the same events. Experiment 3 measured baseline learning without attention cues both with the familiarization and test trials (no cue condition) and with only the test trials (test control condition). The 8-month-olds exposed to social cues showed specific learning of audiovisual events. The 4-month-olds displayed only general spatial learning from social cues, suggesting that specific learning of audiovisual events from social cues may be a function of experience. Infants cued with the colorful flashes looked indiscriminately to both cued locations during test (similar to the 4-month-olds learning from social cues) despite attending for equal duration to the training trials as the 8-month-olds with the social cues. Results from Experiment 3 indicated that the learning effects in Experiments 1 and 2 resulted from exposure to the different cues and multimodal events. We discuss these findings in terms of the perceptual differences and relevance of the cues.

16.
This research examined the developmental course of infants' ability to perceive affect in bimodal (audiovisual) and unimodal (auditory and visual) displays of a woman speaking. According to the intersensory redundancy hypothesis (L. E. Bahrick, R. Lickliter, & R. Flom, 2004), detection of amodal properties is facilitated in multimodal stimulation and attenuated in unimodal stimulation. Later in development, however, attention becomes more flexible, and amodal properties can be perceived in both multimodal and unimodal stimulation. The authors tested these predictions by assessing 3-, 4-, 5-, and 7-month-olds' discrimination of affect. Results demonstrated that in bimodal stimulation, discrimination of affect emerged by 4 months and remained stable across age. However, in unimodal stimulation, detection of affect emerged gradually, with sensitivity to auditory stimulation emerging at 5 months and visual stimulation at 7 months. Further, temporal synchrony between faces and voices was necessary for younger infants' discrimination of affect. Across development, infants first perceive affect in multimodal stimulation through detecting amodal properties, and later their perception of affect is extended to unimodal auditory and visual stimulation. Implications for social development, including joint attention and social referencing, are considered.

17.
To determine whether infants follow the gaze of adults because they understand the referential nature of looking or because they use the adult turn as a predictive cue for the location of interesting events, the gaze-following behavior of 14- and 18-month-olds was examined in the joint visual attention paradigm under varying visual obstruction conditions: (a) when the experimenter's line of sight was obstructed by opaque screens (screen condition), (b) when the experimenter's view was not obstructed (no-screen condition), and (c) when the opaque screens contained a large transparent window (window condition). It was assumed that infants who simply use adult turns as predictive cues would turn equally in all 3 conditions but infants who comprehend the referential nature of looking would turn maximally when the experimenter's vision was not blocked and minimally when her vision was blocked. Eighteen-month-olds responded in accord with the referential position (turning much more in the no-screen and window conditions than in the screen condition). However, 14-month-olds yielded a mixed response pattern (turning less in the screen than the no-screen condition but turning still less in the window condition). The results suggest that, unlike 18-month-olds, 14-month-olds do not understand the intentional nature of looking and are unclear about the requirements for successful looking.

18.
The goal of the present study was twofold: to examine the influence of two amodal properties, co-location and temporal synchrony, on infants' associating a sight with a sound, and to determine if the relative influence of these properties on crossmodal learning changes with age. During familiarization 2-, 4-, 6- and 8-month-olds were presented two toys and a sound, with sights and sounds varying with respect to co-location and temporal synchrony. Following each familiarization phase infants were given a paired preference test to assess their learning of sight-sound associations. Measures of preferential looking revealed age-related changes in the influence of co-location and temporal synchrony on infants' learning sight-sound associations. At all ages, infants could use temporal synchrony and co-location as a basis for associating an auditory with a visual event and, in the absence of temporal synchrony, co-location was sufficient to support crossmodal learning. However, when these cues conflicted there were developmental changes in the influence of these cues on infants' learning auditory-visual associations. At 2 and 4 months infants associated the sounds with the toy that moved in synchrony with the sound's rhythm despite extreme violation of co-location of this sight and sound. In contrast, 6- and 8-month-olds did not associate a specific toy with the sound when co-location and synchrony information conflicted. The findings highlight the unique and interactive effects of distinct amodal properties on infants' learning arbitrary crossmodal relations. Possible explanations for the age shift in performance are discussed.

19.
Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After habituation they were tested with a habituation face, a switch-face, or a novel face. In the switch-faces, the eyes and mouth of the habituation faces were switched. The results showed that the 4-month-olds processed eyes and mouth by feature, whereas the 10-month-olds processed both features holistically. The 6-month-olds were in a transitional stage where they processed the mouth holistically but the eyes still as a feature. Thus, the results demonstrated a shift from featural to holistic processing in the age range of 4 to 10 months.

20.
A series of four experiments was conducted to determine whether English-learning infants can use allophonic cues to word boundaries to segment words from fluent speech. Infants were familiarized with a pair of two-syllable items, such as nitrates and night rates and then were tested on their ability to detect these same words in fluent speech passages. The presence of allophonic cues to word boundaries did not help 9-month-olds to distinguish one of the familiarized words from an acoustically similar foil. Infants familiarized with nitrates were just as likely to listen to a passage about night rates as they were to listen to one about nitrates. Nevertheless, when the passages contained distributional cues that favored the extraction of the familiarized targets, 9-month-olds were able to segment these items from fluent speech. By the age of 10.5 months, infants were able to rely solely on allophonic cues to locate the familiarized target words in passages. We consider what implications these findings have for understanding how word segmentation skills develop.
