Similar Literature
19 similar records found.
1.
With 30 second-grade primary school students, 24 fifth-grade primary school students, and 29 first-year university students as participants, the McGurk-effect paradigm was used to examine the characteristics and developmental course of audiovisual speech perception in native Mandarin speakers. All three age groups were tested under both auditory-only and audiovisual conditions, and their task was to report aloud the stimulus they heard. Results showed that: (1) the native-Chinese second graders, fifth graders, and university students were all influenced by visual cues when processing monosyllables in a natural listening environment, exhibiting the McGurk effect; (2) the degree of visual-speech influence, i.e., the strength of the McGurk effect, did not differ significantly among the three groups, showing no developmental trend of the kind reported for native English speakers. These results support the hypothesis that the McGurk effect is universal.
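A minimal sketch (not the study's code) of how McGurk-effect strength might be quantified as each participant's proportion of fused responses and then compared across the three age groups; the file name, column names, and group labels are all hypothetical:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("mcgurk_trials.csv")  # hypothetical trial-level data

# A trial counts as a McGurk response when, in the audiovisual condition,
# the reported syllable matches neither the auditory nor the visual
# component (e.g., fused /da/ for audio /ba/ + visual /ga/).
av = df[df["condition"] == "audiovisual"]
is_mcgurk = (av["response"] != av["audio"]) & (av["response"] != av["visual"])

# Per-subject proportion of McGurk responses, indexed by age group
strength = av.assign(mcgurk=is_mcgurk).groupby(["group", "subject"])["mcgurk"].mean()

# One-way ANOVA across the three (assumed) group labels
groups = [strength.xs(g, level="group").values for g in ("grade2", "grade5", "college")]
f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.3f}")  # a non-significant p would match the reported null
```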

2.
This study examined the characteristics and developmental trajectory of unimodal and audiovisual speech perception in children with autism in noisy environments. Results showed that children with autism had significantly lower speech identification accuracy than typically developing children under both auditory-only and congruent audiovisual conditions; their visual gain under congruent audiovisual conditions and the strength of their McGurk effect under incongruent conditions were also significantly lower. Identification accuracy in the auditory-only and congruent audiovisual conditions was significantly higher for 13- to 16-year-olds than for 6- to 12-year-olds, whereas McGurk-effect strength did not differ significantly by age. The results indicate that, in noise, children with autism show deficient unimodal and audiovisual speech perception, and that their audiovisual integration ability differs significantly from that of typically developing children.
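A hedged sketch of the "visual gain" measure mentioned above, i.e., the accuracy improvement from adding congruent visual speech to the auditory signal in noise; the data file and column names are assumptions:

```python
import pandas as pd

df = pd.read_csv("asd_speech_in_noise.csv")  # hypothetical trial-level data

# Per-subject accuracy in each condition, then gain = AV-congruent minus auditory-only
acc = (
    df.groupby(["group", "subject", "condition"])["correct"]
      .mean()
      .unstack("condition")
)
acc["visual_gain"] = acc["av_congruent"] - acc["auditory_only"]
print(acc.groupby("group")["visual_gain"].describe())  # compare ASD vs. TD groups
```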

3.
The McGurk effect is a typical audiovisual integration phenomenon. It is modulated by the physical features of the stimuli, the allocation of attention, an individual's relative reliance on auditory versus visual information, audiovisual integration ability, and linguistic and cultural differences. The key visual information that triggers the McGurk effect comes mainly from the speaker's mouth region. The cognitive process underlying the effect involves early audiovisual integration (associated with the superior temporal cortex) and a later audiovisual-incongruence conflict stage (associated with the inferior frontal cortex). Future research should address the influence of social information in faces on the McGurk effect, the relationship between unimodal processing and audiovisual integration within the effect, and its cognitive and neural mechanisms as probed with computational models.

4.
李利, 郭红婷, 华乐萌, 方银萍, 王瑞明. 《心理学报》 (Acta Psychologica Sinica), 2012, 44(11): 1434-1442
A cross-language long-term repetition priming paradigm was used to investigate cross-language interference in the speech production of learners of Chinese as a second language. Experiment 1 tested 18 native Russian speakers and Experiment 2 tested 18 native Japanese speakers. The independent variables were naming language (L1 vs. L2) and learning condition (studied in the same language, studied in a different language, and unstudied); the dependent variables were picture-naming latency and accuracy at test. Each experiment comprised a study phase and a test phase in which participants named pictures in L1 and L2, the question being whether facilitation would emerge in both the studied-same-language and studied-different-language conditions at test. Results showed that, for both native Russian and native Japanese speakers, naming latencies were significantly faster than in the unstudied condition only when the study and test languages matched; latencies in the studied-different-language condition did not differ from the unstudied condition. These findings indicate that cross-language interference occurs in the speech production of learners of Chinese as a second language, and that differences between the languages' writing systems do not modulate this interference.

5.
Based on the exogenous cue-target paradigm, a 2 (cue-target stimulus onset asynchrony, SOA: 400-600 ms vs. 1000-1200 ms) × 3 (target type: visual, auditory, audiovisual) × 2 (cue validity: valid vs. invalid) within-subject design was used, with participants performing a detection task on the targets, to examine how inhibition of return (IOR) induced by a visual cue modulates audiovisual integration, thereby providing evidence bearing on the perceptual-sensitivity, spatial-uncertainty, and cross-modal signal-strength-difference hypotheses. Results showed that: (1) as SOA lengthened, the visual IOR effect decreased significantly while the audiovisual integration effect increased significantly; (2) at the short SOA (400-600 ms), the audiovisual integration effect at validly cued locations was significantly smaller than at invalidly cued locations, whereas at the long SOA (1000-1200 ms) the two did not differ. These results indicate that the modulatory effect of visual IOR on audiovisual integration changes across SOA conditions, supporting the cross-modal signal-strength-difference hypothesis.
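A minimal sketch of how the 2 × 3 × 2 within-subject design above might be analyzed with a repeated-measures ANOVA; the long-format file and its column names are assumptions, and AnovaRM expects exactly one value (e.g., mean RT) per subject and design cell:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per subject x soa x modality x validity cell
df = pd.read_csv("ior_integration.csv")

aov = AnovaRM(
    df,
    depvar="rt",                # mean detection RT per cell
    subject="subject",
    within=["soa", "target_modality", "cue_validity"],
).fit()
print(aov)  # main effects and the interactions that carry the SOA-modulation claim
```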

6.
A cross-modal priming paradigm was used to examine the mutual influence of verbal and facial emotion processing, and the role of language (native: Chinese; non-native: English). In Experiment 1, emotional words served as primes and emotional faces as targets; facial emotion judgments were better with Chinese primes than with English primes, and emotional words primed emotional faces only under positive-emotion priming. In Experiment 2, emotional faces served as primes and emotional words as targets; word emotion judgments were better with Chinese targets than with English targets, and emotional faces primed emotional words only under positive-emotion priming. The results indicate that the processing of verbal and facial emotion is mutually influential, but only under positive-emotion priming, and that native and non-native languages differ in their emotional function.

7.
With 85 primary school children in grades 2-6 as participants, ANOVA and hierarchical regression were used to examine developmental changes in the visual attention span of children with developmental dyslexia at younger and older primary school ages, with age-matched typical readers as a control group, and to test whether visual attention span predicts the development of reading fluency at different ages. Results showed that: (1) children with developmental dyslexia exhibited a visual attention span deficit, which tended to be more severe at the older primary school ages; (2) among dyslexic children, the predictive effect of visual attention span on fluent Chinese reading strengthened with development, whereas for typical readers visual attention span significantly predicted sentence-reading fluency only in the younger group. These results indicate a close relationship between visual attention span and fluent Chinese reading, suggesting that future intervention research on Chinese dyslexia could target visual attention span training.
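A hedged sketch of the hierarchical regression reported above, testing whether visual attention span predicts reading fluency after a control variable is entered first; the file, variable names, and choice of control are assumptions, not the authors' exact model:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reading_fluency.csv")  # hypothetical child-level data

# Step 1: control variable only; Step 2: add the predictor of interest
step1 = smf.ols("fluency ~ age", data=df).fit()
step2 = smf.ols("fluency ~ age + visual_attention_span", data=df).fit()

# Significance of the R-squared change when visual attention span enters
f, p, df_diff = step2.compare_f_test(step1)
delta_r2 = step2.rsquared - step1.rsquared
print(f"delta R^2 = {delta_r2:.3f}, F = {f:.2f}, p = {p:.3f}")
```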

8.
Using a single-task procedure in which a warning cue was introduced, and taking produced duration as the response index, this study systematically examined the gap-expectancy effect and the cue effect (attention effect) in duration-estimation tasks containing a gap, and further explored the effect of gap duration and the relationship between produced duration and waiting duration. Results showed that gap position (waiting duration) was the main cue for participants' temporal judgments: produced durations lengthened as the waiting duration increased. A highly significant cue effect appeared in the gap experiments, which both increased the variability of duration estimates and lengthened them. Under the no-gap condition, participants showed a significant gap-expectancy effect: expecting a gap impaired their time estimation.

9.
To explore the mechanisms of audiovisual music emotion processing and the influence of emotion type and musical training background, this study used videos of musical performances expressing happiness or sadness and compared musicians and non-musicians on the speed, accuracy, and intensity of emotion ratings in auditory-only, visual-only, and audiovisual conditions. Results showed that: (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition; (2) non-musicians rated sadness more accurately than musicians but rated happiness less accurately. This indicates that the audiovisual integration advantage in music emotion processing exists only relative to the visual-only channel; non-musicians are more sensitive to changes in visual emotional information, whereas musicians rely more on musical experience. Adding congruent visual emotional information to musical performances may therefore help listeners without musical training.

10.
Identification accuracy for Mandarin vowels and lexical tones was measured in native Mandarin speakers and in native Korean speakers with intermediate and advanced Mandarin proficiency under three backgrounds: quiet, speech noise, and speech-modulated noise. In quiet, the three groups perceived the speech similarly, whereas in speech noise the native speakers' identification accuracy was significantly higher than that of the intermediate-level learners. Further analyses showed that the intermediate learners' greater difficulty in speech noise arose because they were more affected than native listeners by the energetic masking in speech noise, while the informational masking they experienced was comparable to that of the other two groups.
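A sketch, under stated assumptions, of the masking decomposition implied above: speech-modulated noise is taken to match the energetic masking of speech noise without its linguistic content, so the accuracy drop from quiet to modulated noise indexes energetic masking, and the further drop from modulated noise to speech noise indexes informational masking. File and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("masking.csv")  # hypothetical: subject, group, background, accuracy

acc = df.pivot_table(index=["group", "subject"], columns="background", values="accuracy")
acc["energetic_masking"] = acc["quiet"] - acc["modulated_noise"]
acc["informational_masking"] = acc["modulated_noise"] - acc["speech_noise"]

# Group means: larger energetic masking for intermediate learners would
# match the pattern reported above.
print(acc.groupby("group")[["energetic_masking", "informational_masking"]].mean())
```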

11.
Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we examined perception of both audiovisually congruent and audiovisually incongruent speech in school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults. In the first experiment, all participants watched videos of a talker articulating syllables 'ba', 'da', and 'ga' under three conditions: audiovisual (AV), auditory only (A), and visual only (V). The amplitude of the N1 (but not of the P2) event-related component elicited in the AV condition was significantly reduced compared to the N1 amplitude measured from the sum of the A and V conditions in all groups of participants. Because N1 attenuation to AV speech is thought to index the degree to which facial movements predict the onset of the auditory signal, our findings suggest that this aspect of audiovisual speech perception is mature by mid-childhood and is normal in the H-SLI children. In the second experiment, participants watched videos of audiovisually incongruent syllables created to elicit the so-called McGurk illusion (with an auditory 'pa' dubbed onto a visual articulation of 'ka', the expected percept being 'ta' if audiovisual integration took place). As a group, H-SLI children were significantly more likely than either TD children or adults to hear the McGurk syllable as 'pa' (in agreement with its auditory component) than as 'ka' (in agreement with its visual component), suggesting that susceptibility to the McGurk illusion is reduced in at least some children with a history of SLI. Taken together, the results of the two experiments argue against a global audiovisual integration impairment in children with a history of SLI and suggest that, when present, audiovisual integration difficulties in this population likely stem from a later (non-sensory) stage of processing.
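A minimal sketch (not the authors' pipeline) of the N1 comparison described above, testing the mean amplitude to AV speech against the sum of the A-only and V-only ERPs; the array files, shapes, sampling parameters, and the 80-120 ms window are all assumptions:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject ERPs at one electrode: shape (n_subjects, n_timepoints),
# sampled at 1000 Hz with the epoch starting 100 ms before stimulus onset.
erp_av = np.load("erp_av.npy")
erp_a = np.load("erp_a.npy")
erp_v = np.load("erp_v.npy")

sfreq, t0 = 1000, -0.1
n1 = slice(int((0.08 - t0) * sfreq), int((0.12 - t0) * sfreq))  # assumed N1 window

n1_av = erp_av[:, n1].mean(axis=1)             # mean amplitude to AV speech
n1_sum = (erp_a + erp_v)[:, n1].mean(axis=1)   # mean amplitude of the A + V sum

t, p = stats.ttest_rel(n1_av, n1_sum)
print(f"t = {t:.2f}, p = {p:.3f}")  # attenuation: AV less negative than A + V
```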

12.
Infant perception often deals with audiovisual speech input, and a first step in processing this input is to perceive both its visual and auditory components. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants' homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants' gaze.

13.
In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/ + video /fa/) or an audiovisually discrepant syllable (audio /ba/ + video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.
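A hedged sketch of how the AXB choice data above might be tested: if the visually influenced and audiovisually consistent tokens were phonetically equivalent, the consistent token should be chosen at chance (50%). The trial counts below are invented placeholders:

```python
from scipy import stats

n_trials = 48       # assumed AXB trials per subject
n_consistent = 34   # assumed matches to the audiovisually consistent /va/

# One-sided binomial test of the choice proportion against chance (0.5)
res = stats.binomtest(n_consistent, n_trials, p=0.5, alternative="greater")
print(f"p = {res.pvalue:.4f}")  # above-chance preference argues against equivalence
```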

14.
The visible movement of a talker's face is an influential component of speech perception. However, the ability of this influence to function when large areas of the face (~50%) are covered by simple substantial occlusions, and so are not visible to the observer, has yet to be fully determined. In Experiment 1, both visual speech identification and the influence of visual speech on identifying congruent and incongruent auditory speech were investigated using displays of a whole (unoccluded) talking face and of the same face occluded vertically so that the entire left or right hemiface was covered. Both the identification of visual speech and its influence on auditory speech perception were identical across all three face displays. Experiment 2 replicated and extended these results, showing that visual and audiovisual speech perception also functioned well with other simple substantial occlusions (horizontal and diagonal). Indeed, displays in which entire upper facial areas were occluded produced performance levels equal to those obtained with unoccluded displays. Occluding entire lower facial areas elicited some impairments in performance, but visual speech perception and visual speech influences on auditory speech perception were still apparent. Finally, implications of these findings for understanding the processes supporting visual and audiovisual speech perception are discussed.

15.
Young infants are capable of integrating auditory and visual information and their speech perception can be influenced by visual cues, while 5-month-olds detect mismatch between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from eyes and towards the mouth in articulating faces, potentially to benefit from intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase of looking to the mouth, while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.

16.
McCotter MV, Jordan TR. Perception, 2003, 32(8): 921-936
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and influences on perception of incongruent auditory speech, was less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.

17.
If a place-of-articulation contrast is created between the auditory and the visual component syllables of videotaped speech, frequently the syllable that listeners report they have heard differs phonetically from the auditory component. These "McGurk effects", as they have come to be called, show that speech perception may involve some kind of intermodal process. There are two classes of these phenomena: fusions and combinations. Perception of the syllable /da/ when auditory /ba/ and visual /ga/ are presented provides a clear example of the former, and perception of the string /bga/ after presentation of auditory /ga/ and visual /ba/ an unambiguous instance of the latter. Besides perceptual fusions and combinations, hearing visually presented component syllables also shows an influence of vision on audition. It is argued that these "visual" responses arise from basically the same underlying processes that yield fusions and combinations, respectively. In the present study, the visual component of audiovisually incongruous CV-syllables was presented in the left and the right visual hemifield, respectively. Audiovisual fusion responses showed a left hemifield advantage, and audiovisual combination responses a right hemifield advantage. This finding suggests that the process of audiovisual integration differs between audiovisual fusions and combinations and, furthermore, that the two cerebral hemispheres contribute differentially to the two classes of response.

18.
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

19.
We conducted three experiments in order to examine the influence of gaze behavior and fixation on audiovisual speech perception in a task that required subjects to report the speech sound they perceived during the presentation of congruent and incongruent (McGurk) audiovisual stimuli. Experiment 1 showed that the subjects' natural gaze behavior rarely involved gaze fixations beyond the oral and ocular regions of the talker's face and that these gaze fixations did not predict the likelihood of perceiving the McGurk effect. Experiments 2 and 3 showed that manipulation of the subjects' gaze fixations within the talker's face did not influence audiovisual speech perception substantially and that it was not until the gaze was displaced beyond 10-20 degrees from the talker's mouth that the McGurk effect was significantly lessened. Nevertheless, the effect persisted under such eccentric viewing conditions and became negligible only when the subject's gaze was directed 60 degrees eccentrically. These findings demonstrate that the analysis of high spatial frequency information afforded by direct oral foveation is not necessary for the successful processing of visual speech information.
