Similar Literature
20 similar records retrieved.
1.
The development of speech perception profoundly shapes an individual's language development. Over the first year of life, driven by linguistic experience, infants' speech perception shifts from an initially universal, language-general sensitivity to perception tuned specifically to the native language. Researchers have proposed a statistical learning mechanism to explain this process: infants are highly sensitive to the frequency distributions of speech sounds in their linguistic environment and, by computing these distributions, can carve the speech continuum into the phonetic categories that are contrastive in the native language. In addition, a functional reorganization mechanism and certain social cues also exert important influences on the development of infant speech perception.
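To make the statistical-learning account above concrete, here is a minimal sketch in which an unsupervised learner recovers two phonetic categories purely from the frequency distribution of a single acoustic cue. The simulated voice-onset-time data, the two-component Gaussian mixture, and the use of scikit-learn are illustrative assumptions, not the model of any study reviewed here.

# Illustrative sketch of distributional learning: an unsupervised learner
# recovers two phonetic categories from the frequency distribution of a
# one-dimensional acoustic cue (simulated voice-onset time, in ms).
# The bimodal data and the Gaussian-mixture learner are assumptions for
# illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated input: a bimodal VOT distribution, e.g. /b/ around 10 ms
# and /p/ around 60 ms, as in classic distributional-learning studies.
vot = np.concatenate([rng.normal(10, 8, 500), rng.normal(60, 10, 500)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(vot.reshape(-1, 1))
means = np.sort(gmm.means_.ravel())
print(f"Recovered category centers: {means[0]:.1f} ms and {means[1]:.1f} ms")
# The learner recovers two categories purely from frequency statistics,
# mirroring how infants are proposed to carve continuous speech input
# into native phonetic categories.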

2.
From birth, neonates use an already well-developed auditory system to represent, learn, and remember the elements of speech. Characterizing speech processing in newborns not only reveals the cognitive and neural mechanisms of language function at the earliest stage of human development, but also offers valuable clues for the early warning and clinical diagnosis of neurodevelopmental disorders such as autism. We review neonatal perception, discrimination, and learning of speech, as well as the predictive value of language development for autism, and find that newborns show perceptual preferences for particular speech sounds; that newborns possess a distinctive capacity for phoneme discrimination; and that brain functional and structural indices of language processing in infancy have some predictive value for autism. We suggest three directions for future work. In basic research: first, rigorously control the prosodic features of speech materials and re-examine the characteristics and hemispheric lateralization of neonatal language processing; second, uncover the cognitive and neural mechanisms of neonatal speech learning and the role of sleep in memory consolidation. In translational clinical research: follow newborns at high risk for autism longitudinally and, based on multimodal brain-imaging data, build a disease risk-assessment system to establish the predictive value of early brain indices of language development for autism.

3.
Hearing is one of the principal senses through which humans acquire information about the external environment, and the development of auditory processing related to emotion and social cognition is of great significance. Early in life, neonates and infants can already perceive, discriminate, and recognize emotional prosody and show processing preferences for particular emotions; they likewise develop processing preferences for socially relevant sounds such as human voices and the native language. Audition also plays a role in the development of social-cognitive abilities such as face recognition, and hearing impairment affects the development of the associated social cognition. Future research should employ more longitudinal designs, combined with multimodal imaging techniques, to better address these developmental questions.

4.
Spoken-word perception is a current focus of psycholinguistic research, yet most previous studies have tested infants or adults, leaving preschool children's speech perception underexplored. Moreover, existing models of spoken-word perception were built largely on non-tonal languages and do not fully apply to Chinese. Chinese is a tonal language whose phonological structure differs from that of non-tonal languages. Grounded in the characteristics of spoken Chinese, this project takes 3- to 5-year-old children as participants and examines the characteristics and neural mechanisms of their perception of spoken Chinese. Using eye tracking, ERPs, and LORETA source localization, it addresses three questions: (1) children's auditory phonetic discrimination at the pre-attentive and attentive stages; (2) the roles of segmental and suprasegmental information in children's spoken-Chinese word recognition; and (3) the neural mechanisms of children's perception of spoken Chinese. The results will characterize preschoolers' perception of spoken Chinese and provide new evidence for refining current models of spoken-word perception.

5.
Accurately decoding the emotional information in speech helps individuals adapt to the social environment. This ability is especially important for neonates and infants, because at birth the human auditory system is far more mature than the visual system. Although infants aged 5-7 months have been shown to discriminate vocal emotions of different categories, studies of neonates remain scarce. Are humans born with the ability to distinguish different categories of emotional speech? Is neonatal emotion processing biased toward positive or negative valence? This study used an odd-ball paradigm to examine the event-related potentials evoked in 1- to 6-day-old neonates by happy, fearful, and angry prosody. Experiment 1 directly compared the three emotional conditions and found that the frontal region of the neonatal brain (electrodes F3 and F4) distinguished the valence of emotional speech: positive (happy) prosody evoked a markedly larger mismatch response than negative (angry and fearful) prosody. Experiment 2, using an odd-ball design with the deviant and standard stimuli reversed, confirmed that the results of Experiment 1 did not stem from physical differences among the three emotional stimuli. These findings suggest that the neonatal brain automatically discriminates positive from negative emotional speech but cannot yet distinguish the two negative emotions, anger and fear. More importantly, happy prosody evoked a larger mismatch response than either negative emotion, providing the first neural (electrophysiological) evidence for a positivity bias in neonatal processing of emotional speech.
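For readers unfamiliar with the paradigm, the sketch below generates the kind of stimulus sequence an odd-ball design uses: one frequent standard and rare deviants. The 80/20 ratio and the no-adjacent-deviants constraint are common conventions assumed here for illustration; the abstract does not report the study's actual parameters.

# Sketch of an odd-ball stimulus sequence: one frequent "standard" emotion
# and rare "deviant" emotions, with no two deviants adjacent. The nominal
# 80/20 ratio and the adjacency constraint are assumed conventions.
import random

def oddball_sequence(standard, deviants, n_trials=300, p_deviant=0.2, seed=1):
    random.seed(seed)
    seq = []
    for _ in range(n_trials):
        if random.random() < p_deviant and (not seq or seq[-1] == standard):
            seq.append(random.choice(deviants))  # rare deviant trial
        else:
            seq.append(standard)                 # frequent standard trial
    return seq

seq = oddball_sequence("happy", ["angry", "fearful"])
print(seq[:12])
# The mismatch response is computed as the deviant ERP minus the standard
# ERP; reversing which emotion serves as the standard (as in Experiment 2)
# controls for acoustic differences between the stimuli.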

6.
The auditory brainstem response is a non-invasive technique for examining neural activity as the auditory brainstem encodes sound, and in recent years it has been widely used to explore the neural basis of speech perception. Research to date has focused on characterizing brainstem activity during speech encoding and its developmental trajectory in adults and typically developing children, and on probing the speech-encoding deficits, and their neural signatures, in developmental dyslexia and other language impairments. Building on this work, future applications of the technique to speech perception will center on the interplay between low-level speech encoding and higher-level speech processing, and on the underlying neural basis of dyslexia.

7.
Cao Yi, Yang Xiaohu. Advances in Psychological Science, 2019, 27(6): 1025-1035
Schizophrenia is a common psychiatric disorder with wide-ranging symptoms, among which language abnormality is one of the core cognitive impairments. This review focuses on speech perception in patients with schizophrenia, surveying behavioral and neuroscientific studies of both segmental and suprasegmental perception conducted in China and abroad, and argues that China should intensify research on Mandarin speech perception in Chinese patients with schizophrenia.

8.
Everyday verbal communication is often disturbed by noise of various kinds. Research shows that the energetic and informational masking produced by noise substantially affects speech perception. When perceiving native-like second-language (L2) speech, L2 listeners typically suffer more interference under noise than native listeners do, with performance varying by noise type, noise level, and the features of the target phonemes; L2 listeners also show large individual differences, reflecting the influence of multiple factors. When perceiving foreign-accented L2 speech in noise, native listeners perform worse than they do with native-like speech, whereas L2 listeners with limited L2 experience perceive accents similar to their own relatively well, and those with longer L2 experience show greater perceptual flexibility.

9.
Previous studies disagree on whether bilingual balance is related to attainment in third-language (L3) phonology. This study took bilingual balance, operationalized as Kazakh-Chinese bilinguals' proficiency in their non-dominant language (Chinese), as the independent variable and explored its effect on L3 (English) phoneme learning. Participants were divided into high- and low-balance groups and completed Kazakh, Chinese, and English vowel-perception tasks. The results showed, first, that the high- and low-balance groups did not differ significantly in English vowel identification, but their patterns of English vowel confusion differed markedly, with the low-balance group showing a more complex confusion pattern. Second, in the low-balance group the dominant language did not effectively explain L3 phoneme perception, and perception in the non-dominant language explained more variance in L3 phoneme perception than it did in the high-balance group. These results indicate that bilingual balance affects L3 vowel perception; moreover, the less balanced the two languages, the more variance in L3 phoneme perception is explained by phoneme perception in the non-dominant language.

10.
Language is one of the abilities that most fundamentally distinguishes humans from other animals. Within it, speech processing is a core function of language cognition, and its brain mechanisms are a major topic in linguistics and cognitive psychology. Most existing work in this field examines how adults, children and adolescents, or infants process speech; how the brain of the neonate (within 28 days of birth) perceives speech remains unclear. With advances in neuroscience techniques, non-invasive measures of brain activity are increasingly being used to examine neonatal brain mechanisms. Studies have found that a relatively well-developed neural system for speech processing is already present in the neonatal period: for example, the right superior temporal gyrus is the key region for perceiving suprasegmental features, while the left inferior frontal gyrus (Broca's area) is the key region for detecting the structure of syllable sequences. This article reviews the brain mechanisms of neonatal perception of different speech features in three respects: perception of segmental and suprasegmental features; perception of syllable-sequence structure (including sequence edges, repetition structures, and structure segmentation); and differences in the perception of the native language versus foreign languages. It closes with several thoughts on future directions for the field.

11.
Liu Wenli, Le Guoan. Acta Psychologica Sinica, 2012, 44(5): 585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether non-speech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant-category continuum and found that pure tones influenced perception of the continuum, producing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that tones matching the vowel's formant frequencies speeded vowel identification, producing a priming effect. Both experiments consistently showed that non-speech sounds can influence the perception of speech sounds, indicating that speech perception also requires a pre-linguistic stage of spectral feature analysis, consistent with the auditory theory of speech perception.

12.
Listeners perceive speech sounds relative to context. Contextual influences might differ between hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners' right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on the laterality of target presentation and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

13.
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that message is presented in a noisy background. Speech is a particularly important example of multisensory integration because of its behavioural relevance to humans and also because brain regions have been identified that appear to be specifically tuned for auditory speech and lip gestures. Previous research has suggested that speech stimuli may have an advantage over other types of auditory stimuli in terms of audio-visual integration. Here, we used a modified adaptive psychophysical staircase approach to compare the influence of congruent visual stimuli (brief movie clips) on the detection of noise-masked auditory speech and non-speech stimuli. We found that congruent visual stimuli significantly improved detection of an auditory stimulus relative to incongruent visual stimuli. This effect, however, was equally apparent for speech and non-speech stimuli. The findings suggest that speech stimuli are not specifically advantaged by audio-visual integration for detection at threshold when compared with other naturalistic sounds.
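The staircase logic referred to above can be sketched as follows. This is a generic 2-down/1-up staircase with a simulated observer, not the authors' modified procedure; the step size, starting SNR, and threshold model are all assumptions for illustration.

# Sketch of a 2-down/1-up adaptive staircase that tracks the signal-to-noise
# ratio (SNR) at which a masked sound is detected about 70.7% of the time.
# The step size, starting SNR, and simulated observer are illustrative
# assumptions; the paper describes its own modified staircase.
import random

def run_staircase(true_threshold=-12.0, start_snr=0.0, step=2.0,
                  n_reversals=10, seed=0):
    random.seed(seed)
    snr, correct_streak, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated observer: detection more likely above the true threshold.
        p_detect = 1 / (1 + 10 ** ((true_threshold - snr) / 4))
        detected = random.random() < p_detect
        if detected:
            correct_streak += 1
            if correct_streak == 2:          # two correct: harder (lower SNR)
                correct_streak = 0
                if direction == +1:
                    reversals.append(snr)    # track direction reversals
                direction = -1
                snr -= step
        else:                                # one miss: easier (raise SNR)
            correct_streak = 0
            if direction == -1:
                reversals.append(snr)
            direction = +1
            snr += step
    return sum(reversals[-6:]) / 6           # mean of last reversals = threshold

print(f"Estimated detection threshold: {run_staircase():.1f} dB SNR")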

14.
Speech Perception Within an Auditory Cognitive Science Framework
ABSTRACT— The complexities of the acoustic speech signal pose many significant challenges for listeners. Although perceiving speech begins with auditory processing, investigation of speech perception has progressed mostly independently of study of the auditory system. Nevertheless, a growing body of evidence demonstrates that cross-fertilization between the two areas of research can be productive. We briefly describe research bridging the study of general auditory processing and speech perception, showing that the latter is constrained and influenced by operating characteristics of the auditory system and that our understanding of the processes involved in speech perception is enhanced by study within a more general framework. The disconnect between the two areas of research has stunted the development of a truly interdisciplinary science, but there is an opportunity for great strides in understanding with the development of an integrated field of auditory cognitive science.

15.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children's speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children's use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children's speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children's speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

16.
Our native language has a lifelong effect on how we perceive speech sounds. Behaviorally, this is manifested as categorical perception, but the neural mechanisms underlying this phenomenon are still unknown. Here, we constructed a computational model of categorical perception, following principles consistent with infant speech learning. A self-organizing network was exposed to a statistical distribution of speech input presented as neural activity patterns of the auditory periphery, resembling the way sound arrives to the human brain. In the resulting neural map, categorical perception emerges from most single neurons of the model being maximally activated by prototypical speech sounds, while the largest variability in activity is produced at category boundaries. Consequently, regions in the vicinity of prototypes become perceptually compressed, and regions at category boundaries become expanded. Thus, the present study offers a unifying framework for explaining the neural basis of the warping of perceptual space associated with categorical perception.
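A toy version of the self-organizing account can illustrate the warping effect, assuming a one-dimensional map and a bimodal input distribution; the cited model's actual architecture and peripheral encoding are considerably richer.

# Toy 1-D self-organizing map exposed to a bimodal distribution of a single
# acoustic cue. After training, map units crowd around the two category
# prototypes, so perceptual space is compressed near prototypes and
# expanded at the boundary, i.e. the warping described above. The map size,
# input distribution, and learning schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
inputs = np.concatenate([rng.normal(0.3, 0.05, 2000),
                         rng.normal(0.7, 0.05, 2000)])  # two speech categories
rng.shuffle(inputs)

units = np.linspace(0.0, 1.0, 20)          # 1-D map of 20 units
for t, x in enumerate(inputs):
    lr = 0.5 * (1 - t / len(inputs))       # decaying learning rate
    sigma = max(3.0 * (1 - t / len(inputs)), 0.5)  # shrinking neighborhood
    winner = np.argmin(np.abs(units - x))  # best-matching unit
    dist = np.abs(np.arange(len(units)) - winner)
    h = np.exp(-dist**2 / (2 * sigma**2))  # neighborhood function
    units += lr * h * (x - units)          # pull units toward the input

print(np.round(np.sort(units), 2))
# Most units end up near 0.3 or 0.7 (the prototypes); few sit at the
# category boundary, reproducing the compression/expansion pattern.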

17.
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as inferior frontal gyrus (IFG) and motor cortices, even in the absence of an explicit task. To investigate this, we applied spectral mixes of a flute sound and either vowels or specific music instrument sounds (e.g. trumpet) in an fMRI study, in combination with three different instructions. The instructions either revealed no information about stimulus features, or explicit information about either the music instrument or the vowel features. The results demonstrated that, besides an involvement of posterior temporal areas, stimulus expectancy modulated in particular a network comprising IFG and premotor cortices during this passive listening task.

18.
Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, particularly in rapid processing, cause phonological deficits and thereby SLI. We investigate this possibility by testing the auditory discrimination abilities of G-SLI children for speech and non-speech sounds, at varying presentation rates, and controlling for the effects of age and language on performance. For non-speech formant transitions, 69% of the G-SLI children showed normal auditory processing, whereas for the same acoustic information in speech, only 31% did so. For rapidly presented tones, 46% of the G-SLI children performed normally. Auditory performance with speech and non-speech sounds differentiated the G-SLI children from their age-matched controls, whereas speed of processing did not. The G-SLI children evinced no relationship between their auditory and phonological/grammatical abilities. We found no consistent evidence that a deficit in processing rapid acoustic information causes or maintains G-SLI. The findings, from at least those G-SLI children who do not exhibit any auditory deficits, provide further evidence supporting the existence of a primary domain-specific deficit underlying G-SLI.

19.
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds are perceived as non-speech. In contrast, selective speech adaptation occurred irrespective of whether listeners were in speech or non-speech mode. These results provide new evidence for the distinction between a speech and non-speech processing mode, and they demonstrate that different mechanisms underlie recalibration and selective speech adaptation.
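As background on the stimulus class, sine-wave speech reduces an utterance to a few sinusoids tracking its formants. The sketch below synthesizes such a signal from hypothetical, linearly interpolated formant tracks; real SWS derives its tracks from a recording, typically via LPC analysis, so everything here is an illustrative stand-in.

# Minimal sketch of sine-wave speech (SWS) synthesis: the speech signal is
# reduced to three time-varying sinusoids that follow the formant tracks.
# The linearly interpolated tracks below are hypothetical stand-ins; real
# SWS uses formant tracks estimated from an actual utterance.
import numpy as np

fs = 16000                       # sampling rate (Hz)
dur = 0.5                        # duration (s)
t = np.arange(int(fs * dur)) / fs

# Hypothetical formant tracks for a vowel transition (Hz).
tracks = [np.linspace(300, 700, t.size),    # F1
          np.linspace(2200, 1200, t.size),  # F2
          np.linspace(3000, 2600, t.size)]  # F3
amps = [1.0, 0.6, 0.3]

# Each replica is a sinusoid whose instantaneous frequency follows a track;
# integrating frequency over time (cumsum/fs) gives the phase.
sws = sum(a * np.sin(2 * np.pi * np.cumsum(f) / fs)
          for a, f in zip(amps, tracks))
sws /= np.abs(sws).max()         # normalize to avoid clipping on playback
# Listeners can hear the result either as whistles (non-speech mode) or,
# once informed, as speech; this is the mode manipulation used above.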

20.
Vatakis A, Spence C. Perception, 2008, 37(1): 143-160
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright-visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (ie by changes in orientation).
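The point of subjective simultaneity (PSS) and the associated sensitivity measure (JND) reported above are conventionally estimated by fitting a cumulative Gaussian to temporal-order-judgment data. A sketch with made-up response proportions, assuming SciPy for the fit:

# Sketch of PSS/JND estimation from temporal-order-judgment data: fit a
# cumulative Gaussian to the proportion of "visual first" responses across
# stimulus-onset asynchronies (SOAs). The SOAs and response proportions
# below are fabricated for illustration only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-240, -120, -60, 0, 60, 120, 240])  # ms; negative = audio first
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98])

def cum_gauss(x, pss, sigma):
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=[0, 80])
jnd = sigma * norm.ppf(0.75)   # SOA shift from 50% to 75% responding
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
# A shift in PSS between upright and inverted speech, with an unchanged
# JND, is exactly the pattern reported in the abstract above.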
