Similar Literature
Found 19 similar documents (search time: 125 ms)
1.
Prosody is the combination of the acoustic properties of the parts of speech, comprising linguistic prosody and emotional prosody. ERP studies of speech prosody processing have addressed the processing of prosodic components; their interaction with syntax, semantics, and information structure; and individual differences in prosodic processing. These studies have yielded rich findings, but because of the complexity of prosody, the field still lacks a unified paradigm and its findings are difficult to integrate. Future research on speech prosody should verify and extend existing findings from multiple perspectives and pay full attention to the distinctive processing demands of Chinese as a tone language.

2.
Speech production refers to the process by which individuals express thoughts and feelings through spoken language, and lexical access is its key stage. Taking the different linguistic levels of speech production as a framework, this review analyzes the influence of bilingual experience on access during speech production and finds that bilingual experience reduces access efficiency. On this basis, the causes of this bilingual access disadvantage are summarized from three perspectives: cross-language interference, frequency of language use, and vocabulary size. Future research should examine the effects of bilingual experience on other aspects of speech processing and integrate the competing explanations.

3.
Pitch is an important dimension in both music and speech. Congenital amusia is a disorder of musical pitch processing. Examining how individuals with amusia process musical and speech pitch helps reveal whether music and speech share specific cognitive and neural mechanisms for pitch processing. Existing findings show that individuals with amusia are impaired in musical pitch processing and that this deficit affects speech pitch processing to some extent. Moreover, a tone-language background does not compensate for the pitch deficit. These results support the resource-sharing framework, according to which music and language share specific cognitive and neural mechanisms (Patel, 2003, 2008, in press), and may to some extent inform the clinical treatment of aphasia.

4.
The human voice is the most familiar and important sound in the human auditory environment, carrying a wealth of socially relevant information. Analogous to visual face processing, the brain processes voices in a specialized manner. Using electrophysiological and neuroimaging methods, researchers have identified brain regions that respond selectively to voices, the temporal voice areas (TVA), and have found similar voice-selective regions in non-human animals. Voice processing mainly involves speech, emotion, and identity information, corresponding to three neural pathways that are both mutually independent and interactive. Researchers have proposed dual-pathway, multi-stage, and integrative models to account for the processing of speech, emotion, and identity in voices, respectively. Future research should examine whether the specificity of voice processing can be explained by selective processing of particular acoustic features, and should probe the neural mechanisms of voice processing in special populations (e.g., individuals with autism or schizophrenia).

5.
Research on subliminal priming of verbal information has long been limited to the visual modality, neglecting subliminal priming in the auditory modality. Studies of auditory verbal subliminal priming have shown that significant subliminal priming effects also exist in the auditory modality and that they differ in certain respects from visual subliminal priming. Research on the factors influencing auditory verbal subliminal priming and on its cognitive and neural mechanisms has advanced our understanding of the unconscious processing of auditory verbal information. Future research should examine these influencing factors and mechanisms in greater depth.

6.
Using a picture-word interference paradigm with visually presented distractor words, this study examined the time course of grammatical-class processing in Chinese speech production across different syntactic structures. Three experiments examined grammatical-class processing when pictures were named with single-character words, noun phrases, and simple declarative sentences, respectively. In all three experiments, interference differed across grammatical classes, with noun distractors producing stronger interference than non-noun distractors, and the time windows of semantic processing and grammatical-feature processing overlapped. The results support interactive activation theories of speech production.

7.
The phonological processing units of speech production are language-specific. In Indo-European languages, the phoneme is an important functional unit of phonological processing. A phoneme is the smallest sound unit that distinguishes meaning in a given language; for example, "big" contains the three phonemes /b/, /i/, /g/. To date, phonemes have received little attention in research on Chinese speech production. This project will use event-related potentials (ERPs) to investigate phoneme processing in Chinese speech production, addressing two questions: 1) Is the phoneme psychologically real in Chinese speech production, and is phonemic representation affected by a second language, by pinyin acquisition, and by experience using pinyin? 2) What is the mechanism of phoneme processing; specifically, what are its specificity, position coding, combinatorial structure, and time course? Answering these questions will deepen our understanding of Chinese speech production, provide a foundation for computational models of Chinese speech production and for comparing the mechanisms of Indo-European languages and Chinese, and offer a psychological basis for methods of teaching Chinese phonetics.

8.
This study examined the neural mechanisms and hemispheric lateralization of bottom-up acoustic-phonetic processing of pitch information in speech. Passive listening and active judgment tasks activated the temporal and frontal lobes, respectively, with a clear right-hemisphere advantage in the temporal pole, the superior temporal gyrus, and the orbital part of the inferior frontal gyrus. The results indicate that bottom-up acoustic-phonetic processing of pitch in speech is primarily a right-hemisphere function and that pitch information in speech and non-speech signals may be processed by similar mechanisms, supporting the theory proposed by Gandour and colleagues.

9.
Working memory is a current focus of cognitive psychology, and the relationship between its verbal and visual subsystems remains controversial. Experiments 1 and 2 used a dual-task paradigm to examine the influence of verbal working memory on visual working memory. Visual memory performance under a full verbal load was significantly worse than under no verbal load, indicating that verbal working memory affects the completion of visual working memory tasks.

10.
The brain can process information rapidly to cope with a constantly changing environment; rapid speech recognition is a prime example. The bottleneck rate for natural speech is about 8-12 syllables per second, close to the rate of neural alpha oscillations, and previous work has shown that alpha oscillations can regulate the temporal resolution of perception. Does the alpha oscillation rate therefore set the temporal bottleneck of rapid speech recognition, and through what mechanism? This project uses psychophysical and cognitive-neuroscience methods to examine how alpha oscillations affect this bottleneck at the levels of both phenomena and mechanisms. At the phenomenal level, it will test whether the temporal bottleneck of rapid speech recognition coincides with the alpha oscillation rate; at the mechanistic level, it will examine how the alpha rate affects behavioral performance in rapid speech recognition and how it regulates the neural processing of speech signals. The goal is to identify the neural mechanisms of rapid speech recognition, thereby deepening our understanding of rapid processing in the brain and further exploring how neural oscillations regulate the brain's temporal resolution.
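The sampling intuition behind such a bottleneck can be illustrated with a deliberately crude toy model (an invented sketch, not part of the project above): if a listener takes one discrete perceptual sample per alpha cycle, then syllable streams faster than the alpha rate leave some syllables unsampled.

```python
def sampled_fraction(syllable_rate, alpha_rate, duration=1.0):
    """Fraction of distinct syllables captured when the listener takes one
    discrete perceptual sample per alpha cycle (a deliberately crude model)."""
    n_syllables = int(syllable_rate * duration)
    # One sample at the start of each alpha cycle.
    sample_times = [k / alpha_rate for k in range(int(alpha_rate * duration))]
    # Which syllable is "on" at each sample time?
    seen = {min(int(t * syllable_rate), n_syllables - 1) for t in sample_times}
    return len(seen) / n_syllables

# At ~8 syllables/s, a 10 Hz alpha sampler keeps up; at 16 syllables/s it cannot.
print(sampled_fraction(8, 10))   # 1.0 — every syllable is sampled at least once
print(sampled_fraction(16, 10))  # < 1.0 — some syllables fall between samples
```

This Nyquist-style picture only motivates why a syllable rate near the alpha rate could act as a bottleneck; the actual neural mechanism is what the project sets out to test.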

11.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants' audio–visual matching of German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might influence the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.

12.
南云 (Nan Yun), 《心理科学进展》 (Advances in Psychological Science), 2017, (11): 1844-1853
Music learning enhances individuals' artistic literacy and also facilitates other cognitive processes, especially language learning. In recent years, researchers in China and abroad have conducted a series of studies on musicians and on individuals with musical learning disorders, including longitudinal follow-ups. Notably, the empirical evidence that music learning facilitates language processing comes mainly from cross-sectional studies; more longitudinal studies are needed to provide causal evidence. Such work will ultimately promote the broad application of music learning and training in education and medicine.

13.
Infant perception often deals with audiovisual speech input and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

14.
This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and type and modality of distractors. Results for congruent AV distractors yielded an inverted U-shaped function with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech was reflecting reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and auditory perceptual, linguistic, and cognitive skills.

15.
Over the past couple of decades, research has established that infants are sensitive to the predominant stress pattern of their native language. However, the degree to which the stress pattern shapes infants' language development has yet to be fully determined. Whether stress is merely a cue to help organize the patterns of speech or whether it is an important part of the representation of speech sound sequences has still to be explored. Building on research in the areas of infant speech perception and segmentation, we asked how several months of exposure to the target language shapes infants' speech processing biases with respect to lexical stress. We hypothesize that infants represent stressed and unstressed syllables differently, and employed analyses of child-directed speech to show how this change to the representational landscape results in better distribution-based word segmentation as well as an advantage for stress-initial syllable sequences. A series of experiments then tested 9- and 7-month-old infants on their ability to use lexical stress without any other cues present to parse sequences from an artificial language. We found that infants adopted a stress-initial syllable strategy and that they appear to encode stress information as part of their proto-lexical representations. Together, the results of these studies suggest that stress information in the ambient language not only shapes how statistics are calculated over the speech input, but that it is also encoded in the representations of parsed speech sequences.
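The "distribution-based word segmentation" mentioned above is commonly operationalized as segmentation at dips in transitional probability between syllables. The following is a minimal sketch of that general technique; the syllables, toy lexicon, and threshold are invented for illustration and are not taken from the study.

```python
from collections import Counter

def transitional_probs(syllables):
    """Estimate P(next syllable | current syllable) from bigram counts."""
    bigrams = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables[:-1])
    return {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

def segment(syllables, tps, threshold=0.7):
    """Posit a word boundary wherever transitional probability dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three invented trisyllabic "words", concatenated with no pauses between them.
words = [["ba", "bi", "bu"], ["go", "la", "tu"], ["da", "ro", "pi"]]
order = [0, 1, 0, 2, 1, 2, 2, 0, 1, 1, 2, 0]
stream = [syl for i in order for syl in words[i]]

tps = transitional_probs(stream)
print(segment(stream, tps))  # recovers "babibu", "golatu", "daropi" in sequence
```

Within-word transitions here always have probability 1.0, while across-word transitions are variable and fall below the threshold, so boundaries land exactly at word edges.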

16.
Kim J, Davis C, Krins P. Cognition, 2004, 93(1): B39-B47
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments were conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written lexical decision, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. That priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

17.
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to current existing models of HSR, recognizes words from real speech input.
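At the computational level, both HSR and ASR can be framed as the same inference problem: combine a lexical prior with the likelihood of the acoustic evidence and choose the posterior-maximizing word. The following toy recognizer illustrates that framing only; the lexicon, priors, and confusion probabilities are invented here, and this is not the model presented in the article.

```python
import math

# Hypothetical toy lexicon: word -> (prior probability, phoneme sequence).
LEXICON = {
    "big": (0.5, ["b", "i", "g"]),
    "pig": (0.3, ["p", "i", "g"]),
    "bid": (0.2, ["b", "i", "d"]),
}

def likelihood(heard, intended):
    """Invented confusion model: a phoneme is heard correctly with
    probability 0.8, otherwise confused with probability 0.2."""
    return 0.8 if heard == intended else 0.2

def recognize(heard):
    """Posterior over lexicon entries given a sequence of heard phonemes."""
    scores = {}
    for word, (prior, phones) in LEXICON.items():
        if len(phones) != len(heard):
            continue  # toy simplification: only same-length candidates compete
        like = math.prod(likelihood(h, p) for h, p in zip(heard, phones))
        scores[word] = prior * like
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

post = recognize(["b", "i", "g"])
best = max(post, key=post.get)  # "big" has the highest posterior
```

The point of the sketch is the level of description: nothing here commits to a particular algorithm (e.g., lexical competition dynamics) or implementation, which is exactly the distinction the article draws.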

18.
Developmental language learning impairments affect 10 to 20% of children and increase their risk of later literacy problems (dyslexia) and psychiatric disorders. Both oral- and written-language impairments have been linked to slow neural processing, which is hypothesized to interfere with the perception of speech sounds that are characterized by rapid acoustic changes. Research into the etiology of language learning impairments not only has led to improved diagnostic and intervention strategies, but also has raised fundamental questions about the neurobiological basis of speech, language, and reading, as well as hemispheric lateralization.

19.
Much previous research has examined various aspects of auditory processing, including the localization of sounds, and the influence of lexical and indexical information on language processing. In the present set of experiments we explored the ability of listeners to estimate the number of speakers in a group solely from the information in an auditory signal. The bound on accurately estimating the number of simultaneous speakers is 3. We suggest that subitization—the ability to estimate numerosity of visual and auditory elements without explicitly counting these elements—rather than the capacity of short-term memory, may underlie this limitation. The cognitive constraint on estimating the number of simultaneous speakers may have implications for a wide variety of seemingly unrelated psychological phenomena.

