Similar Documents
20 similar documents were retrieved.
1.
张晶晶  杨玉芳 《心理科学进展》2019,27(12):2043-2051
In both language and music, smaller units combine as the signal unfolds to form larger units, ultimately yielding a hierarchical structure. Previous research shows that listeners can segment continuous speech and music into hierarchical structures and form hierarchical representations in the brain. Building on this perception, listeners can further integrate newly arriving linguistic and musical events into the hierarchical structure, forming a coherent interpretation and thereby completing communication. Future research should clarify the role of boundary cues in the perception of hierarchical structure, examine the factors that influence integration at different hierarchical levels, and further explore the relationship between hierarchical-structure processing in language and in music.

2.
The brain can process information rapidly to cope with a constantly changing environment; one typical example is rapid speech recognition. The bottleneck rate for natural speech is about 8-12 syllables per second, close to the rate of neural alpha oscillations. Moreover, previous studies have shown that alpha oscillations can regulate the temporal resolution of perception. Does the alpha oscillation rate therefore constrain the temporal bottleneck of rapid speech recognition, and through what mechanism? Using psychophysical and cognitive-neuroscience methods, this project examines how alpha oscillations affect the temporal bottleneck of rapid speech recognition at both the phenomenal and the mechanistic level. At the phenomenal level, it will test whether the temporal bottleneck of rapid speech recognition matches the alpha oscillation rate. At the mechanistic level, it will investigate how the alpha oscillation rate affects behavioral performance in rapid speech recognition and how it modulates the neural processing of the speech signal. The aim is to identify the neural mechanisms of rapid speech recognition, thereby deepening our understanding of the brain's rapid processing and further probing how neural oscillations regulate the brain's temporal resolution.

3.
How creativity arises remains unresolved. Because of their high temporal resolution, electrophysiological techniques can precisely reveal the neural oscillatory mechanisms at work as creative ideas are generated and thus help clarify the nature of creativity. Recent studies have found that single-band alpha oscillations strengthen as creativity increases, reflecting greater demands on internal information processing and stronger top-down inhibitory control during idea generation. Meanwhile, cross-frequency coupling across multiple oscillatory bands reflects dynamic changes in information exchange among frontal, temporal, and parietal regions during creative production. Future research should build on an integrative theoretical framework, combine multi-level and multi-method research tools, adopt more ecologically valid mathematical and computational approaches, and use computational neuroscience modeling to predict individual trajectories of creativity, so as to reach a comprehensive understanding of the nature of creativity.

4.
The neural oscillatory mechanisms of working memory are a current focus of memory research. Are neural oscillations merely an epiphenomenon of working memory, or do they directly participate in and regulate working memory processing? Previous studies have found that the brain's intrinsic oscillatory activity, when driven by rhythmic external stimulation, gradually becomes phase-synchronized with the external rhythm, a phenomenon known as neural entrainment. Building on this phenomenon, intervention studies using repetitive transcranial magnetic stimulation (rTMS) and transcranial alternating current stimulation (tACS) apply rhythmic magnetic or electrical stimulation to local brain regions to modulate frequency-specific oscillatory activity, cross-frequency coupling, or cross-regional phase synchronization during working memory, providing relatively direct causal evidence that neural oscillations participate in working memory processing. Future research should adopt a brain-network perspective and modulate oscillatory activity across multiple regions to further examine how neural oscillations affect working memory. In addition, stimulation protocols for rTMS/tACS modulation of working memory need to be explored and optimized, supplemented with objective EEG recordings, to improve the validity and reproducibility of such studies and ultimately to enhance working memory capacity.

5.
Speech imagery not only plays an important role in the brain's preprocessing mechanisms but is also a current focus of research on brain-computer interfaces (BCIs). Compared with normal speech production, speech imagery is similar in many respects, including its theoretical models, activated brain regions, and neural conduction pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and of imagining meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future work could further explore quality-assessment tools for speech imagery and neural decoding paradigms, brain control circuits, activation pathways, the mechanisms of speech imagery in people with speech disorders, and the neural signals underlying imagined words and sentences, providing a basis for effectively improving BCI recognition rates and facilitating communication for people with speech disorders.

6.
Semantic integration is the process of semantically linking a newly encountered word in reading with the preceding context to form a coherent representation. Many theories of discourse comprehension identify integration as a crucial process; semantic integration is a key step toward discourse coherence, so investigating its neural mechanisms and influencing factors matters for understanding discourse. Existing ERP studies indicate that semantic integration during discourse comprehension occurs immediately, while fMRI studies and analyses of neural oscillations provide evidence about the specific brain regions and neural networks involved. The main influencing factors include discourse-internal factors, non-verbal factors, and individual differences.

7.
Positive empathy refers to the understanding and vicarious sharing of another person's positive emotional states. Although, like negative empathy, it depends on the activity of the mirror neuron system and the theory-of-mind system, the two differ in how readily they arise. Researchers disagree about how the affect of positive empathy is represented in the brain: some hold that, as with negative empathy, it is centered on the insula and related regions, whereas others attribute it to the brain's pleasure system. Research on the factors influencing positive empathy has focused mainly on the relationship between the empathizer and the target. Future work should further characterize the generation mechanisms and affective representation of positive empathy, broaden research on its influencing factors, and begin to explore the neural basis of trait positive empathy.

8.
Speech activity is the process of exchanging ideas through language and is a higher function of the human brain. Because speech influences all of a person's psychological activities, studying the neural mechanisms of speech activity is of great significance for understanding the nature of mental activity. Some questions concerning the neural mechanisms of speech are briefly reviewed here, beginning with whether the brain contains a dedicated speech center; the scientific evidence bearing on this question comes mainly from clinical observations of patients with aphasia.

9.
Although social cognition covers diverse content, its core lies in how people understand the "self," "others," and the relationship between the two. Culture, as a distinctive social phenomenon, broadly influences social cognition, which is most evident in its effects on the processing of self- and other-related information and the underlying brain mechanisms. Research in cultural neuroscience shows that culture markedly shapes self-related processes such as self-referential memory, self-representation, and self-awareness, most likely because self-construal differs across cultural groups. The neural basis of these differences is mainly reflected in differential functional engagement of the medial prefrontal cortex during self-related processing across cultural groups. Correspondingly, culture also markedly influences the perception of others, especially of others' emotions, as reflected in the cultural advantage effect in facial expression recognition and in cultural differences in empathy; neurally, this difference is mainly reflected in the cultural plasticity of amygdala function. Future cultural neuroscience research could continue to examine how various forms of cultural difference (mainstream culture, regional culture, religious culture, etc.) affect: (1) the interaction between self-cognition and emotion cognition and its neural basis; and (2) multiple social-cognitive processes such as empathy, social comparison, theory of mind, and joint action, together with their neural mechanisms. Keywords: self-construal, cultural neuroscience, emotion cognition, self-representation, empathy

10.
何文广  陈宝国 《心理科学进展》2011,19(11):1615-1624
The bilingual cognitive advantage is manifested not only in the verbal domain but, more importantly, in non-verbal domains that draw on cognitive control, attentional selection, and inhibitory ability. Multiple factors influence whether this advantage appears; the age of second-language acquisition and bilingual proficiency are two of the most important. Research has found that the mechanisms of bilingual representation and production constitute the linguistic basis of the bilingual cognitive advantage, while the prefrontal cortex, centered on Broca's area, is its main neural basis. Future research should not only further examine the internal linguistic mechanisms and neural basis of the bilingual cognitive advantage but also attend to the relationship between bilingual cognition and individuals' emotional and personality development.

11.
The natural rhythms of speech help a listener follow what is being said, especially in noisy conditions. There is increasing evidence for links between rhythm abilities and language skills; however, the role of rhythm-related expertise in perceiving speech in noise is unknown. The present study assesses musical competence (rhythmic and melodic discrimination), speech-in-noise perception and auditory working memory in young adult percussionists, vocalists and non-musicians. Outcomes reveal that better ability to discriminate rhythms is associated with better sentence-in-noise (but not words-in-noise) perception across all participants. These outcomes suggest that sensitivity to rhythm helps a listener understand unfolding speech patterns in degraded listening conditions, and that observations of a “musician advantage” for speech-in-noise perception may be mediated in part by superior rhythm skills.

12.
Erin E. Hannon 《Cognition》2009,111(3):403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

13.
The perception of duration-based syllabic rhythm was examined within a metrical framework. Participants assessed the duration patterns of four-syllable phrases set within the stress structure XxxX (an Abercrombian trisyllabic foot). Using on-screen sliders, participants created percussive sequences that imitated speech rhythms and analogous non-speech monotone rhythms. There was a tendency to equalize the interval durations for speech stimuli but not for non-speech. Despite the perceptual regularization of syllable durations, different speech phrases were conceived in various rhythmic configurations, pointing to a diversity of perceived meters in speech. In addition, imitations of speech stimuli showed more variability than those of non-speech. Rhythmically skilled listeners exhibited lower variability and were more consistent with vowel-centric estimates when assessing speech stimuli. These findings enable new connections between meter- and duration-based models of speech rhythm perception.

14.
Lip reading is the ability to partially understand speech by looking at the speaker's lips. It improves the intelligibility of speech in noise when audio-visual perception is compared with audio-only perception. A recent set of experiments showed that seeing the speaker's lips also enhances sensitivity to acoustic information, decreasing the auditory detection threshold of speech embedded in noise [J. Acoust. Soc. Am. 109 (2001) 2272; J. Acoust. Soc. Am. 108 (2000) 1197]. However, detection is different from comprehension, and it remains to be seen whether improved sensitivity also results in an intelligibility gain in audio-visual speech perception. In this work, we use an original paradigm to show that seeing the speaker's lips enables the listener to hear better and hence to understand better. The audio-visual stimuli used here could not be differentiated by lip reading per se since they contained exactly the same lip gesture matched with different compatible speech sounds. Nevertheless, the noise-masked stimuli were more intelligible in the audio-visual condition than in the audio-only condition due to the contribution of visual information to the extraction of acoustic cues. Replacing the lip gesture by a non-speech visual input with exactly the same time course, providing the same temporal cues for extraction, removed the intelligibility benefit. This early contribution to audio-visual speech identification is discussed in relation to recent neurophysiological data on audio-visual perception.

15.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gesture more as it might be more challenging for non-native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.

16.
This study was designed to test the iambic/trochaic law, which claims that elements contrasting in duration naturally form rhythmic groupings with final prominence, whereas elements contrasting in intensity form groupings with initial prominence. It was also designed to evaluate whether the iambic/trochaic law describes general auditory biases, or whether rhythmic grouping is speech or language specific. In two experiments, listeners were presented with sequences of alternating /ga/ syllables or square wave segments that varied in either duration or intensity and were asked to indicate whether they heard a trochaic (i.e., strong-weak) or an iambic (i.e., weak-strong) rhythmic pattern. Experiment 1 provided a validation of the iambic/trochaic law in English-speaking listeners; for both speech and nonspeech stimuli, variations in duration resulted in iambic grouping, whereas variations in intensity resulted in trochaic grouping. In Experiment 2, no significant differences were found between the rhythmic-grouping performances of English- and French-speaking listeners. The speech/nonspeech and cross-language parallels suggest that the perception of linguistic rhythm relies largely on general auditory mechanisms. The applicability of the iambic/trochaic law to speech segmentation is discussed.

17.
The effects of perceptual learning of talker identity on the recognition of spoken words and sentences were investigated in three experiments. In each experiment, listeners were trained to learn a set of 10 talkers’ voices and were then given an intelligibility test to assess the influence of learning the voices on the processing of the linguistic content of speech. In the first experiment, listeners learned voices from isolated words and were then tested with novel isolated words mixed in noise. The results showed that listeners who were given words produced by familiar talkers at test showed better identification performance than did listeners who were given words produced by unfamiliar talkers. In the second experiment, listeners learned novel voices from sentence-length utterances and were then presented with isolated words. The results showed that learning a talker’s voice from sentences did not generalize well to identification of novel isolated words. In the third experiment, listeners learned voices from sentence-length utterances and were then given sentence-length utterances produced by familiar and unfamiliar talkers at test. We found that perceptual learning of novel voices from sentence-length utterances improved speech intelligibility for words in sentences. Generalization and transfer from voice learning to linguistic processing was found to be sensitive to the talker-specific information available during learning and test. These findings demonstrate that increased sensitivity to talker-specific information affects the perception of the linguistic properties of speech in isolated words and sentences.

18.
A continuous speech message alternated between the left and right ears retains generally good intelligibility, except at certain critical rates of alternation of about 3–4 switching cycles/sec. In the present experiment, subjects heard speech alternated between the two ears at eight different switching frequencies, and at four different speech rates. Results support an earlier contention that the critical intelligibility parameter in alternated speech is average speech content per ear segment, rather than absolute time per ear. Implications are discussed both in terms of critical speech segments in auditory analysis and in neural processing of binaural auditory information.
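A note on the relation reported above: the claim that intelligibility depends on average speech content per ear segment, rather than on absolute time per ear, can be made concrete with a small worked relation. This is an illustrative sketch only; the symbols f (switching frequency in cycles/sec), r (speech rate in syllables/sec), and c* (critical content per segment) are assumed notation, not taken from the original paper. One switching cycle delivers one segment to each ear, so each segment lasts half a cycle:

\[ T_{\text{seg}} = \frac{1}{2f}, \qquad c = r\,T_{\text{seg}} = \frac{r}{2f} \ \text{syllables per segment.} \]

If the disruption occurs when c is near a fixed critical value c*, rather than when the segment duration is near a fixed time, the disruptive switching frequency should scale with speech rate,

\[ f^{*} = \frac{r}{2c^{*}}, \]

which is the kind of dependence on speech rate that the content-per-segment interpretation predicts.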

19.
The purpose of this investigation was to judge whether the Lombard effect, a characteristic change in the acoustical properties of speech produced in noise, existed in adductor spasmodic dysphonia speech, and if so, whether the effect added to or detracted from speaker intelligibility. Intelligibility, as described by Duffy, is the extent to which the acoustic signal produced by a speaker is understood by a listener based on the auditory signal alone. Four speakers with adductor spasmodic dysphonia provided speech samples consisting of low probability sentences from the Speech Perception in Noise test to use as stimuli. The speakers were first tape-recorded as they read the sentences in a quiet speaking condition and were later tape-recorded as they read the same sentences while exposed to background noise. The listeners used as subjects in this study were 50 undergraduate university students. The results of the statistical analysis indicated a significant difference between the intelligibility of the speech recorded in the quiet versus noise conditions (F(1,49) = 57.80, p ≤ .001). It was concluded that a deleterious Lombard effect existed for the adductor spasmodic dysphonia speaker group, with the premise that the activation of a Lombard effect in such patients may detract from their overall speech intelligibility.

20.
The effects of rhythmic context on the ability of listeners to recognize slightly altered versions of 10-tone melodies were examined in three experiments. Listeners judged the melodic equivalence of two auditory patterns when their rhythms were either the same or different. Rhythmic variations produced large effects on a bias measure, indicating that listeners judged melodies to be alike if their rhythms were identical. However, neither rhythm nor pattern rate affected discriminability measures in the first study, in which rhythm was treated as a within-subjects variable. The other two studies examined rhythmic context as a between-subjects variable. In these, significant effects of temporal uncertainty due to the number and type of rhythms involved in a block of trials, as well as their assignment to standard and comparison melodies on a given trial, were apparent on both discriminability and bias measures. Results were interpreted in terms of the effect of temporal context on the rhythmic targeting of attention.
