Similar Literature
18 similar records found
1.
This study examined auditory averaged evoked potentials (AEPs) in six monkeys during voice recognition. The voiced consonant syllable "ba" spoken by a familiar person elicited a higher-amplitude N200 in the right temporal region than the voiceless syllable "pa", whereas an unfamiliar person's "pa" elicited a higher-amplitude N200 than "ba" in the left temporal region. The familiar person's "ba" elicited a longer P300 latency in both temporal regions than a familiar monkey's "er" call. These results indicate that monkey AEPs can objectively reflect the differences between familiar and unfamiliar human voices, and between familiar human voices and monkey calls.

2.
林庶芝, 沈政. Acta Psychologica Sinica (心理学报), 1991, 24(4): 78-83
This study recorded event-related potentials (ERPs) from six scalp regions in 21 female undergraduate and graduate students while they discriminated the voices of two unfamiliar speakers. Results showed that discrimination of disyllabic, semantically meaningful utterances elicited clearer ERP changes, and that presenting the two unfamiliar voices with unequal probability elicited clearer ERP changes than equal-probability presentation. In monosyllabic voice discrimination, N200 amplitude increased over the left frontal region and P300 latency lengthened over the right temporal region. In disyllabic semantic voice discrimination, P300 amplitude increased in five of the six regions (all except the right parietal region), and P300 latency shortened markedly over the right frontal region.

3.
Event-related potentials during familiar voice recognition   Cited: 1 (0 self, 1 other)
This experiment recorded event-related potentials from 8 participants while they recognized a familiar person's voice and discriminated unfamiliar voices. Familiar and unfamiliar speakers each produced two utterances, "pa" and "ba", presented at three probability ratios: 1:2, 1:1, and 2:1. Familiar voices evoked N100 and P200 waves with longer latencies and smaller amplitudes; the utterance "ba" evoked event-related potentials more effectively than "pa"; and the event-related potentials were more sensitive to familiar voices.

4.
Objective: To examine changes in early event-related potentials during Chinese character recognition in children with developmental dyslexia. Methods: Using a 32-channel EEG system and four types of stimuli, 18 children with developmental dyslexia and 18 normally reading children were tested, and the P1 and N170 components evoked by Chinese characters were analyzed. Results: P1 amplitude over the left occipital region was significantly larger in normal readers than in the dyslexic group, whereas N170 amplitude over the left occipito-temporal region was significantly larger in the dyslexic group; latencies did not differ. The dyslexic children also showed a significant left-right occipito-temporal asymmetry. Conclusion: Children with developmental dyslexia show clear deficits in early perceptual processing, which adversely affect subsequent cognitive activity.

5.
Event-related potentials (ERPs) were used to explore the electrophysiological signatures of working memory for face recognition in a delayed matching task. With face photographs as stimuli, undergraduate participants performed a target-matching working memory task. Both target and distractor faces elicited an N170 over bilateral occipito-temporal sites; target and distractor N170 amplitudes did not differ at the same electrodes but differed significantly at the temporal electrodes P7 and P8. Both target and distractor faces elicited a P300 during working memory. Under task conditions tracking either new or familiar targets, target and distractor ERP waveforms diverged after 250 ms, with targets more positive than distractors and new targets more positive than familiar targets. Familiar versus new distractors showed a prefrontal old/new effect from 250-650 ms, and in the late 450-650 ms window new working memory items were more positive than old ones. These results suggest that the face N170 effect may reflect holistic face perception, that the right-hemisphere advantage of the N170 is specifically a right temporal advantage, and that prior face learning affects the brain's recognition response to faces during working memory.

6.
Using a cued Go/Nogo task and ERP recordings, this study examined differences in automatic emotion regulation between older and younger adults. The ERP results showed: (1) happy faces elicited smaller Go-N2 amplitudes with shorter latencies, and larger Nogo-P3 amplitudes with shorter latencies, than neutral faces, indicating that emotional faces capture attention more strongly than neutral ones; (2) for happy faces, the older group showed larger Go/Nogo-P3 amplitudes than the younger group, with no significant latency difference; for sad faces, the older group showed larger Go/Nogo-P3 amplitudes and longer Nogo-P3 latencies, indicating that older adults inhibit responses to sad faces more strongly and show a degree of positivity effect in automatic emotion regulation.

7.
Event-related potentials (ERPs) were used to examine how reward anticipation affects the recognition of facial emotion. In a cue-target paradigm, ERPs were recorded while participants performed an emotion discrimination task on positive, neutral, and negative faces under reward-anticipation and no-reward conditions. Behaviorally, responses were faster under reward anticipation than without it, and faster to emotional than to neutral faces. In the ERP data, reward cues elicited more positive P1, P2, and P300 components than no-reward cues. The P1 and N170 amplitudes and the N300 evoked by target faces were all modulated by reward anticipation, with targets eliciting more positive ERPs under reward anticipation. The P1, N170, and VPP components were unaffected by facial emotion, whereas fronto-central N300 amplitude differentiated emotional (positive and negative) from neutral faces. Importantly, N300 amplitude showed an interaction between reward anticipation and emotion: the processing of positive emotion was unaffected by reward anticipation, whereas negative emotion processing and the negativity bias were significantly larger under reward anticipation than without it. These results indicate that reward anticipation modulates the processing of facial emotion, with different effects at different processing stages; motivational information regulates the allocation of attentional resources and enhances the negativity bias when individuals process facial emotion.

8.
Event-related potentials were recorded from 7 patients with left temporal lobe epilepsy and 9 normal controls under four conditions: (1) auditory brainstem evoked potentials, (2) red flash stimuli, (3) unfamiliar voice recognition, and (4) unfamiliar face recognition. The two groups did not differ in visual information processing, but in voice recognition the patients' N150 and P300 latencies were longer than the controls', indicating impaired complex auditory cognition. Under the red-flash and voice-recognition conditions, the patients' P100 amplitudes were larger than the controls', suggesting a heightened response to the physical intensity of strong stimuli and reduced habituation.

9.
白学军, 尹莎莎, 杨海波, 吕勇, 胡伟, 罗跃嘉. Acta Psychologica Sinica (心理学报), 2011, 43(10): 1103-1113
Using a visual search paradigm with two-dimensional abstract symmetric figures, we recorded behavioral responses and event-related potentials (ERPs) from 16 participants under long and short inter-stimulus interval (ISI) conditions and three visual working memory content conditions (valid, neutral, invalid), to explore the cognitive processes and brain mechanisms by which visual working memory content guides top-down attentional control. Results showed: (1) Regardless of ISI, reaction times were significantly shorter in the valid condition (the memorized figure matched the background figure containing the target) than in the invalid condition (the memorized figure differed from the background figure containing the target). (2) Frontal P2 amplitude was significantly larger in the valid condition than in the neutral condition (memorized figure absent from the search array); occipital P1 and N1 amplitudes and latencies did not differ across memory conditions; under short ISI, occipital P300 amplitude was significantly larger in the valid than in the invalid condition, whereas under long ISI it was significantly smaller. These findings indicate that when the target appears in an object matching the memory content, the object representation held in working memory is activated and captures attention preferentially in a top-down manner, with ISI modulating this process.

10.
ERP studies of learning disabilities   Cited: 1 (0 self, 1 other)
王恩国, 刘昌. Psychological Science (心理科学), 2005, 28(5): 1144-1147
Event-related potentials (ERPs) are an important technique in psychology and cognitive neuroscience, and applying them to the brain mechanisms of learning disabilities helps uncover their neural basis. Studies show that individuals with learning disabilities have smaller P300 amplitudes and longer P300 latencies. Their MMN amplitudes are smaller than those of controls, indicating deficits in automatic information processing. In word-naming tasks, their N400 is smaller, and N400 amplitude and latency differ markedly across subtypes of learning disability.

11.
Voices, in addition to faces, enable person identification. Voice recognition has been shown to evoke a distributed network of brain regions that includes, in addition to the superior temporal sulcus (STS), the anterior temporal pole, fusiform face area (FFA), and posterior cingulate gyrus (pCG). Here we report an individual (MS) with acquired prosopagnosia who, despite bilateral damage to much of this network, demonstrates the ability to distinguish voices of several well‐known acquaintances from voices of people that he has never heard before. Functional magnetic resonance imaging (fMRI) revealed that, relative to speech‐modulated noise, voices rated as familiar and unfamiliar by MS elicited enhanced haemodynamic activity in the left angular gyrus, left posterior STS, and posterior midline brain regions, including the retrosplenial cortex and the dorsal pCG. More interestingly, relative to noise and unfamiliar voices, the familiar voices elicited greater haemodynamic activity in the left angular gyrus and medial parietal regions including the dorsal pCG and precuneus. The findings are consistent with theories implicating the pCG in recognizing people who are personally familiar, and furthermore suggest that the pCG region of the voice identification network is able to make functional contributions to voice recognition even though other areas of the network, namely the anterior temporal poles, FFA, and the right parietal lobe, may be compromised.

12.
Apart from speech content, the human voice also carries paralinguistic information about speaker identity. Voice identification and its neural correlates have received little scientific attention up to now. Here we use event-related potentials (ERPs) in an adaptation paradigm, in order to investigate the neural representation and the time course of vocal identity processing. Participants adapted to repeated vowel-consonant-vowel (VCV) utterances of one of two personally familiar speakers (A or B), before classifying a subsequent test voice varying on an identity continuum between these two speakers. Following adaptation to speaker A, test voices were more likely perceived as speaker B and vice versa, and these contrastive voice identity aftereffects (VIAEs) were much more pronounced when the same syllable, rather than a different syllable, was used as adaptor. Adaptation induced amplitude reductions of the frontocentral N1-P2 complex and a prominent reduction of the parietal P3 component, for test voices preceded by identity-corresponding adaptors. Importantly, only the P3 modulation remained clear for across-syllable combinations of adaptor and test stimuli. Our results suggest that voice identity is contrastively processed by specialized neurons in auditory cortex within ~250 ms after stimulus onset, with identity processing becoming less dependent on speech content after ~300 ms.

13.
Three experiments investigated functional asymmetries related to self-recognition in the domain of voices. In Experiment 1, participants were asked to identify one of three presented voices (self, familiar, or unknown) by responding with either the right or the left hand. In Experiment 2, participants were presented with auditory morphs between the self-voice and a familiar voice and were asked to perform a forced-choice decision on speaker identity with either the left or the right hand. In Experiment 3, participants were presented with continua of auditory morphs between the self-voice or a familiar voice and a famous voice, and were asked to stop the presentation either when the voice became "more famous" or "more familiar/self". While these experiments did not reveal an overall hand difference for self-recognition, the last study, with improved design and controls, suggested a right-hemisphere advantage for self- compared to other-voice recognition, similar to that observed in the visual domain for self-faces.

14.
Two experiments examined repetition priming in the recognition of famous voices. In Experiment 1, reaction times for fame decisions to famous voice samples were shorter than in an unprimed condition when the voices were primed by a different voice sample of the same person presented in an earlier phase of the experiment. No effect of voice repetition was observed for non-famous voices. Experiment 2 investigated whether this priming effect is voice-specific or whether it is related to post-perceptual processes in person recognition. Recognizing a famous voice was again primed by having earlier heard a different voice sample of that person. Although an earlier exposure to that person's name did not cause any priming, there was some indication of priming following an earlier exposure to that person's face. Finally, earlier exposure to the identical voice sample (as compared to a different voice sample from the same person) caused a considerable bias towards responding 'famous', i.e. performance benefits for famous but costs for non-famous voices. The findings suggest that (1) repetition priming in voice recognition primarily involves the activation of perceptual representations of voices, and (2) it is important to determine the conditions in which priming causes bias effects that need to be disentangled from performance benefits.
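Disentangling a genuine performance benefit from a response bias, as the final point above requires, is conventionally done with signal detection theory: sensitivity d' = z(H) - z(F) and criterion c = -(z(H) + z(F))/2, where H and F are hit and false-alarm rates. A minimal sketch; the hit/false-alarm rates below are made-up illustrative numbers, not data from the experiments described:

```python
# Signal-detection sketch: separating sensitivity (d') from response
# bias (c) in a famous/non-famous decision task. Illustrative rates only.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias; negative = liberal
    return d_prime, criterion

# Different-sample priming: hits rise, false alarms stay low -> a real benefit.
benefit = d_prime_and_criterion(0.85, 0.10)
# Identical-sample repetition: hits AND false alarms rise -> mostly a shift
# toward responding 'famous' (negative criterion), with lower sensitivity.
bias = d_prime_and_criterion(0.90, 0.30)
print(benefit, bias)
```

A pattern of more hits together with more false alarms thus shows up as a criterion shift rather than a sensitivity gain, which is exactly the distinction the authors argue must be made.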

15.
In this study, we used the distinction between remember and know (R/K) recognition responses to investigate the retrieval of episodic information during familiar face and voice recognition. The results showed that familiar faces presented in standard format were recognized with R responses on approximately 50% of the trials. The corresponding figure for voices was less than 20%. Even when overall levels of recognition were matched between faces and voices by blurring the faces, significantly more R responses were observed for faces than for voices. Voices were significantly more likely to be recognized with K responses than were blurred faces. These findings indicate that episodic information was recalled more often from familiar faces than from familiar voices. The results also showed that episodic information about a familiar person was never recalled unless some semantic information, such as the person's occupation, was also retrieved.

16.
Integrating the multisensory features of talking faces is critical to learning and extracting coherent meaning from social signals. While we know much about the development of these capacities at the behavioral level, we know very little about the underlying neural processes. One prominent behavioral milestone of these capacities is the perceptual narrowing of face–voice matching, whereby young infants match faces and voices across species, but older infants do not. In the present study, we provide neurophysiological evidence for developmental decline in cross‐species face‐voice matching. We measured event‐related brain potentials (ERPs) while 4‐ and 8‐month‐old infants watched and listened to congruent and incongruent audio‐visual presentations of monkey vocalizations and humans mimicking monkey vocalizations. The ERP results indicated that younger infants distinguished between the congruent and the incongruent faces and voices regardless of species, whereas in older infants, the sensitivity to multisensory congruency was limited to the human face and voice. Furthermore, with development, visual and frontal brain processes and their functional connectivity became more sensitive to the congruence of human faces and voices relative to monkey faces and voices. Our data show the neural correlates of perceptual narrowing in face–voice matching and support the notion that postnatal experience with species identity is associated with neural changes in multisensory processing (Lewkowicz & Ghazanfar, 2009).

17.
The voice is a marker of a person's identity which allows individual recognition even if the person is not in sight. Listening to a voice also affords inferences about the speaker's emotional state. Both these types of personal information are encoded in characteristic acoustic feature patterns analyzed within the auditory cortex. In the present study 16 volunteers listened to pairs of non-verbal voice stimuli with happy or sad valence in two different task conditions while event-related brain potentials (ERPs) were recorded. In an emotion matching task, participants indicated whether the expressed emotion of a target voice was congruent or incongruent with that of a (preceding) prime voice. In an identity matching task, participants indicated whether or not the prime and target voice belonged to the same person. Effects based on emotion expressed occurred earlier than those based on voice identity. Specifically, P2 amplitudes (at approximately 200 ms) were reduced for happy voices when primed by happy voices. Identity match effects, by contrast, did not start until around 300 ms. These results show an early task-specific emotion-based influence on the early stages of auditory sensory processing.

18.
Why are familiar-only experiences more frequent for voices than for faces?   Cited: 1 (0 self, 1 other)
Hanley, Smith, and Hadfield (1998) showed that when participants were asked to recognize famous people from hearing their voice, there was a relatively large number of trials in which the celebrity's voice was felt to be familiar but biographical information about the person could not be retrieved. When a face was found familiar, however, the celebrity's occupation was significantly more likely to be recalled. This finding is consistent with the view that it is much more difficult to associate biographical information with voices than with faces. Nevertheless, recognition level was much lower for voices than for faces in Hanley et al.'s study, and participants made significantly more false alarms in the voice condition. In the present study, recognition performance in the face condition was brought down to the same level as recognition in the voice condition by presenting the faces out of focus. Under these circumstances, it proved just as difficult to recall the occupations of faces found familiar as it was to recall the occupations of voices found familiar. In other words, there was an equally large number of familiar-only responses when faces were presented out of focus as in the voice condition. It is argued that these results provide no support for the view that it is relatively difficult to associate biographical information with a person's voice. It is suggested instead that associative connections between processing units at different levels in the voice-processing system are much weaker than is the case with the corresponding units in the face-processing system. This will reduce the recall of occupations from voices even when the voice has been found familiar. A simulation was performed using the latest version of the IAC model of person recognition (Burton, Bruce, & Hancock, 1999) which demonstrated that the model can readily accommodate the pattern of results obtained in this study.
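The weak-connections account above can be illustrated with a toy interactive activation and competition (IAC) network. This is a minimal sketch assuming standard IAC update dynamics; the three-unit chain, weights, and parameter values are illustrative and are not the published Burton, Bruce, and Hancock (1999) model:

```python
# Toy IAC-style chain: modality recognition unit -> person identity node
# (PIN) -> semantic (occupation) unit. Weaker between-level links on the
# voice route reduce semantic retrieval even when recognition succeeds.
AMAX, AMIN, REST, DECAY, STEP = 1.0, -0.2, -0.1, 0.1, 0.1

def update(a, net):
    """One IAC activation update for a unit receiving net input `net`."""
    if net > 0:
        da = net * (AMAX - a) - DECAY * (a - REST)
    else:
        da = net * (a - AMIN) - DECAY * (a - REST)
    return a + STEP * da

def run(link_w, n_steps=200, ext=1.0):
    """Drive the chain; `link_w` scales BOTH between-level links,
    modelling weaker associative connections at every level."""
    rec = pin = siu = REST
    for _ in range(n_steps):
        rec = update(rec, ext)                      # stimulus input on
        pin = update(pin, link_w * max(0.0, rec))   # recognition -> PIN
        siu = update(siu, link_w * max(0.0, pin))   # PIN -> occupation
    return rec, siu

rec_face, siu_face = run(link_w=1.0)   # strong face-route links
rec_voice, siu_voice = run(link_w=0.3)  # weaker voice-route links
print(rec_face, siu_face, rec_voice, siu_voice)
```

The recognition unit settles to the same activation on both routes (the voice is "found familiar"), but the occupation unit settles much lower on the voice route; with a recall threshold between the two levels, occupation is retrieved from the face but not the voice, the familiar-only pattern described above.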


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号