Similar Literature
20 similar documents found (search time: 109 ms)
1.
This study used event-related potential (ERP) measures to examine the time course of how emotional voices influence facial expression recognition. Voice–face pairs with congruent or incongruent emotional valence were presented, and participants judged whether the valence of the emotional voice matched that of the facial expression. Behavioral results showed faster responses to valence-congruent voice–face pairs. ERP results showed that at 70–130 ms and 220–450 ms, facial expressions in the incongruent condition elicited more negative waveforms than in the congruent condition, and at 450–750 ms the incongruent condition elicited a more positive late positive component. These findings indicate that emotional voices exert cross-modal influences on multiple stages of facial expression recognition.

2.
Independent processing and interaction mechanisms of facial expression recognition and facial identity recognition (Cited: 6; self-citations: 0; by others: 6)
The functional model of face recognition holds that facial expression recognition and facial identity recognition proceed along two independent, parallel routes, and many researchers have endorsed and followed this separation principle. Recent studies, however, indicate that facial expression recognition and facial identity recognition interact. This paper first summarizes and analyzes existing findings on facial expression recognition and reviews the debates in neuropsychological and cognitive-neuroscience research; it then introduces the distributed neural mechanisms of face perception and the relevant experimental evidence; finally, it proposes a multilevel integration model of facial expression and facial identity recognition and outlines prospects for future research.

3.
The role of discriminability in the interaction between facial expression recognition and facial identity recognition (Cited: 1; self-citations: 1; by others: 0)
汪亚珉, 傅小兰. 《心理学报》 (Acta Psychologica Sinica), 2007, 39(2): 191-200
Previous research has shown that effects of identity on facial expression recognition are common, whereas effects of expression on facial identity recognition are rare; only a few studies have found that expression affects identity recognition for familiar faces. A recent study using highly similar model face photographs as stimuli also found an effect of expression on identity recognition, and proposed that expression failed to affect identity recognition in earlier studies because identity was more discriminable than expression. The present study questions whether expression discriminability is necessarily lower than identity discriminability, arguing that earlier studies used static expression photographs, which removed the intensity-variation cues that expressions naturally carry and thereby made expressions relatively less discriminable. We hypothesized that when intensity-variation cues are preserved, expression discriminability increases, and expression recognition may then be unaffected by identity. The experimental results supported this hypothesis, demonstrating that the relative discriminability of expression and identity is an important factor determining the pattern of interaction between the two recognition processes. Mutual influence between expression and identity recognition therefore does not in itself rule out independent processing of the two; the results further suggest that expression and identity information can be partially dissociated under certain conditions.

4.
To explore self-face recognition in patients with schizophrenia, and the influence of faces on voice identity recognition (same-sex and opposite-sex voices) in an audiovisual integration task, 34 hospitalized patients with schizophrenia and 26 healthy controls completed three experiments: a unimodal dynamic self-face recognition task, a unimodal self-voice recognition task, and an audiovisual integration task. Results showed that the patients' unimodal self-face and self-voice recognition did not differ from the healthy group, but in the audiovisual integration task their voice identity recognition was influenced by face recognition in the visual channel. These results indicate that patients with schizophrenia retain the ability to recognize their own faces, and that the self-face facilitates recognition of the self-voice while inhibiting recognition of same-sex and opposite-sex others' voices.

5.
People rapidly infer the personality traits of strangers from facial or vocal cues, forming first impressions. First impressions of personality from faces and from voices are similar in dimensional structure and underlying mechanisms, yet each shows specificity in its sensitivity to particular traits and dimensions and in its concrete cognitive mechanisms. Future research could directly compare face-based and voice-based first impressions of personality using the same set of perceived individuals, and should examine the processing characteristics of both, as well as the cross-modal integration of face and voice perception during the formation of first impressions of personality.

6.
Previous research on facial identity and expression recognition suggests that high-spatial-frequency information may be selectively related to expression recognition, whereas low-spatial-frequency information may be selectively related to identity recognition. To test this hypothesis, three experiments measured the Garner effect while manipulating spatial frequency. Experiment 1 measured the Garner effect between identity and expression recognition under full-frequency conditions and found significant mutual interference. Experiment 2 measured interference under high-frequency conditions: the Garner effect for expression recognition was no longer significant, whereas the effect for identity recognition was essentially unchanged, yielding a dissociation. Experiment 3 measured the Garner effect under low-frequency conditions: the Garner effects for both expression and identity recognition remained significant, unaffected by the filtering out of high frequencies. Building on the Garner paradigm, the study proposes jointly examining separability and task difficulty as dual indices of face recognition, analyzes the results accordingly, and concludes that high-spatial-frequency information provides an effective scale for separating facial identity from expression information.

7.
Research on facial expression recognition has long centered on the structural features of the face itself, but recent studies have found that facial expression recognition is also influenced by the context in which a face appears (e.g., language, body context, natural and social scenes), and contextual influence is especially strong when the expressions to be recognized are similar. This paper first reviews and analyzes recent research on how contexts such as language, body movements, and natural and social scenes influence individuals' recognition of facial expressions; it then analyzes how factors such as cultural background, age, and anxiety level modulate these contextual effects; finally, it emphasizes that future research should study child participants, extend the range of emotion categories examined, and attend to face emotion perception in real-life settings.

8.
The human body is composed of parts arranged in fixed spatial relationships and, like the human face, is symmetrical. It too conveys information about an individual's identity and behavior, such as age, sex, intentions, and emotional state, and it complements the face in enabling identification of individuals. Body perception refers to the brain's detection, perception, or recognition of human body stimuli entering the visual processing system, and it deserves as much attention as face perception has received. This article surveys and evaluates cognitive-neuroscience research on body perception, focusing on its cognitive and neural mechanisms, and raises questions that merit further study.

9.
Developmental prosopagnosia is a lifelong deficit in face recognition that emerges in childhood and cannot be attributed to intellectual decline, affective disorders, object recognition difficulties, or acquired brain injury. The cognitive mechanisms implicated include face-specific mechanisms, impaired configural processing, face detection, face memory, and facial identity recognition. Its neural basis comprises a core network and an extended network: the former is associated with face-selective responses and memory representations, while the latter mainly supports the representation of face knowledge, long-term face memory, and face working memory. Future research should refine the extended neural network and broaden the associated cognitive network; further clarify the relationship between face detection and developmental prosopagnosia; and examine the genetic basis of the disorder, strengthen developmental studies of it, and promote rehabilitation efforts.

10.
Faces have characteristics that clearly distinguish them from other stimuli, and the goals of face cognition accordingly differ markedly from those of object cognition. Based on these goals, this paper divides face cognition into four levels: a first level that distinguishes faces from ordinary objects, a second level that recognizes the physical attributes of a face, a third level that recognizes its biological attributes, and a fourth level that recognizes its social attributes. The fusiform face area (FFA) is a key brain region for face processing; by discussing its contribution at each of these levels, the paper further clarifies the role of the FFA in face processing.

11.
The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face–voice pairs in which the face and voice were co-presented and were either “matched” (same person), “related” (two highly associated people), or “mismatched” (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition was consistently high regardless of the identity of the accompanying voice. However accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework.

12.
Recognising identity and emotion conveyed by the face is important for successful social interactions and has thus been the focus of considerable research. Debate has surrounded the extent to which the mechanisms underpinning face emotion and face identity recognition are distinct or share common processes. Here we use an individual differences approach to address this issue. In a well-powered (N = 605) and age-diverse sample we used structural equation modelling to assess the association between face emotion recognition and face identity recognition ability. We also sought to assess whether this association (if present) reflected visual short-term memory and/or general intelligence (g). We observed a strong positive correlation (r = .52) between face emotion recognition ability and face identity recognition ability. This association was reduced in magnitude but still moderate in size (r = .28) and highly significant when controlling for measures of g and visual short-term memory. These results indicate that face emotion and face identity recognition abilities in part share a common processing mechanism. We suggest that face processing ability involves multiple functional components and that modelling the sources of individual differences can offer an important perspective on the relationship between these components.

13.
We are constantly exposed to our own face and voice, and we identify our own faces and voices as familiar. However, the influence of self-identity upon self-speech perception is still uncertain. Speech perception is a synthesis of both auditory and visual inputs; although we hear our own voice when we speak, we rarely see the dynamic movements of our own face. If visual speech and identity are processed independently, no processing advantage would obtain in viewing one’s own highly familiar face. In the present experiment, the relative contributions of facial and vocal inputs to speech perception were evaluated with an audiovisual illusion. Our results indicate that auditory self-speech conveys a processing advantage, whereas visual self-speech does not. The data thereby support a model of visual speech as dynamic movement processed separately from speaker recognition.

14.
Voice identity recognition is critical to many aspects of social communication, and most individuals can identify a speaker from the voice alone; patients with phonagnosia, however, appear to have lost this ability. Phonagnosia refers to impairment at different stages of voice identity processing, and its main forms are acquired phonagnosia and developmental phonagnosia with their subtypes. In acquired phonagnosia, the damaged brain regions mainly include the temporal lobe, Heschl's gyrus, and the temporal pole; developmental phonagnosia is mainly associated with atypical responses in the right posterior superior temporal sulcus and with impaired functional connectivity between the temporal lobe and the amygdala. Future research could focus on screening methods for phonagnosia, the delimitation of the disorder, and cultural differences.

15.
We present a single case study of a brain-damaged patient, AD, suffering from visual face and object agnosia, with impaired visual perception and preserved mental imagery. She is severely impaired in all aspects of overt recognition of faces as well as in covert recognition of familiar faces. She shows a complete loss of processing facial expressions in recognition as well as in matching tasks. Nevertheless, when presented with a task where face and voice expressions were presented concurrently, there was a clear impact of face expressions on her ratings of the voice. The cross-modal paradigm used here and validated previously with normal subjects (de Gelder & Vroomen, 1995, 2000), appears as a useful tool in investigating spared covert face processing in a neuropsychological perspective, especially with prosopagnosic patients. These findings are discussed against the background of different models of the covert recognition of face expressions.

16.
Recent literature has raised the suggestion that voice recognition runs in parallel to face recognition. As a result, a prediction can be made that voices should prime faces and faces should prime voices. A traditional associative priming paradigm was used in two studies to explore within‐modality priming and cross‐modality priming. In the within‐modality condition where both prime and target were faces, analysis indicated the expected associative priming effect: The familiarity decision to the second target celebrity was made more quickly if preceded by a semantically related prime celebrity, than if preceded by an unrelated prime celebrity. In the cross‐modality condition, where a voice prime preceded a face target, analysis indicated no associative priming when a 3‐s stimulus onset asynchrony (SOA) was used. However, when a relatively longer SOA was used, providing time for robust recognition of the prime, significant cross‐modality priming emerged. These data are explored within the context of a unified account of face and voice recognition, which recognizes weaker voice processing than face processing.

17.
Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently from one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed an identity sorting task with either dynamic video-only, audio-only or dynamic audiovisual stimuli. In this task, participants were asked to sort multiple, naturally-varying stimuli from three different people by perceived identity. We found that identity perception was more accurate for video-only and audiovisual stimuli compared with audio-only stimuli. Interestingly, there was no difference in accuracy between video-only and audiovisual stimuli. Auditory information nonetheless played a role alongside visual information as audiovisual identity judgements per stimulus could be predicted from both auditory and visual identity judgements, respectively. While the relationship was stronger for visual information and audiovisual information, auditory information still uniquely explained a significant portion of the variance in audiovisual identity judgements. Our findings thus align with previous theoretical and empirical work that proposes that, compared with faces, voices are an important but relatively less salient and a weaker cue to identity perception. We expand on this work to show that, at least in the context of this study, having access to voices in addition to faces does not result in better identity perception accuracy.

18.
Burton AM, Bonner L. Perception, 2004, 33(6): 747-752
Two experiments are reported in which subjects made judgments about the sex or the familiarity of a voice. In experiment 1, subjects were fans of the BBC-radio soap opera, The Archers, and familiar voice clips were taken from this programme. Subjects showed a large reduction in response times when making sex judgments to familiar voices, despite the fact that sex judgments are generally much faster than familiarity judgments. In experiment 2, the same familiar clips were played to subjects unfamiliar with the soap opera, and no difference was observed in times to make sex judgments to Archers or non-Archers voices. We conclude that, unlike the case of face recognition, sex and identity processing of voices are not independent. The findings constrain models of person recognition across multiple modalities.

19.
We rarely become familiar with the voice of another person in isolation but usually also have access to visual identity information, thus learning to recognize their voice and face in parallel. There are conflicting findings as to whether learning to recognize voices in audiovisual vs audio-only settings is advantageous or detrimental to learning. One prominent finding shows that the presence of a face overshadows the voice, hindering voice identity learning by capturing listeners' attention (Face Overshadowing Effect; FOE). In the current study, we tested the proposal that the effect of audiovisual training on voice identity learning is driven by attentional processes. Participants learned to recognize voices through either audio-only training (Audio-Only) or through three versions of audiovisual training, where a face was presented alongside the voices. During audiovisual training, the faces were either looking at the camera (Direct Gaze), were looking to the side (Averted Gaze) or had closed eyes (No Gaze). We found a graded effect of gaze on voice identity learning: Voice identity recognition was most accurate after audio-only training and least accurate after audiovisual training including direct gaze, constituting a FOE. While effect sizes were overall small, the magnitude of FOE was halved for the Averted and No Gaze conditions. With direct gaze being associated with increased attention capture compared to averted or no gaze, the current findings suggest that incidental attention capture at least partially underpins the FOE. We discuss these findings in light of visual dominance effects and the relative informativeness of faces vs voices for identity perception.

20.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.
