Similar Literature
19 similar documents found
1.
Using a cue-target paradigm with two pilot experiments and one formal experiment, this study examined the gaze-cueing effect of subliminally presented faces with different emotional expressions. After a face bearing an expression and a gaze cue was presented, participants were required to judge the target's location quickly and accurately, and reaction times were recorded. Results showed that even when participants had no conscious awareness of the gaze cue or the facial expression, the gaze-cueing effect was present and was modulated by expression: participants judged the target faster when the gaze cue was valid and accompanied by a fearful expression, and the reaction-time difference between invalid- and valid-cue conditions was largest for fearful faces. These results suggest that subliminal emotional stimuli can trigger more primitive biological responses, and that at an early attentional stage individuals are more sensitive to fear-related information, showing a negative emotional bias in attention.

2.
张美晨  魏萍  张钦 《心理学报》2015,47(11):1309-1317
To examine the gaze-cueing effect under supraliminal and subliminal presentation of different facial expressions, the experiment used the gaze direction of supraliminally or subliminally presented faces as the cue while varying facial expression, and required participants to judge quickly and accurately the location of a subsequent target stimulus (an uppercase letter). Under supraliminal presentation, the gaze-cueing effect was significant and unaffected by facial expression. Under subliminal presentation, the gaze-cueing effect was also significant but was modulated by expression: on invalid-cue trials, reaction times for judging the target location were significantly longer for positive and negative expressions than for neutral faces. These results suggest that in supraliminal perception, although the facial expression is clearly perceived, top-down control mechanisms lead it to be ignored, so it does not affect attentional bias; in subliminal perception, the facial expression is processed automatically and does influence attentional bias.
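The cueing-effect logic in these gaze-cueing designs reduces to simple arithmetic on reaction times: the effect is the mean invalid-cue RT minus the mean valid-cue RT. A minimal sketch; all condition names and RT values below are invented for illustration, not data from the study:

```python
# Hypothetical sketch of computing a gaze-cueing effect per expression
# condition. RTs (in ms) are invented example values, not study data.

def cueing_effect(rts_valid, rts_invalid):
    """Gaze-cueing effect: mean RT on invalid-cue trials minus mean RT on
    valid-cue trials. A positive value means attention followed the gaze."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_invalid) - mean(rts_valid)

# Invented per-condition RTs for one participant.
rts = {
    "fearful": {"valid": [310, 305, 298], "invalid": [355, 360, 350]},
    "neutral": {"valid": [320, 318, 322], "invalid": [340, 338, 342]},
}

for emotion, cond in rts.items():
    effect = cueing_effect(cond["valid"], cond["invalid"])
    print(f"{emotion}: cueing effect = {effect:.1f} ms")
```

With these invented numbers the fearful condition yields a larger effect than the neutral one, mirroring the kind of expression-modulated cueing pattern the abstracts describe.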

3.
To explore how individuals with high trait anxiety process emotional stimuli at the preattentive stage and to clarify their emotional bias, this study used a deviant-standard reverse oddball paradigm to examine the effect of trait anxiety on the preattentive processing of facial expressions. Results showed that in the low trait-anxiety group, the early expression-related mismatch negativity (EMMN) elicited by sad faces was significantly larger than that elicited by happy faces, whereas in the high trait-anxiety group the early EMMN did not differ between happy and sad faces. Moreover, EMMN amplitude for happy faces was significantly larger in the high than in the low trait-anxiety group. The results indicate that personality traits are an important factor in the preattentive processing of facial expressions: unlike typical participants, individuals with high trait anxiety process happy and sad faces in a similar way at the preattentive stage and may have difficulty distinguishing between happy and sad emotional faces.

4.
Using three types of expressive faces as materials and the event-related potential (ERP) method, this study examined how expression cues interact with gaze shifts during attentional orienting and how the two jointly influence observers' responses. Results showed that: (1) a cueing effect appeared for all three facial expressions at both SOAs; (2) at the longer SOA, significant differences in the magnitude of the gaze-cueing effect emerged between neutral and fearful faces and between happy and fearful faces; (3) an effect of facial expression appeared on the P1 component elicited by the expression cue; and (4) the target-elicited P1 and N1 indicated the presence of a gaze-cueing effect and its interaction with expression. In conclusion, expression cues first influence observers' orienting responses, followed by the gaze cue and its interaction with the expression cue.

5.
孙俊才  刘萍  李丹 《心理科学》2018,(5):1084-1089
Empathy refers to experiencing an emotion isomorphic with another person's by observing, imagining, or inferring that person's feelings. Using a dot-probe paradigm combined with eye tracking, with words and facial expressions as materials, this study examined the attentional bias of high- and low-empathy participants toward different types of stimuli and the time course of its components. Results showed that although both high- and low-empathy participants exhibited early attentional orienting toward negative stimuli (especially sad faces; shorter time to first fixation), only high-empathy participants showed longer late attentional maintenance on sad faces (longer total fixation time). The study suggests that facial expressions are a more effective material than words for distinguishing empathy-related attentional bias, and that high-empathy individuals' attentional bias toward sad facial expressions provides important empirical evidence for understanding interpersonal mind perception.

6.
雷怡  夏琦  莫志凤  李红 《心理学报》2020,52(7):811-822
Recent research has found that adults show a stronger attentional bias toward infant faces than toward adult faces and other social stimuli. Using a dot-probe paradigm combined with eye tracking, this study examined how cuteness and familiarity affect the attentional bias toward infant faces. Behavioral results showed that adults had a stronger reaction-time bias toward highly cute infant faces. Eye-movement results showed stronger first-fixation-duration and total-fixation-duration biases toward highly cute infant faces, reflecting an attentional-maintenance pattern, and this effect appeared only under low familiarity. In cuteness ratings, highly familiar infant faces were rated significantly cuter than less familiar ones. The results indicate that cuteness affects adults' attentional bias toward infant faces only when familiarity is low, and that in preference behavior, subjective ratings of infant faces and viewing behavior may dissociate.

7.
This study used a dot-probe paradigm with facial stimuli of different emotional content (happy, neutral, sad, and angry) to examine the negative attentional bias of subthreshold-depressed individuals and its underlying mechanism. In the dot-probe task, emotional faces were presented in pairs (negative-neutral, positive-neutral); the location of the emotional cue relative to the target defined negative congruent/incongruent and positive congruent/incongruent conditions, and a neutral-neutral face pair was added as a baseline against which the congruent and incongruent conditions could be compared. Results showed that subthreshold-depressed individuals' reaction times were significantly longer in the negative-incongruent than in the negative-congruent condition, indicating an attentional bias toward negative stimuli. Further comparisons showed that their reaction times in the negative-incongruent condition were significantly longer than in the neutral-neutral baseline, whereas the negative-congruent condition did not differ from baseline, indicating that the negative attentional bias reflects difficulty disengaging attention from negative stimuli. In addition, unlike the non-depressed control group, subthreshold-depressed individuals showed no attentional bias toward positive stimuli. Overall, individuals in a subthreshold-depressed state show an attentional bias toward negative stimuli, specifically difficulty disengaging from them, possibly due to disturbances in their attentional control and emotion-regulation functions.
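The baseline logic in dot-probe designs of this kind, using a neutral-neutral pair to separate faster orienting toward an emotional cue from difficulty disengaging from it, can be sketched as follows. The function name and all RT values are hypothetical illustrations, not the study's data:

```python
# Hypothetical sketch: decomposing dot-probe attention bias with a
# neutral-neutral baseline. Mean RTs (ms) below are invented.

def bias_indices(rt_incongruent, rt_congruent, rt_baseline):
    """total_bias      > 0: overall bias toward the emotional cue
    vigilance       > 0: faster orienting toward the emotional cue
    disengagement   > 0: difficulty disengaging from the emotional cue"""
    return {
        "total_bias": rt_incongruent - rt_congruent,
        "vigilance": rt_baseline - rt_congruent,
        "disengagement": rt_incongruent - rt_baseline,
    }

# Invented mean RTs: incongruent slower than baseline, congruent ~ baseline.
idx = bias_indices(rt_incongruent=560.0, rt_congruent=540.0, rt_baseline=542.0)
print(idx)
```

With these invented numbers, the disengagement index is large while the vigilance index is near zero, the pattern the abstract interprets as disengagement difficulty rather than faster orienting.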

8.
Using an identification-judgment paradigm, this study measured reaction times and accuracy for facial-expression and gender identification in a 2 (task: expression judgment, gender judgment) × 2 (expression: positive, negative) × 2 (face gender: male, female) within-subjects design, with reaction time and accuracy as dependent variables, to examine the interaction between gender and emotional-valence processing in face images. Results showed that positive expressions were identified faster and more accurately than negative ones; an own-gender bias emerged, with female participants processing female faces faster; and expression and gender processing interacted, with expression processing influencing gender processing, while no influence of gender processing on expression processing was found.

9.
Three experiments examined how group information affects facial-expression recognition. Results showed: (1) the emotional state of surrounding faces affected recognition of the target face's emotion; reaction times were significantly shorter, and recognition accuracy higher, when the two were congruent than when they were incongruent. (2) Group information moderated the influence of the surrounding faces' emotion on the target face, and thereby expression recognition. Specifically, in the group condition, recognition of the target expression was affected by the surrounding faces' emotional state: when the surrounding and target emotions were congruent, matching the expectation, built from perceptual cues, that group members share emotional states, recognition was more accurate and faster than when they were incongruent; in the non-group condition, participants were unaffected by the surrounding faces' emotion. The results indicate that individuals can recognize facial emotion on the basis of the social relations among the people depicted; when a group is present, they form an expectation of emotional consistency among its members, which in turn affects facial-expression recognition.

10.
Thirty-three undergraduates completed, in sequence, emotional-word memorization, target-face gender judgment, and emotional-word recognition tasks in a combined emotional-priming and recognition paradigm, to examine how emotional stimuli held in working memory affect gender judgments of faces. Results showed: (1) reaction times for gender judgments of the target face were significantly longer under neutral and fearful prime conditions than under the sad condition; (2) under pleasant prime words, the effect of cue validity was significant, and under invalid cues, the primes' different emotional valences produced a significant difference; (3) comparing trials on which the emotional face's gender matched versus mismatched the participant's own, both male and female participants showed an opposite-sex attraction effect in the gender-judgment task. In sum, emotional stimuli held in working memory exert a top-down influence on the recognition of face gender.

11.
Using a task-free viewing paradigm with positive, neutral, and negative emotional faces as materials, each face divided into three interest areas (eyes, nose, and mouth), this study examined Chinese participants' eye-movement patterns when processing own-race and Caucasian emotional faces. When viewing own-race (positive, neutral, and negative) emotional faces, Chinese participants fixated the eyes and nose significantly more often and for longer than the mouth. When viewing Caucasian (positive, neutral, and negative) emotional faces, they likewise fixated the eyes and nose significantly more often and for longer than the mouth, and additionally fixated the nose significantly more often and for longer than the eyes. The results indicate that when processing own-race expressions Chinese participants treated the nose and eyes as the main fixation regions, whereas when processing Caucasian expressions the nose was the single dominant region; fixation patterns for own- and other-race facial expressions therefore differ.

12.
This study investigated facial expression recognition in peripheral relative to central vision, and the factors accounting for the recognition advantage of some expressions in the visual periphery. Whole faces or only the eyes or the mouth regions were presented for 150 ms, either at fixation or extrafoveally (2.5° or 6°), followed by a backward mask and a probe word. Results indicated that (a) all the basic expressions were recognized above chance level, although performance in peripheral vision was less impaired for happy than for non-happy expressions, (b) the happy face advantage remained when only the mouth region was presented, and (c) the smiling mouth was the most visually salient and most distinctive facial feature of all expressions. This suggests that the saliency and the diagnostic value of the smile account for the advantage in happy face recognition in peripheral vision. Because of saliency, the smiling mouth accrues sensory gain and becomes resistant to visual degradation due to stimulus eccentricity, thus remaining accessible extrafoveally. Because of diagnostic value, the smile provides a distinctive single cue of facial happiness, thus bypassing integration of face parts and reducing susceptibility to breakdown of configural processing in peripheral vision.

13.
The present study investigated whether facial expressions modulate visual attention in 7-month-old infants. First, infants' looking duration to individually presented fearful, happy, and novel facial expressions was compared to looking duration to a control stimulus (scrambled face). The face with a novel expression was included to examine the hypothesis that the earlier findings of greater allocation of attention to fearful as compared to happy faces could be due to the novelty of fearful faces in infants' rearing environment. The infants looked longer at the fearful face than at the control stimulus, whereas no such difference was found between the other expressions and the control stimulus. Second, a gap/overlap paradigm was used to determine whether facial expressions affect the infants' ability to disengage their fixation from a centrally presented face and shift attention to a peripheral target. It was found that infants disengaged their fixation significantly less frequently from fearful faces than from control stimuli and happy faces. Novel facial expressions did not have a similar effect on attention disengagement. Thus, it seems that adult-like modulation of the disengagement of attention by threat-related stimuli can be observed early in life, and that the influence of emotionally salient (fearful) faces on visual attention is not simply attributable to the novelty of these expressions in infants' rearing environment.
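The gap/overlap measure described above is, at its core, a per-condition disengagement frequency: the proportion of trials on which gaze shifted from the central face to the peripheral target. A minimal sketch; the condition names and trial outcomes below are invented for illustration:

```python
# Hypothetical sketch: summarizing gap/overlap disengagement per condition.
# Trial outcomes (True = gaze shifted to the peripheral target) are invented.

def disengagement_rate(trials):
    """trials: list of booleans, True if the infant disengaged fixation
    from the central face on that trial. Returns the proportion."""
    return sum(trials) / len(trials)

conditions = {
    "fearful": [True, False, False, False, True],
    "happy":   [True, True, False, True, True],
    "control": [True, True, True, False, True],
}
rates = {c: disengagement_rate(t) for c, t in conditions.items()}
print(rates)
```

With these invented outcomes, the fearful condition shows the lowest disengagement rate, the direction of the effect the abstract reports.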

14.
Event-related brain potentials (ERPs) were recorded to assess the processing time course of ambiguous facial expressions with a smiling mouth but neutral, fearful, or angry eyes, in comparison with genuinely happy faces (a smile and happy eyes) and non-happy faces (neutral, fearful, or angry mouth and eyes). Participants judged whether the faces looked truly happy or not. Electroencephalographic recordings were made from 64 scalp electrodes to generate ERPs. The neural activation patterns showed early P200 sensitivity (differences between negative and positive or neutral expressions) and EPN sensitivity (differences between positive and neutral expressions) to emotional valence. In contrast, sensitivity to ambiguity (differences between genuine and ambiguous expressions) emerged only in later LPP components. Discrimination of emotional vs. neutral affect occurs between 180 and 430 ms from stimulus onset, whereas the detection and resolution of ambiguity takes place between 470 and 720 ms. In addition, while blended expressions involving a smile with angry eyes can be identified as not happy in the P200 (175–240 ms) component, smiles with fearful or neutral eyes produce the same ERP pattern as genuinely happy faces, thus revealing poor discrimination.

15.
The present study investigated whether dysphoric individuals have a difficulty in disengaging attention from negative stimuli and/or reduced attention to positive information. Sad, neutral and happy facial stimuli were presented in an attention-shifting task to 18 dysphoric and 18 control participants. Reaction times to neutral shapes (squares and diamonds) and the event-related potentials to emotional faces were recorded. Dysphoric individuals did not show impaired attentional disengagement from sad faces or facilitated disengagement from happy faces. Right occipital lateralisation of P100 was absent in dysphoric individuals, possibly indicating reduced attention-related sensory facilitation for faces. Frontal P200 was largest for sad faces among dysphoric individuals, whereas controls showed larger amplitude to both sad and happy as compared with neutral expressions, suggesting that dysphoric individuals deployed early attention to sad, but not happy, expressions. Importantly, the results were obtained controlling for the participants' trait anxiety. We conclude that at least under some circumstances the presence of depressive symptoms can modulate early, automatic stages of emotional processing.

16.
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection.

17.
Do threatening stimuli draw or hold visual attention in subclinical anxiety?
Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.

18.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether scan paths of observers vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor scanning behavior of healthy participants while looking at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. Especially in sad facial expressions, participants more frequently issued the initial fixation to the eyes compared with all other expressions. In happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that both the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at regions that are most characteristic for each emotion.
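The dominance ratio described above (fixation on eyes and mouth relative to the rest of the face) can be sketched as follows. The region names and fixation durations are invented for illustration, not data from the study:

```python
# Hypothetical sketch: dominance ratio from per-region fixation durations.
# Region names and durations (ms) are invented example values.

def dominance_ratio(fix_durations):
    """fix_durations: dict mapping face region -> total fixation time (ms).
    Returns time on eyes+mouth divided by time on the rest of the face."""
    core = fix_durations.get("eyes", 0) + fix_durations.get("mouth", 0)
    rest = sum(t for region, t in fix_durations.items()
               if region not in ("eyes", "mouth"))
    return core / rest if rest else float("inf")

# Invented fixation durations for one trial on a sad face.
sad_face = {"eyes": 900, "mouth": 300, "nose": 250,
            "forehead": 150, "cheeks": 200}
print(dominance_ratio(sad_face))
```

A ratio above 1 means the eyes and mouth together received more fixation time than the rest of the face; comparing the eyes and mouth components separately would capture the eye dominance reported for sad and angry expressions.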

19.
Face recognition involves both processing of information relating to features (e.g., eyes, nose, mouth, hair, i.e., featural processing), as well as the spatial relation between these features (configural processing). In a sequential matching task, participants had to decide whether two faces that differed in either featural or relational aspects were identical or different. In order to test for the microgenesis of face recognition (the development of processing onsets), presentation times of the backward-masked target face were varied (32, 42, 53, 63, 74, 84, or 94 msec.). To test for specific processing onsets and the processing of different facial areas, both featurally and relationally modified faces were manipulated in terms of changes to one facial area (eyes or nose or mouth), two, or three facial areas. For featural processing, an early onset for the eyes and mouth was at 32 msec. of presentation time, but a late onset for the nose was detected. For relationally differing faces, all onsets were delayed.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号