Similar Articles
 20 similar articles found (search time: 320 ms)
1.
The present study investigated whether facial expressions of emotion presented outside conscious awareness would elicit evaluative responses, as assessed by affective priming. Participants were asked to evaluate pleasant and unpleasant target words that were preceded by masked or unmasked schematic (Experiment 1) or photographic faces (Experiments 1 and 2) with happy or angry expressions. They were required either to perform the target evaluation only, or to perform the target evaluation and to name the emotion expressed by the face prime. The prime-target interval was 300 ms in Experiment 1 and 80 ms in Experiment 2. Naming performance confirmed the effectiveness of the masking procedure. Affective priming was evident after unmasked primes in tasks that required naming of the facial expressions, for both schematic and photographic faces, and after unmasked primes in tasks that did not require naming, for photographic faces. No affective priming was found after masked primes. The present study thus failed to provide evidence for affective priming with masked face primes; however, it indicates that voluntary attention to the primes enhances affective priming.

2.
In two experiments, prime face stimuli with an emotional or a neutral expression were presented individually for 25 to 125 ms, either in foveal or parafoveal vision; following a mask, a probe face or a word label appeared for recognition. Accurate detection and sensitivity (A') were higher for angry, happy, and sad faces than for nonemotional (neutral) or novel (scheming) faces at short exposure times (25-75 ms), in both the foveal and the parafoveal field, and with both the probe face and the probe word. These results indicate that there is a low perceptual threshold for unambiguous emotional faces, which are especially likely to be detected both within and outside the focus of attention; and that this facilitated detection involves processing of the affective meaning of faces, not only discrimination of formal visual features.

3.
This study investigated facial expression recognition in peripheral relative to central vision, and the factors accounting for the recognition advantage of some expressions in the visual periphery. Whole faces or only the eyes or the mouth regions were presented for 150 ms, either at fixation or extrafoveally (2.5° or 6°), followed by a backward mask and a probe word. Results indicated that (a) all the basic expressions were recognized above chance level, although performance in peripheral vision was less impaired for happy than for non-happy expressions, (b) the happy face advantage remained when only the mouth region was presented, and (c) the smiling mouth was the most visually salient and most distinctive facial feature of all expressions. This suggests that the saliency and the diagnostic value of the smile account for the advantage in happy face recognition in peripheral vision. Because of saliency, the smiling mouth accrues sensory gain and becomes resistant to visual degradation due to stimulus eccentricity, thus remaining accessible extrafoveally. Because of diagnostic value, the smile provides a distinctive single cue of facial happiness, thus bypassing integration of face parts and reducing susceptibility to breakdown of configural processing in peripheral vision.

4.
Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded to in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.

5.
Three experiments examined the recognition speed advantage for happy faces. The results replicated earlier findings by showing that positive (happy) facial expressions were recognized faster than negative (disgusted or sad) facial expressions (Experiments 1 and 2). In addition, the results showed that this effect was evident even when low-level physical differences between positive and negative faces were controlled by using schematic faces (Experiment 2), and that the effect was not attributable to an artifact arising from facilitated recognition of a single feature in the happy faces (up-turned mouth line, Experiment 3). Together, these results suggest that the happy face advantage may reflect a higher-level asymmetry in the recognition and categorization of emotionally positive and negative signals.

6.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

8.
The happy face recognition advantage refers to participants recognizing happy faces more accurately and more quickly than faces expressing other emotions. Numerous studies using schematic drawings and face photographs as stimuli have found this advantage in both emotion categorization tasks and visual search tasks. Three theoretical accounts have been proposed: the diagnostic value hypothesis, the emotional uniqueness hypothesis, and the frequency-of-occurrence hypothesis. In recent years, researchers using ERP techniques have located this advantage at the response selection stage, but there is no consensus on the stage at which it originates. Future work could use fMRI to further investigate its cognitive and neural mechanisms.

9.
Priming of affective word evaluation by pictures of faces showing positive and negative emotional expressions was investigated in two experiments that used a dual-task procedure in which participants were asked to respond to the prime or to the target on different trials. The experiments varied, between subjects, the prime task assignment and the prime-target interval (stimulus onset asynchrony, SOA). Significant congruency effects (that is, faster word evaluation when prime and target had the same valence than when they were of opposite valence) were observed in both experiments. When the prime task oriented the subjects to an affectively irrelevant property of the faces (their gender), priming was observed at an SOA of 300 ms but not at an SOA of 1000 ms (Experiment 1). However, when the prime task assignment explicitly oriented the subjects to the valence of the face, priming was observed at both SOA durations (Experiment 2). These results show, first, that affective priming by pictures of facial emotion can be obtained even when the subject has an explicit goal to process a non-affective property of the prime. Second, sensitivity of the priming effect to SOA duration seems to depend on whether it is mediated by intentional or unintentional activation of the valence of the face prime.

10.
Earlier research has indicated that some characteristics of facial expressions may be automatically processed. This study investigated automaticity as evidenced by involuntary interference in a word evaluation task. Compound stimuli, consisting of words superimposed on pictures of affective faces, were presented to subjects who were given the task of evaluating the affective valence of the words while disregarding the faces. Results of three experiments showed that word evaluation was influenced by the concurrently shown affective faces. Overall, negative words were found to require longer latencies, indicating that more processing resources are invested in negative than in positive stimuli. This speed advantage for positive words was modified by the faces. Negative words were facilitated, relative to positive ones, when shown with a negative expression (e.g. a sad face). Correspondingly, negative words were inhibited, relative to positive ones, when shown with a positive expression (e.g. a happy face). The results are consistent with automatic, involuntary semantic processing of affective facial expressions.

11.
Theoretical models of attention for affective information have assigned a special status to the cognitive processing of emotional facial expressions. One specific claim in this regard is that emotional faces automatically attract visual attention. In three experiments, the authors investigated attentional cueing by angry, happy, and neutral facial expressions that were presented under conditions of limited awareness. In these experiments, facial expressions were presented in a masked (14 ms or 34 ms, masked by a neutral face) or unmasked fashion (34 ms or 100 ms). Compared with trials containing neutral cues, delayed responding was found on trials with emotional cues in the unmasked, 100-ms condition, suggesting stronger allocation of cognitive resources to emotional faces. However, in both masked and unmasked conditions, the hypothesized cueing of visual attention to the location of emotional facial expressions was not found. On the contrary, attentional cueing by emotional faces was weaker than cueing by neutral faces in the unmasked, 100-ms condition. These data suggest that briefly presented emotional faces influence cognitive processing but do not automatically capture visual attention.

12.
We investigated the source of the visual search advantage of some emotional facial expressions. An emotional face target (happy, surprised, disgusted, fearful, angry, or sad) was presented in an array of neutral faces. Detection was faster for happy targets, with angry and, especially, sad targets being detected more poorly. Physical image properties (e.g., luminance) were ruled out as a potential source of these differences in visual search. In contrast, the search advantage was partly due to facilitated processing of affective content, as shown by an emotion identification task. Happy expressions were identified faster than the other expressions and were less likely to be confused with neutral faces, whereas misjudgements occurred more often for angry and sad expressions. Nevertheless, the distinctiveness of some local features (e.g., teeth) that are consistently associated with emotional expressions plays the strongest role in the search advantage pattern. When the contribution of these features to visual search was factored out statistically, the advantage disappeared.

13.
Emotion influences memory in many ways. For example, when a mood-dependent processing shift is operative, happy moods promote global processing and sad moods direct attention to local features of complex visual stimuli. We hypothesized that an emotional context associated with to-be-learned facial stimuli could preferentially promote global or local processing. At learning, faces with neutral expressions were paired with a narrative providing either a happy or a sad context. At test, faces were presented in an upright or inverted orientation, emphasizing configural or analytical processing, respectively. A recognition advantage was found for upright faces learned in happy contexts relative to those in sad contexts, whereas recognition was better for inverted faces learned in sad contexts than for those in happy contexts. We thus infer that a positive emotional context prompted more effective storage of holistic, configural, or global facial information, whereas a negative emotional context prompted relatively more effective storage of local or feature-based facial information.

14.
A new model of mental representation is applied to social cognition: the attractor field model. Using the model, the authors predicted and found a perceptual advantage but a memory disadvantage for faces displaying evaluatively congruent expressions. In Experiment 1, participants completed a same/different perceptual discrimination task involving morphed pairs of angry-to-happy Black and White faces. Pairs of faces displaying evaluatively incongruent expressions (i.e., happy Black, angry White) were more likely to be labeled as similar and were less likely to be accurately discriminated from one another than faces displaying evaluatively congruent expressions (i.e., angry Black, happy White). Experiment 2 replicated this finding and showed that objective discriminability of stimuli moderated the impact of attractor field effects on perceptual discrimination accuracy. In Experiment 3, participants completed a recognition task for angry and happy Black and White faces. Consistent with the attractor field model, memory accuracy was better for faces displaying evaluatively incongruent expressions. Theoretical and practical implications of these findings are discussed.

15.
Subliminal affective priming effects of faces differing in valence: evidence from ERPs   (Cited by: 2; self-citations: 0; citations by others: 2)
吕勇, 张伟娜, 沈德立. 《心理学报》 (Acta Psychologica Sinica), 2010, 42(9): 929-938
Event-related potentials (ERPs) were used to study subliminal affective priming. The factor was the valence (pleasantness) of the subliminally presented emotional prime face, with high and low levels. Participants' task was to make emotion judgements of neutral target faces. Results showed that participants' emotion judgements of the targets shifted toward the valence of the prime; that N1 and P2 amplitudes were significantly larger when low-valence rather than high-valence faces served as primes; and that the subliminal priming effect of faces differing in valence arose because the prime influenced perceptual processing of the target.

16.
An affective priming paradigm with pictures of environmental scenes and facial expressions as primes and targets, respectively, was employed to investigate the role of natural (e.g., vegetation) and built elements (e.g., buildings) in eliciting rapid affective responses. In Experiment 1, images of environmental scenes were digitally manipulated to make continua of priming pictures with a gradual increase of natural elements (and a decrease of built elements). The primes were followed by presentations of facial expressions of happiness and disgust as to-be-recognized target stimuli. The recognition times of happy faces decreased and the recognition times of disgusted faces increased as the quantity of natural/built material present in the primes increased/decreased. The physical changes also influenced the evaluated restorativeness and affective valence of the primes. In Experiment 2, the primes used in Experiment 1 were manipulated in such a way that they were void of any recognizable natural or built elements but contained either similar colours or similar shapes as the primes in Experiment 1. This time the results showed no effect of priming. These results were interpreted as supporting the view that the priming effect of environmental pictures is due to the primes representing environmental scenes and not due to the presence of certain low-level colour or shape information in the primes. In all, the present results provide evidence that perception of environmental scenes elicits automatic affective responses and influences recognition of facial expressions.

17.
We examined dysfunctional memory processing of facial expressions in relation to alexithymia. Individuals with high and low alexithymia, as measured by the Toronto Alexithymia Scale (TAS-20), participated in a visual search task (Experiment 1A) and a change-detection task (Experiments 1B and 2), to assess differences in their visual short-term memory (VSTM). In the visual search task, the participants were asked to judge whether all facial expressions (angry and happy faces) in the search display were the same or different. In the change-detection task, they had to decide whether all facial expressions changed between two successive displays. We found individual differences only in the change-detection task. Individuals with high alexithymia showed lower sensitivity for the happy faces compared to the angry faces, while individuals with low alexithymia showed sufficient recognition of both facial expressions. Experiment 2 examined whether individual differences arose during the early storage or the later retrieval stage of the VSTM process, using a single-probe paradigm. We found no effect of the single probe, indicating that individual differences occurred at the storage stage. The present results provide new evidence that individuals with high alexithymia show specific impairment in VSTM processes (especially the storage stage) related to happy but not to angry faces.

18.
To examine how children with autism spectrum disorder (ASD) detect and process different emotional faces, two eye-tracking tasks were designed in which 14 ASD children aged 7-10 and 20 typically developing children of the same age viewed pictures. Experiment 1 embedded emotional faces in landscape pictures to create semantically inconsistent stimuli; Experiment 2 used meaningless scrambled background pictures containing emotional faces. Results showed that: (1) ASD children took significantly longer than typically developing children to detect emotional faces of all types; (2) like typically developing children, ASD children showed an attentional bias toward fearful faces; (3) in Experiment 2, ASD children allocated attention to the internal feature regions of the different emotional faces differently from typically developing children: typically developing children attended to the regions most informative of each emotion, such as fearful eyes and happy mouths, whereas ASD children allocated attention similarly across the feature regions of the three emotion types; (4) under both experimental conditions, the overall detection and processing patterns of ASD children for different emotional faces resembled those of typically developing children.

19.
We used the remember-know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

20.
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm in which participants reproduced the duration of a facial emotion stimulus using an oval-shaped stimulus, or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend of under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to a facial emotion produces the relativity of time perception.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号