Similar Documents
20 similar documents found (search time: 31 ms)
1.
Previous research on affect labeling has found that labeling the emotion of an emotional face (external affect labeling) can reduce participants' emotional responses, but the effect of labeling one's own emotion (internal affect labeling) remains controversial. Using ERPs, with happy and sad emotional faces as materials and 18 undergraduates as participants, this study compared internal affect labeling, external affect labeling, and gender labeling. External affect labeling elicited smaller LPP amplitudes than gender labeling, and this difference was similar across five electrode sites (Pz, CPz, Cz, FCz, Fz); internal affect labeling likewise elicited smaller LPP amplitudes than gender labeling, but only at three sites (Pz, CPz, Cz). This indicates that both internal and external affect labeling can dampen emotion once it is named. The results support the disruption (blocking) account of affect labeling.

2.
We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions on a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotional word (happy/sad/angry). Proactive control effects were measured as the reduction in Stroop interference (the difference between incongruent and congruent trials) as a function of previous-trial emotion and previous-trial congruence. Reactive control effects were measured as the reduction in Stroop interference as a function of current-trial emotion and previous-trial congruence. Negative emotions on the previous trial exerted a greater influence on proactive control than the positive emotion did: sad faces on the previous trial produced a greater reduction in Stroop interference for happy faces on the current trial. Current-trial angry faces, however, showed stronger adaptation effects than happy faces. Thus, both proactive and reactive control mechanisms depend on the emotional valence of task-relevant stimuli.
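As a concrete illustration of the measures described above, the short sketch below computes Stroop interference and a sequential (conflict-adaptation) index from hypothetical condition means; the RT values and data layout are assumptions for illustration, not the study's data or analysis pipeline.

```python
# Hedged sketch: Stroop interference and a conflict-adaptation index
# from hypothetical per-condition mean RTs (milliseconds).
mean_rt = {
    # (previous-trial congruence, current-trial congruence): mean RT
    ("congruent", "incongruent"): 720.0,
    ("congruent", "congruent"): 640.0,
    ("incongruent", "incongruent"): 690.0,
    ("incongruent", "congruent"): 650.0,
}

def interference(prev: str) -> float:
    """Stroop interference = incongruent RT - congruent RT,
    conditioned on the previous trial's congruence."""
    return mean_rt[(prev, "incongruent")] - mean_rt[(prev, "congruent")]

# Conflict adaptation: interference shrinks after an incongruent trial.
adaptation = interference("congruent") - interference("incongruent")
print(interference("congruent"), interference("incongruent"), adaptation)
# 80.0 40.0 40.0
```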

3.
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.

4.
Older adults perceive less intense negative emotion in facial expressions than their younger counterparts do. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults' perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence, and measures of mood were administered. Both younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. Whereas younger participants showed no influence of sad mood on their happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood-induction procedure is used to alter mood in younger and older participants.

5.
The current study investigated 6-, 9-, and 12-month-old infants' ability to categorically perceive facial emotional expressions drawn from two continua: happy–sad and happy–angry. In a between-subjects design, infants were tested on their ability to discriminate face pairs that were between-category (across the category boundary) or within-category (within the same emotion category). Results suggest that 9- and 12-month-olds can discriminate between-category but not within-category pairs on the happy–angry continuum. Infants could not discriminate cross-boundary facial expressions on the happy–sad continuum at any age. We suggest a functional account: categorical perception may develop in conjunction with an emotion's relevance to the infant.

6.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expressions of emotion. Participants diagnosed with major depression or social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression required for participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and control participants to correctly identify happy expressions, and needed less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify angry expressions than did the depressed and control participants, and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed.

7.
A signal-detection task was used to assess sex differences in emotional face recognition under conditions of uncertainty. Computer images of Ekman faces showing sad, angry, happy, and fearful emotional states were presented for 50 ms to thirty-six men and thirty-seven women. All participants monitored for presentation of either happy, angry, or sad emotional expressions in three separate blocks. Happy faces were the most easily discriminated. Sad and angry expressions were most often mistaken for each other. Analyses of d' values, hit rates, and reaction times all yielded similar results, with no sex differences for any of the measures.

8.
Previous studies on gender differences in facial imitation and verbally reported emotional contagion have investigated emotional responses to pictures of facial expressions at supraliminal exposure times. The aim of the present study was to investigate how gender differences are related to different exposure times, representing information processing levels from subliminal (spontaneous) to supraliminal (emotionally regulated). Further, the study aimed at exploring correlations between verbally reported emotional contagion and facial responses for men and women. Masked pictures of angry, happy and sad facial expressions were presented to 102 participants (51 men) at exposure times from subliminal (23 ms) to clearly supraliminal (2500 ms). Myoelectric activity (EMG) from the corrugator and the zygomaticus was measured and the participants reported their hedonic tone (verbally reported emotional contagion) after stimulus exposures. The results showed an effect of exposure time on gender differences in facial responses as well as in verbally reported emotional contagion. Women amplified imitative responses towards happy vs. angry faces and verbally reported emotional contagion with prolonged exposure times, whereas men did not. No gender differences were detected at the subliminal or borderliminal exposure times, but at the supraliminal exposure gender differences were found in imitation as well as in verbally reported emotional contagion. Women showed correspondence between their facial responses and their verbally reported emotional contagion to a greater extent than men. The results were interpreted in terms of gender differences in emotion regulation, rather than as differences in biologically prepared emotional reactivity.

9.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were mainly based on the facial expressions, but were affected by manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results conform with the proposal that perception of facial expressions of emotions can be affected by the expressive qualities of hand movements.

10.
Deficient inhibition of return for emotional faces in depressed individuals   (Total citations: 2; self-citations: 0; citations by others: 2)
Dai Qin & Feng Zhengzhi. Acta Psychologica Sinica (心理学报), 2009, 41(12): 1175-1188
This study explored the effect of depression on inhibition of return (IOR) for emotional faces. Using the Beck Depression Inventory, the Self-Rating Depression Scale, the CCMD-3, and the Hamilton Depression Rating Scale, 17 participants each were screened into a normal control group, a depression-recovered group, and a depressed-patient group, all of whom completed behavioral and event-related potential (ERP) experiments using a cue-target task with photographs of real emotional faces. In the cue-target paradigm, the target appeared after the cue disappeared, and participants responded to the target's location. Behaviorally, at a stimulus onset asynchrony (SOA) of 14 ms, the control group showed an IOR effect for neutral faces, the recovered group showed IOR for all faces, and the patient group showed IOR for angry, sad, and neutral faces. At an SOA of 250 ms, all three groups showed deficient IOR for sad faces, most prominently the patient group, and the recovered group also showed deficient IOR for happy faces. At an SOA of 750 ms, the control group showed IOR for sad faces; the recovered group showed deficient IOR for happy and sad faces; and the patient group showed deficient IOR for sad faces but an IOR effect for angry faces. In the 750 ms SOA condition, the ERP waveforms showed the following. In the control group, P3 amplitude to happy-face cues was larger than in the other groups; P1 amplitude to invalidly cued happy faces was smaller than to other faces; P1 amplitude to validly cued sad faces was smaller than to happy faces; P3 amplitude to validly cued happy faces was larger than in the patient group; and P3 amplitude to invalidly cued sad faces was larger than in the other groups. In the recovered group, P3 amplitude to sad-face cues was larger than to other faces; P3 amplitude to validly cued happy faces was larger than in the patient group; and P3 amplitude to invalidly cued sad faces was smaller than in the control group. In the patient group, P1 amplitude to sad-face cues was larger than in the other groups and P3 amplitude to sad-face cues was larger than to other faces; P3 amplitude to invalidly cued sad faces was smaller than in the control group; and P3 amplitude to validly cued happy faces was smaller than in the other groups. These findings suggest that depressed patients have deficient IOR for negative stimuli. This loss of inhibitory capacity makes it hard for depressed individuals to resist interference from negative events, leaving them vulnerable to negative mood states; they may therefore experience more depressive emotion, which in turn sustains and deepens the depression. Recovered individuals, by contrast, showed deficient IOR for both happy and sad faces, which allows them to register positive and negative stimuli simultaneously and thereby maintain a particular cognitive and emotional balance.

11.
Using signal detection methods, we investigated possible effects of emotion type (happy, angry), gender of the stimulus face, and gender of the participant on the detection of, and response bias toward, emotion in briefly presented faces. Fifty-seven participants (28 men, 29 women) viewed 90 briefly presented faces (30 happy, 30 angry, and 30 neutral, each set containing 15 male and 15 female faces), answering yes if the face was perceived as emotional and no otherwise. Sensitivity (d' = z[hit rate] minus z[false-alarm rate]) and response bias (β, the likelihood ratio of "signal plus noise" to "noise") were measured for each face combination at each presentation time (6.25, 12.50, 18.75, 25.00, and 31.25 ms). The d' values were higher for happy than for angry faces and higher for angry-male than for angry-female faces, with no effects of participant gender. The results also suggest a greater tendency to judge happy-female faces as emotional, as shown by lower β values for these faces than for the other emotion-gender combinations. This happy-female response bias offers at least a partial explanation for happy-superiority effects in studies where performance is measured only as percent correct and, more generally, suggests that women are expected to be happy.
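The abstract above gives both signal-detection indices in closed form; the minimal Python sketch below shows how d' and β would be computed from a hit rate and a false-alarm rate. The example rates are made up for illustration and this is not the study's analysis code.

```python
# Hedged sketch: sensitivity d' and response bias (beta) from
# signal detection theory, as defined in the abstract above.
import math
from statistics import NormalDist

def dprime_beta(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    z = NormalDist().inv_cdf              # inverse standard-normal CDF
    zh, zfa = z(hit_rate), z(fa_rate)
    d_prime = zh - zfa                    # d' = z(hit rate) - z(false-alarm rate)
    beta = math.exp((zfa**2 - zh**2) / 2) # likelihood ratio at the criterion
    return d_prime, beta

# Illustrative rates only (not data from the study):
print(dprime_beta(0.80, 0.20))  # higher d' -> better discrimination
print(dprime_beta(0.60, 0.10))  # beta > 1 -> conservative "yes" criterion
```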

12.
Memories of objects are biased toward what is typical of the category to which they belong. Prior research on memory for emotional facial expressions has demonstrated a bias toward an emotional expression prototype (e.g., slightly happy faces are remembered as happier). We investigate an alternative source of bias in memory for emotional expressions: the central tendency bias, which skews reconstruction of a memory trace toward the center of the distribution for a particular attribute. This bias has been attributed to a Bayesian combination of an imprecise memory for a particular object with prior information about its category. Until now, studies examining the central tendency bias have focused on simple stimuli; we extend this work to socially relevant, complex, emotional facial expressions. We morphed facial expressions on a continuum from sad to happy. Different ranges of emotion were used in four experiments in which participants viewed individual expressions and, after a variable delay, reproduced each face by adjusting a morph to match it. Estimates were biased toward the center of the presented stimulus range, and the bias increased at longer memory delays, consistent with the Bayesian prediction that as the memory trace loses precision, category knowledge is given more weight. The central tendency effect persisted within and across emotion categories (sad, neutral, and happy). This article expands the scope of work on inductive category effects to memory for complex, emotional stimuli.
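A compact way to see the Bayesian account the abstract appeals to: if the memory trace is a noisy observation and the category supplies a prior, the posterior mean is a precision-weighted average, so estimates shrink toward the category center as the trace loses precision (e.g., at longer delays). The sketch below uses a hypothetical 0-100 morph axis and made-up numbers; it illustrates the general model, not the authors' implementation.

```python
# Hedged illustration of the Bayesian central-tendency account:
# posterior mean = precision-weighted average of the memory trace
# and the category prior (all values on an arbitrary 0-100 morph axis).
def posterior_estimate(trace: float, trace_sd: float,
                       prior_mean: float, prior_sd: float) -> float:
    w_trace = 1 / trace_sd**2   # precision of the memory trace
    w_prior = 1 / prior_sd**2   # precision of the category prior
    return (w_trace * trace + w_prior * prior_mean) / (w_trace + w_prior)

# As trace precision drops (longer delay), the estimate shrinks
# toward the center of the presented range (prior_mean = 50):
print(posterior_estimate(80, trace_sd=5,  prior_mean=50, prior_sd=15))  # ~77
print(posterior_estimate(80, trace_sd=15, prior_mean=50, prior_sd=15))  # 65
```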

13.
We investigated the source of the visual search advantage of some emotional facial expressions. An emotional face target (happy, surprised, disgusted, fearful, angry, or sad) was presented in an array of neutral faces. A faster detection was found for happy targets, with angry and, especially, sad targets being detected more poorly. Physical image properties (e.g., luminance) were ruled out as a potential source of these differences in visual search. In contrast, the search advantage is partly due to the facilitated processing of affective content, as shown by an emotion identification task. Happy expressions were identified faster than the other expressions and were less likely to be confounded with neutral faces, whereas misjudgements occurred more often for angry and sad expressions. Nevertheless, the distinctiveness of some local features (e.g., teeth) that are consistently associated with emotional expressions plays the strongest role in the search advantage pattern. When the contribution of these features to visual search was factored out statistically, the advantage disappeared.

14.
The purpose of the present research was to examine whether anxiety is linked to a memory-based attentional bias, in which attention to threat is thought to depend on implicit learning. Memory-based attentional biases were defined and demonstrated in two experiments. A total of 168 university students were shown pairs of faces that varied in their emotional content (angry, neutral, and happy), with each type of emotion consistently preceded by a particular neutral cue face appearing in the same position. Eye movements were measured during the cue faces and during the emotional faces. The results of both experiments indicated that anxiety was connected with a tendency to avert one's gaze from the positions of angry faces toward the positions of happy faces before these were shown on the screen. This, in turn, produced reduced perception of angry relative to happy faces. In Experiment 2, participants were moreover unaware of having a memory-based attentional bias.

15.
This study examined how the awareness of emotion-related time distortions modifies the effect of emotion on time perception. Before performing a temporal bisection task with stimulus durations presented in the form of neutral or emotional facial expressions (angry, disgusted and ashamed faces), some of the participants read a scientific text providing either correct or incorrect information on the emotion–time relationship. Other participants did not receive any information. The results showed that the declarative knowledge allowed the participants to regulate (decrease) the intensity of emotional effects on the perception of time, but did not trigger temporal effects when the emotional stimuli did not automatically induce emotional reactions that distorted time.

16.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

17.
Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1, participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics.

18.
Are emotions perceived automatically? Two psychological refractory period experiments were conducted to ascertain whether emotion perception requires central attentional resources. Task 1 required an auditory discrimination (tone vs. noise), whereas Task 2 required a discrimination between happy and angry faces. The difficulty of Task 2 was manipulated by varying the degree of emotional expression. The stimulus onset asynchrony (SOA) between Task 1 and Task 2 was also varied. Experiment 1 revealed additive effects of SOA and Task 2 emotion-perception difficulty. Experiment 2 replicated the additive relationship with a stronger manipulation of emotion-perception difficulty. According to locus-of-slack logic, our participants did not process emotional expressions while central resources were devoted to Task 1. We conclude that emotion perception is not fully automatic.

19.
The study aimed to determine if the memory bias for negative faces previously demonstrated in depression and dysphoria generalises from long- to short-term memory. A total of 29 dysphoric (DP) and 22 non-dysphoric (ND) participants were presented with a series of faces and asked to identify the emotion portrayed (happiness, sadness, anger, or neutral affect). Following a delay, four faces were presented (the original plus three distractors) and participants were asked to identify the target face. Half of the trials assessed memory for facial emotion, and the remaining trials examined memory for facial identity. At encoding, no group differences were apparent. At memory testing, relative to ND participants, DP participants exhibited impaired memory for all types of facial emotion and for facial identity when the faces featured happiness, anger, or neutral affect, but not sadness. DP participants exhibited impaired identity memory for happy faces relative to angry, sad, and neutral, whereas ND participants exhibited enhanced facial identity memory when faces were angry. In general, memory for faces was not related to performance at encoding. However, in DP participants only, memory for sad faces was related to sadness recognition at encoding. The results suggest that the negative memory bias for faces in dysphoria does not generalise from long- to short-term memory.

20.
Facial expressions are critical for effective social communication, and as such may be processed by the visual system even when it might be advantageous to ignore them. Previous research has shown that categorising emotional words was impaired when faces of a conflicting valence were simultaneously presented. In the present study, we examined whether emotional word categorisation would also be impaired when faces of the same (negative) valence but different emotional category (either angry, sad or fearful) were simultaneously presented. Behavioural results provided evidence for involuntary processing of basic emotional facial expression category, with slower word categorisation when the face and word categories were incongruent (e.g., angry word and sad face) than congruent (e.g., angry word and angry face). Event-related potentials (ERPs) time-locked to the presentation of the word–face pairs also revealed that emotional category congruency effects were evident from approximately 170 ms after stimulus onset.
