Similar Articles
 20 similar articles retrieved (search time: 31 ms)
1.
Recognition of emotional facial expressions is a central topic in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for six basic emotions: happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (5 for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before each picture was presented. The results showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

2.
It is well known that patients who have sustained frontal-lobe traumatic brain injury (TBI) are severely impaired on tests of emotion recognition. Indeed, these patients have significant difficulty recognizing facial expressions of emotion, and such deficits are often associated with decreased social functioning and poor quality of life. To date, no studies have examined the response patterns that underlie facial emotion recognition impairment in TBI and that might lend clarity to the interpretation of deficits. Therefore, the present study aimed to characterize response patterns in facial emotion recognition in 14 patients with frontal TBI compared with 22 matched control subjects, using a task that required participants to rate the intensity of each emotion (happiness, sadness, anger, disgust, surprise and fear) in a series of photographs of emotional and neutral faces. Results first confirmed the presence of facial emotion recognition impairment in TBI, and further revealed that patients displayed a liberal bias when rating facial expressions, leading them to assign intense ratings of incorrect emotional labels to sad, disgusted, surprised and fearful facial expressions. These findings are generally in line with prior studies that also report substantial facial affect recognition deficits in TBI patients, particularly for negative emotions.

3.
In a sample of 325 college students, we examined how context influences judgments of facial expressions of emotion, using a newly developed facial affect recognition task in which emotional faces are superimposed upon emotional and neutral contexts. This research used a larger sample size than previous studies, included more emotions, varied the intensity level of the expressed emotion to avoid potential ceiling effects from very easy recognition, did not explicitly direct attention to the context, and aimed to understand how recognition is influenced by non-facial information, both situationally relevant and situationally irrelevant. Both accuracy and RT varied as a function of context. For all facial expressions of emotion other than happiness, accuracy increased when the emotion of the face and context matched, and decreased when they mismatched. For all emotions, participants responded faster when the emotion of the face and image matched and slower when they mismatched. Results suggest that the judgment of the facial expression is itself influenced by the contextual information, rather than the face and context being judged independently and then combined. Additionally, the results have implications for developing models of facial affect recognition and indicate that factors other than the face can influence facial affect recognition judgments.

4.
The role of discriminability in the interaction between facial expression and facial identity recognition
Wang Yamin (汪亚珉), Fu Xiaolan (傅小兰). Acta Psychologica Sinica (《心理学报》), 2007, 39(2): 191-200
Previous studies have shown that identity commonly influences facial expression recognition, whereas an influence of expression on facial identity recognition is rare; only a few studies have found such an effect, and only for familiar faces. A recent study using highly similar photographs of models as stimuli also found an effect of expression on identity recognition, and proposed that expression had failed to affect identity recognition in earlier studies because identity was more discriminable than expression. The present study questions whether expression discriminability is necessarily lower than identity discriminability, arguing that earlier studies used static expression images, which removed the intensity-variation cues that expressions naturally carry and thereby lowered the relative discriminability of expression. We hypothesized that when intensity-variation cues are available, expression discriminability increases, and expression recognition may then no longer be affected by identity. The experimental results supported this hypothesis, demonstrating that the relative discriminability of expression and identity is a key factor determining the pattern of interaction between their recognition. Mutual influences between expression and identity recognition therefore do not by themselves resolve whether the two are processed independently. The results further suggest that expression and identity information can be partially dissociated under certain conditions.

5.
Three studies investigated the importance of movement for the recognition of subtle and intense expressions of emotion. In the first experiment, 36 facial emotion displays were duplicated in three conditions either upright or inverted in orientation. A dynamic condition addressed the perception of motion by using four still frames run together to encapsulate a moving sequence to show the expression emerging from neutral to the subtle emotion. The multi-static condition contained the same four stills presented in succession, but with a visual noise mask (200 ms) between each frame to disrupt the apparent motion, whilst in the single-static condition, only the last still image (subtle expression) was presented. Results showed a significant advantage for the dynamic condition, over the single- and multi-static conditions, suggesting that motion signals provide a more accurate and robust mental representation of the expression. A second experiment demonstrated that the advantage of movement was reduced with expressions of a higher intensity, and the results of the third experiment showed that the advantage for the dynamic condition for recognizing subtle emotions was due to the motion signal rather than additional static information contained in the sequence. It is concluded that motion signals associated with the emergence of facial expressions can be a useful cue in the recognition process, especially when the expressions are subtle.

6.
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, a mechanism allowing invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because the diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgusted and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that facial expression processing is viewpoint-invariant at the level of categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.

7.
Facial autonomic responses may contribute to emotional communication and reveal individual affective style. In this study, the authors examined how observed pupillary size modulates processing of facial expression, extending the finding that incidentally perceived pupils influence ratings of sadness but not those of happy, angry, or neutral facial expressions. Healthy subjects rated the valence and arousal of photographs depicting facial muscular expressions of sadness, surprise, fear, and disgust. Pupil sizes within the stimuli were experimentally manipulated. Subjects themselves were scored with an empathy questionnaire. Diminishing pupil size linearly enhanced intensity and valence judgments of sad expressions (but not fear, surprise, or disgust). At debriefing, subjects were unaware of differences in pupil size across stimuli. These observations complement an earlier study showing that pupil size directly influences processing of sadness but not other basic emotional facial expressions. Furthermore, across subjects, the degree to which pupil size influenced sadness processing correlated with individual differences in empathy score. Together, these data demonstrate a central role of sadness processing in empathetic emotion and highlight the salience of implicit autonomic signals in affective communication.

8.
This study examines whether the recognition of emotion from facial expressions is impaired by deterioration of spatial resolution, contrast resolution, and picture size. Eighty judges rated 65 stimuli under 11 conditions: undistorted, reduced spatial resolution (three steps), reduced contrast resolution (three steps), reduced picture size (three steps), and a very 'hard' condition combining the severest spatial and contrast reductions. Variation in picture quality was achieved by using a digital video recorder. Recognition rates and intensity ratings were not significantly affected by variations in contrast resolution or picture size. The only significant reduction in recognition rate and intensity ratings resulted from reduced spatial resolution, and only at the largest deterioration. Results are discussed with respect to the fundamental importance of facial expressions in interaction and communication, and with respect to applications such as tele-conferencing systems. (The studies reported here were conducted within the TELEMED project, which is funded by the European Community within the RACE program.)

    9.
Attentional capture refers to the phenomenon in which task-irrelevant stimuli involuntarily attract attention. Experiment 1 used a visual search task to examine the degree and mechanism of attentional capture by emotional faces irrelevant to the main task; Experiment 2 further explored the influence of temporal task demands on attentional capture by irrelevant emotional faces. Results showed that angry faces captured more attention than other emotional faces, and that this capture was influenced by holistic emotional processing. Temporal task demands affected attentional selection of the target stimuli, but the anger superiority effect was unaffected by temporal task demands, suggesting that it may reflect a relatively automatic process.

    10.
The goal of this research was to examine the effects of facial expressions on the speed of sex recognition. Prior research revealed that sex recognition of female angry faces was slower than that of male angry faces, and that female happy faces are recognized faster than male happy faces. We aimed to replicate and extend the previous research by using a different set of facial stimuli and a different methodological approach, and by examining the effects of some previously unexplored expressions (such as crying) on the speed of sex recognition. In the first experiment, we presented facial stimuli of men and women displaying anger, fear, happiness, sadness, and crying, plus three control conditions expressing no emotion. Results showed that sex recognition of angry female faces was significantly slower than in any other condition, while sad, crying, happy, frightened and neutral expressions did not affect the speed of sex recognition. In the second experiment, we presented angry, neutral and crying expressions in blocks, and again only sex recognition of female angry expressions was slower than all other conditions. The results are discussed in the context of the perceptual features of male and female facial configurations, evolutionary theory and social learning.

    11.
This study examined autistic children's ability to identify emotion categories from facial expressions of low (10%, 30%), medium (40%, 60%), and high (70%, 90%) intensity for anger and happiness, and the differences between groups. Using an expression-labeling paradigm, 3D synthesized facial expression stimuli of varying intensity were presented on a computer with E-prime software to 10 autistic children, 10 typically developing children, and 10 children with intellectual disabilities. Results showed that autistic children were impaired at recognizing low-intensity facial expressions, and their recognition accuracy across intensities was significantly lower than that of both the intellectually disabled and the typically developing children. Autistic children's recognition accuracy was positively correlated with expression intensity: the stronger the expression, the higher their accuracy. At low intensity, autistic children recognized happy expressions more accurately than angry ones, but at medium and high intensities a significant anger superiority effect emerged.

    12.
Facial expressions frequently involve multiple individual facial actions. How do facial actions combine to create emotionally meaningful expressions? Infants produce positive and negative facial expressions at a range of intensities. It may be that a given facial action can index the intensity of both positive (smiles) and negative (cry-face) expressions. Objective, automated measurements of facial action intensity were paired with continuous ratings of emotional valence to investigate this possibility. Degree of eye constriction (the Duchenne marker) and mouth opening were each uniquely associated with smile intensity and, independently, with cry-face intensity. In addition, degree of eye constriction and mouth opening were each unique predictors of emotion valence ratings. Eye constriction and mouth opening index the intensity of both positive and negative infant facial expressions, suggesting parsimony in the early communication of emotion.

    13.
Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD), and FTLD patients also show impairments in emotion processing; specifically, the perception of negative emotional facial expressions is affected. In general, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which may have been a confounding factor in previous studies. Ceiling effects are also often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities, using morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at recognizing anger. The patients also performed worse than controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

    14.
The effect of adaptation on facial expression recognition was investigated by measuring how identification performance for test stimuli falling along a particular expression continuum was affected after adapting to various prototype emotional faces or a control pattern. The results showed that for recognition of fear, happiness, and sadness, inhibition effects were observed on recognition of test expressions following 5 s of adaptation to the same emotion, suggesting different neural populations tuned to the encoding of fearful, happy, and sad expressions. Facilitation of recognition of test stimuli differing in emotion from the adapting stimulus was also sometimes observed. The nature of these adaptation effects was investigated by introducing a size transformation or a delay between adapting and test stimuli, and the effects were found to survive these changes. The results of a further experiment argued against a criterion effect being the major source by demonstrating the importance of adaptation duration in generating the effects. Overall, the present study demonstrates the utility of adaptation effects for revealing functional characteristics of facial expression processing.

    15.
A number of studies have reported cultural differences in intensity ratings of facial expressions of emotion. In previous research, however, observers made only a single intensity rating; thus, it was not clear whether observers rated the external display or made an inference about the subjective experience of the poser. In this study, we obtained these two intensity ratings separately from American and Japanese observers. Results indicated that Americans perceived greater intensity in the display, whereas Japanese inferred greater intensity of subjective experience. When examined within culture, Americans rated the display more intensely than the subjective experience, whereas there was no difference between the two ratings for the Japanese. We discuss these findings in relation to the concept of cultural decoding rules, and outline an agenda for future research that examines the exact nature of these rules, the relationship between decoding, display rules and self-construals, and the role of context in judging emotion.

    16.
It has been proposed that self-face representations are involved in interpreting facial emotions of others. We experimentally primed participants' self-face representations. In Study 1, we assessed eye tracking patterns and performance on a facial emotion discrimination task, and in Study 2, we assessed emotion ratings between self and nonself groups. Results show that experimental priming of self-face representations increases visual exploration of faces, facilitates the speed of facial expression processing, and increases the emotional distance between expressions. These findings suggest that the ability to interpret facial expressions of others is intimately associated with the representations we have of our own faces.

    17.
Algoe, Sara B., Buswell, Brenda N., & DeLamater, John D. Sex Roles, 2000, 42(3-4): 183-208
Participants' interpretations of facial expressions of emotion, and the judgments they made about the poser, were examined as a function of gender, job status, and facial expression. Two hypotheses regarding the interpretation of expression emphasize either facial expression alone or a combination of facial expression and social context. Gender and status of the target were expected to influence ratings of emotion and personality characteristics. In a 2 × 2 × 3 between-subjects design, 246 participants (90% non-Hispanic Whites) read a vignette of a workplace interaction manipulating the gender and job status of the target, and viewed a slide of the target displaying a facial expression of emotion. Measures of perceived emotion and ratings of personality characteristics produced main effects and interactions in support of the context-specific hypothesis: gender and job status significantly influenced interpretation.

    18.
In the leading model of face perception, facial identity and facial expressions of emotion are recognized by separate mechanisms. In this report, we provide evidence supporting the independence of these processes by documenting an individual with severely impaired recognition of facial identity yet normal recognition of facial expressions of emotion. NM, a 40-year-old prosopagnosic, showed severely impaired performance on five of six tests of facial identity recognition. In contrast, she performed in the normal range on four different tests of emotion recognition. Because the tests of identity recognition and emotion recognition assessed her abilities in a variety of ways, these results provide solid support for models in which identity recognition and emotion recognition are performed by separate processes.

    19.
This study investigates the discrimination accuracy of emotional stimuli in subjects with major depression compared with healthy controls, using photographs of facial expressions of varying emotional intensities. The sample included 88 unmedicated male and female subjects, aged 18-56 years, with major depressive disorder (n = 44) or no psychiatric illness (n = 44), who judged the emotion of 200 facial pictures displaying an expression between 10% (90% neutral) and 80% emotion. Stimuli were presented in 10% increments to generate a range of intensities, each for a 500-ms duration. Compared with healthy volunteers, depressed subjects showed very good recognition accuracy for sad faces but impaired recognition accuracy for other emotions (e.g., harsh and surprised expressions) of subtle emotional intensity. Recognition accuracy improved for both groups as a function of increased intensity for all emotions. Finally, as depressive symptoms increased, recognition accuracy increased for sad faces but decreased for surprised faces. Overall, depressed subjects showed an impaired ability to accurately identify subtle facial expressions, indicating that depressive symptoms influence the accuracy of emotion recognition.

    20.
Levy, Y., & Bentin, S. Perception, 2008, 37(6): 915-930
We investigated the interactions between matching identity and expressions of unfamiliar faces. In experiment 1, participants matched expressions in frontal and in oblique views, while we manipulated facial identity. In experiment 2, participants matched identity in frontal and in oblique views, while facial expressions were manipulated. Labeling of expressions was not required. Results showed mutual facilitation between matching facial identity and facial expressions, in accuracy as well as in reaction times. Thus, matching expressions was better and faster for same-identity images in oblique as well as in frontal views (experiment 1), and matching identity was better and faster for same-expression images in oblique as well as in frontal views (experiment 2). The discussion focuses on the implications of these results for the structural encoding of facial identity and facial expressions.
