Similar Articles
20 similar articles retrieved (search time: 234 ms).
1.
ABSTRACT

The present study describes the development and validation of a facial expression database comprising five horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models, informed by both dimensional and categorical models of emotion: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493–502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. The database is available online at https://www.dh.aist.go.jp/database/face2017/.
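The Affect Grid used in the validation task above records a single mark on a 9 × 9 grid, with pleasure–displeasure on the horizontal axis and arousal–sleepiness on the vertical axis. A minimal sketch of converting a marked cell to signed valence/arousal scores; the coordinate convention (row 0 at the top-left) is an illustrative assumption, not taken from the paper:

```python
def affect_grid_scores(row, col, size=9):
    """Convert a single Affect Grid mark to (valence, arousal).

    Assumed convention: row 0 = top row (high arousal), col 0 = left
    column (unpleasant). The centre cell maps to (0, 0); the extreme
    cells map to +/-(size - 1) / 2 on each axis.
    """
    centre = (size - 1) / 2
    valence = col - centre      # rightward marks = more pleasant
    arousal = centre - row      # upward marks = more aroused
    return valence, arousal

# Centre of the grid -> neutral on both dimensions.
neutral = affect_grid_scores(4, 4)      # (0.0, 0.0)
# Top-right corner -> maximally pleasant and aroused.
excited = affect_grid_scores(0, 8)      # (4.0, 4.0)
```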

2.
王亚鹏, 董奇. 《心理科学》 (Psychological Science), 2006, 29(6): 1512-1514
This paper reviews the brain mechanisms of emotion processing and the current state of research from three perspectives: emotional valence and its functional neuroimaging, facial expression recognition and its functional neuroimaging, and induced emotion and its functional neuroimaging. Existing findings show substantial overlap in the cortical regions that process emotions of different valence. Research on facial expression recognition indicates that distinct neural circuits mediate responses to different facial expressions, and studies of induced emotion show that the anterior cingulate cortex plays a crucial role in representing experimentally induced emotions. The paper concludes by noting several open problems in emotion research and the significance of conducting research on the brain mechanisms of emotion in China.

3.
This study examined autistic children's existing ability to identify angry and happy facial expressions at low (10%, 30%), medium (40%, 60%) and high (70%, 90%) intensities, and the differences among groups. Using an expression-labelling paradigm, 3D synthesised facial expression stimuli of varying intensity were presented on a computer via E-prime to 10 autistic children, 10 typically developing children and 10 children with intellectual disability. Results showed that autistic children were impaired at recognising low-intensity expressions, with recognition accuracy significantly below that of the other two groups. Their accuracy was positively correlated with expression intensity: the stronger the expression, the higher their recognition accuracy. At low intensity, autistic children recognised happy expressions more accurately than angry ones; at medium and high intensities, however, a significant anger-superiority effect emerged.

4.
There is substantial evidence that deafness is associated with delays in emotion understanding, which have been attributed to delays in language acquisition and reduced opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two aspects: the role of motion and the role of intensity in deaf children's emotion recognition. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf children and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children's emotion recognition was better in the dynamic than in the static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing levels of intensity. With the exception of disgust, no differences for individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

5.
Some theories of emotion emphasise a close relationship between interoception and subjective experiences of emotion. In this study, we used facial expressions to examine whether interoceptive sensibility modulated emotional experience in a social context. Interoceptive sensibility was measured using the heartbeat detection task. To estimate individual emotional sensitivity, we created morphed photos ranging between a neutral and an emotional facial expression (i.e., anger, sadness, disgust and happiness). Recognition rates of particular emotions from these photos were calculated and treated as emotional sensitivity thresholds. Our results indicate that participants with accurate interoceptive awareness are sensitive to the emotions of others, especially expressions of sadness and happiness. We also found that false responses to sad faces were closely related to an individual's degree of social anxiety. These results suggest that interoceptive awareness modulates the intensity of the subjective experience of emotion and affects individual traits related to emotion processing.
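Several of the studies in this list generate graded expression intensities by morphing between a neutral and a full-expression photograph. A minimal sketch of that idea as a pixel-wise linear blend over pre-aligned grayscale images; real morphing pipelines also warp facial geometry, and the function name and dummy data here are illustrative assumptions:

```python
def morph_faces(neutral, emotional, alpha):
    """Pixel-wise linear blend of two pre-aligned grayscale images,
    given as flat lists of 0-255 intensities.

    alpha = 0.0 returns the neutral face, alpha = 1.0 the full
    expression; intermediate values give graded intensities.
    """
    if len(neutral) != len(emotional):
        raise ValueError("images must have the same number of pixels")
    return [round((1.0 - alpha) * n + alpha * e)
            for n, e in zip(neutral, emotional)]

# A 30%-intensity frame from two dummy 2x2 "faces".
neutral_img = [0, 0, 0, 0]
emotion_img = [200, 200, 200, 200]
frame = morph_faces(neutral_img, emotion_img, 0.3)  # [60, 60, 60, 60]
```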

6.
Recently, cross-cultural facial-expression recognition has become a research hotspot, and a standardised facial-expression material system can significantly help researchers compare and demonstrate the results of other studies. We developed a facial-expression database of Chinese Han, Hui and Tibetan ethnicities. In this study, six basic human facial expressions (and one neutral expression) were collected from 200 Han, 220 Hui and 210 Tibetan participants who lived in these regions. Four experts on each ethnicity evaluated the facial-expression images according to the expressions, and only those achieving inter-rater agreement were retained. Subsequently, 240 raters evaluated these images according to the seven emotions and rated the intensity of the expressions. Consequently, 2980 images were included in the database, including 930 images of Han individuals, 962 images of Hui individuals and 1088 images of Tibetan individuals. In conclusion, the facial-expression database of Chinese Han, Hui and Tibetan people was representative and reliable with a recognition rate of over 60%, making it well-suited for cross-cultural research on emotions.

7.
The goal of the present paper was to demonstrate the influence of general evaluations and stereotype associations on emotion recognition. Earlier research has shown that evaluative connotations between social category members and emotional expressions predict whether recognition of positive or negative emotional expressions will be facilitated (e.g. Hugenberg, 2005). In the current paper we tested the hypothesis that stereotype associations influence emotion recognition processes, especially when differences in the valence of emotional expressions do not come into play. In line with this notion, when participants in the present two studies were asked to classify positive versus negative emotional expressions (i.e. happiness versus anger, or happiness versus sadness), valence congruency effects were found. Importantly, however, in a comparative context without differences in valence, in which participants were asked to classify two distinct negative emotions (i.e. anger versus sadness), we found that recognition facilitation occurred for stereotypically associated discrete emotional expressions. Together, the current results indicate that a distinction between general evaluative and cognitive routes can be made in emotion recognition processes.

8.
An immense body of research demonstrates that emotional facial expressions can be processed unconsciously. However, it has been assumed that such processing takes place solely on a global valence-based level, allowing individuals to disentangle positive from negative emotions but not the specific emotion. In three studies, we investigated the specificity of emotion processing under conditions of limited awareness using a modified variant of an affective priming task. Faces with happy, angry, sad, fearful, and neutral expressions were presented as masked primes for 33 ms (Study 1) or 14 ms (Studies 2 and 3) followed by emotional target faces (Studies 1 and 2) or emotional adjectives (Study 3). Participants’ task was to categorise the target emotion. In all three studies, discrimination of targets was significantly affected by the emotional primes beyond a simple positive versus negative distinction. Results indicate that specific aspects of emotions might be automatically disentangled in addition to valence, even under conditions of subjective unawareness.

9.
Uncovering the cognitive and neural mechanisms of emotional face processing has long been a central topic in psychology and social neuroscience. Previous research has relied mainly on single facial expressions to induce or present emotion, and the perception and experience of group emotion has received very little attention, even though crowd facial expressions are the primary means by which group emotion is conveyed. This project will therefore use crowd facial expressions (face ensembles) as group-emotion stimuli and combine event-related potentials (ERP), functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) with behavioural studies to characterise the temporal dynamics and brain-activation patterns of crowd facial-expression processing in terms of emotional information (valence and intensity), orientation (frontal, profile, inverted), completeness (partial versus whole presentation) and spatial-frequency content (full-spectrum, high-frequency, low-frequency). This work should advance a comprehensive understanding of how group emotion is recognised and has practical implications for optimising social interaction.

10.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed, and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy: old adults consistently showed significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies of age-related differences in emotion recognition.

11.
High levels of trait hostility are associated with wide-ranging interpersonal deficits and heightened physiological responses to social stressors. These deficits may be attributable in part to individual differences in the perception of social cues. The present study evaluated the ability to recognize facial emotion among 48 high hostile (HH) and 48 low hostile (LH) smokers, and whether experimentally manipulated acute nicotine deprivation moderated relations between hostility and facial emotion recognition. A computer program presented a series of pictures of faces that morphed from a neutral expression into increasing intensities of happiness, sadness, fear, or anger, and participants were asked to identify the emotion displayed as quickly as possible. Results indicated that HH smokers, relative to LH smokers, required a significantly greater intensity of emotion expression to recognize happiness. No differences were found for other emotions across HH and LH individuals, nor did nicotine deprivation moderate relations between hostility and emotion recognition. This is the first study to show that HH individuals are slower to recognize happy facial expressions and that this occurs regardless of recent tobacco abstinence. Difficulty recognizing happiness in others may affect the degree to which HH individuals are able to identify social approach signals and to receive social reinforcement.

12.
People tend to mimic the facial expressions of others. It has been suggested that this provides social glue between affiliated people, but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study investigated the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they took an emotion-recognition test again, but half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants with higher autism-trait scores. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.

13.
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, a mechanism allowing invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because the diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that categorical perception of facial expressions is viewpoint-invariant, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.

14.
胡治国, 刘宏艳. 《心理科学》 (Psychological Science), 2015, (5): 1087-1094
Accurate recognition of facial expressions is essential for successful social interaction, and such recognition is influenced by emotional context. This review first describes how emotional context can facilitate facial expression recognition, chiefly through within-modality emotion-congruency effects and cross-modal emotion-integration effects. It then describes how context can impair recognition, chiefly through emotional-conflict and semantic-interference effects, and next covers contextual influences on the recognition of neutral and ambiguous faces, chiefly contextual emotion-induction and subliminal affective-priming effects. Finally, the existing research is summarised and suggestions for future work are proposed.

15.
This study investigated the discrimination accuracy of emotional stimuli in subjects with major depression compared with healthy controls, using photographs of facial expressions of varying emotional intensities. The sample included 88 unmedicated male and female subjects, aged 18-56 years, with major depressive disorder (n = 44) or no psychiatric illness (n = 44), who judged the emotion of 200 facial pictures displaying an expression between 10% (90% neutral) and 80% (nuanced) emotion. Stimuli were presented in 10% increments to generate a range of intensities, each for a 500-ms duration. Compared with healthy volunteers, depressed subjects showed very good recognition accuracy for sad faces but impaired recognition accuracy for other emotions (e.g., harsh and surprise expressions) of subtle emotional intensity. Recognition accuracy improved in both groups as a function of increased intensity for all emotions. Finally, as depressive symptoms increased, recognition accuracy increased for sad faces but decreased for surprised faces. Overall, depressed subjects showed an impaired ability to accurately identify subtle facial expressions, indicating that depressive symptoms influence the accuracy of emotion recognition.
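Designs like the one above score recognition accuracy separately at each morph intensity level. A minimal sketch of that tabulation; the trial tuple format and the data are illustrative assumptions:

```python
from collections import defaultdict

def accuracy_by_intensity(trials):
    """Tabulate the proportion of correct responses at each intensity.

    trials: iterable of (intensity_pct, true_emotion, response) tuples.
    Returns a dict mapping intensity_pct -> proportion correct.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for intensity, truth, response in trials:
        totals[intensity] += 1
        hits[intensity] += int(truth == response)
    return {i: hits[i] / totals[i] for i in totals}

# Hypothetical data: low-intensity sad faces are missed more often.
trials = [
    (10, "sad", "neutral"), (10, "sad", "sad"),
    (80, "sad", "sad"),     (80, "sad", "sad"),
]
rates = accuracy_by_intensity(trials)  # {10: 0.5, 80: 1.0}
```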

16.
Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser’s internal feelings. It remains unresolved how much the recognition and discrimination of expressions rely on the perception of morphological patterns versus the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, although the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to the categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with “emotionless” computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than on affective information and mechanisms.

17.
Deception has been reported to be influenced by task-relevant emotional information from an external stimulus. However, it remains unclear how task-irrelevant emotional information influences deception. In the present study, facial expressions of different valence and emotional intensity were presented to participants, who were asked to make either truthful or deceptive gender judgments according to the preceding cues. We observed an influence of facial expression intensity on individuals’ cognitive cost of deceiving (the mean difference between truthful and deceptive response times): a larger cost was observed for high-intensity than for low-intensity faces. These results provide insight into how the automatic attraction of attention evoked by task-irrelevant emotional information in facial expressions influences individuals’ cognitive cost of deceiving.

18.
The objective of this study was to propose a method of presenting dynamic facial expressions to experimental subjects, in order to investigate human perception of an avatar's facial expressions at different levels of emotional intensity. The investigation concerned how perception varies with the strength of the facial expression, as well as with the avatar's gender. To accomplish these goals, we generated a male and a female virtual avatar with five levels of intensity of happiness and anger using a morphing technique. We then recruited 16 healthy subjects and measured each subject's emotional reaction by scoring affective arousal and valence after showing them the avatar's face. Through this study, we were able to investigate the human perceptual characteristics evoked by male and female avatars' graduated facial expressions of happiness and anger, and to identify that a virtual avatar's facial expression can affect human emotion in different ways according to the avatar's gender and the intensity of its facial expressions. However, virtual faces also have limitations because they are not real: subjects recognized the expressions well but were not influenced to the same extent. Although a virtual avatar has some limitations in conveying emotion through facial expressions, this study is significant in showing that emotional intensity can be used or manipulated by controlling a virtual avatar's facial expression linearly using a morphing technique. This technique may therefore be useful for assessing the emotional characteristics of humans, and may be of particular benefit for work with people with emotional disorders through the presentation of dynamic expressions of various emotional intensities.

19.
In a sample of 325 college students, we examined how context influences judgments of facial expressions of emotion, using a newly developed facial affect recognition task in which emotional faces are superimposed upon emotional and neutral contexts. This research used a larger sample size than previous studies, included more emotions, varied the intensity level of the expressed emotion to avoid potential ceiling effects from very easy recognition, did not explicitly direct attention to the context, and aimed to understand how recognition is influenced by non-facial information, both situationally relevant and situationally irrelevant. Both accuracy and response time (RT) varied as a function of context. For all facial expressions of emotion other than happiness, accuracy increased when the emotion of the face and context matched, and decreased when they mismatched. For all emotions, participants responded faster when the emotion of the face and image matched and slower when they mismatched. The results suggest that the judgment of the facial expression is itself influenced by the contextual information, rather than the two being judged independently and then combined. Additionally, the results have implications for developing models of facial affect recognition and indicate that there are factors other than the face that can influence facial affect recognition judgments.

20.
Suzuki A, Hoshino T, Shigemasu K. Cognition, 2006, 99(3): 327-353
Assessments of individual differences in facial expression recognition normally need to address two major issues: (1) high agreement levels (ceiling effects) and (2) differential difficulty levels across emotions. We propose a new assessment method designed to quantify individual differences in the recognition of the six basic emotions: 'sensitivities to basic emotions in faces.' We addressed the two major assessment issues by using morphing techniques and item response theory (IRT). We used morphing to create intermediate, mixed facial expression stimuli with various levels of recognition difficulty. Applying IRT enabled us to estimate the individual latent trait levels underlying the recognition of the respective emotions (sensitivity scores), unbiased by the stimulus properties that constitute difficulty. In a series of two experiments, we demonstrated that the sensitivity scores successfully addressed the two major assessment issues and their concomitant individual variability. Intriguingly, correlational analyses of the sensitivity scores for different emotions revealed orthogonality between happy and non-happy emotion recognition. To our knowledge, this is the first report of the independence of happiness recognition, unaffected by stimulus difficulty.
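Item response theory, as used above, models the probability that a respondent with a given latent trait level correctly answers an item of a given difficulty. A minimal sketch of the standard two-parameter logistic (2PL) item characteristic function; the parameter values are illustrative assumptions, and the paper's exact model specification may differ:

```python
import math

def irt_2pl(theta, a, b):
    """2PL item response function: probability that a respondent with
    latent trait theta correctly recognises an item with discrimination
    a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When sensitivity equals item difficulty, recognition probability is 0.5.
p_mid = irt_2pl(0.0, a=1.5, b=0.0)   # 0.5
# Higher latent sensitivity -> higher recognition probability.
p_high = irt_2pl(2.0, a=1.5, b=0.0)
```

Estimating the latent trait from observed responses (the sensitivity scores above) then amounts to fitting theta, a and b to the response data, e.g. by maximum likelihood.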
