Similar Articles
1.
Although affective facial pictures are widely used in emotion research, standardised affective stimulus sets are rather scarce, and the existing sets have several limitations. We therefore conducted a validation study of 490 pictures of human facial expressions from the Karolinska Directed Emotional Faces database (KDEF). Pictures were evaluated on emotional content and rated on intensity and arousal scales. Results indicate that the database contains a valid set of affective facial pictures. Hit rates, intensity, and arousal of the 20 best KDEF pictures for each basic emotion are provided in an appendix.

2.
We examined the concurrent validity properties of the Facial Discrimination Task (FDT), an instrument for the assessment of facial emotion recognition, by comparing it with the widely used Pictures of Facial Affect (PFA). In Study 1, 100 adults with heterogeneous psychiatric diagnoses were administered items of the FDT Emotion Task, the FDT Age Task, and the PFA. In Study 2, 25 normally developing preschool children were instructed to label happy, sad, or neutral facial expressions from the FDT and the PFA. Despite methodological differences between the two studies, very similar and high correlations were found between the FDT and the PFA overall correct scores (r = .79 and r = .77, respectively). The data suggest that the FDT and the PFA measure similar competencies in preschoolers and in adults with psychiatric disorders. This finding is important because it establishes the concurrent validity of the FDT in child and adult populations.

3.
How similar are the meanings of facial expressions of emotion and the emotion terms frequently used to label them? In three studies, subjects made similarity judgments and emotion self-report ratings in response to six emotion categories represented in Ekman and Friesen's Pictures of Facial Affect, and their associated labels. Results were analyzed with respect to the constituent facial movements using the Facial Action Coding System, and using consensus analysis, multidimensional scaling, and inferential statistics. Shared interpretation of meaning was found between individuals and the group, with congruence between the meaning in facial expressions, labeling using basic emotion terms, and subjects' reported emotional responses. The data suggest that (1) the general labels used by Ekman and Friesen are appropriate but may not be optimal, (2) certain facial movements contribute more to the perception of emotion than do others, and (3) perception of emotion may be categorical rather than dimensional.

4.
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the emotionality conveyed by FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with the FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.

5.
Facial behaviors of medal winners of the judo competition at the 2004 Athens Olympic Games were coded with P. Ekman and W. V. Friesen's (1978) Facial Action Coding System (FACS) and interpreted using their Emotion FACS dictionary. Winners' spontaneous expressions were captured immediately when they completed medal matches, when they received their medal from a dignitary, and when they posed on the podium. The 84 athletes who contributed expressions came from 35 countries. The findings strongly supported the notions that expressions occur in relation to emotionally evocative contexts in people of all cultures, that these expressions correspond to the facial expressions of emotion considered to be universal, that expressions provide information that can reliably differentiate the antecedent situations that produced them, and that expressions that occur without inhibition differ from those that occur in social and interactive settings.

6.
This study examined recognition of facial expressions of emotion among women diagnosed with borderline personality disorder (BPD; n = 21), compared to a group of women with histories of childhood sexual abuse and no current or prior diagnosis of BPD (n = 21) and a group of women with no history of sexual abuse or BPD (n = 20). Facial recognition was assessed with a slide set developed by Ekman and Matsumoto (Japanese and Caucasian Facial Expressions of Emotion and Neutral Faces, 1992), expanded and improved from previous slide sets, using a coding system that allowed free responses rather than the more typical fixed-response format. Results indicated that borderline individuals were primarily accurate perceivers of others' emotions and showed a tendency toward heightened sensitivity in recognizing fear, specifically. Results are discussed in terms of emotional appraisal ability and emotion dysregulation among individuals with BPD.

7.
Within the framework of signal detection theory, this study examined how emotional valence and retention interval interact to influence older adults' discriminability (d′) and response criterion (β) in a picture recognition task, and thereby their false memory. Participants were 21 older adults (14 women) with a mean age of 67.17 ± 5.03 years. Based on emotion ratings of International Affective Picture System (IAPS) pictures by Chinese older adults, 60 positive, 60 negative, and 60 neutral pictures were selected as study materials. An additional 30 positive, 30 negative, and 30 neutral pictures served as distractors in each recognition test, with different distractors used in the two tests. In the short-interval condition, participants completed the recognition test half an hour after encoding the pictures; in the long-interval condition, the test was completed three weeks later. Results showed that (1) in the short-interval condition, both β and d′ influenced the false alarm rate, whereas in the long-interval condition only β did; (2) at both intervals, d′ did not differ significantly between positive and negative pictures; (3) at the short interval, older adults showed a lower β and a higher false alarm rate for negative pictures, whereas at the long interval they showed a lower β and a higher false alarm rate for positive pictures. These findings indicate that older adults' false memory is influenced by both discriminability (d′) and response criterion (β): emotional valence affects false memory by acting on response bias (β) rather than on memory quality (d′), and the retention interval moderates this effect of valence, reversing it over time. In false memory, the "positivity effect" of older adults may thus manifest as a growing willingness, as time passes, to report positive information as previously experienced or studied.
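For readers less familiar with these signal detection indices, the minimal sketch below shows the standard equal-variance Gaussian SDT computations of d′ and β from hit and false-alarm rates; this is not code from the study, and the rates used are hypothetical, chosen only to illustrate how a lower β raises the false-alarm rate while d′ stays roughly constant.

```python
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: discriminability d' and
    likelihood-ratio criterion beta from hit/false-alarm rates."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)  # ratio of densities at the criterion
    return d_prime, beta

# Hypothetical rates for illustration only (not data from the study):
# d' is roughly equal in both cases, but the second observer says "old"
# more liberally (lower beta), producing more false alarms -- the pattern
# the abstract attributes to valence rather than to memory quality.
print(sdt_indices(0.80, 0.30))  # d' ~ 1.37, beta ~ 0.81
print(sdt_indices(0.90, 0.45))  # d' ~ 1.41, beta ~ 0.44
```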

8.
Facial emotions are important for human communication. Unfortunately, traditional facial emotion recognition tasks do not indicate how respondents might behave towards others expressing certain emotions. Approach-avoidance tasks do measure behaviour, but only on one dimension. In this study, 81 participants completed a novel Facial Emotion Response Task. Images displaying individuals with emotional expressions were presented in random order. Participants simultaneously indicated how communal (quarrelsome vs. agreeable) and how agentic (dominant vs. submissive) they would be in response to each expression. We found that participants responded differently to happy, angry, fearful, and sad expressions in terms of both dimensions of behaviour. Higher levels of negative affect were associated with less agreeable responses, specifically towards happy and sad expressions. The Facial Emotion Response Task might complement existing facial emotion recognition and approach-avoidance tasks.

9.
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before each picture of a facial expression. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

10.
The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.

11.
We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away from viewers), North-European as well as Mediterranean models (male and female), and nine discrete emotions (joy, anger, fear, sadness, surprise, disgust, contempt, pride, and embarrassment). Study 1 showed that the ADFES received excellent recognition scores. Recognition was affected by social categorization of the model: displays of North-European models were better recognized by Dutch participants, suggesting an ingroup advantage. Head-turning did not affect recognition accuracy. Study 2 showed that participants more strongly perceived themselves to be the cause of the other's emotion when the model's face turned toward the respondents. The ADFES provides new avenues for research on emotion expression and is available for researchers upon request.

12.
Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only.

13.
The present study utilized a short-term longitudinal research design to examine the hypothesis that shyness in preschoolers is differentially related to different aspects of emotion processing. Using teacher reports of shyness and performance measures of emotion processing, including (1) facial emotion recognition, (2) non-facial emotion recognition, and (3) emotional perspective-taking, we examined 337 Head Start attendees twice at a 24-week interval. Results revealed significant concurrent and longitudinal relationships between shyness and facial emotion recognition, and either minimal or non-existent relationships between shyness and the other aspects of emotion processing. Correlational analyses of concurrent assessments revealed that shyness predicted poorer facial emotion recognition scores for negative emotions (sad, angry, and afraid), but not for a positive emotion (happy). Analyses of change over time, on the other hand, revealed that shyness predicted change in facial emotion recognition scores for all four measured emotions. Facial emotion recognition scores did not predict changes in shyness. Results are discussed with respect to expanding the scope of research on shyness and emotion processing to include time-dependent studies that allow for the specification of developmental processes.

14.
The left and right hemispheres of the brain are differentially related to the processing of emotions. Although there is little doubt that the right hemisphere is relatively superior for processing negative emotions, controversy exists over the hemispheric role in the processing of positive emotions. Eighty right-handed normal male participants were examined for visual-field (left-right) differences in the perception of facial expressions of emotion. Facial composite (RR, LL) and hemifacial (R, L) sets depicting emotion expressions of happiness and sadness were prepared. Pairs of such photographs were presented bilaterally for 150 ms, and participants were asked to select the photographs that looked more expressive. A left visual-field superiority (a right-hemisphere function) was found for sad facial emotion. A hemispheric advantage in the perception of happy expression was not found.

15.
This study explored the relationship between children's sociometric status (SMS) and ability to recognize emotions from facial expressions. As expected, high SMS children obtained significantly higher emotion recognition scores than did low SMS children. However, there was an unanticipated inverse correlation between low SMS children's recognition scores for adult photographs and their scores for child photographs. Possible reasons for the observed relationships are discussed.

16.
There is evidence that some emotional expressions are characterized by diagnostic cues from individual face features. For example, an upturned mouth is indicative of happiness, whereas a furrowed brow is associated with anger. The current investigation explored whether motivating people to perceive stimuli in a local (i.e., feature-based) rather than global (i.e., holistic) processing orientation was advantageous for recognizing emotional facial expressions. Participants classified emotional faces while primed with local and global processing orientations via a Navon letter task. Contrary to previous findings for identity recognition, the current findings indicate a modest advantage for facial emotion recognition under conditions of local processing orientation. When primed with a local processing orientation, participants performed both significantly faster and more accurately on an emotion recognition task than when primed with a global processing orientation. The implications of this finding for theories of emotion recognition and face processing are considered.

17.
To assemble a calibrated set of compassion-eliciting visual stimuli, 60 clinically healthy Mexican volunteers (36 women, 24 men; M age = 27.5 yr., SD = 2.4) assessed 84 pictures selected from the International Affective Picture System catalogue using the dimensions of Valence, Arousal, and Dominance included in the Self-Assessment Manikin scale and an additional dimension of Compassion. Pictures showing suffering in social contexts and expressions of sadness elicited similar responses of compassion. The highest compassion response was reported for pictures showing illness and pain. Men and women differed in the intensity but not the quality of the compassionate responses. Compassion included attributes of negative emotions such as displeasure. The quality of the emotional response was not different from that previously reported for samples in the U.S.A., Spain, and Brazil. A set of 28 pictures was selected as high-compassion-evoking images and 28 as null-compassion controls, suitable for studies designed to ascertain the neural substrates of this moral emotion.

18.
Facial expressions play a crucial role in emotion recognition compared with other modalities. In this work, an integrated network capable of recognizing emotion intensity levels from facial images in real time using deep learning techniques is proposed. Cognitive studies of facial expressions based on expression intensity levels are useful in applications such as healthcare, collaborative robotics, and Industry 4.0. This work proposes to augment emotion recognition with two other important parameters, valence and emotion intensity, which helps a machine respond more appropriately to an emotion. The valence model classifies emotions as positive or negative, and the discrete model classifies them as happy, anger, disgust, surprise, or neutral using a Convolutional Neural Network (CNN). Feature extraction and classification are carried out using the CMU Multi-PIE database. The proposed architecture achieves 99.1% and 99.11% accuracy for the valence and discrete models, respectively, on offline image data with 5-fold cross-validation. The average real-time accuracies for the valence and discrete models are 95% and 95.6%, respectively. This work also contributes a new database, built using facial landmarks, with three intensity levels of facial expression, which helps classify expressions into low, mild, and high intensities. Performance is also tested with different classifiers. The proposed integrated system is configured for real-time Human-Robot Interaction (HRI) applications on a test bed consisting of a Raspberry Pi and an RPA platform to assess its performance.
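As a rough companion to the two-model design described above (the abstract does not disclose the actual network), a minimal two-head CNN along these lines might look as follows in PyTorch; the layer sizes, the 48x48 grayscale input, and the class ordering are all illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    """Illustrative two-head CNN: one head for valence (positive/negative),
    one for five discrete emotions (happy, anger, disgust, surprise, neutral).
    All layer sizes are assumptions, not the paper's architecture."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 48x48 -> 24x24
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 24x24 -> 12x12
            nn.Flatten(),
        )
        self.valence_head = nn.Linear(32 * 12 * 12, 2)   # positive / negative
        self.discrete_head = nn.Linear(32 * 12 * 12, 5)  # five emotion classes

    def forward(self, x):
        features = self.backbone(x)
        return self.valence_head(features), self.discrete_head(features)

# Smoke test on a batch of four hypothetical 48x48 grayscale face crops.
model = EmotionNet()
valence_logits, discrete_logits = model(torch.randn(4, 1, 48, 48))
print(valence_logits.shape, discrete_logits.shape)  # -> (4, 2) and (4, 5)
```

Sharing one backbone between the two heads is a design choice, not something the abstract states; the paper may equally well train two separate networks.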
