Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
In this paper, we investigate to what extent modern computer vision and machine learning techniques can assist social psychology research by automatically recognizing facial expressions. To this end, we develop a system that automatically recognizes the action units defined in the Facial Action Coding System (FACS). The system uses a sophisticated deformable template, known as the active appearance model, to model the appearance of faces. The model is used to identify the location of facial feature points, as well as to extract features from the face that are indicative of the action unit states. The detection of the presence of action units is performed by a time series classification model, the linear-chain conditional random field. We evaluate the performance of our system in experiments on a large data set of videos with posed and natural facial expressions. In the experiments, we compare the action units detected by our approach with annotations made by human FACS annotators. Our results show that the agreement between the system and human FACS annotators is higher than 90%, which underlines the potential of modern computer vision and machine learning techniques for social psychology research. We conclude with some suggestions on how systems like ours can play an important role in research on social signals.
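The sketch below is a minimal, illustrative version of the time-series classification step: a linear-chain conditional random field labelling each frame of a video as a given action unit being present or absent. It assumes per-frame appearance features (for example, AAM parameters) have already been extracted; sklearn-crfsuite stands in for the authors' CRF implementation, and the feature names and toy data are invented.

```python
# Minimal sketch: linear-chain CRF for per-frame action-unit detection.
# Each "video" is a sequence of per-frame feature vectors (stand-ins for AAM
# shape/appearance parameters) with binary labels for one AU.
import numpy as np
import sklearn_crfsuite
from sklearn_crfsuite import metrics

def frame_to_features(frame_vector):
    # crfsuite expects each frame as a dict of named (numeric) features.
    return {f"f{i}": float(v) for i, v in enumerate(frame_vector)}

def video_to_features(video):            # video: (n_frames, n_features) array
    return [frame_to_features(f) for f in video]

# Toy data: two "videos" with random features and synthetic AU labels.
rng = np.random.default_rng(0)
X_train = [video_to_features(rng.normal(size=(50, 10))) for _ in range(2)]
y_train = [["present" if t % 10 < 3 else "absent" for t in range(50)] for _ in range(2)]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

y_pred = crf.predict(X_train)
print(metrics.flat_f1_score(y_train, y_pred, average="weighted",
                            labels=["present", "absent"]))
```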

2.
Automated assessment of facial expressions with machine vision software opens up new opportunities for assessing facial expression in an efficient and economical way in psychological and applied research. We investigated the assessment quality of one machine vision algorithm (FACET) in a study using standardized databases of dynamic facial expressions in different conditions (angle, distance, lighting and resolution). We found high reliability in terms of ratings concordance across conditions for facial expressions (intraclass correlation, ICC = 0.96) and action units (ICC = 0.78). Signal detection analyses showed good classification for both facial expressions (area under the curve, AUC > 0.99) and action unit scores (AUC = 0.91). In a second study, we investigated the convergent validity of machine vision assessment and electromyography (EMG) with regard to reaction times measured during the production of smiles (action unit 12) and frowns (action unit 4). To this end, we simultaneously measured EMG and expression classification with machine vision software in a response priming task with validly and invalidly primed responses. Both EMG and machine vision data revealed similar performance costs in reaction times when inhibiting the falsely prepared expression and reprogramming the correct one. These results support machine vision as a suitable tool for assessing experimental effects in facial reaction times.
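The two headline metrics above, intraclass correlation across recording conditions and area under the ROC curve, can be reproduced on toy data roughly as follows; the condition names, column names, and simulated scores are assumptions for illustration, not the study's actual variables.

```python
# Minimal sketch of the reliability/validity metrics reported above:
# ICC across conditions (via pingouin) and AUC for classifier scores (sklearn).
import numpy as np
import pandas as pd
import pingouin as pg
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# ICC: the same 30 stimuli ("targets") scored under four conditions ("raters").
stimuli = np.repeat(np.arange(30), 4)
conditions = np.tile(["frontal", "angled", "dim", "low_res"], 30)
true_intensity = np.repeat(rng.uniform(0, 1, 30), 4)
scores = true_intensity + rng.normal(0, 0.05, 120)
df = pd.DataFrame({"stimulus": stimuli, "condition": conditions, "score": scores})
icc = pg.intraclass_corr(data=df, targets="stimulus", raters="condition", ratings="score")
print(icc[["Type", "ICC"]])

# AUC: how well continuous classifier scores separate target vs. non-target expressions.
is_target = rng.integers(0, 2, 200)
classifier_score = is_target * 0.6 + rng.uniform(0, 0.4, 200)
print("AUC:", roc_auc_score(is_target, classifier_score))
```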

3.
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.

4.
5.
The pioneering work of Duchenne (1862/1990) was replicated in humans using intramuscular electrical stimulation and extended to another species (Pan troglodytes: chimpanzees) to facilitate comparative facial expression research. Intramuscular electrical stimulation, in contrast to the original surface stimulation, offers the opportunity to activate individual muscles as opposed to groups of muscles. In humans, stimulation resulted in appearance changes in line with Facial Action Coding System (FACS) action units (AUs), and chimpanzee facial musculature displayed functional similarity to human facial musculature. The present results provide objective identification of the muscle substrate of human and chimpanzee facial expressions: data that will be useful in providing a common language to compare the units of human and chimpanzee facial expression.

6.
Responses to surprising events are dynamic. We argue that initial responses are primarily driven by the unexpectedness of the surprising event and reflect an interrupted and surprised state in which the outcome does not make sense yet. Later responses, after sense-making, are more likely to incorporate the valence of the outcome itself. To identify initial and later responses to surprising stimuli, we conducted two repetition-change studies and coded the general valence of facial expressions using computerised facial coding and specific facial action using the Facial Action Coding System (FACS). Results partly supported our unfolding logic. The computerised coding showed that initial expressions to positive surprises were less positive than later expressions. Moreover, expressions to positive and negative surprises were initially similar, but after some time differentiated depending on the valence of the event. Importantly, these patterns were particularly pronounced in a subset of facially expressive participants, who also showed facial action in the FACS coding. The FACS data showed that the initial phase was characterised by limited facial action, whereas the later increase in positivity seems to be explained by smiling. Conceptual as well as methodological implications are discussed.

7.
Sato, W., & Yoshikawa, S. (2007). Cognition, 104(1), 1-18.
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing (Experiment 1) and videos (Experiment 2). The subjects' facial actions were unobtrusively videotaped and blindly coded using the Facial Action Coding System [FACS; Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press]. In the dynamic presentations common to both experiments, brow lowering, a prototypical action in angry expressions, occurred more frequently in response to angry expressions than to happy expressions. The pulling of lip corners, a prototypical action in happy expressions, occurred more frequently in response to happy expressions than to angry expressions in dynamic presentations. Additionally, the mean latency of these actions was less than 900 ms after the onset of dynamic changes in facial expression. Naive raters recognized the subjects' facial reactions as emotional expressions, with the valence corresponding to the dynamic facial expressions that the subjects were viewing. These results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

8.
Facial expression recognition in a wild situation is a challenging problem in computer vision research due to different circumstances, such as pose dissimilarity, age, lighting conditions, occlusions, etc. Numerous methods, such as point tracking, piecewise affine transformation, compact Euclidean space, modified local directional pattern, and dictionary-based component separation have been applied to solve this problem. In this paper, we propose a deep learning–based automatic wild facial expression recognition system in which we implement an incremental active learning framework using the VGG16 model developed by the Visual Geometry Group. We gathered a large amount of unlabeled facial expression data from Intelligent Technology Lab (ITLab) members at Inha University, Republic of Korea, to train our incremental active learning framework. We collected these data under five different recording conditions: good lighting, average lighting, close to the camera, far from the camera, and natural lighting, and with seven facial expressions: happy, disgusted, sad, angry, surprised, fearful, and neutral. Our face detection framework is adapted from a multi-task cascaded convolutional network detector. Repeating the entire process helps obtain better performance. Our experimental results demonstrate that incremental active learning improves the starting baseline accuracy from 63% to an average of 88% on the ITLab dataset in a wild environment. We also present extensive results on facial expression benchmarks such as the Extended Cohn-Kanade dataset, as well as on the ITLab face dataset captured in a wild environment, and obtain better performance than state-of-the-art approaches.
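The incremental active-learning loop described above (train on a small labelled seed set, query the most uncertain unlabelled examples, add them, retrain) can be sketched as follows. A logistic-regression classifier on pre-extracted features stands in for the fine-tuned VGG16, and the pool size, seed size, and query batch size are illustrative assumptions.

```python
# Minimal sketch of an uncertainty-based incremental active-learning loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_pool, n_features, n_classes = 1000, 128, 7           # 7 facial expressions
X_pool = rng.normal(size=(n_pool, n_features))
y_pool = rng.integers(0, n_classes, n_pool)             # oracle labels (normally a human annotator)

labeled = list(rng.choice(n_pool, size=50, replace=False))   # small seed set
unlabeled = [i for i in range(n_pool) if i not in labeled]

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    probs = clf.predict_proba(X_pool[unlabeled])
    # Least-confidence sampling: query the examples the model is most unsure about.
    uncertainty = 1.0 - probs.max(axis=1)
    query = [unlabeled[i] for i in np.argsort(uncertainty)[-25:]]
    labeled.extend(query)
    unlabeled = [i for i in unlabeled if i not in query]
    acc = clf.score(X_pool[unlabeled], y_pool[unlabeled])
    print(f"round {round_}: labeled={len(labeled)} held-out acc={acc:.2f}")
```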

9.
In an attempt to discover the facial action units for affective states that occur during complex learning, this study adopted an emote-aloud procedure in which participants were recorded as they verbalised their affective states while interacting with an intelligent tutoring system (AutoTutor). Participants' facial expressions were coded by two expert raters using Ekman's Facial Action Coding System and analysed using association rule mining techniques. Agreement between the two expert raters, expressed as kappa, ranged between .76 and .84. The association rule mining analysis uncovered facial actions associated with confusion, frustration, and boredom. We discuss these rules and the prospects of enhancing AutoTutor with non-intrusive, affect-sensitive capabilities.
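A minimal sketch of the two analysis steps mentioned above, inter-rater agreement (kappa) and association rule mining linking action units to affective states, might look like this; the coded segments and transactions are invented for illustration, and support/confidence are computed by hand rather than with a rule-mining library.

```python
# Sketch: Cohen's kappa for coder agreement, then a tiny hand-rolled
# association-rule computation ("AU pattern -> affective state").
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Agreement between two FACS coders on the same video segments (toy codes).
rater1 = ["AU4", "AU4", "AU7", "none", "AU4", "AU7"]
rater2 = ["AU4", "AU4", "AU7", "AU7", "AU4", "none"]
print("kappa:", round(cohen_kappa_score(rater1, rater2), 2))

# Each transaction = AUs observed in one emote-aloud episode plus its affect label.
transactions = [
    {"AU4", "AU7", "confusion"},
    {"AU4", "confusion"},
    {"AU4", "AU7", "confusion"},
    {"AU43", "boredom"},
    {"AU43", "AU55", "boredom"},
    {"AU12", "AU25", "frustration"},
]
states = {"confusion", "boredom", "frustration"}
aus = sorted({item for t in transactions for item in t} - states)

for state in sorted(states):
    for r in (1, 2):
        for antecedent in combinations(aus, r):
            with_antecedent = [t for t in transactions if set(antecedent) <= t]
            if not with_antecedent:
                continue
            hits = [t for t in with_antecedent if state in t]
            support = len(hits) / len(transactions)
            confidence = len(hits) / len(with_antecedent)
            if support >= 0.2 and confidence >= 0.7:
                print(f"{antecedent} -> {state}: support={support:.2f} confidence={confidence:.2f}")
```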

10.
This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as "focus," "topic" and "comment," "theme" and "rheme," or "given" and "new" information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the "topic" or "theme" of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye movements, and head movements. The lowest-level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
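As a toy illustration of the kind of rule-governed mapping described, the sketch below walks a meaning representation (given/new information status, theme/rheme role) through accent selection to an accompanying facial signal expressed as FACS action units. The accent labels and the specific AU choices are assumptions for illustration, not the authors' rule set.

```python
# Toy sketch: discourse meaning -> intonational accent -> facial signal (FACS AUs).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Word:
    text: str
    information_status: str   # "given" or "new"
    discourse_role: str       # "theme" or "rheme"

def choose_accent(word: Word) -> Optional[str]:
    # Toy rule: only new information is accented; the accent type depends on
    # whether the word belongs to the theme or the rheme of the utterance.
    if word.information_status == "given":
        return None
    return "L+H*" if word.discourse_role == "theme" else "H*"

def facial_signals(accent: Optional[str]) -> List[str]:
    # Illustrative conversational signal: a brow raise (AU1 + AU2) on accented
    # words; unaccented words get no extra facial signal here.
    return ["AU1", "AU2"] if accent else []

utterance = [
    Word("JULIA", "new", "theme"),
    Word("prefers", "given", "rheme"),
    Word("POPCORN", "new", "rheme"),
]
for w in utterance:
    accent = choose_accent(w)
    print(f"{w.text:8s} accent={accent or '-':5s} facial={facial_signals(accent) or '-'}")
```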

11.
Previous research in automatic facial expression recognition has been limited to recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). We have developed a system that detects a discrete and important facial action (e.g., eye blinking) in spontaneously occurring facial behavior that has been measured with a nonfrontal pose, moderate out-of-plane head motion, and occlusion. The system recovers three-dimensional motion parameters, stabilizes facial regions, extracts motion and appearance information, and recognizes discrete facial actions in spontaneous facial behavior. We tested the system on video data from a two-person interview. The 10 subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology and represent a substantial challenge to automated action unit recognition. In the analysis of blinks, the system achieved 98% accuracy.
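The paper's system recovers 3-D motion parameters and stabilizes facial regions before recognizing blinks; as a much simpler stand-in illustration of frame-level blink detection, the sketch below thresholds the eye aspect ratio (EAR) computed from six eye landmarks (a different, lightweight technique, not the authors' method). The landmark ordering, threshold, and toy trace are assumptions.

```python
# Simplified blink detection from eye landmarks via the eye aspect ratio (EAR).
import numpy as np

def eye_aspect_ratio(eye):
    # eye: (6, 2) array of landmarks ordered p1..p6 around the eye contour.
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def detect_blinks(ear_per_frame, threshold=0.2, min_consecutive=2):
    # A blink is a run of at least `min_consecutive` frames with low EAR.
    blinks, run = [], 0
    for i, ear in enumerate(ear_per_frame):
        if ear < threshold:
            run += 1
        else:
            if run >= min_consecutive:
                blinks.append((i - run, i - 1))   # (start_frame, end_frame)
            run = 0
    return blinks

# Toy EAR trace: open eyes (~0.3) with one brief closure (~0.1).
ears = [0.31, 0.30, 0.29, 0.12, 0.08, 0.10, 0.28, 0.30, 0.31]
print(detect_blinks(ears))
```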

12.
When faces appear as a group, the cognitive-neural system automatically integrates their emotional information and extracts the average emotion, a process known as ensemble coding of crowd facial emotion. Clarifying how this process is dissociated from low-level holistic representations, how it relates to the representation of individual faces, and what characterizes its neural activity is key to revealing its processing mechanisms, but no systematic model has yet been established. Future research should combine eye tracking, electrophysiology, and neuroimaging techniques, and incorporate attention, memory, and social cues to further examine its cognitive-neural mechanisms and influencing factors, while also attending to special populations with cognitive and affective disorders and exploring its developmental trajectory from a lifespan perspective.

13.
A total of 64 children, aged 7 and 10, watched a clown performing three sketches rated as very funny by the children. Two experimental conditions were created by asking half of the participants to suppress their laughter. Facial expressions were videotaped and analysed with FACS. For both ages, the results show a significantly shorter duration (but not a lower frequency) of episodes of laughter and Duchenne smiles, and a greater frequency of facial control movements, in the suppression group compared to the free-expression group. The detailed results on individual facial action units used to control amusement expressions suggest hypotheses on the nature of the underlying processes. The participants' explicit knowledge of their control strategies was assessed through standardised interviews. Although behavioural control strategies were reported equally frequently by the two age groups, 10-year-olds verbalised more mental control strategies than 7-year-olds. This theoretically expected difference was not related to the actual ability to control facial expression. This result challenges the commonly held assumption that explicit knowledge of control strategies results in a greater ability to execute such control in ongoing social interactions.

14.
Despite advances in the conceptualisation of facial mimicry, its role in the processing of social information is a matter of debate. In the present study, we investigated the relationship between mimicry and cognitive and emotional empathy. To assess mimicry, facial electromyography was recorded for 70 participants while they completed the Multifaceted Empathy Test, which presents complex context-embedded emotional expressions. As predicted, inter-individual differences in emotional and cognitive empathy were associated with the level of facial mimicry. For positive emotions, the intensity of the mimicry response scaled with the level of state emotional empathy. Mimicry was stronger for the emotional empathy task compared to the cognitive empathy task. The specific empathy condition could be successfully detected from facial muscle activity at the level of single individuals using machine learning techniques. These results support the view that mimicry occurs as a context-dependent tool for affiliation and that it is involved in cognitive as well as emotional empathy.
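The single-subject decoding result (detecting the empathy condition from facial muscle activity) can be illustrated with a cross-validated classifier on per-trial EMG features. The feature set (mean zygomaticus and corrugator activity), trial counts, and simulated effect sizes below are assumptions, not the study's data.

```python
# Sketch: decode task condition (emotional vs. cognitive empathy) from
# per-trial facial EMG features within one participant, with cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials = 40   # per condition

# Simulated trials: stronger zygomaticus response in the emotional-empathy task.
emotional = np.column_stack([rng.normal(1.0, 0.4, n_trials),   # zygomaticus
                             rng.normal(0.2, 0.3, n_trials)])  # corrugator
cognitive = np.column_stack([rng.normal(0.5, 0.4, n_trials),
                             rng.normal(0.3, 0.3, n_trials)])
X = np.vstack([emotional, cognitive])
y = np.array([1] * n_trials + [0] * n_trials)   # 1 = emotional, 0 = cognitive

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```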

15.
16.
Facial expressions of emotion are nonverbal behaviors that allow us to interact efficiently in social life and respond to events affecting our welfare. This article reviews 21 studies, published between 1932 and 2015, examining the production of facial expressions of emotion by blind people. It particularly discusses the impact of visual experience on the development of this behavior from birth to adulthood. After a discussion of three methodological considerations, the review of studies reveals that blind subjects demonstrate differing capacities for producing spontaneous expressions and voluntarily posed expressions. Seventeen studies provided evidence that blind and sighted individuals spontaneously produce the same pattern of facial expressions, even if some variations can be found, reflecting facial and body movements specific to blindness or differences in intensity and control of emotions in some specific contexts. This suggests that lack of visual experience does not seem to have a major impact when this behavior is generated spontaneously in real emotional contexts. In contrast, eight studies examining voluntary expressions indicate that blind individuals have difficulty posing emotional expressions. The opportunity for prior visual observation seems to affect performance in this case. Finally, we discuss three new directions for research to provide additional and strong evidence for the debate over whether the production of emotional facial expressions by blind individuals is innate or the result of culture-constant learning: the link between perception and production of facial expressions, the impact of display rules in the absence of vision, and the role of other channels in the expression of emotions in the context of blindness.

17.
The pain expression is one of the behavioural channels through which humans communicate pain, and it carries important value for survival adaptation and social communication. Research on pain expressions should follow behavioural observation methods; expression coding based on the Facial Action Coding System (FACS) facilitates the quantitative analysis of pain expressions. Factors such as age, gender, social context, and cultural background influence how pain is expressed, so that pain expressions show both commonalities and individual differences across people. With continued improvement of research methods, future work can further elucidate the physiological and psychological mechanisms of pain expressions and is expected to establish a comprehensive database of human pain expressions.
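One concrete way FACS coding supports quantitative analysis of pain expression is a composite of pain-related action-unit intensities; the sketch below uses the Prkachin and Solomon Pain Intensity (PSPI) formula, AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, with invented per-frame intensity codes. The abstract above does not name PSPI specifically, so this particular composite is an illustrative choice.

```python
# Sketch: quantify pain expression per frame from FACS intensity codes (PSPI).
def pspi(au):
    """au: dict of FACS intensity codes (AU4-AU10 on a 0-5 scale, AU43 binary)."""
    return (au.get("AU4", 0)
            + max(au.get("AU6", 0), au.get("AU7", 0))
            + max(au.get("AU9", 0), au.get("AU10", 0))
            + au.get("AU43", 0))

frames = [
    {"AU4": 0, "AU6": 0, "AU7": 0, "AU9": 0, "AU10": 0, "AU43": 0},  # neutral
    {"AU4": 2, "AU6": 1, "AU7": 2, "AU9": 0, "AU10": 1, "AU43": 0},  # mild pain
    {"AU4": 4, "AU6": 3, "AU7": 4, "AU9": 2, "AU10": 3, "AU43": 1},  # strong pain
]
print([pspi(f) for f in frames])   # -> [0, 5, 12]
```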

18.
Facial expressions and the regulation of emotions
In the two decades since contemporary psychologists produced strong evidence confirming Darwin's century-old hypothesis of the innateness and universality of certain facial expressions of emotions, research on expressive behavior has become well established in developmental, social, and personality psychology and in psychophysiology. There are also signs of increased interest in emotions in clinical psychology and the neurosciences. Despite the success of the work on emotion expression and the upward trend of interest in emotions in general, the fundamental issue of the relation between emotion expression and emotion experience or feeling state remains controversial. A new developmental model of expression-feeling relations provides a framework for reevaluating previous research and for understanding the conditions under which expressions are effective in activating and regulating feeling states. The model has implications for research, socialization practices, and psychotherapy.

19.
Jack, R. E. (2013). Visual Cognition, 21(9-10), 1248-1286.
With over a century of theoretical developments and empirical investigation in broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have elevated knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.

20.
Recent application of theories of embodied or grounded cognition to the recognition and interpretation of facial expressions of emotion has led to an explosion of research in psychology and the neurosciences. However, despite the accelerating number of reported findings, it remains unclear how the many component processes of emotion and their neural mechanisms actually support embodied simulation. Equally unclear is what triggers the use of embodied simulation versus perceptual or conceptual strategies in determining meaning. The present article integrates behavioral research from social psychology with recent research in the neurosciences in order to provide coherence to the extant and future research on this topic. The roles of several of the brain's reward systems, the amygdala, somatosensory cortices, and motor centers are examined. These are then linked to behavioral and brain research on facial mimicry and eye gaze. Articulation of the mediators and moderators of facial mimicry and gaze is particularly useful in guiding interpretation of relevant findings from the neurosciences. Finally, a model of the processing of the smile, the most complex of the facial expressions, is presented as a means to illustrate how to advance the application of theories of embodied cognition in the study of facial expression of emotion.
