Similar Literature
20 similar documents retrieved.
1.
Facial expressions convey not only emotions but also communicative information. Therefore, facial expressions should be analysed to understand communication. The objective of this study is to develop an automatic facial expression analysis system for extracting nonverbal communicative information. This study focuses on specific communicative information: emotions expressed through facial movements and the direction of the expressions. We propose a multi-tasking deep convolutional network (DCN) to classify facial expressions, detect the facial regions, and estimate face angles. We reformulate facial region detection and face angle estimation as regression problems and add task-specific output layers in the DCN’s architecture. Experimental results show that the proposed method performs all tasks accurately. In this study, we show the feasibility of the multi-tasking DCN for extracting nonverbal communicative information from a human face.
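The abstract describes the architecture only at a high level. Below is a minimal sketch, in PyTorch, of the general pattern it names: a shared convolutional trunk with task-specific output layers for expression classification and for the two regression tasks (facial region and face angle). All layer sizes, the box/angle parameterisations, and the loss weights are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch of a multi-task network: shared trunk, three task-specific heads.
# Sizes and loss weights are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    def __init__(self, num_expressions: int = 7):
        super().__init__()
        # Shared feature extractor (illustrative depth/width).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific output layers.
        self.expression_head = nn.Linear(64, num_expressions)  # classification
        self.region_head = nn.Linear(64, 4)   # face box (x, y, w, h), regression
        self.angle_head = nn.Linear(64, 3)    # yaw, pitch, roll, regression

    def forward(self, x):
        feats = self.trunk(x)
        return self.expression_head(feats), self.region_head(feats), self.angle_head(feats)

def multi_task_loss(outputs, targets, w_region=1.0, w_angle=1.0):
    """Joint loss: cross-entropy for the expression class, L2 for the regressions."""
    expr_logits, region_pred, angle_pred = outputs
    expr_true, region_true, angle_true = targets
    loss = nn.functional.cross_entropy(expr_logits, expr_true)
    loss = loss + w_region * nn.functional.mse_loss(region_pred, region_true)
    loss = loss + w_angle * nn.functional.mse_loss(angle_pred, angle_true)
    return loss
```

Training a single trunk on the joint loss is what lets the regression tasks share features with the expression classifier, which is the core idea of reformulating detection and angle estimation as additional output layers.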

2.
This study investigated whether observers' facial reactions to the emotional facial expressions of others represent an affective or a cognitive response to these emotional expressions. Three hypotheses were contrasted: (1) facial reactions to emotional facial expressions are due to mimicry as part of an affective empathic reaction; (2) facial reactions to emotional facial expressions are a reflection of shared affect due to emotion induction; and (3) facial reactions to emotional facial expressions are determined by cognitive load depending on task difficulty. Two experiments were conducted varying type of task, presentation of stimuli, and task difficulty. The results show that depending on the nature of the rating task, facial reactions to facial expressions may be either affective or cognitive. Specifically, evidence for facial mimicry was only found when individuals made judgements regarding the valence of an emotional facial expression. Other types of judgements regarding facial expressions did not seem to elicit mimicry but may lead to facial responses related to cognitive load.

3.
Studies on adults have revealed a disadvantageous effect of negative emotional stimuli on executive functions (EF), and it is suggested that this effect is amplified in children. The present study’s aim was to assess how emotional facial expressions affected working memory in 9- to 12-year-olds, using a working memory task with emotional facial expressions as stimuli. Additionally, we explored how the degree of internalizing and externalizing symptoms in typically developing children was related to performance on the same task. Before the working memory task was employed, an independent sample of 9- to 12-year-olds was asked to recognize the facial expressions intended to serve as stimuli and to rate them for the degree to which the emotion was expressed and for arousal, to obtain a baseline for how children in this age range recognize and react to facial expressions. This first study revealed that children rated the facial expressions with similar intensity and arousal across age. When the working memory task with facial expressions was employed, results revealed that negatively valenced expressions impaired working memory more than neutral and positively valenced expressions. The ability to successfully complete the working memory task increased between 9 and 12 years of age. Children’s total problems were associated with poorer performance on the working memory task with facial expressions. Results on the effect of emotion on working memory are discussed in light of recent models and empirical findings on how emotional information might interact and interfere with cognitive processes such as working memory.

4.
This study used a facial emotion detection task with participants screened for high and low trait anxiety via the State-Trait Anxiety Inventory, to examine how scene context influences the processing of facial expressions of different emotional valences and intensities, and to explore the role trait anxiety plays in this influence. The results showed: (1) The influence of scene context on emotion detection differed across emotional valences. For happy facial expressions, at the 100%, 80%, and 20% intensity levels, detection accuracy was significantly higher when the scene and the facial emotion were congruent than when they were incongruent; for fearful facial expressions, higher detection accuracy in the congruent than the incongruent condition was found at the 80%, 60%, 40%, and 20% intensity levels. (2) For the high trait anxiety group, detection accuracy did not differ significantly between congruent and incongruent conditions, i.e., no significant scene effect emerged; for the low trait anxiety group, the difference was significant, i.e., a clear scene effect appeared. These results indicate that: (1) For facial expressions of low emotional intensity, detection of both happy and fearful faces is more susceptible to scene influence. (2) Scene context influences the detection of medium-intensity fearful faces more readily than that of medium-intensity happy faces. (3) Trait anxiety moderates the influence of scene context on facial emotion detection, with highly trait-anxious individuals being less influenced by scene information during emotion recognition.

5.
The process of discriminating among genuine, suppressed, and faked expressions of pain was examined. Untrained judges estimated the severity of pain being experienced when viewing videotaped facial expressions of chronic pain patients undergoing a painful diagnostic test or dissimulating reactions. Verbal feedback as to whether pain was experienced was also provided, either consistent or inconsistent with the facial expression. Judges were able to distinguish genuine pain faces from baseline expressions but, relative to genuine pain faces, attributed more pain to faked faces and less pain to suppressed ones. Advance warning of deception did not improve discrimination but led to a more conservative or nonempathic judging style. Verbal feedback increased or decreased judgments, as appropriate, but facial information was consistently assigned greater weight. An augmenting model of the judgment process that attaches considerable importance to the context in which information is provided was supported.

6.
Four experiments demonstrate that infants of 5 and 7 months can detect information that is invariant across the acoustic and optic presentations of a single affective expression. Infants were presented simultaneously with two filmed facial expressions accompanied by a single vocal expression characteristic of one of the facial expressions. The infants increased their looking time to a facial expression when it was sound-specified, as compared to when that filmed expression was projected silently. Even when synchrony relations were disrupted, infants looked proportionately longer to the film that was sound-specified, indicating that some factor other than temporal synchrony guided the infants' looking behavior. When infants viewed the filmed facial expressions either in a normal orientation or upside-down, those infants viewing the facial expressions in the normal orientation looked appropriately, while those viewing the inverted films did not. These findings support the view that infants are sensitive to amodal, potentially meaningful invariant relations in expressive behaviors. These results are discussed in the context of J. J. Gibson's theory of affordances.

7.
This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: This includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as “focus,” “topic,” and “comment,” “theme” and “rheme,” or “given” and “new” information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: What is contrastive/background information in the given context, and what is the “topic” or “theme” of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
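The pipeline described above bottoms out in FACS action units. As a rough illustration of that lowest level only, here is a minimal sketch of how a conversational signal might be expanded into timed action-unit events; the rule table, signal names, and timing are hypothetical and are not taken from the paper.

```python
# Sketch: expand a high-level facial signal into FACS action-unit (AU) events
# with a time span. The rules and defaults are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AUEvent:
    au: int          # FACS action unit number
    intensity: float # 0..1
    start: float     # seconds
    end: float       # seconds

# Hypothetical rule table: conversational signal -> action units.
SIGNAL_TO_AUS = {
    "raise_brows_on_focus": [1, 2],   # AU1 inner brow raiser, AU2 outer brow raiser
    "blink_punctuator": [45],         # AU45 blink
    "smile_affect": [6, 12],          # AU6 cheek raiser, AU12 lip corner puller
}

def schedule_signal(signal: str, start: float, duration: float,
                    intensity: float = 0.7) -> list:
    """Expand one conversational signal into synchronized AU events."""
    return [AUEvent(au, intensity, start, start + duration)
            for au in SIGNAL_TO_AUS[signal]]

# Example: a brow raise synchronized with a pitch accent at t = 1.2 s.
events = schedule_signal("raise_brows_on_focus", start=1.2, duration=0.4)
```

Because the final representation is AU-level, any facial model that can render FACS action units could consume such a schedule, which is the portability the abstract refers to.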

8.
Although facial information is distributed over spatial as well as temporal domains, thus far research on selective attention to disapproving faces has concentrated predominantly on the spatial domain. This study examined the temporal characteristics of visual attention towards facial expressions by presenting a Rapid Serial Visual Presentation (RSVP) paradigm to high (n=33) and low (n=34) socially anxious women. Neutral letter stimuli (p, q, d, b) were presented as the first target (T1), and emotional faces (neutral, happy, angry) as the second target (T2). Irrespective of social anxiety, the attentional blink was attenuated for emotional faces. Emotional faces as T2 did not influence identification accuracy of a preceding (neutral) target. The relatively low threshold for the (explicit) identification of emotional expressions is consistent with the view that emotional facial expressions are processed relatively efficiently.

9.
Recognition of facial expressions has traditionally been investigated by presenting facial expressions without any context information. However, we rarely encounter an isolated facial expression; usually, we perceive a person's facial reaction as part of the surrounding context. In the present study, we addressed the question of whether emotional scenes influence the explicit recognition of facial expressions. In three experiments, participants were required to categorize facial expressions (disgust, fear, happiness) that were shown against backgrounds of natural scenes with either a congruent or an incongruent emotional significance. A significant interaction was found between facial expressions and the emotional content of the scenes, showing a response advantage for facial expressions accompanied by congruent scenes. This advantage was robust against increasing task load. Taken together, the results show that the surrounding scene is an important factor in recognizing facial expressions.

10.
The facial expressions of 96 term and preterm neonates were recorded during the Brazelton Neonatal Behavior Assessment. The expressions that occurred most frequently during the neurological reflex items were interest, disgust, sadness, and crying. The predominant facial expression during the orienting items was that of interest. Although happy and surprised faces were more common during the orienting than the reflex items, they occurred very infrequently. Some of the reflex items elicited more negative expressions than others, and some of the orienting items elicited more frequent expressions of interest than others, suggesting that facial expressions might reflect the degree to which the stimuli were experienced as pleasant or unpleasant and more or less interesting. Although the examiner's face and voice were more effective than inanimate stimuli in eliciting positive expressions in term neonates, the reverse was true for preterm neonates. Thus facial expressions may provide additional information on the degree to which neonates experience stimulation as pleasant/unpleasant and on individual differences in responsiveness to physical and social stimulation.

11.
In this paper, the role of self-reported anxiety and degree of conscious awareness as determinants of the selective processing of affective facial expressions is investigated. In two experiments, an attentional bias toward fearful facial expressions was observed, although this bias was apparent only for those reporting high levels of trait anxiety and only when the emotional face was presented in the left visual field. This pattern was especially strong when the participants were unaware of the presence of the facial stimuli. In Experiment 3, a patient with right-hemisphere brain damage and visual extinction was presented with photographs of faces and fruits on unilateral and bilateral trials. On bilateral trials, it was found that faces produced less extinction than did fruits. Moreover, faces portraying a fearful or a happy expression tended to produce less extinction than did neutral expressions. This suggests that emotional facial expressions may be less dependent on attention to achieve awareness. The implications of these results for understanding the relations between attention, emotion, and anxiety are discussed.

12.
Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals who are conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals, and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause. Bimodal bilinguals produced more ASL-appropriate facial expressions than did nonsigners and synchronized their expressions with the onset of the corresponding English clauses. This result provides evidence for a dual-language architecture in which grammatical information can be integrated up to the level of phonological implementation. Overall, participants produced more raised brows than furrowed brows, which can convey negative affect. Bimodal bilinguals suppressed but did not completely inhibit ASL facial grammar when it conflicted with conventional facial gestures. We conclude that morphosyntactic elements from two languages can be articulated simultaneously and that complete inhibition of the nonselected language is difficult.

13.
Wei Chengyao & Zhao Dongmei. Advances in Psychological Science, 2012, 20(10): 1614-1622
In recent years, cross-cultural research on facial expressions has produced growing evidence of both cross-cultural consistency and cross-cultural differences. The expression and recognition of spontaneous facial expressions, the in-group advantage effect, and the upper-lower asymmetry of facial expression information have become focal topics in this field. The dialect theory, the Chinese folk model, and the EMPATH model offer theoretical accounts of cross-cultural findings on facial expressions from three different perspectives. Display rules, decoding rules, and language effects are important factors influencing the cross-cultural expression and recognition of facial expressions. Future cross-cultural research on the expression and recognition of facial expressions should pay closer attention to two aspects: the feature information of facial expressions and the factors that influence their expression and recognition.

14.
In a sample of 325 college students, we examined how context influences judgments of facial expressions of emotion, using a newly developed facial affect recognition task in which emotional faces are superimposed upon emotional and neutral contexts. This research used a larger sample size than previous studies, included more emotions, varied the intensity level of the expressed emotion to avoid potential ceiling effects from very easy recognition, did not explicitly direct attention to the context, and aimed to understand how recognition is influenced by non-facial information, both situationally-relevant and situationally-irrelevant. Both accuracy and RT varied as a function of context. For all facial expressions of emotion other than happiness, accuracy increased when the emotion of the face and context matched, and decreased when they mismatched. For all emotions, participants responded faster when the emotion of the face and image matched and slower when they mismatched. Results suggest that the judgment of the facial expression is itself influenced by the contextual information instead of both being judged independently and then combined. Additionally, the results have implications for developing models of facial affect recognition and indicate that there are factors other than the face that can influence facial affect recognition judgments.

15.
Recent studies have shown that cueing eye gaze can affect the processing of visual information, a phenomenon called the gaze-orienting effect (visual-GOE). Emerging evidence has shown that cueing eye gaze also affects the processing of auditory information (auditory-GOE). However, it is unclear whether the auditory-GOE is modulated by emotion. We conducted three behavioural experiments to investigate whether cueing eye gaze influenced the orientation judgement of a sound, and whether the effect was modulated by facial expressions. The current study used four facial expressions (angry, fearful, happy, and neutral), manipulated the display type of the facial expressions, and varied the sequence of the gaze cue and the emotional expression. Participants were required to judge the sound orientation after the facial expressions and gaze cues. The results showed that the orientation judgement of the sound was influenced by gaze direction in all three experiments: judgements were faster when the face was oriented to the target location (congruent trials) than when the face was oriented away from the target location (incongruent trials). Modulation of the auditory-GOE by emotion was observed only when the gaze shift was followed by the facial expression (Experiment 3); the auditory-GOE was significantly greater for angry faces than for neutral faces. These findings indicate that the auditory-GOE, as a social phenomenon, exists widely and that the effect is modulated by facial expression. A gaze shift before the presentation of emotion was the key factor for emotional modulation in an auditory target gaze-orienting task. Our findings suggest that the integration of facial expressions and eye gaze is context-dependent.

16.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were mainly based on the facial expressions, but were affected by manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results conform with the proposal that perception of facial expressions of emotions can be affected by the expressive qualities of hand movements.

17.
The effects of Parkinson's disease (PD) on spontaneous and posed facial activity and on the control of facial muscles were assessed by comparing 22 PD patients with 22 controls. Facial activity was analysed using the Facial Action Coding System (FACS; Ekman & Friesen, 1978). As predicted, PD patients showed reduced levels of spontaneous and posed facial expression in reaction to unpleasant odours compared to controls. PD patients were less successful than controls in masking or intensifying negative facial expressions. PD patients were also less able than controls to imitate specific facial muscle movements, but did not differ in the ability to pose emotional facial expressions. These results suggest that not only is spontaneous facial activity disturbed in PD, but also to some degree the ability to pose facial expressions, to mask facial expressions with other expressions, and to deliberately move specific muscles in the face.

18.
The current study aimed to extend the understanding of the early development of spontaneous facial reactions toward observed facial expressions. Forty-six 9- to 10-month-old infants observed video clips of dynamic human facial expressions that were artificially created with morphing technology. The infants’ facial responses were recorded, and the movements of facial action unit 12 (e.g., lip-corner raising, associated with happiness) and facial action unit 4 (e.g., brow-lowering, associated with anger) were visually evaluated by multiple naïve raters. Results showed that (1) infants make congruent, observable facial responses to facial expressions, and (2) these specific facial responses are enhanced during repeated observation of the same emotional expressions. These results suggest the presence of observable congruent facial responses in the first year of life, and that they appear to be influenced by contextual information, such as the repetition of presentation of the target emotional expressions.

19.
Previous research has demonstrated that depression is associated with dysfunctional attentional processing of emotional information. Most studies examined this bias by registering response latencies. The present study employed an ecologically valid measurement of attentive processing, using eye-movement registration. Dysphoric and non-dysphoric participants viewed slides presenting sad, angry, happy and neutral facial expressions. For each type of expression, three components of visual attention were analysed: the relative fixation frequency, fixation time and glance duration. Attentional biases were also investigated for inverted facial expressions to ensure that they were not related to eye-catching facial features. Results indicated that non-dysphoric individuals were characterised by longer fixation and dwell times on happy faces. Dysphoric individuals showed longer dwell times on sad and neutral faces. These results were not found for inverted facial expressions. The present findings are in line with the assumption that depression is associated with a prolonged attentional elaboration on negative information.
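For readers unfamiliar with the three gaze measures named above, the following is a minimal sketch of how relative fixation frequency, total fixation time, and glance (dwell) duration could be computed from fixation records tagged with the expression being looked at. The record format and field names are assumptions for illustration, not the authors' analysis code.

```python
# Sketch: per-expression gaze measures from a list of fixations, where each
# fixation is a dict with 'aoi' (e.g. 'sad', 'happy') and 'duration' (seconds).
from collections import defaultdict

def fixation_measures(fixations):
    """Return per-AOI relative fixation frequency and total fixation time."""
    counts = defaultdict(int)
    total_time = defaultdict(float)
    for fix in fixations:
        counts[fix["aoi"]] += 1
        total_time[fix["aoi"]] += fix["duration"]
    n = sum(counts.values()) or 1  # avoid division by zero on empty input
    rel_freq = {aoi: c / n for aoi, c in counts.items()}
    return rel_freq, dict(total_time)

def glance_durations(fixations):
    """Glance (dwell) duration: summed time of consecutive fixations on the
    same AOI before gaze moves elsewhere; returns (aoi, duration) per visit."""
    glances = []
    current_aoi, current_time = None, 0.0
    for fix in fixations:
        if fix["aoi"] != current_aoi:
            if current_aoi is not None:
                glances.append((current_aoi, current_time))
            current_aoi, current_time = fix["aoi"], 0.0
        current_time += fix["duration"]
    if current_aoi is not None:
        glances.append((current_aoi, current_time))
    return glances
```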

20.
Previous studies on gender differences in facial imitation and verbally reported emotional contagion have investigated emotional responses to pictures of facial expressions at supraliminal exposure times. The aim of the present study was to investigate how gender differences are related to different exposure times, representing information processing levels from subliminal (spontaneous) to supraliminal (emotionally regulated). Further, the study aimed at exploring correlations between verbally reported emotional contagion and facial responses for men and women. Masked pictures of angry, happy and sad facial expressions were presented to 102 participants (51 men) at exposure times from subliminal (23 ms) to clearly supraliminal (2500 ms). Myoelectric activity (EMG) from the corrugator and the zygomaticus was measured and the participants reported their hedonic tone (verbally reported emotional contagion) after stimulus exposures. The results showed an effect of exposure time on gender differences in facial responses as well as in verbally reported emotional contagion. Women amplified imitative responses towards happy vs. angry faces and verbally reported emotional contagion with prolonged exposure times, whereas men did not. No gender differences were detected at the subliminal or borderliminal exposure times, but at the supraliminal exposure gender differences were found in imitation as well as in verbally reported emotional contagion. Women showed correspondence between their facial responses and their verbally reported emotional contagion to a greater extent than men. The results were interpreted in terms of gender differences in emotion regulation, rather than as differences in biologically prepared emotional reactivity.
