Retrieved 20 similar records.
1.
Inhibiting Facial Expressions: Limitations to the Voluntary Control of Facial Expressions of Emotion
Recently, A. J. Fridlund (e.g., 1994) and others suggested that facial expressions of emotion are not linked to emotion and can be completely accounted for by social motivation. To clarify the influence of social motivation on the production of facial displays, we created an explicit motivation by using facial inhibition instructions. While facial electromyographic activity was recorded at three sites, participants saw humorous video stimuli in two conditions (inhibition, spontaneous) and neutral stimuli in a spontaneous condition. Participants showed significantly more EMG activity in the cheek region and less EMG activity in the brow region when they tried to completely inhibit amused expressions as compared with the neutral control task. Our results suggest that explicit motivation in the sense of voluntary control is not sufficient to mask the effects of spontaneous facial activation linked to humorous stimuli.
2.
Perception and Emotion: How We Recognize Facial Expressions
Ralph Adolphs 《Current directions in psychological science》2006,15(5):222-226
ABSTRACT— Perception and emotion interact, as is borne out by studies of how people recognize emotion from facial expressions. Psychological and neurological research has elucidated the processes, and the brain structures, that participate in facial emotion recognition. Studies have shown that emotional reactions to viewing faces can be very rapid and that these reactions may, in turn, be used to judge the emotion shown in the face. Recent experiments have argued that people actively explore facial expressions in order to recognize the emotion, a mechanism that emphasizes the instrumental nature of social cognition.
3.
This article reviews recent research on impaired processing of emotional facial expressions in schizophrenia, discussing the nature of this impairment and the explanations offered for it, such as whether it is a general or a specific deficit and how it relates to clinical symptoms and cognitive characteristics. Comparative analysis suggests that the impaired perception of emotional facial expressions in schizophrenia may combine a disorder of facial information processing with a difficulty in perceiving emotional information. The article also introduces international research on rehabilitation training for facial expression recognition and identification in schizophrenia, as well as recent studies of the underlying neurophysiological mechanisms using cognitive-neuroscience techniques such as event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI).
4.
ABSTRACT— There is consensus that when emotions are aroused, the displays of those emotions are either universal or culture-specific. We investigated the idea that an individual's emotional displays in a given context can be both universal and culturally variable, as they change over time. We examined the emotional displays of Olympic athletes across time, classified their expressive styles, and tested the association between those styles and a number of characteristics associated with the countries the athletes represented. Athletes from relatively urban, individualistic cultures expressed their emotions more, whereas athletes from less urban, collectivistic cultures masked their emotions more. These culturally influenced expressions occurred within a few seconds after initial, immediate, and universal emotional displays. Thus, universal and culture-specific emotional displays can unfold across time in an individual in a single context.
5.
We conducted two studies (Ns = 52 and 60) to test the notion that the incentive salience of facial expressions of emotion (FEEs) is a joint function of perceivers' implicit needs for power and affiliation and the FEE's meaning as a dominance or affiliation signal. We used a variant of the dot-probe task (Mogg & Bradley, 1999a) to measure attentional orienting. Joy, anger, surprise, and neutral FEEs were presented for 12, 116, and 231 ms with backward masking. Implicit motives were assessed with a Picture Story Exercise. We found (a) that power-motivated individuals orient their attention towards faces signaling low dominance, but away from faces that signal high dominance, and (b) that affiliation-motivated individuals show vigilance for faces signaling low affiliation (rejection) and, to a lesser extent, orient attention towards faces signaling high affiliation (acceptance).
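Two of the abstracts in this list (items 5 and 18) rely on the dot-probe task, in which attentional orienting is conventionally summarized as a bias score derived from reaction times. Neither study publishes its analysis code; the following is only a minimal sketch of that conventional computation, with hypothetical field names for trial-level data.

```python
import statistics

def dot_probe_bias(trials):
    """Compute a dot-probe attentional bias score in ms.

    Each trial is a dict with:
      rt_ms     - reaction time to the probe
      congruent - True if the probe replaced the emotional face,
                  False if it replaced the neutral face.
    A positive score means responses were faster when the probe
    replaced the emotional face, i.e., attention was oriented
    toward that face; a negative score indicates avoidance.
    """
    congruent = [t["rt_ms"] for t in trials if t["congruent"]]
    incongruent = [t["rt_ms"] for t in trials if not t["congruent"]]
    return statistics.mean(incongruent) - statistics.mean(congruent)

# Example with made-up reaction times (ms):
trials = [
    {"rt_ms": 386, "congruent": True},
    {"rt_ms": 391, "congruent": True},
    {"rt_ms": 404, "congruent": False},
    {"rt_ms": 399, "congruent": False},
]
print(dot_probe_bias(trials))  # 13.0 -> orienting toward the emotional face
```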
6.
Naomi E. Goldstein James Sexton Robert S. Feldman 《Journal of applied social psychology》2000,30(1):67-76
The relationship between knowledge of American Sign Language (ASL) and the ability to encode facial expressions of emotion was explored. Participants were 55 college students, half of whom were intermediate-level students of ASL and half of whom had no experience with a signed language. In front of a video camera, participants posed the affective facial expressions of happiness, sadness, fear, surprise, anger, and disgust. These facial expressions were randomized onto stimulus tapes that were then shown to 60 untrained judges who tried to identify the expressed emotions. Results indicated that hearing subjects knowledgeable in ASL were generally more adept than were hearing nonsigners at conveying emotions through facial expression. Results have implications for better understanding the nature of nonverbal communication in hearing and deaf individuals.
7.
Cynthia R. Ellis Kathy L. Lindstrom Theresa M. Villani Nirbhay N. Singh Al M. Best Alan S.W. Winton Philip K. Axtell Donald P. Oswald J.P. Leung 《Journal of child and family studies》1997,6(4):453-470
Interpreting and responding appropriately to facial expressions of emotion are important aspects of social skills. Some children, adolescents, and adults with various psychological and psychiatric disorders recognize facial expressions less proficiently than their peers in the general population. We wished to determine if such deficits existed in a group of 133 children and adolescents with emotional and behavioral disorders (EBD). The subjects were receiving in-patient psychiatric services for at least one of substance-related disorders, adjustment disorders, anxiety disorders, mood disorders or disruptive behavior disorders. After being read stories describing various emotional reactions, all subjects were tested for their ability to recognize the 6 basic facial expressions of emotion depicted in Ekman and Friesen's (1976) normed photographs. Overall, they performed well on this task at levels comparable to those occurring in the general population. Accuracy increased with age, irrespective of gender, ethnicity, or clinical diagnosis. After adjusting for age effects, the subjects diagnosed with either adjustment disorders, mood disorders, or disruptive behavior disorders were significantly more accurate at identifying anger than those without those diagnoses. In addition, subjects with mood disorders identified sadness significantly more accurately than those without this diagnosis, although the effect was greatest with younger children.
8.
The Face of Time: Temporal Cues in Facial Expressions of Emotion
Kari Edwards 《Psychological science》1998,9(4):270-276
Results of studies reported here indicate that humans are attuned to temporal cues in facial expressions of emotion. The experimental task required subjects to reproduce the actual progression of a target person's spontaneous expression (i.e., onset to offset) from a scrambled set of photographs. Each photograph depicted a segment of the expression that corresponded to approximately 67 ms in real time. Results of two experiments indicated that (a) individuals could detect extremely subtle dynamic cues in a facial expression and could utilize these cues to reproduce the proper temporal progression of the display at above-chance levels of accuracy; (b) women performed significantly better than men on the task designed to assess this ability; (c) individuals were most sensitive to the temporal characteristics of the early stages of an expression; and (d) accuracy was inversely related to the amount of time allotted for the task. The latter finding may reflect the relative involvement of (error-prone) cognitively mediated or strategic processes in what is normally a relatively automatic, nonconscious process.
9.
This study reanalyzes American and Japanese multiscalar ratings of universal facial expressions originally collected by Matsumoto (1986), of which only single emotion scales were analyzed and reported by Matsumoto and Ekman (1989). The nonanalysis of the entire data set ignored basic and important questions about the nature of judgments of universal facial expressions of emotion. These were addressed in this study. We found that (1) observers in both cultures perceived multiple emotions in universal facial expressions, not just one; (2) cultural differences occurred on multiple emotion scales for each expression, not just the target scale; (3) the directions of those differences differed according to the rating scale used and the expression being observed; and (4) no underlying dimension was evidenced that would account for these differences. These findings raise new questions about the nature of the judgment process and the role of judgment studies in supporting the universality thesis, the bases of which need to be explored in future research and incorporated in future theories of emotion and universality.
10.
We propose a computational model for identifying the emotional state of a facial expression from appraisal scores given by human observers, exploiting their differences in perception. The appraisal model of human emotion is adopted as the basis of this evaluation process, with appraisal variables as output. We investigated performance for both categorical and continuous representations of the variables appraised by human observers. Analysis of the data shows a higher degree of agreement between the estimated Indian ratings and the available reference when these are rated in the continuous domain. We also observed that emotional states with negative valence influence the perception of a hybrid emotional state such as 'Surprise' only when the appraisal variables are labeled through categories of emotions. The proposed method thus has implications for developing software that detects emotion using appraisal variables in the continuous domain, perceived from the facial expression of an agent (or human subject). Further, the model can be customized to include cultural variability in recognizing emotions.
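The abstract contrasts observer agreement under categorical versus continuous representations of the appraisal variables but does not name its metrics. A minimal sketch of one common way to quantify each kind of agreement, using made-up ratings (Cohen's kappa for category labels, Pearson correlation for continuous scores):

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of six expressions by two observer groups.
# Categorical emotion labels:
labels_a = ["joy", "anger", "surprise", "joy", "fear", "anger"]
labels_b = ["joy", "anger", "fear", "joy", "surprise", "anger"]
# Continuous appraisal scores (e.g., valence on a 1-9 scale):
scores_a = [7.5, 2.0, 5.5, 8.0, 2.5, 1.5]
scores_b = [7.0, 2.5, 5.0, 7.5, 3.0, 2.0]

kappa = cohen_kappa_score(labels_a, labels_b)  # chance-corrected agreement
r, p = pearsonr(scores_a, scores_b)            # linear agreement

print(f"categorical kappa = {kappa:.2f}")
print(f"continuous r = {r:.2f} (p = {p:.3f})")
```

With toy data like this, the continuous correlation typically comes out higher than the chance-corrected categorical agreement, which mirrors the direction of the abstract's finding.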
11.
Michelle S.M. Yik Zhaolan Meng James A. Russell 《Cognition & emotion》2013,27(5):723-730
English-speaking Canadian, Cantonese-speaking Hong Kong Chinese, and Japanese-speaking Japanese adults were shown 13 still photographs of the facial expressions of Chinese babies subjected to various emotion-elicitation procedures. Some respondents were asked to give an emotion label of their choice for each photograph, others to judge its pleasant-unpleasant quality. Only facial expressions taken during the "happy" condition showed majority agreement across all three cultural samples on a specific basic emotion. Agreement on the pleasant-unpleasant quality of the babies' expressions was higher, but still varied with culture.
12.
The authors investigated the ability of children with emotional and behavioral difficulties, divided according to their Psychopathy Screening Device scores (P. J. Frick & R. D. Hare, in press), to recognize emotional facial expressions and vocal tones. The Psychopathy Screening Device indexes a behavioral syndrome with two dimensions: affective disturbance and impulsivity and conduct problems. Nine children with psychopathic tendencies and nine comparison children were presented with two facial expression and two vocal tone subtests from the Diagnostic Analysis of Nonverbal Accuracy (S. Nowicki & M. P. Duke, 1994). These subtests measure the ability to name sad, fearful, happy, and angry facial expressions and vocal affects. The children with psychopathic tendencies showed selective impairments in the recognition of both sad and fearful facial expressions and sad vocal tone. In contrast, the two groups did not differ in their recognition of happy or angry facial expressions or fearful, happy, and angry vocal tones. The results are interpreted with reference to the suggestion that the development of psychopathic tendencies may reflect early amygdala dysfunction (R. J. R. Blair, J. S. Morris, C. D. Frith, D. I. Perrett, & R. Dolan, 1999).
13.
Facial expressions of happiness, excitement, surprise, fear, anger, disgust, sadness, and calm were presented stereoscopically to create pairwise perceptual conflict. Dominance of one expression over another was the most common result, but basic emotions (happiness, fear, etc.) failed to dominate non-basic emotions (excitement, calm). Instead, extremely pleasant or extremely unpleasant emotions dominated less valenced emotions (e.g., surprise). Blends of the presented pairs also occurred, mainly when the emotions were adjacent according to a circumplex structure of emotion. Blends were most common among negatively valenced emotions, such as fear, anger, and disgust.
14.
Attentional capture refers to the phenomenon in which task-irrelevant stimuli involuntarily attract attention. Experiment 1 used a visual search task to examine the degree and mechanism of attentional capture by emotional faces irrelevant to the main task; Experiment 2 further explored the influence of temporal task demands on attentional capture by irrelevant emotional faces. The results showed that, compared with other emotional faces, angry faces captured more attention, an effect influenced by holistic emotional processing. Temporal task demands affected attentional selection of the target stimuli, but the anger-superiority effect was unaffected by temporal task demands and may therefore reflect a relatively automatic process.
15.
This study examined whether African American children's ability to identify emotion in the facial expressions and tones of voice of European American stimuli was comparable to their European American peers and related to personality, social competence, and achievement. The Diagnostic Analysis of Nonverbal Accuracy (DANVA; Nowicki & Duke, 1994) was administered to 84 African American children. It was found that they performed less accurately on adult and child tones of voice and adult facial expressions. Further, girls' ability to read emotion in tones of voice was related to better social competence and achievement, whereas boys' ability to identify emotion in adult tones of voice was related to teacher-rated social competence. Results suggest that more research is needed with ethnic groups to clarify the impact of nonverbal processing skills on social and achievement outcomes.
16.
Processing Faces and Facial Expressions
This paper reviews processing of facial identity and expressions. The issue of independence of the two systems underlying these tasks has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from traditional approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns in response to the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion that distinct neural substrates are involved in processing different facial expressions.
17.
This paper reports the results of three tasks comparing the development of the understanding of facial expressions of emotion in deaf and hearing children. Two groups of hearing and deaf children of elementary school age were tested for their ability to match photographs of facial expressions of emotion, and to produce and comprehend emotion labels for the expressions of happiness, sadness, anger, fear, disgust, and surprise. Accuracy data showed comparable levels of performance for deaf and hearing children of the same age. Happiness and sadness were the most accurately matched expressions and the most accurately produced and comprehended labels. Anger was the least accurately matched expression and the most poorly comprehended emotion label. Disgust was the least accurately labeled expression; however, deaf children were more accurate at labeling this expression, and also at labeling fear, than hearing children. Error data revealed that children confused anger with disgust, and fear with surprise. However, the younger groups of deaf and hearing children also showed a tendency to confuse the negative expressions of anger, disgust, and fear with sadness. The results suggest that, despite possible differences in the early socialisation of emotion, deaf and hearing children share a common understanding of the emotions conveyed by distinctive facial expressions.
18.
Abstract: Objective: To examine whether individuals high in trait anger show an attentional bias toward negative emotional faces. Method: A dot-probe task was used to compare reaction times to probes appearing on the same versus the opposite side as emotional faces of different valences between high and low trait-anger groups (23 participants each). Results: Repeated-measures ANOVA revealed a marginal main effect of face valence (F = 2.462, p = .073) and an interaction between group and face valence; a main effect of probe location (F = 5.089, p = .029) and an interaction between group and probe location; and a three-way interaction of group, face valence, and probe location. Follow-up analyses showed that the high trait-anger group responded significantly faster to probes on the same side as angry faces than to probes on the opposite side [(386.12 ± 50.09) ms vs. (403.33 ± 59.39) ms, F = 17.050, p = .000], and significantly slower to probes on the same side as happy faces than to probes on the opposite side [(396.88 ± 53.87) ms vs. (38.78 ± 41.06) ms, F = 18.200, p = .000]; the low trait-anger group showed no significant same-side versus opposite-side differences for faces of any valence. Conclusion: Individuals high in trait anger show an attentional bias toward anger-related stimuli.
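As a worked illustration, the anger means reported above yield the standard dot-probe bias index (the happy-face index is not computed here because the second happy-face mean appears truncated in the source):

\[
\text{bias}_{\text{anger}} = \overline{RT}_{\text{opposite}} - \overline{RT}_{\text{same}} = 403.33 - 386.12 = 17.21\ \text{ms} > 0
\]

A positive index means probes at the angry face's location were detected faster, i.e., attention was oriented toward angry faces, consistent with the stated conclusion.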
19.
The authors aimed to examine the possible association between (a) accurately reading emotion in facial expressions and (b) social and academic competence among elementary school-aged children. Participants were 840 7-year-old children who completed a test of the ability to read emotion in facial expressions. Teachers rated children's social and academic behavior using behavioral rating scales. The authors found that children who had more difficulty identifying emotion in faces also were more likely to have more problems overall and, more specifically, with peer relationships among boys and with learning difficulties among girls. Findings suggest that nonverbal receptive skill plays a significant role in children's social and academic adjustment.
20.
This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: This includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as "focus," "topic" and "comment," "theme" and "rheme," or "given" and "new" information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: What is contrastive/background information in the given context, and what is the "topic" or "theme" of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
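The pipeline described above bottoms out in FACS action units precisely so that the generator stays portable across facial models. The paper's own data structures are not reproduced here; the following is only a minimal sketch of what such a lowest-level representation might look like, with hypothetical names and two illustrative, well-established action units (AU1+AU2 for a brow raise, AU12 for the lip-corner pull of a smile).

```python
from dataclasses import dataclass

@dataclass
class FacsEvent:
    """One scheduled facial action, portable across face models."""
    action_unit: int   # FACS action unit number
    intensity: float   # 0.0-1.0
    onset_ms: int      # when the action starts
    offset_ms: int     # when it ends

# Hypothetical mapping from high-level signals to action units.
SIGNAL_TO_AUS = {
    "pitch_accent": [1, 2],  # brow raise co-occurring with emphasis
    "smile": [12],           # lip-corner puller for positive affect
}

def conversational_signal(kind, start_ms, duration_ms, intensity=0.8):
    """Expand a high-level signal into synchronized FACS events."""
    return [
        FacsEvent(au, intensity, start_ms, start_ms + duration_ms)
        for au in SIGNAL_TO_AUS[kind]
    ]

# A brow raise synchronized with a pitch accent on a focused word:
for ev in conversational_signal("pitch_accent", start_ms=420, duration_ms=300):
    print(ev)
```

Keeping the mapping table separate from the event schedule is one way to realize the portability claim: swapping in a different facial model only requires reinterpreting the action-unit numbers, not the scheduling logic.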