20 similar documents found; search took 7 ms
1.
Positive and Negative: Infant Facial Expressions and Emotions  Total citations: 4 (self-citations: 0, by others: 4)
One path to understanding emotional processes and their development is the investigation of early facial expressions. Converging evidence suggests that although all infant smiles index positive emotion, some smiles are more positive than others. The evidence stems both from the situations in which infants produce different facial expressions and from naive observers' ratings of the emotional intensity of the expressions. The observers' ratings also suggest that similar facial actions—such as cheek raising—lead smiles to be perceived as more positive and lead negative expressions (cry-faces) to be perceived as more negative. One explanation for this parsimony is that certain facial actions are associated with the intensification of both positive and negative emotions.
2.
The purpose of this study was to compare the recognition performance of children who identified facial expressions of emotions using adults' and children's stimuli. The subjects were 60 children equally distributed in six subgroups as a function of sex and three age levels: 5, 7, and 9 years. They had to identify the emotion that was expressed in 48 stimuli (24 adults' and 24 children's expressions) illustrating six emotions: happiness, surprise, fear, disgust, anger, and sadness. The task of the children consisted of selecting the facial stimulus that best matched a short story that clearly described an emotional situation. The results indicated that recognition performances were significantly affected by the age of the subjects: 5-year-olds were less accurate than 7- and 9-year-olds, who did not differ from each other. There were also differences in recognition levels between emotions. No effects related to the sex of the subjects and to the age of the facial stimuli were observed.
3.
The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.
4.
The authors investigated children's ability to recognize emotions from the information available in the lower, middle, or upper face. School-age children were shown partial or complete facial expressions and asked to say whether they corresponded to a given emotion (anger, fear, surprise, or disgust). The results indicate that 5-year-olds were able to recognize fear, anger, and surprise from partial facial expressions. Fear was better recognized from information in the upper face than from information in the lower face. A similar pattern of results was found for anger, but only in girls. Recognition improved between 5 and 10 years of age for surprise and anger, but not for fear and disgust.
5.
6.
Inhibiting Facial Expressions: Limitations to the Voluntary Control of Facial Expressions of Emotion
Recently, A. J. Fridlund (e.g., 1994) and others suggested that facial expressions of emotion are not linked to emotion and can be completely accounted for by social motivation. To clarify the influence of social motivation on the production of facial displays, we created an explicit motivation by using facial inhibition instructions. While facial electromyographic activity was recorded at three sites, participants saw humorous video stimuli in two conditions (inhibition, spontaneous) and neutral stimuli in a spontaneous condition. Participants showed significantly more EMG activity in the cheek region and less EMG activity in the brow region when they tried to completely inhibit amused expressions as compared with the neutral control task. Our results suggest that explicit motivation in the sense of voluntary control is not sufficient to mask the effects of spontaneous facial activation linked to humorous stimuli.
7.
As a first step in involving user emotion in human-computer interaction, a memory-based expert system (JANUS; Kearney, 1991) was designed to interpret facial expression in terms of the signaled emotion. Anticipating that a VDU-mounted camera will eventually supply face parameters automatically, JANUS now accepts manually made measurements on a digitized full-face photograph and returns emotion labels used by college students. An intermediate representation in terms of face actions (e.g., mouth open) is also used; production rules convert the face geometry into these actions. A dynamic memory (Kolodner, 1984; Schank, 1982) interprets the face actions in terms of emotion labels. The memory is dynamic in the sense that new emotion labels can be learned with experience. A prototype system has been implemented on a Sun 2/120 system using POPLOG. Validation studies on the prototype suggest that the interpretations achieved are generally consistent with those of college students without formal instruction in emotion signals.
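The two-stage pipeline this abstract describes (production rules mapping face geometry to face actions, then a dynamic memory mapping action sets to emotion labels) can be sketched roughly as follows. All measurement names, rule thresholds, and stored cases here are illustrative assumptions, not the actual JANUS rules:

```python
# Hypothetical sketch of a JANUS-style two-stage pipeline.

def geometry_to_actions(m):
    """Production rules: convert raw measurements (assumed names) to face actions."""
    actions = set()
    if m["mouth_opening"] > 0.3:              # threshold is illustrative
        actions.add("mouth open")
    if m["lip_corner_y"] > m["lip_center_y"]: # corners above center: a smile cue
        actions.add("lip corners raised")
    if m["brow_height"] > 0.7:
        actions.add("brows raised")
    return actions

# "Dynamic memory": stored action sets with their emotion labels; new
# labels can be added at run time, as the abstract describes.
memory = [
    ({"mouth open", "lip corners raised"}, "happiness"),
    ({"mouth open", "brows raised"}, "surprise"),
]

def interpret(actions):
    """Return the label of the best-matching stored case (simple overlap score)."""
    best = max(memory, key=lambda case: len(case[0] & actions))
    return best[1]

def learn(actions, label):
    """Learning: store a new case so its label is available later."""
    memory.append((set(actions), label))

face = {"mouth_opening": 0.5, "lip_corner_y": 0.8,
        "lip_center_y": 0.5, "brow_height": 0.4}
print(interpret(geometry_to_actions(face)))  # happiness
```

The overlap-scoring retrieval stands in for the (unspecified) case-based reasoning of a Kolodner/Schank-style dynamic memory; the point is only the geometry-to-actions-to-label flow.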
8.
Maria Guarnera Zira Hichy Maura Cascio Stefano Carrubba Stefania L. Buccheri 《The Journal of genetic psychology》2017,178(6):309-318
The authors sought to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust, and neutral expressions from facial information, and to investigate whether this pattern changes with age. More specifically, the present study compared the performance of adults and 6- to 7-year-old children in detecting emotions from the whole face and from specific face regions, namely the eyes and the mouth. The findings indicate that, for both groups, recognition of disgust, happiness, and surprise is facilitated when pictures show the whole face. With regard to specific regions, however, children showed no advantage for either the eyes or the mouth, whereas adults seemed to rely more on the eye region. Finally, regarding differences in emotion-recognition performance, adults were better only in a few cases, whereas children were better at recognizing anger from the mouth.
9.
Processing Faces and Facial Expressions  Total citations: 10 (self-citations: 0, by others: 10)
This paper reviews processing of facial identity and expressions. The issue of the independence of these two systems for these tasks has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from traditional approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns in response to the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion of distinct neural substrates for processing different facial expressions.
10.
Perception and Emotion: How We Recognize Facial Expressions  Total citations: 2 (self-citations: 0, by others: 2)
Ralph Adolphs 《Current directions in psychological science》2006,15(5):222-226
ABSTRACT— Perception and emotion interact, as is borne out by studies of how people recognize emotion from facial expressions. Psychological and neurological research has elucidated the processes, and the brain structures, that participate in facial emotion recognition. Studies have shown that emotional reactions to viewing faces can be very rapid and that these reactions may, in turn, be used to judge the emotion shown in the face. Recent experiments have argued that people actively explore facial expressions in order to recognize the emotion, a mechanism that emphasizes the instrumental nature of social cognition.
11.
《Visual cognition》2013,21(2):81-118
Using computer-generated line-drawings, Etcoff and Magee (1992) found evidence of categorical perception of facial expressions. We report four experiments that replicated and extended Etcoff and Magee's findings with photographic-quality stimuli. Experiments 1 and 2 measured identification of the individual stimuli falling along particular expression continua (e.g. from happiness to sadness) and discrimination of these stimuli with an ABX task in which stimuli A, B, and X were presented sequentially; subjects had to decide whether X was the same as A or B. Our identification data showed that each expression continuum was perceived as two distinct sections separated by a category boundary. From these identification data we were able to predict subjects' performance in the ABX discrimination task and to demonstrate better discrimination of cross-boundary than within-category pairs; that is, two faces identified as different expressions (e.g. happy and sad) were easier to discriminate than two faces of equal physical difference identified as the same expression (e.g. both happy). Experiments 3 and 4 addressed two new issues arising from Etcoff and Magee's (1992) data and the results of our own Experiments 1 and 2: (1) that the findings might reflect artefacts inherent in the use of single continua ranging between two prototypes, for example a range effect or an anchor effect; and (2) that, given that the ABX procedure incorporates a short-term memory load, discrimination data obtained with this task might reflect a short-term memory rather than a perceptual phenomenon. We found no support for either of these reinterpretations and found further evidence of categorical perception.
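Predicting ABX discrimination from identification data, as this abstract describes, can be illustrated with the classic two-category formula P = 0.5 + (pA − pB)²/2, where pA and pB are the probabilities of labeling stimuli A and B with one category. The abstract does not state which prediction model the authors used, so the formula and the morph-continuum probabilities below are assumptions for illustration only:

```python
# Sketch: predicting ABX discrimination accuracy from identification data.
# The formula assumes subjects discriminate only via category labels, which
# is the idealized categorical-perception prediction.

def predicted_abx(p_a: float, p_b: float) -> float:
    """Predicted proportion correct for an ABX pair with labeling
    probabilities p_a and p_b (probability of, say, the 'happy' label)."""
    return 0.5 + (p_a - p_b) ** 2 / 2

# Hypothetical identification probabilities along a happy-to-sad continuum:
ident = [0.98, 0.95, 0.90, 0.55, 0.10, 0.05, 0.02]

# Compare adjacent pairs (equal physical spacing along the continuum):
for i in range(len(ident) - 1):
    print(f"pair {i}-{i + 1}: predicted {predicted_abx(ident[i], ident[i + 1]):.3f}")
```

Running this, the cross-boundary pairs (those straddling the jump from 0.90 to 0.55 to 0.10) yield the highest predicted discrimination, mirroring the better cross-boundary than within-category discrimination the abstract reports.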
12.
Two studies are presented that evaluate emotion recognition accuracy and interpretative processing of facial expressions in relation to depressive symptoms in women. Dysphoric women more often identified themselves as the cause of negative expressions, more often made negative interpretations of others' thoughts, and had more negative thoughts about themselves when viewing facial expressions. However, dysphoric women were not less accurate or less rapid in recognizing facial expressions. An integrative model, the "levels of self-processing view," is discussed as a synthesis of the results across studies. Limitations of the current studies and future research directions are discussed.
13.
The authors examined the association between psychopathy and identification of facial expressions of emotion. Previous research in this area is scant and has produced contradictory findings (Blair et al., 2001, 2004; Glass & Newman, 2006; Kosson et al., 2002). One hundred and forty-five male jail inmates, rated using the Hare Psychopathy Checklist: Screening Version, participated in a facial affect recognition task. Participants were shown faces containing one of five emotions (happiness, sadness, fear, anger, or shame) displayed at one of two levels of intensity of expression (100% or 60%). The authors predicted that psychopathy would be associated with decreased affect recognition, particularly for sad and fearful emotional expressions, and decreased recognition of less intense displays of facial affect. Results were largely consistent with expectations in that psychopathy was negatively correlated with overall facial recognition of affect, with recognition of sad facial affect, and with recognition of less intense displays of affect. An unexpected negative correlation with recognition of happy facial affect was also found. These results suggest that psychopathy may be associated with a general deficit in affect recognition.
14.
Recent cross-cultural research on facial expressions has produced growing evidence of both cross-cultural consistency and cross-cultural differences. The expression and recognition of spontaneous facial expressions, the in-group advantage effect, and the upper-lower asymmetry of facial expression information have become focal topics in this field. Dialect theory, the Chinese folk model, and the EMPATH model offer theoretical explanations of the cross-cultural findings from three different perspectives, while display rules, decoding rules, and language effects are important factors influencing the cross-cultural expression and recognition of facial expressions. Future cross-cultural research on the expression and recognition of facial expressions should pay closer attention to both facial expression feature information and these influencing factors.
15.
This article reviews recent research on impaired processing of emotional facial expressions in schizophrenia, discussing the nature of this impairment and the interpretations proposed for it, such as whether it is a general or a specific deficit and how it relates to clinical symptoms and cognitive characteristics. Comparative analysis suggests that impaired perception of emotional facial expressions in schizophrenia may combine a deficit in facial information processing with difficulty perceiving emotional information. In addition, the article introduces research conducted abroad on rehabilitation training for facial expression recognition in schizophrenia, as well as recent studies of the underlying neurophysiological mechanisms using cognitive neuroscience techniques such as event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI).
16.
Dacher Keltner 《Cognition & emotion》2013,27(2):155-172
Following proposals regarding the criteria for differentiating emotions, the current investigation examined whether the antecedents and facial expressions of embarrassment, shame, and guilt are distinct. In Study 1, participants wrote down events that had caused them to feel embarrassment, shame, and guilt. Coding of these events revealed that embarrassment was associated with transgressions of conventions that govern public interactions, shame with the failure to meet important personal standards, and guilt with actions that harm others or violate duties. Study 2 determined whether these three emotions are distinct in another domain of emotion, namely facial expression. Observers were presented with slides of 14 different facial expressions, including those of embarrassment, shame, and candidates for guilt (self-contempt, sympathy, and pain). Observers accurately identified the expressions of embarrassment and shame, but did not reliably label any expression as guilt.
17.
The Face of Time: Temporal Cues in Facial Expressions of Emotion  Total citations: 2 (self-citations: 0, by others: 2)
Kari Edwards 《Psychological science》1998,9(4):270-276
Results of studies reported here indicate that humans are attuned to temporal cues in facial expressions of emotion. The experimental task required subjects to reproduce the actual progression of a target person's spontaneous expression (i.e., onset to offset) from a scrambled set of photographs. Each photograph depicted a segment of the expression that corresponded to approximately 67 ms in real time. Results of two experiments indicated that (a) individuals could detect extremely subtle dynamic cues in a facial expression and could utilize these cues to reproduce the proper temporal progression of the display at above-chance levels of accuracy; (b) women performed significantly better than men on the task designed to assess this ability; (c) individuals were most sensitive to the temporal characteristics of the early stages of an expression; and (d) accuracy was inversely related to the amount of time allotted for the task. The latter finding may reflect the relative involvement of (error-prone) cognitively mediated or strategic processes in what is normally a relatively automatic, nonconscious process.
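Scoring how well a subject's reconstructed photo order matches the true onset-to-offset progression can be sketched with a rank-order measure. The abstract does not specify the scoring rule the authors used; Kendall's tau over photo positions, computed by hand below with illustrative photo labels, is one reasonable choice:

```python
# Sketch: scoring a reproduced temporal order against the true sequence.

def kendall_tau(true_order, reproduced):
    """Concordant-minus-discordant pairs, normalized to [-1, 1].
    1.0 means a perfect reproduction; 0 is the chance-level expectation."""
    pos = {photo: i for i, photo in enumerate(reproduced)}
    n, score = len(true_order), 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = true_order[i], true_order[j]
            score += 1 if pos[a] < pos[b] else -1
    return score / (n * (n - 1) / 2)

true_seq = ["p1", "p2", "p3", "p4", "p5", "p6"]   # ~67 ms per segment
subject  = ["p1", "p3", "p2", "p4", "p5", "p6"]   # one adjacent swap
print(round(kendall_tau(true_seq, subject), 4))   # 0.8667, well above chance
```

Averaging such scores across subjects would give the kind of above-chance accuracy measure the abstract reports, though the authors' actual metric may differ.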
18.
19.
The aging of emotion processing has become a new focus of aging research, but its underlying mechanisms still lack a unified explanation. Taking automatic and controlled processing as its starting point, this project combines behavioral experiments with ERP and fMRI techniques to study the aging of facial expression processing, aiming to further reveal the mechanisms underlying the aging of emotion processing from the perspective of emotion-cognition interaction. Specific topics include age differences in the automatic and controlled processing of facial expressions and their neural mechanisms, the interaction between automatic expression processing and cognitive control, and the role of individual differences in the aging of emotion processing. This project will deepen understanding of the aging of emotion processing and provide empirical evidence for the validation, revision, and refinement of relevant theoretical models.
20.
The present study examines the attentional bias hypothesis for individuals with generalised social phobia (GSPs). Socially phobic individuals were hypothesised to exhibit an attentional bias towards threat stimuli relevant to interpersonal situations. This hypothesis was tested using the face-in-the-crowd paradigm. GSPs and nonanxious controls (NACs) detected an angry, happy, neutral, or disgust target face in a crowd of 12 distracter photographs. Results indicated that, compared to NACs, GSPs exhibited greater attentional biases for angry than for happy faces in a neutral crowd. GSPs' performance was slowed more by happy and angry distracters than by neutral ones; NACs exhibited no such sensitivity to distracter type. Finally, GSPs were faster at detecting anger than disgust expressions; NACs detected both types of faces equally quickly. Implications of these findings for the maintenance of social phobia are discussed.