Similar Documents
1.
Two studies provided direct support for a recently proposed dialect theory of communicating emotion, positing that expressive displays show cultural variations similar to linguistic dialects, thereby decreasing accurate recognition by out-group members. In Study 1, 60 participants from Quebec and Gabon posed facial expressions. Dialects, in the form of activating different muscles for the same expressions, emerged most clearly for serenity, shame, and contempt and also for anger, sadness, surprise, and happiness, but not for fear, disgust, or embarrassment. In Study 2, Quebecois and Gabonese participants judged these stimuli and stimuli standardized to erase cultural dialects. As predicted, an in-group advantage emerged for nonstandardized expressions only and most strongly for expressions with greater regional dialects, according to Study 1.

2.
Three studies examined the effects of experimentally manipulated surprise expressions on the experience of surprise. Surprise was induced by a sudden, unannounced change of the stimulus presentation during a computerized task. Facial expression was manipulated by leading participants to adopt an expression akin to surprise, or by forcing them to look up steeply to a monitor. The expression manipulations had no intensifying effect on the experience of surprise, whereas manipulations of unexpectedness and mental load had strong effects. In addition, mental load was found to affect beliefs about facial expression, suggesting that the participants used their feelings of surprise to infer their probable facial displays. Path analyses supported this reverse self-inference hypothesis.

3.
The classic JACBART micro-expression recognition test examines micro-expression recognition only against a neutral-expression background, which limits its ecological validity. This study developed an ecological micro-expression recognition test that measures recognition of six basic micro-expressions against backgrounds formed by all seven basic expressions. The results showed: (1) The test has good test-retest reliability, criterion validity, and ecological validity, and measures ecological micro-expression recognition stably and effectively. (2) The reliability and validity checks revealed the characteristics of ecological micro-expression recognition. Recognition of some ecological micro-expressions showed training effects. Ecological micro-expressions correlated broadly with classic micro-expressions and with ordinary expressions. The main effect of background was significant for fear, sadness, disgust, and anger micro-expressions, but not for surprise and happiness; pairwise comparisons showed that surprise and happiness micro-expressions did not differ significantly across backgrounds, yet differed significantly and broadly from ordinary expressions. Defining the fluctuation of ecological micro-expression recognition as the standard deviation of recognition accuracy across backgrounds, the study found that ecological micro-expression recognition shows stable fluctuation.
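The fluctuation index described above (the standard deviation of recognition accuracy across expression backgrounds) can be sketched in a few lines. The accuracy values below are invented for illustration; the abstract does not report the study's actual figures:

```python
import statistics

# Hypothetical proportion-correct scores for recognizing one micro-expression
# against each of the seven basic-expression backgrounds (illustrative only).
accuracy_by_background = {
    "neutral": 0.72, "happiness": 0.65, "surprise": 0.61,
    "fear": 0.58, "sadness": 0.60, "disgust": 0.55, "anger": 0.57,
}

def recognition_fluctuation(accuracies):
    """Fluctuation index: sample standard deviation of recognition
    accuracy across the expression backgrounds."""
    return statistics.stdev(accuracies)

fluctuation = recognition_fluctuation(accuracy_by_background.values())
```

A larger index means recognition of that micro-expression varies more with the surrounding expression context; a stable, nonzero value across test sessions is what the abstract calls "stable fluctuation."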

4.
Faces with expressions (happy, surprise, anger, fear) were presented at study. Memory for facial expressions was tested by presenting the same faces with neutral expressions and asking participants to determine the expression that had been displayed at study. In three experiments, happy expressions were remembered better than other expressions. The advantage of a happy face was observed even when faces were inverted (upside down) and even when the salient perceptual feature (broad grin) was controlled across conditions. These findings are couched in terms of source monitoring, in which memory for facial expressions reflects encoding of the dispositional context of a prior event.

5.
The six basic emotions (disgust, anger, fear, happiness, sadness, and surprise) have long been considered discrete categories that serve as the primary units of the emotion system. Yet recent evidence indicated underlying connections among them. Here we tested the underlying relationships among the six basic emotions using a perceptual learning procedure. This technique has the potential of causally changing participants' emotion detection ability. We found that training on detecting a facial expression improved the performance not only on the trained expression but also on other expressions. Such a transfer effect was consistently demonstrated between disgust and anger detection as well as between fear and surprise detection in two experiments (Experiment 1A, n = 70; Experiment 1B, n = 42). Notably, training on any of the six emotions could improve happiness detection, while sadness detection could only be improved by training on sadness itself, suggesting the uniqueness of happiness and sadness. In an emotion recognition test using a large sample of Chinese participants (n = 1748), the confusion between disgust and anger as well as between fear and surprise was further confirmed. Taken together, our study demonstrates that the "basic" emotions share some common psychological components, which might be the more basic units of the emotion system.

6.
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

7.
The authors investigated children's ability to recognize emotions from the information available in the lower, middle, or upper face. School-age children were shown partial or complete facial expressions and asked to say whether they corresponded to a given emotion (anger, fear, surprise, or disgust). The results indicate that 5-year-olds were able to recognize fear, anger, and surprise from partial facial expressions. Fear was better recognized from information in the upper face than from information in the lower face. A similar pattern of results was found for anger, but only in girls. Recognition improved between 5 and 10 years old for surprise and anger, but not for fear and disgust.

8.
The Emotion Recognition Task is a computer-generated paradigm for measuring the recognition of six basic facial emotional expressions: anger, disgust, fear, happiness, sadness, and surprise. Video clips of increasing length were presented, starting with a neutral face that changes into a facial expression of different intensities (20%-100%). The present study describes methodological aspects of the paradigm and its applicability in healthy participants (N=58; 34 men; ages between 22 and 75), specifically focusing on differences in recognition performance between the six emotion types and age-related change. The results showed that happiness was the easiest emotion to recognize, while fear was the most difficult. Moreover, older adults performed worse than young adults on anger, sadness, fear, and happiness, but not on disgust and surprise. These findings indicate that this paradigm is probably more sensitive than emotion perception tasks using static images, suggesting it is a useful tool in the assessment of subtle impairments in emotion perception.

9.
Dynamic properties influence the perception of facial expressions
Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression in different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, was better identified from faster sequences, while anger was most accurately detected from sequences of medium pace. In experiment 2 we used an intensity-rating task and static images as well as dynamic ones to examine whether effects were due to total time of the displays or to the speed of sequence. Accuracies of expression judgments were derived from the rated intensities and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.

10.
Equal numbers of male and female participants judged which of seven facial expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) were displayed by a set of 336 faces, and we measured both accuracy and response times. In addition, the participants rated how well the expression was displayed (i.e., the intensity of the expression). These three measures are reported for each face. Sex of the rater did not interact with any of the three measures. However, analyses revealed that some expressions were recognized more accurately in female than in male faces. The full set of these norms may be downloaded from www.psychonomic.org/archive/.

11.
Hosie, J. A., Gray, C. D., Russell, P. A., Scott, C., & Hunter, N. (1998). Motivation and Emotion, 22(4), 293-313.
This paper reports the results of three tasks comparing the development of the understanding of facial expressions of emotion in deaf and hearing children. Two groups of hearing and deaf children of elementary school age were tested for their ability to match photographs of facial expressions of emotion, and to produce and comprehend emotion labels for the expressions of happiness, sadness, anger, fear, disgust, and surprise. Accuracy data showed comparable levels of performance for deaf and hearing children of the same age. Happiness and sadness were the most accurately matched expressions and the most accurately produced and comprehended labels. Anger was the least accurately matched expression and the most poorly comprehended emotion label. Disgust was the least accurately labeled expression; however, deaf children were more accurate at labeling this expression, and also at labeling fear, than hearing children. Error data revealed that children confused anger with disgust, and fear with surprise. However, the younger groups of deaf and hearing children also showed a tendency to confuse the negative expressions of anger, disgust, and fear with sadness. The results suggest that, despite possible differences in the early socialisation of emotion, deaf and hearing children share a common understanding of the emotions conveyed by distinctive facial expressions.

12.
The goal of the present study was to test the Perceptual-Attentional Limitation Hypothesis in children and adults by manipulating the distinctiveness between expressions and recording eye movements. Children 3–5 and 9–11 years old as well as adults were presented pairs of expressions and required to identify a target emotion. Children 3–5 years old were less accurate than those 9–11 years old and adults. All children viewed pictures longer than adults but did not spend more time attending to the relevant cues. For all participants, accuracy for the recognition of fear was lower than for surprise when the distinctive cue was in the brow only. They also took longer and spent more time in both the mouth and brow zones than when a cue was in the mouth or in both areas. Adults and children 9–11 years old made more comparisons between the expressions when fear comprised a single distinctive cue in the brow than when the distinctive cue was in the mouth only or when both cues were present. Children 3–5 years old made more comparisons in the brow-only condition than when both cues were present. The results of the present study extend the Perceptual-Attentional Limitation Hypothesis by showing the importance of both decoder and stimulus characteristics, and an interaction between the two.

13.
Previous research suggests that neural and behavioral responses to surprised faces are modulated by explicit contexts (e.g., "He just found $500"). Here, we examined the effect of implicit contexts (i.e., valence of other frequently presented faces) on both valence ratings and ability to detect surprised faces (i.e., the infrequent target). In Experiment 1, we demonstrate that participants interpret surprised faces more positively when they are presented within a context of happy faces, as compared to a context of angry faces. In Experiments 2 and 3, we used the oddball paradigm to evaluate the effects of clearly valenced facial expressions (i.e., happy and angry) on default valence interpretations of surprised faces. We offer evidence that the default interpretation of surprise is negative, as participants were faster to detect surprised faces when presented within a happy context (Exp. 2). Finally, we kept the valence of the contexts constant (i.e., surprised faces) and showed that participants were faster to detect happy than angry faces (Exp. 3). Together, these experiments demonstrate the utility of the oddball paradigm to explore the default valence interpretation of presented facial expressions, particularly the ambiguously valenced facial expression of surprise.

14.
The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects or have limited or no norms. A computerized test, the Emotion Recognition Task (ERT), was developed to overcome these difficulties. In this study, we examined the effects of age, sex, and intellectual ability on emotion perception using the ERT. In this test, emotional facial expressions are presented as morphs gradually expressing one of the six basic emotions from neutral to four levels of intensity (40%, 60%, 80%, and 100%). The task was administered in 373 healthy participants aged 8–75. In children aged 8–17, only small developmental effects were found for the emotions anger and happiness, in contrast to adults who showed age‐related decline on anger, fear, happiness, and sadness. Sex differences were present predominantly in the adult participants. IQ only minimally affected the perception of disgust in the children, while years of education were correlated with all emotions but surprise and disgust in the adult participants. A regression‐based approach was adopted to present age‐ and education‐ or IQ‐adjusted normative data for use in clinical practice. Previous studies using the ERT have demonstrated selective impairments on specific emotions in a variety of psychiatric, neurologic, or neurodegenerative patient groups, making the ERT a valuable addition to existing paradigms for the assessment of emotion perception.
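A regression-based normative approach of the kind described above typically predicts an expected score from demographic variables and expresses an individual's observed score as a standardized residual. The coefficients below are invented for illustration; the ERT's published norms use different values:

```python
def adjusted_z(observed, age, years_education,
               intercept=40.0, b_age=-0.08, b_edu=0.5, resid_sd=4.0):
    """Demographically adjusted z-score: observed score minus the score
    predicted by a linear regression on age and education, divided by the
    residual standard deviation. All coefficients here are hypothetical."""
    predicted = intercept + b_age * age + b_edu * years_education
    return (observed - predicted) / resid_sd
```

For example, a 50-year-old with 12 years of education would have a predicted score of 40.0 - 4.0 + 6.0 = 42.0 under these made-up coefficients, so an observed score of 41.0 yields a z of -0.25, i.e., slightly below the demographic expectation.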

15.
The relationship between knowledge of American Sign Language (ASL) and the ability to encode facial expressions of emotion was explored. Participants were 55 college students, half of whom were intermediate-level students of ASL and half of whom had no experience with a signed language. In front of a video camera, participants posed the affective facial expressions of happiness, sadness, fear, surprise, anger, and disgust. These facial expressions were randomized onto stimulus tapes that were then shown to 60 untrained judges who tried to identify the expressed emotions. Results indicated that hearing subjects knowledgeable in ASL were generally more adept than were hearing nonsigners at conveying emotions through facial expression. Results have implications for better understanding the nature of nonverbal communication in hearing and deaf individuals.

16.
Two studies examined the general prediction that one's emotional expression should facilitate memory for material that matches the expression. The authors focused on specific facial expressions of surprise. In the first study, participants who were mimicking a surprised expression showed better recall for the surprising words and worse recall for neutral words, relative to those who were mimicking a neutral expression. Study 2 replicated the results of Study 1, showing that participants who mimicked a surprised expression recalled more words spoken in a surprising manner compared with those that sounded neutral or sad. Conversely, participants who mimicked sad facial expressions showed greater recall for sad than neutral or surprising words. The results provide evidence of the importance of matching the emotional valence of the recall content to the facial expression of the recaller during the memorization period.

17.
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for basic emotions including happiness, anger, fear, sadness, surprise, and disgust. 30 pictures (5 for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information congruent and not congruent with a facial expression was displayed before presenting pictures of facial expressions. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired such recognition.

18.
Facial race and sex cues can influence the magnitude of the happy categorisation advantage. It has been proposed that implicit race- or sex-based evaluations drive this influence. Within this account, a uniform influence of social category cues on the happy categorisation advantage should be observed for all negative expressions. Support has been shown with angry and sad expressions, but evidence to the contrary has been found for fearful expressions. To determine the generality of the evaluative congruence account, participants categorised happiness with either sadness, fear, or surprise displayed on White male as well as White female, Black male, or Black female faces across three experiments. Faster categorisation of happy than negative expressions was observed for female faces when presented among White male faces, and for White male faces when presented among Black male faces. These results support the evaluative congruence account when both positive and negative expressions are presented.

19.
The authors tested the hypothesis that East Asians, because of their holistic reasoning, take contradiction and inconsistency for granted and consequently are less likely than Americans to experience surprise. Studies 1 and 2 showed that Korean participants displayed less surprise and greater hindsight bias than American participants did when a target's behavior contradicted their expectations. Studies 3 and 4 further demonstrated that even when contradiction was created in highly explicit ways, Korean participants experienced little surprise, whereas American participants reported substantial surprise. We discuss the implications of these findings for various issues, including the psychology of conviction, cognitive dissonance, and the development of science.

20.
Participants (N = 216) were administered a differential implicit learning task during which they were trained and tested on 3 maximally distinct 2nd-order visuomotor sequences, with sequence color serving as discriminative stimulus. During training, 1 sequence each was followed by an emotional face, a neutral face, and no face, using backward masking. Emotion (joy, surprise, anger), face gender, and exposure duration (12 ms, 209 ms) were varied between participants; implicit motives were assessed with a picture-story exercise. For power-motivated individuals, low-dominance facial expressions enhanced and high-dominance expressions impaired learning. For affiliation-motivated individuals, learning was impaired in the context of hostile faces. These findings did not depend on explicit learning of fixed sequences or on awareness of sequence-face contingencies.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号