Similar Articles
20 similar articles found (search time: 31 ms)
1.
By the age of 4 years, children (N=120) know the meaning of the word disgust as well as they know the meaning of anger and fear; for example, when asked, they are equally able to generate a plausible cause for each of these emotions. Yet, in tasks involving facial expressions (free labelling of faces, deciding whether or not a face expresses disgust, or finding a “disgust face” in an array of faces), a majority of 3- to 7-year-old children (N=144) associated the prototypical “disgust face” with anger and denied its association with disgust (25% of adults on the same tasks did so as well). These results challenge the assumption that all humans easily recognise disgust from its facial expression and that this recognition is a precursor to children's understanding of the emotion of disgust.

2.
This investigation examined whether impairment in configural processing could explain deficits in facial emotion recognition in people with Parkinson’s disease (PD). Stimuli from the Radboud Faces Database were used to compare recognition of four negative emotion expressions by older adults with PD (n = 16) and matched controls (n = 17). Participants were tasked with categorizing emotional expressions from upright and inverted whole faces and facial composites; because it is difficult to derive configural information from these two types of stimuli, featural processing should play a larger than usual role in accurate recognition of emotional expressions. We found that the PD group was impaired relative to controls in recognizing angry, disgusted, and fearful expressions in upright faces. Further, consistent with a configural processing deficit, participants with PD showed no composite effect when attempting to identify facial expressions of anger, disgust, and fear. A face inversion effect, however, was observed in the performance of all participants in both the whole-faces and facial-composites tasks. These findings can be explained in terms of a configural processing deficit if it is assumed that the disruption caused by facial composites was specific to configural processing, whereas inversion reduced performance by making it difficult to derive both featural and configural information from faces.

3.
North American (Canadian) and Indian observers were shown photographs of six facial emotions: happiness, sadness, fear, anger, surprise, and disgust, expressed by American Caucasian and Indian subjects. Observers were asked to judge each photograph, on a 7-point scale, for the degree of (a) distinctiveness (freedom from blending with other emotion categories), (b) pleasantness–unpleasantness, and (c) arousal–nonarousal of the expressed facial emotion. The results showed a significant Observer × Expressor × Emotion interaction for the distinctiveness judgement. Fearful and angry expressions in Indian faces, in comparison to Caucasian faces, were judged as less distinctly identifiable by observers of both cultural origins. Indian observers rated these two emotion expressions as more distinctive than did North Americans, irrespective of the culture of the expressor. In addition, Indian observers judged fearful and angry expressions as more unpleasant than did North Americans. Caucasians, in comparison to Indians, were judged to show more arousal in most of the emotion expressions.

4.
The rapid detection of facial expressions of anger or threat has obvious adaptive value. In this study, we examined the efficiency of facial processing by means of a visual search task. Participants searched displays of schematic faces and were required to determine whether the faces displayed were all the same or whether one was different. Four main results were found: (1) When all the faces in a display were the same, people were slower in detecting the absence of a discrepant face when the faces displayed angry (or sad/angry) rather than happy expressions. (2) When displays contained a discrepant face, people were faster in detecting it when that face displayed an angry rather than a happy expression. (3) Neither of these patterns for same and different displays was apparent when the face displays were inverted, or when just the mouth was presented in isolation. (4) The search slopes for angry targets were significantly lower than those for happy targets. These results suggest that detection of angry facial expressions is fast and efficient, although it does not “pop out” in the traditional sense.
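The “search slopes” in result (4) are the slopes of the regression of reaction time on display size, in ms per item; shallow slopes indicate efficient search, and near-zero slopes indicate pop-out. A minimal sketch of how such a slope is computed, using invented reaction times rather than the study's data:

```python
import numpy as np

# Hypothetical mean reaction times (ms) by display size; the numbers are
# invented for illustration and are not taken from the study.
display_sizes = np.array([4, 8, 12])
rt_angry = np.array([620, 660, 700])   # shallow increase with set size
rt_happy = np.array([640, 760, 880])   # steep increase with set size

# Search slope = extra search time (ms) per additional item in the display,
# i.e. the linear coefficient of RT regressed on display size.
slope_angry = np.polyfit(display_sizes, rt_angry, 1)[0]
slope_happy = np.polyfit(display_sizes, rt_happy, 1)[0]

print(f"angry target slope: {slope_angry:.1f} ms/item")   # 10.0 ms/item
print(f"happy target slope: {slope_happy:.1f} ms/item")   # 30.0 ms/item
```

A lower slope for angry targets, as in this toy example, is the pattern the abstract describes as fast and efficient search that nevertheless falls short of true pop-out (which would require a slope near zero).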

5.
Equal numbers of male and female participants judged which of seven facial expressions (anger, disgust, fear, happiness, neutrality, sadness, and surprise) were displayed by a set of 336 faces, and we measured both accuracy and response times. In addition, the participants rated how well the expression was displayed (i.e., the intensity of the expression). These three measures are reported for each face. Sex of the rater did not interact with any of the three measures. However, analyses revealed that some expressions were recognized more accurately in female than in male faces. The full set of these norms may be downloaded from www.psychonomic.org/archive/.

6.
The six basic emotions (disgust, anger, fear, happiness, sadness, and surprise) have long been considered discrete categories that serve as the primary units of the emotion system. Yet recent evidence has indicated underlying connections among them. Here we tested the underlying relationships among the six basic emotions using a perceptual learning procedure, a technique with the potential to causally change participants’ emotion detection ability. We found that training on detecting a facial expression improved performance not only on the trained expression but also on other expressions. Such a transfer effect was consistently demonstrated between disgust and anger detection as well as between fear and surprise detection in two experiments (Experiment 1A, n = 70; Experiment 1B, n = 42). Notably, training on any of the six emotions could improve happiness detection, whereas sadness detection could only be improved by training on sadness itself, suggesting the uniqueness of happiness and sadness. In an emotion recognition test using a large sample of Chinese participants (n = 1748), the confusion between disgust and anger as well as between fear and surprise was further confirmed. Taken together, our study demonstrates that the “basic” emotions share some common psychological components, which might be the more basic units of the emotion system.

7.
Three experiments examined 3- and 5-year-olds’ recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression remained neutral (Experiment 1) or varied between immediate and delayed tests: from neutral to smile and anger (Experiment 2), from smile to neutral and anger (Experiment 3, condition 1), or from anger to neutral and smile (Experiment 3, condition 2). In all experiments, immediate face recognition was not influenced by emotional expression for either age group. Delayed face recognition was most accurate for faces in identical emotional expression. For 5-year-olds, delayed face recognition (with varied emotional expression) was not influenced by which emotional expression had been displayed during the immediate recognition test. Among 3-year-olds, accuracy decreased when facial expressions varied from neutral to smile and anger but was constant when facial expressions varied from anger or smile to neutral, smile or anger. Three-year-olds’ recognition was facilitated when faces initially displayed smile or anger expressions, but this was not the case for 5-year-olds. Results thus indicate a developmental progression in face identity recognition with varied emotional expressions between ages 3 and 5.

8.
Objective: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results: OA were less accurate than YA at identifying fear (p < .05, r = .44) and more accurate at identifying disgust (p < .05, r = .39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p values < .05, r values ≥ .38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition.

9.
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants’ perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly for anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing that holistic perception of the aligned composite images interfered with recognizing the parts of the expressions. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

10.
Facial stimuli are widely used in behavioural and brain science research to investigate emotional facial processing. However, some studies have demonstrated that dynamic expressions elicit stronger emotional responses than static images. To address the need for more ecologically valid and powerful facial emotional stimuli, we created Dynamic FACES, a database of morphed videos (n = 1026) from younger, middle-aged, and older adults displaying naturalistic emotional facial expressions (neutrality, sadness, disgust, fear, anger, happiness). To assess adult age differences in emotion identification of dynamic stimuli and to provide normative ratings for this modified set of stimuli, healthy adults (n = 1822, age range 18–86 years) categorised the emotional expression displayed in each video, rated the expression's distinctiveness, estimated the age of the face model, and rated the naturalness of the expression. We found few age differences in emotion identification when using dynamic stimuli. Only for angry faces did older adults show lower identification accuracy than younger adults. Further, older adults outperformed middle-aged adults in identifying sadness. The use of dynamic facial emotional stimuli has previously been limited, but Dynamic FACES provides a large database of high-resolution, naturalistic, dynamic expressions across adulthood. Information on using Dynamic FACES for research purposes can be found at http://faces.mpib-berlin.mpg.de.

11.
Faces are widely used as stimuli in various research fields. Interest in emotion-related differences and age-associated changes in the processing of faces is growing. With the aim of systematically varying both expression and age of the face, we created FACES, a database comprising N=171 naturalistic faces of young, middle-aged, and older women and men. Each face is represented with two sets of six facial expressions (neutrality, sadness, disgust, fear, anger, and happiness), resulting in 2,052 individual images. A total of N=154 young, middle-aged, and older women and men rated the faces in terms of facial expression and perceived age. With its large age range of faces displaying different expressions, FACES is well suited for investigating developmental and other research questions on emotion, motivation, and cognition, as well as their interactions. Information on using FACES for research purposes can be found at http://faces.mpib-berlin.mpg.de.

12.
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.

15.
Participants judged which of seven facial expressions (neutrality, happiness, anger, sadness, surprise, fear, and disgust) were displayed by a set of 280 faces corresponding to 20 female and 20 male models of the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Ohman, 1998). Each face was presented under free-viewing conditions (to 63 participants) and also for 25, 50, 100, 250, and 500 msec (to 160 participants), to examine identification thresholds. Measures of identification accuracy, types of errors, and reaction times were obtained for each expression. In general, happy faces were identified more accurately, earlier, and faster than other faces, whereas judgments of fearful faces were the least accurate, the latest, and the slowest. Norms for each face and expression regarding level of identification accuracy, errors, and reaction times may be downloaded from www.psychonomic.org/archive/.

16.
Other people’s emotional reactions to a third person’s behaviour are potentially informative about what is appropriate within a given situation. We investigated whether and how observers’ inferences of such injunctive norms are shaped by expressions of anger and disgust. Building on the moral emotions literature, we hypothesised that angry and disgusted expressions produce relative differences in the strength of autonomy-based versus purity-based norm inferences. We report three studies (plus three supplementary studies) using different types of stimuli (vignette-based, video clips) to investigate how emotional reactions shape norms about potential norm violations (eating snacks, drinking alcohol), and contexts (groups of friends, a university, a company). Consistent with our theoretical argument, the results indicate that observers use others’ emotional reactions not only to infer whether a particular behaviour is inappropriate, but also why it is inappropriate: because it primarily violates autonomy standards (as suggested relatively more strongly by expressions of anger) or purity standards (as suggested relatively more strongly by expressions of disgust). We conclude that the social functionality of emotions in groups extends to shaping norms based on moral standards.

17.
People recognize faces of their own race more accurately than faces of other races. The “contact” hypothesis suggests that this “other‐race effect” occurs as a result of the greater experience we have with own‐ versus other‐race faces. The computational mechanisms that may underlie different versions of the contact hypothesis were explored in this study. We replicated the other‐race effect with human participants and evaluated four classes of computational face recognition algorithms for the presence of an other‐race effect. Consistent with the predictions of a developmental contact hypothesis, “experience‐based models” demonstrated an other‐race effect only when the representational system was developed through experience that warped the perceptual space in a way that was sensitive to the overall structure of the model's experience with faces of different races. When the model's representation relied on a feature set optimized to encode the information in the learned faces, experience‐based algorithms recognized minority‐race faces more accurately than majority‐race faces. The results suggest a developmental learning process that warps the perceptual space to enhance the encoding of distinctions relevant for own‐race faces. This feature space limits the quality of face representations for other‐race faces.
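The idea that experience warps the representational space can be illustrated with a toy PCA-style sketch: a low-dimensional subspace fitted to a majority-heavy training sample encodes majority-group variation more faithfully than minority-group variation. Everything below (vector dimensions, group structure, sample sizes) is invented for illustration and is not one of the algorithms evaluated in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": 50-dimensional vectors. The two groups vary along different,
# randomly chosen 5-dimensional subspaces (hypothetical stand-ins for
# group-specific facial features).
dim, k = 50, 5
basis_a = np.linalg.qr(rng.normal(size=(dim, k)))[0]
basis_b = np.linalg.qr(rng.normal(size=(dim, k)))[0]
group_a = rng.normal(size=(900, k)) @ basis_a.T   # majority group: 900 faces
group_b = rng.normal(size=(100, k)) @ basis_b.T   # minority group: 100 faces

# "Experience": fit a k-dimensional PCA-like subspace (top singular vectors)
# to a training set dominated by the majority group.
train = np.vstack([group_a, group_b])
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:k]                               # top-k learned axes

def recon_error(x):
    """Mean squared error when faces are represented in the learned subspace."""
    centred = x - mean
    return np.mean((centred - centred @ components.T @ components) ** 2)

# The learned axes mostly capture majority-group variance, so minority-group
# faces are represented (reconstructed) less faithfully.
print(recon_error(group_a) < recon_error(group_b))  # True
```

The learned subspace plays the role of the abstract's warped perceptual space: it is optimal for the statistics of the faces it was trained on, and that optimality is exactly what limits the quality of representations for faces from the under-represented group.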

18.
This study investigated how the cultural match or mismatch between observer and expresser can affect the accuracy of judgements of facial emotion, and how acculturation can affect cross-cultural recognition accuracy. The sample consisted of 51 Caucasian-Australians, 51 people of Chinese heritage living in Australia (PCHA), and 51 Mainland Chinese. Participants were required to identify the emotions of happiness, sadness, fear, anger, surprise, and disgust displayed in photographs of Caucasian and Chinese faces. The PCHA group also completed an acculturation measure that assessed their adoption of Australian cultural values and adherence to heritage (Chinese) cultural values. Counter to the hypotheses, the Caucasian-Australian and PCHA groups were significantly more accurate at identifying both the Chinese and Caucasian facial expressions than the Mainland Chinese group. For the PCHA group, adoption of Australian culture predicted greater accuracy in recognising the emotions displayed on Caucasian faces.

19.
The authors investigated children's ability to recognize emotions from the information available in the lower, middle, or upper face. School-age children were shown partial or complete facial expressions and asked to say whether they corresponded to a given emotion (anger, fear, surprise, or disgust). The results indicate that 5-year-olds were able to recognize fear, anger, and surprise from partial facial expressions. Fear was better recognized from the information in the upper face than from that in the lower face. A similar pattern of results was found for anger, but only in girls. Recognition improved between 5 and 10 years of age for surprise and anger, but not for fear and disgust.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号