Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Several studies investigated the role of featural and configural information when processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
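
The discriminability measure A′ used in Experiment 1 is a nonparametric index of sensitivity computed from hit and false-alarm rates. Below is a minimal sketch of the standard Grier (1971) formula; treating one expression category as the "signal" is an illustrative assumption, not necessarily the authors' exact scoring.

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Nonparametric discriminability A' (Grier, 1971).

    0.5 means chance performance; 1.0 means perfect discrimination.
    Assumes hit_rate and fa_rate are strictly between 0 and 1.
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Example: 80% hits and 20% false alarms for one expression category
print(a_prime(0.80, 0.20))  # 0.875
```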

2.
The present study investigated whether facial expressions of emotion are recognized holistically (i.e., all at once, as an entire unit), as faces are, or featurally, as other nonface stimuli are. Evidence for holistic processing of faces comes from a reliable decrement in recognition performance when faces are presented inverted rather than upright. If emotion is recognized holistically, then recognition of facial expressions of emotion should be impaired by inversion. To test this, participants were shown schematic drawings of faces showing one of six emotions (surprise, sadness, anger, happiness, disgust, and fear) in either an upright or inverted orientation and were asked to indicate the emotion depicted. Participants were more accurate in the upright than in the inverted orientation, providing evidence in support of holistic recognition of facial emotion. Because recognition of facial expressions of emotion is important in social relationships, this research may have implications for treatment of some social disorders.

3.
This investigation examined whether impairment in configural processing could explain deficits in face emotion recognition in people with Parkinson’s disease (PD). Stimuli from the Radboud Faces Database were used to compare recognition of four negative emotion expressions by older adults with PD (n = 16) and matched controls (n = 17). Participants were tasked with categorizing emotional expressions from upright and inverted whole faces and facial composites; it is difficult to derive configural information from these two types of stimuli, so featural processing should play a larger-than-usual role in accurate recognition of emotional expressions. We found that the PD group were impaired relative to controls in recognizing anger, disgust and fearful expressions in upright faces. Then, consistent with a configural processing deficit, participants with PD showed no composite effect when attempting to identify facial expressions of anger, disgust and fear. A face inversion effect, however, was observed in the performance of all participants in both the whole faces and facial composites tasks. These findings can be explained in terms of a configural processing deficit if it is assumed that the disruption caused by facial composites was specific to configural processing, whereas inversion reduced performance by making it difficult to derive both featural and configural information from faces.
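
The composite and inversion effects reported here are conventionally scored as accuracy differences between conditions. A hypothetical scoring sketch follows; the condition labels and numbers are invented for illustration, not data from the study.

```python
# Hypothetical proportion-correct scores for one participant.
acc = {
    "whole_upright": 0.85,
    "whole_inverted": 0.70,
    "composite_aligned": 0.65,
    "composite_misaligned": 0.78,
}

# Inversion effect: the accuracy cost of turning the face upside-down.
inversion_effect = acc["whole_upright"] - acc["whole_inverted"]

# Composite effect: aligned halves fuse into a novel configuration and impair
# recognition, so misaligned-minus-aligned accuracy indexes configural
# processing; a score near zero (as in the PD group) suggests a deficit.
composite_effect = acc["composite_misaligned"] - acc["composite_aligned"]

print(f"inversion effect: {inversion_effect:+.2f}")   # +0.15
print(f"composite effect: {composite_effect:+.2f}")   # +0.13
```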

4.
Rapid evaluation of ecologically relevant stimuli may lead to their preferential access to awareness. Continuous flash suppression allows assessment of affective processing under conditions in which stimuli have been rendered invisible due to the strongly suppressive nature of dynamic noise relative to static images. The authors investigated whether fearful expressions emerge from suppression into awareness more quickly than images of neutral or happy expressions. Fearful faces were consistently detected faster than neutral or happy faces. Responses to inverted faces were slower than those to upright faces but showed the same effect of emotional expression, suggesting that some key feature or features in the inverted faces remained salient. When using stimuli solely representing the eyes, a similar bias for detecting fear emerged, implicating the importance of information from the eyes in the preconscious processing of fear expressions.

5.
Past literature has indicated that face inversion either attenuates emotion detection advantages in visual search, implying that detection of emotional expressions requires holistic face processing, or has no effect, implying that expression detection is feature based. Across six experiments that utilised different task designs, ranging from simple (single poser, single set size) to complex (multiple posers, multiple set sizes), and stimuli drawn from different databases, significant emotion detection advantages were found for both upright and inverted faces. Consistent with past research, the nature of the expression detection advantage, anger superiority (Experiments 1, 2 and 6) or happiness superiority (Experiments 3, 4 and 5), differed across stimulus sets. However, both patterns were evident for upright and inverted faces. These results indicate that face inversion does not interfere with visual search for emotional expressions, and suggest that expression detection in visual search may rely on feature-based mechanisms.
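
Detection advantages across multiple set sizes are typically summarised as search slopes, i.e. the extra response time per added face, estimated by regressing RT on set size. A sketch with made-up numbers:

```python
import numpy as np

# Hypothetical mean correct RTs (ms) by display set size for two expressions.
set_sizes = np.array([4, 8, 12])
rt_angry = np.array([620.0, 700.0, 775.0])
rt_happy = np.array([650.0, 780.0, 910.0])

# Search slope (ms per item) via least-squares regression of RT on set size;
# a shallower slope is the usual signature of a detection advantage.
slope_angry, _ = np.polyfit(set_sizes, rt_angry, 1)
slope_happy, _ = np.polyfit(set_sizes, rt_happy, 1)
print(f"anger: {slope_angry:.1f} ms/item, happiness: {slope_happy:.1f} ms/item")
```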

6.
Event-related potentials (ERPs), accuracy scores, and reaction times were used to examine the recognition of emotional expressions. Adults and 7-year-old children saw upright and inverted chromatic slides of the facial expressions of happiness, fear, surprise, and anger, and were asked to press a button for either "happy" or "angry" faces. A positive-going waveform (P300) was apparent at parietal scalp (Pz) and at left and right temporal scalp. Although the behavioral data were similar for both children and adults (e.g., both had more difficulty recognizing angry expressions than happy ones, and angry expressions were more difficult to recognize upside-down than were happy faces), the ERPs indicated that children responded differently than adults did to happy and angry expressions. Adults showed greater P300 amplitude to happy faces, while children showed greater P300 amplitude to angry faces. In addition, for adults, but not children, there were greater P300 amplitude responses at right vs. left temporal scalp.
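
P300 amplitude at a site like Pz is usually quantified by averaging the single-trial epochs for a condition and taking the mean voltage in a post-stimulus window. A simplified sketch follows; the 300–500 ms window, sampling rate, and synthetic data are assumptions, and a real ERP pipeline would also filter and reject artifacts.

```python
import numpy as np

fs = 250                            # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)    # epoch from -200 to 800 ms

def p300_amplitude(epochs: np.ndarray) -> float:
    """Mean amplitude in the 300-500 ms window of the averaged ERP.

    epochs: (n_trials, n_samples) single-channel data (e.g. Pz), microvolts.
    """
    erp = epochs.mean(axis=0)                 # average across trials
    baseline = erp[t < 0].mean()              # pre-stimulus baseline
    window = (t >= 0.300) & (t <= 0.500)      # classic P3 window (assumed)
    return (erp - baseline)[window].mean()

# Demo with synthetic data: noise plus a positive deflection around 400 ms
rng = np.random.default_rng(0)
epochs = rng.normal(0, 5, (60, t.size)) + 8 * np.exp(-((t - 0.4) ** 2) / 0.005)
print(f"P300 amplitude: {p300_amplitude(epochs):.1f} uV")
```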

7.
We investigated whether categorical perception and dimensional perception can co-occur while decoding emotional facial expressions. In Experiment 1, facial continua with endpoints consisting of four basic emotions (i.e., happiness–fear and anger–disgust) were created by a morphing technique. Participants rated each facial stimulus using a categorical strategy and a dimensional strategy. The results show that the happiness–fear continuum was divided into two clusters based on valence, even when using the dimensional strategy. Moreover, the faces were arrayed in order of the physical changes within each cluster. In Experiment 2, we found a category boundary within other continua (i.e., surprise–sadness and excitement–disgust) with regard to the arousal and valence dimensions. These findings indicate that categorical perception and dimensional perception co-occurred when emotional facial expressions were rated using a dimensional strategy, suggesting a hybrid theory of categorical and dimensional accounts.

8.
N. L. Etcoff & J. J. Magee (1992). Cognition, 44(3), 227–240.
People universally recognize facial expressions of happiness, sadness, fear, anger, disgust, and perhaps, surprise, suggesting a perceptual mechanism tuned to the facial configuration displaying each emotion. Sets of drawings were generated by computer, each consisting of a series of faces differing by constant physical amounts, running from one emotional expression to another (or from one emotional expression to a neutral face). Subjects discriminated pairs of faces, then, in a separate task, categorized the emotion displayed by each. Faces within a category were discriminated more poorly than faces in different categories that differed by an equal physical amount. Thus emotional expressions, like colors and speech sounds, are perceived categorically, not as a direct reflection of their continuous physical properties.
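
Continua of faces "differing by constant physical amounts" can be approximated by linear interpolation between two endpoint representations. A toy sketch interpolating landmark coordinates follows; real face morphing also warps and blends images, and the landmarks here are invented.

```python
import numpy as np

# Invented 2-D landmark coordinates for two endpoint expressions
# (e.g., brow and mouth-corner points of "happy" and "sad" line drawings).
happy = np.array([[30.0, 60.0], [70.0, 60.0], [40.0, 30.0], [60.0, 30.0]])
sad = np.array([[30.0, 52.0], [70.0, 52.0], [38.0, 34.0], [62.0, 34.0]])

# An 11-step continuum: neighbouring faces differ by a constant physical amount.
continuum = [(1 - w) * happy + w * sad for w in np.linspace(0.0, 1.0, 11)]

# Every adjacent pair is separated by the same Euclidean distance, which is
# what makes within- and between-category comparisons physically equivalent.
step = np.linalg.norm(continuum[1] - continuum[0])
print(f"step size between neighbours: {step:.2f}")
```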

9.
This study was designed to test three competing hypotheses (impaired configural processing; impaired Theory of Mind; atypical amygdala functioning) to explain the basic facial expression recognition profile of adults with autism spectrum disorders (ASD). In Experiment 1 the Ekman and Friesen (1976) series were presented upright and inverted. Individuals with ASD were significantly less accurate than controls at recognising upright facial expressions of fear, sadness and disgust and their pattern of errors suggested some configural processing difficulties. Impaired recognition of inverted facial expressions suggested some additional difficulties processing the facial features. Unexpectedly, the clinical group misidentified fear as anger. In Experiment 2 feature processing of facial expressions was investigated by presenting stimuli in a piecemeal fashion, starting with either just the eyes or the mouth. Individuals with ASD were impaired at recognising fear from the eyes and disgust from the mouth; they also confused fearful eyes as being angry. The findings are discussed in terms of the three competing hypotheses tested.

10.
Adults perceive emotional expressions categorically, with discrimination being faster and more accurate between expressions from different emotion categories (i.e. blends with two different predominant emotions) than between two stimuli from the same category (i.e. blends with the same predominant emotion). The current study sought to test whether facial expressions of happiness and fear are perceived categorically by pre-verbal infants, using a new stimulus set that was shown to yield categorical perception in adult observers (Experiments 1 and 2). These stimuli were then used with 7-month-old infants (N = 34) using a habituation and visual preference paradigm (Experiment 3). Infants were first habituated to an expression of one emotion, then presented with the same expression paired with a novel expression either from the same emotion category or from a different emotion category. After habituation to fear, infants displayed a novelty preference for pairs of between-category expressions, but not within-category ones, showing categorical perception. However, infants showed no novelty preference when they were habituated to happiness. Our findings provide evidence for categorical perception of emotional expressions in pre-verbal infants, while the asymmetrical effect challenges the notion of a bias towards negative information in this age group.
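
The habituation phase in such paradigms is often infant-controlled: trials continue until looking time falls below a fixed proportion of its initial level. Here is a sketch of one common criterion; the window size and the 50% threshold are assumptions that vary across labs.

```python
def habituated(looking_times: list[float],
               window: int = 3,
               criterion: float = 0.5) -> bool:
    """True once the mean of the last `window` trials falls below
    `criterion` times the mean of the first `window` trials."""
    if len(looking_times) < 2 * window:
        return False
    first = sum(looking_times[:window]) / window
    last = sum(looking_times[-window:]) / window
    return last < criterion * first

# Hypothetical looking times (s) across successive habituation trials
trials = [12.0, 10.5, 11.0, 8.0, 6.0, 4.5, 3.5]
print(habituated(trials))  # True: last three average ~4.7 s < 5.6 s threshold
```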

11.
Three experiments tested the hypothesis that explaining emotional expressions using specific emotion concepts at encoding biases perceptual memory for those expressions. In Experiment 1, participants viewed faces expressing blends of happiness and anger and created explanations of why the target people were expressing one of the two emotions, according to concepts provided by the experimenter. Later, participants attempted to identify the facial expressions in computer movies, in which the previously seen faces changed continuously from anger to happiness. Faces conceptualized in terms of anger were remembered as angrier than the same faces conceptualized in terms of happiness, regardless of whether the explanations were told aloud or imagined. Experiments 2 and 3 showed that explanation is necessary for the conceptual biases to emerge fully and extended the finding to anger-sad expressions, an emotion blend more common in real life.

12.
The present study explored the influence of facial emotional expressions on preschoolers' identity recognition using a two-alternative forced-choice matching task. A decrement was observed in children's performance with emotional faces compared with neutral faces, both when a happy emotional expression remained unchanged between the target face and the test faces and when the expression changed from happy to neutral or from neutral to happy between the target and the test faces (Experiment 1). Negative emotional expressions (i.e. fear and anger) also interfered with children's identity recognition (Experiment 2). The evidence obtained suggests that in preschool-age children, facial emotional expressions are processed in interaction with, rather than independently from, the encoding of facial identity information. The results are discussed in relation to relevant research conducted with adults and children.

13.
Infants can detect information specifying affect in infant- and adult-directed speech, in familiar and unfamiliar facial expressions, and in point-light displays of facial expressions. We examined 3-, 5-, 7-, and 9-month-olds' discrimination of musical excerpts judged by adults and preschoolers as happy and sad. In Experiment 1, using an infant-controlled habituation procedure, 3-, 5-, 7-, and 9-month-olds heard three musical excerpts that were rated as either happy or sad. Following habituation, infants were presented with two new musical excerpts from the other affect group. Nine-month-olds discriminated the musical excerpts rated as affectively different. Five- and seven-month-olds discriminated the happy and sad excerpts when they were habituated to sad excerpts but not when they were habituated to happy excerpts. Three-month-olds showed no evidence of discriminating the sad and happy excerpts. In Experiment 2, 5-, 7-, and 9-month-olds were presented with two new musical excerpts from the same affective group as the habituation excerpts. At no age did infants discriminate these novel, yet affectively similar, musical excerpts. In Experiment 3, we examined 5-, 7-, and 9-month-olds' discrimination of individual excerpts rated as affectively similar. Only the 9-month-olds discriminated the affectively similar individual excerpts. Results are discussed in terms of infants' ability to discriminate affect across a variety of events and its relevance for later social-communicative development.

14.
Facial stimuli are widely used in behavioural and brain science research to investigate emotional facial processing. However, some studies have demonstrated that dynamic expressions elicit stronger emotional responses compared to static images. To address the need for more ecologically valid and powerful facial emotional stimuli, we created Dynamic FACES, a database of morphed videos (n = 1026) from younger, middle-aged, and older adults displaying naturalistic emotional facial expressions (neutrality, sadness, disgust, fear, anger, happiness). To assess adult age differences in emotion identification of dynamic stimuli and to provide normative ratings for this modified set of stimuli, healthy adults (n = 1822, age range 18–86 years) categorised for each video the emotional expression displayed, rated the expression distinctiveness, estimated the age of the face model, and rated the naturalness of the expression. We found few age differences in emotion identification when using dynamic stimuli. Only for angry faces did older adults show lower levels of identification accuracy than younger adults. Further, older adults outperformed middle-aged adults in the identification of sadness. The use of dynamic facial emotional stimuli has previously been limited, but Dynamic FACES provides a large database of high-resolution naturalistic, dynamic expressions across adulthood. Information on using Dynamic FACES for research purposes can be found at http://faces.mpib-berlin.mpg.de.

15.
Facial attributes such as race, sex, and age can interact with emotional expressions; however, only a couple of studies have investigated the nature of the interaction between facial age cues and emotional expressions and these have produced inconsistent results. Additionally, these studies have not addressed the mechanism(s) driving the influence of facial age cues on emotional expression or vice versa. In the current study, participants categorised young and older adult faces expressing happiness and anger (Experiment 1) or sadness (Experiment 2) by their age and their emotional expression. Age cues moderated categorisation of happiness vs. anger and sadness in the absence of an influence of emotional expression on age categorisation times. This asymmetrical interaction suggests that facial age cues are obligatorily processed prior to emotional expressions. Finding a categorisation advantage for happiness expressed on young faces relative to both anger and sadness, which are negative in valence but different in their congruence with old age stereotypes or structural overlap with age cues, suggests that the observed influence of facial age cues on emotion perception is due to the congruence between relatively positive evaluations of young faces and happy expressions.

16.
The human body is an important source of information to infer a person’s emotional state. Research with adult observers indicates that the posture of the torso, arms and hands provides important perceptual cues for recognising anger, fear and happy expressions. Much less is known about whether infants process body regions differently for different body expressions. To address this issue, we used eye tracking to investigate whether infants’ visual exploration patterns differed when viewing body expressions. Forty-eight 7-month-old infants were randomly presented with static images of adult female bodies expressing anger, fear and happiness, as well as an emotionally-neutral posture. Facial cues to emotional state were removed by masking the faces. We measured the proportion of looking time, proportion and number of fixations, and duration of fixations on the head, upper body and lower body regions for the different expressions. We showed that infants explored the upper body more than the lower body. Importantly, infants at this age fixated differently on different body regions depending on the expression of the body posture. In particular, infants spent a larger proportion of their looking times and had longer fixation durations on the upper body for fear relative to the other expressions. These results replicate and extend previous findings on infant processing of emotional expressions displayed by human bodies, and they support the hypothesis that infants’ visual exploration of human bodies is driven by the upper body.
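
The proportion-of-looking-time measures used here are computed by assigning each fixation to an area of interest (AOI) and dividing the summed durations per region by the total. A minimal sketch with rectangular AOIs follows; the coordinates and fixation data are invented for illustration.

```python
# Hypothetical rectangular AOIs in screen pixels: (x_min, y_min, x_max, y_max)
AOIS = {
    "head":       (300, 0,   500, 200),
    "upper_body": (250, 200, 550, 500),
    "lower_body": (250, 500, 550, 900),
}

# Hypothetical fixations: (x, y, duration in ms)
fixations = [(400, 120, 350), (390, 350, 600), (410, 420, 450), (400, 700, 200)]

def looking_proportions(fixations):
    """Proportion of total looking time falling in each AOI."""
    totals = {name: 0 for name in AOIS}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                totals[name] += dur
                break
    grand = sum(totals.values()) or 1
    return {name: t / grand for name, t in totals.items()}

props = looking_proportions(fixations)
print({k: round(v, 2) for k, v in props.items()})
# {'head': 0.22, 'upper_body': 0.66, 'lower_body': 0.12}
```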

17.
Human faces are among the most important visual stimuli that we encounter at all ages. This importance partly stems from the face as a conveyer of information on the emotional state of other individuals. Previous research has demonstrated specific scanning patterns in response to threat-related compared to non-threat-related emotional expressions. This study investigated how visual scanning patterns toward faces which display different emotional expressions develop during infancy. The visual scanning patterns of 4-month-old and 7-month-old infants and adults when looking at threat-related (i.e., angry and fearful) versus non-threat-related (i.e., happy, sad, and neutral) emotional faces were examined. We found that infants as well as adults displayed an avoidant looking pattern in response to threat-related emotional expressions with reduced dwell times and relatively less fixations to the inner features of the face. In addition, adults showed a pattern of eye contact avoidance when looking at threat-related emotional expressions that was not yet present in infants. Thus, whereas a general avoidant reaction to threat-related facial expressions appears to be present from very early in life, the avoidance of eye contact might be a learned response toward others' anger and fear that emerges later during development.

18.
Numerous studies have shown an exacerbation of attentional bias towards threat in anxiety states. However, the cognitive mechanisms responsible for these attentional biases remain largely unknown. Further, the authors outline the need to consider the nature of the attentional processes in operation (hypervigilance, avoidance, or disengagement). We adapted a dot-probe paradigm to record behavioral and electrophysiological responses in 26 participants reporting high or low fear of evaluation, a major component of social anxiety. Pairs of faces including a neutral and an emotional face (displaying anger, fear, disgust, or happiness) were presented for 200 ms and then replaced by a neutral target that participants had to discriminate. Results show that anxious participants were characterized by an increased P1 in response to pairs of faces, irrespective of the emotional expression included in the pair. They also showed an increased P2 in response to angry–neutral pairs selectively. Finally, in anxious participants, the P1 response to targets was enhanced when replacing emotional faces, whereas non-anxious subjects showed no difference between the two conditions. These results indicate an early hypervigilance to face stimuli in social anxiety, coupled with difficulty in disengaging from threat and sustained attention to emotional stimuli. They are discussed within the framework of current models of anxiety and psychopathology.
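
Attentional bias in a dot-probe task is classically indexed by the RT difference between trials where the probe replaces the neutral versus the emotional face. A sketch of that standard bias score with illustrative numbers:

```python
from statistics import mean

# Hypothetical correct RTs (ms) by probe location for angry-neutral pairs
rt_probe_at_emotional = [480, 495, 470, 505, 488]   # congruent trials
rt_probe_at_neutral = [525, 510, 540, 518, 530]     # incongruent trials

# Bias score: slower responses when the probe replaces the neutral face
# imply attention was captured (or held) by the emotional face.
bias = mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)
print(f"attentional bias: {bias:+.1f} ms")  # +37.0, positive = toward threat
```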

19.
If humans can detect the wealth of tactile and haptic information potentially available in live facial expressions of emotion (FEEs), they should be capable of haptically recognizing the six universal expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) at levels well above chance. We tested this hypothesis in the experiments reported here. With minimal training, subjects' overall mean accuracy was 51% for static FEEs (Experiment 1) and 74% for dynamic FEEs (Experiment 2). All FEEs except static fear were successfully recognized above the chance level of 16.7%. Complementing these findings, overall confidence and information transmission were higher for dynamic than for corresponding static faces. Our performance measures (accuracy and confidence ratings, plus response latency in Experiment 2 only) confirmed that happiness, sadness, and surprise were all highly recognizable, and anger, disgust, and fear less so.
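
The 16.7% chance level is simply 1/6 for a six-alternative forced choice, and "information transmission" is the mutual information between presented and responded categories, estimated from the confusion matrix. A hedged sketch with an invented confusion matrix:

```python
import numpy as np

chance = 1 / 6                          # six-alternative forced choice
print(f"chance level: {chance:.1%}")    # 16.7%

def information_transmission(confusion: np.ndarray) -> float:
    """Mutual information (bits) between stimulus and response categories."""
    p = confusion / confusion.sum()            # joint probabilities
    ps = p.sum(axis=1, keepdims=True)          # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)          # response marginals
    nz = p > 0                                 # avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Hypothetical 6x6 confusion matrix (rows: presented, cols: responded)
conf = np.full((6, 6), 2) + np.eye(6, dtype=int) * 20
print(f"information transmitted: {information_transmission(conf):.2f} bits")
```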

20.
Three experiments examined 3- and 5-year-olds’ recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression remained neutral (Experiment 1) or varied between immediate and delayed tests: from neutral to smile and anger (Experiment 2), from smile to neutral and anger (Experiment 3, condition 1), or from anger to neutral and smile (Experiment 3, condition 2). In all experiments, immediate face recognition was not influenced by emotional expression for either age group. Delayed face recognition was most accurate for faces in identical emotional expression. For 5-year-olds, delayed face recognition (with varied emotional expression) was not influenced by which emotional expression had been displayed during the immediate recognition test. Among 3-year-olds, accuracy decreased when facial expressions varied from neutral to smile and anger but was constant when facial expressions varied from anger or smile to neutral, smile or anger. Three-year-olds’ recognition was facilitated when faces initially displayed smile or anger expressions, but this was not the case for 5-year-olds. Results thus indicate a developmental progression in face identity recognition with varied emotional expressions between ages 3 and 5.
