Similar Articles
 20 similar articles found; search took 15 ms
1.
This study demonstrates that when people attempt to identify a facial expression of emotion (FEE) by haptically exploring a 3D facemask, they are affected by viewing a simultaneous, task-irrelevant visual FEE portrayed by another person. In comparison to a control condition, where visual noise was presented, the visual FEE facilitated haptic identification when congruent (visual and haptic FEEs same category). When the visual and haptic FEEs were incongruent, haptic identification was impaired, and error responses shifted toward the visually depicted emotion. In contrast, visual emotion labels that matched or mismatched the haptic FEE category produced no such effects. The findings indicate that vision and touch interact in FEE recognition at a level where featural invariants of the emotional category (cf. precise facial geometry or general concepts) are processed, even when the visual and haptic FEEs are not attributable to a common source. Processing mechanisms behind these effects are considered.

2.
Inversion interferes with the encoding of configural and holistic information more than it does with the encoding of explicitly represented and isolated parts. Accordingly, if facial expressions are explicitly represented in the face representation, their recognition should not be greatly affected by face orientation. In the present experiment, response times to detect a difference in hair color in line-drawn faces were unaffected by face orientation, but response times to detect the presence of brows and mouth were longer with inverted than with upright faces, independent of the emergent expression (neutral, happy, sad, and angry). Expressions are not explicitly represented; rather, they and the face configuration are represented as undecomposed wholes.

3.
Rachael E. Jack (2013). Visual Cognition, 21(9-10), 1248-1286
With over a century of theoretical developments and empirical investigation in broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have elevated knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.

4.
Previous studies indicate that the encoding of new facial identities in memory is influenced by the type of expression displayed by the faces. In the current study, the authors investigated whether or not this influence requires attention to be explicitly directed toward the affective meaning of facial expressions. In a first experiment, the authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression. Using the Remember/Know/Guess paradigm in a second experiment, the authors found that the influence of facial expressions on the conscious recollection of facial identity was even more pronounced when participants' attention was not directed toward expressions. It is suggested that the affective meaning of facial expressions automatically modulates the encoding of facial identity in memory.

5.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies utilize full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy when viewing faces in profile would be lower than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of viewing angle.

6.
Some experiments have shown that a face having an expression different from the others in a crowd can be detected in a time that is independent of crowd size. Although this pop-out effect suggests that the valence of a face is available preattentively, it is possible that it is only the detection of sign features (e.g. angle of brow) which triggers an internal code for valence. In experiments testing the merits of valence and feature explanations, subjects searched displays of schematic faces having sad, happy, and vacant mouth expressions for a face having a discrepant sad or happy expression. Because inversion destroys holistic face processing and the implicit representation of valence, a critical test was whether pop-out occurred for inverted faces. Flat search functions (pop-out) for upright and inverted faces provided equivocal support for both explanations. But intercept effects found only with normal faces indicated valences had been analysed at an early stage of stimulus encoding.

7.
Recent studies have shown that smiling faces are judged as more familiar than those showing a neutral expression (Baudouin, Gilibert, Sansone, & Tiberghien, 2000). Here we compare judged familiarity of unknown and famous faces when displaying a positive, neutral, or negative expression. Our results confirm a smiling familiarity bias, with positive-expression faces judged as being more familiar. Importantly, we also show significantly reduced familiarity for negative-expression faces, compared with neutral- or positive-expression faces. This difference in judged familiarity is not due to differences in expression intensity, but instead related to expression valence. Results are discussed with regard to the independence of facial identity and expression processing, and in terms of factors that influence face familiarity and memory.

8.
9.
Facial expressions of anger and fear have been shown to elicit avoidance behavior in the perceiver due to their negative valence. However, recent research uncovered discrepancies regarding these immediate motivational implications of fear and anger, suggesting that not all negative emotions trigger avoidance to a comparable extent. To clarify those discrepancies, we considered recent theoretical and methodological advances, and investigated the role of social preferences and processing focus on approach-avoidance tendencies (AAT) to negative facial expressions. We exposed participants to dynamic facial expressions of anger, disgust, fear, or sadness, while they processed either the emotional expression or the gender of the faces. AATs were assessed by reaction times of lever movements, and by posture changes via head-tracking. We found that, relative to angry faces, fearful and sad faces triggered more approach, with a larger difference between fear and anger in prosocial compared to individualistic participants. Interestingly, these findings are in line with a recently developed concern hypothesis, suggesting that, relative to other negative expressions, expressions of distress may facilitate approach, especially in participants with prosocial preferences.

10.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last images perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last images of dynamic stimuli that an observer perceived were of a facial configuration showing stronger emotional intensity than the image actually presented. The more the velocity increased, the more the perceptual image of facial expression intensified. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.
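The neutral-to-emotion morphing animations described above can be approximated by linear cross-fading between two aligned images. A minimal sketch, assuming aligned grayscale images stored as NumPy float arrays (the function name `morph_frames` and the toy 2x2 data are illustrative, not taken from the study):

```python
import numpy as np

def morph_frames(neutral, emotional, n_frames):
    """Linear cross-fade from a neutral to an emotional face image.

    neutral, emotional: float arrays of identical shape, e.g. (H, W).
    Returns n_frames images ramping from 0% to 100% emotional intensity.
    """
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - a) * neutral + a * emotional for a in alphas]

# Toy 2x2 "images": pixel intensity stands in for expression intensity.
frames = morph_frames(np.zeros((2, 2)), np.ones((2, 2)), n_frames=5)
```

Varying `n_frames` at a fixed presentation rate is one simple way to manipulate the velocity of change the experiments refer to.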

11.
How similar are the meanings of facial expressions of emotion and the emotion terms frequently used to label them? In three studies, subjects made similarity judgments and emotion self-report ratings in response to six emotion categories represented in Ekman and Friesen's Pictures of Facial Affect, and their associated labels. Results were analyzed with respect to the constituent facial movements using the Facial Action Coding System, and using consensus analysis, multidimensional scaling, and inferential statistics. Shared interpretation of meaning was found between individuals and the group, with congruence between the meaning in facial expressions, labeling using basic emotion terms, and subjects' reported emotional responses. The data suggest that (1) the general labels used by Ekman and Friesen are appropriate but may not be optimal, (2) certain facial movements contribute more to the perception of emotion than do others, and (3) perception of emotion may be categorical rather than dimensional.
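The abstract above cites multidimensional scaling of similarity judgments but does not specify the variant used; a minimal sketch of classical (Torgerson) MDS with NumPy, run on a hypothetical three-stimulus distance matrix (all names and data here are illustrative, not from the studies):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed an n x n distance matrix D in k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # indices of the k largest eigenvalues
    scale = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * scale              # n x k embedding coordinates

# Hypothetical dissimilarities among three stimuli lying on a line at 0, 1, and 3.
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
coords = classical_mds(D, k=1)  # recovers the line up to shift/reflection
```

For dissimilarity ratings averaged across subjects, the inter-point distances of the recovered configuration can then be inspected for categorical clustering versus a continuous dimensional layout, the contrast the abstract's conclusion (3) turns on.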

12.
Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals.

13.
This article examines the importance of semantic processes in the recognition of emotional expressions, through a series of three studies on false recognition. The first study found a high frequency of false recognition of prototypical expressions of emotion when participants viewed slides and video clips of nonprototypical fearful and happy expressions. The second study tested whether semantic processes caused false recognition. The authors found that participants made significantly higher error rates when asked to detect expressions that corresponded to semantic labels than when asked to detect visual stimuli. Finally, given that previous research reported that false memories are less prevalent in younger children, the third study tested whether false recognition of prototypical expressions increased with age. The authors found that 67% of 8- to 9-year-old children reported nonpresent prototypical expressions of fear in a fearful context, but only 40% of 6- to 7-year-old children did so. Taken together, these three studies demonstrate the importance of semantic processes in the detection and categorization of prototypical emotional expressions.

14.
Nonverbal "accents": cultural differences in facial expressions of emotion
We report evidence for nonverbal "accents," subtle differences in the appearance of facial expressions of emotion across cultures. Participants viewed photographs of Japanese nationals and Japanese Americans in which posers' muscle movements were standardized to eliminate differences in expressions, cultural or otherwise. Participants guessed the nationality of posers displaying emotional expressions at above-chance levels, and with greater accuracy than they judged the nationality of the same posers displaying neutral expressions. These findings indicate that facial expressions of emotion can contain nonverbal accents that identify the expresser's nationality or culture. Cultural differences are intensified during the act of expressing emotion, rather than residing only in facial features or other static elements of appearance. This evidence suggests that extreme positions regarding the universality of emotional expressions are incomplete.

15.
Increasing evidence suggests that facial emotion recognition is impaired in bipolar disorder (BD). However, patient–control differences are small owing to ceiling effects on the tasks used to assess them. The extant literature is also limited by a relative absence of attention towards identifying patterns of emotion misattribution or understanding whether neutral faces are mislabelled in the same way as ones displaying emotion. We addressed these limitations by comparing facial emotion recognition performance in BD patients and healthy controls on a novel and challenging task. Thirty-four outpatients with BD I and 32 demographically matched healthy controls completed a facial emotion recognition task requiring the labelling of neutral and emotive faces displayed at low emotional intensities. Results indicated that BD patients were significantly less accurate at labelling faces than healthy controls, particularly if they displayed fear or neutral expressions. There were no between-group differences in response times or patterns of emotion mislabelling, with both groups confusing sad and neutral faces, although BD patients also mislabelled sad faces as angry. Task performance did not significantly correlate with mood symptom severity in the BD group. These findings suggest that facial emotion recognition impairments in BD extend to neutral face recognition. Emotion misattribution occurs in a similar, albeit exaggerated manner in patients with BD compared to healthy controls. Future behavioural and neuroimaging research should reconsider the use of neutral faces as baseline stimuli in their task designs.

16.
Discrimination of facial expressions of emotion by depressed subjects
A frequent complaint of depressed people concerns their poor interpersonal relationships. Yet, although nonverbal cues are considered of primary importance in interpersonal communication, the major theories of depression focus little attention on nonverbal social perception. The present study investigated the ability of depressed, disturbed control, and normal American adults to make rapid discriminations of facial emotion. We predicted and found that depressed subjects were slower than normal subjects in facial emotion discrimination but were not slower in word category discrimination. These findings suggest that current theories of depression may need to address difficulties with nonverbal information processing. There were also no significant differences between depressed and disturbed control subjects, suggesting that the unique social-behavioral consequences of depression have yet to be identified.

17.
In two studies, subjects judged a set of facial expressions of emotion by either providing labels of their own choice to describe the stimuli (free-choice condition), choosing a label from a list of emotion words, or choosing a story from a list of emotion stories (fixed-choice conditions). In the free-choice condition, levels of agreement between subjects on the predicted emotion categories for six basic emotions were significantly greater than chance levels, and comparable to those shown in fixed-choice studies. As predicted, there was little to no agreement on a verbal label for contempt. Agreement on contempt was greatly improved when subjects were allowed to identify the expression in terms of an antecedent event for that emotion rather than in terms of a single verbal label, a finding that could not be attributed to the methodological artifact of exclusion in a fixed-choice paradigm. These findings support two conclusions: (1) that the labels used in fixed-choice paradigms accurately reflect the verbal categories people use when free labeling facial expressions of emotion, and (2) that lexically ambiguous emotions, such as contempt, are understood in terms of their situational meanings. This research was supported in part by a Research Scientist Award from the National Institute of Mental Health (MH 06091) to Paul Ekman.

18.
We present here new evidence of cross-cultural agreement in the judgment of facial expression. Subjects in 10 cultures performed a more complex judgment task than has been used in previous cross-cultural studies. Instead of limiting the subjects to selecting only one emotion term for each expression, this task allowed them to indicate that multiple emotions were evident and the intensity of each emotion. Agreement was very high across cultures about which emotion was the most intense. The 10 cultures also agreed about the second most intense emotion signaled by an expression and about the relative intensity among expressions of the same emotion. However, cultural differences were found in judgments of the absolute level of emotional intensity.

19.
We examined dysfunctional memory processing of facial expressions in relation to alexithymia. Individuals with high and low alexithymia, as measured by the Toronto Alexithymia Scale (TAS-20), participated in a visual search task (Experiment 1A) and a change-detection task (Experiments 1B and 2), to assess differences in their visual short-term memory (VSTM). In the visual search task, the participants were asked to judge whether all facial expressions (angry and happy faces) in the search display were the same or different. In the change-detection task, they had to decide whether all facial expressions changed between two successive displays. We found individual differences only in the change-detection task. Individuals with high alexithymia showed lower sensitivity for the happy faces compared to the angry faces, while individuals with low alexithymia showed sufficient recognition for both facial expressions. Experiment 2 examined whether individual differences were observed during the early storage or later retrieval stage of the VSTM process using a single-probe paradigm. We found no effect of the single probe, indicating that individual differences occurred at the storage stage. The present results provide new evidence that individuals with high alexithymia show specific impairment in VSTM processes (especially the storage stage) related to happy but not to angry faces.
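Change-detection "sensitivity" of the kind reported above is conventionally quantified as d′ from signal detection theory; the abstract gives no formula, so the following is a common sketch, assuming a yes/no change-detection design and a log-linear correction for extreme rates (the function name is illustrative):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') from trial counts in a yes/no change-detection task.

    Adds 0.5 to each cell (log-linear correction) so that perfect or zero
    hit/false-alarm rates do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

Computed separately for happy-face and angry-face trials, a lower d′ for happy faces in the high-alexithymia group would express the pattern the abstract describes.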

20.
Facial emotions are important for human communication. Unfortunately, traditional facial emotion recognition tasks do not inform about how respondents might behave towards others expressing certain emotions. Approach-avoidance tasks do measure behaviour, but only on one dimension. In this study 81 participants completed a novel Facial Emotion Response Task. Images displaying individuals with emotional expressions were presented in random order. Participants simultaneously indicated how communal (quarrelsome vs. agreeable) and how agentic (dominant vs. submissive) they would be in response to each expression. We found that participants responded differently to happy, angry, fearful, and sad expressions in terms of both dimensions of behaviour. Higher levels of negative affect were associated with less agreeable responses specifically towards happy and sad expressions. The Facial Emotion Response Task might complement existing facial emotion recognition and approach-avoidance tasks.

