Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Many research reports have concluded that emotional information can be processed without observers being aware of it. The case for perception without awareness has almost always been made using facial expressions. Given the similarities between facial and bodily expressions in the rapid perception and communication of emotional signals, we conjectured that perception of bodily expressions may likewise not require visual awareness. Our study investigates the role of visual awareness in the perception of bodily expressions using a backward masking technique combined with trial-by-trial confidence ratings. In three separate experiments, participants had to detect masked fearful, angry, and happy bodily expressions among masked neutral bodily actions serving as distractors, and subsequently had to indicate their confidence. The interval between target onset and mask onset (stimulus onset asynchrony, SOA) varied from -50 to +133 ms. Sensitivity measurements (d-prime) as well as the participants' confidence ratings showed that the bodies could be detected reliably in all SOA conditions. Importantly, the objective and subjective measurements did not covary when participants had to detect fearful bodily expressions, whereas they did when participants had to detect happy or angry bodily expressions.
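For context on the sensitivity measure mentioned above, the sketch below shows one standard way to compute d-prime from hit and false-alarm counts in a detection task of this kind. The trial counts are invented for illustration, and the log-linear correction is a common convention, not necessarily the one used in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    # The log-linear correction (add 0.5 to each cell) keeps rates away
    # from 0 and 1, where the inverse-normal transform is undefined.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one SOA condition: 40 fearful-body targets
# and 40 neutral-body distractors.
print(d_prime(hits=31, misses=9, false_alarms=7, correct_rejections=33))
```

A d' reliably above zero across all SOA conditions corresponds to the abstract's claim that the bodies "could be detected reliably".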

2.
Inversion interferes with the encoding of configural and holistic information more than it does with the encoding of explicitly represented and isolated parts. Accordingly, if facial expressions are explicitly represented in the face representation, their recognition should not be greatly affected by face orientation. In the present experiment, response times to detect a difference in hair color in line-drawn faces were unaffected by face orientation, but response times to detect the presence of brows and mouth were longer with inverted than with upright faces, independent of the emergent expression (neutral, happy, sad, and angry). Expressions are not explicitly represented; rather, they and the face configuration are represented as undecomposed wholes.

3.
Rachael E. Jack (2013). Visual Cognition, 21(9-10), 1248-1286.
With over a century of theoretical developments and empirical investigation in broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have elevated knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.

4.
Some experiments have shown that a face having an expression different from the others in a crowd can be detected in a time that is independent of crowd size. Although this pop-out effect suggests that the valence of a face is available preattentively, it is possible that it is only the detection of sign features (e.g., angle of brow) which triggers an internal code for valence. In experiments testing the merits of valence and feature explanations, subjects searched displays of schematic faces having sad, happy, and vacant mouth expressions for a face having a discrepant sad or happy expression. Because inversion destroys holistic face processing and the implicit representation of valence, a critical test was whether pop-out occurred for inverted faces. Flat search functions (pop-out) for upright and inverted faces provided equivocal support for both explanations. But intercept effects found only with normal faces indicated that valence had been analysed at an early stage of stimulus encoding.

5.
Emotional facial expressions are often asymmetrical, with the left half of the face typically displaying the stronger affective intensity cues. During facial perception, however, most right-handed individuals are biased toward facial affect cues projecting to their own left visual hemifield. Consequently, mirror-reversed faces are typically rated as more emotionally intense than when presented normally. Mirror-reversal permits the most intense side of the expresser's face to project to the visual hemifield biased for processing facial affect cues. This study replicated the mirror-reversal effect in 21 men and 49 women (aged 18-52 yr.) using a videotaped free-viewing presentation, but also showed that the effect of facial orientation is moderated by the sex of the perceiver. The mirror-reversal effect was significant for men but not for women, suggesting possible sex differences in the cerebral organization of systems for facial perception.

6.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies utilize full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy when viewing faces in profile would be lower than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of the angle from which they are viewed.

7.
Using a dual-task methodology, we examined the interaction of perceiving and producing facial expressions. In one task, participants were asked to produce a smile or a frown (Task 2) in response to a tone stimulus. This auditory-facial task was embedded in a dual-task context, where the other task (Task 1) required a manual response to visual face stimuli (visual-manual task). These face stimuli showed facial expressions that were either compatible or incompatible with the to-be-produced facial expression. Both reaction times and error rates (measured by facial electromyography) revealed a robust stimulus–response compatibility effect across tasks, suggesting that perceived social actions automatically activate corresponding actions even if perceived and produced actions belong to different tasks. The dual-task nature of this compatibility effect further indicates that the encoding of facial expressions is highly automatic.

8.
9.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last images perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last images of dynamic stimuli that an observer perceived were of a facial configuration showing stronger emotional intensity than the image actually presented. The more the velocity increased, the more the perceptual image of facial expression intensified. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.

10.
How similar are the meanings of facial expressions of emotion and the emotion terms frequently used to label them? In three studies, subjects made similarity judgments and emotion self-report ratings in response to six emotion categories represented in Ekman and Friesen's Pictures of Facial Affect, and their associated labels. Results were analyzed with respect to the constituent facial movements using the Facial Action Coding System, and using consensus analysis, multidimensional scaling, and inferential statistics. Shared interpretation of meaning was found between individuals and the group, with congruence between the meaning in facial expressions, labeling using basic emotion terms, and subjects' reported emotional responses. The data suggest that (1) the general labels used by Ekman and Friesen are appropriate but may not be optimal, (2) certain facial movements contribute more to the perception of emotion than do others, and (3) perception of emotion may be categorical rather than dimensional.
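As an illustration of the multidimensional scaling step described above, here is a minimal sketch that embeds expression categories in two dimensions from a matrix of pairwise dissimilarities. The matrix values and category set are invented, and scikit-learn's MDS is just one of several implementations.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities (0 = identical) among four
# expression categories, e.g., averaged over judges.
labels = ["happy", "sad", "angry", "afraid"]
dissim = np.array([
    [0.0, 0.8, 0.9, 0.7],
    [0.8, 0.0, 0.5, 0.4],
    [0.9, 0.5, 0.0, 0.6],
    [0.7, 0.4, 0.6, 0.0],
])

# Place the categories in a 2-D space so that inter-point distances
# approximate the judged dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for label, (x, y) in zip(labels, coords):
    print(f"{label}: ({x:+.2f}, {y:+.2f})")
```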

11.
Discrimination of facial expressions of emotion by depressed subjects
A frequent complaint of depressed people concerns their poor interpersonal relationships. Yet, although nonverbal cues are considered of primary importance in interpersonal communication, the major theories of depression focus little attention on nonverbal social perception. The present study investigated the ability of depressed, disturbed control, and normal American adults to make rapid discriminations of facial emotion. We predicted and found that depressed subjects were slower than normal subjects in facial emotion discrimination but were not slower in word category discrimination. These findings suggest that current theories of depression may need to address difficulties with nonverbal information processing. There were also no significant differences between depressed and disturbed control subjects, suggesting that the unique social-behavioral consequences of depression have yet to be identified.

12.
Six actors attempted to communicate by facial expression seven putatively basic emotions (pleasure, surprise, fear, hate, sorrow, disgust, and interest) and all pairwise blends, e.g., fear+sorrow. One hundred and eighty-two subjects (divided into six groups, one per actor) judged pictures of these emotions by three methods: (1) mapping, placing the pictures on coordinate systems with labeled axes, (2) identification, and (3) sorting similar emotions into the same pile, followed by multidimensional scaling and cluster analyses. Recognition of the emotions was fairly good, though not equally good for all emotions and their blends; the actors' ability to express emotions also varied. Emotions of opposite hedonic tone did not blend well. Interest seemed to lend poignancy to the basic emotion with which it was blended rather than to constitute an emotion in itself. Expressions seemed to be more easily identified if the actor did not try to feel the emotion too deeply.

13.
The claim that specific discrete emotions can be universally recognized from human facial expressions is based mainly on the study of expressions that were posed. The current study (N=50) examined recognition of emotion from 20 spontaneous expressions from Papua New Guinea photographed, coded, and labeled by P. Ekman (1980). For the 16 faces with a single predicted label, endorsement of that label ranged from 4.2% to 45.8% (mean 24.2%). For 4 faces with 2 predicted labels (blends), endorsement of one or the other ranged from 6.3% to 66.6% (mean 38.8%). Of the 24 labels Ekman predicted, 11 were endorsed at an above-chance level, and 13 were not. Spontaneous expressions do not achieve the level of recognition achieved by posed expressions.
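The "above-chance" comparison in this abstract amounts to testing each label's endorsement count against a chance baseline. A minimal sketch of that test follows; the number of response options (and hence the 1/7 chance level) and the endorsement count are assumptions for illustration, not figures from the study.

```python
from scipy.stats import binomtest

# Hypothetical setup: 50 judges choose among 7 emotion labels, so
# chance endorsement of the predicted label is roughly 1/7.
n_judges = 50
chance = 1 / 7

# Suppose 17 of the 50 judges endorsed the predicted label for one face.
result = binomtest(k=17, n=n_judges, p=chance, alternative="greater")
print(f"endorsement rate = {17 / n_judges:.1%}, p = {result.pvalue:.4f}")
```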

14.
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion.

15.
Facial emotions are important for human communication. Unfortunately, traditional facial emotion recognition tasks do not inform about how respondents might behave towards others expressing certain emotions. Approach-avoidance tasks do measure behaviour, but only on one dimension. In this study 81 participants completed a novel Facial Emotion Response Task. Images displaying individuals with emotional expressions were presented in random order. Participants simultaneously indicated how communal (quarrelsome vs. agreeable) and how agentic (dominant vs. submissive) they would be in response to each expression. We found that participants responded differently to happy, angry, fearful, and sad expressions in terms of both dimensions of behaviour. Higher levels of negative affect were associated with less agreeable responses specifically towards happy and sad expressions. The Facial Emotion Response Task might complement existing facial emotion recognition and approach-avoidance tasks.

16.
The view that certain facial expressions of emotion are universally agreed on has been challenged by studies showing that the forced-choice paradigm may have artificially forced agreement. This article addressed this methodological criticism by offering participants the opportunity to select a "none of these terms are correct" option from a list of emotion labels in a modified forced-choice paradigm. The results show that agreement on the emotion label for particular facial expressions is still greater than chance, that artifactual agreement on incorrect emotion labels is obviated, that participants select the "none" option when asked to judge a novel expression, and that adding 4 more emotion labels does not change the pattern of agreement reported in universality studies. Although the original forced-choice format may have been prone to artifactual agreement, the modified forced-choice format appears to remedy that problem.

17.
This study demonstrates that when people attempt to identify a facial expression of emotion (FEE) by haptically exploring a 3D facemask, they are affected by viewing a simultaneous, task-irrelevant visual FEE portrayed by another person. In comparison to a control condition, where visual noise was presented, the visual FEE facilitated haptic identification when congruent (visual and haptic FEEs same category). When the visual and haptic FEEs were incongruent, haptic identification was impaired, and error responses shifted toward the visually depicted emotion. In contrast, visual emotion labels that matched or mismatched the haptic FEE category produced no such effects. The findings indicate that vision and touch interact in FEE recognition at a level where featural invariants of the emotional category (cf. precise facial geometry or general concepts) are processed, even when the visual and haptic FEEs are not attributable to a common source. Processing mechanisms behind these effects are considered.

18.
This article examines the importance of semantic processes in the recognition of emotional expressions, through a series of three studies on false recognition. The first study found a high frequency of false recognition of prototypical expressions of emotion when participants viewed slides and video clips of nonprototypical fearful and happy expressions. The second study tested whether semantic processes caused false recognition. The authors found that participants had significantly higher error rates when asked to detect expressions that corresponded to semantic labels than when asked to detect visual stimuli. Finally, given that previous research reported that false memories are less prevalent in younger children, the third study tested whether false recognition of prototypical expressions increased with age. The authors found that 67% of 8- to 9-year-old children reported nonpresent prototypical expressions of fear in a fearful context, but only 40% of 6- to 7-year-old children did so. Taken together, these three studies demonstrate the importance of semantic processes in the detection and categorization of prototypical emotional expressions.

19.
Findings from a recent study by Ekman et al. (1987) provided evidence for cultural disagreement about the intensity ratings of universal facial expressions of emotion. We conducted a study that examined the basis of these cultural differences. Japanese and American subjects made two separate intensity ratings of Japanese and Caucasian posers portraying anger, disgust, fear, happiness, sadness, and surprise. The Americans had higher mean intensity ratings than the Japanese for all emotions except disgust, regardless of the culture or gender of the poser. Americans gave happy and angry photos the highest intensity ratings, while Japanese gave disgust photos the highest ratings. But there was considerable cross-cultural consistency in the relative differences among photos.

20.
Three studies investigated the importance of movement for the recognition of subtle and intense expressions of emotion. In the first experiment, 36 facial emotion displays were duplicated in three conditions, either upright or inverted in orientation. A dynamic condition addressed the perception of motion by using four still frames run together to encapsulate a moving sequence to show the expression emerging from neutral to the subtle emotion. The multi-static condition contained the same four stills presented in succession, but with a visual noise mask (200 ms) between each frame to disrupt the apparent motion, whilst in the single-static condition, only the last still image (subtle expression) was presented. Results showed a significant advantage for the dynamic condition over the single- and multi-static conditions, suggesting that motion signals provide a more accurate and robust mental representation of the expression. A second experiment demonstrated that the advantage of movement was reduced with expressions of a higher intensity, and the results of the third experiment showed that the advantage for the dynamic condition for recognizing subtle emotions was due to the motion signal rather than additional static information contained in the sequence. It is concluded that motion signals associated with the emergence of facial expressions can be a useful cue in the recognition process, especially when the expressions are subtle.
