Similar articles
Found 20 similar articles (search time: 15 ms)
1.
This study demonstrates that when people attempt to identify a facial expression of emotion (FEE) by haptically exploring a 3D facemask, they are affected by viewing a simultaneous, task-irrelevant visual FEE portrayed by another person. In comparison to a control condition, where visual noise was presented, the visual FEE facilitated haptic identification when congruent (visual and haptic FEEs same category). When the visual and haptic FEEs were incongruent, haptic identification was impaired, and error responses shifted toward the visually depicted emotion. In contrast, visual emotion labels that matched or mismatched the haptic FEE category produced no such effects. The findings indicate that vision and touch interact in FEE recognition at a level where featural invariants of the emotional category (cf. precise facial geometry or general concepts) are processed, even when the visual and haptic FEEs are not attributable to a common source. Processing mechanisms behind these effects are considered.

2.
Inversion interferes with the encoding of configural and holistic information more than it does with the encoding of explicitly represented and isolated parts. Accordingly, if facial expressions are explicitly represented in the face representation, their recognition should not be greatly affected by face orientation. In the present experiment, response times to detect a difference in hair color in line-drawn faces were unaffected by face orientation, but response times to detect the presence of brows and mouth were longer with inverted than with upright faces, independent of the emergent expression (neutral, happy, sad, and angry). Expressions are not explicitly represented; rather, they and the face configuration are represented as undecomposed wholes.

3.
Rachael E. Jack. Visual Cognition, 2013, 21(9-10), 1248-1286.
With over a century of theoretical developments and empirical investigation in broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have elevated knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.

4.
Despite the fact that facial expressions of emotion have signal value, there is surprisingly little research examining how that signal can be detected under various conditions, because most judgment studies utilize full-face, frontal views. We remedy this by obtaining judgments of frontal and profile views of the same expressions displayed by the same expressors. We predicted that recognition accuracy would be lower when viewing faces in profile than when judging the same faces from the front. Contrary to this prediction, there were no differences in recognition accuracy as a function of view, suggesting that emotions are judged equally well regardless of the angle from which they are viewed.

5.
Some experiments have shown that a face with an expression different from the others in a crowd can be detected in a time that is independent of crowd size. Although this pop-out effect suggests that the valence of a face is available preattentively, it is possible that it is only the detection of sign features (e.g., angle of brow) that triggers an internal code for valence. In experiments testing the merits of the valence and feature explanations, subjects searched displays of schematic faces having sad, happy, and vacant mouth expressions for a face with a discrepant sad or happy expression. Because inversion destroys holistic face processing and the implicit representation of valence, a critical test was whether pop-out occurred for inverted faces. Flat search functions (pop-out) for both upright and inverted faces provided equivocal support for both explanations, but intercept effects found only with normal faces indicated that valence had been analysed at an early stage of stimulus encoding.

6.
7.
8.
Research suggests that states of the body, such as postures, facial expressions, and arm movements, play central roles in social information processing. This study investigated the effects of approach/avoidance movements on memory for facial information. Faces displaying a happy or a sad expression were presented and participants were induced to perform either an approach (arm flexion) or an avoidance (arm extension) movement. States of awareness associated with memory for facial identity and memory for facial expression were then assessed with the Remember/Know/Guess paradigm. The results showed that performing avoidance movements increased Know responses for the identity, and Know/Guess responses for the expression, of valence-compatible stimuli (i.e., sad faces as compared to happy faces), whereas this was not the case for approach movements. Based on these findings, it is suggested that approach/avoidance motor actions influence memory encoding by increasing the ease of processing for valence-compatible information.

9.
We used the remember-know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

10.
We used the Remember-Know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

11.
Facial asymmetry in posed and spontaneous expressions of emotion
Patterns of facial asymmetry (i.e., extent of movement) as a function of elicitation condition, emotional valence, and sex of subjects are examined. Thirty-seven right-handed adult males and females were videotaped making positive and negative expressions of emotion under posed (verbal, visual) and spontaneous conditions. There were no differences in facial asymmetry as a function of condition. Overall, expressions were significantly left-sided, a finding implicating the right hemisphere. When sex and valence were considered, negative expressions were left-sided for all subjects, while positive expressions were left-sided for males only. Further, positive expressions were significantly less lateralized than negative ones for females. Measures of hemiface mobility and ocular dominance did not mediate these patterns of facial lateralization.

12.
Normal observers demonstrate a bias to process the left sides of faces during perceptual judgments about identity or emotion. This effect suggests a right cerebral hemisphere processing bias. To test the role of the right hemisphere and the involvement of configural processing underlying this effect, young and older control observers and patients with right hemisphere damage completed two chimeric faces tasks (emotion judgment and face identity matching) with both upright and inverted faces. For control observers, the emotion judgment task elicited a strong left-sided perceptual bias that was reduced in young controls and eliminated in older controls by face inversion. Right hemisphere damage reversed the bias, suggesting the right hemisphere was dominant for this task, but that the left hemisphere could be flexibly recruited when right hemisphere mechanisms are not available or dominant. In contrast, face identity judgments were associated most clearly with a vertical bias favouring the uppermost stimuli that was eliminated by face inversion and right hemisphere lesions. The results suggest these tasks involve different neurocognitive mechanisms. The role of the right hemisphere and ventral cortical stream involvement with configural processes in face processing is discussed.

13.
Previous studies (Lanzetta & Orr, 1980, 1981; Orr & Lanzetta, 1980) have demonstrated that fear facial expressions have the functional properties of conditioned excitatory stimuli, while happy expressions behave as conditioned inhibitors of emotional responses. The present study uses a summation conditioning procedure to distinguish between associative and nonassociative (selective sensitization, attentional) interpretations of these findings. A neutral tone was first established as a conditioned excitatory CS by reinforcing tone presentations with shock. In subsequent nonreinforced test trials the excitatory tone was paired with either fear, happy, or neutral facial expressions. A tone alone and a tone/nonface slide compound were used as controls. The results indicate that phasic and tonic skin conductance responses to the tone/fear-expression compound were significantly larger during extinction than those for all other experimental and control groups; no significant differences were found among these latter conditions. The findings support the assumption that the excitatory characteristics of fear expressions do not depend on associative mechanisms: in the presence of fear cues, fear facial expressions intensify the emotional reaction and disrupt extinction of a previously acquired fear response. Happy facial expressions, however, did not function as conditioned inhibitors in the absence of reinforcement, suggesting that the previously found inhibition was associative in nature.
This research was supported by NSF grant No. 77-08926 and by funds from the Lincoln Filene Endowment to Dartmouth College.

14.
Motivation and Emotion - Emotion expressions facilitate interpersonal communication by conveying information about a person's affective state. The current work investigates how facial...

15.
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
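The Hu scores reported above are, presumably, Wagner's (1993) unbiased hit rates, which discount raw accuracy by how often each response label is used. A minimal sketch of that computation from a stimulus-by-response confusion matrix (the counts below are hypothetical, not UNCEEF data):

```python
def unbiased_hit_rates(confusion):
    """Unbiased hit rate (Hu) per category from a confusion matrix.

    confusion[i][j] = number of trials on which stimuli of category i
    received response j. Hu_i = diag(i)**2 / (row_total_i * col_total_i),
    i.e. the hit rate multiplied by the conditional probability that
    response i was correct when it was used.
    """
    n = len(confusion)
    col_totals = [sum(row[j] for row in confusion) for j in range(n)]
    hu = []
    for i, row in enumerate(confusion):
        denom = sum(row) * col_totals[i]
        hu.append(row[i] ** 2 / denom if denom else 0.0)
    return hu

# Hypothetical 2-emotion matrix: 8/10 happy stimuli judged happy,
# 8/10 sad stimuli judged sad.
hu_scores = unbiased_hit_rates([[8, 2], [2, 8]])  # [0.64, 0.64]
```

Unlike the raw recognition rate (0.8 here), Hu penalizes a judge who inflates accuracy for one emotion by overusing its label, which is why stimulus-set validations report it alongside plain recognition rates.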

16.
Alexithymia, a characteristic involving a limited affective vocabulary, appears to involve three components: difficulty identifying feelings, difficulty describing feelings, and externally oriented thinking. There is evidence that alexithymic characteristics are associated with differences in emotion information processing. We examined the role of temporal factors in alexithymic emotion-processing deficits, taking into account the confound between alexithymic characteristics and positive and negative affectivity. One hundred forty-six participants completed the 20-item Toronto Alexithymia Scale and the Positive and Negative Affect Schedule. In a signal-detection paradigm, participants judged facial expressions depicting neutral or negative emotions under slow and rapid presentation conditions. The alexithymia component of difficulty describing feelings was inversely related to the ability to detect expressions of negative emotion in the speeded condition. This relationship was independent of positive and negative affectivity. The alexithymia components and positive and negative affectivity were unrelated to response bias. The results emphasize the influence of difficulty describing feelings within the alexithymia construct and its difference from positive and negative affectivity. They suggest that an alexithymic deficit in describing feelings is associated with a deficit in processing negative emotions that is most apparent when processing capacity is challenged. Theoretical and methodological implications are discussed.
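The signal-detection analysis above separates detection ability from response bias. As a minimal sketch under the standard equal-variance Gaussian model (the log-linear correction and the counts in the example are assumptions of this illustration, not details taken from the study), sensitivity d' and criterion c can be computed from hit and false-alarm counts:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and response bias c under the equal-variance model.

    A log-linear correction (add 0.5 per cell) keeps the z-transform
    finite when a hit or false-alarm rate would otherwise be 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: 40 hits, 10 misses, 10 false alarms, 40 correct rejections.
d_prime, criterion = dprime_criterion(40, 10, 10, 40)
```

With these symmetric counts d' comes out around 1.6 and c near zero; a finding that a trait predicts d' but not c, as in this abstract, means it affects detection sensitivity rather than a tendency to favour one response.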

17.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last images perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last images of dynamic stimuli that an observer perceived were of a facial configuration showing stronger emotional intensity than the image actually presented. The more the velocity increased, the more the perceptual image of facial expression intensified. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.

18.
Facial expressions are one example of emotional behavior that illustrate the importance of emotions to both basic survival and social interaction. Basic facial responses to stimuli such as sweet and bitter taste are important for species fitness and governed by simple rules. Even at this basic level, facial responses have communicative value to other species members. During evolution simple facial responses were extended for use in more complex nonverbal communications; the responses are labile. The perception and production of facial expressions are cognitive processes and numerous subcortical and cortical areas contribute to these operations. We suggest that no specific emotion center exists over and above cognitive systems in the brain, and that emotion should not be divorced from cognition.

19.
How similar are the meanings of facial expressions of emotion and the emotion terms frequently used to label them? In three studies, subjects made similarity judgments and emotion self-report ratings in response to six emotion categories represented in Ekman and Friesen's Pictures of Facial Affect, and their associated labels. Results were analyzed with respect to the constituent facial movements using the Facial Action Coding System, and using consensus analysis, multidimensional scaling, and inferential statistics. Shared interpretation of meaning was found between individuals and the group, with congruence between the meaning in facial expressions, labeling using basic emotion terms, and subjects' reported emotional responses. The data suggest that (1) the general labels used by Ekman and Friesen are appropriate but may not be optimal, (2) certain facial movements contribute more to the perception of emotion than do others, and (3) perception of emotion may be categorical rather than dimensional.

20.
Three age groups of participants (6-8 years, 9-11 years, adults) performed two tasks: a face recognition task and a Garner task. In the face recognition task, participants were presented with 20 faces and then had to recognize them among 20 new faces. In the Garner task, participants had to sort, as fast as possible, photographs of two persons expressing two emotions while attending to only one of the two dimensions (identity or emotion). While the sorting task was performed on one dimension, the other dimension was varied in either a correlated, a constant, or an orthogonal way in distinct subsessions. The results indicated an age-related increase in face recognition ability. They also showed an interference of identity in the emotion-sorting task that was similar across the three age groups. In contrast, an interference of emotion in the identity-sorting task was significant only for the children and was largest for the youngest group. These observations suggest that the development of face recognition ability rests on the development of the ability to attend selectively to identity, without paying attention to emotional facial expression.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号