Similar Documents
20 similar documents retrieved; search time: 31 ms
1.
Classification of faces as to their sex or their expression—with sex and expression varying orthogonally—was studied in three experiments. In Experiment 1, expression classification was influenced by sex, with angry male faces being classified faster than angry female faces. Complementarily, sex classification was faster for happy than for angry female faces. In Experiment 2, mutual interaction of sex and expression was also found when the participants were asked to classify top and bottom face segments. In Experiment 3, a face inversion effect was found for both sex and expression classification of whole faces. However, a symmetrical interaction between sex and expression was again found. The results are discussed in terms of configural versus feature processing in the perception of face sex and expression and of their relevance to face perception models that postulate independent processing of different facial features. © 2009 The Psychonomic Society, Inc.

2.
Previous research suggests that neural and behavioral responses to surprised faces are modulated by explicit contexts (e.g., "He just found $500"). Here, we examined the effect of implicit contexts (i.e., valence of other frequently presented faces) on both valence ratings and ability to detect surprised faces (i.e., the infrequent target). In Experiment 1, we demonstrate that participants interpret surprised faces more positively when they are presented within a context of happy faces, as compared to a context of angry faces. In Experiments 2 and 3, we used the oddball paradigm to evaluate the effects of clearly valenced facial expressions (i.e., happy and angry) on default valence interpretations of surprised faces. We offer evidence that the default interpretation of surprise is negative, as participants were faster to detect surprised faces when presented within a happy context (Exp. 2). Finally, we kept the valence of the contexts constant (i.e., surprised faces) and showed that participants were faster to detect happy than angry faces (Exp. 3). Together, these experiments demonstrate the utility of the oddball paradigm to explore the default valence interpretation of presented facial expressions, particularly the ambiguously valenced facial expression of surprise.

3.
Empirical tests of the "right hemisphere dominance" versus "valence" theories of emotion processing are confounded by known sex differences in lateralization. Moreover, information about the sex of the person posing an emotion might be processed differently by men and women because of an adaptive male bias to notice expressions of threat and vigilance in other male faces. The purpose of this study was to investigate whether sex of poser and emotion displayed influenced lateralization in men and women by analyzing "laterality quotient" scores on a test which depicts vertically split chimeric faces, formed with one half showing a neutral expression and the other half showing an emotional expression. We found that men (N = 50) were significantly more lateralized for emotions indicative of vigilance and threat (happy, sad, angry, and surprised) in male faces relative to female faces and compared to women (N = 44). These data indicate that sex differences in functional cerebral lateralization for facial emotion may be specific to the emotion presented and the sex of face presenting it.
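The abstract does not spell out how the "laterality quotient" is scored; a minimal sketch, assuming the common (R − L)/(R + L) form over counts of left- and right-hemiface choices on the chimeric-faces test (the study's exact formula may differ), could look like:

```python
def laterality_quotient(left_choices: int, right_choices: int) -> float:
    """Quotient in [-1, 1]; negative values indicate a left-visual-field
    (right-hemisphere) bias.

    left_choices / right_choices: number of chimeric-face trials on which
    the emotional half-face in the viewer's left vs. right visual field
    drove the judgment. (Hypothetical scoring, for illustration only.)
    """
    total = left_choices + right_choices
    if total == 0:
        raise ValueError("no trials scored")
    return (right_choices - left_choices) / total
```

On this convention, a participant who based 30 of 40 judgments on the left hemiface would score −0.5, i.e., a right-hemisphere bias.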

4.
Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247–279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
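The cited Miller (1982) test compares the redundant-target RT distribution against the sum of the single-target distributions; a sketch of that race-model-inequality check, assuming raw RT samples per condition and hypothetical variable names, might be:

```python
def race_model_violations(rt_redundant, rt_single_a, rt_single_b, t_grid):
    """Return the times t at which Miller's race-model inequality is violated:
        P(RT <= t | redundant) > P(RT <= t | A alone) + P(RT <= t | B alone).
    Any violation implies coactivation (superadditive integration) rather
    than a race between independent channels."""
    def ecdf(rts, t):
        # empirical cumulative distribution: proportion of RTs at or below t
        return sum(rt <= t for rt in rts) / len(rts)
    return [t for t in t_grid
            if ecdf(rt_redundant, t) > ecdf(rt_single_a, t) + ecdf(rt_single_b, t)]
```

In practice the inequality is evaluated at fast quantiles of the pooled distribution, since violations can only occur in the left tail.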

5.
Facial expressions serve as cues that encourage viewers to learn about their immediate environment. In studies assessing the influence of emotional cues on behavior, fearful and angry faces are often combined into one category, such as "threat-related," because they share similar emotional valence and arousal properties. However, these expressions convey different information to the viewer. Fearful faces indicate the increased probability of a threat, whereas angry expressions embody a certain and direct threat. This conceptualization predicts that a fearful face should facilitate processing of the environment to gather information to disambiguate the threat. Here, we tested whether fearful faces facilitated processing of neutral information presented in close temporal proximity to the faces. In Experiment 1, we demonstrated that, compared with neutral faces, fearful faces enhanced memory for neutral words presented in the experimental context, whereas angry faces did not. In Experiment 2, we directly compared the effects of fearful and angry faces on subsequent memory for emotional faces versus neutral words. We replicated the findings of Experiment 1 and extended them by showing that participants remembered more faces from the angry face condition relative to the fear condition, consistent with the notion that anger differs from fear in that it directs attention toward the angry individual. Because these effects cannot be attributed to differences in arousal or valence processing, we suggest they are best understood in terms of differences in the predictive information conveyed by fearful and angry facial expressions.

6.
Using the item-method directed forgetting paradigm (i.e. intentionally forgetting specified information), we examined directed forgetting of facial identity as a function of facial expression and the sex of the expresser and perceiver. Participants were presented with happy and angry male and female faces cued for either forgetting or remembering, and were then asked to recognise previously studied faces from among a series of neutral faces. For each recognised test face, participants also recalled the face’s previously displayed emotional expression. We found that angry faces were more resistant to forgetting than were happy faces. Furthermore, angry expressions on male faces and happy expressions on female faces were recognised and recalled better than vice versa. Signal detection analyses revealed that male faces gave rise to a greater sensitivity than female faces did, and male participants, but not female participants, showed greater sensitivity to male faces than to female faces. Several theoretical implications are discussed.
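The signal detection "sensitivity" reported here is presumably d′; a minimal sketch using the standard z(hit rate) − z(false-alarm rate) definition, with a log-linear correction added by assumption (the abstract does not state which correction, if any, was used):

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate).

    The +0.5 / +1 log-linear correction keeps both rates strictly between
    0 and 1, so the inverse normal CDF stays finite even for perfect or
    empty cells."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Chance performance (equal hit and false-alarm rates) gives d′ = 0; higher values mean old faces are better discriminated from new ones.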

7.
Dot-probe studies usually find an attentional bias towards threatening stimuli only in anxious participants, but not in non-anxious participants. In the present study, we conducted two experiments to investigate whether attentional bias towards angry faces in unselected samples is moderated by the extent to which the current task requires social processing. In Experiment 1, participants performed a dot-probe task involving classification of either socially meaningful targets (schematic faces) or meaningless targets (scrambled schematic faces). Targets were preceded by two photographic face cues, one angry and one neutral. Angry face cues only produced significant cueing scores (i.e. faster target responses if the target replaced the angry face compared to the neutral face) with socially meaningful targets, not with meaningless targets. In Experiment 2, participants classified only meaningful targets, which were either socially meaningful (schematic faces) or not (schematic houses). Again, mean cueing scores were significantly moderated by the social character of the targets. However, cueing scores in this experiment were non-significant in the social target condition and significantly negative in the non-social target condition. These results suggest that attentional bias towards angry faces in the dot-probe task is moderated by the activation of a social processing mode in unselected samples.
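The cueing score defined parenthetically above is a simple RT difference; a sketch with hypothetical trial lists:

```python
def cueing_score(rts_target_at_neutral, rts_target_at_angry):
    """Mean RT when the probe replaces the neutral face minus mean RT when
    it replaces the angry face (in ms). Positive scores mean faster
    responses at the angry face's location, i.e. attention was drawn to
    the angry cue; negative scores indicate avoidance of it."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_target_at_neutral) - mean(rts_target_at_angry)
```

For example, mean RTs of 530 ms at the neutral location and 500 ms at the angry location would yield a cueing score of +30 ms, a bias towards the angry face.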

8.
Human beings live in an uncertain world, but they continuously generate top-down predictions about the emotional face information of other people around them. Although prediction processing has repeatedly been investigated in the prediction literature, little is known about the impact of rejection sensitivity (RS) on individuals’ emotional face prediction. To this end, high and low RS participants were asked to perform an identification task on emotional faces in which target faces were shown with either an angry or a happy expression while their brain responses were recorded using the event-related potential (ERP) technique. The behavioral results suggested an effect of emotional face prediction. For the P100 component, low RS participants showed longer P100 peak latencies in the left than the right hemisphere when they viewed predictable emotional faces. In addition, low RS participants showed larger N170 mean amplitudes for angry compared to happy faces when they perceived unpredictable faces, but not when these faces were predictable, presumably reflecting a sensitivity to angry faces on unpredictable trials. Interestingly, high RS participants did not demonstrate such an N170 difference, suggesting that high RS participants showed a reduced sensitivity to unpredictable angry faces. Furthermore, angry faces triggered larger LPP mean amplitudes than happy faces, consistent with the results of other ERP studies examining the processing of emotional faces. Finally, we observed significant negative correlations between the behavioral and ERP prediction effects, indicating consistency between the behavioral and electrophysiological data.

9.
This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., "angry", "happy") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with "happy" than in response to faces paired with "angry." Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.

10.
Neuroimaging data suggest that emotional information, especially threatening faces, automatically captures attention and receives rapid processing. While this is consistent with the majority of behavioral data, behavioral studies of the attentional blink (AB) additionally reveal that aversive emotional first target (T1) stimuli are associated with prolonged attentional engagement or "dwell" time. One explanation for this difference is that few AB studies have utilized manipulations of facial emotion as the T1. To address this, schematic faces varying in expression (neutral, angry, happy) served as the T1 in the current research. Results revealed that the blink associated with an angry T1 face was, primarily, of greater magnitude than that associated with either a neutral or happy T1 face, and also that initial recovery from this processing bias was faster following angry, compared with happy, T1 faces. The current data therefore provide important information regarding the time-course of attentional capture by angry faces: Angry faces are associated with both the rapid capture and rapid release of attention.

11.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky "sentences" spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky "sentences" in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.

12.
Two studies examined whether appraisals can be differentially affected by subliminal anger and sadness primes. Participants from Singapore (Experiment 1) and China (Experiment 2) were exposed to either subliminal angry faces or subliminal sad faces. Supporting appraisal theories of emotions, participants exposed to subliminal angry faces were more likely to appraise negative events as caused by other people and those exposed to subliminal sad faces were more likely to appraise the same events as caused by situational factors. The results provide the first evidence for subliminal emotion-specific cognitive effects. They show that cognitive functions such as appraisals can be affected by subliminal emotional stimuli of the same valence.

13.
We examined the influence of social anxiety on memory for both identity and emotional expressions of unfamiliar faces. Participants high and low in social anxiety were presented with happy and angry faces and were later asked to recognise the same faces displaying a neutral expression. They also had to remember what the initial expressions of the faces had been. Remember/know/guess judgements were asked both for identity and expression memory. For participants low in social anxiety, both identity and expression memory were more often associated with "remember" responses when the faces were previously seen with a happy rather than an angry expression. In contrast, the initial expression of the faces did not affect either identity or expression memory for participants high in social anxiety. We interpreted these findings by arguing that most people tend to preferentially elaborate positive rather than negative social stimuli that are important to the self and that this tendency may be reduced in high socially anxious individuals because of the negative meaning they tend to ascribe to positive social information.

14.
The present paper reports three new experiments suggesting that the valence of a face cue can influence attentional effects in a cueing paradigm. Moreover, heightened trait anxiety resulted in increased attentional dwell-time on emotional facial stimuli, relative to neutral faces. Experiment 1 presented a cueing task, in which the cue was either an "angry", "happy", or "neutral" facial expression. Targets could appear either in the same location as the face (valid trials) or in a different location to the face (invalid trials). Participants did not show significant variations across the different cue types (angry, happy, neutral) in responding to a target on valid trials. However, the valence of the face did affect response times on invalid trials. Specifically, participants took longer to respond to a target when the face cue was "angry" or "happy" relative to neutral. In Experiment 2, the cue-target stimulus onset asynchrony (SOA) was increased and an overall inhibition of return (IOR) effect was found (i.e., slower responses on valid trials). However, the "angry" face cue eliminated the IOR effect for both high and low trait anxious groups. In Experiment 3, threat-related and jumbled facial stimuli reduced the magnitude of IOR for high, but not for low, trait-anxious participants. These results suggest that: (i) attentional bias in anxiety may reflect a difficulty in disengaging from threat-related and emotional stimuli, and (ii) threat-related and ambiguous cues can influence the magnitude of the IOR effect.

15.
Research on the interaction of emotional expressions with social category cues in face processing has focused on whether specific emotions are associated with single-category identities, thus overlooking the influence of intersectional identities. Instead, we examined how quickly people categorise intersectional targets by their race, gender, or emotional expression. In Experiment 1, participants categorised Black and White faces displaying angry, happy, or neutral expressions by either race or gender. Emotion influenced responses to men versus women only when gender was made salient by the task. Similarly, emotion influenced responses to Black versus White targets only when participants categorised by race. In Experiment 2, participants categorised faces by emotion so that neither category was more salient. As predicted, responses to Black women differed from those to both Black men and White women. Thus, examining race and gender separately is insufficient for understanding how emotion and social category cues are processed.

16.
By asking participants to judge the relation between the emotional valences of simultaneously presented visual and auditory information, we examined the characteristics of audiovisual emotional integration. In Experiment 1, lexical valence and prosodic valence were congruent; in Experiment 2 they conflicted. Both experiments consistently found that when the facial expression was positive, participants judged the relation between the visual and auditory emotional information more accurately. Experiment 2 additionally found that when the facial expression was negative, participants judged the audiovisual relation faster on the basis of semantic cues than prosodic cues. These results suggest that when visual and auditory information are presented simultaneously, the visual information may be processed first and influence subsequent processing of the audiovisual relation.

17.
The results of two studies on the relationship between evaluations of trustworthiness, valence and arousal of faces are reported. In Experiment 1, valence and trustworthiness judgments of faces were positively correlated, while arousal was negatively correlated with both trustworthiness and valence. In Experiment 2, learning about faces based on their emotional expression, and the extent to which this learning is influenced by perceived trustworthiness, was investigated. Neutral faces of different models differing in trustworthiness were repeatedly associated with happy or with angry expressions, and the participants were asked to categorize each neutral face as belonging to a "friend" or to an "enemy" based on these associations. Four pairing conditions were defined in terms of the congruency between trustworthiness level and expression: trustworthy-congruent, trustworthy-incongruent, untrustworthy-congruent and untrustworthy-incongruent. Categorization accuracy during the learning phase and face evaluation after learning were measured. During learning, participants learned to categorize trustworthy and untrustworthy faces as friends or enemies with similar efficiency, and thus no effects of congruency were found. In the evaluation phase, faces of enemies were rated as more negative and arousing than those of friends, showing that learning was effective in changing the affective value of the faces. However, faces of untrustworthy models were still judged on average more negative and arousing than those of trustworthy ones. In conclusion, although face trustworthiness did not influence the learning of associations between faces and positive or negative social information, it did have a significant influence on face evaluation that was manifest even after learning.

18.
Emotion regulation (ER) strategies differ in when and how they influence emotion experience, expression, and concomitant cognition. However, no study to date has directly compared cognition in individuals who have a clear disposition for either cognitive or behavioural ER strategies. The present study compared selective attention to angry faces in groups of high trait-suppressors (people who hide their emotional reactions in response to emotional challenge) and high trait-reappraisers (people who cognitively reinterpret emotional events). Since reappraisers are also low trait-anxious and suppressors are high trait-anxious, high and low anxious control groups, both being low in trait-ER, were also included. Attention to angry faces was assessed using an emotional dot-probe task. Trait-reappraisers and high-anxious individuals both showed attentional biases towards angry faces. Trait-reappraisers’ vigilance for angry faces was significantly more pronounced compared to both trait-suppressors and low anxious controls. We suggest that threat prioritization in high trait-reappraisal may allow deeper cognitive processing of threat information without being associated with psychological maladjustment.

19.
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm where participants reproduced the duration of a facial emotion stimulus using an oval-shape stimulus or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend of under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to a facial emotion produces the relativity of time perception.

20.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号