Similar Articles (20 results)
1.
Theoretical models of attention for affective information have assigned a special status to the cognitive processing of emotional facial expressions. One specific claim in this regard is that emotional faces automatically attract visual attention. In three experiments, the authors investigated attentional cueing by angry, happy, and neutral facial expressions presented under conditions of limited awareness. In these experiments, facial expressions were presented in a masked (14 ms or 34 ms, masked by a neutral face) and unmasked fashion (34 ms or 100 ms). Compared with trials containing neutral cues, delayed responding was found on trials with emotional cues in the unmasked, 100-ms condition, suggesting stronger allocation of cognitive resources to emotional faces. However, in both masked and unmasked conditions, the hypothesized cueing of visual attention to the location of the emotional facial expression was not found. On the contrary, attentional cueing by emotional faces was weaker than cueing by neutral faces in the unmasked, 100-ms condition. These data suggest that briefly presented emotional faces influence cognitive processing but do not automatically capture visual attention.

2.
The human face conveys important social signals when people interact in social contexts. The current study investigated the relationship between face recognition and emotional intelligence, and how societal factors of emotion and race influence people's face recognition. Participants’ recognition accuracy, reaction time, sensitivity, and response bias were measured to examine their face‐processing ability. Fifty Caucasian undergraduates (38 females, 12 males; average age = 21.76 years) participated in a face recognition task in which they discriminated previously presented target faces from novel distractor faces. A positive correlation between participants’ emotional intelligence scores and their performance on the face recognition task was observed, suggesting that face recognition ability was associated with emotional or social intelligence. Additionally, Caucasian participants recognized happy faces better than angry or neutral faces. It was also observed that people recognized Asian faces better than Caucasian ones, which appears to contradict the classic other‐race effect. The present study suggests that some societal factors could influence face processing, and face recognition ability could in turn predict social intelligence.
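The sensitivity and response-bias measures mentioned in this abstract are standard signal-detection quantities. A minimal sketch of how they are computed from hit and false-alarm rates (the function name is illustrative, not from the study):

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response bias (criterion c).

    Rates should be corrected away from 0 and 1 (e.g., with a log-linear
    correction) before calling, since the inverse normal CDF is undefined
    at the extremes.
    """
    z = NormalDist().inv_cdf           # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)  # separation of signal/noise distributions
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # positive c = conservative responding
    return d_prime, c
```

For example, a hit rate of .80 with a false-alarm rate of .20 yields d' ≈ 1.68 with no bias (c = 0).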

3.
Facial expressions play a key role in affective and social behavior. However, the temporal dynamics of the brain responses to emotional faces remain unclear; in particular, it is an open question at what stage of face processing expressions might influence encoding and recognition memory. To answer this question we recorded the event-related potentials (ERPs) elicited in an old/new recognition task. A novel aspect of the present design was that whereas faces were presented during the study phase with a happy, fearful, or neutral expression, they were always neutral during the memory retrieval task. The ERP results showed three main findings: an enhanced early fronto-central positivity for faces encoded as fearful, both during the study and the retrieval phase; during encoding, the subsequent-memory (Dm) effect was influenced by emotion, and at retrieval the early components P100 and N170 were modulated by the emotional expression of the face at the encoding phase; finally, the later ERP components related to recognition memory were modulated by the previously encoded facial expressions. Overall, these results suggest that face recognition is modulated by top-down influences from brain areas associated with emotional memory, enhancing encoding and retrieval in particular for fearful emotional expressions.

4.
To study links between rapid ERP responses to fearful faces and conscious awareness, a backward‐masking paradigm was employed where fearful or neutral target faces were presented for different durations and were followed by a neutral face mask. Participants had to report target face expression on each trial. When masked faces were clearly visible (200 ms duration), an early frontal positivity, a later more broadly distributed positivity, and a temporo‐occipital negativity were elicited by fearful relative to neutral faces, confirming findings from previous studies with unmasked faces. These emotion‐specific effects were also triggered when masked faces were presented for only 17 ms, but only on trials where fearful faces were successfully detected. When masked faces were shown for 50 ms, a smaller but reliable frontal positivity was also elicited by undetected fearful faces. These results demonstrate that early ERP responses to fearful faces are linked to observers' subjective conscious awareness of such faces, as reflected by their perceptual reports. They suggest that frontal brain regions involved in the construction of conscious representations of facial expression are activated at very short latencies.

5.
Studies using facial emotional expressions as stimuli partially support the assumption of biased processing of social signals in social phobia. This pilot study explored for the first time whether individuals with social phobia display a processing bias towards emotional prosody. Fifteen individuals with generalized social phobia and fifteen healthy controls (HC) matched for gender, age, and education completed a recognition test consisting of meaningless utterances spoken in a neutral, angry, sad, fearful, disgusted or happy tone of voice. Participants also evaluated the stimuli with regard to valence and arousal. While these ratings did not differ significantly between groups, analysis of the recognition test revealed enhanced identification of sad and fearful voices and decreased identification of happy voices in individuals with social phobia compared with HC. The two groups did not differ in their processing of neutral, disgust, and anger prosody.

6.
Unconscious processing of stimuli with emotional content can bias affective judgments. Is this subliminal affective priming merely a transient phenomenon manifested in fleeting perceptual changes, or are long-lasting effects also induced? To address this question, we investigated memory for surprise faces 24 h after they had been shown with 30-ms fearful, happy, or neutral faces. Surprise faces subliminally primed by happy faces were initially rated as more positive, and were later remembered better, than those primed by fearful or neutral faces. Participants likely to have processed primes supraliminally did not respond differentially as a function of expression. These results converge with findings showing memory advantages with happy expressions, though here the expressions were displayed on the face of a different person, perceived subliminally, and not present at test. We conclude that behavioral biases induced by masked emotional expressions are not ephemeral, but rather can last at least 24 h.

7.
Face recognition occurs when a face is recognised despite changes between learning and test exposures. Yet there has been relatively little research on how variations in emotional expressions influence people’s ability to recognise these changes. We evaluated the ability to discriminate old and similar expressions of emotions (i.e. mnemonic discrimination) of the same face, as well as the discrimination ability between old and dissimilar (new) expressions of the same face, reflecting traditional discrimination. An emotional mnemonic discrimination task with morphed faces that were similar but not identical to the original face was used. Results showed greater mnemonic discrimination for learned neutral expressions that at test became slightly more fearful rather than happy. For traditional discrimination, there was greater accuracy for learned happy faces becoming fearful, rather than those changing from fearful-to-happy. These findings indicate that emotional expressions may have asymmetrical influences on mnemonic and traditional discrimination of the same face.

8.
We investigated the source of the visual search advantage of some emotional facial expressions. An emotional face target (happy, surprised, disgusted, fearful, angry, or sad) was presented in an array of neutral faces. Faster detection was found for happy targets, with angry and, especially, sad targets being detected more poorly. Physical image properties (e.g., luminance) were ruled out as a potential source of these differences in visual search. In contrast, the search advantage is partly due to the facilitated processing of affective content, as shown by an emotion identification task. Happy expressions were identified faster than the other expressions and were less likely to be confused with neutral faces, whereas misjudgements occurred more often for angry and sad expressions. Nevertheless, the distinctiveness of some local features (e.g., teeth) that are consistently associated with emotional expressions plays the strongest role in the search advantage pattern. When the contribution of these features to visual search was factored out statistically, the advantage disappeared.

9.
Previous studies have shown that the human visual system can detect a face and elicit a saccadic eye movement toward it very efficiently compared to other categories of visual stimuli. In the first experiment, we tested the influence of facial expressions on fast face detection using a saccadic choice task. Face-vehicle pairs were simultaneously presented and participants were asked to saccade toward the target (the face or the vehicle). We observed that saccades toward faces were initiated faster, and more often in the correct direction, than saccades toward vehicles, regardless of the facial expressions (happy, fearful, or neutral). We also observed that saccade endpoints on face images were lower when the face was happy and higher when it was neutral. In the second experiment, we explicitly tested the detection of facial expressions. We used a saccadic choice task with emotional-neutral pairs of faces and participants were asked to saccade toward the emotional (happy or fearful) or the neutral face. Participants were faster when they were asked to saccade toward the emotional face. They also made fewer errors, especially when the emotional face was happy. Using computational modeling, we showed that this happy face advantage can, at least partly, be explained by perceptual factors. Also, saccade endpoints were lower when the target was happy than when it was fearful. Overall, we suggest that there is no automatic prioritization of emotional faces, at least for saccades with short latencies, but that salient local face features can automatically attract attention.

10.
Four experiments are reported investigating recognition of emotional expressions in very briefly presented facial stimuli. The faces were backward masked by neutral facial displays, and recognition of facial expressions was analyzed as a function of the manipulation of different parameters in the masking procedure. The main conclusion was that the stimulus onset asynchrony (SOA) between target and mask was the principal factor influencing recognition of the masked expressions. In general, confident recognition of facial expressions required about 100–150 ms, with shorter times for happy than for angry expressions. The manipulation of the duration of either the target or the mask, by itself, had only minimal effects.
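The timing relations in a backward-masking procedure like the one above can be made concrete with a small helper: SOA is the interval from target onset to mask onset, and on a real display durations are quantized to whole refresh frames. The function names and the 60-Hz refresh rate are illustrative assumptions, not details from the experiments:

```python
def soa_ms(target_ms, isi_ms=0.0):
    """Stimulus onset asynchrony: target onset to mask onset.

    With no inter-stimulus interval, SOA equals the target duration.
    """
    return target_ms + isi_ms

def frames_for(duration_ms, refresh_hz=60):
    """Nearest whole number of refresh frames realizing a nominal duration."""
    frame_ms = 1000.0 / refresh_hz   # ~16.67 ms per frame at 60 Hz
    return max(1, round(duration_ms / frame_ms))
```

For instance, a 100-ms SOA corresponds to exactly 6 frames at 60 Hz, while a nominal 14-ms target can only be shown for a single ~16.7-ms frame.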

11.
Poor inhibitory control over negative emotional information has been identified as a possible contributor to affective disorders, but the distinct effects of emotional contrast and fearful versus angry faces on response inhibition remain unknown. In the present study, young adults completed an emotional go/no-go task involving happy, neutral, and either fearful or angry faces. Results did not reveal differences in accuracy or speed between angry and fearful face conditions. However, responses were slower and indicated poorer inhibition in blocks where threatening faces were paired with happy, versus neutral, faces. Results may reflect the cognitive load of emotional valence contrast, such that higher-contrast blocks (containing threatening with happy faces) produced more conflict and required more processing than lower-contrast blocks (threatening with neutral faces). Preliminary findings also revealed that higher anxiety and depression symptoms corresponded with slower responses and worse accuracy, consistent with patterns of adverse impacts of anxiety and depression on response inhibition to threatening faces, even at subclinical levels of symptomatology.

12.
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that the visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection.
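The computational saliency modeling referred to above can be sketched, in heavily reduced form, as an intensity center-surround contrast map. This is only an illustrative sketch of the general center-surround idea; the model actually used by the authors was more elaborate (multi-scale, multi-feature):

```python
import numpy as np

def _box_mean(img, k):
    """Local mean over a (2k+1)x(2k+1) window, via an integral image."""
    n = 2 * k + 1
    pad = np.pad(img.astype(float), k, mode="edge")
    c = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    c[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    # window sums from the integral image, then normalize by window area
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def center_surround_saliency(img, center_k=1, surround_k=3):
    """Intensity saliency: |fine-scale local mean - coarse-scale local mean|.

    Regions that differ from their surround (e.g., bright teeth in a smiling
    mouth) yield high values; uniform regions yield zero.
    """
    return np.abs(_box_mean(img, center_k) - _box_mean(img, surround_k))
```

A single bright pixel on a dark background produces a saliency peak at that location, while a uniform image produces a flat zero map.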

13.
People high in social anxiety experience fear of social situations due to the likelihood of social evaluation. Whereas happy faces are generally processed very quickly, this effect is impaired by high social anxiety. Mouth regions are implicated during emotional face processing; therefore, differences in mouth salience might affect how social anxiety relates to emotional face discrimination. We designed an emotional facial expression recognition task to reveal how varying levels of sub-clinical social anxiety (measured by questionnaire) related to the discrimination of happy and fearful faces, and of happy and angry faces. We also categorised the facial expressions by the salience of the mouth region (i.e. high [open mouth] vs. low [closed mouth]). In a sample of 90 participants, higher social anxiety (relative to lower social anxiety) was associated with a reduced happy face reaction time advantage. However, this effect was mainly driven by the faces with less salient closed mouths. Our results are consistent with theories of anxiety that incorporate an oversensitive valence evaluation system.

14.
Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

15.
The present study examines the extent to which attentional biases in contamination fear commonly observed in obsessive-compulsive disorder (OCD) are specific to disgust or fear cues, as well as the components of attention involved. Eye tracking was used to provide greater sensitivity and specificity than afforded by traditional reaction time measures of attention. Participants high (HCF; n = 23) and low (LCF; n = 25) in contamination fear were presented with disgusted, fearful, or happy faces paired with neutral faces for 3-s trials. Evidence of both vigilance and maintenance-based biases for threat was found. The high group oriented attention to fearful faces but not disgusted faces compared to the low group. However, the high group maintained attention on both disgusted and fearful expressions compared to the low group, a pattern consistent across the 3-s trials. The implications of these findings for conceptualizing emotional factors that moderate attentional biases in contamination-based OCD are discussed.

16.
Sixteen clinically depressed patients and sixteen healthy controls were presented with a set of emotional facial expressions and were asked to identify the emotion portrayed by each face. They were subsequently given a recognition memory test for these faces. There was no difference between the groups in their ability to identify emotions from faces. All participants identified emotional expressions more accurately than neutral expressions, with happy expressions being identified most accurately. During the recognition memory phase, the depressed patients demonstrated superior memory for sad expressions, and inferior memory for happy expressions, relative to neutral expressions. Conversely, the controls demonstrated superior memory for happy expressions, and inferior memory for sad expressions, relative to neutral expressions. These results are discussed in terms of the cognitive model of depression proposed by Williams, Watts, MacLeod, and Mathews (1997).

17.
The gaze of a fearful face should be a particularly effective cue to attention; it allows one to rapidly allocate attention to potential threats. Prior data from investigations of this issue have been mixed. We report a novel method in which the gazes of two faces simultaneously cued different directions. Across trials, the emotion expressed by each face varied between happy, neutral, and fearful. Results showed that attention followed a fearful gaze when it competed with a neutral gaze but did not consistently follow a happy gaze when it competed with a neutral gaze. These results suggest that fear moderates the effectiveness of gaze cuing, and we present a parsimonious account that reconciles previously inconsistent data. We also found that presenting a fearful and a happy face simultaneously eliminates this effect, suggesting that emotional expressions interact in ways that may be important for understanding how emotional stimuli influence attention in more complex environments.

18.
Perceptions of age influence how we evaluate, approach, and interact with other people. Based on a paramorphic human judgment model, the present study investigates possible determinants of accuracy and bias in age estimation across the adult life span. For this purpose, 154 young, middle-aged, and older participants of both genders estimated the age of 171 faces of young, middle-aged, and older men and women, portrayed on a total of 2,052 photographs. Each face displayed either an angry, fearful, disgusted, happy, sad, or neutral expression (FACES database; Ebner, Riediger, & Lindenberger, 2010). We found that age estimation ability decreased with age. Older and young adults, however, were more accurate and less biased in estimating the age of members of their own age group as compared with those of the other age group. In contrast, no reliable own-gender advantage was observed. Generally, the age of older faces was more difficult to estimate than the age of younger faces. Furthermore, facial expressions had a substantial impact on the accuracy and bias of age estimation. Relative to other facial expressions, the age of neutral faces was estimated most accurately, while the age of faces displaying happy expressions was most likely to be underestimated. Results are discussed in terms of methodological and practical implications for research on age estimation.
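Accuracy and bias in age estimation, as distinguished in the abstract above, are commonly operationalized as mean absolute error and mean signed error, respectively. A minimal sketch (the function name and these specific scoring choices are illustrative assumptions, not the study's exact measures):

```python
def age_estimation_scores(estimates, true_ages):
    """Separate bias from accuracy in age judgments.

    bias: mean signed error (positive = systematic overestimation,
          negative = systematic underestimation, as for happy faces above).
    mae:  mean absolute error (lower = more accurate estimation).
    """
    errors = [est - true for est, true in zip(estimates, true_ages)]
    bias = sum(errors) / len(errors)
    mae = sum(abs(e) for e in errors) / len(errors)
    return bias, mae
```

Note that the two measures can dissociate: estimates of 25 and 35 for two 30-year-old faces have zero bias but a 5-year mean absolute error.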

19.
We establish attentional capture by emotional distractor faces presented as a "singleton" in a search task in which the emotion is entirely irrelevant. Participants searched for a male (or female) target face among female (or male) faces and indicated whether the target face was tilted to the left or right. The presence (vs. absence) of an irrelevant emotional singleton expression (fearful, angry, or happy) in one of the distractor faces slowed search reaction times compared to the singleton absent or singleton target conditions. Facilitation for emotional singleton targets was found for the happy expression but not for the fearful or angry expressions. These effects were found irrespective of face gender, and the failure of a singleton neutral face to capture attention among emotional faces rules out a visual odd-one-out account for the emotional capture. The present study thus establishes irrelevant, emotional, attentional capture.

20.
To investigate the time course of emotional expression processing, we recorded ERP responses to stimulus arrays containing neutral versus angry, disgusted, fearful, happy, sad, or surprised faces. In one half of the experiment, the task was to discriminate emotional and neutral facial expressions. Here, an enhanced early frontocentral positivity was elicited in response to emotional as opposed to neutral faces, followed by a broadly distributed positivity and an enhanced negativity at lateral posterior sites. These emotional expression effects were very similar for all six basic emotional expressions. In the other half of the experiment, attention was directed away from the faces toward a demanding perceptual discrimination task. Under these conditions, emotional expression effects were completely eliminated, demonstrating that brain processes involved in the detection and analysis of facial expression require focal attention. The face-specific N170 component was unaffected by any emotional expression, supporting the hypothesis that structural encoding and expression analysis are independent processes.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号