Similar Literature
20 similar documents retrieved.
1.
Using faces as the priming stimuli, the present study explored the influence of facial expressions on the activation of gender stereotypes using a lexical decision paradigm. Experiment 1 explored the activation of gender stereotypes when the facial primes contained only gender information. The results showed that gender stereotypes were activated. In Experiment 2, the facial primes contained both gender category and expression information. The results indicated that gender stereotypes were not activated. Experiment 3 required participants to make emotion, gender, or impression decisions about the facial primes before the lexical decision task. The results showed that gender stereotypes were not activated in the emotion and impression decision conditions, whereas stereotypes were activated in the gender decision condition. These findings suggest that facial expressions can inhibit the automatic activation of gender stereotypes unless perceivers intentionally perform gender categorization on the prime faces.
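For readers unfamiliar with the paradigm, stereotype activation in a lexical decision task is usually indexed by a priming effect: faster word/nonword decisions for stereotype-consistent than stereotype-inconsistent trait words following a face prime. A minimal sketch of that arithmetic follows; all values and the example trait words are invented for illustration, not taken from the study.

```python
# Hypothetical priming-effect index for stereotype activation in a
# lexical decision task. Positive effect -> stereotype activated.
mean_rt_consistent   = 612.0   # ms, e.g., male face prime -> "assertive"
mean_rt_inconsistent = 648.0   # ms, e.g., male face prime -> "gentle"

priming_effect = mean_rt_inconsistent - mean_rt_consistent
activated = priming_effect > 0
print(f"priming effect: {priming_effect:.0f} ms, activated: {activated}")
```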

2.
Simulation processes and mirroring mechanisms appear to be necessary for the recognition of emotional facial expressions, and prefrontal areas have been found to support this simulation mechanism. The present research analyzed the role of the premotor area in processing emotional faces of different valence (positive vs. negative), considering both conscious and unconscious pathways. High-frequency rTMS (10 Hz) was applied to the prefrontal area to induce an activation response during overt (conscious) and covert (unconscious) processing. Twenty-two subjects were asked to detect emotion/no emotion (anger, fear, happiness, neutral). Error rates (ERs) and response times (RTs) were measured across the experimental conditions. ERs and RTs decreased with premotor brain activation, specifically in response to fear, in both the conscious and the unconscious condition. These results highlight the role of the premotor system in facial expression processing, supporting the existence of two analogous mechanisms for conscious and unconscious processing.

3.
The present research demonstrates that the attention bias to angry faces is modulated by how people categorize these faces. Since facial expressions contain psychologically meaningful information for social categorizations (e.g., gender, personality) but not for non-social categorizations (e.g., eye-color), angry facial expressions should especially capture attention during social categorization tasks. Indeed, in three studies, participants were slower to name the gender of angry compared to happy or neutral faces, but not their color (blue or green; Study 1) or eye-color (blue or brown; Study 2). Furthermore, when different eye-colors were linked to a personality trait (introversion, extraversion) versus sensitivity to light frequencies (high, low), angry faces only slowed down categorizations when eye-color was indicative of a social characteristic (Study 3). Thus, vigilance for angry facial expressions is contingent on people's categorization goals, supporting the perspective that even basic attentional processes are moderated by social influences.
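To make the dependent measure concrete, here is a minimal sketch of the kind of reaction-time comparison such a gender-naming task yields. The data layout, column names, and latency values are illustrative assumptions, not data from the studies.

```python
# Hypothetical analysis sketch: mean gender-naming latency per facial
# expression; slower naming for angry faces would indicate attentional capture.
import pandas as pd

trials = pd.DataFrame({
    "expression": ["angry", "happy", "neutral", "angry", "happy", "neutral"],
    "rt_ms":      [742, 655, 661, 760, 648, 670],   # naming latency per trial
    "correct":    [True, True, True, True, True, False],
})

# Keep correct trials only, then compare mean latency per expression.
means = trials[trials["correct"]].groupby("expression")["rt_ms"].mean()
print(means)  # angry > happy/neutral would reflect vigilance for anger
```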

4.
Previous research [Smith, E. R., Seger, C. R., & Mackie, D. M. (2007). Can emotions be truly group-level? Evidence regarding four conceptual criteria. Journal of Personality and Social Psychology, 93, 431-446] has demonstrated that when people are explicitly asked about the emotions they experience as members of a particular group, their reported emotions converge toward a profile typical for that group. Two studies demonstrate that the same type of convergence occurs when a group identity is made situationally salient through priming, without an explicit request to report group-level emotions. People who identify more strongly with the group converge more, and show more similarity between their group-primed emotions and explicitly reported group-level emotions. This research confirms that activating a social identity produces convergence for emotions as well as for attitudes and behaviors. It also suggests that some previous emotion research may have tapped group rather than individual-level emotions, potentially requiring some reconceptualization.

5.
This study examined whether subcortical stroke was associated with impaired facial emotion recognition, whether any impairment was lateralized, and whether localized thalamic or basal ganglia damage produced distinct profiles of recognition deficit. Thirty-eight patients with subcortical strokes and 19 matched normal controls volunteered to participate. Participants were individually presented with morphed photographs of facial emotion expressions over multiple trials and asked to classify each photograph according to Ekman's six basic emotion categories. The findings indicated that the clinical participants had impaired facial emotion recognition, though no clear lateralization pattern of impairment was observed. Patients with localized thalamic damage performed significantly worse than controls in recognizing sadness. Longitudinal studies on patients with subcortical brain damage should be conducted to examine how post-stroke cognitive reorganization affects emotion recognition.
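As an illustration of how such morphed-expression stimuli are typically constructed (the abstract does not specify the morphing software used), here is a minimal sketch based on linear interpolation between two pre-aligned face images; the array sizes and number of morph levels are assumptions.

```python
# Minimal sketch of building a morph continuum by linear interpolation
# between two aligned grayscale face images. This illustrates the general
# technique only, not the exact pipeline used in the study.
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, weight: float) -> np.ndarray:
    """Blend two pre-aligned images; weight=0 -> face_a, weight=1 -> face_b."""
    return (1.0 - weight) * face_a + weight * face_b

neutral = np.random.rand(128, 128)   # placeholders for aligned photographs
sad     = np.random.rand(128, 128)

# Seven morph levels from fully neutral to fully sad
continuum = [morph(neutral, sad, w) for w in np.linspace(0.0, 1.0, 7)]
```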

6.
Humans exchange a range of nonverbal social signals in every interaction. It is an open question whether people use these signals, consciously or unconsciously, to guide social behavior. This experiment directly tested whether participants could learn to predict another person’s behavior using nonverbal cues in a single interaction, and whether explicit knowledge of the cue-outcome relationship was necessary for successful prediction. Participants played a computerized game of rock-paper-scissors against an avatar they believed was another participant. Sometimes the avatar generated a predictive facial cue before the play. On these trials, participants’ win-frequency increased over time, even if they did not acquire explicit knowledge of the predictive cue. The degree to which participants could predict the avatar (wins on cued trials) related to their self-reported liking of the avatar. These findings demonstrate the importance of implicit associative learning mechanisms in guiding social behavior on a moment-to-moment basis during interaction.

7.
The purpose of the present study was to examine the time course of race and expression processing to determine how these cues influence early perceptual as well as explicit categorization judgments. Despite their importance in social perception, little research has examined how social category information and emotional expression are processed over time. Moreover, although models of face processing suggest that the two cues should be processed independently, this has rarely been tested directly. Event-related brain potentials were recorded as participants made race and emotion categorization judgments of Black and White men posing happy, angry, or neutral expressions. Our findings indicate that race and emotion cues are processed independently and in parallel, relatively early in processing.
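For readers unfamiliar with the event-related potential (ERP) method, the sketch below shows the basic epoching-and-averaging step that turns continuous EEG into condition-specific waveforms. The sampling rate, condition codes, epoch window, and data shapes are illustrative assumptions, not the study's recording parameters.

```python
# Minimal ERP sketch: cut stimulus-locked epochs from continuous EEG and
# average them per condition (here, race x expression condition codes).
import numpy as np

fs = 500                                  # sampling rate (Hz), assumed
eeg = np.random.randn(64, 60_000)         # channels x samples, placeholder data
events = [(5_000, "Black_angry"), (12_000, "White_happy"), (20_000, "Black_angry")]

def epoch(signal: np.ndarray, onset: int, pre=0.1, post=0.6) -> np.ndarray:
    """Cut one epoch from 100 ms before to 600 ms after stimulus onset."""
    return signal[:, onset - int(pre * fs): onset + int(post * fs)]

# Averaging all epochs of a condition yields its ERP; early components
# (before ~200 ms) index perceptual processing of race/emotion cues.
cond = "Black_angry"
erp = np.mean([epoch(eeg, on) for on, c in events if c == cond], axis=0)
print(erp.shape)  # (channels, samples within the epoch window)
```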

8.
Social cognitive research has documented the integral role of social categories (e.g., race) in face processing. Activating a social category can bias perception and memory of faces in a category-consistent direction. The current research extends this past work to test the hypothesis that making a social category salient can reduce subsequent face recognition. In two experiments, we find that the typically superior same-race recognition is impaired by making the same-race category salient: when White-Americans self-categorize as ‘White,’ subsequent perceptual and memory biases reduce the typically strong same-race recognition advantage.

9.
This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e.g., happy face with happy body) and incongruent (e.g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body.
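A minimal sketch of how fixation counts and dwell times per region of interest (ROI) can be tallied from an eye-movement record; the fixation format and ROI rectangles below are invented for illustration, not the study's actual coordinates.

```python
# Aggregate eye-tracking fixations into face vs. body ROIs.
from collections import defaultdict

# Each fixation: (x, y, duration_ms); placeholder values
fixations = [(310, 120, 240), (305, 400, 180), (298, 130, 260)]

rois = {"face": (250, 50, 370, 250), "body": (230, 250, 390, 600)}  # x0, y0, x1, y1

counts, dwell = defaultdict(int), defaultdict(float)
for x, y, dur in fixations:
    for name, (x0, y0, x1, y1) in rois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            counts[name] += 1      # number of fixations landing in the ROI
            dwell[name] += dur     # total dwell time (ms) in the ROI
print(dict(counts), dict(dwell))
```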

10.
Perceivers remember own-race faces more accurately than other-race faces (the Own-Race Bias). In the current experiments, we manipulated participants' attentional resources and social group membership to explore their influence on own- and other-race face recognition memory. In Experiment 1, Chinese participants viewed own-race and Caucasian faces, and we manipulated between subjects whether participants' attention was divided during face encoding. Divided attention eliminated the Own-Race Bias in memory by reducing memory accuracy for own-race faces, indicating that attention allocation plays a role in creating the bias. In Experiment 2, Chinese participants completed an ostensible personality test. Some participants were informed that their personality traits were most commonly found in Caucasian (i.e., other-race) individuals, so that these participants shared a group membership with other-race targets; the remaining participants were told nothing about the test, leaving the default own-race group membership in place. Participants encoded the faces for a subsequent recognition memory test either with or without a concurrent arithmetic distractor task. Results showed that other-race group membership and reduced attention during encoding independently eliminated the typical Own-Race Bias in face memory. The implications of these findings for perceptual-expertise and social-categorization models are discussed.
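Recognition-memory accuracy in Own-Race Bias studies is commonly quantified with signal-detection sensitivity (d′), computed separately for own- and other-race faces. Below is a minimal sketch of that computation; the hit and false-alarm rates are invented for illustration, and the abstract does not state which accuracy index the authors used.

```python
# d' per face race; the Own-Race Bias is the own-minus-other difference.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity; assumes rates already clipped away from 0/1."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical full-attention condition
own_race   = d_prime(hit_rate=0.85, fa_rate=0.20)
other_race = d_prime(hit_rate=0.70, fa_rate=0.30)
print(f"Own-Race Bias (full attention): {own_race - other_race:.2f}")
```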

11.
Crowds of emotional faces connoting approval or criticism are linked to the fears of socially anxious individuals. We examined evaluation ratings and decision latencies for mixed facial displays in individuals with generalized social phobia (GSPs, n = 18), individuals with comorbid depression and GSP (COMs, n = 18), and normal controls (CONs, n = 18). First, we postulated that GSPs would assign more negative ratings to predominantly disapproving audiences than CONs, and that GSPs would be faster in their evaluation of these audiences (the negative bias hypothesis). Second, we expected depression, but not social anxiety, to be associated with diminished positive evaluation of audiences containing predominantly happy expressions and with slower processing of such positive cues (the impaired positivity hypothesis). Results supported the negative bias hypothesis and provided partial support for the impaired positivity hypothesis. The importance of examining the processing of complex non-verbal cues in social anxiety and in depression is discussed.

12.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.
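One way such region-freezing can be implemented is to overwrite a region's pixels in every video frame with their appearance in the first frame, leaving the rest of the face moving naturally. The sketch below illustrates that idea only; the frame dimensions and region coordinates are assumptions, not the authors' implementation.

```python
# Hold one facial region static across a video (frames x height x width)
# while the remainder of the face continues to move.
import numpy as np

video = np.random.rand(90, 240, 240)   # 90 grayscale frames, placeholder data
regions = {"eyes": (60, 100, 40, 200), "mouth": (160, 200, 70, 170)}  # y0,y1,x0,x1

def freeze_region(frames: np.ndarray, y0: int, y1: int, x0: int, x1: int) -> np.ndarray:
    out = frames.copy()
    out[:, y0:y1, x0:x1] = frames[0, y0:y1, x0:x1]   # hold the region at frame 0
    return out

eyes_frozen = freeze_region(video, *regions["eyes"])
```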

13.
We tested the effects of mask use and the other-race effect on (a) face recognition, (b) recognition of facial expressions, and (c) social distance. Caucasian subjects were tested in a matching-to-sample paradigm with either masked or unmasked Caucasian and Asian faces. Participants performed best at recognizing unmasked faces and worst at recognizing a masked face they had seen earlier without a mask. Accuracy was poorer for Asian faces than for Caucasian faces. The second experiment presented Asian or Caucasian faces with emotional expressions, with and without masks. Participants' emotion recognition performance decreased for masked faces. Emotions were recognized with decreasing accuracy in the following order: happy, neutral, disgusted, fearful. Performance was poorer for Asian stimuli than for Caucasian stimuli. In Experiment 3 the same participants indicated the social distance they would prefer with each pictured person. They preferred a wider distance with unmasked faces than with masked faces. Preferred distance, from farthest to closest, was: disgusted, fearful, neutral, happy. They also preferred a wider social distance for Asian than for Caucasian faces. Altogether, the findings indicate that mask wearing during the COVID-19 pandemic decreased recognition of faces and emotional expressions, negatively impacting communication among people of different ethnicities.
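To clarify the matching-to-sample logic, here is a minimal sketch of a single trial in which the sample and comparison images can differ in masking; the identifiers, data structures, and response stand-in are illustrative assumptions.

```python
# One matching-to-sample trial: study a sample face, then pick the same
# identity among comparison faces (identity match scores as correct, even
# when sample and comparison differ in masking).
import random

sample = ("asian_female_01", True)                 # (identity, masked) at study
comparisons = [("asian_female_01", False),         # same identity, unmasked
               ("asian_female_02", False),
               ("caucasian_male_03", False)]
random.shuffle(comparisons)

choice = random.randrange(len(comparisons))        # stand-in for a response
correct = comparisons[choice][0] == sample[0]
```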

14.
Although facial information is distributed over spatial as well as temporal domains, research on selective attention to disapproving faces has thus far concentrated predominantly on the spatial domain. This study examined the temporal characteristics of visual attention towards facial expressions by administering a Rapid Serial Visual Presentation (RSVP) task to high (n=33) and low (n=34) socially anxious women. Neutral letter stimuli (p, q, d, b) were presented as the first target (T1), and emotional faces (neutral, happy, angry) as the second target (T2). Irrespective of social anxiety, the attentional blink was attenuated for emotional faces. Emotional faces as T2 did not influence identification accuracy of a preceding (neutral) target. The relatively low threshold for the (explicit) identification of emotional expressions is consistent with the view that emotional facial expressions are processed relatively efficiently.
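A typical attentional-blink analysis conditions T2 (face) accuracy on correct T1 (letter) report and breaks it down by T1-T2 lag. The sketch below illustrates that computation; the lag levels and trial records are assumptions, since the abstract does not report them.

```python
# T2|T1 accuracy by lag; an attenuated blink shows up as high conditional
# T2 accuracy even at short lags.
import pandas as pd

trials = pd.DataFrame({
    "lag":   [2, 2, 3, 3, 7, 7],                  # items between T1 and T2
    "t1_ok": [True, True, True, False, True, True],
    "t2_ok": [False, True, True, True, True, True],
})

# Condition on correct T1 report, then average T2 accuracy per lag.
t2_given_t1 = trials[trials["t1_ok"]].groupby("lag")["t2_ok"].mean()
print(t2_given_t1)
```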

15.
The experiment tested whether patients with social phobia direct their attention to or away from faces with a range of emotional expressions. A modified dot probe paradigm (J. Abnorm. Psychol. 95 (1986) 15) measured whether participants attended more to faces or to household objects. Twenty patients with social phobia were faster in identifying the probe when it occurred in the location of the household objects, regardless of whether the facial expressions were positive, neutral, or negative. In contrast, controls did not exhibit an attentional preference. The results are in line with recent theories of social phobia that emphasize the role of reduced processing of external social cues in maintaining social anxiety.
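The dot-probe paradigm yields an attention-bias index from probe response times. A minimal sketch, with invented trial values:

```python
# Dot-probe bias index: mean RT when the probe replaces the object minus
# mean RT when it replaces the face. Positive -> attention toward faces;
# negative -> attention away from faces (the pattern reported for patients).
rt_probe_at_object = [512, 498, 530]   # ms, probe in the object's location
rt_probe_at_face   = [541, 525, 560]   # ms, probe in the face's location

bias = sum(rt_probe_at_object) / len(rt_probe_at_object) \
     - sum(rt_probe_at_face) / len(rt_probe_at_face)
print(f"attention bias: {bias:.1f} ms")  # negative here: faster at objects
```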

16.
Two experiments investigated the role that different face regions play in a variety of social judgements that are commonly made from facial appearance (sex, age, distinctiveness, attractiveness, approachability, trustworthiness, and intelligence). These judgements lie along a continuum from those with a clear physical basis and high consequent accuracy (sex, age) to judgements that can achieve a degree of consensus between observers despite having little known validity (intelligence, trustworthiness). Results from Experiment 1 indicated that the face's internal features (eyes, nose, and mouth) provide information that is more useful for social inferences than the external features (hair, face shape, ears, and chin), especially when judging traits such as approachability and trustworthiness. Experiment 2 investigated how judgement agreement was affected when the upper head, eye, nose, or mouth regions were presented in isolation or when these regions were obscured. A different pattern of results emerged for different characteristics, indicating that different types of facial information are used in the various judgements. Moreover, the informativeness of a particular region/feature depends on whether it is presented alone or in the context of the whole face. These findings provide evidence for the importance of holistic processing in making social attributions from facial appearance.

17.
We investigated the effects of smiling on perceptions of positive, neutral and negative verbal statements. Participants viewed computer-generated movies of female characters who made angry, disgusted, happy or neutral statements and then showed either one of two temporal forms of smile (slow vs. fast onset) or a neutral expression. Smiles significantly increased the perceived positivity of the message by making negative statements appear less negative and neutral statements appear more positive. However, these smiles led the character to be seen as less genuine than when she showed a neutral expression. Disgust + smile messages led to higher judged happiness than did anger + smile messages, suggesting that smiles were seen as reflecting humour when combined with disgust statements, but as masking negative affect when combined with anger statements. These findings provide insights into the ways that smiles moderate the impact of verbal statements.

18.
Brain and Cognition, 2014, 84(3), 252-261
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e., a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention when facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower facial emotions, more so than upper facial emotions, are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced.

19.
Facial expressions of familiar faces have been found to influence identification. In this study, we hypothesized that faces with emotional expressions are encoded for both structural and variant information, resulting in more robust identification. Eighty-eight participants were presented with faces at repetition priming frequencies of 2, 5, 10, and 20 (learning stage) and then judged the faces in terms of familiarity (testing stage). Participants were randomized into one of the following conditions: the facial expression remained the same between the learning and testing stages (F-F); faces shown with an emotional expression in the learning stage were neutralized in the testing stage (F-N); or faces shown with neutral expressions in the learning stage had emotional expressions in the testing stage (N-F). Results confirmed our hypothesis, and implications are discussed.
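For concreteness, here is a short sketch of how a learning-stage trial list with those repetition frequencies could be generated; the face identifiers and the one-face-per-frequency mapping are hypothetical.

```python
# Build and shuffle a learning-stage trial list in which each face is
# repeated 2, 5, 10, or 20 times.
import random

repetitions = {"face_01": 2, "face_02": 5, "face_03": 10, "face_04": 20}
trials = [face for face, n in repetitions.items() for _ in range(n)]
random.shuffle(trials)  # randomized presentation order
```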

20.
Theorists have long postulated that facial properties such as emotion and sex are potent social stimuli that influence how individuals act. Yet extant findings derive mainly from investigations of the prompt motor response to affective stimuli, mostly delivered as pictures, videos, or text. A theoretical question therefore remains unaddressed: how does the perception of emotion and sex modulate the dynamics of a continuous coordinated behaviour? Conceived within the dynamical approach to interpersonal motor coordination, the present study addressed this question using the coupled-oscillators paradigm. Twenty-one participants performed in-phase and anti-phase coordination with two avatars (male and female) displaying three emotional expressions (neutral, happy, and angry) at different frequencies (100% and 150% of the participant's preferred frequency) by executing horizontal rhythmic left-right oscillatory movements. Time to initiate movement (TIM), mean relative phase error (MnRP), and standard deviation of relative phase (SDRP) were calculated as indices of reaction time, deviation from the intended pattern of coordination, and coordination stability, respectively. Results showed that in the anti-phase condition at 150% frequency, MnRP was lower with the angry and the female avatar. In addition, coordination was more stable with the male avatar than with the female avatar when both displayed neutral emotion, but the happy female avatar elicited more stable coordination than the neutral female avatar. These results imply that individuals coordinate in a more relaxed manner with a female than with a male, and that the sensorimotor system becomes more flexible when coordinating with an angry person. They also suggest that social roles influence how people coordinate, and that individuals are more attentive when interacting with a happy female. In sum, the present study provides evidence that social perception is embodied in interactive behaviour during social interaction.
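For readers unfamiliar with these indices, the sketch below shows one common way to compute continuous relative phase (via the Hilbert transform) and derive MnRP and SDRP; the signals, sampling rate, and intended phase are illustrative assumptions rather than the study's actual pipeline.

```python
# Continuous relative phase between participant and avatar oscillations,
# then MnRP (deviation from the intended pattern) and SDRP (stability).
import numpy as np
from scipy.signal import hilbert

fs = 100
t = np.arange(0, 10, 1 / fs)                     # 10 s of movement at 100 Hz
avatar      = np.sin(2 * np.pi * 1.0 * t)        # avatar oscillates at 1 Hz
participant = np.sin(2 * np.pi * 1.0 * t + 0.3)  # participant leads slightly

phase_a = np.angle(hilbert(avatar))
phase_p = np.angle(hilbert(participant))
rel_phase = np.rad2deg(np.unwrap(phase_p - phase_a))

intended = 0.0                            # 0 deg in-phase; 180 deg anti-phase
mnrp = np.mean(rel_phase - intended)      # mean relative phase error
sdrp = np.std(rel_phase)                  # lower SDRP = more stable coordination
print(f"MnRP = {mnrp:.1f} deg, SDRP = {sdrp:.1f} deg")
```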
