Similar Articles
1.
Memory biases toward threat have been documented in several anxiety disorders, but contradictory findings have recently been reported in social phobics' recognition of facial expressions. The present study examined recognition memory in clients with social phobia, in an effort to clarify previous inconsistent results. Just before giving a speech to a live audience, social phobia clients and normal controls viewed photographs of people with reassuring and threatening facial expressions. The stimuli were later presented again alongside photographs of the same person with a different facial expression, and participants chose which face they had seen before. Individuals with social phobia were less accurate at recognizing previously seen photographs than controls, apparently due to state anxiety. In contrast, social phobics did not show a memory bias toward threatening facial expressions. Theoretical and treatment implications are discussed.

2.
This study investigated age-related differences between younger (M = 25.52 years) and older (M = 70.51 years) adults in avoidance motivation and the influence of avoidance motivation on gaze preferences for happy, neutral, and angry faces. In line with the hypothesis of reduced negativity effect later in life, older adults avoided angry faces and (to a lesser degree) preferred happy faces more than younger adults did. This effect cannot be explained by age-related changes in dispositional motivation. Irrespective of age, avoidance motivation predicted gaze behavior towards emotional faces. The study demonstrates the importance of interindividual differences beyond young adulthood.

3.
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on and extends this literature. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion.

4.
The present study examined whether an information-processing bias against emotional facial expressions is present among individuals with social anxiety. College students with high social anxiety (n = 26) and low social anxiety (n = 26) performed three different types of working memory tasks: (a) ordering positive and negative facial expressions according to the intensity of emotion; (b) ordering pictures of faces according to age; and (c) ordering geometric shapes according to size. The high social anxiety group performed significantly more poorly than the low social anxiety group on the facial expression task, but not on the other two tasks with nonemotional stimuli. These results suggest that high social anxiety interferes with the processing of emotionally charged facial expressions.

5.
In 2 experiments, the authors tested predictions from cognitive models of social anxiety regarding attentional biases for social and nonsocial cues by monitoring eye movements to pictures of faces and objects in high social anxiety (HSA) and low social anxiety (LSA) individuals. Under no-stress conditions (Experiment 1), HSA individuals initially directed their gaze toward neutral faces, relative to objects, more often than did LSA participants. However, under social-evaluative stress (Experiment 2), HSA individuals showed reduced biases in initial orienting and maintenance of gaze on faces (cf. objects) compared with the LSA group. HSA individuals were also relatively quicker to look at emotional faces than neutral faces but looked at emotional faces for less time, compared with LSA individuals, consistent with a vigilant-avoidant pattern of bias.

6.
Caricaturing facial expressions
The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for the intensity of each of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms: a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
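To make the caricature manipulation concrete, the sketch below illustrates the core idea under simplifying assumptions: each facial landmark is displaced away from the corresponding landmark of a reference norm by an exaggeration factor. The function name, the landmark representation, and the factor value are illustrative only; the study itself produced photographic-quality image caricatures rather than operating on landmark points alone.

```python
import numpy as np

def caricature(expr_landmarks, norm_landmarks, k=1.5):
    """Exaggerate an expression relative to a reference norm (sketch).

    Each landmark of the expression is pushed away from the matching
    landmark of the norm by a factor k: k > 1 exaggerates the expression,
    k = 1 leaves it unchanged, and 0 < k < 1 attenuates it toward the norm.
    Inputs are assumed to be (N, 2) arrays of x, y landmark coordinates.
    """
    expr = np.asarray(expr_landmarks, dtype=float)
    norm = np.asarray(norm_landmarks, dtype=float)
    return norm + k * (expr - norm)
```

On this reading, comparing different reference norms (a neutral expression, an average expression, or another emotion) amounts to passing a different norm array while holding the expression and the exaggeration level constant.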

7.
Three experiments investigated the perception of facial displays of emotions. Using a morphing technique, Experiment 1 (identification task) and Experiment 2 (ABX discrimination task) evaluated the merits of categorical and dimensional models of the representation of these stimuli. We argue that basic emotions, as they are usually defined verbally, do not correspond to primary perceptual categories emerging from the visual analysis of facial expressions. Instead, the results are compatible with the hypothesis that facial expressions are coded in a continuous anisotropic space structured by valence axes. Experiment 3 (identification task) introduces a new technique for generating chimeras to address the debate between feature-based and holistic models of the processing of facial expressions. Contrary to the pure holistic hypothesis, the results suggest that an independent assessment of discrimination features is possible, and may be sufficient for identifying expressions even when the global facial configuration is ambiguous. However, they also suggest that top-down processing may improve identification accuracy by assessing the coherence of local features.
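In geometric terms, the morphing technique referred to above can be viewed as linear interpolation between two expression endpoints. The sketch below is a simplified illustration under that assumption; the function name and the landmark representation are hypothetical, and real morphs also warp and cross-fade the image texture rather than blending geometry alone.

```python
import numpy as np

def morph(expr_a, expr_b, t):
    """Blend two expression landmark sets; t lies in [0, 1] (sketch).

    t = 0 reproduces expression A, t = 1 reproduces expression B, and
    intermediate values yield graded blends, which is how continua such
    as a happy-to-angry series can be built for identification and ABX
    discrimination tasks.
    """
    a = np.asarray(expr_a, dtype=float)
    b = np.asarray(expr_b, dtype=float)
    return (1.0 - t) * a + t * b

# Example: an 11-step continuum between two (placeholder) landmark sets.
happy = np.random.rand(68, 2)
angry = np.random.rand(68, 2)
continuum = [morph(happy, angry, t) for t in np.linspace(0.0, 1.0, 11)]
```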

8.
The Approach–Avoidance Task (AAT) was employed to indirectly investigate avoidance reactions to stimuli of potential social threat. Forty-three highly socially anxious individuals (HSAs) and 43 non-anxious controls (NACs) reacted to pictures of emotional facial expressions (angry, neutral, or smiling) or to control pictures (puzzles) by pulling a joystick towards themselves (approach) versus pushing it away from themselves (avoidance). HSAs showed stronger avoidance tendencies than NACs for smiling as well as angry faces, whereas no group differences were found for neutral faces and puzzles. In contrast, valence ratings of the emotional facial expressions did not differ between groups. A critical discrepancy between direct and indirect measures was observed for smiling faces: HSAs evaluated them positively, but reacted to them with avoidance.

9.
This study explored how rapidly emotion-specific facial muscle reactions were elicited when subjects were exposed to pictures of angry and happy facial expressions. In three separate experiments, it was found that distinctive facial electromyographic reactions, i.e., greater Zygomaticus major muscle activity in response to happy than to angry stimuli and greater Corrugator supercilii muscle activity in response to angry than to happy stimuli, were detectable after only 300–400 ms of exposure. These findings demonstrate that facial reactions are quickly elicited, indicating that expressive emotional reactions can be very rapidly manifested and are perhaps controlled by fast-operating facial affect programs.

10.
Unconscious facial reactions to emotional facial expressions
Studies reveal that when people are exposed to emotional facial expressions, they spontaneously react with distinct facial electromyographic (EMG) reactions in emotion-relevant facial muscles. These reactions reflect, in part, a tendency to mimic the facial stimuli. We investigated whether corresponding facial reactions can be elicited when people are unconsciously exposed to happy and angry facial expressions. Through use of the backward-masking technique, the subjects were prevented from consciously perceiving 30-ms exposures of happy, neutral, and angry target faces, which were immediately followed and masked by neutral faces. Despite the fact that exposure to happy and angry faces was unconscious, the subjects reacted with distinct facial muscle reactions that corresponded to the happy and angry stimulus faces. Our results show that both positive and negative emotional reactions can be unconsciously evoked, and particularly that important aspects of emotional face-to-face communication can occur on an unconscious level.

11.
Three studies examined the nature of the contributions of each hemisphere to the processing of facial expressions and facial identity. A pair of faces, the members of which differed in either expression or identity, was presented to the right or left visual field. Subjects were required to compare the members of the pair to each other (Experiments 1 and 2) or to a previously presented sample (Experiment 3). The results revealed that both identity and expression perception show a left-visual-field (LVF) superiority, although the two tasks could be differentiated in terms of overall processing time and the interaction of laterality differences with sex. No clear-cut differences in laterality emerged for processing of positive and negative expressions.

12.
Individuals spontaneously categorise other people on the basis of their gender, ethnicity and age. But what about the emotions they express? In two studies we tested the hypothesis that facial expressions are similar to other social categories in that they can function as contextual cues to control attention. In Experiment 1 we associated expressions of anger and happiness with specific proportions of congruent/incongruent flanker trials. We also created consistent and inconsistent category members within each of these two general contexts. The results demonstrated that participants exhibited a larger congruency effect when presented with faces in the emotional group associated with a high proportion of congruent trials. Notably, this effect transferred to inconsistent members of the group. In Experiment 2 we replicated the effects with faces depicting true and false smiles. Together these findings provide consistent evidence that individuals spontaneously utilise emotions to categorise others and that such categories determine the allocation of attentional control.

13.
Cognitive-behavioural models of social phobia (Clark & Wells, 1995; Rapee & Heimberg, 1997) propose that biased information processing contributes to the maintenance of social phobia. Given the importance of facial expressions in social interactions, recent investigations of these information-processing biases have increasingly used facial stimuli. The current study utilised schematic faces of emotional expressions to investigate interpretations of facial expressions and specific facial features in individuals with high and low social anxiety. Individuals with elevated social anxiety demonstrated biases in their perceptions of negative valence from the faces, whereas group differences were not observed for perceptions of activity or potency. Further, although the two groups generally utilised the same facial features to interpret facial expressions, the results suggested that individuals with high social anxiety may be more lenient in perceiving threat in faces than individuals without social anxiety.

14.
Unconscious processing of stimuli with emotional content can bias affective judgments. Is this subliminal affective priming merely a transient phenomenon manifested in fleeting perceptual changes, or are long-lasting effects also induced? To address this question, we investigated memory for surprise faces 24 h after they had been shown with 30-ms fearful, happy, or neutral faces. Surprise faces subliminally primed by happy faces were initially rated as more positive, and were later remembered better, than those primed by fearful or neutral faces. Participants likely to have processed primes supraliminally did not respond differentially as a function of expression. These results converge with findings showing memory advantages with happy expressions, though here the expressions were displayed on the face of a different person, perceived subliminally, and not present at test. We conclude that behavioral biases induced by masked emotional expressions are not ephemeral, but rather can last at least 24 h.

15.
Infant attention to facial expressions and facial motion
Three-month-old infants were shown moving faces and still faces on videotape in a paired-comparison situation. Motion type was clearly specified, and facial expression and motion were separately varied. Infants saw a still face, internal motion on the face (i.e., motion of the internal features), and whole object (i.e., side-to-side) motion, each with happy and neutral expressions. Infants showed preference for expressions when the face was still and when it showed internal motion. Facial expression and facial motion were equally preferred, and both appeared to be salient dimensions of the face for three-month-old infants.

16.
A small body of research suggests that socially anxious individuals show biases in interpreting the facial expressions of others. The current study included a clinically anxious sample in a speeded emotional card-sorting task in two conditions (baseline and threat) to investigate several hypothesized biases in interpretation. Following the threat manipulation, participants with generalized social anxiety disorder (GSADs) sorted angry cards with greater accuracy, but also evidenced a greater rate of neutral cards misclassified as angry, as compared to nonanxious controls. The controls showed the opposite pattern, sorting neutral cards with greater accuracy but also misclassifying a greater proportion of angry cards as neutral, as compared to GSADs. These effects were accounted for primarily by low-intensity angry cards. Results are consistent with previous studies showing a negative interpretive bias, and can be applied to the improvement of clinical interventions.

17.
This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals’ angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals’ happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

18.
This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.

19.
20.
Facial images can be enhanced by application of an algorithm, the caricature algorithm, that systematically manipulates their distinctiveness (Benson & Perrett, 1991c; Brennan, 1985). In this study, we first produced a composite facial image from natural images of the six facial expressions of fear, sadness, surprise, happiness, disgust, and anger shown on a number of different individual faces (Ekman & Friesen, 1975). We then caricatured the composite images with respect to a neutral (resting) expression. Experiment 1 showed that rated strength of the target expression was directly related to the degree of enhancement for all the expressions. Experiment 2, which used a free rating procedure, found that, although caricature enhanced the strength of the target expression (more extreme ratings), it did not necessarily enhance its purity, inasmuch as the attributes of nontarget expressions were also enhanced. Naming of prototypes, of original exemplar images, and of caricatures was explored in Experiment 3 and followed the pattern suggested by the free rating conditions of Experiment 2, with no overall naming advantage to caricatures under these conditions. Overall, the experiments suggested that computational methods of compositing and caricature can be usefully applied to facial images of expression. Their utility in enhancing the distinctiveness of the expression depends on the purity of expression in the source image.
