Similar Literature
1.
Three experiments examined the recognition speed advantage for happy faces. The results replicated earlier findings by showing that positive (happy) facial expressions were recognized faster than negative (disgusted or sad) facial expressions (Experiments 1 and 2). In addition, the results showed that this effect was evident even when low-level physical differences between positive and negative faces were controlled by using schematic faces (Experiment 2), and that the effect was not attributable to an artifact arising from facilitated recognition of a single feature in the happy faces (up-turned mouth line, Experiment 3). Together, these results suggest that the happy face advantage may reflect a higher-level asymmetry in the recognition and categorization of emotionally positive and negative signals.

2.
Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, one that emphasizes situational information, would change the participants' attributions. In two studies, participants were asked to judge whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were more likely to associate each facial display with the social than with the nonsocial situation. This result was found across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.

3.
Research has shown that neutral faces are better recognized when they had been presented with happy rather than angry expressions at study, suggesting that emotional signals conveyed by facial expressions influenced the encoding of novel facial identities in memory. An alternative explanation, however, would be that the influence of facial expression resulted from differences in the visual features of the expressions employed. In this study, this possibility was tested by manipulating facial expression at study versus test. In line with earlier studies, we found that neutral faces were better recognized when they had been previously encountered with happy rather than angry expressions. On the other hand, when neutral faces were presented at study and participants were later asked to recognize happy or angry faces of the same individuals, no influence of facial expression was detected. As the two experimental conditions involved exactly the same amount of changes in the visual features of the stimuli between study and test, the results cannot be simply explained by differences in the visual properties of different facial expressions and may instead reside in their specific emotional meaning. The findings further suggest that the influence of facial expression is due to disruptive effects of angry expressions rather than facilitative effects of happy expressions. This study thus provides additional evidence that facial identity and facial expression are not processed completely independently.

4.
Psychonomic Bulletin & Review - When the upper half of one face ('target region') is spatially aligned with the lower half of another ('distractor region'), the two...

5.
Summary: The existence of facial vision has been doubted, perhaps because of its identification with dermo-optical perception. To determine whether more credence should be granted to this alleged phenomenon, we studied both blind and sighted people. Ninety-two percent of the partially blind people reported experiencing facial vision, but only 30% of the totally blind people reported the experience. Eighty-five percent of the sighted people also reported experiencing facial vision when a shadow moved across their eyelids. In response to a questionnaire asking about subjective visual experiences, 43% of the sighted people reported seeing as though through a window on their face. The boundaries of two monocular fields mapped by apparent locations of pressure phosphenes agreed with the boundary of the area in which facial vision is experienced. The findings indicate that facial vision can be experienced by both blind and sighted persons and that it can be explained in part by the principles of visual direction. This research was supported by grant no. A0296 from the Natural Sciences and Engineering Research Council of Canada and a Canada Summer Internship Grant. The authors wish to thank S. Anstis, I. Howard, M. Komoda, M. Steinbach, N. Wade, J. Codd, and those associated with our laboratory for their helpful comments on an earlier version of this paper. The authors would also like to thank Joanne Gallagher for her help in collecting data; Milan Tytla, at the Toronto Hospital for Sick Children, for testing the blind participants; and the Canadian National Institute for the Blind, for their assistance in recruiting blind participants.

6.
Influential facial impression models have repeatedly shown that trustworthiness, youthful-attractiveness, and dominance dimensions subserve a wide variety of first impressions formed from strangers' faces, suggestive of a shared social reality. However, these models are built from impressions aggregated across observers. Critically, recent work has now shown substantial inter-observer differences in facial impressions, raising the important question of whether these dimensional models based on aggregated group data are meaningful at the individual observer level. We addressed this question with a novel case series approach, using factor analyses of ratings of twelve different traits to build individual models of facial impressions for different observers. Strikingly, three dimensions of trustworthiness, youthful-attractiveness, and competence/dominance appeared across the majority of these individual observer models, demonstrating that the dimensional approach is indeed meaningful at the individual level. Nonetheless, we also found differences in the stability of the competence/dominance dimension across observers. Taken together, the results suggest that individual differences in impressions arise in the context of a largely common structure that supports a shared social reality.

7.
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain–behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = –.51) and memory (r = –.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

8.
Twenty-seven women with high scores on the Blushing Propensity Scale (BPS) and 26 women with low BPS scores were exposed to two different video segments. One video showed the subject's own singing, recorded in a previous session, and the other video showed a segment of Hitchcock's movie Psycho. During the experiment, facial coloration, facial temperature, and skin conductance level were measured. In addition, subjects' blushing intensity was judged by raters. Finally, subjects were asked to rate their blushing intensity and fear of blushing during the video presentations. Subjects generally blushed more during the presentation of their singing than during comparison stimulation, as measured physiologically. There were no between-group differences in this respect. No differences were found between the two groups on raters' judgements of blushing intensity. However, high-BPS subjects dramatically overestimated their blushing intensity and were more afraid of blushing than low-BPS subjects. During the mere presence of the raters, high-BPS subjects tended to show a relatively strong coloration. Thus, the BPS seems to reflect both a fearful preoccupation and a stronger facial coloration.

9.
The effects of task demands on the visual comparison of facial patterns and of comparable nonfacial patterns were explored in two studies. The studies yielded two primary findings. First, faces, despite their holistic properties, are not rotated faster than comparable non-face-like patterns, although subjects' judgments of them were uniformly more rapid than judgments for nonfaces. Second, the nature of the same-different judgment task required of subjects had a large effect on the pattern of results obtained: When stimuli were compared to their mirror images, results indicative of mental "rotation" were obtained. When stimuli were compared on the basis of similarity of individual features, the pattern of results was very different. This one manipulation produced effects that exceeded those of all of the other manipulations, including that of rotation.

10.
This study investigates the discrimination accuracy of emotional stimuli in subjects with major depression compared with healthy controls, using photographs of facial expressions of varying emotional intensities. The sample included 88 unmedicated male and female subjects, aged 18-56 years, with major depressive disorder (n = 44) or no psychiatric illness (n = 44), who judged the emotion of 200 facial pictures displaying an expression between 10% (90% neutral) and 80% emotion. Stimuli were presented in 10% increments to generate a range of intensities, each presented for a 500-ms duration. Compared with healthy volunteers, depressed subjects showed very good recognition accuracy for sad faces but impaired recognition accuracy for other emotions (e.g., harsh and surprise expressions) of subtle emotional intensity. Recognition accuracy improved for both groups as a function of increased intensity on all emotions. Finally, as depressive symptoms increased, recognition accuracy increased for sad faces but decreased for surprised faces. Moreover, depressed subjects showed an impaired ability to accurately identify subtle facial expressions, indicating that depressive symptoms influence accuracy of emotional recognition.

11.
Previous research that has evaluated the accuracy of facial composites has reported low identification rates. Two studies are reported here that consider whether showing more than one composite of the same suspect might improve the rate of identification. Sixteen participant-witnesses saw one of two staged events, each involving a different unfamiliar target. Each participant-witness worked with a police operator to construct a composite of the target they had seen. One, four or eight composites depicting the same target were then shown to individuals familiar with the target. Overall, the results showed that presenting more than one composite increased the rate of identification. In addition, the results of Study 2 suggest that if the police must select just one composite from a number produced by witnesses, then a promising method might be to choose the one which bears most similarity to the other composites in the set. Copyright © 2006 John Wiley & Sons, Ltd.

12.
The purpose of the present study was to explore the accessory nonverbal behaviours emitted by stutterers when their speech was fluent, normally disfluent, or stuttered. Subjects were 25 stutterers who were required to speak spontaneously for a 2-min period. Seven types of nonverbal behaviour were observed. Significant differences among the three speech categories were obtained for jaw movements, mouth movements, forehead movements, eyebrow movements, and head movements. Eyelid movements and eye blinks were nonsignificant. The results are discussed with respect to the various functions that can be attributed to nonverbal behaviour in stuttering.

13.
14.
In 6 experiments, the authors investigated whether attention orienting by gaze direction is modulated by the emotional expression (neutral, happy, angry, or fearful) on the face. The results showed a clear spatial cuing effect by gaze direction but no effect by facial expression. In addition, it was shown that the cuing effect was stronger with schematic faces than with real faces, that gaze cuing could be achieved at very short stimulus onset asynchronies (14 ms), and that there was no evidence for a difference in the strength of cuing triggered by static gaze cues and by cues involving apparent motion of the pupils. In sum, the results suggest that in normal, healthy adults, eye direction processing for attention shifts is independent of facial expression analysis.

15.
Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

16.
Humans must coordinate approach–avoidance behaviours with the social cues that elicit them, such as facial expressions and gaze direction. We hypothesised that when someone is observed looking in a particular direction with a happy expression, the observer would tend to approach that direction, but that when someone is observed looking in a particular direction with a fearful expression, the observer would tend to avoid that direction. Twenty-eight participants viewed stimulus faces with averted gazes and happy or fearful expressions on a computer screen. Participants were asked to grasp (approach) or withdraw from (avoid) a left- or right-hand button depending on the stimulus face's expression. The results were consistent with our hypotheses about avoidance responses, but not with respect to approach responses. Links between social cues and adaptive behaviour are discussed.

17.
The human–horse relationship has a long evolutionary history. Horses continue to play a pivotal role in the lives of humans, and it is common for humans to think their horses recognize them by face. If a horse can distinguish his/her human companion from other humans, then evolution has supplied the horse with a very adaptive cognitive ability. The current study used operant conditioning trials to examine whether horses could discriminate photographed human faces and transfer this facial recognition ability to a novel setting. The results indicated the horses (a) learned to discriminate photographs of the unrelated individuals, fraternal twins, and identical twins and (b) demonstrated transfer of facial recognition by spending more time with their S+ woman in the field test.

18.
We investigated whether moral violations involving harm selectively elicit anger, whereas purity violations selectively elicit disgust, as predicted by the Moral Foundations Theory (MFT). We analysed participants' spontaneous facial expressions as they listened to scenarios depicting moral violations of harm and purity. As predicted by MFT, anger reactions were elicited more frequently by harmful than by impure actions. However, violations of purity elicited more smiling reactions and expressions of anger than of disgust. This effect was found both in a classic set of scenarios and in a new set in which the different kinds of violations were matched on weirdness. Overall, these findings are at odds with predictions derived from MFT and provide support for "monist" accounts that posit harm at the basis of all moral violations. However, we found that smiles were differentially linked to purity violations, which leaves open the possibility of distinct moral modules.

19.
20.
Facial features appear to be a prominent kinship cue for ascribing relatedness among human individuals. Although there is evidence that adults can detect kinship in unrelated and unfamiliar individuals' faces, it remains to be seen whether people already possess this ability when they are young. To further understand the development of this skill, we explored children's ability to detect parent-offspring resemblance in unrelated and unfamiliar faces. To this end, we tested approximately 140 children, aged 5–11, in two photo-matching tasks. We used a procedure that asked them to match one neonate's face to one of three adults' faces (Task 1), or to match one adult's face to one of three neonates' faces (Task 2). Our findings reveal asymmetrical performance, depending on the task assigned (performance on Task 2 is stronger than on Task 1) and on the sex of the individuals who made up the parent-offspring pair (male parents are better matched with neonates than female parents, and boys are better matched than girls). The picture that emerges from our study is, on one hand, that the ability to detect kinship is already present at the age of five but continues to improve as one gets older, and on the other, that perception of parent-offspring facial resemblance varies according to the appraisers' characteristics.
