Similar Articles
20 similar articles found.
1.
We report data from an experiment that investigated the influence of gaze direction and facial expression on face memory. Participants were shown a set of unfamiliar faces with either happy or angry facial expressions, which were either gazing straight ahead or had their gaze averted to one side. Memory for faces that were initially shown with angry expressions was found to be poorer when these faces had averted as opposed to direct gaze, whereas memory for individuals shown with happy faces was unaffected by gaze direction. We suggest that memory for another individual's face partly depends on an evaluation of the behavioural intention of that individual.

2.
3.
Provides a comprehensive review of John T. Lanzetta's research program on facial expression and emotion. After reviewing the study that initiated this research program (Lanzetta & Kleck, 1970), the program is described as developing along four distinct lines of research: (1) the role of facial expression in the modulation and self-regulation of emotion, (2) the evocative power of the face as an emotional stimulus, (3) the role of facial expression in empathy and counterempathy, and (4) the role of facial displays in human politics. Beyond reviewing the major studies and key findings to emerge from each of these lines, the progression of thought underlying the development of this research program as a whole and the interrelations among the individual research lines are also emphasized.

4.
Acta Psychologica, 1987, 66(3): 291-306
A classic series of experiments by Loftus, Miller and Burns (1978) showed that a person's recollection of an event can be changed by misleading postevent information. Several hypotheses accounting for this effect have been proposed. Loftus' hypothesis of destructive updating claims that the original memory is destroyed by the postevent information. The coexistence hypothesis asserts that the older memory survives but is rendered inaccessible through a mechanism of inhibition or suppression. The no-conflict hypothesis simply accounts for the effect by claiming that subjects can only be misled if they did not encode, or if they forgot, the original event. These three hypotheses were modelled with the help of all-or-none probabilistic event trees. An experiment was conducted in order to test the three models and to assess parameter values. The experiment followed the classic Loftus paradigm: we suggested to some subjects that they had seen a stop sign, whereas in fact they had seen a traffic light. The misleading postevent information resulted in poorer reproduction of the traffic light. Later, all subjects were asked whether they could remember the color of the traffic light, even if they believed they had seen a stop sign. The results showed that subjects who received the misleading postevent information were at least as good at recalling the color of the traffic light as subjects who did not receive misleading information. The no-conflict model accounts well for the obtained results, although the two other, less parsimonious, models cannot be entirely rejected.
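The three hypotheses lend themselves to simple probabilistic tree formulations. As a rough illustration only (this is not the authors' actual model or parameterization), a minimal no-conflict tree can be written with two hypothetical parameters: the probability that the original detail was encoded and retained, and the probability of accepting the misleading suggestion when no original trace is available.

```python
# A minimal sketch of an all-or-none probabilistic event tree under the
# no-conflict hypothesis. The parameters are hypothetical, not the paper's:
#   p - probability the original detail (traffic light) is encoded and retained
#   b - probability of accepting the misleading detail (stop sign) when the
#       original trace is unavailable
# Under no-conflict, misinformation only affects subjects without the original
# trace, so recall of the original detail should be unaffected by misleading
# postevent information.

def no_conflict_predictions(p: float, b: float) -> dict:
    """Predicted response probabilities on the final test."""
    return {
        "report_original": p,                 # original trace available
        "report_misled": (1 - p) * b,         # no trace, suggestion accepted
        "guess": (1 - p) * (1 - b),           # no trace, suggestion not accepted
    }

if __name__ == "__main__":
    for p in (0.4, 0.6, 0.8):
        print(f"p={p}:", no_conflict_predictions(p, b=0.7))
```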

5.
6.
7.
Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features for recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face arrays also improved internal-feature quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external features promoted substantially more identifiable images than the previous method of blurring external features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method for constructing composites, one that should dramatically increase identification of offenders.

8.
We conducted two experiments to investigate the psychological factors affecting the attractiveness of composite faces. Feminised or juvenilised Japanese faces were created by morphing between average male and female adult faces or between average male (female) adult and boy (girl) faces. In experiment 1, we asked the participants to rank the attractiveness of these faces. The results showed moderately juvenilised faces to be highly attractive. In experiment 2, we analysed the impressions the participants had of the composite faces by the semantic-differential method and determined the factors that largely affected attractiveness. On the basis of the factor scores, we plotted the faces in factor spaces and analysed the locations of attractive faces. We found that most of the attractive juvenilised faces involved impressions corresponding to an augmentation of femininity, characterised by the factors of 'elegance', 'mildness', and 'youthfulness', which the attractive faces potentially had.
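The second experiment's analysis pipeline, reducing semantic-differential ratings to a small number of factors and then locating each composite face in the factor space, can be sketched as follows. This uses synthetic data and illustrative scale and factor counts, not the study's materials.

```python
# A minimal sketch, on synthetic data, of factor analysis applied to
# semantic-differential ratings of composite faces. The numbers of faces,
# scales, and factors are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_faces, n_scales = 30, 12   # hypothetical: 30 composites rated on 12 bipolar scales

# Hypothetical semantic-differential ratings (faces x scales), averaged
# over participants.
ratings = rng.normal(size=(n_faces, n_scales))

# Extract three factors (the study reports factors such as 'elegance',
# 'mildness', and 'youthfulness').
fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(ratings)   # each face's position in factor space

print("Factor scores of the first face:", factor_scores[0])
print("Loadings of each scale on factor 1:", fa.components_[0])
```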

9.
Several convergent lines of evidence have suggested that the presence of an emotion signal in a visual stimulus can influence processing of that stimulus. In the current study, we picked up on this idea, and explored the hypothesis that the presence of an emotional facial expression (happiness) would facilitate the identification of familiar faces. We studied two groups of normal participants (overall N=54), and neurological patients with either left (n=8) or right (n=10) temporal lobectomies. Reaction times were measured while participants named familiar famous faces that had happy expressions or neutral expressions. In support of the hypothesis, naming was significantly faster for the happy faces, and this effect obtained in the normal participants and in both patient groups. In the patients with left temporal lobectomies, the effect size for this facilitation was large (d=0.87), suggesting that this manipulation might have practical implications for helping such patients compensate for the types of naming defects that often accompany their brain damage. Consistent with other recent work, our findings indicate that emotion can facilitate visual identification, perhaps via a modulatory influence of the amygdala on extrastriate cortex.

10.
Three studies investigated developmental changes in facial expression processing between 3 years of age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused a decrement in performance equivalent to face inversion. However, younger children showed better classification of expressions for faces wearing sunglasses than children who saw the same faces un-occluded. When the mouth area was occluded with a mask, children under nine years showed no impairment in expression classification relative to un-occluded faces. An early selective focus of attention on the eyes may be optimal for socialization, but militate against accurate expression classification. The data support a model in which a threshold level of attentional control must be reached before children can develop adult-like configural processing skills and be flexible in their use of face-processing strategies.

11.
Studies have shown that emotion elicited after learning enhances memory consolidation. However, no prior studies have used facial photographs as stimuli. This study examined the effect of post-learning positive emotion on the consolidation of memory for faces. During learning, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test in which they made "remember", "know", and "new" judgements. The findings are: (1) positive emotion enhanced consolidation of recognition for negative male faces, but impaired consolidation of recognition for negative female faces; (2) for males, recognition of negative faces was equivalent to that of positive faces, whereas for females, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation extends to facial stimuli and that this effect can be modulated by facial valence and facial gender. The findings may shed light on establishing models concerning the influence of emotion on memory consolidation.

12.
Hoss RA, Ramsey JL, Griffin AM, Langlois JH. Perception, 2005, 34(12): 1459-1474
We tested whether adults (experiment 1) and 4- to 5-year-old children (experiment 2) identify the sex of highly attractive faces faster and more accurately than that of not very attractive faces in a reaction-time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults' sex classification of both female and male faces and children's sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independently of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman's (1990, Psychological Science, 1, 115-121) averageness theory of attractiveness.

13.
Rhodes G. Perception, 1988, 17(1): 43-63
The encoding and relative importance of first-order (discrete) and second-order (configural) features in mental representations of unfamiliar faces have been investigated. Nonmetric multidimensional scaling (KYST) was carried out on similarity judgments of forty-one photographs of faces (homogeneous with respect to sex, race, facial expression, and, to a lesser extent, age). A large set of ratings, measurements, and ratios of measurements of the faces was regressed against the three-dimensional KYST solution in order to determine the first-order and second-order features used to judge similarity. Parameters characterizing both first-order and second-order features emerged as important determinants of facial similarity. First-order feature parameters characterizing the appearance of the eyes, eyebrows, and mouth, and second-order feature parameters characterizing the position of the eyes, spatial relations between the internal features, and chin shape correlated with the dimensions of the KYST solution. There was little difference in the extent to which first-order and second-order features were encoded. Two higher-level parameters, age and weight, were also used to judge similarity. The implications of these results for mental representations of faces are discussed.
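The analysis strategy described here, nonmetric MDS of pairwise similarity judgments followed by regressing feature measures onto the recovered dimensions, can be sketched in a few lines. The sketch below uses scikit-learn's nonmetric MDS as a stand-in for the KYST program and entirely synthetic data, so it illustrates the workflow rather than reproducing the study.

```python
# A minimal sketch, assuming synthetic data, of nonmetric MDS on facial
# dissimilarities followed by regression of a feature measurement onto the
# recovered dimensions. The "eye_size_rating" variable is a hypothetical
# stand-in for one of the study's ratings or measurements.
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_faces = 41

# Synthetic pairwise dissimilarities (in the study, derived from similarity
# judgments of the 41 photographs) and one measurement per face.
latent = rng.normal(size=(n_faces, 3))
dissim = np.linalg.norm(latent[:, None, :] - latent[None, :, :], axis=-1)
eye_size_rating = latent[:, 0] + 0.3 * rng.normal(size=n_faces)

# Nonmetric MDS into three dimensions, analogous to the reported KYST solution.
mds = MDS(n_components=3, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

# Regress the measurement against the MDS dimensions; a high multiple
# correlation suggests that feature helped determine judged similarity.
reg = LinearRegression().fit(coords, eye_size_rating)
print("R^2 of eye-size rating on the MDS dimensions:", reg.score(coords, eye_size_rating))
```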

14.
For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

15.
The view that, as children get older, there is a shift from feature-based forms of face processing to more configurational forms of processing was examined by asking 6-year-old and 9-year-old children to judge which of two photographs matches an identical probe photograph. The probe and test stimuli were either photographs of whole faces or photographs of isolated facial features. Within this standard method, the stimuli also systematically varied in terms of the familiarity of the faces shown and in the orientation of presentation, both factors that have been interpreted as affecting configurational encoding. A number of age-related effects are observed: (a) older children are better at recognizing whole faces than younger children, (b) older children exhibit a clear face-inversion effect with whole faces while the younger children are equally adept at identifying upright and inverted whole faces, and (c) analysis of the recognition rates associated with the individual features reveals that younger children are better than older children when asked to recognize eye regions. It is argued that the data support the view that, as children get older, there is a change in the forms of piecemeal encoding employed and an increase in configurational processing.

16.
This study demonstrates that when people attempt to identify a facial expression of emotion (FEE) by haptically exploring a 3D facemask, they are affected by viewing a simultaneous, task-irrelevant visual FEE portrayed by another person. In comparison to a control condition, where visual noise was presented, the visual FEE facilitated haptic identification when congruent (visual and haptic FEEs same category). When the visual and haptic FEEs were incongruent, haptic identification was impaired, and error responses shifted toward the visually depicted emotion. In contrast, visual emotion labels that matched or mismatched the haptic FEE category produced no such effects. The findings indicate that vision and touch interact in FEE recognition at a level where featural invariants of the emotional category (cf. precise facial geometry or general concepts) are processed, even when the visual and haptic FEEs are not attributable to a common source. Processing mechanisms behind these effects are considered.

17.
18.
We designed two computational models to replicate human facial attractiveness ratings. The primary model used partial least squares (PLS) to identify image factors associated with facial attractiveness from facial images and attractiveness ratings of those images. For comparison we also built a model similar to previous models of facial attractiveness, in that it used manually derived measurements between features as inputs, though we took the additional step of dimensionality reduction via principal component analysis (PCA) and weighting of the PCA dimensions via a perceptron. Strikingly, both models produced estimates of facial attractiveness that were indistinguishable from human ratings. Because PLS extracts a small number of image factors from the facial images that covary with attractiveness ratings of the images, it is possible to determine the information used by the model. The image factors that the model discovered correspond to two of the main contemporary hypotheses of facial attractiveness judgments: averageness and sexual dimorphism. In contrast, facial symmetry was not important to the model, and an explicit feature-based measurement of symmetry was not correlated with human judgments of facial attractiveness. This provides novel evidence for the importance of averageness and sexual dimorphism, but not symmetry, in human judgments of facial attractiveness.
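The primary model's approach, PLS regression from flattened face images to attractiveness ratings, can be sketched as follows. The data are synthetic and the image size and number of components are illustrative assumptions, not the paper's.

```python
# A minimal sketch, on synthetic data, of PLS regression from face-image pixel
# vectors to attractiveness ratings. It extracts a small number of image
# factors that covary with the ratings, which is what makes the fitted model
# interpretable.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_faces, n_pixels = 80, 32 * 32            # hypothetical image size

images = rng.normal(size=(n_faces, n_pixels))             # flattened face images
ratings = images[:, :50].mean(axis=1) + 0.1 * rng.normal(size=n_faces)

pls = PLSRegression(n_components=2)        # two image factors, as a toy analogue
pls.fit(images, ratings)

predicted = pls.predict(images).ravel()
print("Correlation with the simulated ratings:", np.corrcoef(predicted, ratings)[0, 1])

# Projecting each PLS component back onto pixel space shows which image
# variation the factor captures (in the paper, these factors resembled
# averageness and sexual dimorphism).
print("First pixel loadings of factor 1:", pls.x_loadings_[:5, 0])
```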

19.
Affect bursts consist of spontaneous and short emotional expressions in which facial, vocal, and gestural components are highly synchronized. Although the vocal characteristics have been examined in several recent studies, the facial modality remains largely unexplored. This study investigated the facial correlates of affect bursts that expressed five different emotions: anger, fear, sadness, joy, and relief. Detailed analysis of 59 facial actions with the Facial Action Coding System revealed a reasonable degree of emotion differentiation for individual action units (AUs). However, less convergence was shown for specific AU combinations for a limited number of prototypes. Moreover, expression of facial actions peaked in a cumulative-sequential fashion with significant differences in their sequential appearance between emotions. When testing for the classification of facial expressions within a dimensional approach, facial actions differed significantly as a function of the valence and arousal level of the five emotions, thereby allowing further distinction between joy and relief. The findings cast doubt on the existence of fixed patterns of facial responses for each emotion, resulting in unique facial prototypes. Rather, the results suggest that each emotion can be portrayed by several different expressions that share multiple facial actions.

20.
The influences of sex and lateralized visual hemispace bias in the judgment of the emotional valence of faces during a free viewing condition are evaluated. 73 subjects (aged 18 to 52 yr.) viewed videotaped facial expressions of emotion in normal and mirror-reversed orientation and classified each face as a positive, negative, or neutral expression. There was a significant interaction between the sex of the rater and the orientation of the face that influenced the proportion of correct classifications. Male and female perceivers did not differ in the accuracy of their affect judgments for faces viewed in normal orientation, whereas reversal of the orientation of the faces resulted in a significant enhancement of accuracy judgments for the males but not the females. The results suggest greater cerebral lateralization of perceptual processes in males.

