Similar articles
20 similar articles found (search time: 15 ms)
1.
Eyewitnesses to a simulated crime attempted to identify the perpetrator from a computerized mug book. The 208 mug book pictures were presented either 1 mug shot per page or in groups of 12 mug shots per page. Half of the mug books were arranged by similarity to the perpetrator as determined by a facial recognition algorithm, and half were randomly arranged. In contrast to past findings with photospreads, false-positive identifications were significantly higher using the one-at-a-time procedure than the grouped procedure. Results suggest that the best practice for mug books may be the use of groups of pictures per page rather than the one-at-a-time procedure long advocated by experts for use in lineups and photospreads.
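The similarity-ordered, paginated mug book described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual procedure: the random descriptor vectors stand in for whatever face-descriptor algorithm was used, and the function names are assumptions.

```python
import numpy as np

def rank_mugshots(perp_vec, mug_vecs):
    """Order mug-shot descriptors by cosine similarity to the perpetrator's descriptor."""
    perp = perp_vec / np.linalg.norm(perp_vec)
    mugs = mug_vecs / np.linalg.norm(mug_vecs, axis=1, keepdims=True)
    sims = mugs @ perp
    return np.argsort(-sims)  # indices, most similar first

def paginate(order, per_page=12):
    """Group the ranked indices into pages of `per_page` mug shots each."""
    return [order[i:i + per_page] for i in range(0, len(order), per_page)]

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(208, 128))  # stand-ins for 208 algorithmic face descriptors
perpetrator = rng.normal(size=128)
pages = paginate(rank_mugshots(perpetrator, descriptors))
print(len(pages), len(pages[0]))  # 18 pages; 12 faces on the first page
```

The one-at-a-time condition is simply `paginate(order, per_page=1)`; the grouped condition in the study corresponds to `per_page=12`.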

2.
In this study, the relationship between face recognition and different facial encoding strategies was investigated. Children (6-8 years, N = 134) participated in both a face recognition task and an encoding task. During the recognition task, they saw 7 target faces in an eyewitness context (video) or in a neutral context (static black and white slides) which they later had to recognize from a set of 21 faces. On the encoding task, the same children had to categorize new faces (schematic and photorealistic) into two categories. The construction of the categories allowed participants to encode the faces either analytically (by focusing on a single attribute) or holistically (in terms of overall similarity). The results showed that face recognition was better in the social than in the neutral context. In the neutral context, only holistic encoding was connected to better face recognition. In the social context, children seemed to use not only information about the faces but also information about the persons.

3.
The notion of social appraisal emphasizes the importance of a social dimension in appraisal theories of emotion by proposing that the way an individual appraises an event is influenced by the way other individuals appraise and feel about the same event. This study directly tested this proposal by asking participants to recognize dynamic facial expressions of emotion (fear, happiness, or anger in Experiment 1; fear, happiness, anger, or neutral in Experiment 2) in a target face presented at the center of a screen while a contextual face, which appeared simultaneously in the periphery of the screen, expressed an emotion (fear, happiness, anger) or not (neutral) and either looked at the target face or not. We manipulated gaze direction to be able to distinguish between a mere contextual effect (gaze away from both the target face and the participant) and a specific social appraisal effect (gaze toward the target face). Results of both experiments provided evidence for a social appraisal effect in emotion recognition, which differed from the mere effect of contextual information: Whereas facial expressions were identical in both conditions, the direction of the gaze of the contextual face influenced emotion recognition. Social appraisal facilitated the recognition of anger, happiness, and fear when the contextual face expressed the same emotion. This facilitation was stronger than the mere contextual effect. Social appraisal also allowed better recognition of fear when the contextual face expressed anger and better recognition of anger when the contextual face expressed fear.

4.
Faces learned from multiple viewpoints are recognized better with left than right three-quarter views. This left-view superiority could be explained by perceptual experience, facial asymmetry, or hemispheric specialization. In the present study, we investigated whether left-view sequences are also more effective in recognizing same and novel views of a face. In a sequential matching task, a view sequence showing a face rotating around a left (−30°) or a right (+30°) angle, with an amplitude of 30°, was followed by a static test view with the same viewpoint as the sequence (−30° or +30°) or with a novel one (0°, +30°, or −30°). We found a superiority of left-view sequences independently of the test viewpoint, but no superiority of left over right test views. These results do not seem compatible with the perceptual experience hypothesis, which predicts superiority only for left-side test views (−30°). Also, a facial asymmetry judgement task showed no correlation between the asymmetry of individual faces and the left-view sequence superiority. A superiority of left-view sequences for novel as well as same test views argues in favour of an explanation by hemispheric specialization, because of the possible role of the right hemisphere in extracting facial identity information.

5.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.

6.
The impact of allowing witnesses to choose the type of cues presented in multimedia mug books was explored in two experiments. In Experiment 1, participants viewed a videotaped crime and attempted to identify the perpetrator from one of three types of mug books: (a) dynamic‐combined—participants could choose to follow static mug shots with a computerized video clip combining three types of dynamic cues: the person walking, talking, and rotating; (b) dynamic‐separable—participants could limit the types of dynamic cues presented; and (c) static—just the static mug shot was presented. The dynamic‐separable condition produced significantly fewer false positive foil identifications than the static condition. Within the dynamic‐separable condition, voice was the most preferred cue. Experiment 2 explored the contribution of the individual cues. Participants attempted identifications from single dynamic cue mug books where only one type of cue was presented if a participant chose additional information. It was found that providing individual cues did not improve performance over the static mug book control. Based on the potential danger of witnesses choosing to rely on single dynamic cues, it was suggested that multimedia mug books should present dynamic cues in combination. Copyright © 2000 John Wiley & Sons, Ltd.

7.
The dynamic facial expression advantage refers to individuals' better recognition of dynamic facial expressions compared with static ones. The psychological mechanisms of this advantage mainly involve enhanced configural processing, a compensatory role of motion, and facial mimicry. The neural basis of the effect comprises a core network and an extended network: the former is chiefly responsible for early perceptual encoding and the motion processing of the stimulus, whereas the latter is associated with facial mimicry and the dynamic representation of the stimulus. Future research should refine the account of the extended network and elaborate the psychological mechanisms; conduct developmental studies of the dynamic advantage; examine the rigid motion characteristics of facial expressions; and study the dynamic facial expression advantage in virtual reality environments.

8.
A matching advantage for dynamic human faces (cited by 3: 0 self, 3 others)
Thornton IM, Kourtzi Z. Perception, 2002, 31(1): 113-132
In a series of three experiments, we used a sequential matching task to explore the impact of non-rigid facial motion on the perception of human faces. Dynamic prime images, in the form of short video sequences, facilitated matching responses relative to a single static prime image. This advantage was observed whenever the prime and target showed the same face but an identity match was required across expression (experiment 1) or view (experiment 2). No facilitation was observed for identical dynamic prime sequences when the matching dimension was shifted from identity to expression (experiment 3). We suggest that the observed dynamic advantage, the first reported for non-degraded facial images, arises because the matching task places more emphasis on visual working memory than typical face recognition tasks. More specifically, we believe that representational mechanisms optimised for the processing of motion and/or change-over-time are established and maintained in working memory and that such 'dynamic representations' (Freyd, 1987 Psychological Review 94 427-438) capitalise on the increased information content of the dynamic primes to enhance performance.

9.
Perceived gaze contact in seen faces may convey important social signals. We examined whether gaze perception affects face processing during two tasks: Online gender judgement, and later incidental recognition memory. Individual faces were presented with eyes directed either straight towards the viewer or away, while these faces were seen in either frontal or three-quarters view. Participants were slower to make gender judgements for faces with direct versus averted eye gaze, but this effect was particularly pronounced for faces with opposite gender to the observer, and seen in three-quarters view. During subsequent surprise recognition-memory testing, recognition was better for faces previously seen with direct than averted gaze, again especially for the opposite gender to the observer. The effect of direct gaze was stronger in both tasks when the head was seen in three-quarters rather than in frontal view, consistent with the greater salience of perceived eye contact for deviated faces. However, in the memory test, face recognition was also relatively enhanced for faces of opposite gender in front views when their gaze was averted rather than direct. Together, these results indicate that perceived eye contact can interact with facial processing during gender judgements and recognition memory, even when gaze direction is task-irrelevant, and particularly for faces of opposite gender to the observer (an influence which controls for stimulus factors when considering observers of both genders). These findings appear consistent with recent neuroimaging evidence that social facial cues can modulate visual processing in cortical regions involved in face processing and memory, presumably via interconnections with brain systems specialized for gaze perception and social monitoring.

10.
Preferential inspection of views of 3-D model heads. (cited by 1: 0 self, 1 other)

11.
Natural variability between instances of unfamiliar faces can make it difficult to reconcile two images as the same person. Yet for familiar faces, effortless recognition occurs even with considerable variability between images. To explore how stable face representations develop, we employed incidental learning in the form of a face sorting task. In each trial, multiple images of two facial identities were sorted into two corresponding piles. Following the sort, participants showed evidence of having learnt the faces, performing more accurately on a matching task with seen than with unseen identities. Furthermore, ventral temporal event-related potentials were more negative in the N250 time range for previously seen than for previously unseen identities. These effects appear to demonstrate some degree of abstraction, rather than simple picture learning, as the neurophysiological and behavioural effects were observed with novel images of the previously seen identities. The results provide evidence of the development of facial representations, allowing a window onto natural mechanisms of face learning.

12.
Gender is a dimension of face recognition (cited by 1: 0 self, 1 other)
In an experiment, the authors investigated the impact of gender categorization on face recognition. Participants were familiarized with composite androgynous faces labeled with either a woman's first name (Mary) or a man's first name (John). The results indicated that participants more quickly eliminated faces whose gender differed from that of the face they were looking for than faces of the same gender. This gender effect did not result from greater similarity between faces of the same gender. Rather, early gender categorization of a face during face recognition appears to speed up the comparison process between the perceptual input and the facial representation. Implications for face recognition models are discussed.

13.
Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

14.
Sex and sexual orientation related differences in processing of happy and sad facial emotions were examined using an experimental facial emotion recognition paradigm with a large sample (N = 240). Analysis of covariance (controlling for age and IQ) revealed that women (irrespective of sexual orientation) had faster reaction times than men for accurate identification of facial emotion and were more accurate in identifying male faces than female ones, whereas men performed the same regardless of the sex of the face. However, there were no overall sex differences in accuracy. These findings suggest a limited role for sex in the perception of facial affect.

15.
Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain-inspired algorithms that have recently reached human-level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human-like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us now to ask whether DCNNs rely on the same facial information and whether this human-like representation depends on a system that is optimized for face identification. In the current study, we examined the representation of DCNNs of faces that differ in features that are critical or non-critical for human face recognition. Our findings show that DCNNs optimized for face identification are tuned to the same facial features used by humans for face recognition. Sensitivity to these features was highly correlated with performance of the DCNN on a benchmark face recognition task. Moreover, sensitivity to these features and a view-invariant face representation emerged at higher layers of a DCNN optimized for face recognition but not for object recognition. This finding parallels the division into a face system and an object system in high-level visual cortex. Taken together, these findings validate human perceptual models of face recognition, enable us to use DCNNs to test predictions about human face and object recognition, and contribute to the interpretability of DCNNs.
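The kind of feature-sensitivity measure described here can be sketched briefly. This is a minimal illustration under stated assumptions, not the authors' pipeline: the random vectors stand in for real DCNN layer activations, and the difference in perturbation scale between "critical" and "non-critical" feature changes is built in purely to show how the comparison works.

```python
import numpy as np

def sensitivity(acts_orig, acts_changed):
    """Mean Euclidean distance between layer activations for original and
    feature-changed faces: a larger distance means greater sensitivity
    of the network to that feature change."""
    return float(np.mean(np.linalg.norm(acts_orig - acts_changed, axis=1)))

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 512))  # top-layer activations for 50 faces (stand-ins)
# Activations after changing a feature critical vs. non-critical for humans;
# the larger noise scale for the critical change is an assumption of this sketch.
critical = base + rng.normal(scale=1.0, size=base.shape)
noncritical = base + rng.normal(scale=0.2, size=base.shape)

s_crit = sensitivity(base, critical)
s_non = sensitivity(base, noncritical)
print(s_crit > s_non)  # greater sensitivity to critical-feature changes
```

In the actual study the activations would come from a face-trained versus an object-trained DCNN, and the per-feature sensitivities would then be correlated with benchmark recognition performance.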

16.
There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing, rather motion may emphasise or disambiguate diagnostic featural information.

17.
People readily ascribe personality traits to others and believe that faces hold important guides to character. Here we examined the relationship between static facial appearance and self-reported cooperation/defection using the prisoner’s dilemma (N = 193). Study 1 combined face images of those self-reporting they would be most and least likely to cooperate. The composites of cooperators were seen as more cooperative than non-cooperators. Study 2 demonstrated accuracy with ratings of individual faces. Masculinity of face shape was negatively related to self-reported cooperation for men, but not women. Further, ratings of smile intensity were positively, but not significantly, related to self-reported cooperation. Overall, individuals appear able to judge the potential of others to cooperate from static facial appearance alone at rates greater than chance.

18.
Two experiments test the effects of exposure duration and encoding instruction on the relative memory for five facial features. Participants viewed slides of Identi-kit faces and were later given a recognition test with same or changed versions of each face. Each changed test face involved a change in one facial feature: hair, eyes, chin, nose or mouth. In both experiments the upper-face features of hair and eyes were better recognized than the lower-face features of nose, mouth, and chin, as measured by false alarm rates. In Experiment 1, participants in the 20-second exposure duration condition remembered faces significantly better than participants in the 3-second exposure duration condition; however, memory for all five facial features improved at a similar rate with the increased duration. In Experiment 2, participants directed to use feature scanning encoding instructions remembered faces significantly better than participants following age judgement instructions; however, the size of the memory advantage for upper facial features was less with feature scanning instructions than with age judgement instructions. The results are discussed in terms of a quantitative difference in processing faces with longer exposure duration, versus a qualitative difference in processing faces with various encoding instructions. These results are related to conditions that affect the accuracy of eyewitness identification.

19.
The role of movement in the recognition of famous faces (cited by 6: 0 self, 6 others)
The effects of movement on the recognition of famous faces shown in difficult conditions were investigated. Images were presented as negatives, upside down (inverted), and thresholded. Results indicate that, under all these conditions, moving faces were recognized significantly better than static ones. One possible explanation of this effect could be that a moving sequence contains more static information about the different views and expressions of the face than does a single static image. However, even when the amount of static information was equated (Experiments 3 and 4), there was still an advantage for moving sequences that contained their original dynamic properties. The results suggest that the dynamics of the motion provide additional information, helping to access an established familiar face representation. Both the theoretical and the practical implications for these findings are discussed.

20.
Faces from another race are generally more difficult to recognize than faces from one's own race. However, faces provide multiple cues for recognition, and the relative contributions of these cues to this “other-race effect” remain unknown. In the current study, we used three-dimensional laser-scanned head models which allowed us to independently manipulate two prominent cues for face recognition: the facial shape morphology and the facial surface properties (texture and colour). In Experiment 1, Asian and Caucasian participants implicitly learned a set of Asian and Caucasian faces that had both shape and surface cues to facial identity. Their recognition of these encoded faces was then tested in an old/new recognition task. For these face stimuli, we found a robust other-race effect: Both groups were more accurate at recognizing own-race than other-race faces. Having established the other-race effect, in Experiment 2 we provided only shape cues for recognition and in Experiment 3 we provided only surface cues for recognition. Caucasian participants continued to show the other-race effect when only shape information was available, whereas Asian participants showed no effect. When only surface information was available, there was a weak pattern for the other-race effect in Asians. Performance was poor in this latter experiment, so this pattern needs to be interpreted with caution. Overall, these findings suggest that Asian and Caucasian participants rely differently on shape and surface cues to recognize own-race faces, and that they continue to use the same cues for other-race faces, which may be suboptimal for these faces.

