Similar Articles
20 similar articles found
1.
Unfamiliar face matching is a surprisingly difficult task, yet we often rely on people's matching decisions in applied settings (e.g., border control). Most attempts to improve accuracy (including training and image manipulation) have had very limited success. In a series of studies, we demonstrate that using smiling rather than neutral pairs of images brings about significant improvements in face matching accuracy. This is true for both match and mismatch trials, implying that the information provided through a smile helps us detect images of the same identity as well as distinguish between images of different identities. Study 1 compares matching performance when images in the face pair display either an open-mouth smile or a neutral expression. In Study 2, we add an intermediate level, a closed-mouth smile, to identify the effect of teeth being exposed, and Study 3 explores face matching accuracy when only information about the lower part of the face is available. Results demonstrate that an open-mouth smile changes the face in an idiosyncratic way which aids face matching decisions. Such findings have practical implications for matching in applied contexts, where we typically use neutral images to represent ourselves in official documents.

2.
Face perception is characterized by a distinct scanpath. While eye movements are considered functional, there has been no direct evidence that disrupting this scanpath affects face recognition performance. The present experiment investigated the influence of an irrelevant letter-search task (with letter strings arranged horizontally, vertically, or randomly) on the subsequent scanning strategies used in processing upright and inverted famous faces. Participants’ response time to identify the face and the direction of their eye movements were recorded. The orientation of the letter search influenced saccadic direction when viewing the face images, such that a direct carryover effect was observed. Following a vertically oriented letter-search task, recognition of famous faces was slower and less accurate for upright faces, and faster for inverted faces. These results extend the carryover findings of Thompson and Crundall into a novel domain. Crucially, they also indicate that upright and inverted faces are better processed by different eye movements, highlighting the importance of scanpaths in face recognition.

3.
In cases of disputed CCTV identification, expert testimony based on the results of facial image comparison analysis may be presented to the jury. However, many of the techniques lack empirical data to support their use. Using a within-participants design, we compared the accuracy of face-matching decisions when images were presented using a ‘facial wipe’ technique (where one image is superimposed on another and the display gradually ‘wipes’ between the two) with decisions based on static images. Experiment 1 used high-quality image pairs; Experiment 2 used disguised target images; and Experiment 3 used degraded target images. Across all three experiments, rather than optimising performance, facial wipes reduced accuracy relative to static presentations. Further, there is evidence that video wipes increase false positives and therefore may increase the likelihood that images of two different people will be incorrectly judged to show the same individual.

4.
We know from previous research that unfamiliar face matching (determining whether two simultaneously presented images show the same person or not) is very error-prone. A small number of studies in laboratory settings have shown that the use of multiple images or a face average, rather than a single image, can improve face matching performance. Here, we tested 1,999 participants using four-image arrays and face averages in two separate live matching tasks. Matching a single image to a live person resulted in numerous errors (79.9% accuracy across both experiments), and neither multiple images (82.4% accuracy) nor face averages (76.9% accuracy) improved performance. These results are important when considering possible alterations which could be made to photo-ID. Although multiple images and face averages have produced measurable improvements in performance in recent laboratory studies, they do not produce benefits in a real-world live face matching context.

5.
The accurate identification of an unfamiliar individual from a face photo is a critical factor in several applied situations (e.g., border control). Despite this, matching faces to photographic ID is highly prone to error. In lieu of effective training measures, which could reduce face matching errors, the selection of “super-recognisers” (SRs) provides the most promising route to combat misidentification or fraud. However, to date, super-recognition has been defined and tested using almost exclusively “own-race” face memory and matching tests. Here, across three studies, we test Caucasian participants' performance on own- and other-race face identification tasks (GFMT, MFMT, CFMT+, EFMT, CFMT-Chinese). Our findings show that compared to controls, high-performing typical recognisers (Studies 1 and 2) and SRs (Study 3) show superior performance on both the own- and other-race tests. These findings suggest that recruiting SRs in ethnically diverse applied settings could be advantageous.

6.
Two experiments test whether isolated visible speech movements can be used for face matching. Visible speech information was isolated with a point-light methodology. Participants were asked to match articulating point-light faces to a fully illuminated articulating face in an XAB task. The first experiment tested single-frame static face stimuli as a control. The results revealed that the participants were significantly better at matching the dynamic face stimuli than the static ones. Experiment 2 tested whether the observed dynamic advantage was based on the movement itself or on the fact that the dynamic stimuli consisted of many more static and ordered frames. For this purpose, frame rate was reduced, and the frames were shown in a random order, a correct order with incorrect relative timing, or a correct order with correct relative timing. The results revealed better matching performance with the correctly ordered and timed frame stimuli, suggesting that matches were based on the actual movement itself. These findings suggest that speaker-specific visible articulatory style can provide information for face matching.

7.
We describe three experiments in which viewers complete face detection tasks as well as standard measures of unfamiliar face identification. In the first two studies, participants viewed pareidolic images of objects (Experiment 1) or cloud scenes (Experiment 2), and their propensity to see faces in these scenes was measured. In neither case is performance significantly associated with identification, as measured by the Cambridge Face Memory or Glasgow Face Matching Tests. In Experiment 3 we showed participants real faces in cluttered scenes. Viewers’ ability to detect these faces is unrelated to their identification performance. We conclude that face detection dissociates from face identification.

8.
This article provides a response to five excellent commentaries on our article ‘Super‐recognizers: From the lab to the world and back again’. Specifically, the response summarizes commonalities between these commentaries. Based on this consensus, we propose a flexible framework for the assessment of superior face recognition and outline guiding principles to advance future work in the field.

9.
Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers – after very limited exposure to a target face – were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we test whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We find that averaging across rankings by multiple raters produces image selections that provide superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance.

10.
Recent experiments have suggested that seeing a familiar face move provides additional dynamic information to the viewer, useful in the recognition of identity. In four experiments, repetition priming was used to investigate whether dynamic information is intrinsic to the underlying face representations. The results suggest that a moving image primes more effectively than a static image, even when the same static image is shown in the prime and the test phases (Experiment 1). Furthermore, when moving images are presented in the test phase (Experiment 2), there is an advantage for moving prime images. The greatest priming advantage is found with naturally moving faces, rather than with those shown in slow motion (Experiment 3). Finally, showing the same moving sequence at prime and test produced more priming than that found when different moving sequences were shown (Experiment 4). The results suggest that dynamic information is intrinsic to the face representations and that there is an advantage to viewing the same moving sequence at prime and test.

11.
Humans show improved recognition for faces from their own social group relative to faces from another social group. Yet before faces can be recognized, they must first be detected in the visual field. Here, we tested whether humans also show an ingroup bias at the earliest stage of face processing – the point at which the presence of a face is first detected. To this end, we measured viewers' ability to detect ingroup (Black and White) and outgroup faces (Asian, Black, and White) in everyday scenes. Ingroup faces were detected with greater speed and accuracy relative to outgroup faces (Experiment 1). Removing face hue impaired detection generally, but the ingroup detection advantage was undiminished (Experiment 2). This same pattern was replicated by a detection algorithm using face templates derived from human data (Experiment 3). These findings demonstrate that the established ingroup bias in face processing can extend to the early process of detection. This effect is ‘colour blind’, in the sense that group membership effects are independent of general effects of image hue. Moreover, it can be captured by tuning visual templates to reflect the statistics of observers' social experience. We conclude that group bias in face detection is both a visual and a social phenomenon.

12.
Three experiments are reported that examined the capacity to match a voice with a static image of a face. When using a simultaneous same/different matching task, performance was significantly better than chance (Experiments 1 and 2). However, it did not appear to depend on sex of speaker, sex of listener, stimulus distinctiveness, or self-reported strategies (Experiment 2). Concerns over floor effects as well as a significant response bias prompted a change of task, and when performance was examined through matching a voice to a face lineup, a more interesting pattern emerged. Again, performance was significantly better than chance, but in addition, it was demonstrably affected by the distinctiveness of the speaker’s voice. These results are considered in the context of theoretical discussions regarding face–voice integration, and in the context of more applied considerations regarding multimodal benefits in witness scenarios.

13.
For face recognition, observers utilize both shape and texture information. Here, we investigated the relative diagnosticity of shape and texture for delayed matching of familiar and unfamiliar faces (Experiment 1) and for identifying familiar and newly learned faces (Experiment 2). Within each familiarity condition, pairs of 3D-captured faces were morphed selectively in either shape or texture in 20% steps, holding the other dimension constant. We also assessed participants’ individual face-processing skills via the Bielefelder Famous Faces Test (BFFT), the Glasgow Face Matching Test, and the Cambridge Face Memory Test (CFMT). Using multilevel model analyses, we examined probabilities of same versus different responses (Experiment 1) and of original identity versus other/unknown identity responses (Experiment 2). Overall, texture was more diagnostic than shape for both delayed matching and identification, particularly so for familiar faces. On top of these overall effects, above-average BFFT performance was associated with enhanced utilization of texture in both experiments. Furthermore, above-average CFMT performance coincided with slightly reduced texture dominance in the delayed matching task (Experiment 1) and with stronger sensitivity to morph-based changes overall, that is, irrespective of morph type, in the face identification task (Experiment 2). Our findings (1) show the disproportionate importance of texture information for processing familiar face identity and (2) provide further evidence that familiar and unfamiliar face identity perception are mediated by different underlying processes.

14.
We studied intact and impaired processes in a prosopagnosic patient (RP). In Experiment 1, RP showed an inversion superiority effect with both faces and objects, with better performance when stimuli were presented upside down than in normal upright orientation. In Experiment 2, we studied the effect of face configuration directly by comparing matching performance with normal vs. scrambled faces. RP was worse with normal than with scrambled faces, whereas normal controls showed an advantage of a good face context. In Experiment 3, RP showed interference from external face features on the evaluation of internal face features. These results indicate, first, that although RP is impaired in face recognition and face matching, he does still encode the whole face rather than relying completely on parts-based procedures. Second, RP has a deficit at the level of the configural processes involved in finding subtle differences between individual faces, as his performance is worse when presented with a normal face configuration than with scrambled or inverted faces.

15.
This study examines the limits of image variability, commonly referred to as ambient images, in face learning. To measure face learning, the authors used the face sorting paradigm from Jenkins et al. [(2011). Variability in photos of the same face. Cognition, 121(3), 313–323]. Before completing the face sorting task, participants viewed either 5, 15, or 45 ambient images of an unfamiliar person’s face. The authors aimed to observe whether there is an incremental benefit of ambient images and whether studying many ambient images could predict perfect performance. The results revealed that performance improved greatly from the low- to the medium-exposure group; however, performance plateaued after viewing 15 ambient images. In addition, participants who viewed 45 images did not always achieve perfect performance. This study also found that time data can serve as a quantitative measure of familiarity. The authors concluded that future research must extend beyond ambient images to fully understand the process of familiarity.

16.
The most widely used measurement of holistic face perception, the composite face effect (CFE), is challenged by two apparently contradictory goals: having a defined face part (i.e., the top half), and yet perceiving the face as an integrated unit (i.e., holistically). Here, we investigated the impact of a small gap between top and bottom face halves in the standard composite face paradigm, requiring matching of sequentially presented top face halves. In Experiment 1, the CFE was larger for no-gap than gap stimuli overall, but not for participants who were presented with gap stimuli first, suggesting that the area of the top face half was unknown without a gap. This was confirmed in Experiment 2, in which these two stimulus sets were mixed up: the gap stimuli thus provided information about the area of a top face half and the magnitude of the CFE did not differ between stimulus sets. These observations indicate that the CFE might be artificially inflated in the absence of a stimulus cue that objectively defines a border between the face halves. Finally, in Experiment 3, observers were asked to determine which of two simultaneously presented faces was the composite face. Perceptual judgements for no-gap stimuli approached ceiling; however, with a gap, participants were almost unable to distinguish the composite face from a veridical face. This effect was not only due to low-level segmentation cues at the border of no-gap face halves, because stimulus inversion decreased performance in both conditions. This result indicates that the two halves of different faces may be integrated more naturally with a small gap that eliminates an enhanced contrast border. Collectively, these observations suggest that a small gap between face halves provides an objective definition of the face half to match and is beneficial for valid measurement of the behavioural CFE.

17.
Four experiments are reported that investigate the usefulness of rigid (head nodding, shaking) and nonrigid (talking, expressions) motion for establishing new face representations of previously unfamiliar faces. Results show that viewing a face in motion leads to more accurate face learning, compared with viewing a single static image (Experiment 1). The advantage for viewing the face moving rigidly seems to be due to the different angles of view contained in these sequences (Experiment 2). However, the advantage for nonrigid motion is not simply due to multiple images (Experiment 3) and is not specifically linked to forwards motion but extends to reversed sequences (Experiment 4). Thus, although we have demonstrated beneficial effects of motion for face learning, they do not seem to be due to the specific dynamic properties of the sequences shown. Instead, the advantage for nonrigid motion may reflect increased attention to faces moving in a socially important manner.

18.
19.
When viewing unfamiliar faces, photographs of the same person are often perceived as belonging to different people, and photographs of different people as belonging to the same person. Identity matching of unfamiliar faces is especially challenging when the photographs are of a person whose ethnicity differs from that of the observer. In contrast, matching is trivial when viewing familiar faces, regardless of race. Viewing multiple images of an own-race target identity improves accuracy on a line-up task when the target is known to be present (Dowsett et al., 2016, Q J Exp Psychol, 69, 1), suggesting that exposure to within-person variability in appearance is key to face learning. Across three experiments, we show that viewing multiple images of a target identity also improves accuracy for other-race faces on target-present trials. However, viewing multiple images decreases accuracy (i.e., increases false alarms) on target-absent trials for both own- and other-race faces. We discuss the implications of our findings for models of face recognition and for forensic settings.

20.
In recent years, there has been increasing interest in people with superior face recognition skills. Yet identification of these individuals has mostly relied on criterion performance on a single attempt at a single measure of face memory. The current investigation aimed to examine the consistency of superior face recognition skills in 30 police officers, both across tests that tap into the same process and between tests that tap into different components of face processing. Overall indices of performance across related measures identified different superior performers than isolated test scores did. Further, different top performers emerged for target-present versus target-absent indices, suggesting that signal detection measures are the most useful indicators of performance. Finally, a dissociation was observed between superior memory and matching performance. Super-recognizer screening programmes should therefore include overall indices summarizing multiple attempts at related tests, allowing for individuals to rank highly on different (and sometimes very specific) tasks.
