Similar Articles
20 similar articles found (search time: 15 ms)
1.
We investigated participants’ ability to identify and represent faces by hand. In Experiment 1, participants proved surprisingly capable of identifying unfamiliar live human faces using only their sense of touch. To evaluate the contribution of geometric and material information more directly, we biased participants toward encoding faces more in terms of geometric than material properties by varying the exploration condition. When participants explored the faces both visually and tactually, identification accuracy did not improve relative to touch alone. When participants explored masks of the faces, thereby eliminating material cues, matching accuracy declined substantially relative to tactual identification of live faces. In Experiment 2, we explored intersensory transfer of face information between vision and touch. The findings are discussed in terms of their relevance to haptic object processing and to the face-processing literature in general.

2.
Three experiments are reported in which the effects of viewpoint on the recognition of distinctive and typical faces were explored. Specifically, we investigated whether generalization across views would be better for distinctive faces than for typical faces. In Experiment 1 the time to match different views of the same typical faces and the same distinctive faces was dependent on the difference between the views shown. In contrast, the accuracy and latency of correct responses on trials in which two different faces were presented were independent of viewpoint if the faces were distinctive but were view-dependent if the faces were typical. In Experiment 2 we tested participants' recognition memory for unfamiliar faces that had been studied at a single three-quarter view. Participants were presented with all face views during test. Finally, in Experiment 3, participants were tested on their recognition of unfamiliar faces that had been studied at all views. In both Experiments 2 and 3 we found an effect of distinctiveness and viewpoint but no interaction between these factors. The results are discussed in terms of a model of face representation based on inter-item similarity in which the representations are view specific.

3.
Mood has varied effects on cognitive performance, including the accuracy of face recognition (Lundh & Ost, 1996). Three experiments are presented here that explored face recognition abilities in mood-induced participants. Experiment 1 demonstrated that happy-induced participants are less accurate and have a more conservative response bias than sad-induced participants in a face recognition task. Using a remember/know/guess procedure, Experiment 2 showed that sad-induced participants had more conscious recollections of faces than happy-induced participants. Additionally, sad-induced participants could recognise all faces accurately, whereas happy- and neutral-induced participants recognised happy faces more accurately than sad faces. In Experiment 3, these effects were not observed when participants learnt the faces intentionally rather than incidentally. It is suggested that happy-induced participants do not process faces as elaborately as sad-induced participants.

4.
Facial expressions serve as cues that encourage viewers to learn about their immediate environment. In studies assessing the influence of emotional cues on behavior, fearful and angry faces are often combined into one category, such as "threat-related," because they share similar emotional valence and arousal properties. However, these expressions convey different information to the viewer. Fearful faces indicate the increased probability of a threat, whereas angry expressions embody a certain and direct threat. This conceptualization predicts that a fearful face should facilitate processing of the environment to gather information to disambiguate the threat. Here, we tested whether fearful faces facilitated processing of neutral information presented in close temporal proximity to the faces. In Experiment 1, we demonstrated that, compared with neutral faces, fearful faces enhanced memory for neutral words presented in the experimental context, whereas angry faces did not. In Experiment 2, we directly compared the effects of fearful and angry faces on subsequent memory for emotional faces versus neutral words. We replicated the findings of Experiment 1 and extended them by showing that participants remembered more faces from the angry face condition relative to the fear condition, consistent with the notion that anger differs from fear in that it directs attention toward the angry individual. Because these effects cannot be attributed to differences in arousal or valence processing, we suggest they are best understood in terms of differences in the predictive information conveyed by fearful and angry facial expressions.

5.
Faces from another race are generally more difficult to recognize than faces from one's own race. However, faces provide multiple cues for recognition, and the relative contributions of these cues to this “other-race effect” remain unknown. In the current study, we used three-dimensional laser-scanned head models, which allowed us to independently manipulate two prominent cues for face recognition: facial shape morphology and facial surface properties (texture and colour). In Experiment 1, Asian and Caucasian participants implicitly learned a set of Asian and Caucasian faces that had both shape and surface cues to facial identity. Their recognition of these encoded faces was then tested in an old/new recognition task. For these face stimuli, we found a robust other-race effect: Both groups were more accurate at recognizing own-race than other-race faces. Having established the other-race effect, in Experiment 2 we provided only shape cues for recognition and in Experiment 3 we provided only surface cues for recognition. Caucasian participants continued to show the other-race effect when only shape information was available, whereas Asian participants showed no effect. When only surface information was available, there was a weak pattern for the other-race effect in Asians. Performance was poor in this latter experiment, so this pattern needs to be interpreted with caution. Overall, these findings suggest that Asian and Caucasian participants rely differently on shape and surface cues to recognize own-race faces, and that they continue to use the same cues for other-race faces, which may be suboptimal for these faces.

6.
Two experiments test the effects of exposure duration and encoding instruction on the relative memory for five facial features. Participants viewed slides of Identi-kit faces and were later given a recognition test with same or changed versions of each face. Each changed test face involved a change in one facial feature: hair, eyes, chin, nose or mouth. In both experiments the upper-face features of hair and eyes were better recognized than the lower-face features of nose, mouth, and chin, as measured by false alarm rates. In Experiment 1, participants in the 20-second exposure duration condition remembered faces significantly better than participants in the 3-second exposure duration condition; however, memory for all five facial features improved at a similar rate with the increased duration. In Experiment 2, participants directed to use feature scanning encoding instructions remembered faces significantly better than participants following age judgement instructions; however, the size of the memory advantage for upper facial features was less with feature scanning instructions than with age judgement instructions. The results are discussed in terms of a quantitative difference in processing faces with longer exposure duration, versus a qualitative difference in processing faces with various encoding instructions. These results are related to conditions that affect the accuracy of eyewitness identification.

7.
We describe three experiments in which viewers complete face detection tasks as well as standard measures of unfamiliar face identification. In the first two studies, participants viewed pareidolic images of objects (Experiment 1) or cloud scenes (Experiment 2), and their propensity to see faces in these scenes was measured. In neither case is performance significantly associated with identification, as measured by the Cambridge Face Memory or Glasgow Face Matching Tests. In Experiment 3 we showed participants real faces in cluttered scenes. Viewers’ ability to detect these faces is unrelated to their identification performance. We conclude that face detection dissociates from face identification.

8.
Several studies have investigated the role of featural and configural information in processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.

9.
When we examine objects haptically, do we weight their local and global features as we do visually, or do we place relatively greater emphasis on local shape? In Experiment 1, subjects made either haptic or visual comparisons of pairs of geometric objects (from a set of 16) differing in local and global shape. Relative to other objects, those with comparable global shape but different local features were judged less similar by touch than by vision. Separate groups of subjects explored the same objects while wearing either thick gloves (to discourage contour-following) or splinted gloves (to prevent enclosure). Ratings of similarity were comparable in these two conditions, suggesting that neither exploratory procedure was necessary, by itself, for the extraction of either local or global shape. In Experiment 2, haptic exploration time was restricted to 1, 4, 8, or 16 sec. Limiting exploration time affected relative similarity in objects differing in their local but not their global shape. Together, the findings indicate that the haptic system initially weights local features more heavily than global ones, that this differential weighting decreases over time, and that neither contour-following nor enclosure is exclusively associated with the differential emphasis on local versus global features.

10.
Face recognition models suggest independent processing for functionally different types of information, such as identity, expression, sex, and facial speech. Interference between sex and expression information was tested using both a rating study and Garner's selective attention paradigm using speeded sex and expression decisions. When participants were asked to assess the masculinity of male and female angry and surprised faces, they found surprised faces to be more feminine than angry ones (Experiment 1). However, in a speeded-classification situation in the laboratory in which the sex decision was either "easy" relative to the expression decision (Experiment 2) or of more equivalent difficulty (Experiment 3), it was possible for participants to attend selectively to either dimension without interference from the other. Qualified support is offered for independent processing routes.

11.
Information given to witnesses after an identification decision greatly alters their impressions of the original event and, importantly, their identification confidence. Two experiments investigated the possibility that the effect of feedback on confidence may be altered according to the strength of the witness's cues to accuracy. Experiment 1 used a manipulation of exposure duration to alter recognition accuracy prior to the delivery of confirming, disconfirming or no feedback. While the feedback effect was not different across exposure duration conditions, decisions that were made more quickly were less likely to show large changes in confidence due to feedback. Experiment 2 manipulated the distinctiveness of faces and showed that the effects of feedback on confidence, and on the resolution of the confidence judgement, were more pronounced when disconfirming feedback was given for distinctive faces and when confirming feedback was given for typical faces. These studies showed that the impressions that participants formed of their likely accuracy might moderate the effects of feedback on decision confidence. Copyright © 2006 John Wiley & Sons, Ltd.

12.
Investigations of face identification have typically focussed on matching faces to photographic IDs. Few researchers have considered the task of searching for a face in a crowd. In Experiment 1, we created the Chokepoint Search Test to simulate real-time search for a target. Performance on this test was poor (39% accuracy) and showed moderate associations with tests of face matching and memory. In addition, trial-level confidence predicted accuracy, and for those participants who were previously familiar with one or more targets, higher familiarity was associated with increased accuracy. In Experiment 2, we found improvements in performance on the test when three recent images of the target, but not three social media images, were displayed during searches. Taken together, our results highlight the difficulties inherent in real-time searching for faces, with important implications for those security personnel who carry out this task on a daily basis.

13.
The results of two studies on the relationship between evaluations of trustworthiness, valence, and arousal of faces are reported. In Experiment 1, valence and trustworthiness judgments of faces were positively correlated, while arousal was negatively correlated with both trustworthiness and valence. In Experiment 2, we investigated learning about faces based on their emotional expression and the extent to which this learning is influenced by perceived trustworthiness. Neutral faces of different models differing in trustworthiness were repeatedly associated with happy or with angry expressions, and the participants were asked to categorize each neutral face as belonging to a "friend" or to an "enemy" based on these associations. Four pairing conditions were defined in terms of the congruency between trustworthiness level and expression: trustworthy-congruent, trustworthy-incongruent, untrustworthy-congruent and untrustworthy-incongruent. Categorization accuracy during the learning phase and face evaluation after learning were measured. During learning, participants learned to categorize trustworthy and untrustworthy faces as friends or enemies with similar efficiency; thus, no effects of congruency were found. In the evaluation phase, faces of enemies were rated as more negative and arousing than those of friends, showing that learning was effective in changing the affective value of the faces. However, faces of untrustworthy models were still judged on average more negative and arousing than those of trustworthy ones. In conclusion, although face trustworthiness did not influence the learning of associations between faces and positive or negative social information, it did have a significant influence on face evaluation that remained manifest even after learning.

14.
John B. Pierce Laboratory and Yale University, New Haven, Connecticut
When we examine objects haptically, do we weight their local and global features as we do visually, or do we place relatively greater emphasis on local shape? In Experiment 1, subjects made either haptic or visual comparisons of pairs of geometric objects (from a set of 16) differing in local and global shape. Relative to other objects, those with comparable global shape but different local features were judged less similar by touch than by vision. Separate groups of subjects explored the same objects while wearing either thick gloves (to discourage contour-following) or splinted gloves (to prevent enclosure). Ratings of similarity were comparable in these two conditions, suggesting that neither exploratory procedure was necessary, by itself, for the extraction of either local or global shape. In Experiment 2, haptic exploration time was restricted to 1, 4, 8, or 16 sec. Limiting exploration time affected relative similarity in objects differing in their local but not their global shape. Together, the findings indicate that the haptic system initially weights local features more heavily than global ones, that this differential weighting decreases over time, and that neither contour-following nor enclosure is exclusively associated with the differential emphasis on local versus global features.

15.
In two experiments, participants counted features of schematic faces with positive, negative, or neutral emotional expressions. In Experiment 1 it was found that counting features took longer when they were embedded in negative as opposed to positive faces. Experiment 2 replicated the results of Experiment 1 and also demonstrated that more time was required to count features of negative relative to neutral faces. However, in both experiments, when the faces were inverted to reduce holistic face perception, no differences between neutral, positive, and negative faces were observed, even though the feature information in the inverted faces was the same as in the upright faces. We suggest that, relative to neutral and positive faces, negative faces are particularly effective at capturing attention to the global face level and thereby make it difficult to count the local features of faces.

16.
Hole GJ, George PA, Eaves K, Rasek A. Perception, 2002, 31(10): 1221-1240
The importance of 'configural' processing for face recognition is now well established, but it remains unclear precisely what it entails. Through four experiments we attempted to clarify the nature of configural processing by investigating the effects of various affine transformations on the recognition of familiar faces. Experiment 1 showed that recognition was markedly impaired by inversion of faces, somewhat impaired by shearing or horizontally stretching them, but unaffected by vertical stretching of faces to twice their normal height. In Experiment 2 we investigated vertical and horizontal stretching in more detail, and found no effects of either transformation. Two further experiments were performed to determine whether participants were recognising stretched faces by using configural information. Experiment 3 showed that nonglobal vertical stretching of faces (stretching either the top or the bottom half while leaving the remainder undistorted) impaired recognition, implying that configural information from the stretched part of the face was influencing the process of recognition; that is, configural processing involves global facial properties. In Experiment 4 we examined the effects of Gaussian blurring on recognition of undistorted and vertically stretched faces. Faces remained recognisable even when they were both stretched and blurred, implying that participants were basing their judgments on configural information from these stimuli, rather than resorting to some strategy based on local featural details. The tolerance of spatial distortions in human face recognition suggests that the configural information used as a basis for face recognition is unlikely to involve information about the absolute position of facial features relative to each other, at least not in any simple way.

17.
For face recognition, observers utilize both shape and texture information. Here, we investigated the relative diagnosticity of shape and texture for delayed matching of familiar and unfamiliar faces (Experiment 1) and identifying familiar and newly learned faces (Experiment 2). Within each familiarity condition, pairs of 3D-captured faces were morphed selectively in either shape or texture in 20% steps, holding the respective other dimension constant. We also assessed participants' individual face-processing skills via the Bielefelder Famous Faces Test (BFFT), the Glasgow Face Matching Test, and the Cambridge Face Memory Test (CFMT). Using multilevel model analyses, we examined probabilities of same versus different responses (Experiment 1) and of original identity versus other/unknown identity responses (Experiment 2). Overall, texture was more diagnostic than shape for both delayed matching and identification, particularly so for familiar faces. On top of these overall effects, above-average BFFT performance was associated with enhanced utilization of texture in both experiments. Furthermore, above-average CFMT performance coincided with slightly reduced texture dominance in the delayed matching task (Experiment 1) and stronger sensitivity to morph-based changes overall, that is, irrespective of morph type, in the face identification task (Experiment 2). Our findings (1) show the disproportionate importance of texture information for processing familiar face identity and (2) provide further evidence that familiar and unfamiliar face identity perception are mediated by different underlying processes.

18.
When processing information about human faces, we have to integrate different sources of information like skin colour and emotional expression. In 3 experiments, we investigated how these features are processed in a top-down manner when task instructions determine the relevance of features, and in a bottom-up manner when the stimulus features themselves determine process priority. In Experiment 1, participants learned to respond with approach-avoidance movements to faces that presented both emotion and colour features (e.g. happy faces printed in greyscale). For each participant, only one of these two features was task-relevant while the other one could be ignored. In contrast to our predictions, we found better learning of task-irrelevant colour when emotion was task-relevant than vice versa. Experiment 2 showed that the learning of task-irrelevant emotional information was improved in general when participants’ awareness was increased by adding NoGo-trials. Experiment 3 replicated these results for faces and emotional words. We conclude that during the processing of faces, both bottom-up and top-down processes are involved, such that task instructions and feature characteristics play a role. Ecologically significant features like emotions are not necessarily processed with high priority. The findings are discussed in the light of theories of attention and cognitive biases.

19.
20.
The authors examined preadolescents' ability to recognize faces of unfamiliar peers according to their attractiveness. They hypothesized that highly attractive faces would be less accurately recognized than moderately attractive faces because the former are more typical. In Experiment 1, 106 participants (M age = 10 years) were asked to recognize faces of unknown peers who varied in gender and attractiveness (high- vs. medium-attractiveness). Results showed that attractiveness enhanced the accuracy of recognition for boys' faces and impaired recognition of girls' faces. The same interaction was found in Experiment 2, in which 92 participants (M age = 12 years) were tested for their recognition of another set of faces of unfamiliar peers. The authors conducted Experiment 3 to examine whether the reason for that interaction is that high- and medium-attractive girls' faces differ more in typicality than do boys' faces. The effect size of attractiveness on typicality was similar for boys' and girls' faces. The overall results are discussed with reference to the development of face encoding and biological gender differences with respect to the typicality of faces during preadolescence.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | ICP licence: 京ICP备09084417号