Similar Literature
20 similar documents were retrieved for this query.
1.
Two experiments are reported which examined the influence of context on face recognition accuracy for novel and familiar faces respectively. Context was manipulated by varying the physical background against which the faces appeared. In Experiment I, 80 student subjects observed 18 faces before attempting to recognize them in a sequence of 36 alternatives. For half the subjects, the backgrounds changed from study to test, while for the remainder they stayed the same. In addition, for half the subjects, both the pose and expression of the face also changed, while for the others they remained constant. Changes in pose plus expression and context significantly reduced recognition accuracy for the target faces. Experiment II used an identical design, except that the faces of celebrities replaced the novel faces. The influence of context was eliminated but the effects of pose and expression were maintained. However, when only faces which were actually identified by subjects were considered, the effects of pose and expression, too, were eliminated. The significance of these findings for theories of contextual memory is discussed.

2.
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically and contextually influence the perceptual processing of emotional facial expressions in a separate task, even when those emotions are of a distinct valence. Thus, our novel findings suggest that language acts as a contextual influence on the recognition of emotional facial expressions for both same and different valences.

3.
We examined context-free familiarity information as a source of the effects of face typicality upon face recognition. Experiment 1 tested memory for typical and unusual faces by (1) subjects who received an input list followed immediately by a recognition test (standard condition), (2) subjects who viewed all test faces (targets and lures) prior to the input list (prefamiliarization condition), and (3) subjects who viewed all test faces after the input list but prior to recognition (postfamiliarization condition). Although false-alarm errors in the standard condition were lower for unusual than for typical faces, this effect was reduced by postfamiliarization and was eliminated entirely by prefamiliarization. The prefamiliarization and typicality effects were replicated in Experiment 2, which showed that patterns of old judgments were compatible with the hypothesis that, although familiarity of new faces is greater if these faces are typical, the increment in familiarity that results from presentation is greater if these faces are unusual.

4.
In two experiments, we examined the relation between gaze control and recollective experience in the context of face recognition. In Experiment 1, participants studied a series of faces while their eye movements were eliminated during study, during test, or during both. Subsequently, they made remember/know judgements for each recognized test face. The preclusion of eye movements impaired explicit recollection without affecting familiarity-based recognition. In Experiment 2, participants examined unfamiliar faces under two study conditions (similarity vs. difference judgements) while their eye movements were registered. Similarity vs. difference judgements produced opposite effects on remember/know responses, with no systematic effects on eye movements. However, face recollection was related to eye movements, so that remember responses were associated with more frequent refixations than know responses. These findings suggest that saccadic eye movements mediate the nature of recollective experience, and that explicit recollection reflects a greater consistency between study and test fixations than familiarity-based face recognition.

5.
6.
Three experiments examined 3- and 5-year-olds’ recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression remained neutral (Experiment 1) or varied between immediate and delayed tests: from neutral to smile and anger (Experiment 2), from smile to neutral and anger (Experiment 3, condition 1), or from anger to neutral and smile (Experiment 3, condition 2). In all experiments, immediate face recognition was not influenced by emotional expression for either age group. Delayed face recognition was most accurate for faces in identical emotional expression. For 5-year-olds, delayed face recognition (with varied emotional expression) was not influenced by which emotional expression had been displayed during the immediate recognition test. Among 3-year-olds, accuracy decreased when facial expressions varied from neutral to smile and anger but was constant when facial expressions varied from anger or smile to neutral, smile or anger. Three-year-olds’ recognition was facilitated when faces initially displayed smile or anger expressions, but this was not the case for 5-year-olds. Results thus indicate a developmental progression in face identity recognition with varied emotional expressions between ages 3 and 5.

7.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

9.
Perceivers remember own-race faces more accurately than other-race faces (i.e., the Own-Race Bias). In the current experiments, we manipulated participants' attentional resources and social group membership to explore their influence on own- and other-race face recognition memory. In Experiment 1, Chinese participants viewed own-race and Caucasian faces, and between-subjects we manipulated whether participants' attention was divided during face encoding. We found that divided attention eliminated the Own-Race Bias in memory due to a reduction of memory accuracy for own-race faces, indicating that attention allocation plays a role in creating the bias. In Experiment 2, Chinese participants completed an ostensible personality test. Some participants were informed that their personality traits were most commonly found in Caucasian (i.e., other-race) individuals, resulting in these participants sharing a group membership with other-race targets. In contrast, other participants were not told anything about the personality test, resulting in the default own-race group membership. The participants encoded the faces for a subsequent recognition memory test either with or without performing a concurrent arithmetic distracting task. Results showed that other-race group membership and reducing attention during encoding independently eliminated the typical Own-Race Bias in face memory. The implications of these findings for perceptual-expertise and social-categorization models are discussed.

10.
Hole GJ, George PA, Dunsmore V. Perception, 1999, 28(3): 341-359.
Inversion and photographic negation both impair face recognition. Inversion seems to disrupt processing of the spatial relationships between facial features ('relational' processing), which normally occurs with upright faces and facilitates their recognition. It remains unclear why negation affects recognition. To find out whether negation impairs relational processing, we investigated whether negative faces are subject to the 'chimeric-face effect'. Recognition of the top half of a composite face (constructed from the top and bottom halves of different faces) is difficult when the face is upright, but not when it is inverted. To perform this task successfully, the bottom half of the face has to be disregarded, but the relational processing which normally occurs with upright faces makes this difficult. Inversion reduces relational processing and thus facilitates performance on this particular task. In our experiments, subjects saw pairs of chimeric faces and had to decide whether or not the top halves were identical. On half the trials the two chimeras had identical tops; on the remaining trials the top halves were different. (The bottom halves were always different.) All permutations of orientation (upright or inverted) and luminance (normal or negative) were used. In Experiment 1, each pair of 'identical' top halves was the same in all respects. Experiment 2 used differently oriented views of the same person, to preclude matches being based on incidental features of the images rather than the faces displayed within them. In both experiments, similar chimeric-face effects were obtained with both positive and negative faces, implying that negative faces evoke some form of relational processing. It is argued that there may be more than one kind of relational processing involved in face recognition: the 'chimeric-face effect' may reflect an initial 'holistic' processing which binds facial features into a 'Gestalt', rather than being a demonstration of the configurational processing involved in individual recognition.

11.
Research on aging and face recognition has shown age-related differences that are reflected most clearly in false-alarm errors: elderly subjects exceed young adults in falsely recognizing new faces as "old." To determine whether this difference between young and elderly subjects might vary for young versus elderly faces, an experiment was conducted in which half of the young and elderly subjects studied and recognized young and middle-aged faces, and the remainder studied and recognized middle-aged and elderly faces. Replicating prior research, age-related deficits in recognition accuracy (d') were reduced with older faces, and this effect generalized from measures of face recognition to measures of face-picture recognition. However, the age-related increase in false recognitions of faces was not affected by face age.
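As a point of reference for the d' measure mentioned above: d' is the standard signal-detection index of recognition accuracy, z(hit rate) - z(false-alarm rate), which separates sensitivity from the false-alarm tendency discussed in the abstract. A minimal sketch of the usual computation; the log-linear correction for extreme rates is an assumption, not something the abstract reports:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) prevents infinite
    z-scores when a rate is 0 or 1; this particular correction is an
    assumption, not necessarily what the original study used.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example with 40 studied (old) and 40 new faces
print(d_prime(hits=32, misses=8, false_alarms=12, correct_rejections=28))
```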

12.
Mood has varied effects on cognitive performance, including the accuracy of face recognition (Lundh & Ost, 1996). Three experiments are presented here that explored face recognition abilities in mood-induced participants. Experiment 1 demonstrated that happy-induced participants are less accurate and have a more conservative response bias than sad-induced participants in a face recognition task. Using a remember/know/guess procedure, Experiment 2 showed that sad-induced participants had more conscious recollections of faces than happy-induced participants. Additionally, sad-induced participants could recognise all faces accurately, whereas happy- and neutral-induced participants recognised happy faces more accurately than sad faces. In Experiment 3, these effects were not observed when participants learnt the faces intentionally rather than incidentally. It is suggested that happy-induced participants do not process faces as elaborately as sad-induced participants.
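The "more conservative response bias" reported above is commonly quantified with the signal-detection criterion c, where more positive values indicate a greater reluctance to call a face "old." A hedged sketch of that standard formula; the abstract does not state which bias index the authors actually used:

```python
from scipy.stats import norm

def criterion_c(hit_rate, false_alarm_rate):
    """Response bias c = -0.5 * (z(H) + z(F)); larger values mean a more
    conservative observer (fewer 'old' responses overall)."""
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))

# Illustrative values only
print(criterion_c(0.60, 0.10))   # positive -> conservative
print(criterion_c(0.95, 0.40))   # negative -> liberal
```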

13.
Three experiments are reported in which the effects of viewpoint on the recognition of distinctive and typical faces were explored. Specifically, we investigated whether generalization across views would be better for distinctive faces than for typical faces. In Experiment 1, the time to match different views of the same typical faces and the same distinctive faces was dependent on the difference between the views shown. In contrast, the accuracy and latency of correct responses on trials in which two different faces were presented were independent of viewpoint if the faces were distinctive but were view-dependent if the faces were typical. In Experiment 2 we tested participants' recognition memory for unfamiliar faces that had been studied at a single three-quarter view. Participants were presented with all face views during test. Finally, in Experiment 3, participants were tested on their recognition of unfamiliar faces that had been studied at all views. In both Experiments 2 and 3 we found an effect of distinctiveness and viewpoint but no interaction between these factors. The results are discussed in terms of a model of face representation based on inter-item similarity in which the representations are view specific.

14.
Hole GJ, George PA, Eaves K, Rasek A. Perception, 2002, 31(10): 1221-1240.
The importance of 'configural' processing for face recognition is now well established, but it remains unclear precisely what it entails. Through four experiments we attempted to clarify the nature of configural processing by investigating the effects of various affine transformations on the recognition of familiar faces. Experiment 1 showed that recognition was markedly impaired by inversion of faces, somewhat impaired by shearing or horizontally stretching them, but unaffected by vertical stretching of faces to twice their normal height. In Experiment 2 we investigated vertical and horizontal stretching in more detail, and found no effects of either transformation. Two further experiments were performed to determine whether participants were recognising stretched faces by using configural information. Experiment 3 showed that nonglobal vertical stretching of faces (stretching either the top or the bottom half while leaving the remainder undistorted) impaired recognition, implying that configural information from the stretched part of the face was influencing the process of recognition, i.e. that configural processing involves global facial properties. In Experiment 4 we examined the effects of Gaussian blurring on the recognition of undistorted and vertically stretched faces. Faces remained recognisable even when they were both stretched and blurred, implying that participants were basing their judgments on configural information from these stimuli, rather than resorting to some strategy based on local featural details. The tolerance of spatial distortions in human face recognition suggests that the configural information used as a basis for face recognition is unlikely to involve information about the absolute positions of facial features relative to each other, at least not in any simple way.
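The manipulations described above (inversion, shearing, horizontal and vertical stretching) are all affine transformations and are straightforward to reproduce on a face image. A rough illustration with NumPy/SciPy; the stretch and shear factors below are placeholders, not the parameters used in the study:

```python
import numpy as np
from scipy import ndimage

def stretch_vertically(img, factor=2.0):
    """Stretch a 2-D grayscale image to `factor` times its original height."""
    h, w = img.shape
    # Output->input mapping: output row r samples input row r / factor.
    matrix = np.array([[1.0 / factor, 0.0],
                       [0.0, 1.0]])
    return ndimage.affine_transform(img, matrix,
                                    output_shape=(int(h * factor), w))

def shear_horizontally(img, k=0.3):
    """Shear so that each row is displaced sideways in proportion to its
    vertical position (lower rows shift further right); k is an assumed
    shear factor."""
    h, w = img.shape
    matrix = np.array([[1.0, 0.0],
                       [-k, 1.0]])   # input col = output col - k * output row
    return ndimage.affine_transform(img, matrix,
                                    output_shape=(h, w + int(k * h)))

def invert(img):
    """Picture-plane inversion (180-degree rotation)."""
    return img[::-1, ::-1]
```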

15.
The results of two studies on the relationship between evaluations of trustworthiness, valence and arousal of faces are reported. In Experiment 1, valence and trustworthiness judgments of faces were positively correlated, while arousal was negatively correlated with both trustworthiness and valence. In Experiment 2, learning about faces based on their emotional expression, and the extent to which this learning is influenced by perceived trustworthiness, was investigated. Neutral faces of different models differing in trustworthiness were repeatedly associated with happy or with angry expressions, and the participants were asked to categorize each neutral face as belonging to a "friend" or to an "enemy" based on these associations. Four pairing conditions were defined in terms of the congruency between trustworthiness level and expression: trustworthy-congruent, trustworthy-incongruent, untrustworthy-congruent and untrustworthy-incongruent. Categorization accuracy during the learning phase and face evaluation after learning were measured. During learning, participants learned to categorize trustworthy and untrustworthy faces as friends or enemies with similar efficiency, and thus no effects of congruency were found. In the evaluation phase, faces of enemies were rated as more negative and arousing than those of friends, showing that learning was effective in changing the affective value of the faces. However, faces of untrustworthy models were still judged, on average, as more negative and arousing than those of trustworthy ones. In conclusion, although face trustworthiness did not influence the learning of associations between faces and positive or negative social information, it did have a significant influence on face evaluation that was manifest even after learning.

16.
Four experiments were conducted to study the nature of context effects on the perceived physical attractiveness of faces. In Experiment 1, photos of faces scaled on attractiveness were presented in sets of three, with target faces appearing in the middle flanked by two context faces. The target faces were of average attractiveness, with the context faces being either high, average, or low in attractiveness. The effect of the context was one of assimilation, rather than contrast, regardless of whether the persons in the photos were portrayed as being associated. This result was interpreted in terms of a “generalized halo effect” for judgments of the physical attractiveness of stimuli within a group. Presenting the persons of a set as friends enhanced the perceived attractiveness of the target face, but only when the context did not contain a face of low attractiveness. In Experiment 2, the assimilation effect was observed to carry over and influence ratings of the target faces several minutes after the context faces had been removed. Experiment 3 showed the assimilation effect to be robust regardless of whether the context was composed of two faces or one, but Experiment 4 showed the assimilation effect to be evident only when the context faces were presented simultaneously with the target.

17.
Facial information is processed interactively. Yet, such interactive processing has been examined for discrimination of face parts rather than complete faces. Here we assess interactive processing using a novel paradigm in which subjects discriminate complete faces. Face stimuli, which comprise unilateral facial information (hemifaces) or bilateral facial information from one face (consistent) or two different faces (inconsistent), are shown centrally in a face-matching task. If each half of a complete face is processed independently, accuracy for complete faces can be predicted by the union of accuracies for right and left hemifaces. However, accuracy exceeded this independence prediction for consistent faces (facilitation) and fell below the prediction for inconsistent faces (interference). These effects were reduced or absent for inverted faces. Our findings are consistent with reports of stronger interactive processing for upright than for inverted faces and they quantify effects of interactive processing on the discrimination of complete faces.
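The "independence prediction" described above is the standard probability-union benchmark: if the two hemifaces contribute independent chances of supporting a correct match, accuracy for the complete face should equal the probability that at least one hemiface succeeds. A minimal sketch of that benchmark; any guessing correction the authors may have applied is not given in the abstract:

```python
def union_prediction(p_left: float, p_right: float) -> float:
    """Predicted accuracy for a complete face if the two hemifaces are
    processed independently: P(left correct or right correct)."""
    return p_left + p_right - p_left * p_right

# Hypothetical example: hemiface accuracies of .70 and .65 predict .895 for
# the whole face; observed accuracy above that value suggests facilitation,
# below it suggests interference.
print(union_prediction(0.70, 0.65))
```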

18.
The role of different spatial frequency bands in face gender and expression categorization was studied in three experiments. Accuracy and reaction time were measured for unfiltered, low-pass (cut-off frequency of 1 cycle/deg) and high-pass (cut-off frequency of 3 cycles/deg) filtered faces. Filtered and unfiltered faces were equated in root-mean-squared contrast. For low-pass filtered faces, reaction times were higher than for unfiltered and high-pass filtered faces in both categorization tasks. In the expression task, these results were obtained with expressive faces presented in isolation (Experiment 1) and also with neutral-expressive dynamic sequences in which each expressive face was preceded by a briefly presented neutral version of the same face (Experiment 2). For high-pass filtered faces, different effects were observed on gender and expression categorization: while both the speed and accuracy of gender categorization were reduced compared to unfiltered faces, the efficiency of expression classification remained similar. Finally, we found no differences between expressive and non-expressive faces in the effects of spatial frequency filtering on gender categorization (Experiment 3). These results show a common role of information from the high spatial frequency band in the categorization of face gender and expression.
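Low- and high-pass stimuli of the kind described above can be approximated by filtering in the Fourier domain and then re-equating root-mean-squared (RMS) contrast. A rough sketch; the conversion from cycles/deg to cycles/image below assumes a hypothetical face image spanning 8 deg of visual angle, since the actual image size and viewing distance are not given in the abstract:

```python
import numpy as np

def filter_face(img, cutoff_cpi, keep="low"):
    """Ideal low- or high-pass filter with a cutoff in cycles per image,
    followed by RMS-contrast matching to the original image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h          # spatial frequency in cycles/image
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    mask = radius <= cutoff_cpi if keep == "low" else radius > cutoff_cpi

    spectrum = np.fft.fft2(img - img.mean())
    filtered = np.real(np.fft.ifft2(spectrum * mask))

    # Re-equate RMS contrast so filtered and unfiltered faces are comparable.
    filtered *= img.std() / (filtered.std() + 1e-12)
    return filtered + img.mean()

# Hypothetical example: if the image spans 8 deg, a 1 cycle/deg low-pass
# cutoff is 8 cycles/image and a 3 cycles/deg high-pass cutoff is 24.
face = np.random.rand(256, 256)                 # stand-in for a face image
low = filter_face(face, cutoff_cpi=8, keep="low")
high = filter_face(face, cutoff_cpi=24, keep="high")
```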

19.
In a sample of 325 college students, we examined how context influences judgments of facial expressions of emotion, using a newly developed facial affect recognition task in which emotional faces are superimposed upon emotional and neutral contexts. This research used a larger sample size than previous studies, included more emotions, varied the intensity level of the expressed emotion to avoid potential ceiling effects from very easy recognition, did not explicitly direct attention to the context, and aimed to understand how recognition is influenced by non-facial information, both situationally relevant and situationally irrelevant. Both accuracy and RT varied as a function of context. For all facial expressions of emotion other than happiness, accuracy increased when the emotion of the face and context matched, and decreased when they mismatched. For all emotions, participants responded faster when the emotion of the face and image matched and slower when they mismatched. Results suggest that the judgment of the facial expression is itself influenced by the contextual information, rather than face and context being judged independently and then combined. Additionally, the results have implications for developing models of facial affect recognition and indicate that there are factors other than the face that can influence facial affect recognition judgments.

20.
In visual search of natural scenes, differentiation of briefly fixated but task-irrelevant distractor items from incidental memory is often comparable to explicit memorization. However, many characteristics of incidental memory remain unclear, including the capacity for its conscious retrieval. Here, we examined incidental memory for faces in either upright or inverted orientation using Rapid Serial Visual Presentation (RSVP). Subjects were instructed to detect a target face in a sequence of 8–15 faces cropped from natural scene photographs (Experiment 1). If the target face was identified within a brief time window, the subject proceeded to an incidental memory task. Here, subjects used incidental memory to discriminate between a probe face (a distractor in the RSVP stream) and a novel, foil face. In Experiment 2 we reduced scene-related semantic coherency by intermixing faces from multiple scenes and contrasted incidental memory with explicit memory, a condition where subjects actively memorized each face from the sequence without searching for a target. In both experiments, we measured objective performance (Type 1 AUC) and metacognitive accuracy (Type 2 AUC), revealing sustained and consciously accessible incidental memory for upright and inverted faces. In novel analyses of face categories, we examined whether accuracy or metacognitive judgments are affected by shared semantic features (i.e., similarity in gender, race, age). Similarity enhanced the accuracy of incidental memory discriminations but did not influence metacognition. We conclude that incidental memory is sustained and consciously accessible, is not reliant on scene contexts, and is not enhanced by explicit memorization.
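For the measures mentioned above, Type 1 AUC indexes how well responses discriminate probe (old) from foil (new) faces, and Type 2 AUC indexes how well confidence tracks the correctness of those decisions. A hedged sketch using a rank-based (Mann-Whitney) AUC on illustrative rating data; the actual experiments used a probe/foil forced choice rather than the simplified old/new ratings assumed here:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
    scores_pos, scores_neg = np.asarray(scores_pos), np.asarray(scores_neg)
    greater = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (greater + 0.5 * ties) / (scores_pos.size * scores_neg.size)

# Illustrative ratings (1 = sure new ... 6 = sure old)
old_ratings = [6, 5, 4, 6, 3, 5]      # ratings given to probe (old) faces
new_ratings = [2, 3, 1, 4, 2, 3]      # ratings given to foil (new) faces

# Type 1 AUC: how well ratings separate old from new faces.
print(auc(old_ratings, new_ratings))

# Type 2 AUC: how well confidence separates correct from incorrect Type 1
# decisions (decisions taken at an assumed rating criterion of 3.5).
ratings = np.array(old_ratings + new_ratings)
is_old = np.array([1] * 6 + [0] * 6)
correct = (ratings > 3.5) == is_old
confidence = np.abs(ratings - 3.5)    # distance from criterion as confidence
print(auc(confidence[correct], confidence[~correct]))
```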
