Similar Articles
20 similar articles found (search time: 15 ms)
1.
The authors trained pigeons to discriminate images of human faces that displayed: (a) a happy or a neutral expression or (b) a man or a woman. After training the pigeons, the authors used a new procedure called Bubbles to pinpoint the features of the faces that were used to make these discriminations. Bubbles revealed that the features used to discriminate happy from neutral faces were different from those used to discriminate male from female faces. Furthermore, the features that pigeons used to make each of these discriminations overlapped those used by human observers in a companion study (F. Gosselin & P.G. Schyns, 2001). These results show that the Bubbles technique can be effectively applied to nonhuman animals to isolate the functional features of complex visual stimuli.
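The core of the Bubbles procedure is to reveal a stimulus only through randomly placed Gaussian apertures and then relate the sampled locations to response accuracy. A minimal pure-Python sketch of the sampling step, assuming a grayscale image stored as a 2-D list; the number of bubbles and the aperture width `sigma` are illustrative parameters, not values from the abstract:

```python
import random, math

def bubbles_mask(height, width, n_bubbles=5, sigma=8.0):
    """Build a mask of randomly placed Gaussian apertures ("bubbles")."""
    mask = [[0.0] * width for _ in range(height)]
    centers = [(random.uniform(0, height - 1), random.uniform(0, width - 1))
               for _ in range(n_bubbles)]
    for y in range(height):
        for x in range(width):
            for cy, cx in centers:
                g = math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
                mask[y][x] = max(mask[y][x], g)  # overlapping apertures combine by max
    return mask

def apply_bubbles(image, mask, background=0.5):
    """Reveal the image only inside the apertures; elsewhere show background."""
    return [[mask[y][x] * px + (1 - mask[y][x]) * background
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]
```

Across many such trials, the locations whose exposure correlates with correct responses form a classification image of the diagnostic features.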

2.
The 'Bubbles' technique (Gosselin, F. & Schyns, P.G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261-2271) has been widely used to reveal the information adults use to make perceptual categorizations. We present, for the first time, an adapted form of Bubbles, suitable for use with young infants.

3.
Studies on face recognition have shown that observers are faster and more accurate at recognizing faces learned from dynamic sequences than those learned from static snapshots. Here, we investigated whether different learning procedures mediate the advantage for dynamic faces across different spatial frequencies. Observers learned two faces—one dynamic and one static—either in depth (Experiment 1) or using a more superficial learning procedure (Experiment 2). They had to search for the target faces in a subsequent visual search task. We used high-spatial frequency (HSF) and low-spatial frequency (LSF) filtered static faces during visual search to investigate whether the behavioural difference is based on encoding of different visual information for dynamically and statically learned faces. Such encoding differences may mediate the recognition of target faces in different spatial frequencies, as HSF may mediate featural face processing whereas LSF mediates configural processing. Our results show that the nature of the learning procedure alters how observers encode dynamic and static faces, and how they recognize those learned faces across different spatial frequencies. That is, these results point to a flexible usage of spatial frequencies tuned to the recognition task.

4.
Despite the complexity and diversity of natural scenes, humans are very fast and accurate at identifying basic-level scene categories. In this paper we develop a new technique (based on Bubbles, Gosselin & Schyns, 2001a; Schyns, Bonnar, & Gosselin, 2002) to determine some of the information requirements of basic-level scene categorizations. Using 2400 scenes from an established scene database (Oliva & Torralba, 2001), the algorithm randomly samples the Fourier coefficients of the phase spectrum. Sampled Fourier coefficients retain their original phase while the phase of nonsampled coefficients is replaced with that of white noise. Observers categorized the stimuli into 8 basic-level categories. The location of the sampled Fourier coefficients leading to correct categorizations was recorded per trial. Statistical analyses revealed the major scales and orientations of the phase spectrum that observers used to distinguish scene categories.
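The phase-sampling step described above can be illustrated in one dimension: take a Fourier transform, keep the original phase for a random subset of coefficients, and give the remaining coefficients a random (white-noise) phase while leaving every amplitude untouched. A sketch in pure Python, assuming a 1-D signal and an illustrative `keep_prob` parameter; the paper itself works on 2-D scene images:

```python
import cmath, math, random

def dft(signal):
    """Naive discrete Fourier transform of a real-valued sequence."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(coeffs):
    """Inverse DFT; returns the real part of the reconstruction."""
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def sample_phase(signal, keep_prob=0.3, rng=random):
    """Keep the original phase for a random subset of coefficients;
    replace the rest with white-noise phase. Amplitudes are preserved."""
    out = []
    for c in dft(signal):
        if rng.random() < keep_prob:
            out.append(c)  # sampled: original phase kept
        else:
            theta = rng.uniform(-math.pi, math.pi)
            out.append(abs(c) * cmath.exp(1j * theta))  # phase randomized
    return idft(out)
```

With `keep_prob=1.0` the signal is reconstructed exactly; with `keep_prob=0.0` only the amplitude spectrum survives, which is the fully phase-scrambled control.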

5.
We adapted the Bubbles procedure [Vis. Res. 41 (2001) 2261] to examine the effective use of information during the first 282 ms of face identification. Ten participants each viewed a total of 5100 faces sub-sampled in space–time. We obtained a clear pattern of effective use of information: the eye on the left side of the image became diagnostic between 47 and 94 ms after the onset of the stimulus; after 94 ms, both eyes were used effectively. This preference for the eyes increased with practice, and was not solely due to the informativeness of the eyes for the task at hand. The bias for the eye on the left side of the image is explained in terms of hemispheric specialization. Although there were individual differences, most participants exhibited this pattern of effective use of information. An intriguing finding is that most participants displayed a clear sinusoidal modulation of effective use of attention through time with a frequency of about 10.6 Hz.

6.
Two experiments examine a novel method of assessing face familiarity that does not require explicit identification of presented faces. Earlier research (Clutterbuck & Johnston, 2002; Young, Hay, McWeeny, Flude, & Ellis, 1985) has shown that different views of the same face can be matched more quickly for familiar than for unfamiliar faces. This study examines whether exposure to previously novel faces increases the speed with which they can be matched, thus providing a means of assessing how faces become familiar. In Experiment 1, participants viewed two sets of unfamiliar faces presented either for many short intervals or for a few long intervals. At test, previously familiar (famous) faces were matched more quickly than novel faces or learned faces. In addition, learned faces seen on many brief occasions were matched more quickly than the novel faces or faces seen on fewer, longer occasions. However, this was only observed when participants performed "different" decision matches. In Experiment 2, the similarity between face pairs was controlled more strictly. Once again, matches were performed on familiar faces more quickly than on unfamiliar or learned items. However, matches made to learned faces were significantly faster than those made to completely novel faces. This was now observed for both same and different match decisions. The use of this matching task as a means of tracking how unfamiliar faces become familiar is discussed.

7.
The representation underlying the identification and classification of semirealistic line drawings taken from a computer model of the face was investigated by using a speeded classification task and an identification task. These data were analyzed by using a multidimensional extension of signal detection theory, within which varieties of perceptual interactions between dimensions within and across stimuli can be characterized. The dimensions of interest here were eye separation, nose length, and mouth width. The response time and accuracy data from the speeded classification task suggest that processing of a given feature did depend on whether other features were present or absent, but given that other features were present, the results strongly support separability (a macrolevel, across-stimulus form of invariance) for all pairs of facial dimensions used. This separability was confirmed by the subsequent identification task. Owing to its greater resolution, the identification task can reveal interactions that might exist at more microlevels of processing. In fact, the identification data did indicate the presence of perceptual dependence between facial dimensions within a stimulus when the dimensions that were varied were close in spatial proximity (i.e., eye separation and nose length). Within the theoretical framework, perceptual dependence can be interpreted as correlated noise between otherwise separate channels (and hence, is logically distinct from separability). This dependence was greatly reduced for dimensions that were more distant (eyes and mouth). The relation between these results and the configural effects that have been observed with faces as stimuli in other studies is discussed.

9.
A smile is visually highly salient and grabs attention automatically. We investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent non-happy (fearful, neutral, etc.) eyes, thus producing blended expressions, or it appeared in intact faces with genuine expressions. Attention to the eye region was spatially cued while foveal vision of the mouth was blocked by gaze-contingent masking. Participants judged whether the eyes were happy or not. Results indicated that the smile biased the evaluation of the eye expression: The same non-happy eyes were more likely to be judged as happy and categorized more slowly as not happy in a face with a smiling mouth than in a face with a non-smiling mouth or with no mouth. This bias occurred when the mouth and the eyes appeared simultaneously and aligned, but also to some extent when they were misaligned and when the mouth appeared after the eyes. We conclude that the highly salient smile projects to other facial regions, thus influencing the perception of the eye expression. Projection serves spatial and temporal integration of face parts and changes.

10.
Barton JJ, Deepak S, Malik N. Perception, 2003, 32(1), 15-28
We tested detection of changes to eye position, eye color (brightness), mouth position, and mouth color in frontal views of faces. Two faces were presented sequentially for 555 ms each, with a blank screen of 120 ms separating the two. Faces were presented either both upright or both inverted. Measures of detection (d') were calculated for several different degrees of change for each of the four dimensions of change. We first compared results to an earlier experiment that used an oddity design, in which subjects indicated which of three simultaneously viewed and otherwise identical faces had been altered on one of these four dimensions. Subjects in both of these experiments were partially cued, in that they knew the four possible types of changes that could occur on a given trial. The change-detection results correlated well with the oddity data. They confirmed that face inversion had little effect upon detection of changes in eye color, a moderate effect upon detection of eye-position or mouth-color changes, and caused a drastic reduction in the detection of mouth-position changes. An experiment comparing uncued and fully cued subjects showed that cueing significantly improved detection of feature color changes, but there was little difference between upright and inverted faces. Full cueing eliminated all effects of inversion. Compared to partial cueing, changes in mouth color were poorly detected by uncued subjects. Last, an experiment that reduced the frequency of the base (unaltered) face from 75% to 40% showed that increased short-term familiarity decreased the detection of eye changes and increased the detection of mouth changes, regardless of face orientation and the type of change made (color or position).
We conclude that uncued subjects encode the spatial relations of features more than the colors of features, that mouth color in particular is not considered a relevant dimension for encoding, and that familiarization redistributes attention from more to less salient facial regions. Inversion effects are not simply an exaggeration of the salience effects revealed by withdrawing cueing, but represent an interaction of spatial encoding with salience, in that the greatest inversion effects occur for spatial shifts in less salient facial regions, and can be eliminated through the use of focused attention.
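The detection measure used in this study, d', is the standard signal-detection sensitivity index: the z-transformed hit rate minus the z-transformed false-alarm rate. A small sketch using only the standard library; the log-linear correction for extreme rates is a common convention, not something the abstract specifies:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

Chance performance (hit rate equal to false-alarm rate) yields d' = 0, and larger positive values indicate better detection of the change.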

11.
Given that all faces share the same set of features—two eyes, a nose, and a mouth—that are arranged in similar configuration, recognition of a specific face must depend on our ability to discern subtle differences in its featural and configural properties. An enduring question in the face-processing literature is whether featural or configural information plays a larger role in the recognition process. To address this question, the face dimensions task was designed, in which the featural and configural properties in the upper (eye) and lower (mouth) regions of a face were parametrically and independently manipulated. In a same–different task, two faces were sequentially presented and tested in their upright or in their inverted orientation. Inversion disrupted the perception of featural size (Exp. 1), featural shape (Exp. 2), and configural changes in the mouth region, but it had relatively little effect on the discrimination of featural size and shape and configural differences in the eye region. Inversion had little effect on the perception of information in the top and bottom halves of houses (Exp. 3), suggesting that the lower-half impairment was specific to faces. Spatial cueing to the mouth region eliminated the inversion effect (Exp. 4), suggesting that participants have a bias to attend to the eye region of an inverted face. The collective findings from these experiments suggest that inversion does not differentially impair featural or configural face perceptions, but rather impairs the perception of information in the mouth region of the face.

12.
Face recognition involves both the processing of information relating to individual features (e.g., eyes, nose, mouth, hair; featural processing) and the processing of the spatial relations between these features (configural processing). In a sequential matching task, participants had to decide whether two faces that differed in either featural or relational aspects were identical or different. To test for the microgenesis of face recognition (the development of processing onsets), presentation times of the backward-masked target face were varied (32, 42, 53, 63, 74, 84, or 94 msec). To test for specific processing onsets across facial areas, both featurally and relationally modified faces were changed in one facial area (eyes, nose, or mouth), two areas, or all three. For featural processing, the eyes and mouth showed an early onset at 32 msec of presentation time, whereas the nose showed a late onset. For relationally differing faces, all onsets were delayed.

13.
A pool of 128 schematic faces was generated by varying brow, mouth, nose, eye height, and eye shape. Ratings of meaningfulness (how easy it was to find an adjective describing the face) and meaning (the adjective given to the face) were mainly a function of brow and mouth. When brow and mouth were horizontal, faces were least meaningful and neutral in expression; if either brow or mouth moved from the horizontal, faces increased in meaningfulness, meaning being dependent on the moving feature; when both brow and mouth moved from the horizontal, faces were most meaningful, and their expression was a function of the combination of brow and mouth.
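A pool like this is a full factorial crossing of feature levels. The abstract does not state how many levels each feature had; the sketch below assumes one factorization consistent with 128 faces (4 brow x 4 mouth x 2 nose x 2 eye-height x 2 eye-shape), with hypothetical level names:

```python
from itertools import product

# Hypothetical feature levels; the abstract only says 128 faces were
# generated by varying these five features. 4 * 4 * 2 * 2 * 2 = 128 is
# one factorization consistent with that count.
FEATURES = {
    "brow":       ["down", "flat-low", "flat-high", "up"],
    "mouth":      ["frown", "flat-low", "flat-high", "smile"],
    "nose":       ["short", "long"],
    "eye_height": ["low", "high"],
    "eye_shape":  ["round", "narrow"],
}

def face_pool():
    """Enumerate every combination of feature levels as a dict per face."""
    names = list(FEATURES)
    return [dict(zip(names, combo))
            for combo in product(*(FEATURES[n] for n in names))]
```

Enumerating the full crossing guarantees that each feature varies independently of the others, which is what lets the ratings be attributed to brow and mouth specifically.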

14.
The face dimensions task was used to examine the effects of information type and location on face processing in children aged 7 to 17. The task varied the type of facial information (featural vs. configural) and its location (eye vs. mouth region), and required children to judge simultaneously presented face images as "same" or "different". Children's face-processing performance improved with age, peaking at 13-14 years; featural processing was better than configural processing, and a difference between eye-region and mouth-region processing emerged from age 11-12; information type and location jointly shaped face processing: children showed no significant difference between featural and configural processing of the mouth region, whereas featural processing of the eye region was significantly better than configural processing of the eye region.

15.
The role of holistic or parts-based processing in face identification has been explored mostly with neutral faces. In the current study, we investigated the nature of processing (holistic vs. parts) in recognition memory for faces with emotional expressions. There were two phases in this experiment: a learning phase and a test phase. In the learning phase, participants learned face–name associations for happy, neutral, and sad faces. The test phase consisted of a two-choice recognition test (whole face, eyes, or mouth) given either immediately or after a 24-hour delay. Results indicate that emotional faces were remembered better than neutral faces and performance was better with whole faces than with isolated parts. Performance in immediate and delayed recognition interacted with emotional information: sad eyes and happy mouths were remembered better in the delayed recognition condition. These results suggest that, in addition to holistic processing, specific parts–emotion combinations play a critical role in delayed recognition memory.

16.
For face recognition, observers utilize both shape and texture information. Here, we investigated the relative diagnosticity of shape and texture for delayed matching of familiar and unfamiliar faces (Experiment 1) and identifying familiar and newly learned faces (Experiment 2). Within each familiarity condition, pairs of 3D‐captured faces were morphed selectively in either shape or texture in 20% steps, holding the respective other dimension constant. We also assessed participants' individual face‐processing skills via the Bielefelder Famous Faces Test (BFFT), the Glasgow Face Matching Test, and the Cambridge Face Memory Test (CFMT). Using multilevel model analyses, we examined probabilities of same versus different responses (Experiment 1) and of original identity versus other/unknown identity responses (Experiment 2). Overall, texture was more diagnostic than shape for both delayed matching and identification, particularly so for familiar faces. On top of these overall effects, above‐average BFFT performance was associated with enhanced utilization of texture in both experiments. Furthermore, above‐average CFMT performance coincided with slightly reduced texture dominance in the delayed matching task (Experiment 1) and stronger sensitivity to morph‐based changes overall, that is irrespective of morph type, in the face identification task (Experiment 2). Our findings (1) show the disproportionate importance of texture information for processing familiar face identity and (2) provide further evidence that familiar and unfamiliar face identity perception are mediated by different underlying processes.
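Morphing in fixed percentage steps along one dimension while holding the other constant amounts to linear interpolation of one feature vector between the two faces. A schematic sketch, assuming faces are represented as shape and texture vectors; the vector representation and the `selective_morph` helper are illustrative, not the authors' actual pipeline:

```python
def morph(a, b, alpha):
    """Linear interpolation between two feature vectors (alpha in [0, 1])."""
    return [(1 - alpha) * x + alpha * y for x, y in zip(a, b)]

def morph_series(a, b, step=0.2):
    """Morph levels in fixed steps, e.g. 0%, 20%, ..., 100% for step=0.2."""
    n = round(1 / step)
    return [morph(a, b, k * step) for k in range(n + 1)]

def selective_morph(shape_a, texture_a, shape_b, texture_b,
                    alpha, dimension="shape"):
    """Morph only one dimension toward face B, holding the other at face A."""
    if dimension == "shape":
        return morph(shape_a, shape_b, alpha), texture_a
    return shape_a, morph(texture_a, texture_b, alpha)
```

Holding one dimension constant while stepping the other is what makes the two morph types separable in the analysis: any change in responses can be attributed to the stepped dimension alone.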

17.
The own-race bias (ORB) in face recognition can be interpreted as a failure to generalize expert perceptual encoding developed for own-race faces to other-race faces. Further, black participants appear to use different features to describe faces from those used by white participants (Shepherd & Deregowski, 1981). An experiment is reported where the size of the ORB was assessed using a standard face recognition procedure. Four groups were tested at two time intervals. One group received a training regime involving learning to distinguish faces that varied only on their chin, cheeks, nose, and mouth. Three control groups did not receive this training. The ORB, present prior to training, was reduced after the critical perceptual training. It is concluded that the ORB is a consequence of a failure of attention being directed to those features of other race faces that are useful for identification.

18.
The face inversion effect is the finding that inverted faces are more difficult to recognize than other inverted objects. The present study explored the possibility that eye movements have a role in producing the face inversion effect. In Experiment 1, we demonstrated that the faces used here produce a robust face inversion effect when compared with another homogenous set of objects (antique radios). In Experiment 2, participants' eye movements were monitored while they learned a set of faces and during a recognition test. Although we clearly found a face inversion effect, the same features of a face were fixated during the learning and recognition test faces, whether the face was right side up or upside down. Thus, the face inversion effect is not a result of a different pattern of eye movements during the viewing of the face.

19.
Using traditional face perception paradigms the current study explores unfamiliar face processing in two neurodevelopmental disorders. Previous research indicates that autism and Williams syndrome (WS) are both associated with atypical face processing strategies. The current research involves these groups in an exploration of feature salience for processing the eye and mouth regions of unfamiliar faces. The tasks specifically probe unfamiliar face matching by using (a) upper or lower face features, (b) the Thatcher illusion, and (c) featural and configural face modifications to the eye and mouth regions. Across tasks, individuals with WS mirror the typical pattern of performance, with greater accuracy for matching faces using the upper than using the lower features, susceptibility to the Thatcher illusion, and greater detection of eye than mouth modifications. Participants with autism show a generalized performance decrement alongside atypicalities, deficits for utilizing the eye region, and configural face cues to match unfamiliar faces. The results are discussed in terms of feature salience, structural encoding, and the phenotypes typically associated with these neurodevelopmental disorders.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号