Similar Literature
20 similar documents found (search time: 31 ms)
1.
Faces learned from multiple viewpoints are recognized better with left than right three-quarter views. This left-view superiority could be explained by perceptual experience, facial asymmetry, or hemispheric specialization. In the present study, we investigated whether left-view sequences are also more effective in recognizing same and novel views of a face. In a sequential matching task, a view sequence showing a face rotating around a left (−30°) or a right (+30°) angle, with an amplitude of 30°, was followed by a static test view with the same viewpoint as the sequence (−30° or +30°) or with a novel one (0°, +30°, or −30°). We found a superiority of left-view sequences independently of the test viewpoint, but no superiority of left over right test views. These results do not seem compatible with the perceptual experience hypothesis, which predicts superiority only for left-side test views (−30°). Also, a facial asymmetry judgement task showed no correlation between the asymmetry of individual faces and the left-view sequence superiority. A superiority of left-view sequences for novel as well as same test views argues in favour of an explanation by hemispheric specialization, because of the possible role of the right hemisphere in extracting facial identity information.

2.
Pigeons and humans were trained to discriminate between pictures of three-dimensional objects that differed in global shape. Each pair of objects was shown at two orientations that differed by a depth rotation of 90° during training. Pictures of the objects at novel depth rotations were then tested for recognition. The novel test rotations were 30°, 45°, and 90° from the nearest trained orientation and were either interpolated between the trained orientations or extrapolated outside of the training range. For both pigeons and humans, recognition accuracy and/or speed decreased as a function of distance from the nearest trained orientation. However, humans, but not pigeons, were more accurate in recognizing novel interpolated views than novel extrapolated views. The results suggest that pigeons’ recognition was based on independent generalization from each training view, whereas humans showed view-combination processes that resulted in a benefit for novel views interpolated between the training views.
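The interpolation/extrapolation distinction above can be made concrete with a minimal sketch, assuming two trained orientations 90° apart as described; the function names and specific test angles are illustrative, not from the paper:

```python
# Illustrative sketch: distance to the nearest trained orientation and
# interpolated-vs-extrapolated classification for a novel test view.
# The trained orientations (0 and 90 deg) mirror the 90-deg separation
# described above; everything else is hypothetical.

def nearest_trained_distance(test_deg, trained=(0, 90)):
    """Angular distance (deg) from a test view to the nearest trained view."""
    return min(abs(test_deg - t) for t in trained)

def is_interpolated(test_deg, trained=(0, 90)):
    """True if the test view falls between the trained orientations."""
    lo, hi = min(trained), max(trained)
    return lo < test_deg < hi

for view in (45, 120, -30):
    kind = "interpolated" if is_interpolated(view) else "extrapolated"
    print(view, nearest_trained_distance(view), kind)
```

On this sketch, a 45° test view is interpolated (45° from either trained view), while 120° and −30° are extrapolated (30° from the nearest trained view).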

3.
According to theories of emotion and attention, we are predisposed to orient rapidly toward threat. However, previous examination of attentional cueing by threat showed no enhanced capture at brief durations, a finding that may be related to the sensitivity of the manual response measure used. Here we investigated the time course of orienting attention toward fearful faces in the exogenous cueing task. Cue duration (20 ms or 100 ms) and response mode (saccadic or manual) were manipulated. In the saccade mode, both enhanced attentional capture and impaired disengagement from fearful faces were evident and limited to 20 ms, suggesting that saccadic cueing effects emerge rapidly and are short-lived. In the manual mode, fearful faces impacted only upon the disengagement component of attention at 100 ms, suggesting that manual cueing effects emerge over longer periods of time. Importantly, saccades could reveal threat biases at brief cue durations, consistent with current theories of emotion and attention.

4.
Two experiments examine a novel method of assessing face familiarity that does not require explicit identification of presented faces. Earlier research (Clutterbuck & Johnston, 2002; Young, Hay, McWeeny, Flude, & Ellis, 1985) has shown that different views of the same face can be matched more quickly for familiar than for unfamiliar faces. This study examines whether exposure to previously novel faces increases the speed with which they can be matched, thus providing a means of assessing how faces become familiar. In Experiment 1, participants viewed two sets of unfamiliar faces presented either for many short intervals or for a few long intervals. At test, previously familiar (famous) faces were matched more quickly than novel or learned faces. In addition, learned faces seen on many brief occasions were matched more quickly than novel faces or faces seen on fewer, longer occasions. However, this was observed only when participants performed “different” decision matches. In Experiment 2, the similarity between face pairs was controlled more strictly. Once again, matches were performed more quickly on familiar faces than on unfamiliar or learned items. However, matches made to learned faces were significantly faster than those made to completely novel faces, and this was now observed for both same and different match decisions. The use of this matching task as a means of tracking how unfamiliar faces become familiar is discussed.

5.
The influence of motion information and temporal associations on recognition of unfamiliar faces was investigated in two groups performing a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of the face rotating in depth; the other group saw random sequences of the same views. In one condition, participants viewed the views in rapid succession with a negligible interstimulus interval (ISI); this condition was run at three different presentation times. In another condition, participants were presented the sequence with a 1-s ISI between views. Regular sequences of views with a negligible ISI and a shorter presentation time were hypothesized to give rise to better recognition, owing to a stronger impression of face rotation. Analysis of data from 45 participants showed that a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performances associated with regular and random sequences were not significant.

6.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

7.
Viewpoint dependence in scene recognition   (cited 9 times: 0 self-citations, 9 by others)
Abstract— Two experiments investigated the viewpoint dependence of spatial memories. In Experiment 1, participants learned the locations of objects on a desktop from a single perspective and then took part in a recognition test; test scenes included familiar and novel views of the layout. Recognition latency was a linear function of the angular distance between a test view and the study view. In Experiment 2, participants studied a layout from a single view and then learned to recognize the layout from three additional training views. A final recognition test showed that the study view and the training views were represented in memory, and that latency was a linear function of the angular distance to the nearest study or training view. These results indicate that interobject spatial relations are encoded in a viewpoint-dependent manner, and that recognition of novel views requires normalization to the most similar representation in memory. These findings parallel recent results in visual object recognition.
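The normalization account described above (latency increasing linearly with angular distance to the nearest stored view) can be sketched as follows; the intercept and slope are made-up illustrative values, not estimates from the paper:

```python
# Illustrative sketch of viewpoint-dependent recognition latency:
# latency grows linearly with the angular distance from the test view
# to the nearest stored (study or training) view. Parameter values
# are hypothetical.

def angular_distance(a_deg, b_deg):
    """Smallest rotation (deg) between two views on a 0-360 circle."""
    d = abs(a_deg - b_deg) % 360
    return min(d, 360 - d)

def predicted_latency(test_view, stored_views,
                      intercept_ms=700.0, slope_ms_per_deg=2.0):
    """Predicted recognition latency (ms) under the linear normalization model."""
    nearest = min(angular_distance(test_view, v) for v in stored_views)
    return intercept_ms + slope_ms_per_deg * nearest
```

With stored views at 0° and 135°, for example, a 90° test view is normalized to the 135° view (45° away) rather than to the study view at 0°.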

8.
We investigated the effect of a regular sequence of different views, and the three-quarter view effect, on the learning of unfamiliar faces by infants. Infants aged 3-8 months were familiarized with unfamiliar female faces in either a regular condition (presenting 11 different face views from the frontal view to the left-side profile view in regular order) or a random condition (presenting the same 11 face views in random order). Following familiarization, infants were tested with a pair consisting of a familiarized and a novel female face, shown either in three-quarter view (Experiment 1) or in profile view (Experiment 2). Results showed that only 6-8-month-old infants could identify a familiarized face in the regular condition when tested with three-quarter views. In contrast, 6-8-month-old infants showed no significant novelty preference with profile views. The results suggest that a regular sequence of different face views promotes the learning of unfamiliar faces by infants over 6 months old. Moreover, our findings imply that the three-quarter view effect is present in infancy.

9.
The experiments aimed to investigate whether reduced accuracy when counting stimuli presented in rapid temporal sequence in adults with dyslexia could be explained by a sensory processing deficit, a general slowing in processing speed, or difficulties shifting attention between stimuli. To this end, the influence of the inter-stimulus interval (ISI), stimulus duration, and sequence length was evaluated in two experiments. In the first, which used skilled readers only, significantly more errors were found with long sequences when the ISI or stimulus duration was short. Experiment 2 used a wider range of ISIs and stimulus durations. Compared to skilled readers, a group with dyslexia had reduced accuracy on two-stimulus sequences when the ISI was short, but not when it was long. Although the group with dyslexia showed reduced accuracy on all short and long sequences, when performance on two-stimulus sequences was used as an index of sensory processing efficiency and controlled for, group differences were found only with stimuli of short duration. We concluded that continuous, repetitive stimulation of the same visual area can produce a capacity limitation on rapid counting tasks in all readers when ISIs or stimulus durations are short. While reduced accuracy on rapid sequential counting tasks can be explained by a sensory processing deficit when the stimulus duration is long, slower processing speed in the group with dyslexia explains the greater inaccuracy found as sequence length increases when the stimulus duration is short.

10.
This study examined the effect of attention in infants on the ERP changes occurring during the recognition of briefly presented visual stimuli. Infants at ages 4.5, 6, and 7.5 months were shown a Sesame Street movie that elicited periods of attention and inattention, and computer-generated stimuli were presented overlaid on the movie for 500 ms. One stimulus was familiar to the infants and presented frequently, a second stimulus was familiar but presented infrequently, and a set of 14 novel stimuli were presented infrequently. An ERP component labeled the 'Nc' (Negative Central, about 450-550 ms after stimulus onset) was larger during attention than inattention and increased in magnitude across the three testing ages during attention. Late slow waves in the ERP (from 1000 to 2000 ms post-stimulus onset) consisted of a positive slow wave in response to the infrequent familiar stimulus at all three testing ages. The late slow wave in response to the infrequent novel stimulus during attention changed from a positive slow wave for the 4.5-month-old infants to a positive-negative slow wave for the 6-month-old infants and a negative slow wave for the 7.5-month-old infants. These results show that attention facilitates the brain response during infant recognition memory and that developmental changes in recognition memory are closely related to changes in attention.

11.
In two experiments, prime face stimuli with an emotional or a neutral expression were presented individually for 25 to 125 ms, either in foveal or parafoveal vision; following a mask, a probe face or a word label appeared for recognition. Accurate detection and sensitivity (A') were higher for angry, happy, and sad faces than for nonemotional (neutral) or novel (scheming) faces at short exposure times (25-75 ms), in both the foveal and the parafoveal field, and with both the probe face and the probe word. These results indicate that there is a low perceptual threshold for unambiguous emotional faces, which are especially likely to be detected both within and outside the focus of attention; and that this facilitated detection involves processing of the affective meaning of faces, not only discrimination of formal visual features.
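The sensitivity index A' used above is the standard nonparametric measure from signal detection theory; a minimal sketch of its computation (Pollack and Norman's formula, assuming hit and false-alarm rates strictly between 0 and 1):

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity A'. Assumes 0 < rates < 1.
    0.5 means chance performance; 1.0 means perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form for the below-chance case
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, a hit rate of .80 against a false-alarm rate of .20 gives A' = .875, while equal hit and false-alarm rates give the chance value of .5.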

12.
Viewpoint-dependent recognition of familiar faces   (cited 5 times: 0 self-citations, 5 by others)
Troje NF, Kersten D. Perception, 1999, 28(4), 483-487
The question of whether object representations in the human brain are object-centered or viewer-centered has motivated a variety of experiments with divergent results. A key issue concerns the visual recognition of objects seen from novel views. If recognition performance depends on whether a particular view has been seen before, this can be interpreted as evidence for a viewer-centered representation. Earlier experiments used unfamiliar objects to give the experimenter complete control over the observer's previous experience with the object. In this study, we tested whether human recognition shows viewpoint dependence for the highly familiar faces of well-known colleagues and for the observer's own face. We found that observers are poorer at recognizing their own profile, whereas there is no difference in response time between frontal and profile views of other faces. This result shows that extensive experience and familiarity with one's own face is not sufficient to produce viewpoint invariance. Our result provides strong evidence for viewer-centered representations in human visual recognition, even for highly familiar objects.

13.
Lobmaier JS, Mast FW. Perception, 2007, 36(11), 1660-1673
It has been suggested that, as a result of expertise, configural information plays a predominant role in face processing. We investigated this idea using novel and learned faces. In experiment 1, sixteen participants matched two subsequently presented blurred or scrambled faces, which could be either upright or inverted, in a sequential same-different matching task. Blurring hampers featural information, whilst scrambling disrupts configural information. Each face was unfamiliar to the participants and was presented for 1000 ms. An ANOVA on the d' values revealed a significant advantage for scrambled faces. In experiment 2, fourteen participants were tested with the same design, except that the second face was always intact. Again, the ANOVA on the d' values revealed a significant advantage for scrambled faces. In experiment 3, half of the faces were learned in a familiarisation block prior to the experiment. The ANOVA on these d' values revealed a significant interaction of familiarity and condition, showing that blurred stimuli were better recognised when the faces were familiar. These results suggest that recognition of novel faces, compared to learned faces, relies relatively more on the processing of featural information. In the course of familiarisation the importance of configural information increases.
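The d' values analyzed above come from signal detection theory: d' = z(hit rate) − z(false-alarm rate). A minimal sketch using only the standard library; the log-linear correction for extreme rates is my own choice for the sketch, not necessarily what the authors used:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' from raw same-different counts.
    A log-linear correction (add 0.5 to each cell) keeps z finite
    when a rate would otherwise be exactly 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(h) - z(f)
```

With equal hit and false-alarm rates, d' is 0 (no sensitivity); higher hit rates against lower false-alarm rates yield larger d'.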

14.
To study links between rapid ERP responses to fearful faces and conscious awareness, a backward‐masking paradigm was employed where fearful or neutral target faces were presented for different durations and were followed by a neutral face mask. Participants had to report target face expression on each trial. When masked faces were clearly visible (200 ms duration), an early frontal positivity, a later more broadly distributed positivity, and a temporo‐occipital negativity were elicited by fearful relative to neutral faces, confirming findings from previous studies with unmasked faces. These emotion‐specific effects were also triggered when masked faces were presented for only 17 ms, but only on trials where fearful faces were successfully detected. When masked faces were shown for 50 ms, a smaller but reliable frontal positivity was also elicited by undetected fearful faces. These results demonstrate that early ERP responses to fearful faces are linked to observers' subjective conscious awareness of such faces, as reflected by their perceptual reports. They suggest that frontal brain regions involved in the construction of conscious representations of facial expression are activated at very short latencies.

15.
Studies on face recognition have shown that observers are faster and more accurate at recognizing faces learned from dynamic sequences than those learned from static snapshots. Here, we investigated whether different learning procedures mediate the advantage for dynamic faces across different spatial frequencies. Observers learned two faces—one dynamic and one static—either in depth (Experiment 1) or using a more superficial learning procedure (Experiment 2). They had to search for the target faces in a subsequent visual search task. We used high-spatial frequency (HSF) and low-spatial frequency (LSF) filtered static faces during visual search to investigate whether the behavioural difference is based on encoding of different visual information for dynamically and statically learned faces. Such encoding differences may mediate the recognition of target faces in different spatial frequencies, as HSF may mediate featural face processing whereas LSF mediates configural processing. Our results show that the nature of the learning procedure alters how observers encode dynamic and static faces, and how they recognize those learned faces across different spatial frequencies. That is, these results point to a flexible usage of spatial frequencies tuned to the recognition task.

16.
We used a fully immersive virtual reality environment to study whether actively interacting with objects would affect subsequent recognition, compared with passively observing the same objects. We found that when participants learned object structure by actively rotating the objects, the objects were recognized faster during a subsequent recognition task than when object structure was learned through passive observation. We also found that participants focused their study time during active exploration on a limited number of object views, while ignoring other views. Overall, our results suggest that allowing active exploration of an object during initial learning can facilitate recognition of that object, perhaps owing to the control that the participant has over the object views upon which they can focus. The virtual reality environment is ideal for studying such processes, allowing realistic interaction with objects while maintaining experimenter control.

17.
Objects are best recognized from so-called “canonical” views. The characteristics of canonical views of arbitrary objects have been qualitatively described using a variety of different criteria, but little is known regarding how these views might be acquired during object learning. We address this issue, in part, by examining the role of object motion in the selection of preferred views of novel objects. Specifically, we adopt a modeling approach to investigate whether or not the sequence of views seen during initial exposure to an object contributes to observers’ preferences for particular images in the sequence. In two experiments, we exposed observers to short sequences depicting rigidly rotating novel objects and subsequently collected subjective ratings of view canonicality (Experiment 1) and recall rates for individual views (Experiment 2). Given these two operational definitions of view canonicality, we attempted to fit both sets of behavioral data with a computational model incorporating 3-D shape information (object foreshortening), as well as information relevant to the temporal order of views presented during training (the rate of change for object foreshortening). Both sets of ratings were reasonably well predicted using only 3-D shape; the inclusion of terms that capture sequence order improved model performance significantly.
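A shape-only baseline of the modeling approach described above can be sketched as a one-predictor least-squares fit (canonicality rating as a linear function of object foreshortening); the paper's full model additionally includes a rate-of-change term, and all data and names here are fabricated for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed-form solution).
    Returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical example: foreshortening scores vs. canonicality ratings.
foreshortening = [0.1, 0.3, 0.5, 0.7]
ratings = [2.0, 3.0, 4.0, 5.0]
a, b = fit_line(foreshortening, ratings)  # intercept, slope
```

Extending this baseline with a second predictor (the rate of change of foreshortening) is what, per the abstract, significantly improved the model's fit.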

18.
Liu T. Perception, 2007, 36(9), 1320-1333
How humans recognize objects remains a contentious issue in current research on high-level vision. Here, I test the proposal by Wallis and Bülthoff (1999, Trends in Cognitive Sciences, 3, 22-31) that object representations can be learned through temporal association of multiple views of the same object. Participants first studied image sequences of novel, three-dimensional objects in a study block. On each trial, the images came from either an orderly sequence of depth-rotated views of the same object (SS), a scrambled sequence of those views (SR), or a sequence of different objects (RR). Recognition memory was assessed in a following test block. A within-object advantage was consistently observed: greater accuracy in the SR than in the RR condition in all four experiments, and greater accuracy in the SS than in the RR condition in two experiments. Furthermore, spatiotemporal coherence did not produce better recognition than temporal coherence alone (similar or lower accuracy in the SS than in the SR condition). These results suggest that the visual system can use temporal regularity to build invariant object representations via a temporal-association mechanism.

19.
Using temporal bisection and temporal generalization paradigms, this study examined the effect of painful facial expressions on duration perception at sub-second and supra-second intervals. With the bisection task, painful expressions significantly lengthened subjective duration in both ranges, whereas with the generalization task they had an effect only in the supra-second range. Given the integrative model of temporal segmentation and the characteristics of the two paradigms, these results suggest that the mechanism by which painful expressions influence duration perception may differ between ranges: below one second the effect operates mainly through arousal, whereas above one second it may be jointly modulated by arousal and attention.

20.
Three-quarter views of faces promote better recognition memory for previously unfamiliar faces than do full-face views. This paper reports experiments that examine the possible basis of the effect and, in particular, whether it reflects some ‘canonical’ role for the 3/4 view of a face. Experiment 1 showed no advantage of 3/4 views over full-face views when the task was to decide whether or not each of a series of faces was that of a highly familiar colleague. In Experiment 2, a sequential matching task was used in which subjects had to respond positively if both members of a pair of faces were of the same person. When the faces were highly familiar to the subjects, there was no evidence of an advantage for a 3/4 view in the matching task: three-quarter and full-face views led to equivalent performance, though profiles produced decrements in performance. When the same faces were shown to subjects who were unfamiliar with them, 3/4 views did lead to faster responses on same trials than full-face views, though profiles again proved difficult. Thus a 3/4 view advantage appeared only where the faces were unfamiliar and the task had to be performed at the level of visual matching. It appears that the 3/4 view advantage may be obtained only when the task involves explicit matching between test views and remembered target photographs, rather than reflecting any more fundamental properties of the representations used to recognize highly familiar faces.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号