Similar Literature
 Found 20 similar documents (search time: 31 ms)
1.
For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

2.
Harman KL, Humphrey GK. Perception, 1999, 28(5): 601-615
When we look at an object as we move or the object moves, our visual system is presented with a sequence of different views of the object. It has been suggested that such regular temporal sequences of views of objects contain information that can aid in the process of representing and recognising objects. We examined whether seeing a series of perspective views of objects in sequence led to more efficient recognition than seeing the same views of objects but presented in a random order. Participants studied images of 20 novel three-dimensional objects rotating in depth under one of two study conditions. In one study condition, participants viewed an ordered sequence of views of objects that was assumed to mimic important aspects of how we normally encounter objects. In the other study condition, participants were presented the same object views, but in a random order. It was expected that studying a regular sequence of views would lead to more efficient recognition than studying a random presentation of object views. Although subsequent recognition accuracy was equal for the two groups, there were differences in reaction time between the two study groups. Specifically, the random study group responded reliably faster than the sequence study group. Some possible encoding differences between the two groups are discussed.

3.
Four experiments examined whether or not exposure to two views (A and B) of a novel object improves generalization to a third view (C) through view combination on tasks that required symmetry or recognition memory decisions. The results of Experiment 1 indicated that exposure to either View A or View B alone produced little or no generalization to View C on either task. The results of Experiment 2 indicated that exposure to both View A and View B did improve generalization to View C, but only for symmetrical objects. Experiment 3 replicated this generalization advantage for symmetrical but not asymmetrical objects, when objects were well learned at study. The results of Experiment 4 showed that Views A and B did not have to be presented consecutively to facilitate responses to View C. Together, the pattern of results suggests that generalization to novel views does occur through view combination of temporally separated views, but it is more likely to be observed with symmetrical objects.

4.
Humans and pigeons were trained to discriminate between 2 views of actual 3-D objects or their photographs. They were tested on novel views that were either within the closest rotational distance between the training views (interpolated) or outside of that range (extrapolated). When training views were 60 degrees apart, pigeons, but not humans, recognized novel views of actual objects better than their pictures. Further, both species recognized interpolated views of both stimulus types better than extrapolated views, but a single distinctive geon enhanced recognition of novel views only for humans. When training views were 90 degrees apart, pigeons recognized interpolated views better than extrapolated views with actual objects but not with photographs. Thus, pigeons may represent actual objects differently than their pictures.

5.
The visual system has been suggested to integrate different views of an object in motion. We investigated differences in the way moving and static objects are represented by testing for priming effects to previously seen ("known") and novel object views. We showed priming effects for moving objects across image changes (e.g., mirror reversals, changes in size, and changes in polarity) but not over temporal delays. The opposite pattern of results was observed for objects presented statically; that is, static objects were primed over temporal delays but not across image changes. These results suggest that representations for moving objects are: (1) updated continuously across image changes, whereas static object representations generalize only across similar images, and (2) more short-lived than static object representations. These results suggest two distinct representational mechanisms: a static object mechanism rather spatially refined and permanent, possibly suited for visual recognition, and a motion-based object mechanism more temporary and less spatially refined, possibly suited for visual guidance of motor actions.

6.
Participants viewed objects in the central visual field and then named either same or different depth-orientation views of these objects presented briefly in the left or the right visual field. The different-orientation views contained either the same or a different set of parts and relations. Viewpoint-dependent priming was observed when test views were presented directly to the right hemisphere (RH), but not when test views were presented directly to the left hemisphere (LH). Moreover, this pattern of results did not depend on whether the same or a different set of parts and relations could be recovered from the different-orientation views. Results support the theory that a specific subsystem operates more effectively than an abstract subsystem in the RH and stores objects in a manner that produces viewpoint-dependent effects, whereas an abstract subsystem operates more effectively than a specific subsystem in the LH and does not store objects in a viewpoint-dependent manner.

7.
Objects are best recognized from so-called “canonical” views. The characteristics of canonical views of arbitrary objects have been qualitatively described using a variety of different criteria, but little is known regarding how these views might be acquired during object learning. We address this issue, in part, by examining the role of object motion in the selection of preferred views of novel objects. Specifically, we adopt a modeling approach to investigate whether or not the sequence of views seen during initial exposure to an object contributes to observers’ preferences for particular images in the sequence. In two experiments, we exposed observers to short sequences depicting rigidly rotating novel objects and subsequently collected subjective ratings of view canonicality (Experiment 1) and recall rates for individual views (Experiment 2). Given these two operational definitions of view canonicality, we attempted to fit both sets of behavioral data with a computational model incorporating 3-D shape information (object foreshortening), as well as information relevant to the temporal order of views presented during training (the rate of change for object foreshortening). Both sets of ratings were reasonably well predicted using only 3-D shape; the inclusion of terms that capture sequence order improved model performance significantly.

8.
9.
Four experiments examined children's inferences about the relation between objects' internal parts and their causal properties. In Experiment 1, 4-year-olds recognized that objects with different internal parts had different causal properties, and that those causal properties transferred if the internal part moved to another object. In Experiment 2, 4-year-olds made inferences from an object's internal parts to its causal properties without being given verbal labels for objects or being shown that insides and causal properties covaried. Experiment 3 found that 4-year-olds chose an object with the same internal part over one with the same external property when asked which object had the same causal property as the target (which had both the internal part and external property). Finally, Experiment 4 demonstrated that 4-year-olds made similar inferences from causal properties to internal parts, but 3-year-olds relied more on objects' external perceptual appearance. These results suggest that by the age of 4, children have developed an understanding of a relation between an artifact's internal parts and its causal properties.

10.
In an earlier report (Harman, Humphrey, & Goodale, 1999), we demonstrated that observers who actively rotated three-dimensional novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In Experiment 1 of the present study we showed that compared to passive viewing, active exploration of three-dimensional object structure led to faster performance on a "mental rotation" task involving the studied objects. In addition, we examined how much time observers concentrated on particular views during active exploration. As we found in the previous report, they spent most of their time looking at the "side" and "front" views ("plan" views) of the objects, rather than the three-quarter or intermediate views. This strong preference for the plan views of an object led us to examine the possibility in Experiment 2 that restricting the studied views in active exploration to either the plan views or the intermediate views would result in differential learning. We found that recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. Taken together, these experiments demonstrate (1) that active exploration facilitates learning of the three-dimensional structure of objects, and (2) that the superior performance following active exploration may be a direct result of the opportunity to spend more time on plan views of the object.

11.
Kourtzi Z, Shiffrar M. Acta Psychologica, 1999, 102(2-3): 265-292
Depth rotations can reveal new object parts and result in poor recognition of "static" objects (Biederman & Gerhardstein, 1993). Recent studies have suggested that multiple object views can be associated through temporal contiguity and similarity (Edelman & Weinshall, 1991; Lawson, Humphreys & Watson, 1994; Wallis, 1996). Motion may also play an important role in object recognition since observers recognize novel views of objects rotating in the picture plane more readily than novel views of statically re-oriented objects (Kourtzi & Shiffrar, 1997). The series of experiments presented here investigated how different views of a depth-rotated object might be linked together even when these views do not share the same parts. The results suggest that depth rotated object views can be linked more readily with motion than with temporal sequence alone to yield priming of novel views of 3D objects that fall in between "known" views. Motion can also enhance path specific view linkage when visible object parts differ across views. Such results suggest that object representations depend on motion processes.

12.
Liu CH, Chaudhuri A. Perception, 1998, 27(9): 1107-1122
The question of whether face recognition with photographic negatives relies more on external features and pictorial cues than recognition with photographic positives was studied in five experiments. Recognition of whole faces as well as both external and internal features of the faces was compared in experiments 1 and 2. The conditions in which views of faces between learning and test were either identical (hence providing maximum pictorial cues) or different (hence reducing such cues) were compared in experiments 3, 4, and 5. The results showed that recognition of internal features in two-tone and multi-tone images suffered more from the use of photographic negatives than recognition of external features. Testing with both multi-tone and two-tone images revealed that the deficit caused by view changes between learning and test was no more severe with negatives than with positives. Finally, removing external features made recognition of different views equally more difficult for positives and negatives. Overall, these results point to a qualitative rather than quantitative difference between processing face images in photographic positive and negative.

13.
Three experiments are described in which two pictures of isolated man-made objects were presented in succession. The subjects' task was to decide, as rapidly as possible, whether the two pictured objects had the same name. With a stimulus-onset asynchrony (SOA) above 200 msec, two types of facilitation were observed: (1) the response latency was reduced if the pictures showed the same object, even though seen from different viewpoints (object benefit); (2) decision time was reduced further if the pictures showed the same object from the same angle of view (viewpoint benefit). These facilitation effects were not affected by projecting the pictures to different retinal locations. Significant benefits of both types were also obtained when the projected images differed in size. However, in these circumstances there was a small but significant performance decrement in matching two similar views of a single object, but not if the views were different. Conversely, the object benefit, but not the viewpoint benefit, was reduced when the SOA was only 100 msec. The data suggest the existence of (at least) two different visual codes, one non-retinotopic but viewer-centred, the other object-centred.

14.
Pigeons and humans were trained to discriminate between pictures of three-dimensional objects that differed in global shape. Each pair of objects was shown at two orientations that differed by a depth rotation of 90° during training. Pictures of the objects at novel depth rotations were then tested for recognition. The novel test rotations were 30°, 45°, and 90° from the nearest trained orientation and were either interpolated between the trained orientations or extrapolated outside of the training range. For both pigeons and humans, recognition accuracy and/or speed decreased as a function of distance from the nearest trained orientation. However, humans, but not pigeons, were more accurate in recognizing novel interpolated views than novel extrapolated views. The results suggest that pigeons’ recognition was based on independent generalization from each training view, whereas humans showed view-combination processes that resulted in a benefit for novel views interpolated between the training views.

15.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

16.
Newman GE, Herrmann P, Wynn K, Keil FC. Cognition, 2008, 107(2): 420-432
This paper reports the results of two sets of studies demonstrating 14-month-olds' tendency to associate an object's behavior with internal, rather than external, features. In Experiment 1, infants were familiarized to two animated cats that each exhibited a different style of self-generated motion. Infants then saw a novel individual that had an internal feature (stomach color) similar to one cat, but an external feature (hat color) similar to the other. Infants looked reliably longer when the individual's motion was congruent with the hat than when it was congruent with the stomach. Using a converging method involving object choice, Experiment 2 found that infants prioritized the internal feature over the external feature only when the object's behavior was self-generated. In the absence of self-generated behaviors, however, infants did not show a preference for the internal feature.

17.
Multiple views of spatial memory
Recent evidence indicates that mental representations of large (i.e., navigable) spaces are viewpoint dependent when observers are restricted to a single view. The purpose of the present study was to determine whether two views of a space would produce a single viewpoint-independent representation or two viewpoint-dependent representations. Participants learned the locations of objects in a room from two viewpoints and then made judgments of relative direction from imagined headings either aligned or misaligned with the studied views. The results indicated that mental representations of large spaces were viewpoint dependent, and that two views of a spatial layout appeared to produce two viewpoint-dependent representations in memory. Imagined headings aligned with the study views were more accessible than were novel headings in terms of both speed and accuracy of pointing judgments.

18.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different no matter whether the view change of the object was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by participants’ locomotion (object stationary) than caused by object rotation (observer stationary) but this difference disappeared when the view change was 90°. These results indicate that facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, but that such facilitation does not eliminate viewpoint costs in visual object processing.

19.
In the first three experiments, subjects felt solid geometrical forms and matched raised-line pictures to the objects. Performance was best in experiment 1 for top views, with shorter response latencies than for side views, front views, or 3-D views with foreshortening. In a second experiment with blind participants, matching accuracy was not significantly affected by prior visual experience, but speed advantages were found for top views, with 3-D views also yielding better matching accuracy than side views. There were no performance advantages for pictures of objects with a constant cross section in the vertical axis. The early-blind participants had lower performance for side and frontal views. The objects were rotated to oblique orientations in experiment 3. Early-blind subjects performed worse than the other subjects given object rotation. Visual experience with pictures of objects at many angles could facilitate identification at oblique orientations. In experiment 5 with blindfolded sighted subjects, tangible pictures were used as targets and as choices. The results yielded superior overall performance for 3-D views (mean, M = 74% correct) and much lower matching accuracy for top views as targets (M = 58% correct). Performance was highest when the target and matching viewpoint were identical, but 3-D views (M = 96% correct) were still far better than top views. The accuracy advantage of the top views also disappeared when more complex objects were tested in experiment 6. Alternative theoretical implications of the results are discussed.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号