Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.

2.
Perceiving Real-World Viewpoint Changes (Total citations: 10; self-citations: 0; other citations: 10)
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.

3.
Our intuition that we richly represent the visual details of our environment is illusory. When viewing a scene, we seem to use detailed representations of object properties and interobject relations to achieve a sense of continuity across views. Yet, several recent studies show that human observers fail to detect changes to objects and object properties when localized retinal information signaling a change is masked or eliminated (e.g., by eye movements). However, these studies changed arbitrarily chosen objects, which may have been outside the focus of attention. We draw on previous research showing the importance of spatiotemporal information for tracking objects by creating short motion pictures in which objects in both arbitrary locations and the very center of attention were changed. Adult observers failed to notice changes in both cases, even when the sole actor in a scene transformed into another person across an instantaneous change in camera angle (or “cut”).

4.
Active and passive scene recognition across views. (Total citations: 7; self-citations: 0; other citations: 7)
R F Wang, D J Simons. Cognition, 1999, 70(2): 191-210
Recent evidence suggests that scene recognition across views is impaired when an array of objects rotates relative to a stationary observer, but not when the observer moves relative to a stationary display [Simons, D.J., Wang, R.F., 1998. Perceiving real-world viewpoint changes. Psychological Science 9, 315-320]. The experiments in this report examine whether the relatively poorer performance by stationary observers across view changes results from a lack of perceptual information for the rotation or from the lack of active control of the perspective change, both of which are present for viewpoint changes. Three experiments compared performance when observers passively experienced the view change and when they actively caused the change. Even with visual information and active control over the display rotation, change detection performance was still worse for orientation changes than for viewpoint changes. These findings suggest that observers can update a viewer-centered representation of a scene when they move to a different viewing position, but such updating does not occur during display rotations even with visual and motor information for the magnitude of the change. This experimental approach, using arrays of real objects rather than computer displays of isolated individual objects, can shed light on mechanisms that allow accurate recognition despite changes in the observer's position and orientation.

5.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information on real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

6.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

7.
The attentional cost of inattentional blindness (Total citations: 4; self-citations: 0; other citations: 4)
Bressan P, Pizzighello S. Cognition, 2008, 106(1): 370-383
When our attention is engaged in a visual task, we can be blind to events which would otherwise not be missed. In three experiments, 97 out of the 165 observers performing a visual attention task failed to notice an unexpected, irrelevant object moving across the display. Surprisingly, this object significantly lowered accuracy in the primary task when, and only when, it failed to reach awareness. We suggest that an unexpected stimulus causes a state of alert that would normally generate an attentional shift; if this response is prevented by an attention-consuming task, a portion of the attentional resources remains allocated to the object. Such a portion is large enough to disturb performance, but not so large that the object can be recognized as task-irrelevant and accordingly ignored. Our findings have one counterintuitive implication: irrelevant stimuli might hamper some types of performance only when perceived subliminally.

8.
9.
In four experiments, we examined whether generalization to unfamiliar views was better under stereo viewing or under nonstereo viewing across different tasks and stimuli. In the first three experiments, we used a sequential matching task in which observers matched the identities of shaded tube-like objects. Across Experiments 1-3, we manipulated the presentation method of the nonstereo stimuli (having observers wear an eye patch vs. showing observers the same screen image) and the magnitude of the viewpoint change (30 degrees vs. 38 degrees). In Experiment 4, observers identified "easy" and "hard" rotating wire-frame objects at the individual level under stereo and nonstereo viewing conditions. We found a stereo advantage for generalizing to unfamiliar views in all the experiments. However, in these experiments, performance remained view dependent even under stereo viewing. These results strongly argue against strictly 2-D image-based models of object recognition, at least for the stimuli and recognition tasks used, and suggest that observers used representations that contained view-specific local depth information.

10.
Perception of animacy from the motion of a single object (Total citations: 3; self-citations: 0; other citations: 3)
Tremoulet PD, Feldman J. Perception, 2000, 29(8): 943-951
We demonstrate that a single moving object can create the subjective impression that it is alive, based solely on its pattern of movement. Our displays differ from conventional biological motion displays (which normally involve multiple moving points, usually integrated to suggest a human form) in that they contain only a single rigid object moving across a uniform field. We focus on motion paths in which the speed and direction of the target object change simultaneously. Naive subjects' ratings of animacy were significantly influenced by (i) the magnitude of the speed change, (ii) the angular magnitude of the direction change, (iii) the shape of the object, and (iv) the alignment between the principal axis of the object and its direction of motion. These findings are consistent with the hypothesis that observers classify as animate only those objects whose motion trajectories are otherwise unlikely to occur in the observed setting.

11.
Z Kourtzi, M Shiffrar. Acta Psychologica, 1999, 102(2-3): 265-292
Depth rotations can reveal new object parts and result in poor recognition of "static" objects (Biederman & Gerhardstein, 1993). Recent studies have suggested that multiple object views can be associated through temporal contiguity and similarity (Edelman & Weinshall, 1991; Lawson, Humphreys & Watson, 1994; Wallis, 1996). Motion may also play an important role in object recognition since observers recognize novel views of objects rotating in the picture plane more readily than novel views of statically re-oriented objects (Kourtzi & Shiffrar, 1997). The series of experiments presented here investigated how different views of a depth-rotated object might be linked together even when these views do not share the same parts. The results suggest that depth rotated object views can be linked more readily with motion than with temporal sequence alone to yield priming of novel views of 3D objects that fall in between "known" views. Motion can also enhance path specific view linkage when visible object parts differ across views. Such results suggest that object representations depend on motion processes.

12.
Objects that serve as extensions of the body can produce a sensation of embodiment, feeling as if they are a part of us. We investigated the characteristics that drive an object’s embodiment, examining whether cast-body shadows, a purely visual stimulus, are embodied. Tools are represented as an extension of the body when they enable observers to interact with distant targets, perceptually distorting space. We examined whether perceptual distortion would also result from exposure to cast-body shadows in two separate distance estimation perceptual matching tasks. If observers represent cast-body shadows as extensions of their bodies, then when these shadows extend toward a target, it should appear closer than when no shadow is present (Experiment 1). This effect should not occur when a non-cast-body shadow is cast toward a target (Experiment 2). We found perceptual distortions in both cast-body shadow and tool-use conditions, but not in our non-cast-body shadow condition. These results suggest that, although cast-body shadows do not enable interaction with objects or provide direct tactile feedback, observers nonetheless represent their shadows as if they were a part of them.

13.
Stephan BC, Caine D. Perception, 2007, 36(2): 189-198
In recognising a face the visual system shows a remarkable ability in overcoming changes in viewpoint. However, the mechanisms involved in solving this complex computational problem, particularly in terms of information processing, have not been clearly defined. Considerable evidence indicates that face recognition involves both featural and configural processing. In this study we examined the contribution of featural information across viewpoint change. Participants were familiarised with unknown faces and were later tested for recognition in complete or part-face format, across changes in view. A striking effect of viewpoint resulting in a reduction in profile recognition compared with the three-quarter and frontal views was found. However, a complete-face over part-face advantage independent of transformation was demonstrated across all views. A hierarchy of feature salience was also demonstrated. Findings are discussed in terms of the problem of object constancy as it applies to faces.

14.
In an earlier report (Harman, Humphrey, & Goodale, 1999), we demonstrated that observers who actively rotated three-dimensional novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In Experiment 1 of the present study we showed that compared to passive viewing, active exploration of three-dimensional object structure led to faster performance on a "mental rotation" task involving the studied objects. In addition, we examined how much time observers concentrated on particular views during active exploration. As we found in the previous report, they spent most of their time looking at the "side" and "front" views ("plan" views) of the objects, rather than the three-quarter or intermediate views. This strong preference for the plan views of an object led us to examine the possibility in Experiment 2 that restricting the studied views in active exploration to either the plan views or the intermediate views would result in differential learning. We found that recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. Taken together, these experiments demonstrate (1) that active exploration facilitates learning of the three-dimensional structure of objects, and (2) that the superior performance following active exploration may be a direct result of the opportunity to spend more time on plan views of the object.

15.
Most studies and theories of object recognition have addressed the perception of rigid objects. Yet, physical objects may also move in a nonrigid manner. A series of priming studies examined the conditions under which observers can recognize novel views of objects moving nonrigidly. Observers were primed with 2 views of a rotating object that were linked by apparent motion or presented statically. The apparent malleability of the rotating prime object varied such that the object appeared to be either malleable or rigid. Novel deformed views of malleable objects were primed when falling within the object's motion path. Priming patterns were significantly more restricted for deformed views of rigid objects. These results suggest that moving malleable objects may be represented as continuous events, whereas rigid objects may not. That is, object representations may be "dynamically remapped" during the analysis of the object's motion.

16.
When viewing real-world scenes composed of a myriad of objects, detecting changes can be quite difficult, especially when transients are masked. In general, changes are noticed more quickly and accurately if they occur at the currently (or a recently) attended location. Here, we examine the effects of explicit and implicit semantic cues on the guidance of attention in a change detection task. Participants first attempted to read aloud a briefly presented prime word, then looked for a difference between two alternating versions of a real-world scene. Helpful primes named the object that changed, while misdirecting primes named another (unchanging) object in the picture. Robust effects were found for both explicit and implicit priming conditions, with helpful primes yielding faster change detection times than misdirecting or neutral primes. This demonstrates that observers are able to use higher order semantic information as a cue to guide attention within a natural scene, even when the semantic information is presented outside of explicit awareness.

17.
The Role of Fixation Position in Detecting Scene Changes Across Saccades (Total citations: 4; self-citations: 1; other citations: 3)
Target objects presented within color images of naturalistic scenes were deleted or rotated during a saccade to or from the target object or to a control region of the scene. Despite instructions to memorize the details of the scenes and to monitor for object changes, viewers frequently failed to notice the changes. However, the failure to detect change was mediated by three other important factors: First, accuracy generally increased as the distance between the changing region and the fixation immediately before or after the change decreased. Second, changes were sometimes initially missed, but subsequently noticed when the changed region was later refixated. Third, when an object disappeared from a scene, detection of that disappearance was greatly improved when the deletion occurred during the saccade toward that object. These results suggest that fixation position and saccade direction play an important role in determining whether changes will be detected. It appears that more information can be retained across views than has been suggested by previous studies.

18.
Evidence for preserved representations in change blindness (Total citations: 2; self-citations: 0; other citations: 2)
People often fail to detect large changes to scenes, provided that the changes occur during a visual disruption. This phenomenon, known as "change blindness," occurs both in the laboratory and in real-world situations in which changes occur unexpectedly. The pervasiveness of the inability to detect changes is consistent with the theoretical notion that we internally represent relatively little information from our visual world from one glance at a scene to the next. However, evidence for change blindness does not necessarily imply the absence of such a representation: people could also miss changes if they fail to compare an existing representation of the pre-change scene to the post-change scene. In three experiments, we show that people often do have a representation of some aspects of the pre-change scene even when they fail to report the change. And, in fact, they appear to "discover" this memory and can explicitly report details of a changed object in response to probing questions. The results of these real-world change detection studies are discussed in the context of broader claims about change blindness.

19.
The visual system has been suggested to integrate different views of an object in motion. We investigated differences in the way moving and static objects are represented by testing for priming effects to previously seen ("known") and novel object views. We showed priming effects for moving objects across image changes (e.g., mirror reversals, changes in size, and changes in polarity) but not over temporal delays. The opposite pattern of results was observed for objects presented statically; that is, static objects were primed over temporal delays but not across image changes. These results suggest that representations for moving objects are: (1) updated continuously across image changes, whereas static object representations generalize only across similar images, and (2) more short-lived than static object representations. These results suggest two distinct representational mechanisms: a static object mechanism rather spatially refined and permanent, possibly suited for visual recognition, and a motion-based object mechanism more temporary and less spatially refined, possibly suited for visual guidance of motor actions.

20.
The time course of perception and retrieval of object features was investigated. Participants completed a perceptual matching task and 2 recognition tasks under time pressure. The recognition tasks imposed different retention loads. A stochastic model of feature sampling with a Bayesian decision component was used to estimate the rate of feature perception and the rate of retrieval of feature information. The results demonstrated that retrieval rates did not differ among object features if only a single object was held in memory. If 2 objects were retained in memory, differences among retrieval rates of features emerged, indicating that features that were quickly perceived were also quickly retrieved. The results from the 2-object retention condition are compatible with process reinstatement models of retrieval.
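The abstract above does not specify the model's details, but the general idea of a stochastic feature-sampling process with a Bayesian decision component can be sketched as follows. This is a minimal illustrative simulation, not the authors' fitted model: the sampling rate, likelihood ratio, and decision threshold are all assumed parameters chosen for demonstration.

```python
import random

def decision_time(rate, lr=2.0, threshold=0.95, dt=0.001, rng=None):
    """Simulate one trial of a stochastic feature-sampling process with a
    Bayesian decision rule: feature samples arrive as a Poisson process
    (probability rate*dt per small time step); each sample multiplies the
    posterior odds in favor of the correct response by the likelihood
    ratio `lr`; a response is made once the posterior probability exceeds
    `threshold`. Returns the simulated decision time."""
    rng = rng or random.Random(0)
    odds, t = 1.0, 0.0  # uniform prior -> prior odds of 1
    while odds / (1.0 + odds) < threshold:
        t += dt
        if rng.random() < rate * dt:  # a feature sample arrives
            odds *= lr
    return t

# A feature sampled (perceived or retrieved) at a higher rate yields
# faster decisions on average, mirroring the reported link between
# quickly perceived and quickly retrieved features.
mean_fast = sum(decision_time(20.0, rng=random.Random(i)) for i in range(200)) / 200
mean_slow = sum(decision_time(5.0, rng=random.Random(i)) for i in range(200)) / 200
```

On this sketch, a heavier retention load (e.g., two objects rather than one) could be modeled simply as a lower retrieval rate for some features, which lengthens mean decision time without changing the decision rule itself.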


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号