Similar Documents
20 similar documents retrieved.
1.
How do observers recognize objects after spatial transformations? Recent neurocomputational models have proposed that object recognition is based on coordinate transformations that align memory and stimulus representations. If the recognition of a misoriented object is achieved by adjusting a coordinate system (or reference frame), then recognition should be facilitated when the object is preceded by a different object in the same orientation. In the two experiments reported here, two objects were presented in brief masked displays that were in close temporal contiguity; the objects were in either congruent or incongruent picture-plane orientations. Results showed that naming accuracy was higher for congruent than for incongruent orientations. The congruency effect was independent of superordinate category membership (Experiment 1) and was found for objects with different main axes of elongation (Experiment 2). The results indicate congruency effects for common familiar objects even when they have dissimilar shapes. These findings are compatible with models in which object recognition is achieved by an adjustment of a perceptual coordinate system.

2.
Z Kourtzi, M Shiffrar. Acta Psychologica, 1999, 102(2-3): 265-292
Depth rotations can reveal new object parts and result in poor recognition of "static" objects (Biederman & Gerhardstein, 1993). Recent studies have suggested that multiple object views can be associated through temporal contiguity and similarity (Edelman & Weinshall, 1991; Lawson, Humphreys & Watson, 1994; Wallis, 1996). Motion may also play an important role in object recognition since observers recognize novel views of objects rotating in the picture plane more readily than novel views of statically re-oriented objects (Kourtzi & Shiffrar, 1997). The series of experiments presented here investigated how different views of a depth-rotated object might be linked together even when these views do not share the same parts. The results suggest that depth rotated object views can be linked more readily with motion than with temporal sequence alone to yield priming of novel views of 3D objects that fall in between "known" views. Motion can also enhance path specific view linkage when visible object parts differ across views. Such results suggest that object representations depend on motion processes.

3.
Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.

4.
Vuong QC, Tarr MJ. Perception, 2006, 35(4): 497-510
The spatiotemporal pattern projected by a moving object is specific to that object, as it depends on both the shape and the dynamics of the object. Previous research has shown that observers learn to make use of this spatiotemporal signature to recognize dynamic faces and objects. In two experiments, we assessed the extent to which the structural similarity of the objects and the presence of spatiotemporal noise affect how these signatures are learned and subsequently used in recognition. Observers first learned to identify novel, structurally distinctive or structurally similar objects that rotated with a particular motion. At test, each learned object moved with its studied motion or with a non-studied motion. In the non-studied motion condition we manipulated either dynamic information alone (experiment 1) or both static and dynamic information (experiment 2). Across both experiments we found that changing the learned motion of an object impaired recognition performance when 3-D shape was similar or when the visual input was noisy during learning. These results are consistent with the hypothesis that observers use learned spatiotemporal signatures and that such information becomes progressively more important as shape information becomes less reliable.

5.
Experimental evidence has shown that the time taken to recognize objects is often dependent on stimulus orientation in the image plane. This effect has been taken as evidence that recognition is mediated by orientation-specific stored representations of object shapes. However, the factors that determine the orientation specificity of these representations remain unclear. This issue is examined using a word-picture verification paradigm in which subjects identified line drawings of common mono- and polyoriented objects at different orientations. A detailed analysis of the results showed that, in contrast to mono-oriented objects, the recognition of polyoriented objects is not dependent on stimulus orientation. This interaction provides a further constraint on hypotheses about the factors that determine the apparent orientation specificity of stored shape representations. In particular, the results support previous proposals that objects are encoded in stored representations at familiar stimulus orientations.

6.
Under some circumstances, moving objects capture attention. Whether a change in the direction of a moving object attracts attention is still unexplored. We investigated this using a continuous tracking task. In Experiment 1, four grating patches changed smoothly and semirandomly in their positions and orientations, and observers attempted to track the orientations of two of them. After the stimuli disappeared, one of the two target gratings was queried and observers reported its orientation; hence the direction of the gratings' motion across the screen was an irrelevant feature. Despite the irrelevance of its motion, when the nonqueried grating had collided with an invisible boundary within the last 200 msec of the trial, accuracy in reporting the queried grating was worse than when it had not. Attention was likely drawn by the unexpected nature of these changes in direction of motion, since the effect was eliminated when the boundaries were visible (Experiment 2). This tendency for unexpected motion changes to attract attention has important consequences for the monitoring of objects in everyday environments.

7.
Five experiments were designed to investigate the influence of three-dimensional (3-D) orientation change on apparent motion. Projections of an orientation-specific 3-D object were sequentially flashed in different locations and at different orientations. Such an occurrence could be resolved by perceiving a rotational motion in depth around an axis external to the object. Consistent with this proposal, it was found that observers perceived curved paths in depth. Although the magnitude of perceived trajectory curvature often fell short of that required for rotational motions in depth (3-D circularity), judgments of the slant of the virtual plane on which apparent motions occurred were quite close to the predictions of a model that proposes circular paths in depth.

8.
Learning to recognize objects appears to depend critically on extended observation of appearance over time. Specifically, temporal association between dissimilar views of an object has been proposed as a tool for learning invariant representations for recognition. We examined heretofore untested aspects of the temporal association hypothesis using a familiar dynamic object, the human body. Specifically, we examined the role of appearance prediction (temporal asymmetry) in temporal association. In our task, observers performed a change detection task using upright and inverted images of a walking body either with or without previous exposure to a motion stimulus depicting an upright walker. Observers who were exposed to the dynamic stimulus were further divided into two groups dependent on whether the observed motion depicted forward or backward walking. We find that the effect of the motion stimulus on sensitivity is highly dependent on whether the observed motion is consistent with past experience.

9.
Most studies and theories of object recognition have addressed the perception of rigid objects. Yet, physical objects may also move in a nonrigid manner. A series of priming studies examined the conditions under which observers can recognize novel views of objects moving nonrigidly. Observers were primed with 2 views of a rotating object that were linked by apparent motion or presented statically. The apparent malleability of the rotating prime object varied such that the object appeared to be either malleable or rigid. Novel deformed views of malleable objects were primed when falling within the object's motion path. Priming patterns were significantly more restricted for deformed views of rigid objects. These results suggest that moving malleable objects may be represented as continuous events, whereas rigid objects may not. That is, object representations may be "dynamically remapped" during the analysis of the object's motion.

10.
Face recognition depends critically on horizontal orientations (Goffaux & Dakin, Frontiers in Psychology, 1(143), 1–14, 2010): Face images that lack horizontal features are harder to recognize than those that have this information preserved. We asked whether facial emotional recognition also exhibits this dependency by asking observers to categorize orientation-filtered happy and sad expressions. Furthermore, we aimed to dissociate image-based orientation energy from object-based orientation by rotating images 90° in the picture plane. In our first experiment, we showed that the perception of emotional expression does depend on horizontal orientations, and that object-based orientation constrained performance more than image-based orientation did. In Experiment 2, we showed that mouth openness (i.e., open vs. closed mouths) also influenced the emotion-dependent reliance on horizontal information. Finally, we describe a simple computational analysis that demonstrates that the impact of mouth openness was not predicted by variation in the distribution of orientation energy across horizontal and vertical orientation bands. Overall, our results suggest that emotion recognition largely does depend on horizontal information defined relative to the face, but that this bias is modulated by multiple factors that introduce variation in appearance across and within distinct emotions.

11.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information on real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

12.
In four experiments, we examined the haptic recognition of 3-D objects. In Experiment 1, blindfolded participants named everyday objects presented haptically in two blocks. There was significant priming of naming, but no cost of an object changing orientation between blocks. However, typical orientations of objects were recognized more quickly than nonstandard orientations. In Experiment 2, participants accurately performed an unannounced test of memory for orientation. The lack of orientation-specific priming in Experiment 1, therefore, was not because participants could not remember the orientation at which they had first felt an object. In Experiment 3, we examined haptic naming of objects that were primed either haptically or visually. Haptic priming was greater than visual priming, although significant cross-modal priming was also observed. In Experiment 4, we tested recognition memory for familiar and unfamiliar objects using an old-new recognition task. Objects were recognized best when they were presented in the same orientation in both blocks, suggesting that haptic object recognition is orientation sensitive. Photographs of the unfamiliar objects may be downloaded from www.psychonomic.org/archive.

13.
Wakita M. Animal Cognition, 2008, 11(3): 535-545
It was previously demonstrated that monkeys divide the orientation continuum into cardinal and oblique categories. However, it is still unclear how monkeys perceive within-category orientations. To better understand monkeys' perception of orientation, two experiments were conducted using five monkeys. In experiment 1, they were trained to identify either one cardinal or one oblique target orientation out of six orientations. The results showed that they readily identified the cardinal target whether it was oriented horizontally or vertically. However, a longer training period was needed to identify the oblique target orientation regardless of its degree and direction of tilt. In experiment 2, the same monkeys were trained to identify two oblique target orientations out of six orientations. The target orientations were paired so that they shared either the degree of tilt, the direction of tilt, or neither property. The results showed that the monkeys readily identified oblique orientations when they had either the same degree or direction of tilt. However, when the target orientations had neither the same degree nor direction of tilt, the animals had difficulty in identifying them. In summary, horizontal and vertical orientations are individually processed, indicating that monkeys do not have a category for cardinal orientation, but they may recognize cardinal orientations as non-obliques. In addition, monkeys efficiently abstract either the degree or the direction of tilt from oblique orientations, but they have difficulty combining these features to identify an oblique orientation. Thus, not all orientations within the oblique category are equally perceived.

14.
15.
Visual Cognition, 2013, 21(4): 373-382
Left-right orientation and size incongruence is known to affect recognition memory for objects but not object priming. In the present study, the effects of study-test changes in left-right orientation and size on old-new recognition decisions and long-term priming of human motion patterns were examined. Experiment 1 showed effects of orientation incongruence on both recognition and priming. Experiment 2 showed an effect of size incongruence on recognition memory but not on priming. It is suggested that the representations of human actions that underlie human motion priming are at a level that preserves orientation, possibly because of the importance of dynamic information for perceiving motion patterns or because encoding of human motion is governed by a body schema (e.g. Reed & Farah, 1995). In contrast, low-level metric information such as size is inconsequential to priming because priming involves identification of shape, which is not affected by size transformations. The effect of size on recognition memory, on the other hand, shows that explicit recognition decisions may draw on any available episodic information, including metric attributes, to make an old-new discrimination.

16.
Visual Cognition, 2013, 21(2): 157-199
Scene recognition across a perspective change typically exhibits viewpoint dependence. Accordingly, the more the orientation of the test viewpoint departs from that of the study viewpoint, the more time it takes and the less accurate observers are at recognizing the spatial layout. Three experiments show that observers can take advantage of a virtual avatar that specifies their future “embodied” perspective on the visual scene. This “out-of-body” priming reduces or even abolishes viewpoint dependence for detecting a change in an object location when the environment is respectively unknown or familiar to the observer. Viewpoint dependence occurs when the priming and primed viewpoints do not match. Changes to permanent extended structures (such as walls) or altered object-to-object spatial relations across viewpoint change are detrimental to viewpoint priming. A neurocognitive model describes the coordination of “out-of-body” and “embodied” perspectives relevant to social perception when understanding what another individual sees.

17.
If perspective views of an object in two orientations are displayed in alternation, observers will experience the object rotating back and forth in three-dimensional space. Rotational motion is perceived even though only two views are displayed and each view is two-dimensional. The results of 5 experiments converge on the conclusion that the perception of apparent rotational motion produces representations in visual memory corresponding to the spatial structure of the object along its arc of rotation. These representations are view-dependent, preserving information about spatial structure from particular perspectives, but do not preserve low-level perceptual details of the stimulus.

18.
Perceiving Real-World Viewpoint Changes
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.

19.
The present experiment examined whether subjects can form and store imagined objects in various orientations. Subjects in a training phase named line drawings of natural objects shown at six orientations, named objects shown upright, or imagined upright objects at six orientations. Time to imagine an upright object at another orientation increased the farther the designated orientation was from the upright, with faster image formation times at 180° than at 120°. Similar systematic patterns of effects of orientation on identification time were found for rotated objects. During the test phase, all subjects named the previously experienced objects as well as new objects, at six orientations. The orientation effect for old objects seen previously in a variety of orientations was much reduced relative to the orientation effect for new objects. In contrast, substantial effects of orientation on naming time were observed for old objects for subjects who had previously seen the objects upright only or upright but imagined at different orientations. The results suggest that the attenuation of initially large effects of orientation with practice cannot be due to imagining and forming representations of objects at a number of orientations.

20.
Detection and recognition of point-light walking is reduced when the display is inverted, or turned upside down. This indicates that past experience influences biological motion perception. The effect could be the result of either presenting the human form in a novel orientation or presenting the event of walking in a novel orientation, as the two are confounded in the case of walking on feet. This study teased apart the effects of object and event orientation by examining detection accuracy for upright and inverted displays of a point-light figure walking on his hands. Detection of this walker was greater in the upright display, which had a familiar event orientation and an unfamiliar object orientation, than in the inverted display, which had a familiar object orientation and an unfamiliar event orientation. This finding supports accounts of event perception and recognition that are based on spatiotemporal patterns of motion associated with the dynamics of an event.

