Similar articles
20 similar articles found (search time: 31 ms)
1.
In the present study, we examined whether it is easier to judge when an object will pass one’s head if the object’s surface is textured. There are three reasons to suspect that this might be so: First, the additional (local) optic flow may help one judge the rate of expansion and the angular velocity more reliably. Second, local deformations related to the change in angle between the object and the observer could help track the object’s position along its path. Third, more reliable judgments of the object’s shape could help separate global expansion caused by changes in distance from expansion due to changes in the angle between the object and the observer. We can distinguish among these three reasons by comparing performance for textured and uniform spheres and disks. Moving objects were displayed for 0.5–0.7 sec. Subjects had to decide whether the object would pass them before or after a beep that was presented 1 sec after the object started moving. Subjects were not more precise with textured objects. When the disk rotated in order to compensate for the orientation-related contraction that its image would otherwise undergo during its motion, it appeared to arrive later, despite the fact that this strategy increases the global rate of expansion. We argue that this is because the expected deformation of the object’s image during its motion is considered when time to passage is judged. Therefore, the most important role for texture in everyday judgments of time to passage is probably that it helps one judge the object’s shape and thereby estimate how its image will deform as it moves.

2.
Effects of information specifying the position of an object in a 3-D scene were investigated in two experiments with twelve observers. To separate the effects of the change in scene position from the changes in the projection that occur with increased distance from the observer, the same projections were produced by simulating (a) a constant object at different scene positions and (b) different objects at the same scene position. The simulated scene consisted of a ground plane, a ceiling plane, and a cylinder on a pole attached to both planes. Motion-parallax scenes were studied in one experiment; texture-gradient scenes were studied in the other. Observers adjusted a line to match the perceived internal depth of the cylinder. Judged depth for objects matched in simulated size decreased as simulated distance from the observer increased. Judged depth decreased at a faster rate for the same projections shown at a constant scene position. Adding object-centered depth information (object rotation) increased judged depth for the motion-parallax displays. These results demonstrate that the judged internal depth of an object is reduced by the change in projection that occurs with increased distance, but this effect is diminished if information for change in scene position accompanies the change in projection.

3.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different, no matter whether the view change of the object was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by participants’ locomotion (object stationary) than at those caused by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, but that such facilitation does not eliminate viewpoint costs in visual object processing.

4.
The aim of this narrow-focused text is to argue against the claim that the appresentation of unperceived features of objects that is implied in perceptual intentionality presupposes a reference to perceptions other subjects could have of these objects. This claim, as it has been defended by Dan Zahavi, rests upon an erroneous supposition about the modal status of the perceptual possibilities to which the perceived object refers, which should be interpreted not as effectively realizable but as mere de jure possibilities: perceptions that could have been realized in principle, but that may be beyond one’s reach considering one’s concrete factual powers and opportunities. Horizontal intentionality is better accounted for in terms of perceptions that one could have had because of one’s embodied character and the always open possibility of occupying another position with respect to the object. This modal ubiquity, which is inherent to one’s being-in-space, is what supports the field of de jure possibilities that is implied in horizontal intentionality. The co-presence of the parts and features one does not perceive from here is a counterpoint to one’s being-possibly-there.

5.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

6.
Models of spatial updating attempt to explain how representations of spatial relationships between the actor and objects in the environment change as the actor moves. In allocentric models, object locations are encoded in an external reference frame, and only the actor’s position and orientation in that reference frame need to be updated. Thus, spatial updating should be independent of the number of objects in the environment (set size). In egocentric updating models, object locations are encoded relative to the actor, so the location of each object relative to the actor must be updated as the actor moves. Thus, spatial updating efficiency should depend on set size. We examined which model better accounts for human spatial updating by having people reconstruct the locations of varying numbers of virtual objects either from the original study position or from a changed viewing position. Consistent with the egocentric updating model, object localization following a viewpoint change was affected by the number of objects in the environment.
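The computational contrast between the two models can be sketched in a few lines. As a toy illustration (not code from the study, and with a hypothetical 2-D coordinate convention), an allocentric update touches only the observer's pose, whereas an egocentric update must recompute every stored object location, so its cost grows with set size:

```python
import math

def allocentric_update(observer_pose, movement, objects):
    """Allocentric model: object locations live in a world frame, so a move
    updates only the observer's pose -- O(1) in set size; `objects` untouched."""
    x, y, heading = observer_pose
    dx, dy, dturn = movement
    return (x + dx, y + dy, heading + dturn)

def egocentric_update(objects_rel, movement):
    """Egocentric model: every object location is stored relative to the
    actor, so each entry must be recomputed -- O(n) in set size."""
    dx, dy, dturn = movement
    updated = []
    for (ox, oy) in objects_rel:
        # shift into the new egocentric origin, then counter-rotate by the turn
        tx, ty = ox - dx, oy - dy
        c, s = math.cos(-dturn), math.sin(-dturn)
        updated.append((tx * c - ty * s, tx * s + ty * c))
    return updated
```

The set-size prediction falls directly out of the loop in `egocentric_update`: doubling the number of objects doubles the updating work, while `allocentric_update` does the same work regardless.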

7.
In a mental rotation task, participants must determine whether two stimuli match when one undergoes a rotation in 3-D space relative to the other. The key evidence for mental rotation is the finding of a linear increase in response times as objects are rotated farther apart. This signature increase in response times is also found in recognition of rotated objects, which has led many theorists to postulate mental rotation as a key transformational procedure in object recognition. We compared mental rotation and object recognition in tasks that used the same stimuli and presentation conditions and found that, whereas mental rotation costs increased relatively linearly with rotation, object recognition costs increased only over small rotations. Taken in conjunction with a recent brain imaging study, this dissociation in behavioral performance suggests that object recognition is based on matching of image features rather than on 3-D mental transformations.
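The behavioral dissociation can be illustrated with a toy response-time model (all parameter values are hypothetical, not estimates from the study): mental-rotation cost grows linearly with angular disparity, while recognition cost rises only over small rotations and then levels off:

```python
def mental_rotation_rt(angle_deg, base_ms=500.0, rate_ms_per_deg=3.0):
    """Linear RT signature of mental rotation (hypothetical parameters)."""
    return base_ms + rate_ms_per_deg * angle_deg

def recognition_rt(angle_deg, base_ms=500.0, rate_ms_per_deg=3.0,
                   saturation_deg=30.0):
    """Recognition cost rises only over small rotations, then saturates,
    as expected if matching relies on image features rather than a full
    3-D transformation."""
    return base_ms + rate_ms_per_deg * min(angle_deg, saturation_deg)
```

Plotting both functions against angular disparity reproduces the qualitative pattern the abstract reports: one straight line, one line that flattens past small rotations.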

8.
Zhang H, Mou W, McNamara TP. Cognition, 2011, (3): 419-429
Three experiments examined the role of reference directions in spatial updating. Participants briefly viewed an array of five objects. A non-egocentric reference direction was primed by placing a stick under two objects in the array at the time of learning. After a short interval, participants detected which object had been moved at a novel view that was caused by table rotation or by their own locomotion. The stick was removed at test. The results showed that detection of position change was better when an object not on the stick was moved than when an object on the stick was moved. Furthermore, change detection was better in the observer locomotion condition than in the table rotation condition only when an object on the stick was moved, but not when an object not on the stick was moved. These results indicated that when the reference direction was not accurately indicated in the test scene, detection of position change was impaired, but this impairment was less in the observer locomotion condition. These results suggest that people not only represent objects’ locations with respect to a fixed reference direction but also represent and update their orientation according to the same reference direction, which can be used to recover the accurate reference direction and facilitate detection of position change when no accurate reference direction is presented in the test scene.

9.
When deciding if a rotated object would face to the left or to the right, if imagined at the upright, mental rotation is typically assumed to be carried out through the shortest angular distance to the upright prior to determining the direction of facing. However, the response time functions for left- and right-facing objects are oppositely asymmetric, which is not consistent with the standard explanation. Using Searle and Hamm’s individual-differences adaptation of Kung and Hamm’s Mixture Model, the current study compares the predicted response time functions derived when assuming that objects are rotated through the shortest route to the upright with the predicted response time functions derived when assuming that objects are rotated in the direction they face. The latter model provides a better fit to the majority of the individual data. This allows us to conclude that, when deciding if rotated objects would face to the left or to the right if imagined at the upright, mental rotation is carried out in the direction that the objects face and not necessarily in the shortest direction to the upright. By comparing results for mobile and immobile object sets we can also conclude that semantic information regarding the mobility of an object does not appear to influence the speed of mental rotation, but it does appear to influence pre-rotation processes and the likelihood of employing a mental rotation strategy.
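The two competing models make different predictions about how far an object must be rotated to reach the upright. A minimal sketch, under a hypothetical convention in which orientation is measured counter-clockwise from upright and right-facing objects rotate clockwise:

```python
def shortest_route_rotation(orientation_deg):
    """Standard account: rotate through the smaller angular distance to
    upright (0 degrees), regardless of which way the object faces."""
    o = orientation_deg % 360
    return min(o, 360 - o)

def facing_direction_rotation(orientation_deg, faces_right):
    """Facing-direction account: right-facing objects rotate clockwise to
    upright (cost = o), left-facing objects counter-clockwise (cost = 360 - o).
    The direction convention here is illustrative, not the paper's."""
    o = orientation_deg % 360
    return o if faces_right else (360 - o) % 360
```

With this convention, an object oriented 300° counter-clockwise is only 60° from upright by the shortest route, but a right-facing object would travel the long way round (300°), which is the kind of opposite asymmetry for left- versus right-facing objects that the abstract describes.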

10.
Rushton SK, Bradshaw MF, Warren PA. Cognition, 2007, 105(1): 237-245
An object that moves is spotted almost effortlessly; it "pops out". When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion. Without the unique identifier of retinal motion an object moving relative to the scene should be difficult to locate. Using a search task, we investigated this proposition. Computer-rendered objects were moved and transformed in a manner consistent with movement of the observer. Despite the complex pattern of retinal motion, objects moving relative to the scene were found to pop out. We suggest the brain uses its sensitivity to optic flow to "stabilise" the scene, allowing the scene-relative movement of an object to be identified.

11.
段海军, 连灵. 心理科学 (Psychological Science), 2012, 35(1): 76-81
The two major theories of object recognition remain in dispute. Object-centered theories hold that recognition is independent of spatial position, no matter where an object appears, whereas viewer-centered theories hold that recognition depends on spatial position. Drawing on the "geon" (simple geometric component) approach to object recognition, this study constructed its own stimulus materials and used a categorization task under a priming paradigm, manipulating both the structural information within objects and the relative structural information between objects, to examine the mechanisms underlying three-dimensional object recognition. The results showed that (1) both the degree of separation among an object's component parts and the relative spatial position between objects affected recognition in a hierarchical fashion, supporting the holistic-representation view of viewer-centered theory; and (2) at the no-separation level and at the same position, whole priming was faster than part priming, whereas at the full-separation level and at distant positions, part priming was faster than whole priming, supporting the geon-priority-processing view of object-centered theory. Reconciling the two theories will require further clarification of the second-level sub-hierarchies of the joint "What + Where" dual-pathway representation.

12.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information on real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

13.
When pictures of simple shapes (square, diamond) were seen frontally and obliquely, (1) the shapes with a deeper extent into pictured space underwent more rotation (Goldstein, 1979), which is an apparent turning to keep an orientation toward an observer’s changing position; (2) there was little effect of whether the observer knew the picture surface’s orientation in real space, except that such knowledge could prevent multistability; and (3) depicted picture frames also rotated. In other experiments, figural and frame rotations were independent of each other, and rotation was shown for real frames. The rotation of depthless depictions suggests that at least two rotational factors exist, one that involves the object’s virtual depth and one that does not. The nature of this second factor is discussed. Frame rotation appeared to subtract from object rotation when the two were being compared; this could explain a paradox in picture perception: Depicted orientations often seem little changed over viewpoints, despite (apparent) rotations with respect to real-space coordinates.

14.
Observers perceive objects in the world as stable over space and time, even though the visual experience of those objects is often discontinuous and distorted due to masking, occlusion, camouflage, or noise. How are we able to easily and quickly achieve stable perception in spite of this constantly changing visual input? It was previously shown that observers experience serial dependence in the perception of features and objects, an effect that extends up to 15 seconds back in time. Here, we asked whether the visual system utilizes an object’s prior physical location to inform future position assignments in order to maximize location stability of an object over time. To test this, we presented subjects with small targets at random angular locations relative to central fixation in the peripheral visual field. Subjects reported the perceived location of the target on each trial by adjusting a cursor’s position to match its location. Subjects made consistent errors when reporting the perceived position of the target on the current trial, mislocalizing it toward the position of the target in the preceding two trials (Experiment 1). This pull in position perception occurred even when a response was not required on the previous trial (Experiment 2). In addition, we show that serial dependence in perceived position occurs immediately after stimulus presentation, and it is a fast stabilization mechanism that does not require a delay (Experiment 3). This indicates that serial dependence occurs for position representations and facilitates the stable perception of objects in space. Taken together with previous work, our results show that serial dependence occurs at many stages of visual processing, from initial position assignment to object categorization.
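The "pull" toward recent history can be captured by a toy serial-dependence model (the pull fraction and tuning window below are hypothetical, not the study's estimates): the reported angular location is displaced a fraction of the way toward the previous trial's target, but only when the two are close enough:

```python
def serial_dependence(current_deg, previous_deg, pull=0.1, window_deg=60.0):
    """Toy model: the report is attracted toward the previous target by a
    fixed fraction of their angular separation, within a limited window."""
    # signed angular difference mapped into (-180, 180]
    diff = (previous_deg - current_deg + 180.0) % 360.0 - 180.0
    if abs(diff) > window_deg:
        return current_deg % 360.0  # targets too far apart: no pull
    return (current_deg + pull * diff) % 360.0
```

For example, a target at 100° following a target at 120° would be reported near 102°, mislocalized toward the preceding trial, while a target 180° away would be reported veridically.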

15.
Due to the diffraction of light and other optical distortions of the eye, the image of an object is not exactly the same as the object. When two objects are close enough, their two images overlap so as to form one image, located at a position somewhere between the two original images. This fact is used to explain illusions produced by the crossing of lines, including Poggendorff’s, Zöllner’s, Hering’s, Wundt’s, the Müller-Lyer, and other illusions of this class.

16.
Previous research has demonstrated repeatedly that the mental rotation of human-like objects can be performed more quickly than the mental rotation of abstract objects (a body analogy effect). According to existing accounts, the body analogy effect is mediated by projections of one’s own body axes onto objects (spatial embodiment), and the mental emulation of the observed body posture (motoric embodiment). To test whether motoric embodiment facilitates the mental rotation of human-like objects, we conducted an experiment using a snake-like object that had its own body axes but would be difficult to emulate. Twenty-four participants performed the mental rotation of snake-shaped cubes with or without a snake face as well as human-shaped cubes with or without a human face. Results showed that the presence of a face increased mental rotation speeds for both human-shaped and snake-shaped cubes, confirming both the human-body and snake analogy effects. More importantly, the snake analogy effect was equal to the human-body analogy effect. These findings contradict the motoric embodiment account and suggest that any object that can be regarded as a unit facilitates holistic mental rotation, which in turn leads to improved performance.

17.
The present study investigated whether and how the location of bystander objects is encoded, maintained, and integrated across an eye movement. Bystander objects are objects that remain unfixated directly before and after the saccade for which transsaccadic integration is being examined. Three experiments are reported that examine location coding of bystander objects relative to the future saccade target object, relative to the saccade source object, and relative to other bystander objects. Participants were presented with a random-dot pattern and made a saccade from a central source to a designated saccade target. During this saccade the position of a single bystander was changed on half of the trials and participants had to detect the displacement. Postsaccadically the presence of the target, source, and other bystanders was manipulated. Results indicated that the location of bystander objects could be integrated across a saccade, and that this relied on configurational coding. Furthermore the present data provide evidence for the view that transsaccadic perception of spatial layout is not inevitably tied to the saccade target or the saccade source, that it makes use of objects and object configurations in a flexible manner that is partly governed by the task relevance of the various display items, and that it exploits the incidental configurational structure in the display's layout in order to increase its capacity limits.

18.
Studies show that visual-manual object exploration influences spatial cognition, and specifically mental rotation performance in infancy. The current work with 9-month-old infants investigated which specific exploration procedures (related to crawling experience) support mental rotation performance. In two studies, we examined the effects of two different exploration procedures, manual rotation (Study 1) and haptic scanning (Study 2), on subsequent mental rotation performance. To this end, we constrained infants’ exploration possibilities to only one of the respective procedures, and then tested mental rotation performance using a live experimental set-up based on the task used by Moore and Johnson (2008). Results show that, after manual rotation experience with a target object, crawling infants were able to distinguish between exploration objects and their mirror objects, while non-crawling infants were not (Study 1). Infants who were given prior experience with objects through haptic scans (Study 2) did not discriminate between objects, regardless of their crawling experience. Results indicated that a combination of manual rotations and crawling experience are valuable for building up the internal spatial representation of an object.

19.
The current study examined whether carrying objects in one's hands influenced different parameters associated with independent locomotion. Specifically, 14- and 24-month-olds walked in a straight path under four conditions of object carriage – no object (control), one object carried in one hand (one object-one hand), two objects carried in each of the hands (two objects-two hands), and one object carried in both hands simultaneously (one object-two hands). Although carrying objects failed to influence a variety of kinematic parameters of gait, it did affect children's arm postures, with children adopting less mature arm positions when carrying objects. Finally, arm position was related to walking skill, but only for older children when they were not carrying objects. These findings indicate that although a relation does exist between arm positions and gait parameters, this relation is easily disrupted by carrying loads, even small ones.

20.
Can we imagine how objects look from other viewpoints?
Many psychologists who study cognition believe that perception achieves object-centered representations that make it possible to extract representations of how the object would appear from differing viewpoints. Others believe we can achieve representations of how an object would appear by a process of visualization or mental rotation. We report experiments in which the subject tries to imagine how three-dimensional novel wire objects would appear from positions other than the one they are in. Subjects are unable to perform this task unless they make use of strategies that circumvent the process of visualization. It is suggested that the linear increase in time required to succeed in mental rotation tasks as a function of the angular discrepancy between the figures compared is the result of increasing difficulty rather than of the time required for rotation.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号