Similar Articles
20 similar articles found (search time: 78 ms)
1.
We tested recognition of familiar objects in two different conditions: mono, where stimuli were displayed as flat, 2-D images, and stereo, where objects were displayed with stereoscopic depth information. In three experiments, participants performed a sequential matching task in which an object was rotated by up to 180° between presentations. When the 180° rotation resulted in large changes in depth for object components, the mono condition showed better performance at 180° rotations than at smaller rotations, whereas stereo presentations showed a monotonic increase in response time with rotation. However, 180° rotations that did not result in much depth variation showed similar patterns of results for mono and stereo conditions. These results suggest that in some circumstances, the lack of explicit 3-D information in 2-D images may influence the recognition of familiar objects when they are depicted on flat computer monitors.

2.
Performance is often impaired linearly with increasing angular disparity between two objects in tasks that measure mental rotation or object recognition. But increased angular disparity is often accompanied by changes in the similarity between views of an object, confounding the impact of the two factors in these tasks. We examined separately the effects of angular disparity and image similarity on handedness (to test mental rotation) and identity (to test object recognition) judgments with 3-D novel objects. When similarity was approximately equated, an effect of angular disparity was only found for handedness but not identity judgments. With a fixed angular disparity, performance was better for similar than dissimilar image pairs in both tasks, with a larger effect for identity than handedness judgments. Our results suggest that mental rotation involves mental transformation procedures that depend on angular disparity, but that object recognition is predominantly dependent on the similarity of image features.

3.
Wilson KD, Farah MJ. Perception, 2006, 35(10): 1351-1366
A fundamental but unanswered question about the human visual system concerns the way in which misoriented objects are recognized. One hypothesis maintains that representations of incoming stimuli are transformed via parietally based spatial normalization mechanisms (eg mental rotation) to match view-specific representations in long-term memory. Using fMRI, we tested this hypothesis by directly comparing patterns of brain activity evoked during classic mental rotation and misoriented object recognition involving everyday objects. BOLD activity increased systematically with stimulus rotation within the ventral visual stream during object recognition and within the dorsal visual stream during mental rotation. More specifically, viewpoint-dependent activity was significantly greater in the right superior parietal lobule during mental rotation than during object recognition. In contrast, viewpoint-dependent activity was significantly greater in the right fusiform gyrus during object recognition than during mental rotation. In addition to these differences in viewpoint-dependent activity, object recognition and mental rotation produced distinct patterns of brain activity, independent of stimulus rotation: object recognition resulted in greater overall activity within ventral stream visual areas and mental rotation resulted in greater overall activity within dorsal stream visual areas. The present results are inconsistent with the hypothesis that misoriented object recognition is mediated by structures within the parietal lobe that are known to be involved in mental rotation.

4.
Eighty-four participants mentally rotated meaningful and meaningless objects. Within each type of object, half were simple and half were complex; the complexity was the same across the meaningful and meaningless objects. The patterns of errors were examined as a function of the type of stimuli (meaningful vs. meaningless), complexity, and angle of rotation. The data for the meaningful objects showed steeper slopes of rotation for complex objects than for simple objects. In contrast, the simple and complex meaningless objects showed comparable increases in error rates as a function of angle of rotation. Furthermore, the slopes remained comparable after pretraining that increased familiarity with the objects. The results are discussed in terms of underlying representations of meaningful and meaningless objects and their implications for mental transformations. The data are consistent with a piecemeal rotation of the meaningful stimuli and a holistic rotation of the meaningless stimuli.

5.
When deciding if a rotated object would face to the left or to the right, if imagined at the upright, mental rotation is typically assumed to be carried out through the shortest angular distance to the upright prior to determining the direction of facing. However, the response time functions for left- and right-facing objects are oppositely asymmetric, which is not consistent with the standard explanation. Using Searle and Hamm’s individual differences adaptation of Kung and Hamm’s Mixture Model, the current study compares the predicted response time functions derived when assuming that objects are rotated through the shortest route to the upright with the predicted response time functions derived when assuming that objects are rotated in the direction they face. The latter model provides a better fit to the majority of the individual data. This allows us to conclude that, when deciding if rotated objects would face to the left or to the right if imagined at the upright, mental rotation is carried out in the direction that the objects face and not necessarily in the shortest direction to the upright. By comparing results for mobile and immobile object sets we can also conclude that semantic information regarding the mobility of an object does not appear to influence the speed of mental rotation, but it does appear to influence pre-rotation processes and the likelihood of employing a mental rotation strategy.
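The contrast between the two rotation models can be made concrete with a toy calculation. The sketch below is a minimal illustration, not the authors' actual Mixture Model; the orientation convention and the assumption that left-facing objects are rotated counterclockwise are ours. It shows why rotating in the facing direction predicts oppositely asymmetric cost functions for left- and right-facing objects, whereas the shortest-route model predicts one symmetric function for both.

```python
def shortest_route(theta):
    """Angular distance to upright if rotation always takes the shorter way."""
    t = theta % 360
    return min(t, 360 - t)

def facing_direction(theta, faces_left):
    """Angular distance to upright if the object is rotated in the direction
    it faces (hypothetical convention: left-facing -> counterclockwise,
    right-facing -> clockwise; theta is measured clockwise from upright)."""
    t = theta % 360
    return t if faces_left else (360 - t) % 360

# At 90 deg clockwise the shortest-route model predicts the same cost for both
# facings (90 deg), but the facing-direction model predicts 90 deg for a
# left-facing object and 270 deg for a right-facing one: mirror-image
# asymmetry, matching the oppositely asymmetric response time functions.
```

Under the usual linear assumption (response time grows with angular distance traversed), these distances translate directly into the two predicted response time functions the study compares.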

6.
The effects of picture-plane rotations on times taken to name familiar objects (RTs) may reflect a process of mental rotation to stored viewpoint-specific representations: the rotate-to-recognize hypothesis. Alternatively, mental rotation might be used after stored object representations are activated by a misoriented stimulus in order to verify a weak or distorted shape percept: the double-checking hypothesis. We tested these two accounts of rotation effects in object recognition by having subjects verify the orientations (to within 90 degrees) and basic-level names of 14-msec, backward-masked depictions of common objects. The stimulus-mask interval (SOA) varied from 14 to 41 msec, permitting interpolation of the SOA required for 75% accuracy (SOAc). Whereas the SOAc to verify orientation increased with rotation up to 180 degrees, the SOAc to verify identity was briefer and asymptoted at approximately 60 degrees. We therefore reject the rotate-to-recognize hypothesis, which implies that SOAc should increase steadily with rotation in both tasks. Instead, we suggest that upright and near-upright stimuli are matched by a fast direct process and that misoriented stimuli are matched at a featural level by a slightly slower view-independent process. We also suggest that rotation effects on RTs reflect a postrecognition stage of orientation verification: the rotate-to-orient hypothesis, a version of double-checking that also explains the well-known reduction in orientation effects on RTs when naming repeated objects.
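The SOAc measure used here, the stimulus-mask SOA yielding 75% accuracy, can be obtained by linear interpolation between tested SOAs. A minimal sketch follows; the data points are invented for illustration, and the abstract does not specify the authors' exact interpolation procedure.

```python
def soa_for_criterion(soas, accuracies, criterion=0.75):
    """Linearly interpolate the SOA at which accuracy reaches `criterion`.

    Assumes `soas` is ascending and accuracy is non-decreasing with SOA.
    """
    points = list(zip(soas, accuracies))
    for (s0, a0), (s1, a1) in zip(points, points[1:]):
        if a0 <= criterion <= a1:
            # linear interpolation between the two bracketing points
            return s0 + (criterion - a0) * (s1 - s0) / (a1 - a0)
    raise ValueError("criterion accuracy not bracketed by the data")

# Invented example: accuracy rises from 50% at 14 ms to 90% at 41 ms.
soas = [14, 21, 28, 35, 41]
accuracies = [0.50, 0.70, 0.80, 0.86, 0.90]
soac = soa_for_criterion(soas, accuracies)  # 75% falls between 21 and 28 ms
```

With these made-up points, the 75% criterion falls between the 21 ms (70%) and 28 ms (80%) conditions, so the interpolated SOAc is 24.5 ms.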

7.
Subjects either named rotated objects or decided whether the objects would face left or right if they were upright. Response time in the left-right task was influenced by a rotation aftereffect or by the physical rotation of the object, which is consistent with the view that the objects were mentally rotated to the upright and that, depending on its direction, the perceived rotary motion of the object either speeded or slowed mental rotation. Perceived rotary motion did not influence naming time, which suggests that the identification of rotated objects does not involve mental rotation.

8.
Three experiments involving different angular orientations of tactual shapes were performed. In experiment 1 subjects were timed as they made 'same-different' judgments about two successive rotated shapes. Results showed that no rotation effect is obtained, i.e., reaction times and error percentages do not increase linearly with rotation angle. The same negative results were found in experiment 2, in which subjects were similarly timed while they made mirror-image discriminations. In experiment 3 a single-stimulus paradigm was used and subjects were asked to decide if a rotated stimulus was a 'normal' or 'reversed' version. Reaction times increased linearly with angular departure from the vertical. Therefore, for tactual stimuli too, this study confirms previous results, which suggest that a mental rotation strategy only occurs if it is facilitated by both type of task and type of stimulus. Results also show a significant difference between hands, and between hands and type of response. Implied hemispheric differences are discussed.

9.
Previous work revealed that mental rotation is not purely inserted into a same-different discrimination task. Instead, response time (RT) is slowed to upright stimuli in blocks containing rotated stimuli compared to RT to the same upright stimuli in pure upright blocks. This interference effect is a result of maintaining readiness for mental rotation. In two experiments we investigated previous evidence that these costs depend upon distinct sub-processes for children and for adults. In Experiment 1, the maintaining costs turned out to be independent of the visual quality of the stimulus for adults but not so for children. Experiment 2 revealed that the maintaining costs were greatly reduced for adults when they performed mental rotation as a go-no-go task, but not so for children. Taken together, both experiments provide evidence that whereas perceptual processes seem to be important for school-age children to maintain readiness for mental rotation, response selection is relevant for adults.

10.
Haptic recognition of familiar objects by the early blind, the late blind, and the sighted was investigated with two-dimensional (2-D) and three-dimensional (3-D) stimuli produced by small tactor-pins. The 2-D stimulus was an outline of an object that was depicted by raising tactor-pins to 1.5 mm. The 3-D stimulus was a relief that was produced by raising the tactors up to 10 mm, corresponding to the height of the object. Mean recognition times for correct answers to the 3-D stimuli were faster than those for the 2-D stimuli, in all three subject groups. No statistically significant differences in percentage of correct responses between the 2-D and the 3-D stimuli were found for the late-blind and sighted groups, but the early-blind group demonstrated a significant difference. In addition, haptic legibility (the quality of the depiction of the object, regardless of whether or not the stimulus was understood) was measured. The haptic legibility of the 3-D stimuli was significantly higher than that of the 2-D stimuli for all the groups. These results suggest that 3-D presentation seems to promise a way to overcome the limitations of 2-D graphic display.

11.
Jolicoeur (1985, Memory & Cognition, 13, 289-303) found a linear increase in the latency to name line drawings of objects rotated (0 degrees to 120 degrees) from the upright (0 degrees) in the initial trial block. This effect was much shallower in later blocks. He proposed that the initial effect may indicate that mental rotation is the default process for recognising rotated objects, and that the decrease in this effect, seen with practice, may reflect the increased use of learned orientation-invariant features. Initially, we were interested in whether object-colour associations that may be learned during the initial block could account for the reduced latency to name rotated objects seen in later blocks. In experiment 1 we used full-cue colour images of objects that depicted colour and other surface cues. Surprisingly, given that Jolicoeur's findings were replicated several times with line drawings, we found that even the initial linear trend in naming latency was shallow. We replicated this result in follow-up experiments. In contrast, when we used less-realistic depictions of the same objects that had fewer visual cues (i.e., line drawings, coloured drawings, greyscale images), the results were comparable to those of Jolicoeur. Also, the initial linear trends were steeper for these depictions than for full-cue colour images. The results suggest that, when multiple surface cues are available in the image, mental rotation may not be the default recognition process.

12.
Mental imagery and the third dimension
What sort of medium underlies imagery for three-dimensional scenes? In the present investigation, the time subjects took to scan between objects in a mental image was used to infer the sorts of geometric information that images preserve. Subjects studied an open box in which five objects were suspended, and learned to imagine this display with their eyes closed. In the first experiment, subjects scanned by tracking an imaginary point moving in a straight line between the imagined objects. Scanning times increased linearly with increasing distance between objects in three dimensions. Therefore metric 3-D information must be preserved in images, and images cannot simply be 2-D "snapshots." In a second experiment, subjects scanned across the image by "sighting" objects through an imaginary rifle sight. Here scanning times were found to increase linearly with the two-dimensional separations between objects as they appeared from the original viewing angle. Therefore metric 2-D distance information in the original perspective view must be preserved in images, and images cannot simply be 3-D "scale-models" that are accessed from any and all directions at once. In a third experiment, subjects mentally rotated the display 90 degrees and scanned between objects as they appeared in this new perspective view by tracking an imaginary rifle sight, as before. Scanning times increased linearly with the two-dimensional separations between objects as they would appear from the new relative viewing perspective. Therefore images can display metric 2-D distance information in a perspective view never actually experienced, so mental images cannot simply be "snapshot plus scale model" pairs. These results can be explained by a model in which the three-dimensional structure of objects is encoded in long-term memory in 3-D object-centered coordinate systems. When these objects are imagined, this information is then mapped onto a single 2-D "surface display" in which the perspective properties specific to a given viewing angle can be depicted. In a set of perceptual control experiments, subjects scanned a visible display by (a) simply moving their eyes from one object to another, (b) sweeping an imaginary rifle sight over the display, or (c) tracking an imaginary point moving from one object to another. Eye-movement times varied linearly with 2-D interobject distance, as did time to scan with an imaginary rifle sight; time to track a point varied independently with the 3-D and 2-D interobject distances. These results are compared with the analogous image scanning results to argue that imagery and perception share some representational structures but that mental image scanning is a process distinct from eye movements or eye-movement commands.

13.
In three experiments, we independently manipulated the angular disparity between objects to be compared and the angular distance between the central axis of the objects and the vertical axis in a mental rotation paradigm. There was a linear increase in reaction times that was attributable to both factors. This result held whether the objects were rotated (with respect to each other and to the upright) within the frontal-parallel plane (Experiment 1) or in depth (Experiment 2), although the effects of both factors were greater for objects rotated in depth than for objects rotated within the frontal-parallel plane (Experiment 3). In addition, the factors interacted when the subjects had to search for matching ends of the figures (Experiments 1 and 2), but they were additive when the ends that matched were evident (Experiment 3). These data may be interpreted to mean that subjects normalize or reference an object with respect to the vertical upright as well as compute the rotational transformations used to determine shape identity.

14.
15.
We examined the effects of plane rotation, task, and visual complexity on the recognition of familiar and chimeric objects. The effects of rotation, with response times increasing linearly and monotonically with rotation from the upright, were equivalent for tasks requiring different degrees of visual differentiation of the target from contrasting stimuli--namely, (1) deciding whether the stimulus was living or nonliving (semantic classification), (2) deciding whether the stimulus was an object or a nonobject (object decision), and (3) naming. The effects of complexity, with shorter response times to more complex stimuli, were most apparent in semantic classification and object decision and were additive with the effects of rotation. We discuss the implications of these results for theories of the relationship between the process of normalization and the determining of object identity.

16.
Most models of object recognition and mental rotation are based on the matching of an object's 2-D view with representations of the object stored in memory. They propose that a time-consuming normalization process compensates for any difference in viewpoint between the 2-D percept and the stored representation. Our experiment shows that such normalization is less time consuming when it has to compensate for disorientations around the vertical than around the horizontal axis of rotation. By decoupling the different possible reference frames, we demonstrate that this anisotropy of the normalization process is defined not with respect to the retinal frame of reference, but, rather, according to the gravitational or the visuocontextual frame of reference. Our results suggest that the visual system may call upon both the gravitational vertical and the visuocontext to serve as the frame of reference with respect to which 3-D objects are gauged in internal object transformations.

17.
An orientation-matching task, based on a mental rotation paradigm, was used to investigate how participants manually rotated a Shepard-Metzler object in the real world and in an immersive virtual environment (VE). Participants performed manual rotation more quickly and efficiently than virtual rotation, but the general pattern of results was similar for both. The rate of rotation increased with the starting angle between the stimuli, meaning that, in common with many motor tasks, an amplitude-based relationship such as P. M. Fitts' (1954) law is applicable. When rotation was inefficient (i.e., not by the shortest path), it was often because participants incorrectly perceived the orientation of one of the objects, and this happened more in the VE than in the real world. Thus, VEs allow objects to be manipulated naturally to a limited extent, indicating the need for timing-scale factors to be used for applications such as method-time-motion studies of manufacturing operations.
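The amplitude-based relationship referred to here can be illustrated with the standard Fitts' law form, MT = a + b * log2(2A/W), treating the starting angle as the movement amplitude A and an angular tolerance as the target width W. The sketch below is an illustration under assumed values: the constants a and b and the tolerance are invented, since the paper's fitted parameters are not given in the abstract.

```python
import math

def rotation_time(amplitude_deg, tolerance_deg=10.0, a=0.3, b=0.2):
    """Predicted manual-rotation time (s) from Fitts' law,
    MT = a + b * log2(2A / W), with the starting angle as amplitude A and
    an angular tolerance as target width W (all constants assumed)."""
    return a + b * math.log2(2 * amplitude_deg / tolerance_deg)

# Larger starting angles predict longer movement times, but time grows
# logarithmically with amplitude, so the average rotation rate
# (amplitude / time) increases with the starting angle, as reported above.
```

For example, doubling the amplitude adds a fixed b seconds to the predicted time, so amplitude per unit time rises with the starting angle.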

18.
Warrington EK, James M. Perception, 1986, 15(3): 355-366
An investigation is reported of the ability of normal subjects and patients with right-hemisphere lesions to identify 3-D shadow images of common objects from different viewpoints. Object recognition thresholds were measured in terms of angle of rotation (through the horizontal or vertical axis) required for correct identification. Effects of axial rotation were very variable and no evidence was found of a typical recognition threshold function relating angle of view to object identification. Although the right-hemisphere-lesion group was consistently and significantly worse than the control group, no qualitative differences between the groups were observed. The findings are discussed in relation to Marr's theory that the geometry of a 3-D shape is derived from axial information, and it is argued that the data reported are more consistent with a distinctive-features model of object recognition.

19.
Object recognition performance varies with the viewpoint from which an object is seen, and this viewpoint dependence has prompted extensive debate among researchers about the mechanisms of object recognition. Some researchers hold that mental rotation is the cause of viewpoint-dependent recognition, whereas an opposing view holds that object recognition involves no mental rotation process. Both positions draw support from behavioral studies and from studies of neural mechanisms. However, the existing behavioral evidence is indirect and therefore not compelling. Further research should directly manipulate the factors that affect mental rotation and object recognition, and should combine behavioral studies with methods that permit real-time monitoring, such as eye tracking and brain imaging.

20.
Motion in the mind’s eye: Comparing mental and visual rotation
Mental rotation is among the most widely studied visuospatial skills in humans. The processes involved in mental rotation have been described as analogous to seeing an object physically rotate. We used functional magnetic resonance imaging of the whole brain and localized motion-sensitive hV5/MT+ to compare brain activation for stimuli when they were mentally or visually rotated. The results provided clear evidence for activation in hV5/MT+ during both mental and visual rotation of figures, with increased activation for larger rotations. Combined with the overall similarities between mental and visual rotation in this study, these results suggest that mental rotation recruits many of the same neural substrates as observing motion.
