Similar Literature
20 similar documents found (search time: 46 ms)
1.
The effects of picture-plane rotations on times taken to name familiar objects (RTs) may reflect a process of mental rotation to stored viewpoint-specific representations: the rotate-to-recognize hypothesis. Alternatively, mental rotation might be used after stored object representations are activated by a misoriented stimulus in order to verify a weak or distorted shape percept: the double-checking hypothesis. We tested these two accounts of rotation effects in object recognition by having subjects verify the orientations (to within 90 degrees) and basic-level names of 14-msec, backward-masked depictions of common objects. The stimulus-mask interval (SOA) varied from 14 to 41 msec, permitting interpolation of the SOA required for 75% accuracy (SOAc). Whereas the SOAc to verify orientation increased with rotation up to 180 degrees, the SOAc to verify identity was briefer and asymptoted at approximately 60 degrees. We therefore reject the rotate-to-recognize hypothesis, which implies that SOAc should increase steadily with rotation in both tasks. Instead, we suggest that upright and near-upright stimuli are matched by a fast direct process and that misoriented stimuli are matched at a featural level by a slightly slower view-independent process. We also suggest that rotation effects on RTs reflect a postrecognition stage of orientation verification: the rotate-to-orient hypothesis, a version of double-checking that also explains the well-known reduction in orientation effects on RTs when naming repeated objects.
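For readers unfamiliar with the criterion-SOA measure used in this abstract, the Python sketch below shows one simple way an SOAc can be obtained by linear interpolation of accuracy measured at a few stimulus-mask intervals. The SOA values and accuracies are hypothetical illustrations, not data from the study.

```python
import numpy as np

def interpolate_soac(soas_ms, accuracies, criterion=0.75):
    """Linearly interpolate the SOA at which accuracy first reaches `criterion`.

    soas_ms: stimulus-mask intervals in ms, in ascending order.
    accuracies: proportion correct observed at each SOA.
    Returns the interpolated SOA, or None if the criterion is never reached.
    """
    soas = np.asarray(soas_ms, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    for i in range(1, len(soas)):
        if acc[i - 1] < criterion <= acc[i]:
            # Linear interpolation between the two bracketing points.
            frac = (criterion - acc[i - 1]) / (acc[i] - acc[i - 1])
            return soas[i - 1] + frac * (soas[i] - soas[i - 1])
    return None

# Hypothetical accuracies at SOAs between 14 and 41 ms for one condition.
soas = [14, 23, 32, 41]
accuracy = [0.55, 0.68, 0.82, 0.90]
print(f"SOAc (75% criterion): {interpolate_soac(soas, accuracy):.1f} ms")
```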

2.
We used repetition blindness (RB) as a measure of object recognition and compared the pattern of RB obtained for objects with a well-established upright orientation (mono-oriented objects) and those without a usual upright orientation (polyoriented objects), when the critical objects were either in identical orientations or differed by 30 degrees, 60 degrees, 90 degrees, or 180 degrees. Overall, we found robust RB despite differences in orientation, consistent with the idea that object recognition, as indexed by RB, is largely independent of orientation. However, whereas for polyoriented objects RB was obtained in all orientation conditions, for mono-oriented objects there was no RB between upright and upside-down versions of the stimuli. These findings suggest that the usual orientation of an object, when it exists, is stored in memory and can facilitate orientation processing when the principal axis of a viewed object is aligned with the stored axis orientation. This, in turn, allows for more rapid and successful construction of distinct episodic representations of an object, thus alleviating RB.

3.
Two experiments examined the effects of plane rotation on the recognition of briefly displayed pictures of familiar objects, using a picture-word verification task. Mirroring the results of earlier picture naming studies (Jolicoeur, 1985; Jolicoeur & Milliken, 1989), plane rotation away from a canonical upright orientation reduced the efficiency of recognition, although in contrast to the results from picture naming studies, the rotation effects were not reduced with experience with the stimuli. However, the rotation effects were influenced by the visual similarity of the distractor objects to the picture of the object presented, with greater orientation sensitivity being observed when visually similar distractors were presented. We suggest that subjects use orientation-sensitive representations to recognize objects in both the present unspeeded verification and in the earlier speeded naming tests of picture identification.

4.
The purpose of the present investigation was to determine whether the orientation between an object's parts is coded categorically for object recognition and physical discrimination. In three experiments, line drawings of novel objects in which the relative orientation of object parts varied by steps of 30 degrees were used. Participants performed either an object recognition task, in which they had to determine whether two objects were composed of the same set of parts, or a physical discrimination task, in which they had to determine whether two objects were physically identical. For object recognition, participants found it more difficult to compare the 0 degrees and 30 degrees versions and the 90 degrees and 60 degrees versions of an object than to compare the 30 degrees and 60 degrees versions, but only at an extended interstimulus interval (ISI). Categorical coding was also found in the physical discrimination task. These results suggest that relative orientation is coded categorically for both object recognition and physical discrimination, although metric information appears to be coded as well, especially at brief ISIs.

5.
We examined the effects of plane rotation, task, and visual complexity on the recognition of familiar and chimeric objects. The effects of rotation, with response times increasing linearly and monotonically with rotation from the upright, were equivalent for tasks requiring different degrees of visual differentiation of the target from contrasting stimuli, namely, (1) deciding whether the stimulus was living or nonliving (semantic classification), (2) deciding whether the stimulus was an object or a nonobject (object decision), and (3) naming. The effects of complexity, with shorter response times to more complex stimuli, were most apparent in semantic classification and object decision and were additive with the effects of rotation. We discuss the implications of these results for theories of the relationship between the process of normalization and the determining of object identity.

6.
Dux PE  Harris IM 《Cognition》2007,104(1):47-58
Do the viewpoint costs incurred when naming rotated familiar objects arise during initial identification or during consolidation? To answer this question, we employed an attentional blink (AB) task where two target objects appeared amongst a rapid stream of distractor objects. Our assumption was that while both targets and distractors undergo initial identification, only targets are consolidated in a form that allows overt report. We presented line drawings of objects with a usual upright canonical orientation, and separately manipulated the orientation of targets and distractors. In two experiments, targets were defined by colour, whereas in a third experiment they were defined by semantic category. Target 1 orientation influenced the AB, with objects rotated by 90 degrees causing a larger second target deficit than upright and upside-down objects. However, distractor orientation did not affect the magnitude of the second target deficit, regardless of whether targets were defined by colour or semantic category. Taken together, these findings suggest that the visual representations involved in the preliminary recognition of familiar objects are viewpoint-invariant and that viewpoint costs are incurred when these objects are consolidated for report.

7.
In three experiments, we independently manipulated the angular disparity between objects to be compared and the angular distance between the central axis of the objects and the vertical axis in a mental rotation paradigm. There was a linear increase in reaction times that was attributable to both factors. This result held whether the objects were rotated (with respect to each other and to the upright) within the frontal-parallel plane (Experiment 1) or in depth (Experiment 2), although the effects of both factors were greater for objects rotated in depth than for objects rotated within the frontal-parallel plane (Experiment 3). In addition, the factors interacted when the subjects had to search for matching ends of the figures (Experiments 1 and 2), but they were additive when the ends that matched were evident (Experiment 3). These data may be interpreted to mean that subjects normalize or reference an object with respect to the vertical upright as well as compute the rotational transformations used to determine shape identity.
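The additive contribution of the two angular factors reported in this abstract can be expressed as a simple linear model of response time, RT = b0 + b1 * disparity + b2 * tilt. The Python sketch below fits such a model by ordinary least squares; the angles and response times are invented for illustration and are not the authors' data.

```python
import numpy as np

# Hypothetical trial data: angular disparity between the two objects (deg),
# angular distance of the objects' axis from vertical (deg), and RT (ms).
disparity = np.array([0, 60, 120, 180, 0, 60, 120, 180], dtype=float)
tilt      = np.array([0, 0, 0, 0, 90, 90, 90, 90], dtype=float)
rt        = np.array([620, 810, 1015, 1190, 705, 900, 1090, 1295], dtype=float)

# Additive linear model: RT = b0 + b1 * disparity + b2 * tilt
X = np.column_stack([np.ones_like(disparity), disparity, tilt])
b, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(f"intercept = {b[0]:.0f} ms, "
      f"disparity slope = {b[1]:.2f} ms/deg, "
      f"tilt slope = {b[2]:.2f} ms/deg")
```

An interaction term (disparity * tilt) could be added as a fourth column to test the additivity versus interaction pattern described across the experiments.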

8.
Effects of stimulus orientation across trial blocks and the spatial reference frame were investigated with a task in which Ss, with their head upright or tilted, judged a dot to be near the top or the bottom of rotated line drawings of objects. Objects used in this task were also named. Response times from the first block of trials increased linearly for objects rotated from 0 degrees to 120 degrees from the upright. Across blocks, orientation effects diminished for naming but remained the same for top-bottom discriminations. Practice with top-bottom discriminations diminished orientation effects when the same objects were subsequently named. The spatial reference frame for top-bottom discriminations was midway between retinal and environmental coordinates. Specifying the location of object features is of greater importance for top-bottom discriminations than for naming and underlies orientation effects in these tasks.

9.
The proposal that identification of inverted objects is accomplished by either a relatively slow rotation in the picture plane or a faster rotation in the depth plane about the horizontal axis was tested. In Experiment 1, subjects decided whether objects at 0° or 180° corresponded to previously learned normal views of the upright objects, or were mirror images. Instructions to mentally flip an inverted object in the depth plane to the upright produced faster decision times than did instructions to mentally spin the object in the picture plane. In Experiment 2, the effects of orientation were compared across an object-naming task and a normal-mirror task for six orientations from 0° to 300°. In the normal-mirror task, objects at 180° were cued for rotation in the picture plane or in the depth plane in equal numbers. The naming function for one group of subjects did not differ from the normal-mirror function where inverted objects had been mentally rotated to the upright. For both functions, response time (RT) increased linearly from 0° to 180° and the slopes did not differ. The naming function for a second group of subjects did not differ from the normal-mirror function where inverted objects had been mentally flipped to the upright. For both functions, RT increased linearly at a similar rate from 0° to 120°, but decreased from 120° to 180°. The results are discussed in terms of theories of orientation-specific identification.

10.
Subjects either named rotated objects or decided whether the objects would face left or right if they were upright. Response time in the left-right task was influenced by a rotation aftereffect or by the physical rotation of the object, which is consistent with the view that the objects were mentally rotated to the upright and that, depending on its direction, the perceived rotary motion of the object either speeded or slowed mental rotation. Perceived rotary motion did not influence naming time, which suggests that the identification of rotated objects does not involve mental rotation.

11.
李莹  商玲玲 《心理科学》2017,40(1):29-36
The present study used event-related potentials (ERPs) together with the classic sentence-picture matching paradigm to examine how the typical versus atypical colour of an object implied by a sentence is processed during sentence comprehension, as reflected in brain electrical activity. Participants first read a sentence and then judged whether the object shown in a subsequent picture had been mentioned in the sentence; the object colour implied by the sentence was either the typical colour or an atypical colour of the critical object. The results showed that, for sentences implying the typical colour, the sentence-picture mismatch condition elicited a larger N400 effect than the match condition, whereas for sentences implying an atypical colour the N400 difference between the two conditions was not significant. These findings indicate that readers mentally simulate implied object colour in real time during Chinese sentence comprehension, and that the typicality of the implied colour is one of the key factors determining whether matching or mismatching information is facilitated.

12.
Many previous studies of object recognition have found view-dependent recognition performance when view changes are produced by rotating objects relative to a stationary viewing position. However, the assumption that an object rotation is equivalent to an observer viewpoint change ignores the potential contribution of extraretinal information that accompanies observer movement. In four experiments, we investigated the role of extraretinal information on real-world object recognition. As in previous studies focusing on the recognition of spatial layouts across view changes, observers performed better in an old/new object recognition task when view changes were caused by viewer movement than when they were caused by object rotation. This difference between viewpoint and orientation changes was due not to the visual background, but to the extraretinal information available during real observer movements. Models of object recognition need to consider other information available to an observer in addition to the retinal projection in order to fully understand object recognition in the real world.

13.
In a mental rotation task, participants must determine whether two stimuli match when one undergoes a rotation in 3-D space relative to the other. The key evidence for mental rotation is the finding of a linear increase in response times as objects are rotated farther apart. This signature increase in response times is also found in recognition of rotated objects, which has led many theorists to postulate mental rotation as a key transformational procedure in object recognition. We compared mental rotation and object recognition in tasks that used the same stimuli and presentation conditions and found that, whereas mental rotation costs increased relatively linearly with rotation, object recognition costs increased only over small rotations. Taken in conjunction with a recent brain imaging study, this dissociation in behavioral performance suggests that object recognition is based on matching of image features rather than on 3-D mental transformations.
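The dissociation described in this abstract amounts to a difference in the shape of the cost function: roughly linear for mental rotation, but flattening beyond small rotations for object recognition. The Python sketch below, using hypothetical condition means rather than the reported data, fits a straight line to each cost function so that the contrast appears as a difference in slope and in the quality of the linear fit.

```python
import numpy as np

angles = np.array([0, 45, 90, 135, 180], dtype=float)
# Hypothetical costs (ms relative to the 0-degree condition):
mental_rotation_cost = np.array([0, 180, 360, 540, 720], dtype=float)   # roughly linear
recognition_cost     = np.array([0, 60, 70, 72, 73], dtype=float)       # saturates early

def linear_fit_r2(x, y):
    """Slope and R^2 of an ordinary least-squares line y = a + b*x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    r2 = 1 - resid.var() / y.var()
    return b, r2

for name, cost in [("mental rotation", mental_rotation_cost),
                   ("object recognition", recognition_cost)]:
    slope, r2 = linear_fit_r2(angles, cost)
    print(f"{name}: slope = {slope:.2f} ms/deg, linear R^2 = {r2:.2f}")
```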

14.
We examined the effects of interstimulus interval (ISI) and orientation changes on the haptic recognition of novel objects, using a sequential shape-matching task. The stimuli consisted of 36 wedge-shaped plastic objects that varied along two shape dimensions (hole/bump and dip/ridge). Two objects were presented at either the same orientation or a different orientation, separated by either a short (3-sec) ISI or a long (15-sec) ISI. In separate conditions, ISI was blocked or randomly intermixed. Participants ignored orientation changes and matched on shape alone. Although performance was better in the mixed condition, there were no other differences between conditions. There was no decline in performance at the long ISI. There were similar, marginally significant benefits to same-orientation matching for short and long ISIs. The results suggest that the perceptual object representations activated from haptic inputs are both stable, being maintained for at least 15 sec, and orientation sensitive.

15.
In four experiments, we examined the haptic recognition of 3-D objects. In Experiment 1, blindfolded participants named everyday objects presented haptically in two blocks. There was significant priming of naming, but no cost of an object changing orientation between blocks. However, typical orientations of objects were recognized more quickly than nonstandard orientations. In Experiment 2, participants accurately performed an unannounced test of memory for orientation. The lack of orientation-specific priming in Experiment 1, therefore, was not because participants could not remember the orientation at which they had first felt an object. In Experiment 3, we examined haptic naming of objects that were primed either haptically or visually. Haptic priming was greater than visual priming, although significant cross-modal priming was also observed. In Experiment 4, we tested recognition memory for familiar and unfamiliar objects using an old-new recognition task. Objects were recognized best when they were presented in the same orientation in both blocks, suggesting that haptic object recognition is orientation sensitive. Photographs of the unfamiliar objects may be downloaded from www.psychonomic.org/archive.

16.
17.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different, regardless of whether the view change of the object was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by participants' locomotion (object stationary) than at those caused by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, but that such facilitation does not eliminate viewpoint costs in visual object processing.

18.
Many theorists have postulated that axes of elongation and/or symmetry play an important role in the recognition of objects. In this paper, evidence is presented that challenges this claim, drawn from independent assessments of the effects of axes of elongation or symmetry on the time to name rotated line drawings of common objects. This conclusion was further supported in a stronger test in which both of these variables were orthogonally controlled, the aspect ratio of elongation was manipulated, and only objects that were completely geometrically symmetrical or asymmetrical were used. In all the experiments, objects were named for several blocks to determine the influence of these variables on effects of orientation with practice. Symmetry was found to diminish the effects of orientation after practice in naming the object set, and the effects of the most extreme orientation tested (120 degrees from upright) were diminished when both axes defined the same orientation, relative to when they defined different orientations. Contrary to many theories, these findings relegate the axes of symmetry and elongation to relatively minor roles during object identification.

19.
Lawson R  Bülthoff HH  Dumbell S 《Perception》2003,32(12):1465-1498
Four experiments are reported in which pictures of different morphs of novel, complex, 3-D objects, similar to objects which we must identify in the real world, were presented. We investigated how changes of viewpoint influence our ability to discriminate between morphs. View changes had a powerful effect on performance in picture-picture matching tasks when similarly shaped morphs had to be discriminated. Shape changes were detected faster and more accurately when morphs were depicted from the same rather than different views. In contrast, view change had no effect when dissimilarly shaped morphs had to be discriminated. This interaction between the effects of view change and shape change was found both for simultaneous stimulus presentation and for sequential presentation with interstimulus intervals up to 3600 ms. The interaction was found after repeated presentations of the stimuli before the matching task and after practice at the matching task as well as after no such pre-exposure to the stimuli or to the task. The results demonstrate the difficulty in activating abstract, view-insensitive representations to help to achieve object constancy, even when matching over long interstimulus intervals or after stimuli have already been seen many times.

20.
If configurations of objects are presented in an S1-S2 matching task for the identity of objects, a spatial mismatch effect occurs. Changing the (irrelevant) spatial layout lengthens response times. We investigated what causes this effect. We observed a reliable mismatch effect that was not influenced by a secondary task during maintenance. Neither articulatory suppression (Experiment 1), nor unattended (Experiments 2 and 6) or attended visual material (Experiment 3) reduced the effect, and this was independent of the length of the retention interval (Experiment 6). The effect was also rather independent of the visual appearance of the local elements. It was of similar size with color patches (Experiment 4) and with completely different surface information when testing was cross-modal (Experiment 5), and the nameability of the global configuration was not relevant (Experiments 6 and 7). In contrast, the figurative similarity of the configurations of S1 and S2 systematically influenced the size of the spatial mismatch effect (Experiment 7). We conclude that the spatial mismatch effect is caused by a mismatch of the global shape of the configuration stored together with the objects of S1 and not by a mismatch of templates of perceptual records maintained in a visual cache.

