Similar Literature
 20 similar documents found (search time: 31 ms)
1.
Wilson KD  Farah MJ 《Perception》2006,35(10):1351-1366
A fundamental but unanswered question about the human visual system concerns the way in which misoriented objects are recognized. One hypothesis maintains that representations of incoming stimuli are transformed via parietally based spatial normalization mechanisms (e.g. mental rotation) to match view-specific representations in long-term memory. Using fMRI, we tested this hypothesis by directly comparing patterns of brain activity evoked during classic mental rotation and misoriented object recognition involving everyday objects. BOLD activity increased systematically with stimulus rotation within the ventral visual stream during object recognition and within the dorsal visual stream during mental rotation. More specifically, viewpoint-dependent activity was significantly greater in the right superior parietal lobule during mental rotation than during object recognition. In contrast, viewpoint-dependent activity was significantly greater in the right fusiform gyrus during object recognition than during mental rotation. In addition to these differences in viewpoint-dependent activity, object recognition and mental rotation produced distinct patterns of brain activity, independent of stimulus rotation: object recognition resulted in greater overall activity within ventral stream visual areas and mental rotation resulted in greater overall activity within dorsal stream visual areas. The present results are inconsistent with the hypothesis that misoriented object recognition is mediated by structures within the parietal lobe that are known to be involved in mental rotation.

2.
Leek EC 《Perception》1998,27(7):803-816
How does the visual system recognise stimuli presented at different orientations? According to the multiple-views hypothesis, misoriented objects are matched to one of several orientation-specific representations of the same objects stored in long-term memory. Much of the evidence for this hypothesis comes from the observation of group mean orientation effects in recognition memory tasks showing that the time taken to identify objects increases as a function of the angular distance between the orientation of the stimulus and its nearest familiar orientation. The aim in this paper is to examine the validity of this interpretation of group mean orientation effects. In particular, it is argued that analyses based on group performance averages that appear consistent with the multiple-views hypothesis may, under certain circumstances, obscure a different theoretically relevant underlying pattern of results. This problem is examined by using hypothetical data and through the detailed analysis of the results from an experiment based on a recognition memory task used in several previous studies. Although a pattern of results that is consistent with the multiple-views hypothesis was observed in both the group mean performance and the underlying data, it is argued that the potential limitations of analyses based solely on group performance averages must be considered in future studies that use orientation effects to make inferences about the kinds of shape representations that mediate visual recognition.
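The averaging artifact at issue can be sketched with invented numbers (these are illustrative values, not data from the paper): each hypothetical subject shows a step-like response-time profile with a jump at a different orientation, yet the group mean rises smoothly, mimicking the graded monotonic increase predicted by the multiple-views hypothesis.

```python
# Hypothetical illustration of how group averaging can mask individual
# patterns. Orientations are angular distances from the nearest trained
# view; RTs (ms) are invented. No individual shows a graded, linear
# increase -- each is flat, then jumps at a different cut-off.
orientations = [0, 60, 120, 180]

subjects = {
    "s1": [500, 500, 800, 800],
    "s2": [500, 800, 800, 800],
    "s3": [500, 500, 500, 800],
}

# Average across subjects at each orientation.
group_mean = [
    sum(rts[i] for rts in subjects.values()) / len(subjects)
    for i in range(len(orientations))
]
print(group_mean)  # [500.0, 600.0, 700.0, 800.0] -- a perfectly linear group trend
```

The group mean is consistent with a smooth orientation effect even though no single subject produced one, which is the kind of inferential hazard the paper examines.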

3.
We used repetition blindness (RB) as a measure of object recognition and compared the pattern of RB obtained for objects with a well-established upright orientation (mono-oriented objects) and those without a usual upright orientation (polyoriented objects), when the critical objects were either in identical orientations or differed by 30 degrees, 60 degrees, 90 degrees, or 180 degrees. Overall, we found robust RB despite differences in orientation, consistent with the idea that object recognition, as indexed by RB, is largely independent of orientation. However, whereas for polyoriented objects RB was obtained in all orientation conditions, for mono-oriented objects there was no RB between upright and upside-down versions of the stimuli. These findings suggest that the usual orientation of an object, when it exists, is stored in memory and can facilitate orientation processing when the principal axis of a viewed object is aligned with the stored axis orientation. This, in turn, allows for more rapid and successful construction of distinct episodic representations of an object, thus alleviating RB.

4.
The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an object-discrimination task and pattern masked with various scene-to-mask stimulus-onset asynchronies (SOAs). Full psychometric functions and reaction times (RTs) were measured. The authors found that (a) rotating the full scenes increased threshold SOA at intermediate rotation angles but not for inversion; (b) rotating object or context degraded classification performance in a similar manner; (c) semantically congruent contexts had negligible facilitatory effects on object classification compared with meaningless baseline contexts with a matching contrast structure, but incongruent contexts severely degraded performance; (d) any object-context incongruence (orientation or semantic) increased RTs at longer SOAs, indicating dependent processing of object and context; and (e) facilitatory effects of context emerged only when the context shortly preceded the object. The authors conclude that the effects of natural scene context on object classification are primarily inhibitory and discuss possible reasons.

5.
6.
Visual information can be stored relative to a particular point of view or independently of any particular point of view. Research on mental rotation has shown that people can store and use viewer-centered visual representations of objects and scenes. Some theories of object recognition posit that object-centered representations are also stored and used to represent the shape of three-dimensional objects. In this paper a series of experiments provides evidence that people can store and use both viewer-centered and object-centered representations of three-dimensional objects.

7.
The effects of stimulus orientation on naming were examined in two experiments in which subjects identified line drawings of natural objects following practice with the objects at the same or different orientations. Half the rotated objects were viewed in the orientation that matched the earlier presentations, and half were viewed at an orientation that mismatched the earlier presentations. Systematic effects of orientation on naming time were found during the early presentations. These effects were reduced during later presentations, and the size of this reduction did not depend on the orientation in which the object had been seen originally. The results are consistent with a dual-systems model of object identification in which initially large effects of disorientation are the result of a normalization process such as mental rotation, and in which attenuation of the effects is due to a shift from the normalization system to a feature/part-based system.

8.
Achieving visual object constancy across plane rotation and depth rotation.
R Lawson 《Acta psychologica》1999,102(2-3):221-245
Visual object constancy is the ability to recognise an object from its image despite variation in the image when the object is viewed from different angles. I describe research which probes the human visual system's ability to achieve object constancy across plane rotation and depth rotation. I focus on the ecologically important case of recognising familiar objects, although the recognition of novel objects is also discussed. Cognitive neuropsychological studies of patients with specific deficits in achieving object constancy are reviewed, in addition to studies which test neurally intact subjects. In certain cases, the recognition of invariant features allows objects to be recognised irrespective of the view depicted, particularly if small, distinctive sets of objects are presented repeatedly. In contrast, in most situations, recognition is sensitive to both the view in-plane and in-depth from which an object is depicted. This result suggests that multiple, view-specific, stored representations of familiar objects are accessed in everyday, entry-level visual recognition, or that transformations such as mental rotation or interpolation are used to transform between retinal images of objects and view-specific, stored representations.

9.
Two experiments examined the effects of plane rotation on the recognition of briefly displayed pictures of familiar objects, using a picture-word verification task. Mirroring the results of earlier picture naming studies (Jolicoeur, 1985; Jolicoeur & Milliken, 1989), plane rotation away from a canonical upright orientation reduced the efficiency of recognition, although in contrast to the results from picture naming studies, the rotation effects were not reduced with experience with the stimuli. However, the rotation effects were influenced by the visual similarity of the distractor objects to the picture of the object presented, with greater orientation sensitivity being observed when visually similar distractors were presented. We suggest that subjects use orientation-sensitive representations to recognize objects in both the present unspeeded verification and in the earlier speeded naming tests of picture identification.

10.
Phinney RE  Siegel RM 《Perception》1999,28(6):725-737
Object recognition was studied in human subjects to determine whether the storage of the visual objects was in a two-dimensional or a three-dimensional representation. Novel motion-based and disparity-based stimuli were generated in which three-dimensional and two-dimensional form cues could be manipulated independently. Subjects were required to generate internal representations from motion stimuli that lacked explicit two-dimensional cues. These stored internal representations were then matched against internal three-dimensional representations constructed from disparity stimuli. These new stimuli were used to confirm prior studies that indicated the primacy of two-dimensional cues for view-based object storage. However, under tightly controlled conditions for which only three-dimensional cues were available, human subjects were also able to match an internal representation derived from motion to one derived from disparity. This last finding suggests that there is an internal storage of an object's representations in three dimensions, a tenet that has been rejected by view-based theories. Thus, any complete theory of object recognition that is based on primate vision must incorporate three-dimensional stored representations.

11.
Eighty-four participants mentally rotated meaningful and meaningless objects. Within each type of object, half were simple and half were complex; the complexity was the same across the meaningful and meaningless objects. The patterns of errors were examined as a function of the type of stimuli (meaningful vs. meaningless), complexity, and angle of rotation. The data for the meaningful objects showed steeper slopes of rotation for complex objects than for simple objects. In contrast, the simple and complex meaningless objects showed comparable increases in error rates as a function of angle of rotation. Furthermore, the slopes remained comparable after pretraining that increased familiarity with the objects. The results are discussed in terms of underlying representations of meaningful and meaningless objects and their implications for mental transformations. The data are consistent with a piecemeal rotation of the meaningful stimuli and a holistic rotation of the meaningless stimuli.

12.
Three experiments were designed to investigate whether the characteristic function relating response time to stimulus orientation reflects the observer imagining the rotation of the stimulus to upright (the "image rotation" hypothesis) or rotation of an internal reference frame in response to the misoriented stimulus (the "frame rotation" hypothesis). Identification times in response to misoriented words were measured in Experiment 1, whereas in Experiments 2 and 3, lexical decision times in response to misoriented letter strings were measured. Trials occurred in blocks; words within a block were presented at the same orientation. It was argued that this mode of presentation would facilitate the use of a frame rotation strategy by allowing for a gradual readjustment of an internal reference frame. The characteristic "mental rotation" function was observed in all three experiments. However, the data indicated that observers continued to imagine the rotation of the word to upright in each trial; there was no evidence of readjustment of an internal reference frame. An additional finding of interest occurred in Experiment 1, in which observers identified the same set of misoriented words across two sessions. The identification times were faster, and the slope of the mental rotation function was lower, in the second session. These results are discussed in relation to the image rotation hypothesis of mental rotation and to "instance-based skill acquisition" (Masson, 1986) in word recognition.
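The "mental rotation function" referred to here is a straight line, RT = a + b·angle, and the session effect is a reduction in the slope b. A minimal sketch with invented response times (not the paper's data) shows how such a slope is recovered by ordinary least squares:

```python
# Minimal least-squares slope fit for a hypothetical mental rotation
# function RT = a + b*angle. All RT values are invented for illustration:
# session 1 rises at 3 ms/deg, session 2 at 2 ms/deg, mimicking the
# reported practice-related flattening of the function.
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )

angles = [0, 60, 120, 180]            # stimulus orientations (deg)
session1 = [600, 780, 960, 1140]      # hypothetical RTs (ms)
session2 = [560, 680, 800, 920]       # hypothetical RTs (ms), after practice

print(slope(angles, session1))  # 3.0 ms/deg
print(slope(angles, session2))  # 2.0 ms/deg
```

The reciprocal of the slope is often read as a rotation rate (here, a speed-up from about 333 to 500 deg/s across sessions under these invented numbers).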

13.
Common processes and representations engaged by visuospatial tasks were investigated by looking at four frequently used visuospatial research paradigms, the aim being to contribute to a better understanding of which specific processes are addressed in the different paradigms compared. In particular, the relation between spontaneous and instructed perspective taking, as well as mental rotation of body-part/non-body-part objects, was investigated. To this end, participants watched animations that have been shown to lead to spontaneous perspective taking. While they were watching these animations, participants were asked to explicitly adopt another perspective (Experiment 1), perform a mental object rotation task that involved a non-body-part object (Experiment 2), or perform a mental rotation of a body-part object (Experiment 3). Patterns of interference between the tasks, reflected in the reaction time patterns, showed that spontaneous and instructed perspective taking rely on similar representational elements to encode orientation. By contrast, no such overlap was found between spontaneous perspective taking and the rotation of non-body-part objects. Also, no overlap in orientation representation was evident with mental body-part rotations. Instead of an overlap in orientation representations, the results suggest that spontaneous perspective taking and the mental rotation of body parts rely on similar, presumably motor, processes. These findings support the view that motor processes are involved in perspective taking and mental rotation of body parts.

14.
Most models of object recognition and mental rotation are based on the matching of an object's 2-D view with representations of the object stored in memory. They propose that a time-consuming normalization process compensates for any difference in viewpoint between the 2-D percept and the stored representation. Our experiment shows that such normalization is less time consuming when it has to compensate for disorientations around the vertical than around the horizontal axis of rotation. By decoupling the different possible reference frames, we demonstrate that this anisotropy of the normalization process is defined not with respect to the retinal frame of reference, but, rather, according to the gravitational or the visuocontextual frame of reference. Our results suggest that the visual system may call upon both the gravitational vertical and the visuocontext to serve as the frame of reference with respect to which 3-D objects are gauged in internal object transformations.

15.
Priming effects on the object possibility task, in which participants decide whether line drawings could or could not be possible three-dimensional objects, may be supported by the same processes and representations used in recognizing and identifying objects. Three experiments manipulating objects' picture-plane orientation provided limited support for this hypothesis. Like old/new recognition performance, possibility priming declined as study-test orientation differences increased from 0 degrees to 60 degrees. However, while significant possibility priming was not observed for larger orientation differences, recognition performance continued to decline following 60-180 degree orientation shifts. These results suggest that possibility priming and old/new recognition may rely on common viewpoint-specific representations but that access to these representations in the possibility test occurs only when study and test views are sufficiently similar (i.e., rotated less than 60 degrees).

16.
When visual stimuli (letters, words or pictures of objects) are presented sequentially at high rates (8-12 items/s), observers have difficulty in detecting and reporting both occurrences of a repeated item: This is repetition blindness. Two experiments investigated the effects of repetition of novel objects, and whether the representations bound to episodic memory tokens that yield repetition blindness are viewpoint dependent or whether they are object centred. Subjects were shown coloured drawings of simple three-dimensional novel objects, and rate of presentation (Experiment 1) and rotation in depth (Experiment 2) were manipulated. Repetition blindness occurred only at the higher rate (105 ms/item), and was found even for stimuli differing in orientation. We conclude that object-centred representations are bound to episodic memory tokens, and that these are constructed prior to object recognition operating on novel as well as known objects. These results are contrasted with those found with written materials, and implications for explanations of repetition blindness are considered.

17.
Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored representations are involved in generating visual impressions of forces and causality in object motion and interactions. Kinematic information provided by vision is matched to kinematic features of stored representations, and the information about forces and causality in those representations then forms part of the perceptual interpretation. I apply this account to the perception of interactions between objects and to motions of objects that do not have perceived external causes, in which motion tends to be perceptually interpreted as biological or internally caused. I also apply it to internal simulations of events involving mental imagery, such as mental rotation, trajectory extrapolation and judgment, visual memory for the location of moving objects, and the learning of perceptual judgments and motor skills. Simulations support more accurate judgments when they represent the underlying dynamics of the event simulated. Mechanoreception gives us whatever limited ability we have to perceive interactions and object motions in terms of forces and resistances; it supports our practical interventions on objects by enabling us to generate simulations that are guided by inferences about forces and resistances, and it helps us learn novel, visually based judgments about object behavior.

18.
The aim of the present research was to study the processes involved in knowledge emergence. In a short-term priming paradigm, participants had to categorize pictures of objects as either "kitchen objects" or "do-it-yourself tools". The primes and targets represented objects belonging to either the same semantic category or different categories (object category similarity), and their use involved gestures that were either similar or very different (gesture similarity). The condition with an SOA of 100 ms revealed additive effects of motor similarity and object category similarity, whereas another condition with an SOA of 300 ms showed an interaction between motor and category similarity. These results were interpreted in terms of the activation and integration processes involved in the emergence of mental representations.

19.
Mental rotation, as a covert simulation of motor rotation, could benefit from spatial updating of object representations. We are interested in what kind of visual cue could trigger spatial updating. Three experiments were conducted to examine the effect of dynamic and static orientation cues on mental rotation, using a sequential matching task with three-dimensional novel objects presented in different views. Experiment 1 showed that a rotating orientation cue with constant speed reduced viewpoint costs in mental rotation. Experiment 2 extended this effect with a varied-speed rotating orientation cue. However, no such benefit was observed with a static orientation cue in Experiment 3. These findings indicated that a visually continuous orientation cue is sufficient to elicit spatial updating in mental rotation. Furthermore, there may be differences in the underlying mechanisms of spatial updating on the basis of constant-speed rotating cues and varied-speed rotating cues.

20.
Experimental evidence has shown that the time taken to recognize objects is often dependent on stimulus orientation in the image plane. This effect has been taken as evidence that recognition is mediated by orientation-specific stored representations of object shapes. However, the factors that determine the orientation specificity of these representations remain unclear. This issue is examined using a word-picture verification paradigm in which subjects identified line drawings of common mono- and polyoriented objects at different orientations. A detailed analysis of the results showed that, in contrast to mono-oriented objects, the recognition of polyoriented objects is not dependent on stimulus orientation. This interaction provides a further constraint on hypotheses about the factors that determine the apparent orientation specificity of stored shape representations. In particular, the results support previous proposals that objects are encoded in stored representations at familiar stimulus orientations.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号