Similar Literature
20 similar records found
1.
康廷虎, 白学军. 《心理科学》, 2013, 36(3): 558-569
Using an eye-movement paradigm, two experiments examined the effects of target-stimulus change and of the attributes of scene information on scene recognition. Experiment 1 showed that changing the target stimulus significantly affected both scene recognition and gaze duration within the interest region containing the target, indicating that during scene recognition observers deliberately search for the target stimulus, which therefore has a diagnostic effect. Experiment 2 used two types of scene materials in which perceptual and semantic information were either congruent or separated. The results showed that observers made significantly more first fixations on semantic information than on perceptual information, and that first-fixation durations on semantic information were significantly longer in the separated condition than in the congruent condition. These findings suggest that semantic information has attentional priority during scene recognition, but that this priority can be disrupted by the priming of perceptual information.

2.
This study advances the hypothesis that, in the course of object recognition, attention is directed to distinguishing features: visual information that is diagnostic of object identity in a specific context. In five experiments, observers performed an object categorization task involving drawings of fish (Experiments 1–4) and photographs of natural sea animals (Experiment 5). Allocation of attention to distinguishing and non-distinguishing features was examined using primed-matching (Experiment 1) and visual probe (Experiments 2, 4, 5) methods, and manipulated by spatial precuing (Experiment 3). Converging results indicated that in performing the object categorization task, attention was allocated to the distinguishing features in a context-dependent manner, and that such allocation facilitated performance. Based on the view that object recognition, like categorization, is essentially a process of discrimination between probable alternatives, the implications of the findings for the role of attention to distinguishing features in object recognition are discussed.

3.
4.
The influence of object projection method on spatial problem solving and recognition (cited by 2: 0 self-citations, 2 by others)
Four experimental conditions were set up to clarify how spatial problem solving and recognition performance are affected by different ways of projecting an object. Beyond lending further support to the view that solving orthographic-projection problems rests on a mental representation with three-dimensional structural properties, rather than on fully identifying the two-dimensional orthographic information, the results also showed that: (1) this mental representation of the object's three-dimensional structure is constructed during the solving of orthographic-projection problems rather than during recognition of axonometric drawings; (2) orthographic-projection problems are significantly more complex to solve than axonometric-drawing problems, and a mental representation constructed during orthographic problem solving…

5.
Learning verbal semantic knowledge for objects has been shown to attenuate recognition costs incurred by changes in view from a learned viewpoint. Such findings were attributed to the semantic or meaningful nature of the learned verbal associations. However, recent findings demonstrate surprising benefits to visual perception after learning even noninformative verbal labels for stimuli. Here we test whether learning verbal information for novel objects, independent of its semantic nature, can facilitate a reduction in viewpoint-dependent recognition. To dissociate more general effects of verbal associations from those stemming from the semantic nature of the associations, participants learned to associate semantically meaningful (adjectives) or nonmeaningful (number codes) verbal information with novel objects. Consistent with a role of semantic representations in attenuating the viewpoint-dependent nature of object recognition, the costs incurred by a change in viewpoint were attenuated for stimuli with learned semantic associations relative to those associated with nonmeaningful verbal information. This finding is discussed in terms of its implications for understanding basic mechanisms of object perception as well as the classic viewpoint-dependent nature of object recognition.

6.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition, we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing, but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

7.
Viewpoint dependence in scene recognition (cited by 9: 0 self-citations, 9 by others)
Abstract: Two experiments investigated the viewpoint dependence of spatial memories. In Experiment 1, participants learned the locations of objects on a desktop from a single perspective and then took part in a recognition test in which test scenes included familiar and novel views of the layout. Recognition latency was a linear function of the angular distance between a test view and the study view. In Experiment 2, participants studied a layout from a single view and then learned to recognize the layout from three additional training views. A final recognition test showed that the study view and the training views were represented in memory, and that latency was a linear function of the angular distance to the nearest study or training view. These results indicate that interobject spatial relations are encoded in a viewpoint-dependent manner, and that recognition of novel views requires normalization to the most similar representation in memory. These findings parallel recent results in visual object recognition.

8.
Inverting scenes interferes with visual perception and memory on many tasks. Might scene inversion eliminate boundary extension (BE) for briefly presented photographs? In Experiment 1, an upright or inverted photograph (133, 258, or 383 ms) was followed by a 258 ms masked interval and a test photograph showing the identical view. Test photographs were rated as “same”, “closer”, or “farther away” (5-point scale). BE was just as great for inverted as for upright views at the 133 and 383 ms durations, but surprisingly was greater for inverted views at the 258 ms duration. In Experiment 2, 258-ms views yielded greater BE when the study photographs were always tested in the opposite orientation, indicating that the difference in BE was related to encoding. Results suggest that scene construction beyond the view boundaries occurs rapidly and is not impeded by scene inversion, but that changes in the relative quality of visual details available for upright and inverted views may sometimes yield increased BE for inverted scenes.

9.
Abstract: Two experiments were carried out to investigate the factors that produce the misinformation effect (Loftus, 1979a). Experiment 1 of the present study, which used a visual recognition test, replicated Experiment 1 of Loftus, Miller, and Burns (1978). The misinformation effect was not found. Experiment 2 differed from Experiment 1 in the following respects: the modality of the recognition test was varied between visual and verbal, and the memorableness (ease of memorization) of the critical objects was varied. The results of Experiment 2 suggest that the original visual memory might be more likely to be recovered with a visual recognition test than with a verbal recognition test, and that postevent information may not interfere effectively with memory for an original object of very high or very low memorableness, whereas it works well on objects of intermediate memorableness.

10.
How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or absent. Visual search efficiency does not change after hundreds of trials through an unchanging scene (Experiment 1). Memory search, in contrast, begins inefficiently but becomes efficient with practice. Given a choice between vision and memory, observers choose vision (Experiments 2 and 3). However, if forced to use their memory on some trials, they learn to use memory on all trials, even when reliable visual information remains available (Experiment 4). The results suggest that observers make a pragmatic choice between vision and memory, with a strong bias toward visual search even for memorized stimuli.

11.
There is a view that faces and objects are processed by different brain mechanisms. Different factors may modulate the extent to which face mechanisms are used for objects. To distinguish these factors, we present a new parametric multipart three-dimensional object set that provides researchers with a rich degree of control of important features for visual recognition such as individual parts and the spatial configuration of those parts. All other properties being equal, we demonstrate that perceived facelikeness in terms of spatial configuration facilitated performance at matching individual exemplars of the new object set across viewpoint changes (Experiment 1). Importantly, facelikeness did not affect perceptual discriminability (Experiment 2) or similarity (Experiment 3). Our findings suggest that perceptual resemblance to faces based on spatial configuration of parts is important for visual recognition even after equating physical and perceptual similarity. Furthermore, the large parametrically controlled object set and the standardized procedures to generate additional exemplars will provide the research community with invaluable tools to further understand visual recognition and visual learning.

12.
In two experiments, participants were trained to recognize a playground scene from four vantage points and were subsequently asked to recognize the playground from a novel perspective between the four learned viewing perspectives, as well as from the trained perspectives. In both experiments, people recognized the novel view more efficiently than those that they had recently used in order to learn the scene. Additionally, in Experiment 2, participants who viewed a novel stimulus on their very first test trial correctly recognized it more quickly (and also tended to recognize it more accurately) than did participants whose first test trial was a familiar view of the scene. These findings call into question the idea that scenes are recognized by comparing them with single previous experiences, and support a growing body of literature on the existence of psychological mechanisms that combine spatial information from multiple views of a scene.

13.
In two experiments, we investigated the activation of perceptual representations of referent objects during word processing. In both experiments, participants learned to associate pictures of novel three-dimensional objects with pseudowords. They subsequently performed a recognition task (Experiment 1) or a naming task (Experiment 2) on the object names while being primed with different types of visual stimuli. Only the stimuli that the participants had encountered as referent objects during the training phase facilitated recognition or naming responses. New stimuli did not facilitate the processing of object names, even if they matched a schematic or prototypical representation of the referent object that the participants might have abstracted during word-referent learning. These results suggest that words learned by way of examples of referent objects are associated with experiential traces of encounters with these objects.

14.
View combination in scene recognition (cited by 1: 0 self-citations, 1 by others)
Becoming familiar with an environment requires the ability to integrate spatial information from different views. We provide evidence that view combination, a mechanism believed to underlie the ability to recognize novel views of familiar objects, is also used to recognize coherent, real-world scenes. In two experiments, we trained participants to recognize a real-world scene from two perspectives. When the angular difference between the learned views was relatively small, the participants subsequently recognized novel views from locations between the learned views about as well as they recognized the learned views, and better than novel views situated outside of the shortest distance between the learned views. In contrast, with large angles between training views, all the novel views were recognized less well than the trained views. These results extend the view combination approach to scenes and are difficult to reconcile with models proposing that scenes are recognized by transforming them to match only the nearest stored view.

15.
In an earlier report (Harman, Humphrey, & Goodale, 1999), we demonstrated that observers who actively rotated three-dimensional novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In Experiment 1 of the present study we showed that compared to passive viewing, active exploration of three-dimensional object structure led to faster performance on a "mental rotation" task involving the studied objects. In addition, we examined how much time observers concentrated on particular views during active exploration. As we found in the previous report, they spent most of their time looking at the "side" and "front" views ("plan" views) of the objects, rather than the three-quarter or intermediate views. This strong preference for the plan views of an object led us to examine the possibility in Experiment 2 that restricting the studied views in active exploration to either the plan views or the intermediate views would result in differential learning. We found that recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. Taken together, these experiments demonstrate (1) that active exploration facilitates learning of the three-dimensional structure of objects, and (2) that the superior performance following active exploration may be a direct result of the opportunity to spend more time on plan views of the object.

16.
Four experiments examined whether or not exposure to two views (A and B) of a novel object improves generalization to a third view (C) through view combination on tasks that required symmetry or recognition memory decisions. The results of Experiment 1 indicated that exposure to either View A or View B alone produced little or no generalization to View C on either task. The results of Experiment 2 indicated that exposure to both View A and View B did improve generalization to View C, but only for symmetrical objects. Experiment 3 replicated this generalization advantage for symmetrical but not asymmetrical objects, when objects were well learned at study. The results of Experiment 4 showed that Views A and B did not have to be presented consecutively to facilitate responses to View C. Together, the pattern of results suggests that generalization to novel views does occur through view combination of temporally separated views, but it is more likely to be observed with symmetrical objects.

17.
Human spatial encoding of three-dimensional navigable space was studied, using a virtual environment simulation. This allowed subjects to become familiar with a realistic scene by making simulated rotational and translational movements during training. Subsequent tests determined whether subjects could generalize their recognition ability by identifying novel-perspective views and topographic floor plans of the scene. Results from picture recognition tests showed that familiar direction views were most easily recognized, although significant generalization to novel views was observed. Topographic floor plans were also easily identified. In further experiments, novel-view performance diminished when active training was replaced by passive viewing of static images of the scene. However, the ability to make self-initiated movements, as opposed to watching dynamic movie sequences, had no effect on performance. These results suggest that representation of navigable space is view dependent and highlight the importance of spatial-temporal continuity during learning.

18.
Objects are best recognized from so-called “canonical” views. The characteristics of canonical views of arbitrary objects have been qualitatively described using a variety of different criteria, but little is known regarding how these views might be acquired during object learning. We address this issue, in part, by examining the role of object motion in the selection of preferred views of novel objects. Specifically, we adopt a modeling approach to investigate whether or not the sequence of views seen during initial exposure to an object contributes to observers’ preferences for particular images in the sequence. In two experiments, we exposed observers to short sequences depicting rigidly rotating novel objects and subsequently collected subjective ratings of view canonicality (Experiment 1) and recall rates for individual views (Experiment 2). Given these two operational definitions of view canonicality, we attempted to fit both sets of behavioral data with a computational model incorporating 3-D shape information (object foreshortening), as well as information relevant to the temporal order of views presented during training (the rate of change for object foreshortening). Both sets of ratings were reasonably well predicted using only 3-D shape; the inclusion of terms that capture sequence order improved model performance significantly.

19.
Two experiments dissociated the roles of intrinsic orientation of a shape and participants’ study viewpoint in shape recognition. In Experiment 1, participants learned shapes with a rectangular background that was oriented differently from their viewpoint, and then recognized target shapes, which were created by splitting study shapes along different intrinsic axes, at different views. Results showed that recognition was quicker when the study shapes were split along the axis parallel to the orientation of the rectangular background than when they were split along the axis parallel to participants’ viewpoint. In Experiment 2, participants learned shapes without the rectangular background. The results showed that recognition was quicker when the study shape was split along the axis parallel to participants’ viewpoint. In both experiments, recognition was quicker at the study view than at a novel view. An intrinsic model of object representation and recognition was proposed to explain these findings.

20.
Object recognition is a long and complex adaptive process; its full maturation requires the combination of many different sensory experiences, as well as the cognitive ability to manipulate previous experiences in order to develop new percepts and, subsequently, to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about the development of this ability. In this study, we explored the developmental course of object recognition using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested with a clinical protocol involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities in the visual, haptic, and visuo-haptic modalities. A significant effect of time on the development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active by 4 years of age: they facilitate recognition of common objects and, although not fully mature, play a significant role in adaptive behavior from the first years of life. The study of the typical development of visuo-haptic processes in childhood is a starting point for future studies of object recognition in impaired populations.
