Similar Literature
 20 similar articles found (search time: 93 ms)
1.
View combination in scene recognition
Becoming familiar with an environment requires the ability to integrate spatial information from different views. We provide evidence that view combination, a mechanism believed to underlie the ability to recognize novel views of familiar objects, is also used to recognize coherent, real-world scenes. In two experiments, we trained participants to recognize a real-world scene from two perspectives. When the angular difference between the learned views was relatively small, the participants subsequently recognized novel views from locations between the learned views about as well as they recognized the learned views and better than novel views situated outside of the shortest distance between the learned views. In contrast, with large angles between training views, all the novel views were recognized less well than the trained views. These results extend the view combination approach to scenes and are difficult to reconcile with models proposing that scenes are recognized by transforming them to match only the nearest stored view.

2.
Xiao Chengli, Liu Chuanjun. Acta Psychologica Sinica, 2014, 46(9): 1289-1300
Spatial updating has traditionally been defined as the automatic process by which individuals update the spatial representation of their immediate real environment as their body moves. Recent studies, however, have shown that people can also spatially update imagined environments, although the underlying mechanism remains unclear. In Experiment 1, participants learned the locations of objects while standing inside a scene and then walked in a straight line to a test position while keeping the learning orientation. The 0° group stood facing the learning orientation; the 180° group rotated 180° in place to face the direction opposite the learning orientation. Both groups imagined that they were still standing at the learning position facing the learning orientation. Participants then rotated 90° and performed spatial judgment tasks from three imagined headings (memory-consistent, sensorimotor-consistent, and inconsistent). The 0° group performed better from the memory-consistent and sensorimotor-consistent headings than from the inconsistent heading, whereas the 180° group showed no such advantage. In Experiment 2, participants were disoriented while moving from the learning space to the test space; all other conditions were identical to the 180° group of Experiment 1. Nevertheless, these participants performed better from the memory-consistent and sensorimotor-consistent headings than from the inconsistent heading. The results demonstrate that people can spatially update an imagined environment in two ways: by imagined translation of the online spatial representation, or by linking offline memory to the spatial updating system.

3.
Kelly, Avraamides, and Loomis (2007) found that participants failed to spatially update in a novel environment, whereas Xiao and Liu (2014) found that participants could achieve spatial updating in a novel environment by using strategies such as imagined translation. To explore the commonalities and differences between these two studies, the present study used the same paradigm as Kelly et al. In Experiment 1, participants completed the task in the original learning environment. In Experiment 2, after memorizing the objects' locations, participants moved to a novel environment and completed the same task under three conditions: relying on offline representations alone, offline representations coordinated with online representations, and offline representations in conflict with online representations. The results showed that in the original environment body movement and memory facilitated spatial updating with equal effectiveness, whereas in the novel environment body movement facilitated spatial updating significantly less than memory did; the facilitative effects of body movement and memory on spatial updating were highly correlated in both environments. The findings indicate that the facilitation of spatial updating by body movement is environment dependent: compared with the facilitation provided by memory, the facilitation provided by body movement declines when the environment changes.

4.
Two experiments dissociated the roles of intrinsic orientation of a shape and participants’ study viewpoint in shape recognition. In Experiment 1, participants learned shapes with a rectangular background that was oriented differently from their viewpoint, and then recognized target shapes, which were created by splitting study shapes along different intrinsic axes, at different views. Results showed that recognition was quicker when the study shapes were split along the axis parallel to the orientation of the rectangular background than when they were split along the axis parallel to participants’ viewpoint. In Experiment 2, participants learned shapes without the rectangular background. The results showed that recognition was quicker when the study shape was split along the axis parallel to participants’ viewpoint. In both experiments, recognition was quicker at the study view than at a novel view. An intrinsic model of object representation and recognition was proposed to explain these findings.

5.
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0°–360° in 36° increments) around the scene, and participants judged whether the objects’ positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.

6.
Li X, Mou W, McNamara TP. Cognition, 2012, 124(2): 143-155
Four experiments tested whether there are enduring spatial representations of objects' locations in memory. Previous studies have shown that under certain conditions the internal consistency of pointing to objects using memory is disrupted by disorientation. This disorientation effect has been attributed to an absence of or to imprecise enduring spatial representations of objects' locations. Experiment 1 replicated the standard disorientation effect. Participants learned locations of objects in an irregular layout and then pointed to objects after physically turning to face an object and after disorientation. The expected disorientation was observed. In Experiment 2, after disorientation, participants were asked to imagine they were facing the original learning direction and then physically turned to adopt the test orientation. In Experiment 3, after disorientation, participants turned to adopt the test orientation and then were informed of the original viewing direction by the experimenter. A disorientation effect was not observed in Experiment 2 or 3. In Experiment 4, after disorientation, participants turned to face the test orientation but were not told the original learning orientation. As in Experiment 1, a disorientation effect was observed. These results suggest that there are enduring spatial representations of objects' locations specified in terms of a spatial reference direction parallel to the learning view, and that the disorientation effect is caused by uncertainty in recovering the spatial reference direction relative to the testing orientation following disorientation.

7.
Three experiments are reported in which the effects of viewpoint on the recognition of distinctive and typical faces were explored. Specifically, we investigated whether generalization across views would be better for distinctive faces than for typical faces. In Experiment 1 the time to match different views of the same typical faces and the same distinctive faces was dependent on the difference between the views shown. In contrast, the accuracy and latency of correct responses on trials in which two different faces were presented were independent of viewpoint if the faces were distinctive but were view-dependent if the faces were typical. In Experiment 2 we tested participants' recognition memory for unfamiliar faces that had been studied at a single three-quarter view. Participants were presented with all face views during test. Finally, in Experiment 3, participants were tested on their recognition of unfamiliar faces that had been studied at all views. In both Experiments 2 and 3 we found an effect of distinctiveness and viewpoint but no interaction between these factors. The results are discussed in terms of a model of face representation based on inter-item similarity in which the representations are view specific.

8.
Four experiments examined whether or not exposure to two views (A and B) of a novel object improves generalization to a third view (C) through view combination on tasks that required symmetry or recognition memory decisions. The results of Experiment 1 indicated that exposure to either View A or View B alone produced little or no generalization to View C on either task. The results of Experiment 2 indicated that exposure to both View A and View B did improve generalization to View C, but only for symmetrical objects. Experiment 3 replicated this generalization advantage for symmetrical but not asymmetrical objects, when objects were well learned at study. The results of Experiment 4 showed that Views A and B did not have to be presented consecutively to facilitate responses to View C. Together, the pattern of results suggests that generalization to novel views does occur through view combination of temporally separated views, but it is more likely to be observed with symmetrical objects.

9.
10.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

11.
In two experiments, participants were trained to recognize a playground scene from four vantage points and were subsequently asked to recognize the playground from a novel perspective between the four learned viewing perspectives, as well as from the trained perspectives. In both experiments, people recognized the novel view more efficiently than those that they had recently used in order to learn the scene. Additionally, in Experiment 2, participants who viewed a novel stimulus on their very first test trial correctly recognized it more quickly (and also tended to recognize it more accurately) than did participants whose first test trial was a familiar view of the scene. These findings call into question the idea that scenes are recognized by comparing them with single previous experiences, and support a growing body of literature on the existence of psychological mechanisms that combine spatial information from multiple views of a scene.

12.
To determine if layout affects boundary extension (BE; false memory beyond view boundaries; Intraub & Richardson, 1989), 12 single-object scenes were photographed from three vantage points: central (0°), shifted rightward (45°), and shifted leftward (45°). Size and position of main objects were held constant. Pictures were presented for 15 s each and were repeated at test with their boundaries: (a) displaced inward or outward (Experiment 1: N=120), or (b) identical to the stimulus views (Experiment 2: N=72). When participants adjusted test boundaries to match memory, BE always occurred, but tended to be smaller for 45° views. We propose this reflects the fact that more of the 3-D scene is visible in the 45° views. This suggests that scene representation reflects the 3-D world conveyed by the global characteristics of layout, rather than the 2-D distance between the main object and the boundaries of a picture.

13.
It has been proposed that spatial reference frames with which object locations are specified in memory are intrinsic to a to-be-remembered spatial layout (intrinsic reference theory). Although this theory has been supported by accumulating evidence, it has only been collected from paradigms in which the entire spatial layout was simultaneously visible to observers. The present study was designed to examine the generality of the theory by investigating whether the geometric structure of a spatial layout (bilateral symmetry) influences selection of spatial reference frames when object locations are sequentially learned through haptic exploration. In two experiments, participants learned the spatial layout solely by touch and performed judgments of relative direction among objects using their spatial memories. Results indicated that the geometric structure can provide a spatial cue for establishing reference frames as long as it is accentuated by explicit instructions (Experiment 1) or alignment with an egocentric orientation (Experiment 2). These results are entirely consistent with those from previous studies in which spatial information was encoded through simultaneous viewing of all object locations, suggesting that the intrinsic reference theory is not specific to a type of spatial memory acquired by the particular learning method but instead generalizes to spatial memories learned through a variety of encoding conditions. In particular, the present findings suggest that spatial memories that follow the intrinsic reference theory function equivalently regardless of the modality in which spatial information is encoded.

14.
Multiple views of spatial memory
Recent evidence indicates that mental representations of large (i.e., navigable) spaces are viewpoint dependent when observers are restricted to a single view. The purpose of the present study was to determine whether two views of a space would produce a single viewpoint-independent representation or two viewpoint-dependent representations. Participants learned the locations of objects in a room from two viewpoints and then made judgments of relative direction from imagined headings either aligned or misaligned with the studied views. The results indicated that mental representations of large spaces were viewpoint dependent, and that two views of a spatial layout appeared to produce two viewpoint-dependent representations in memory. Imagined headings aligned with the study views were more accessible than were novel headings in terms of both speed and accuracy of pointing judgments.

15.
16.
Participants stood in a rectangular room and learned object scenes from two different viewing positions, then judged the spatial relations among the objects from multiple headings. By manipulating the direction of the scene's principal intrinsic axis relative to the environmental structure (the room and a carpet) and the order in which participants learned the views, the study examined which reference systems participants use to represent object scenes and what factors influence reference-system selection. Two experiments showed that (1) both intrinsic reference systems and environmental reference systems can be used to represent object scenes, but the relation between the two kinds of reference systems is a key factor in reference-system selection: when the intrinsic reference system was aligned with the environmental reference system, participants represented the scene from the heading perpendicular to both reference systems regardless of which view they learned first, whereas when the two were misaligned, the reference system selected depended on participants' learning experience; and (2) regardless of whether the intrinsic and environmental reference systems were aligned, the regularity of the scene's intrinsic structure facilitated spatial memory, aiding both accurate encoding of the objects' relative locations and the accuracy of spatial-relation judgments.

17.
Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6–7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4- to 5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3–5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed.

18.
By having subjects drive a virtual taxicab through a computer-rendered town, we examined how landmark and layout information interact during spatial navigation. Subject-drivers searched for passengers, and then attempted to take the most efficient route to the requested destinations (one of several target stores). Experiment 1 demonstrated that subjects rapidly learn to find direct paths from random pickup locations to target stores. Experiment 2 varied the degree to which landmark and layout cues were preserved across two successively learned towns. When spatial layout was preserved, transfer was low if only target stores were altered, and high if both target stores and surrounding buildings were altered, even though in the latter case all local views were changed. This suggests that subjects can rapidly acquire a survey representation based on the spatial layout of the town and independent of local views, but that subjects will rely on local views when present, and are harmed when associations between previously learned landmarks are disrupted. We propose that spatial navigation reflects a hierarchical system in which either layout or landmark information is sufficient for orienting and wayfinding; however, when these types of cues conflict, landmarks are preferentially used.

19.
Research with humans and with nonhuman species has suggested a special role of room geometry in spatial memory functioning. In two experiments, participants learned the configuration of a room with four corners, along with the configuration of four objects within the room, while standing in a fixed position at the room’s periphery. The configurations were either rectangular (Experiment 1) or irregular (Experiment 2). Room geometry was not recalled better than object layout geometry, and memories for both configurations were orientation dependent. These results suggest that room geometry and object layout geometry are represented similarly in human memory, at least in situations that promote long-term learning of object locations. There were also some differences between corners and objects in orientation dependence, suggesting that the two sources of information are represented in similar but separate spatial reference systems.

20.
Past research (e.g., J. M. Loomis, Y. Lippa, R. L. Klatzky, & R. G. Golledge, 2002) has indicated that spatial representations derived from spatial language can function equivalently to those derived from perception. The authors tested functional equivalence for reporting spatial relations that were not explicitly stated during learning. Participants learned a spatial layout by visual perception or spatial language and then made allocentric direction and distance judgments. Experiments 1 and 2 indicated allocentric relations could be accurately reported in all modalities, but visually perceived layouts, tested with or without vision, produced faster and less variable directional responses than language. In Experiment 3, when participants were forced to create a spatial image during learning (by spatially updating during a backward translation), functional equivalence of spatial language and visual perception was demonstrated by patterns of latency, systematic error, and variability.
