Similar Documents
20 similar documents found (search time: 15 ms)
1.
This experiment aimed to understand how egocentric experiences, allocentric viewpoint-dependent representations, and allocentric viewpoint-independent representations interact when encoding and retrieving a spatial environment. Although several cognitive theories have highlighted the interaction between reference frames, the role of a real-time presentation of an allocentric viewpoint-dependent representation in the spatial organization of information remains unclear. Sixty participants were asked to navigate two virtual cities to memorize the position of a hidden object. Half of the participants could visualize the virtual city with an interactive aerial view. They were then required to find the position of the object in three experimental conditions (“retrieval with an interactive aerial view” vs. “retrieval on a map” vs. “retrieval without an interactive aerial view”). Results revealed that participants were significantly more precise in retrieving the position of the object when immersed in an egocentric experience with the interactive aerial view. Retrieval of spatial information was facilitated by the interactive aerial view of the city, since it provides a real-time allocentric viewpoint-dependent representation. Participants with a strong preference for using cardinal points tended to be more accurate when asked to retrieve the position of the object on the map. As suggested by the mental frame syncing hypothesis, the presence of an allocentric viewpoint-dependent representation during retrieval appears to ease the imposition of a specific viewpoint on the stored abstract allocentric viewpoint-independent representation. These findings represent a further step toward understanding how spatial representations of the environment are organized.

2.
The body specificity hypothesis (Casasanto, 2009) posits that the way in which people interact with the world affects their mental representation of information. For instance, right- versus left-handedness affects the mental representation of affective valence, with right-handers categorically associating good with rightward areas and bad with leftward areas, and left-handers doing the opposite. In two experiments we test whether this hypothesis extends to spatial memory, whether the effect can be measured continuously, whether it is predicted by degree of handedness, and how the application of such a heuristic varies as a function of informational specificity. Experiment 1 demonstrates systematic and continuous spatial location memory biases as a function of associated affective information; right-handed individuals misremembered positively- and negatively-valenced locations as further right and left, respectively, relative to their original locations. Left-handed individuals did the opposite, and in general those with stronger right- or left-handedness showed greater spatial memory biases. Experiment 2 tested whether participants would show similar effects when studying a map with high visual specificity (i.e., zoomed in); they did not. Overall, we support the hypothesis that handedness affects the coding of affective information, and we better specify the scope and nature of body-specific effects on spatial memory.

3.
Mental representations of spatial relations

4.
The authors investigated whether college students possess abstract rules concerning the applicability conditions for three spatial diagrams that are important tools for thinking: matrices, networks, and hierarchies. A total of 127 students were asked to select which type of diagram would be best for organising the information in each of several short scenarios. The scenarios were written using three different story contexts: (a) neutral, presenting a real-life situation but not cueing a particular representation; (b) abstract, presenting only variable names and relations; and (c) incongruent, in which the context and informational structure cued different representations. The results indicated above-chance performance on the abstract scenarios, as well as comparable performance on the abstract and neutral context scenarios. In a follow-up study in which eight students thought out loud while selecting diagrams for the abstract scenarios, there were almost no references to concrete examples. The results of these studies suggest that students possess abstract rules concerning the applicability conditions for matrices, networks, and hierarchies.

5.
Ekstrom et al. report the responses of single neurons recorded from the brains of human subjects performing a spatial navigation task in virtual reality. They found cells encoding the subject's current location, view and destination. These data, and related findings in animals, directly reveal some of the representations underlying spatial cognition. They highlight the potential for cognitive psychology and systems neuroscience to combine to provide a neuronal-level understanding of human behaviour.

6.
Can the visual system extrapolate the spatial layout of a scene to new viewpoints after a single view? In the present study, we examined this question by investigating the priming of spatial layout across depth rotations of the same scene (Sanocki & Epstein, 1997). Participants had to indicate which of two dots superimposed on objects in the target scene appeared closer to them in space. There was as much priming from a prime with a viewpoint that was 10° different from the test image as from a prime that was identical to the target; however, there was no reliable priming from larger differences in viewpoint. These results suggest that a scene’s spatial layout can be extrapolated, but only to a limited extent.

7.
Active navigation and orientation-free spatial representations
In this study, we examined the orientation dependency of spatial representations following various learning conditions. We assessed the spatial representations of human participants after they had learned a complex spatial layout via map learning, via navigating within a real environment, or via navigating through a virtual simulation of that environment. Performance was compared between conditions involving (1) multiple versus single body orientations, (2) active versus passive learning, and (3) high versus low levels of proprioceptive information. Following learning, the participants were required to produce directional judgments to target landmarks. Results showed that the participants developed orientation-specific spatial representations following map learning and passive learning, as indicated by better performance when tested from the initial learning orientation. These results suggest that neither the number of vantage points nor the level of proprioceptive information experienced is the determining factor; rather, it is the active aspect of direct navigation that leads to the development of orientation-free representations.

8.
Li X, Mou W, & McNamara TP (2012). Cognition, 124(2), 143-155.
Four experiments tested whether there are enduring spatial representations of objects' locations in memory. Previous studies have shown that under certain conditions the internal consistency of pointing to objects using memory is disrupted by disorientation. This disorientation effect has been attributed to an absence of or to imprecise enduring spatial representations of objects' locations. Experiment 1 replicated the standard disorientation effect. Participants learned locations of objects in an irregular layout and then pointed to objects after physically turning to face an object and after disorientation. The expected disorientation was observed. In Experiment 2, after disorientation, participants were asked to imagine they were facing the original learning direction and then physically turned to adopt the test orientation. In Experiment 3, after disorientation, participants turned to adopt the test orientation and then were informed of the original viewing direction by the experimenter. A disorientation effect was not observed in Experiment 2 or 3. In Experiment 4, after disorientation, participants turned to face the test orientation but were not told the original learning orientation. As in Experiment 1, a disorientation effect was observed. These results suggest that there are enduring spatial representations of objects' locations specified in terms of a spatial reference direction parallel to the learning view, and that the disorientation effect is caused by uncertainty in recovering the spatial reference direction relative to the testing orientation following disorientation.

9.
Two studies investigate the nature of representations in spatial working memory, directly addressing the question of whether people represent configural information above and beyond independent positional information when representing multiple sequentially presented locations. In Experiment 1, participants performed a location memory task in which they recalled the locations of objects that were presented sequentially on a screen. Comparison of participants' data with simulated data modelled to represent independent positional encoding of spatial locations revealed that participants represented the configural properties of shorter sequences (3 and 4 locations) but not of longer ones (5 and 7 locations). Experiment 2 employed a sequential recognition task in which participants were asked to judge whether two consecutively presented spatial sequences were identical. These experiments confirmed sensitivity to the configural properties of spatial sequences.
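The comparison logic behind the independent-positions baseline can be sketched as follows. This is a hypothetical illustration, not the authors' actual model: the layout coordinates, noise level, and error measure are all invented stand-ins. Each location is recalled with its own independent Gaussian error, and the resulting distortion of the configuration (inter-point distances) gives a baseline against which human configural error could be compared.

```python
import math
import random

def simulate_recall(true_locs, sd, rng):
    """Independent-positions model: each location is recalled with its own
    Gaussian error, with nothing preserving the overall configuration."""
    return [(x + rng.gauss(0, sd), y + rng.gauss(0, sd)) for x, y in true_locs]

def mean_pairwise_distance_error(true_locs, recalled):
    """Distortion of the configuration: average absolute change in the
    distances between every pair of points."""
    errs, n = [], len(true_locs)
    for i in range(n):
        for j in range(i + 1, n):
            d_true = math.dist(true_locs[i], true_locs[j])
            d_rec = math.dist(recalled[i], recalled[j])
            errs.append(abs(d_rec - d_true))
    return sum(errs) / len(errs)

rng = random.Random(0)                      # seeded for reproducibility
locs = [(0, 0), (4, 1), (1, 3), (5, 5)]     # hypothetical 4-location layout
sims = [mean_pairwise_distance_error(locs, simulate_recall(locs, 1.0, rng))
        for _ in range(2000)]
baseline = sum(sims) / len(sims)
# Participants whose configural error falls reliably below this baseline
# would be encoding the configuration over and above independent positions.
```

Under this reading, the short-sequence result would correspond to human pairwise-distance error falling below the independent-noise baseline for 3-4 locations but not for 5-7.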

10.
When multidimensional scaling solutions are used to study semantic concepts, the dimensionality of the optimal configuration has to be determined. Several strategies have been proposed to choose the appropriate dimensionality. In the present paper, the traditional dimensionality choice criteria were evaluated and compared to a method based on the prediction of an external criterion. Two studies were conducted in which the typicality of an exemplar within a semantic concept was predicted from its distance to the concept centroid. In contrast to the low-dimensional solutions selected by the traditional methods, predictions of an external criterion improved with additional dimensions up to dimensionalities much higher than is common in the literature. This suggests that traditional methods underestimate the richness of semantic concepts as revealed in spatial representations derived from similarity measures.

11.
Four experiments investigated the representation and integration in memory of spatial and nonspatial relations. Subjects learned two-dimensional spatial arrays in which critical pairs of object names were semantically related (Experiment 1), semantically and episodically related (Experiment 2), or just episodically related (Experiments 3a and 3b). Episodic relatedness was established in a paired-associate learning task that preceded array learning. After learning an array, subjects participated in two tasks: item recognition, in which the measure of interest was priming; and distance estimation. Priming in item recognition was sensitive to the Euclidean distance between object names and, for neighbouring locations, to nonspatial relations. Errors in distance estimations varied as a function of distance but were unaffected by nonspatial relations. These and other results indicated that nonspatial relations influenced the probability of encoding spatial relations between locations but did not lead to distorted spatial memories.

12.
Cole, Gellatly, and Blurton have shown that targets presented adjacent to geometric corners are detected more efficiently than targets presented adjacent to straight edges. In six experiments, we examined how this corner enhancement effect is modulated by corner-of-object representations (i.e., corners that define an object's shape) and local base-level corners that occur as a result of, for instance, overlapping the straight edges of two objects. The results show that the corner phenomenon is greater for corners of object representations than for corners that do not define an object's shape. We also examined whether the corner effect persists within the contour boundaries of an object, as well as on the outside. The results showed that a spatial gradient of attention accompanies the corner effect outside the contour boundaries of an object but that processing within an object is uniform, with no corner effect occurring. We discuss these findings in relation to space-based and object-based theories of attention.

13.
The structure of graphemic representations
Caramazza A, & Miceli G (1990). Cognition, 37(3), 243-297.
The analysis of the spelling performance of a brain-damaged dysgraphic subject is reported. The subject's spelling performance was affected by various graphotactic factors, such as the distinction between consonant and vowel and graphosyllabic structure. For example, while the subject produced many consonant and vowel deletion errors when these were part of consonant and vowel clusters, respectively (e.g., sfondo → sondo; giunta → gunta), deletions were virtually never produced for single consonants flanked by two vowels (e.g., onesto → oesto) or for single vowels flanked by two consonants (e.g., tirare → trare). The demonstration that graphosyllabic factors affect spelling performance disconfirms the hypothesis that graphemic representations consist simply of linearly ordered sets of graphemes. It is concluded that graphemic representations are multidimensional structures: one dimension specifies the grapheme identities that comprise the spelling of a word; a second dimension specifies the consonant/vowel status of the graphemes; a third dimension represents the graphosyllabic structure of the grapheme string; and a fourth dimension provides information about geminate features.

14.
It has been shown that spatial information can be acquired from both visual and nonvisual modalities. The present study explored how spatial information from vision and proprioception was represented in memory, investigating orientation dependence of spatial memories acquired through visual and proprioceptive spatial learning. Experiment 1 examined whether visual learning alone and proprioceptive learning alone yielded orientation-dependent spatial memory. Results showed that spatial memories from both types of learning were orientation dependent. Experiment 2 explored how different orientations of the same environment were represented when they were learned visually and proprioceptively. Results showed that both visually and proprioceptively learned orientations were represented in spatial memory, suggesting that participants established two different reference systems based on each type of learning experience and interpreted the environment in terms of these two reference systems. The results provide some initial clues to how different modalities make unique contributions to spatial representations.

15.
When people learn an environment, they appear to establish a principal orientation, just as they would determine the “top” of a novel object. Evidence for reference orientations has largely come from observations of orientation dependence in pointing judgments: Participants are most accurate when asked to recall the space from a particular orientation. However, these investigations have used highly constrained encoding in both time-scale and navigational goals, leaving open the possibility that larger spaces experienced during navigational learning depend on a different organizational scheme. To test this possibility, we asked undergraduates to perform judgments of relative direction on familiar landmarks around their well-learned campus. Participants showed clear evidence for a single reference orientation, generally aligned along salient axes defined by the buildings and paths. This result argues that representing space involves the establishment of a reference orientation, a requirement that endures over repeated exposures and extensive experience.

16.
When one moves, the spatial relationship between oneself and the entire world changes. Spatial updating refers to the cognitive process that computes these relationships as one moves. In two experiments, we tested whether spatial updating occurs automatically for multiple environments simultaneously. Participants turned relative to either a room or the surrounding campus buildings and then pointed to targets in both the environment in which they turned (updated environment) and the other environment (nonupdated environment). The participants automatically updated the room targets when they moved relative to the campus, but they did not update the campus targets when they moved relative to the room. Thus, automatic spatial updating depends on the nature of the environment. Implications for theories of spatial learning and the structure of human spatial representations are discussed.

17.
Humans can reach for objects with their hands whether the objects are seen, heard or touched. Thus, the position of objects is recoded in a joint-centered frame of reference regardless of the sensory modality involved. Our study indicates that this frame of reference is not the only one shared across sensory modalities. The location of reaching targets is also encoded in eye-centered coordinates, whether the targets are visual, auditory, proprioceptive or imaginary. Furthermore, the remembered eye-centered location is updated after each eye and head movement. This is quite surprising since, in principle, a reaching motor command can be computed from any non-visual modality without ever recovering the eye-centered location of the stimulus. This finding may reflect the predominant role of vision in human spatial perception.
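The two reference frames discussed above can be illustrated with a deliberately simplified one-dimensional sketch. All function names and positions here are hypothetical, and real gaze remapping is three-dimensional and nonlinear; the point is only the bookkeeping: a target stored in eye-centered coordinates can be re-expressed relative to a joint, and the eye-centered value must be remapped after every gaze shift, whereas a joint-centered code would not need that update.

```python
def eye_to_joint(target_eye, eye_pos, joint_pos):
    """Re-express an eye-centered position relative to a joint (same 1-D axis):
    recover the allocentric position, then subtract the joint's position."""
    target_world = eye_pos + target_eye
    return target_world - joint_pos

def update_after_eye_movement(target_eye, eye_shift):
    """Remap a stored eye-centered location after a gaze shift: the target's
    position relative to the eyes moves opposite to the eye movement."""
    return target_eye - eye_shift
```

For example, a target 10 units right of gaze, with the eyes at +5 and the shoulder at -20, sits 35 units from the joint; after a 4-unit rightward gaze shift the same target is only 6 units right of gaze, even though its joint-centered position is unchanged.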

18.
Two experiments used a selective interference procedure in an attempt to determine whether nonverbal visual stimuli were represented in memory in a verbal or spatial format. A spatial representation was clearly implicated. In both experiments, Ss were required to remember either the positions or the identities of seven target items in a 25-item array. During the retention interval for that information, Ss attempted to recognize schematic face or airplane photograph stimuli in a same-different memory task. Memory performance on one or both tasks was greatly impaired when the recall task involved position or spatial information, but was either much less or not at all affected by an identity or verbal information recall task. Because of the selective nature of the interference and on the basis of certain correlational evidence, the experimental results were also interpreted as providing support for the notion that verbal and spatial information are stored and processed in separate information-processing systems.

19.
Spatial memory and reasoning rely heavily on allocentric (often map-like) representations of spatial knowledge. While research has documented many ways in which spatial information can be represented in allocentric form, less is known about how such representations are constructed. For example: Are the very early, pre-attentive parts of the process hard-wired, or can they be altered by experience? We addressed this issue by presenting sub-saccadic (53 ms) masked stimuli consisting of a target among one to three reference features. We then shifted the location of the feature array, and asked participants to identify the target’s new relative location. Experience altered feature processing even when the display duration was too short to allow attention re-allocation. The results demonstrate the importance of early perceptual processes in the creation of representations of spatial location, and the malleability of those processes based on experience and expectations.

20.
This study investigates whether inductive processes influencing spatial memory performance generalize to supervised learning scenarios with differential feedback. After providing a location memory response in a spatial recall task, participants received visual feedback showing the target location. In critical blocks, feedback was systematically biased either 4° toward the vertical axis (toward condition) or 4° farther away from the vertical axis (away condition). Results showed that the weaker teaching signal (i.e., a smaller difference between the remembered location and the feedback location) produced a stronger experience-dependent change over blocks in the away condition than in the toward condition. This violates delta rule learning. Subsequent simulations of the dynamic field theory of spatial cognition provide a theoretically unified account of these results.
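The delta-rule prediction that this result violates can be sketched in a few lines. The learning rate and angles below are hypothetical (chosen so that, as in the study, the remembered location is biased away from the vertical axis, making the away feedback the weaker teaching signal): under a delta rule, each trial shifts memory toward the feedback in proportion to the error, so the condition with the smaller memory-feedback discrepancy must produce the smaller change, which is the opposite of what the authors report.

```python
def delta_rule_update(memory_deg, feedback_deg, alpha=0.5):
    """One delta-rule step: shift memory toward feedback in proportion
    to the teaching signal (feedback minus current memory)."""
    error = feedback_deg - memory_deg
    return memory_deg + alpha * error

# Hypothetical numbers: the true target is 40 deg from the vertical axis,
# but memory has drifted away from the axis to 42 deg. "Toward" feedback
# (36 deg) then carries a teaching signal of magnitude 6; "away" feedback
# (44 deg) carries the weaker signal, magnitude 2.
memory = 42.0
toward_shift = delta_rule_update(memory, 36.0) - memory   # -3.0
away_shift = delta_rule_update(memory, 44.0) - memory     # +1.0
# Delta rule: |shift| grows with |error|, so the away condition should
# change least. The reported data show the reverse, violating the rule.
```

The dynamic field account mentioned in the abstract replaces this error-proportional update with interacting excitatory and inhibitory dynamics, which is how it accommodates the stronger change under the weaker signal.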


Copyright©北京勤云科技发展有限公司  京ICP备09084417号