Similar articles
20 similar articles found (search time: 15 ms)
1.
Research on the relationship between the representation of space and time has produced two contrasting proposals. ATOM posits that space and time are represented via a common magnitude system, suggesting a symmetrical relationship between space and time. According to metaphor theory, however, representations of time depend on representations of space asymmetrically. Previous findings in humans have supported metaphor theory. Here, we investigate the relationship between time and space in a nonverbal species, by testing whether non-human primates show space–time interactions consistent with metaphor theory or with ATOM. We tested two rhesus monkeys and 16 adult humans in a nonverbal task that assessed the influence of an irrelevant dimension (time or space) on a relevant dimension (space or time). In humans, spatial extent had a large effect on time judgments whereas time had a small effect on spatial judgments. In monkeys, both spatial and temporal manipulations showed large bi-directional effects on judgments. In contrast to humans, spatial manipulations in monkeys did not produce a larger effect on temporal judgments than the reverse. Thus, consistent with previous findings, human adults showed asymmetrical space–time interactions that were predicted by metaphor theory. In contrast, monkeys showed patterns that were more consistent with ATOM.

2.
People confined to a closed space live in a visual environment that differs from a natural open-space environment in several respects. The view is restricted to no more than a few meters, and nearby objects cannot be perceived relative to the position of a horizon. Thus, one might expect to find changes in visual space perception as a consequence of the prolonged experience of confinement. The subjects in our experimental study were participants of the Mars-500 project and spent nearly a year and a half isolated from the outside world during a simulated mission to Mars. The participants were presented with a battery of computer-based psychophysical tests examining their performance on various 3-D perception tasks, and we monitored changes in their perceptual performance throughout their confinement. Contrary to our expectations, no serious effect of the confinement on the crewmembers’ 3-D perception was observed in any experiment. Several interpretations of these findings are discussed, including the possibilities that (1) the crewmembers’ 3-D perception really did not change significantly, (2) changes in 3-D perception were manifested in the precision rather than the accuracy of perceptual judgments, and/or (3) the experimental conditions and the group sample were problematic.

3.
What is the relationship between space and time in the human mind? Studies in adults show an asymmetric relationship between mental representations of these basic dimensions of experience: Representations of time depend on space more than representations of space depend on time. Here we investigated the relationship between space and time in the developing mind. Native Greek-speaking children watched movies of two animals traveling along parallel paths for different distances or durations and judged the spatial and temporal aspects of these events (e.g., Which animal went for a longer distance, or a longer time?). Results showed a reliable cross-dimensional asymmetry. For the same stimuli, spatial information influenced temporal judgments more than temporal information influenced spatial judgments. This pattern was robust to variations in the age of the participants and the type of linguistic framing used to elicit responses. This finding demonstrates a continuity between space-time representations in children and adults, and informs theories of analog magnitude representation.

4.
Yu AB, Zacks JM. Memory & Cognition, 2010, 38(7): 982-993.
We present evidence that different mental spatial transformations are used to reason about three different types of items representing a spectrum of animacy: human bodies, nonhuman animals, and inanimate objects. Participants made two different judgments about rotated figures: handedness judgments (“Is this the left or right side?”) and matching judgments (“Are these figures the same?”). Perspective-taking strategies were most prevalent when participants made handedness judgments about human bodies and animals. In contrast, participants generally did not imagine changes in perspective to perform matching judgments. Such results suggest that high-level information about semantic categories, including information about a thing’s animacy, can influence how spatial representations are transformed when performing online problem solving. Supplemental materials for this article may be downloaded from http://mc.psychonomic-journals.org/content/supplemental.

5.
By asking participants to complete spatial reference frame judgment tasks in near space and in far space, this study examined the interaction between spatial dominance and spatial reference frames in hearing-impaired and normal-hearing populations. The results showed that (1) compared with normal-hearing participants, hearing-impaired participants had longer reaction times on egocentric reference frame judgment tasks, whereas no significant difference was found on allocentric (environment-centered) reference frame judgment tasks; and (2) the interaction between spatial dominance and spatial reference frames showed opposite patterns in the hearing-impaired and normal-hearing groups. These findings indicate that after hearing loss, the interaction between spatial dominance and spatial reference frames in hearing-impaired individuals changes as well.

6.
Time in the mind: using space to think about time
Casasanto D, Boroditsky L. Cognition, 2008, 106(2): 579-593.
How do we construct abstract ideas like justice, mathematics, or time-travel? In this paper we investigate whether mental representations that result from physical experience underlie people's more abstract mental representations, using the domains of space and time as a testbed. People often talk about time using spatial language (e.g., a long vacation, a short concert). Do people also think about time using spatial representations, even when they are not using language? Results of six psychophysical experiments revealed that people are unable to ignore irrelevant spatial information when making judgments about duration, but not the converse. This pattern, which is predicted by the asymmetry between space and time in linguistic metaphors, was demonstrated here in tasks that do not involve any linguistic stimuli or responses. These findings provide evidence that the metaphorical relationship between space and time observed in language also exists in our more basic representations of distance and duration. Results suggest that our mental representations of things we can never see or touch may be built, in part, out of representations of physical experiences in perception and motor action.

7.
Primary somatosensory maps in the brain represent the body as a discontinuous, fragmented set of two-dimensional (2-D) skin regions. We nevertheless experience our body as a coherent three-dimensional (3-D) volumetric object. The links between these different aspects of body representation, however, remain poorly understood. Perceiving the body's location in external space requires that immediate afferent signals from the periphery be combined with stored representations of body size and shape. At least for the back of the hand, this body representation is massively distorted, in a highly stereotyped manner. Here we test whether a common pattern of distortions applies to the entire hand as a 3-D object, or whether each 2-D skin surface has its own characteristic pattern of distortion. Participants judged the location in external space of landmark points on the dorsal and palmar surfaces of the hand. By analyzing the internal configuration of judgments, we produced implicit maps of each skin surface. Qualitatively similar distortions were observed in both cases. The distortions were correlated across participants, suggesting that the two surfaces are bound into a common underlying representation. The magnitude of distortion, however, was substantially smaller on the palmar surface, suggesting that this binding is incomplete. The implicit representation of the human hand may be a hybrid, intermediate between a 2-D representation of individual skin surfaces and a 3-D representation of the hand as a volumetric object.

8.
Multiple views of spatial memory
Recent evidence indicates that mental representations of large (i.e., navigable) spaces are viewpoint dependent when observers are restricted to a single view. The purpose of the present study was to determine whether two views of a space would produce a single viewpoint-independent representation or two viewpoint-dependent representations. Participants learned the locations of objects in a room from two viewpoints and then made judgments of relative direction from imagined headings either aligned or misaligned with the studied views. The results indicated that mental representations of large spaces were viewpoint dependent, and that two views of a spatial layout appeared to produce two viewpoint-dependent representations in memory. Imagined headings aligned with the study views were more accessible than were novel headings in terms of both speed and accuracy of pointing judgments.

9.
We live in a 3D world, and yet the majority of vision research is restricted to 2D phenomena, with depth research typically treated as a separate field. Here we ask whether 2D spatial information and depth information interact to form neural representations of 3D space, and if so, what are the perceptual implications? Using fMRI and behavioural methods, we reveal that human visual cortex gradually transitions from 2D to 3D spatial representations, with depth information emerging later along the visual hierarchy, and demonstrate that 2D location holds a fundamentally special place in early visual processing.

10.
Previous evidence suggests that attention can operate on object-based representations. It is not known whether these representations encode depth information and whether object depth, if encoded, is in viewer- or object-centered coordinates. To examine these questions, we employed a spatial cuing paradigm in which one corner of a 3-D object was exogenously cued with 75% validity. By rotating the object in depth, we can determine whether validity effects are modulated by 2-D or 3-D cue-target distance and whether validity effects depend on the position of the viewer relative to the object. When the image of a 3-D object was present (Experiments 1A and 1B), validity effects were not modulated by changes in 2-D cue-target distance, and shifting attention toward the viewer led to smaller validity effects than did shifting attention away from the viewer. When there was no object in the display (Experiments 2A and 2B), validity effects increased linearly as a function of 2-D cue-target distance. These results demonstrate that attention spreads across representations of perceived objects that encode depth information and that the object’s orientation in depth is encoded in viewer-centered coordinates.

11.
Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly, there is little work on the perceptual mechanisms that allow us to flexibly represent arbitrary spatial relationships, e.g., between objects in a novel room, or the elements within a map, graph or diagram. We describe two classes of mechanism that might allow such judgments. In the simultaneous class, both objects are selected concurrently. In contrast, we propose a sequential class, where objects are selected individually over time. We argue that this latter mechanism is more plausible even though it violates our intuitions. We demonstrate that shifts of selection do occur during spatial relationship judgments that feel simultaneous, by tracking selection with an electrophysiological correlate. We speculate that static structure across space may be encoded as a dynamic sequence across time. Flexible visual spatial relationship processing may serve as a case study of more general visual relation processing beyond space, to other dimensions such as size or numerosity.

12.
Active navigation and orientation-free spatial representations
In this study, we examined the orientation dependency of spatial representations following various learning conditions. We assessed the spatial representations of human participants after they had learned a complex spatial layout via map learning, via navigating within a real environment, or via navigating through a virtual simulation of that environment. Performances were compared between conditions involving (1) multiple- versus single-body orientation, (2) active versus passive learning, and (3) high versus low levels of proprioceptive information. Following learning, the participants were required to produce directional judgments to target landmarks. Results showed that the participants developed orientation-specific spatial representations following map learning and passive learning, as indicated by better performance when tested from the initial learning orientation. These results suggest that neither the number of vantage points nor the level of proprioceptive information experienced are determining factors; rather, it is the active aspect of direct navigation that leads to the development of orientation-free representations.

13.
Kosslyn (1987) theorized that the visual system uses two types of spatial relations. Categorical spatial relations represent a range of locations as an equivalence class, whereas coordinate spatial relations represent the precise distance between two objects. Data indicate a left hemisphere (LH) advantage for processing categorical spatial relations and a right hemisphere (RH) advantage for processing coordinate spatial relations. Although generally assumed to be independent processes, this article proposes a possible connection between categorical and coordinate spatial relations. Specifically, categorical spatial relations may be an initial stage in the formation of coordinate spatial relations. Three experiments tested the hypothesis that categorical information would benefit tasks that required coordinate judgments. Experiments 1 and 2 presented categorical information before participants made coordinate judgments and coordinate information before participants made categorical judgments. Categorical information sped the processing of a coordinate task under a range of experimental variables; however, coordinate information did not benefit categorical judgments. Experiment 3 used this priming paradigm to present stimuli in the left or right visual field. Although visual field differences were present in the third experiment, categorical information did not speed the processing of a coordinate task. The lack of priming effects in Experiment 3 may have been due to methodological changes. In general, support is provided that categorical spatial relations may act as an initial step in the formation of more precise distance representations, i.e., coordinate spatial relations.

14.
Causal power judgments when information is presented in summary form via different external representations
王墨耘, 傅小兰. 心理学报 (Acta Psychologica Sinica), 2004, 36(3): 298-306.
Under conditions where causal information was presented in summary form via three external representation formats (verbal statements, tables, and graphs), a direct-estimation paradigm for causal power was used to examine the characteristics of causal power estimation for single causal relations, testing the probabilistic contrast model, the power PC theory, and the pCI rule. A total of 287 undergraduate participants estimated the capacity of different chemical drugs to cause genetic mutation in animals. The results revealed four characteristics of causal power estimation for single causal relations: (1) asymmetry: estimates under preventive-cause conditions mostly conformed to the power PC theory, whereas estimates under generative-cause conditions generally conformed to the probabilistic contrast model; (2) the three external representation formats (verbal statements, tables, and graphs) did not affect causal power estimation under generative-cause conditions, but did affect it under preventive-cause conditions, where graphical representation, compared with verbal and tabular representation, led more participants to estimate causal power according to the power PC theory; (3) no participants used the pCI rule; and (4) there were marked individual differences in the rules participants used to estimate causal power.

15.
The ability of observers to perceive three-dimensional (3-D) distances or lengths along intrinsically curved surfaces was investigated in three experiments. Three physically curved surfaces were used: convex and/or concave hemispheres (Experiments 1 and 3) and a hyperbolic paraboloid (Experiment 2). The first two experiments employed a visual length-matching task, but in the final experiment the observers estimated the surface lengths motorically by varying the separation between their two index fingers. In general, the observers' judgments of surface length in both tasks (perceptual vs. motoric matching) were very precise but were not necessarily accurate. Large individual differences (overestimation, underestimation, etc.) in the perception of length occurred. There were also significant effects of viewing distance, type of surface, and orientation of the spatial intervals on the observers' judgments of surface length. The individual differences and failures of perceptual constancy that were obtained indicate that there is no single relationship between physical and perceived distances on 3-D surfaces that is consistent across observers.

16.
Three experiments explored representations of spaces depicted on long-running television shows. The first two experiments tested representations of the space depicted in the show ER, which is filmed on a multiple-view set that allows the action to be viewed from any vantage point. Participants who had not seen the show, as well as those who had seen it frequently, made judgments about relative directions on the ER set. The experienced viewers were unable to perform this task more accurately than novices. In the third experiment, representations of two multiple-view sets (ER and West Wing) were compared with representations of more traditional constrained-view sets in which camera positions are limited to the region behind a “fourth wall.” Results demonstrated that experience watching the constrained-view shows was much more strongly associated with accurate representations than was experience with the multiple-view shows. In addition, a novel view of a constrained set was tested, and experience again did not facilitate correct responding. These results suggest that long-term spatial memories can result from short-term spatial coding of individual scenes, but only when views are generally consistent.

17.
Across cultures people construct spatial representations of time. However, the particular spatial layouts created to represent time may differ across cultures. This paper examines whether people automatically access and use culturally specific spatial representations when reasoning about time. In Experiment 1, we asked Hebrew and English speakers to arrange pictures depicting temporal sequences of natural events, and to point to the hypothesized location of events relative to a reference point. In both tasks, English speakers (who read left to right) arranged temporal sequences to progress from left to right, whereas Hebrew speakers (who read right to left) arranged them from right to left, replicating previous work. In Experiments 2 and 3, we asked the participants to make rapid temporal order judgments about pairs of pictures presented one after the other (i.e., to decide whether the second picture showed a conceptually earlier or later time-point of an event than the first picture). Participants made responses using two adjacent keyboard keys. English speakers were faster to make "earlier" judgments when the "earlier" response needed to be made with the left response key than with the right response key. Hebrew speakers showed exactly the reverse pattern. Asking participants to use a space-time mapping inconsistent with the one suggested by writing direction in their language created interference, suggesting that participants were automatically creating writing-direction consistent spatial representations in the course of their normal temporal reasoning. It appears that people automatically access culturally specific spatial representations when making temporal judgments even in nonlinguistic tasks.

18.
Three studies investigated the factors that lead spatial information to be stored in an orientation-specific versus orientation-free manner. In Experiment 1, we replicated the findings of Presson and Hazelrigg (1984) that learning paths from a small map versus learning the paths directly from viewing a world leads to different functional characteristics of spatial memory. Whether the route display was presented as the path itself or as a large map of the path did not affect how the information was stored. In Experiment 2, we examined the effects of size of stimulus display, size of world, and scale transformations on how spatial information in maps is stored and available for use in later judgments. In Experiment 3, we examined the effect of size on the orientation specificity of the spatial coding of paths that are viewed directly. The major determinant of whether spatial information was stored and used in an orientation-specific or an orientation-free manner was the size of the display. Small displays were coded in an orientation-specific way, whereas very large displays were coded in a more orientation-free manner. These data support the view that there are distinct spatial representations, one more perceptual and episodic and one more integrated and model-like, that have developed to meet different demands faced by mobile organisms.

19.
We studied the development of spatial frames of reference in children aged 3-6 years, who retrieved hidden toys from an array of identical containers bordered by landmarks under four conditions. By moving the child and/or the array between presentation and test, we varied the consistency of the hidden toy with (i) the body, and (ii) the testing room. The toy's position always remained consistent with (iii) the array and bordering landmarks. We found separate, additive performance advantages for consistency with body and room. These effects were already present at 3 years. A striking finding was that the room effect, which implies allocentric representations of the room and/or egocentric representations updated by self-motion, was much stronger in the youngest children than the body effect, which implies purely egocentric representations. Children as young as 3 years therefore had, and greatly favoured, spatial representations that were not purely egocentric. Viewpoint-independent recall based only on the array and bordering landmarks emerged at 5 years. There was no evidence that this later-developing ability, which implies object-referenced (intrinsic) representations, depended on verbal encodings. These findings indicate that core components of adult spatial competence, including parallel egocentric and nonegocentric representations of space, are present as early as 3 years. These are supplemented by later-developing object-referenced representations.

20.
Four experiments investigated the conditions contributing to sensorimotor alignment effects (i.e., the advantage for spatial judgments from imagined perspectives aligned with the body). Through virtual reality technology, participants learned object locations around a room (learning room) and made spatial judgments from imagined perspectives aligned or misaligned with their actual facing direction. Sensorimotor alignment effects were found when testing occurred in the learning room but not after walking 3 m into a neighboring (novel) room. Sensorimotor alignment effects returned after returning to the learning room or after providing participants with egocentric imagery instructions in the novel room. Additionally, visual and spatial similarities between the test and learning environments were independently sufficient to cause sensorimotor alignment effects. Memory alignment effects, independent from sensorimotor alignment effects, occurred in all testing conditions. Results are interpreted in the context of two-system spatial memory theories positing separate representations to account for sensorimotor and memory alignment effects.

