Similar Documents
20 similar documents found (search time: 46 ms)
1.
Temporal characteristics of the spatial relative-position effect
Using directional arrows as probe stimuli, we examined reaction-time (RT) patterns in an object-search task performed within an imagined space generated by story reading. The results showed: (1) the direction of the arrow affected object search, with the RT pattern left = right = front < back, indicating that object search with directional-arrow probes does not involve a transformation of the person-object spatial relation; (2) the position of the target object relative to the attended object also affected search, with the RT pattern attended location < opposite the attended location < left of the attended location = right of the attended location, indicating that the relative-position effect is unrelated to transformations of the person-object spatial relation. These results support the two-stage theory.

2.
Research on visuospatial memory has shown that egocentric (subject-to-object) and allocentric (object-to-object) reference frames are connected to categorical (non-metric) and coordinate (metric) spatial relations, and that motor resources are recruited especially when processing spatial information in peripersonal (within arm reaching) than extrapersonal (outside arm reaching) space. In order to perform our daily-life activities, these spatial components cooperate along a continuum from recognition-related (e.g., recognizing stimuli) to action-related (e.g., reaching stimuli) purposes. Therefore, it is possible that some types of spatial representations rely more on action/motor processes than others. Here, we explored the role of motor resources in the combinations of these visuospatial memory components. A motor interference paradigm was adopted in which participants had their arms bent behind their back or free during a spatial memory task. This task consisted in memorizing triads of objects and then verbally judging what was the object: (1) closest to/farthest from the participant (egocentric coordinate); (2) to the right/left of the participant (egocentric categorical); (3) closest to/farthest from a target object (allocentric coordinate); and (4) on the right/left of a target object (allocentric categorical). The triads appeared in participants' peripersonal (Experiment 1) or extrapersonal (Experiment 2) space. The results of Experiment 1 showed that motor interference selectively damaged egocentric-coordinate judgements but not the other spatial combinations. The results of Experiment 2 showed that the interference effect disappeared when the objects were in the extrapersonal space. A third follow-up study using a within-subject design confirmed the overall pattern of results. Our findings provide evidence that motor resources play an important role in the combination of coordinate spatial relations and egocentric representations in peripersonal space.  

3.
The representation of uniform motion in vision
M. T. Swanston, N. J. Wade, R. H. Day. Perception, 1987, 16(2), 143-159
For veridical detection of object motion, any moving detecting system must allocate motion appropriately between itself and objects in space. A model for such allocation is developed for simplified situations (points of light in uniform motion in a frontoparallel plane). It is proposed that motion of objects is registered and represented successively at four levels within frames of reference that are defined by the detectors themselves or by their movements. The four levels are referred to as retinocentric, orbitocentric, egocentric, and geocentric. Thus the retinocentric signal is combined with that for eye rotation to give an orbitocentric signal, and the left and right orbitocentric signals are combined to give an egocentric representation. Up to the egocentric level, motion representation is angular rather than three-dimensional. The egocentric signal is combined with signals for head and body movement and for egocentric distance to give a geocentric representation. It is argued that although motion perception is always geocentric, relevant registrations also occur at the three earlier levels. The model is applied to various veridical and nonveridical motion phenomena.
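The four-level hierarchy described in this abstract can be sketched as a toy computation. This is a hypothetical illustration only, not the authors' implementation: the function names, the simple additive combination rule, and the scalar angular-velocity signals are all assumptions made for clarity.

```python
# Toy sketch of the four-level motion-representation hierarchy
# (retinocentric -> orbitocentric -> egocentric -> geocentric)
# described in Swanston, Wade, and Day (1987). Signals are scalar
# angular velocities in a frontoparallel plane; the additive
# combination rule is a simplification for illustration.

def orbitocentric(retinal_motion: float, eye_rotation: float) -> float:
    """Combine one eye's retinal-motion signal with its eye-rotation signal."""
    return retinal_motion + eye_rotation

def egocentric(left_orbito: float, right_orbito: float) -> float:
    """Combine the two orbitocentric signals into a single egocentric value."""
    return (left_orbito + right_orbito) / 2.0

def geocentric(ego_motion: float, head_body_motion: float,
               ego_distance: float) -> float:
    """Scale angular egocentric motion by egocentric distance and add
    head/body movement to obtain motion relative to the world."""
    return ego_motion * ego_distance + head_body_motion

# Example: an object tracked with a pursuit eye movement. Retinal
# motion is zero, but the eye-rotation signal reinstates the object's
# motion at the orbitocentric and egocentric levels.
left = orbitocentric(retinal_motion=0.0, eye_rotation=5.0)
right = orbitocentric(retinal_motion=0.0, eye_rotation=5.0)
ego = egocentric(left, right)
geo = geocentric(ego, head_body_motion=0.0, ego_distance=1.0)
```

The example shows why the model predicts veridical geocentric perception even when the retinocentric signal alone is uninformative, as during smooth pursuit.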

4.
Learning a novel environment involves integrating first-person perceptual and motoric experiences with developing knowledge about the overall structure of the surroundings. The present experiments provide insights into the parallel development of these egocentric and allocentric memories by intentionally conflicting body- and world-centered frames of reference during learning, and measuring outcomes via online and offline measures. Results of two experiments demonstrate faster learning and increased memory flexibility following route perspective reading (Experiment 1) and virtual navigation (Experiment 2) when participants begin exploring the environment on a northward (vs. any other direction) allocentric heading. We suggest that learning advantages due to aligning body-centered (left/right/forward/back) with world-centered (NSEW) reference frames are indicative of three features of spatial memory development and representation. First, memories for egocentric and allocentric information develop in parallel during novel environment learning. Second, cognitive maps have a preferred orientation relative to world-centered coordinates. Finally, this preferred orientation corresponds to traditional orientation of physical maps (i.e., north is upward), suggesting strong associations between daily perceptual and motor experiences and the manner in which we preferentially represent spatial knowledge.

5.
Four experiments examined reference systems in spatial memories acquired from language. Participants read narratives that located 4 objects in canonical (front, back, left, right) or noncanonical (left front, right front, left back, right back) positions around them. Participants' focus of attention was first set on each of the 4 objects, and then they were asked to report the name of the object at the location indicated by a direction word or an iconic arrow. The results indicated that spatial memories were represented in terms of intrinsic (object-to-object) reference systems, which were selected using egocentric cues (e.g., alignment with body axes). Results also indicated that linguistic direction cues were comprehended in terms of egocentric reference systems, whereas iconic arrows were not.

6.
Right-handers tend to associate "good" with the right side of space and "bad" with the left. This implicit association appears to arise from the way people perform actions, more or less fluently, with their right and left hands. Here we tested whether observing manual actions performed with greater or lesser fluency can affect observers' space-valence associations. In two experiments, we assigned one participant (the actor) to perform a bimanual fine motor task while another participant (the observer) watched. Actors were assigned to wear a ski glove on either the right or left hand, which made performing the actions on this side of space disfluent. In Experiment 1, observers stood behind the actors, sharing their spatial perspective. After motor training, both actors and observers tended to associate "good" with the side of the actors' free hand and "bad" with the side of the gloved hand. To determine whether observers' space-valence associations were computed from their own perspectives or the actors', in Experiment 2 we asked the observer to stand face-to-face with the actor, reversing their spatial perspectives. After motor training, both actors and observers associated "good" with the side of space where disfluent actions had occurred from their own egocentric spatial perspectives; if "good" was associated with the actor's right-hand side it was likely to be associated with the observer's left-hand side. Results show that vicarious experiences of motor fluency can shape valence judgments, and that observers spontaneously encode the locations of fluent and disfluent actions in egocentric spatial coordinates.

7.
Clinical signs of damage to the egocentric reference system range from the inability to detect stimuli in the real environment to a defect in recovering items from an internal representation. Despite clinical dissociations, current interpretations consider all symptoms as due to a single perturbation, differentially expressed according to the medium explored (perceptual or representational). We propose an alternative account based on the functional distinction between two separate egocentric mechanisms: one allowing construction of the immediate point of view, the other extracting a required perspective within a mental representation. Support for this claim comes from recent results in the domain of navigation, showing that separate cognitive mechanisms maintain the egocentric reference when actively exploring the visual space as opposed to moving according to an internal map. These mechanisms likely follow separate developmental pathways, seemingly depend on distinct neural pathways, and are used independently by healthy adults, reflecting task demands and individual cognitive style. Implications for spatial cognition and social skills are discussed.

8.
Acta Psychologica, 2013, 143(1), 146-156
Previous studies suggest that mental rotation can be accomplished by using different mental spatial transformations. When adopting the allocentric transformation, individuals imagine the stimulus rotation referring to its intrinsic coordinate frame, while when adopting the egocentric transformation they rely on multisensory and sensory-motor mechanisms. However, how these mental transformations evolve during healthy aging has received little attention. Here we investigated how visual, multisensory, and sensory-motor components of mental imagery change with normal aging. Fifteen elderly and 15 young participants were asked to perform two different laterality tasks within either an allocentric or an egocentric frame of reference. Participants had to judge either the handedness of a visual hand (egocentric task) or the location of a marker placed on the left or right side of the same visual hand (allocentric task). Both left and right hands were presented at various angular departures to the left, the right, or to the center of the screen. When performing the egocentric task, elderly participants were less accurate and slower for biomechanically awkward hand postures (i.e., lateral hand orientations). Their performance also decreased when stimuli were presented laterally. The findings revealed that healthy aging is associated with a specific degradation of sensory-motor mechanisms necessary to accomplish complex effector-centered mental transformations. Moreover, failure to find a difference in judging left or right hand laterality suggests that aging does not necessarily impair non-dominant hand sensory-motor programs.

9.
Recent findings suggest that difficulties on small‐scale visuospatial tasks documented in Williams syndrome (WS) also extend to large‐scale space. In particular, individuals with WS often present with difficulties in allocentric spatial coding (encoding relationships between items within an environment or array). This study examined the effect of atypical spatial processing in WS on large‐scale navigational strategies, using a novel 3D virtual environment. During navigation of recently learnt large‐scale space, typically developing (TD) children predominantly rely on the use of a sequential egocentric strategy (recalling the sequence of left–right body turns throughout a route), but become more able to use an allocentric strategy between 5 and 10 years of age. The navigation strategies spontaneously employed by TD children between 5 and 10 years of age and individuals with WS were analysed. The ability to use an allocentric strategy on trials where spatial relational knowledge was required to find the shortest route was also examined. Results showed that, unlike TD children, during spontaneous navigation the WS group did not predominantly employ a sequential egocentric strategy. Instead, individuals with WS followed the path until the correct environmental landmarks were found, suggesting the use of a time‐consuming and inefficient view‐matching strategy for wayfinding. Individuals with WS also presented with deficits in allocentric spatial coding, demonstrated by difficulties in determining short‐cuts when required and difficulties developing a mental representation of the environment layout. This was found even following extensive experience in an environment, suggesting that – unlike in typical development – experience cannot contribute to the development of spatial relational processing in WS. 
This atypical presentation of both egocentric and allocentric spatial encoding is discussed in relation to specific difficulties on small-scale spatial tasks and known atypical cortical development in WS.

10.
This paper reports a series of experiments on the perceived position of the hand in egocentric space. The experiments focused on the bias in the proprioceptively perceived position of the hand at a series of locations spanning the midline from left to right. Perceived position was tested in a matching paradigm, in which subjects indicated the perceived position of a target, which could have been either a visual stimulus or their own fingertip, by placing the index finger of the other hand in the corresponding location on the other side of a fixed surface. Both the constant error, or bias, and the variable error, or consistency of matching attempts, were measured. Experiment 1 showed that (1) there is a far-left advantage in matching tasks, such that errors in perceived position are significantly lower in extreme-left positions than in extreme-right positions, and (2) there is a strong hand-bias effect in the absence of vision, such that the perceived positions of the left and right index fingertips held in the same actual target position in fact differ significantly. Experiments 2 and 3 demonstrated that this hand-bias effect is genuinely due to errors in the perceived position of the matched hand, and not to the attempt at matching it with the other hand. These results suggest that there is no unifying representation of egocentric, proprioceptive space. Rather, separate representations appear to be maintained for each effector. The bias of these representations may reflect the motor function of that effector.

11.
After a 5-minute inspection of 7 objects laid out on a shelf, subjects were seated with the objects behind them and answered questions about the locations and orientations of objects by throwing a switch left or right. The "visual image" subjects were told to imagine that the objects were still in front of them and to respond accordingly. The "real space" (RS) subjects were told to respond in terms of the positions of the objects in real space behind them. Thus correct responses (left vs. right) were completely opposite for the 2 groups. A control group responded while facing a curtain concealing the objects. The task was harder, by time and error criteria, for group RS than for the other 2 groups, but not dramatically so. All RS subjects denied using a response-reversal strategy. Some reported translating the objects from back to front and thus responding as to a mirror-image of the array. When this evasion was discouraged, RS subjects typically reported responding in terms of visual images located behind them and viewed as if by "eyes in the back of the head." The paradox of a visual image that corresponds to no possible visual input is discussed.

12.
Surrounding space is not inherently organized, but we tend to treat it as though it consisted of regions (e.g., front, back, right, and left). The current studies show that these conceptual regions have characteristics that reflect our typical interactions with space. Three experiments examined the relative sizes and resolutions of front, back, left, and right around oneself. Front, argued to be the most important horizontal region, was found to be (a) largest, (b) recalled with the greatest precision, and (c) described with the greatest degree of detail. Our findings suggest that some of the characteristics of the category model proposed by Huttenlocher, Hedges, and Duncan (1991) regarding memory for pictured circular displays may be generalized to space around oneself. More broadly, our results support and extend the spatial framework analysis of representation of surrounding space (Franklin & Tversky, 1990).

13.
A substantial amount of empirical and theoretical debate remains concerning the extent to which an ability to orient with respect to the environment is determined by global (i.e., principal axis of space), local (i.e., wall lengths, angles), and/or view-based (i.e., stored representation) accounts. We developed an orientation task that allowed the manipulation of the reliability of the principal axis of space (i.e., searching at the egocentric left- and/or right-hand side of the principal axis) between groups while maintaining goal distance from the principal axis, local cues specifying the goal location (i.e., short wall left, short wall right, and obtuse angle), and visual aspects of the goal location consistent across groups. Control and test trials revealed that participants trained with a reliable principal axis of space utilized both global and local geometric cues, whereas those trained with an unreliable principal axis of space utilized only local geometric cues. Results suggest that both global and local geometric cues are utilized for reorientation and that the reliability of the principal axis of an enclosure differentially influences the use of geometric cues. Such results have implications for purely global-based, purely local-based, and purely view-based matching theoretical accounts of geometry learning and provide evidence for a unified orientation process.

14.
The purpose of this paper was to verify whether left and right parietal brain lesions may selectively impair egocentric and allocentric processing of spatial information in near/far spaces. Two Right-Brain-Damaged (RBD), 2 Left-Brain-Damaged (LBD) patients (not affected by neglect or language disturbances) and eight normal controls were submitted to the Ego-Allo Task requiring distance judgments computed according to egocentric or allocentric frames of reference in near/far spaces. Subjects also completed a general neuropsychological assessment and the following visuospatial tasks: reproduction of the Rey-Osterrieth figure, line length judgement, point position identification, mental rotation, mental construction, line length memory, line length inference, and the Corsi block-tapping task. LBD patients presented difficulties in both egocentric and allocentric processing, whereas RBD patients dropped in egocentric but not in allocentric judgements, and in near but not far space. Further, RBD patients dropped in perceptually comparing linear distances, whereas LBD patients failed in memory for distances. The overall pattern of results suggests that the right hemisphere is specialized in processing metric information according to egocentric frames of reference. The data are interpreted according to a theoretical model that highlights the close link between egocentric processing and perceptual control of action.

15.
In the model of motion perception proposed by Swanston, Wade, and Day (1987, Perception, 16, 143-159) it was suggested that retinocentric motion and eye movement information are combined independently for each eye, to give left and right orbitocentric representations of movement. The weighted orbitocentric values are then added, to give a single egocentric representation. It is shown that for a physical motion observed without pursuit eye movements this formulation predicts a reduction in the perceived extent of motion with monocular as opposed to binocular viewing. This prediction was tested, and shown to be incorrect. Accordingly, a modification of the model is proposed, in which the left and right retinocentric signals are weighted according to the presence or absence of stimulation, and combined to give a binocular retinocentric representation. In a similar way left-eye and right-eye position signals are combined to give a single binocular eye movement signal for version. This is then added to the binocular retinocentric signal to give the egocentric representation. This modification provides a unified account of both static visual direction and movement perception.
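The revision described above can be sketched as a small numerical example. This is an illustrative toy, not the authors' formulation: the function names and the equal-weight averaging rule are assumptions; the point is only that stimulation-dependent weighting makes the monocular and binocular egocentric values agree, removing the incorrect prediction of the original model.

```python
# Sketch of the revised combination scheme: retinocentric signals are
# weighted by presence/absence of stimulation and merged into one
# binocular retinocentric value, which is added to a single binocular
# eye-movement (version) signal to give the egocentric representation.

def binocular_retinocentric(left_signal: float, right_signal: float,
                            left_stimulated: bool = True,
                            right_stimulated: bool = True) -> float:
    """Weight each eye's retinocentric signal by whether that eye is
    stimulated, then combine into one binocular value."""
    wl = 1.0 if left_stimulated else 0.0
    wr = 1.0 if right_stimulated else 0.0
    if wl + wr == 0.0:
        return 0.0  # no stimulation in either eye
    return (wl * left_signal + wr * right_signal) / (wl + wr)

def version_signal(left_eye_pos: float, right_eye_pos: float) -> float:
    """Single binocular eye-movement signal (version)."""
    return (left_eye_pos + right_eye_pos) / 2.0

def egocentric(left_signal: float, right_signal: float,
               left_eye_pos: float, right_eye_pos: float,
               left_stimulated: bool = True,
               right_stimulated: bool = True) -> float:
    return binocular_retinocentric(left_signal, right_signal,
                                   left_stimulated, right_stimulated) \
        + version_signal(left_eye_pos, right_eye_pos)

# Monocular viewing: only the stimulated eye contributes, so the
# egocentric extent matches the binocular case (no predicted reduction).
mono = egocentric(3.0, 0.0, 1.0, 1.0,
                  left_stimulated=True, right_stimulated=False)
bino = egocentric(3.0, 3.0, 1.0, 1.0)
```

Under the original additive scheme the unstimulated eye would have diluted the combined signal; weighting by stimulation keeps `mono` and `bino` equal, consistent with the reported data.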

16.
Much evidence has suggested that people conceive of time as flowing directionally in transverse space (e.g., from left to right for English speakers). However, this phenomenon has never been tested in a fully nonlinguistic paradigm where neither stimuli nor task use linguistic labels, which raises the possibility that time is directional only when reading/writing direction has been evoked. In the present study, English-speaking participants viewed a video where an actor sang a note while gesturing and reproduced the duration of the sung note by pressing a button. Results showed that the perceived duration of the note was increased by a long-distance gesture, relative to a short-distance gesture. This effect was equally strong for gestures moving from left to right and from right to left and was not dependent on gestures depicting movement through space; a weaker version of the effect emerged with static gestures depicting spatial distance. Since both our gesture stimuli and temporal reproduction task were nonlinguistic, we conclude that the spatial representation of time is nondirectional: Movement contributes, but is not necessary, to the representation of temporal information in a transverse timeline.

17.
Determined by anatomical characteristics, the binocular visual field has an elliptical shape. Whereas the focus of attention - that area of the visual field in which objects can be consciously perceived - is significantly smaller, its shape is not necessarily dependent on the same factors. Indeed, it is yet unknown if the maximum extents of the attentional focus' meridians are dimensionally stable or adapt to the egocentric perspective. In this study, we intended to measure the expansion up to which peripheral stimuli can still be perceived while participants were sitting and lying on the left and on the right body side. Results demonstrate that contrary to the visual field the maximum extents along the attentional focus' meridians remain stable across different head positions with a greater horizontal than vertical alignment. This indicates that the expansion ratios of the meridians of the attentional focus are not dependent on the egocentric perspective. Rather, they appear stable in space and gravity-based. Findings are discussed in terms of our visual environment and current as well as alternative interpretations are weighed against each other.

18.
Combining a colour-label matching task with a spatial reference-frame judgement task, this study examined the influence of the self-advantage effect on the representation of spatial reference frames in near and far space. The colour-label matching task required participants to form stable associations between colours (black/white forks) and label words (self/other). Participants were randomly assigned to a self-association group or an other-association group, and both groups completed the spatial reference-frame task in near and far space. The results showed: (1) compared with the other-association group, the self-association group exhibited a significant self-advantage effect; (2) the influence of the self-advantage effect on spatial reference-frame representation appeared only in near space and was stronger for the egocentric representation task. The findings indicate that the self-advantage effect preferentially influences the representation of near space, showing a near-space priority.

19.
Everyday visual experience constantly confronts us with things we can interact with in the real world. We literally feel the outside presence of physical objects in our environment via visual perceptual experience. The visual feeling of presence is a crucial feature of vision that is largely unexplored in the philosophy of perception, and poorly debated in vision neuroscience. The aim of this article is to investigate the feeling of presence. I suggest that visual feeling of presence depends on the visual representation of a very particular spatial relation with the object we interact with: the visual representation of absolute egocentric depth, which is due to stereoscopic vision.

20.
People implicitly associate different emotions with different locations in left-right space. Which aspects of emotion do they spatialize, and why? Across many studies people spatialize emotional valence, mapping positive emotions onto their dominant side of space and negative emotions onto their non-dominant side, consistent with theories of metaphorical mental representation. Yet other results suggest a conflicting mapping of emotional intensity (a.k.a., emotional magnitude), according to which people associate more intense emotions with the right and less intense emotions with the left, regardless of their valence; this pattern has been interpreted as support for a domain-general system for representing magnitudes. To resolve the apparent contradiction between these mappings, we first tested whether people implicitly map either valence or intensity onto left-right space, depending on which dimension of emotion they attend to (Experiments 1a, b). When asked to judge emotional valence, participants showed the predicted valence mapping. However, when asked to judge emotional intensity, participants showed no systematic intensity mapping. We then tested an alternative explanation of findings previously interpreted as evidence for an intensity mapping (Experiments 2a, b). These results suggest that previous findings may reflect a left-right mapping of spatial magnitude (i.e., the size of a salient feature of the stimuli) rather than emotion. People implicitly spatialize emotional valence, but, at present, there is no clear evidence for an implicit lateral mapping of emotional intensity. These findings support metaphor theory and challenge the proposal that mental magnitudes are represented by a domain-general metric that extends to the domain of emotion.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号