Similar Articles
Found 20 similar articles.
1.
Previous research suggested that perception of spatial location is biased towards the spatial goals of planned hand movements. In the present study I show that an analogous perceptual distortion can be observed when attention is paid to a spatial location in the absence of planning a hand movement. In Experiment 1, participants judged the position of a target during preparation of a mouse movement whose end point could deviate from the target to varying degrees. Judgments of target position were systematically affected by movement characteristics, consistent with perceptual assimilation between the target and the planned movement goal. This effect was due neither to an impact of motor execution on judgments (Exp. 2) nor to characteristics of the movement cues or of certain target positions (Exp. 3, Exp. 5A). When the task included deployment of attention to spatial positions (former movement goals) in preparation for a secondary perceptual task, an effect emerged that was comparable to the bias associated with movement planning (Exp. 4, Exp. 5B). These results indicate that visual distortions accompanying manipulations of action-related variables may be mediated by attentional mechanisms.

2.
The present study examined whether and how the direction of planned hand movements affects the perceived direction of visual stimuli. In three experiments, participants prepared hand movements that deviated in direction (Experiments 1 and 2) or distance (Experiment 3) relative to a visual target position. Before actual execution of the movement, the direction of the visual stimulus had to be estimated by a method of adjustment. Perceived stimulus direction was biased away from the planned movement direction, such that with leftward movements stimuli appeared somewhat more rightward than with rightward movements. Control conditions revealed that this effect was neither a mere response bias nor a result of processing or memorizing movement cues. Moreover, shifting the focus of attention toward a cued location in space was not sufficient to induce the perceptual bias observed under conditions of movement preparation (Experiment 4). These results confirm that characteristics of planned actions bias visual perception, with the direction of the bias (contrast or assimilation) possibly depending on the type of representations (categorical or metric) involved.

3.
Boundary extension (BE) refers to the tendency to remember a previously perceived scene with a greater spatial expanse. This phenomenon is described as resulting from different sources of information: external (i.e., visual) and internally driven (i.e., amodal, conceptual, and contextual) information. Although the literature has emphasized the role of top-down expectations in accounting for layout extrapolation, their effect has rarely been tested experimentally. In this research, we attempted to determine how visual context affects BE as a function of scene exposure duration (long, short). To induce knowledge about visual context, the memorization phase of the camera-distance paradigm was preceded by a preexposure phase, during which each of the to-be-memorized scenes was presented in a larger spatial framework. In an initial experiment, we examined the effect of contextual knowledge with a presentation duration long enough to allow in-depth processing of visual information during encoding (15 s). The results indicated that participants exposed to the preexposure showed decreased BE and displayed no directional memory error in some conditions. Because the effect of context is known to occur at an early stage of scene perception, in a second experiment we sought to determine whether the effect of a preview occurs during the first fixation on a visual scene. The results indicated that BE does not seem to be modulated by this factor at very brief presentation durations. These results are discussed in light of current theories of visual scene representation.

4.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene, on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize differed from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although enhanced processing of the scene was required with the present material, such implicit semantic learning can take place when the category is task irrelevant.
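The facilitation measured in this paradigm is typically quantified as the search-time advantage for predictive over nonpredictive contexts across blocks of repeated exposure. Below is a minimal sketch of that arithmetic in Python; the reaction times, epoch counts, and condition labels are entirely illustrative, not the study's data:

```python
import numpy as np

# Hypothetical median search RTs (ms) per epoch of repeated exposure,
# for scenes whose semantic category does vs. does not predict target position.
rt_predictive    = np.array([1450, 1320, 1210, 1150])
rt_nonpredictive = np.array([1460, 1440, 1430, 1425])

# Contextual cueing effect: nonpredictive minus predictive RT per epoch.
# A growing positive difference indicates learning of the regularity.
cueing_effect = rt_nonpredictive - rt_predictive
print(cueing_effect)  # [ 10 120 220 275]
```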

5.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object-label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes containing either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group throughout the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic-consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, whereas toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye-movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.

6.
Consistent with the hypothesis that intrinsic (color, shape, category…) and extrinsic (position, orientation…) visual characteristics are processed differentially along the ventral and dorsal pathways of the visual system (Milner & Goodale, 1995), studies of temporal order judgments (TOJ) for changes in two visual attributes have revealed perceptual asynchrony even when the changes occur synchronously. In this context, we investigated the role of action in perceptual asynchrony, specifically the effect of a reaching movement on the TOJ of position and color changes of a target occurring at different times during movement execution. In the absence of voluntary action, the point of subjective simultaneity (PSS) showed that the color change must occur 46.6 ms before the position change to give rise to a synchronous perception of the two changes. Performing a reaching movement significantly reduced the PSS (12.4 ms), but only if the changes occurred near the movement end point. If the changes occurred during movement execution, the PSS (40.2 ms) did not differ from that obtained in the perceptual condition. These results suggest that endogenous signals associated with voluntary motor action contribute to the reduction of perceptual asynchrony in relation to the goal of the action. We discuss the possibility that, in the context of action, the motor system contributes to the binding of objects' sensory attributes as well as to the sense of agency.
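PSS values like those reported above are conventionally obtained by fitting a psychometric function to the proportion of "color first" responses across stimulus onset asynchronies and reading off its 50% point. Here is a minimal sketch assuming a cumulative Gaussian model; the SOAs and response proportions are hypothetical, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOA (ms) = color-change time minus position-change
# time (negative = color changed first), and proportion of "color first" responses.
soa = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
p_color_first = np.array([0.95, 0.85, 0.60, 0.40, 0.25, 0.10, 0.05])

def psychometric(soa, mu, sigma):
    # Cumulative Gaussian: mu is the PSS (50% point); sigma reflects
    # temporal discrimination precision (JND ~ 0.675 * sigma).
    return norm.cdf(-(soa - mu) / sigma)

params, _ = curve_fit(psychometric, soa, p_color_first, p0=(0.0, 50.0))
mu, sigma = params
print(f"PSS = {mu:.1f} ms, sigma = {sigma:.1f} ms")
```

A negative fitted PSS under this sign convention would mean the color change has to lead the position change to be perceived as simultaneous, which is the direction of asynchrony the abstract reports.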

7.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

8.
Recent studies in scene perception suggest that much of what observers believe they see is not retained in visual memory. Depending on the roles they play in organizing the perception of a scene, different visual properties may require different amounts of attention to be incorporated into a mental representation of the scene. The goal of this study was to compare how three visual properties of scenes, colour, object position, and object presence, are encoded in visual memory. We used a variation on the change detection "flicker" task and measured the time to detect scene changes when: (1) a cue was provided regarding the type of change; and, (2) no cue was provided. We hypothesized that cueing would enhance the processing of visual properties that require more attention to be encoded into scene representations, whereas cueing would not have an effect for properties that are readily or automatically encoded in visual memory. In Experiment 1, we found that there was a cueing advantage for colour changes, but not for position or presence changes. In Experiment 2, we found the same cueing effect regardless of whether the colour change altered the configuration of the scene or not. These results are consistent with the idea that properties that typically help determine the configuration of the scene, for example, position and presence, are better encoded in scene representations than are surface properties such as colour.

9.
All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer's egocentric distance estimation of an object. We tested whether this is also true for exocentric direction estimations. We used an exocentric pointing task to test whether the presence of poster-boards in the visual scene would influence the perception of the exocentric direction between two test-objects. In this task the observer has to direct a pointer, with a remote control, to a target. We placed the poster-boards at various positions in the visual field to test whether these boards would affect the settings of the observer. We found that they only affected the settings when they directly served as a reference for orienting the pointer to the target.

10.
The preparation of eye or hand movements enhances visual perception at the upcoming movement end position. The spatial location of this influence of action on perception could be determined either by goal selection or by motor planning. We employed a tool use task to dissociate these two alternatives. The instructed goal location was a visual target to which participants pointed with the tip of a triangular hand-held tool. The motor endpoint was defined by the final fingertip position necessary to bring the tool tip onto the goal. We tested perceptual performance at both locations (tool tip endpoint, motor endpoint) with a visual discrimination task. Discrimination performance was enhanced in parallel at both spatial locations, but not at nearby and intermediate locations, suggesting that both action goal selection and motor planning contribute to visual perception. In addition, our results challenge the widely held view that tools extend the body schema and suggest instead that tool use enhances perception at those precise locations which are most relevant during tool action: the body part used to manipulate the tool, and the active tool tip.

11.
Objects likely to appear in a given real-world scene are frequently found to be easier to recognize. Two different sources of contextual information have been proposed as the basis for this effect: global scene background and individual companion objects. The present paper examines the relative importance of these two elements in explaining the context-sensitivity of object identification in full scenes. Specific sequences of object fixations were elicited during free scene exploration, while fixation times on designated target objects were recorded as a measure of ease of target identification. Episodic consistency between the target, the global scene background, and the object fixated just prior to the target (the prime) was manipulated orthogonally. Target fixation times were examined for effects of prime and background. Analyses show effects of both factors, which are modulated by the chronology and spatial extent of scene exploration. The results are discussed in terms of their implications for a model of visual object recognition in the context of real-world scenes.

12.
Over the last decade, there has been interest in the impact of visual illusions on the control of action. Much of this work has been motivated by Milner and Goodale's two-visual-systems model of visual processing. This model is based on a hypothesized dissociation between cognitive judgments and the visual control of action: it holds that action is immune to the visual context that provides the basis for the illusion-induced bias associated with cognitive judgments. Recently, Glover has challenged this position and suggested that movement planning, but not movement execution, is susceptible to visual illusions. Research from our lab is inconsistent with both models of visual-motor processing. With respect to the planning-and-control model, kinematic evidence shows that the impact of an illusion on manual aiming increases as the limb approaches the target. For the Ebbinghaus illusion, this involved a decrease in the time after peak velocity to accommodate the 'perceived' size of the target. For the Müller-Lyer illusion, the influence of the figure's tails increased from peak velocity to the end of the movement. Although our findings contradict a strong version of the two-visual-systems hypothesis, we did find dissociations between perception and action in another experiment. In this Müller-Lyer study, perceptual decisions were influenced by misjudgment of extent, while action was influenced by misjudgment of target position. Overall, our findings are consistent with the idea that it is often necessary to use visual context to make adjustments to ongoing movements.
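The kinematic marker used in such analyses, the time elapsed after peak velocity, falls out of simple numerical differentiation of the limb-position record. A minimal sketch under assumed data follows; the 200-Hz sampling rate and the toy bell-shaped trajectory are hypothetical, not taken from the studies described:

```python
import numpy as np

def time_after_peak_velocity(position, fs):
    """Time (s) between the velocity peak and movement end,
    from a uniformly sampled 1-D position trace."""
    velocity = np.gradient(position) * fs          # numerical differentiation
    peak_idx = int(np.argmax(np.abs(velocity)))    # sample of peak velocity
    return (len(position) - 1 - peak_idx) / fs

# Hypothetical 200-Hz aiming trajectory with a smooth, bell-shaped
# velocity profile (velocity peaks at the movement midpoint).
fs = 200.0
T = 0.45                                  # movement time (s)
s = np.arange(0, T, 1 / fs) / T           # normalized time
position = 0.3 * (3 * s**2 - 2 * s**3)    # metres travelled toward the target
print(f"{time_after_peak_velocity(position, fs):.3f} s after peak velocity")
```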

13.
Recent research has found that visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants' visual memory using a change detection task in which a target object's orientation was either the same as during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We conclude that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

14.
Egocentric distance perception is a psychological process in which observers use various depth cues to estimate the distance between a target and themselves. The impairment of basic visual function in amblyopia and its treatment have been well documented; however, disorders of egocentric distance perception in amblyopes are poorly understood. In this review, we describe the cognitive mechanism of egocentric distance perception and then focus on empirical evidence for disorders of egocentric distance perception in amblyopes across the whole visual space. In personal space (within 2 m), it is difficult for amblyopes to show normal hand-eye coordination; in action space (2-30 m), amblyopes cannot accurately judge the distance of a target suspended in the air. Few studies have focused on the performance of amblyopes in vista space (beyond 30 m). Finally, five critical topics for future research are discussed: 1) systematically exploring the mechanism of egocentric distance perception in all three spaces; 2) establishing the laws of egocentric distance perception for moving objects in amblyopes; 3) comparing the three subtypes of amblyopia, which remains insufficiently studied; 4) studying distance perception under other theoretical frameworks; and 5) exploring the mechanisms of amblyopia with virtual reality.

15.
Using the task of imagining an athlete completing a triple jump, we systematically manipulated the information-access level of the imagery task, the locations of eye-movement fixations, and participants' knowledge and skill levels regarding the triple jump. Two experiments examined whether changes in eye movements during visual imagery are based on representational differences arising from knowledge learning or from skill training. Experiment 1 used undergraduates with no professional triple-jump skills and little knowledge of the sport; the results showed that when completing imagery tasks with a high information-access level, fixations were shorter in duration, while saccade amplitude increased and saccade frequency decreased. Experiment 2 manipulated the knowledge-learning and skill-training representation levels of the imagery task, using participants who had received either knowledge learning or professional skill training for the task. The results showed that as participants' knowledge acquisition and skill representation improved, the eye-movement differences between imagery tasks with different information-access levels disappeared. However, knowledge learning and skill training differed in mean saccade duration: skill-trained participants had shorter mean saccade durations than knowledge-learning participants, a difference that reached marginal significance, whereas mean fixation duration and mean saccade amplitude showed no differences.
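The dependent measures compared here (mean fixation duration, saccade amplitude, saccade frequency, mean saccade duration) are conventionally extracted from raw gaze samples with a velocity-threshold classifier. Below is a minimal sketch assuming uniformly sampled 2-D gaze positions in degrees; the 30 deg/s threshold and the function name are illustrative choices, not the study's method:

```python
import numpy as np

def saccade_metrics(x, y, fs, vel_threshold=30.0):
    """Velocity-threshold (I-VT) classification of gaze samples.
    Returns mean saccade amplitude (deg), saccade rate (Hz), and mean
    saccade duration (s). Assumes the record starts and ends in fixation."""
    speed = np.hypot(np.gradient(x), np.gradient(y)) * fs    # deg/s
    is_sacc = speed > vel_threshold
    # Saccade onsets/offsets are transitions across the threshold.
    onsets = np.flatnonzero(~is_sacc[:-1] & is_sacc[1:]) + 1
    offsets = np.flatnonzero(is_sacc[:-1] & ~is_sacc[1:]) + 1
    pairs = list(zip(onsets, offsets))
    amps = [np.hypot(x[b] - x[a], y[b] - y[a]) for a, b in pairs]
    durs = [(b - a) / fs for a, b in pairs]
    total_s = len(x) / fs
    return float(np.mean(amps)), len(pairs) / total_s, float(np.mean(durs))
```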

16.
The goal of this study was to determine whether a sensorimotor or a cognitive encoding is used to encode a target position and save it into iconic memory. The methodology consisted of disrupting a manual aiming movement to a memorized visual target by displacing the visual field containing the target. The nature of the encoding was inferred from the nature and size of the errors relative to a control. The target was presented either centrally or in the right periphery. Participants moved their hand from the left to the right of fixation. Black and white vertical stripes covered the whole visual field. The visual field was either stationary throughout the trial or was displaced to the right or left at the extinction of the target or at the start of the hand movement. In the latter case, the displacement of the visual field could obviously only be taken into account by the participant during the gesture. In this condition, our hypothesis was that the aiming error would follow the direction of the visual field displacement. Results showed three major effects: (1) vision of the hand during the gesture improved final accuracy; (2) visual field displacement produced an underestimation of the target distance only when the hand was not visible during the gesture, and the error was always in the direction of the displacement; and (3) the effect of the stationary structured visual field on aiming precision when the hand was not visible depended on the distance to the target. These results suggest that a stationary structured visual field is used to support the memory of the target position. The structured visual field is more critical when the hand is not visible and when the target appears in peripheral rather than central vision. This suggests that aiming depends on memory of the relative peripheral position of the target (allocentric reference). However, in the present task, cognitive encoding does not maintain the "position" of the target in memory without reference to the environment. The systematic effect of the visual field displacement on manual aiming suggests that the role of environmental reference frames in memory for position is not well understood. Some studies, in particular those of Giesbrecht and Dixon (1999) and Glover and Dixon (2001), suggest differing roles of the environment in the retention of the target position and the control of aiming movements toward the target. The present observations contribute to understanding the mechanisms involved in locating and grasping objects with the hand.

17.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

18.
Past research has revealed that central vision is more important than peripheral vision in controlling the amplitude of target-directed aiming movements. However, the extent to which central vision contributes to movement planning versus online control is unclear. Since participants usually fixate the target very early in the limb trajectory, the limb enters the central visual field during the late stages of movement. Hence, there may be insufficient time for central vision to be processed online to correct errors during movement execution. Instead, information from central vision may be processed offline and utilised as a form of knowledge of results, enhancing the programming of subsequent trials. In the present research, variability in limb trajectories was analysed to determine the extent to which peripheral and central vision is used to detect and correct errors during movement execution. Participants performed manual aiming movements of 450 ms under four different visual conditions: full vision, peripheral vision, central vision, no vision. The results revealed that participants utilised visual information from both the central and peripheral visual fields to adjust limb trajectories during movement execution. However, visual information from the central visual field was used more effectively to correct errors online compared to visual information from the peripheral visual field.
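The variability analysis described here compares the between-trial spatial dispersion of the limb at successive points in the trajectory: if visual feedback is used online, dispersion that accumulates early in the reach should shrink toward the end under vision but not without it. A minimal sketch of that computation on simulated, time-normalized reaches (all numbers are illustrative):

```python
import numpy as np

def trajectory_variability(trials):
    """Between-trial spatial standard deviation at each normalized time point.
    trials: (n_trials, n_samples) array of time-normalized 1-D positions."""
    return np.std(trials, axis=0)

# Hypothetical data: 20 reaches of 100 time-normalized samples each,
# with random-walk noise standing in for uncorrected motor noise.
rng = np.random.default_rng(0)
base = np.linspace(0.0, 30.0, 100)                          # cm, straight path
noise = rng.normal(0.0, 1.0, (20, 100)).cumsum(axis=1) * 0.05
variability = trajectory_variability(base + noise)
# In this no-correction simulation dispersion grows monotonically; online,
# vision-based correction would be inferred from dispersion flattening or
# decreasing over the final portion of the reach.
print(variability[::25])
```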

19.
White (1976) reported that presentation of a masking stimulus during a pursuit eye movement interfered with the perception of a target stimulus that shared the same spatial, rather than retinal, coordinates as the mask. This finding has been interpreted as evidence for the existence of spatiotopic visual persistence. We doubted White's results because they implied a high degree of position constancy during pursuit eye movements, contrary to previous research, and because White did not monitor subjects' eye position during pursuit; if White's subjects did not make continuous pursuit eye movements, it might appear that masking was spatial when in fact it was retinal. We attempted to replicate White's results and found that when eye position was monitored to ensure that subjects made continuous pursuit movements, masking was retinal rather than spatial. Subjects' phenomenal impressions also indicated that retinal, rather than spatial, factors underlay performance in this task. The implications of these and other results regarding the existence of spatiotopic visual persistence are discussed.
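The retinal/spatiotopic distinction being tested reduces to one line of arithmetic: retinal position = screen position − gaze position. Because the eye moves between target and mask onsets during pursuit, a mask can share the target's screen coordinates or its retinal coordinates, but not both. A toy sketch of that bookkeeping (all values in degrees, purely illustrative):

```python
# Gaze travels rightward during pursuit; the target appears at t1, the mask at t2.
gaze_t1, gaze_t2 = 2.0, 5.0                  # eye position (deg) at each moment
target_screen = 4.0                          # target position on the screen (deg)
target_retinal = target_screen - gaze_t1     # = 2.0 deg right of the fovea

# A "spatiotopic" mask sits at the target's screen position; a "retinotopic"
# mask sits wherever the target's retinal offset now falls on the screen.
mask_spatiotopic = target_screen              # 4.0 deg on screen
mask_retinotopic = gaze_t2 + target_retinal   # 7.0 deg on screen

print(mask_spatiotopic, mask_retinotopic)
```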

20.
S. Mateeff & J. Hohnsbein (1989). Perception, 18(1), 93-104.
Subjects used eye movements to pursue a light target that moved from left to right with a velocity of 15 deg/s. The stimulus was a sudden five-fold decrease in target intensity during the movement. The subject's task was to localize the stimulus relative to either a single stationary background point or the midpoint between two points (28 deg apart) placed 0.5 deg above the target path. The stimulus was usually mislocated in the direction of eye movement; the mislocation was affected by the spatial adjacency between background and stimulus. When an auditory, rather than a visual, stimulus was presented during tracking, target position at the time of stimulus presentation was visually mislocated in the direction opposite to that of eye movement. The effect of adjacency between background and target remained the same. The involvement of processes of subject-relative and object-relative visual perception is discussed.
