Similar Literature
20 similar documents found (search time: 31 ms)
1.
Tatler BW, Wade NJ. Perception, 2003, 32(2): 167-184
Investigations of the ways in which the eyes move came to prominence in the 19th century, but techniques for measuring them more precisely emerged in the 20th century. When scanning a scene or text the eyes engage in periods of relative stability (fixations) interspersed with ballistic rotations (saccades). The saccade-and-fixate strategy, associated with voluntary eye movements, was first uncovered in the context of involuntary eye movements following body rotation. This pattern of eye movements is now referred to as nystagmus, and involves periods of slow eye movements, during which objects are visible, and rapid returns, when they are not; it is based on a vestibular reflex which attempts to achieve image stabilisation. Post-rotational nystagmus was reported in the late 18th century (by Wells), with afterimages used as a means of retinal stabilisation to distinguish between movement of the eyes and of the environment. Nystagmus was linked to vestibular stimulation in the 19th century, and Mach, Breuer, and Crum Brown all described its fast and slow phases. Wells and Breuer proposed that there was no visual awareness during the ballistic phase (saccadic suppression). The saccade-and-fixate strategy highlighted by studies of nystagmus was shown to apply to tasks like reading by Dodge, who used more sophisticated photographic techniques to examine oculomotor kinematics. The relationship between eye movements and perception, following earlier intuitions by Wells and Breuer, was explored by Dodge, and has been of fundamental importance in the direction of vision research over the last century.

2.
《Acta psychologica》1986,63(3):281-295
In comitant strabismus many perceptual adaptive processes take place, which involve perception of space. Suppression of the image of the deviated eye and anomalous retinal correspondence (ARC) are the two main antidiplopic mechanisms. ARC may be present without suppression in small-angle strabismus (up to 10 degrees), supporting an anomalous binocular cooperation in spite of the deviation. Both psychophysical and electrophysiological evidence for anomalous binocular vision in strabismus is provided. Sensori-motor adaptations in strabismus develop as well. They are represented by vergence eye movements which, although not identical to normal fusional vergences, have similar characteristics. These anomalous fusional eye movements tend to return the eyes to their original deviation when elements are introduced to change the position of the eyes, e.g., prisms or surgery. In conjunction with ARC, these movements serve to maintain binocular visual perception despite the strabismus.

3.
It is not known why people move their eyes when engaged in non-visual cognition. The current study tested the hypothesis that differences in saccadic eye movement rate (EMR) during non-visual cognitive tasks reflect different requirements for searching long-term memory. Participants performed non-visual tasks requiring relatively low or high long-term memory retrieval while eye movements were recorded. In three experiments, EMR was substantially lower for low-retrieval than for high-retrieval tasks, including in an eyes closed condition in Experiment 3. Neither visual imagery nor between-task difficulty was related to EMR, although there was some evidence for a minor effect of within-task difficulty. Comparison of task-related EMRs to EMR during a no-task waiting period suggests that eye movements may be suppressed or activated depending on task requirements. We discuss a number of possible interpretations of saccadic eye movements during non-visual cognition and propose an evolutionary model that links these eye movements to memory search through an elaboration of circuitry involved in visual perception.

4.
In the present study, we examined whether eye movements facilitate retention of visuo-spatial information in working memory. In two experiments, participants memorised the sequence of the spatial locations of six digits across a retention interval. In some conditions, participants were free to move their eyes during the retention interval, but in others they either were required to remain fixated or were instructed to move their eyes exclusively to a selection of the memorised locations. Memory performance was no better when participants were free to move their eyes during the memory interval than when they fixated a single location. Furthermore, the results demonstrated a primacy effect in the eye movement behaviour that corresponded with the memory performance. We conclude that overt eye movements do not provide a benefit over covert attention for rehearsing visuo-spatial information in working memory.

5.
How crucial a role do reason, memory and planning play alongside lower-level sensorimotor skills in our visual activity? Andy Clark has recently proposed a new skill-based account of seeing. This suggests there is more to perception than only the mastery of sensorimotor contingencies, as O'Regan, Noë and others have stressed in recent influential papers.

6.
Eye movements are functional during face learning
In a free viewing learning condition, participants were allowed to move their eyes naturally as they learned a set of new faces. In a restricted viewing learning condition, participants remained fixated in a single central location as they learned the new faces. Recognition of the learned faces was then tested following the two learning conditions. Eye movements were recorded during the free viewing learning condition, as well as during recognition. The recognition results showed a clear deficit following the restricted viewing condition, compared with the free viewing condition, demonstrating that eye movements play a functional role during human face learning. Furthermore, the features selected for fixation during recognition were similar following free viewing and restricted viewing learning, suggesting that the eye movements generated during recognition are not simply a recapitulation of those produced during learning.

7.
There is considerable evidence that covert visual attention precedes voluntary eye movements to an intended location. What happens to covert attention when an involuntary saccadic eye movement is made? In agreement with other researchers, we found that attention and voluntary eye movements are tightly coupled in such a way that attention always shifts to the intended location before the eyes begin to move. However, we found that when an involuntary eye movement is made, attention first precedes the eyes to the unintended location and then switches to the intended location, with the eyes following this pattern a short time later. These results support the notion that attention and saccade programming are tightly coupled.

8.
Eye-hand coordination: oculomotor control in rapid aimed limb movements
Three experiments are reported in which Ss produced rapid wrist rotations to a target while the position of their eyes was being monitored. In Experiment 1, Ss spontaneously executed a saccadic eye movement to the target around the same time as the wrist began to move. Experiment 2 revealed that wrist-rotation accuracy suffered if Ss were not allowed to move their eyes to the target, even when visual feedback about the moving wrist was unavailable. In Experiment 3, wrist rotations were equally accurate when Ss produced either a saccadic or a smooth-pursuit eye movement to the target. However, differences were observed in the initial-impulse and error-correction phases of the wrist rotations, depending on the type of eye movement involved. The results suggest that aimed limb movements use information from the oculomotor system about both the static position of the eyes and the dynamic characteristics of eye movements. Furthermore, the information that governs the initial impulse is different from that which guides final error corrections.

9.
Eye movements and scene perception.
Research on eye movements and scene perception is reviewed. Following an initial discussion of some basic facts about eye movements and perception, the following topics are discussed: (1) the span of effective vision during scene perception, (2) the role of eye movements in scene perception, (3) integration of information across saccades, (4) scene context, object identification and eye movements, and (5) the control of eye movements. The relationship of eye movements during reading to eye movements during scene perception is considered. A preliminary model of eye movement control in scene perception is described and directions for future research are suggested.

10.
Wade NJ. Perception, 2000, 29(2): 221-239
William Porterfield (ca 1696-1771) and William Charles Wells (1757-1817) conducted experimental investigations on eye movements related to accommodation, binocular vision, and vertigo. Porterfield gave a correct interpretation of Scheiner's experiment and invented an optometer to measure the near and far points of distinct vision. He also demonstrated the involvement of the crystalline lens in accommodation by examining vision in an aphakic person. Wells devised an alternative means of measuring the limits of vision and noted his own deterioration of sight with age; he studied the effects of belladonna on pupil size and accommodation. Their analyses of binocular visual direction contrasted Porterfield's view that perceived location was innately determined with Wells's argument that visual direction was innate whereas visual distance was learned. Both Porterfield and Wells investigated the involvement of eye movements in binocular vision and in postrotary visual motion. Porterfield maintained that the eyes did not move following body rotation, whereas Wells, using an afterimage as a stabilised retinal image, described the characteristics of postrotary nystagmus and their dependence on head orientation. Despite the neglect of Wells's work, he should be considered as laying the foundations for the study of vestibular-visual interaction, even though the function of the vestibular system was not known at that time.

11.
Experiments are described in which each eye is presented with a target whose image remains on the same part of the retina when the eye moves. The patterns presented to each eye may be similar and may be placed on corresponding parts of the retina or may be placed in non-corresponding positions: alternatively, different targets may be presented to the two eyes. Each pattern fades intermittently. Sometimes both are seen together and sometimes both fields are dark at once. There is a small negative correlation between the times of clear vision with the two eyes. When corresponding areas of the two retinas are illuminated with red and green light respectively, the composite colour (yellow) is never perceived with steady illumination. When two similar patterns are in nearly corresponding positions there may be subjective fusion. With two different targets there is sometimes a subjective impression that the two patterns move with respect to one another even though their positions on the retina are fixed.

12.
Experience with inverting glasses reveals key factors of spatial vision. Interpretations of the literature based on the metaphor of a "visual image" have raised the question whether visual experience with inverting glasses remains inverted or whether it may turn back to normal after adaptation to the glasses. Here, I report on my experience with left/right inverting glasses and argue that a more fine-grained sensorimotor analysis can resolve the issue. Crucially, inverting glasses introduce a conflict at the very heart of spatial vision. At first, the experience of visual direction grounded in head movements differs from visual experience grounded in eye movements. During adaptation, this difference disappears, and one may learn to see without conflict where objects are located (this took me 123 h of practice). The momentary experience became once again integrated within the larger flow of visual exploration involving head movements, a change of experience that was abrupt and comparable to a Gestalt switch. The resulting experience remains different from normal vision, and I argue that this difference can be understood in sensorimotor terms. I describe how adaptation to inverting glasses is further reflected in mental imagery, supporting the idea that imagery is grounded in sensorimotor engagement with the environment as well.

13.
This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N=14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old infants and adults spent more time looking at the elbows during impossible compared with possible arm movements, irrespective of the appearance of the actor. These results suggest that by 12 months of age, infants recognize biomechanical constraints on how arms move, and they extend this knowledge to humanoid robots. Adults exhibited more pupil dilation in response to the human's impossible arm movements compared with the possible ones, but 9- and 12-month-old infants showed no differential pupil dilation to the same actions. This finding suggests that the processing of human body movements might still be immature in 12-month-olds, as they did not show an emotional response to biomechanically impossible body movements. We discuss these findings in relation to the hypothesis that perception of others' body movements relies upon the infant's own sensorimotor experience.

14.
In three experiments, subjects read text as their eye movements were monitored and display changes in the text were made contingent upon the eye movements. In one experiment, a window of text moved in synchrony with the eyes. In one condition, the size of the window was constant from fixation to fixation, while in the other condition the size of the window varied from fixation to fixation. In the other experiments, a visual mask was presented at the end of each saccade which delayed the onset of the text, and the length of the delay was varied. The pattern of eye movements was influenced by both the size of the window and the delay of the onset of the text, even when the window size or text delay was varying from fixation to fixation. However, there was also evidence that saccade length was affected by the size of the window on the prior fixation and that certain decisions to move the eye are programmed either before the fixation begins or during the fixation but without regard to the text being fixated. The results thus provide support for a mixed control model of eye movements in reading, in which decisions about when and where to move the eyes are based on information from the current fixation, the prior fixations, and possibly, other sources as well.

15.
Four dual-task experiments required a speeded manual choice response to a tone in close temporal proximity to a saccadic eye movement task. In Experiment 1, subjects made a saccade towards a single transient; in Experiment 2, a red and a green colour patch were presented to left and right, and the saccade was directed to whichever patch was the pre-specified target colour. There was some slowing of the eye movement, but neither task combination showed typical dual-task interference (the "psychological refractory effect"). However, more interference was observed when the direction of the saccade depended on whether a central colour patch was red or green, or when the saccade was directed towards the numerically higher of two large digits presented to the left and the right. Experiment 5 examined a vocal second task, for comparison. The findings might reflect the fact that eye movements can be directed by two separate brain systems: the superior colliculus and the frontal eye fields; commands from the latter but not the former may be delayed by simultaneous unrelated sensorimotor tasks.

16.
The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

17.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

18.
A smile is visually highly salient and grabs attention automatically. We investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent non-happy (fearful, neutral, etc.) eyes, thus producing blended expressions, or it appeared in intact faces with genuine expressions. Attention to the eye region was spatially cued while foveal vision of the mouth was blocked by gaze-contingent masking. Participants judged whether the eyes were happy or not. Results indicated that the smile biased the evaluation of the eye expression: The same non-happy eyes were more likely to be judged as happy and categorized more slowly as not happy in a face with a smiling mouth than in a face with a non-smiling mouth or with no mouth. This bias occurred when the mouth and the eyes appeared simultaneously and aligned, but also to some extent when they were misaligned and when the mouth appeared after the eyes. We conclude that the highly salient smile projects to other facial regions, thus influencing the perception of the eye expression. Projection serves spatial and temporal integration of face parts and changes.

19.
The question investigated was whether or not eye movements accompanied by abnormal retinal image movements, movements that are at a different rate and/or in a different direction than the eye movement, predictably lead to perceived movement. Os reported whether or not they saw a visual target move when the movement of the target was either dependent on and simultaneous with their eye movements or when the target movement was independent of their eye movements. In the main experiment, observations were made when the ratio between eye and target movement (em/tm) was 2/5, 1/5, 1/10, 1/20, and 0. All these ratios were tested when the direction of the target movement was in the same (H+), opposite (H-), and at right angles to (V+, V-) the movement of the eyes. Eye movements, target movements, and reports of target movement were recorded. Results indicate that a discrepancy between eye and target movement greater than 20% predictably leads to perceived target movement, whereas a discrepancy of 5% or less rarely leads to perceived movement. The results are interpreted as support for the operation of a compensatory mechanism during eye movements.

20.
Previous research has shown that visual perception is affected by sensory information from other modalities. For example, sound can alter the visual intensity or the number of visual objects perceived. However, when touch and vision are combined, vision normally dominates—a phenomenon known as visual capture. Here we report a cross-modal interaction between active touch and vision: The perceived number of brief visual events (flashes) is affected by the number of concurrently performed finger movements (keypresses). This sensorimotor illusion occurred despite little ambiguity in the visual stimuli themselves and depended on a close temporal proximity between movement execution and vision.
