Similar documents
 Found 20 similar documents (search time: 78 ms)
1.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat more advanced than research on eye movements in scene perception and visual search, and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

2.
3.
Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.

4.
If S is instructed to look straight ahead before adapting to laterally displaced vision, he does so without noticeable error. After adapting, however, in response to the same instruction, he may rotate his eyes as much as 8° toward the displaced visual target. This is the change in judgment of the direction of gaze which Helmholtz identified in 1867 as an important physiological mechanism in adaptation to prisms. It leads to more accurate reaching behavior by causing S to make a visual judgment that the target is closer to straight ahead than it was when he first looked through the prisms. This type of adaptive change (change in judgment of the direction of gaze, oculomotor change) can be measured either by manual judgments (difference between successive “straight ahead” and “visual target” judgments) or by changes in straight-ahead eye position. It may be described as a parametric adjustment in the oculomotor control system, and is closely analogous to the eye movement which subserves the recovery of binocular fusion in prism vergence.

5.
In this paper we reconsider the question of vision during eye movements using a novel display procedure which guaranteed that the eye was not stopped at any time during the eye movement. The results of our experiment lead us to conclude that true “saccadic suppression” is a most elusive phenomenon. Furthermore, a brief analysis of the optics of the eye movement suggests that a substantial amount of the elevation of visual threshold during eye movements can be attributed to simple retinal smear if one acknowledges the dominating importance of edge effects in visual threshold measurements.

6.
During a change-of-fixation eye movement, the target toward which S was shifting his gaze was displaced 1° toward the original point of fixation so that the eye made an overshoot with respect to the new target position. When this was repeated several times in succession, the eye movement control system made an adjustment such that the overshoot gradually diminished. The end result of this “parametric adjustment” was that a visual target 10° from the fovea elicited an eye movement of only 9.1°.
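The “parametric adjustment” described here can be read as a slow change in saccadic gain driven by post-saccadic visual error. The sketch below illustrates such an error-driven update; the learning rate and update rule are illustrative assumptions rather than quantities from the original study, and unlike the incomplete adaptation reported there (9.1°), this toy rule converges to full compensation (9.0°).

```python
# Toy simulation of saccadic gain adaptation under a 1-deg intrasaccadic
# target step. Learning rate and update rule are illustrative assumptions.
target_eccentricity = 10.0   # deg from the fovea
intrasaccadic_step = -1.0    # target steps 1 deg back toward fixation
gain = 1.0                   # initial saccadic gain
learning_rate = 0.05

for trial in range(200):
    amplitude = gain * target_eccentricity                 # executed saccade
    final_target = target_eccentricity + intrasaccadic_step  # where the target ends up
    error = final_target - amplitude                        # post-saccadic visual error
    gain += learning_rate * error / target_eccentricity    # nudge the gain toward zero error

print(f"adapted gain: {gain:.3f}; a 10 deg target now elicits a {gain * 10:.1f} deg saccade")
```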

7.
Two experiments examined how the accessibility of visual imagery task information affects eye movements during imagery processing. The results showed that, under low-accessibility conditions, eye movements in the imagery task replicated those in the perceptual task; as the accessibility of imagery task information increased, mean fixation duration, mean saccade amplitude, and mean saccade duration changed in a systematic way; and oculomotor control and the level of information accessibility influence imagery-related eye movements through different mechanisms. The results support the view that eye movements play a functional role in visual imagery.

8.
The effect of a salient visual feature in orienting spatial attention was examined as a function of the learned association between the visual feature and the observer’s action. During an initial acquisition phase, participants learned that two keypress actions consistently produced red and green visual cues. Next, in a test phase, participants’ actions continued to result in singletons, but their color could be either congruent or incongruent with the learned action–color associations. Furthermore, the color singletons now functioned as valid or invalid spatial cues in a visual search, in which participants looked for a tilted line (“/” or “\”) among distractors (“X”s). The results showed that an action-congruent color was more effective as a valid cue in the search task (increased benefit), but less effective as an invalid cue (reduced cost). We discuss our findings in terms of both an inhibition account and a preactivation account of action-driven sensory bias, and argue in favor of the preactivation account.

9.
The question of whether an afterimage viewed in a dark field appears to move during eye movement was studied by comparing recordings of eye movements with recordings of reports of perceived movement. The correlation was found to be quite good even under conditions where the eye movements were spontaneous rather than specifically directed. The results were taken to support the hypothesis that the behavior of the retinal image is “interpreted” by taking into account information concerning what the eyes are doing.

10.
Eye movements and the span of the effective stimulus in visual search
The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.
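The two eye-contingent display conditions described in this abstract (a moving window that exposes the array around fixation, and a foveal mask that hides it) can be sketched as follows. The grid size, masking character, and function names are illustrative assumptions, not the display code used in the study; a real gaze-contingent display would additionally redraw on every eye-tracker sample.

```python
import numpy as np

def apply_gaze_window(search_array, gaze_rc, radius_cells, mask_char="x"):
    """Moving-window condition: mask every element farther than
    `radius_cells` from the current gaze position."""
    rows, cols = search_array.shape
    rr, cc = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(rr - gaze_rc[0], cc - gaze_rc[1])
    windowed = search_array.copy()
    windowed[dist > radius_cells] = mask_char
    return windowed

def apply_foveal_mask(search_array, gaze_rc, radius_cells, mask_char="x"):
    """Foveal-mask condition: mask only the region around fixation."""
    rows, cols = search_array.shape
    rr, cc = np.mgrid[0:rows, 0:cols]
    dist = np.hypot(rr - gaze_rc[0], cc - gaze_rc[1])
    masked = search_array.copy()
    masked[dist <= radius_cells] = mask_char
    return masked

# Toy 10 x 20 letter array; gaze at row 5, column 10; radius of 3 cells.
rng = np.random.default_rng(0)
letters = rng.choice(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), size=(10, 20))
print(apply_gaze_window(letters, gaze_rc=(5, 10), radius_cells=3))
print(apply_foveal_mask(letters, gaze_rc=(5, 10), radius_cells=3))
```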

11.
This paper presents the case for a functional account of vision. A variety of studies have consistently revealed “change blindness” or insensitivity to changes in the visual scene during an eye movement. These studies indicate that only a small part of the information in the scene is represented in the brain from moment to moment. It is still unclear, however, exactly what is included in visual representations. This paper reviews experiments using an extended visuo-motor task, showing that display changes affect performance differently depending on the observer's place in the task. These effects are revealed by increases in fixation duration following a change. Different task-dependent increases suggest that the visual system represents only the information that is necessary for the immediate visual task. This allows a principled exploration of the stimulus properties that are included in the internal visual representation. The task specificity also has a more general implication that vision should be conceptualized as an active process executing special purpose “routines” that compute only the currently necessary information. Evidence for this view and its implications for visual representations are discussed. Comparison of the change blindness phenomenon and fixation durations shows that conscious report does not reveal the extent of the representations computed by the routines.

12.
Recent advances in the field of metacognition have shown that human participants are introspectively aware of many different cognitive states, such as confidence in a decision. Here we set out to expand the range of experimental introspection by asking whether participants could access, through pure mental monitoring, the nature of the cognitive processes that underlie two visual search tasks: an effortless “pop-out” search, and a difficult, effortful, conjunction search. To this aim, in addition to traditional first order performance measures, we instructed participants to give, on a trial-by-trial basis, an estimate of the number of items scanned before a decision was reached. By controlling response times and eye movements, we assessed the contribution of self-observation of behavior in these subjective estimates. Results showed that introspection is a flexible mechanism and that pure mental monitoring of cognitive processes is possible in elementary tasks.

13.
Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive to eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed.
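For readers unfamiliar with the three heavy-tailed signatures named in this abstract, the sketch below generates synthetic examples of each (power-law cluster sizes, lognormal saccade amplitudes, and 1/f drift-speed spectra). All parameter values and variable names are illustrative assumptions, not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Power-law distributed cluster sizes (Pareto tail; exponent is illustrative).
cluster_sizes = rng.pareto(a=1.5, size=5000) + 1.0

# Lognormal saccade amplitudes: normally distributed on a log scale.
amplitudes = rng.lognormal(mean=1.0, sigma=0.6, size=5000)  # degrees, illustrative

# 1/f ("pink") drift speeds: spectral power falls off as 1/frequency.
n = 4096
freqs = np.fft.rfftfreq(n, d=0.001)                         # 1 kHz sampling, illustrative
spectrum = np.zeros(freqs.size, dtype=complex)
spectrum[1:] = (rng.standard_normal(freqs.size - 1)
                + 1j * rng.standard_normal(freqs.size - 1)) / np.sqrt(freqs[1:])
drift_speed = np.fft.irfft(spectrum, n=n)

# Heavy tails show up as mean >> median (Pareto), symmetry of log-amplitudes
# (lognormal), and a spectral slope near -1 on log-log axes (1/f).
power = np.abs(np.fft.rfft(drift_speed))[1:] ** 2
slope = np.polyfit(np.log(freqs[1:]), np.log(power), 1)[0]
print("cluster size mean vs. median:", cluster_sizes.mean(), np.median(cluster_sizes))
print("log-amplitude mean/sd:", np.log(amplitudes).mean(), np.log(amplitudes).std())
print("estimated 1/f spectral slope:", round(slope, 2))
```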

14.
Warren (1970) has claimed that there are visual facilitation effects on auditory localization in adults but not in children. He suggests that a “visual map” organizes spatial information and that considerable experience of correlated auditory and visual events is necessary before normal spatial perception is developed. In the present experiment, children in Grades 1, 4, and 7 had to identify the position, right or left, of a single tone either blindfolded or with their eyes open. Analysis of the proportion of area under the ROC curve (obtained using reaction times) in the respective conditions showed that Ss were more sensitive to auditory position when vision was available. Reaction time was also generally faster in the light. I argue that the increase in sensitivity in the light represents updating of auditory position memory by voluntary eye movement. In the dark, eye movements are subject to involuntary and unperceived drift, which would introduce noise into the eye control mechanism and hence into auditory spatial memory.
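One common way to obtain an ROC area from response latencies is the rank-based (Mann-Whitney-type) construction sketched below, which treats faster responses as higher confidence; the exact procedure used in the original study may differ, and the reaction-time values and function name here are hypothetical.

```python
import numpy as np

def rt_roc_area(rt_correct, rt_error):
    """Rank-based area under an ROC curve built from reaction times:
    the probability that a randomly drawn correct-trial RT is faster
    than a randomly drawn error-trial RT, with ties counted as one half."""
    rt_correct = np.asarray(rt_correct, dtype=float)
    rt_error = np.asarray(rt_error, dtype=float)
    wins = (rt_correct[:, None] < rt_error[None, :]).sum()
    ties = (rt_correct[:, None] == rt_error[None, :]).sum()
    return (wins + 0.5 * ties) / (rt_correct.size * rt_error.size)

# Hypothetical localization RTs in seconds (not data from the study).
print(rt_roc_area(rt_correct=[0.41, 0.45, 0.38, 0.52], rt_error=[0.60, 0.66]))  # eyes open
print(rt_roc_area(rt_correct=[0.55, 0.61, 0.58, 0.70], rt_error=[0.63, 0.59]))  # blindfolded
```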

15.
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.

16.
We investigated attentional demands in visual rhythm perception of periodically moving stimuli using a visual search paradigm. A dynamic search display consisted of vertically “bouncing dots” with regular rhythms. The search target was defined by a unique visual rhythm (i.e., a shorter or longer period) among rhythmic distractors with identical periods. We found that search efficiency for a faster or a slower periodically moving target decreased as the number of distractors increased, although searching for a faster target was about one second faster than searching for a slower target. We conclude that perception of a visual rhythm defined by a unique period is not a “pop-out” process, but a serial one that demands considerable attention.

17.
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating – a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the “spatial congruency bias,” to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
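The retinotopic/spatiotopic distinction at issue reduces to whether the current gaze position is added back in when locating an object. The sketch below illustrates that bookkeeping in one horizontal dimension; the values and function names are illustrative assumptions, not the paradigm's stimulus code.

```python
# Minimal sketch of the retinotopic vs. spatiotopic distinction,
# using 1-D horizontal positions in degrees. Values are illustrative.

def to_spatiotopic(retinotopic_deg, gaze_deg):
    """World-centered position = eye-centered position + current gaze."""
    return retinotopic_deg + gaze_deg

def to_retinotopic(spatiotopic_deg, gaze_deg):
    """Eye-centered position = world-centered position - current gaze."""
    return spatiotopic_deg - gaze_deg

stimulus_world = 5.0                  # object sits 5 deg right of screen center
gaze_before, gaze_after = 0.0, 8.0    # observer saccades 8 deg to the right

# Same world location, different retinal locations across the saccade:
print(to_retinotopic(stimulus_world, gaze_before))   #  5.0 deg on the retina
print(to_retinotopic(stimulus_world, gaze_after))    # -3.0 deg on the retina

# A retinotopic (unremapped) match after the saccade lands elsewhere in the world:
print(to_spatiotopic(5.0, gaze_after))               # 13.0 deg on the screen
```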

18.
When watching someone reaching to grasp an object, we typically gaze at the object before the agent’s hand reaches it—that is, we make a “predictive eye movement” to the object. The received explanation is that predictive eye movements rely on a direct matching process, by which the observed action is mapped onto the motor representation of the same body movements in the observer’s brain. In this article, we report evidence that calls for a reexamination of this account. We recorded the eye movements of an individual born without arms (D.C.) while he watched an actor reaching for one of two different-sized objects with a power grasp, a precision grasp, or a closed fist. D.C. showed typical predictive eye movements modulated by the actor’s hand shape. This finding constitutes proof of concept that predictive eye movements during action observation can rely on visual and inferential processes, unaided by effector-specific motor simulation.

19.
Observers were adapted to simulated auditory movement produced by dynamically varying the interaural time and intensity differences of tones (500 or 2,000 Hz) presented through headphones. At 10-sec intervals during adaptation, various probe tones were presented for 1 sec (the frequency of the probe was always the same as that of the adaptation stimulus). Observers judged the direction of apparent movement (“left” or “right”) of each probe tone. At 500 Hz, with a 200-deg/sec adaptation velocity, “stationary” probe tones were consistently judged to move in the direction opposite to that of the adaptation stimulus. We call this result an auditory motion aftereffect. In slower velocity adaptation conditions, progressively less aftereffect was demonstrated. In the higher frequency condition (2,000 Hz, 200-deg/sec adaptation velocity), we found no evidence of motion aftereffect. The data are discussed in relation to the well-known visual analog, the “waterfall effect.” Although the auditory aftereffect is weaker than the visual analog, the data suggest that auditory motion perception might be mediated, as is generally believed for the visual system, by direction-specific movement analyzers.

20.
A method is described for constructing visual displays in which statistical constraints are encoded within two spatial dimensions without introducing one-dimensional linear constraints. Within each local group of four elements, the state of one element was determined with a given probability by the previously generated states of the other three. Ss rated such displays on a scale from “lumpy” (or crude texture) to “lacy” (or even texture). The consistency of classification obtained for displays with strong aggregated (“lumpy”) properties was substantially higher than that obtained for displays with strong distributed (“lacy”) properties. An incidental feature of the Ss’ behavior was their deliberate degrading of the visual quality of the displays. Comparison is made with one-dimensional displays concatenated in two dimensions.
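The generation rule described in this abstract can be sketched as follows, assuming binary element states and a 2x2 local group in which the last-generated element is constrained by the other three. The abstract does not specify the combining function, so the parity and majority rules below are illustrative guesses, and all names and parameter values are assumptions.

```python
import numpy as np

def constrained_texture(rows, cols, p, rule="parity", seed=0):
    """Generate a binary texture in which, for each 2x2 group, the
    bottom-right element is set by a function of the other three with
    probability p, and kept as an independent random value otherwise."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 2, size=(rows, cols))   # unconstrained random fill
    for r in range(1, rows):
        for c in range(1, cols):
            neighbors = (grid[r - 1, c - 1], grid[r - 1, c], grid[r, c - 1])
            if rng.random() < p:                   # apply the statistical constraint
                if rule == "parity":
                    grid[r, c] = sum(neighbors) % 2
                else:                              # "majority"
                    grid[r, c] = int(sum(neighbors) >= 2)
            # else: keep the independently sampled value (no constraint on this cell)
    return grid

# p near 1 yields strongly constrained textures; p near 0 approaches plain noise.
print(constrained_texture(8, 8, p=0.9))
```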
