Similar Documents
20 similar documents found.
1.
An inexpensive PC-based moving window generator with an eye movement recording system is described. The moving window technique, activated by current eye movements, has an advantage over the fixed window technique in measuring the effective visual field size during reading. A variable rectangular window, through which the subject observes the text, is generated on a PC-controlled CRT screen. The system includes a frame buffer memory, an analog-to-digital conversion unit, and an eye movement recording system. The system works well for measuring approximate field size during reading.
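The core of any moving-window display is a masking step applied on every gaze sample: only a fixed span of characters around the current fixation is left visible, and everything else is replaced by a mask character. The minimal, display-independent Python sketch below illustrates that step; the function name, the 8-character window extents, and the character-indexed gaze position are illustrative assumptions, not details of the PC/CRT system described above.

```python
def apply_moving_window(text: str, fixation_index: int,
                        left_chars: int = 8, right_chars: int = 8,
                        mask_char: str = "x") -> str:
    """Return `text` with every character outside the gaze-contingent
    window replaced by `mask_char` (spaces are preserved so that word
    boundaries remain visible, as in many reading studies).

    fixation_index          -- character position currently fixated
    left_chars, right_chars -- window extent to each side of fixation
    """
    start = max(0, fixation_index - left_chars)
    end = min(len(text), fixation_index + right_chars + 1)
    return "".join(
        ch if (start <= i < end or ch.isspace()) else mask_char
        for i, ch in enumerate(text)
    )


# Example: a window of 8 characters to each side of character 20.
line = "An inexpensive PC-based moving window generator is described."
print(apply_moving_window(line, fixation_index=20))
```

In a real gaze-contingent setup this masking would be recomputed on every eye-tracker sample, and the display update would have to complete within the sample interval.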

2.
The experimental paradigms most commonly used in eye-movement research on reading currently include the moving window paradigm, the moving mask paradigm, the boundary paradigm, the fast priming paradigm, the disappearing text paradigm, and the visual-world paradigm. This article describes in detail the experimental procedures of these paradigms, the logic underlying them, and the findings they have produced, and summarizes issues that require attention when each paradigm is applied. It also considers future applications of eye-movement paradigms in reading research: (1) their role in testing current eye-movement models of reading; (2) effective combinations of different paradigms and their transfer to other research domains such as scene perception; (3) the combination of eye-movement paradigms with electrophysiological and brain-imaging techniques; and (4) their application to research on Chinese reading.

3.
Gaze-contingent multiresolutional displays (GCMRDs) have been proposed to solve the processing and bandwidth bottleneck in many single-user displays, by dynamically placing high resolution in a window at the center of gaze, with lower resolution everywhere else. The three experiments reported here document a slowing of peripheral target acquisition associated with the presence of a gaze-contingent window. This window effect was shown for displays using either moving video or still images. The window effect was similar across a resolution-defined window condition and a luminance-defined window condition, suggesting that peripheral image degradation is not a prerequisite of this effect. The window effect was also unaffected by the type of window boundary used (sharp or blended). These results are interpreted in terms of an attentional bias resulting in a reduced saliency of peripheral targets due to increased competition from items within the window. We discuss the implications of the window effect for the study of natural scene perception and for human factors research related to GCMRDs.

4.
Readers utilize parafoveal information about upcoming words and read less well when this information is denied. McConkie and Rayner (1975) enabled this issue to be explored by developing the moving window paradigm in which the experimenter varies the amount or the quality of the parafoveal information available around the current fixation point. We present a novel binocular version of the moving window technique to study the roles of the two eyes in reading, and we describe a basic experiment allowed by this technique. In the binocular moving window paradigm, each eye contributes its own window to a composite binocular window onto the text. We studied the reading of single lines of text in three conditions: no windows, a symmetrical 8-letters-left and 8-letters-right window for each eye, and a leftward-skewed 14-letters-left and 2-letters-right window for each eye. Note that both eyes saw the composite window onto the text. We tested the hypothesis that readers could be encouraged to generate a greater binocular disparity to augment their window onto the text and to provide a greater preview for one eye. The data offered limited support for this prediction. We observed considerable individual differences in both baseline fixation disparity and readers' response to the critical asymmetric [14,2] window.
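The composite binocular window described above can be thought of as spanning from the leftmost to the rightmost character covered by either eye's window, each centred on that eye's own fixated character. The sketch below is an illustrative assumption about how such a composite range could be computed, not the authors' implementation; the function name and inclusive index convention are made up for the example.

```python
def binocular_window(left_eye_fix: int, right_eye_fix: int,
                     window_left: int, window_right: int,
                     text_length: int) -> tuple:
    """Return inclusive (start, end) character indices of the composite
    binocular window: from the leftmost to the rightmost character
    covered by either monocular window.

    Each eye contributes a window spanning `window_left` characters to
    the left and `window_right` characters to the right of its own
    fixated character.
    """
    start = min(left_eye_fix, right_eye_fix) - window_left
    end = max(left_eye_fix, right_eye_fix) + window_right
    return max(0, start), min(text_length - 1, end)


# Symmetric [8, 8] window vs. the skewed [14, 2] window, assuming a
# two-character fixation disparity between the eyes.
print(binocular_window(20, 22, 8, 8, 80))   # -> (12, 30)
print(binocular_window(20, 22, 14, 2, 80))  # -> (6, 24)
```

Under this rule a larger fixation disparity widens the composite window, which is the mechanism the asymmetric [14,2] condition was designed to encourage.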

5.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

6.
The current study investigated from how large a region around their current point of gaze viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.
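The operational definition used above, visual span as the smallest window at which search times become indistinguishable from the no-window baseline, can be illustrated with a simple criterion-based sketch. The proportional tolerance rule and the condition means below are made-up stand-ins for the nonlinear mixed-model analysis actually reported.

```python
def estimate_visual_span(radii_deg, mean_search_times, baseline_time,
                         tolerance=0.05):
    """Return the smallest window radius whose mean search time falls
    within `tolerance` (as a proportion) of the no-window baseline,
    or None if no radius qualifies."""
    for radius, t in sorted(zip(radii_deg, mean_search_times)):
        if t <= baseline_time * (1 + tolerance):
            return radius
    return None


# Hypothetical condition means in seconds; baseline = 1.20 s.
radii = [1, 3, 4, 4.7, 5.4, 6.1, 8, 10]
times = [3.4, 2.1, 1.8, 1.6, 1.5, 1.4, 1.24, 1.21]
print(estimate_visual_span(radii, times, baseline_time=1.20))  # -> 8
```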

7.
Masking of foveal and parafoveal vision during eye fixations in reading
A window or visual mask was moved across text in synchrony with the reader's eye movements. The size of the window or mask was varied so that information in either foveal or parafoveal vision was masked on each fixation. In another experiment, the onset of the mask was delayed for a certain amount of time following the end of the saccade. The results of the experiments point out the relative importance of foveal and parafoveal vision for reading and further indicate that most of the visual information necessary for reading can be acquired during the first 50 msec that information is available during an eye fixation.
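The delayed-mask manipulation amounts to a per-fixation timer: at the end of each saccade the text is shown normally, and after a fixed delay the mask replaces it until the next saccade begins. The sketch below expresses that logic; the callables, the 50 ms default, and the use of a sleep call are illustrative assumptions rather than details of the original apparatus, and a real experiment would lock the delay to the display refresh instead.

```python
import time

def run_fixation_with_delayed_mask(show_text, show_mask, delay_ms=50):
    """Emulate the delayed-mask-onset condition for one fixation:
    display the text at fixation onset, then swap in the mask after
    `delay_ms` milliseconds.

    show_text, show_mask -- callables that update the display
    """
    show_text()                    # fixation begins: text is visible
    time.sleep(delay_ms / 1000.0)  # text available for `delay_ms`
    show_mask()                    # mask covers the text until the next saccade


# Stand-in display routines for demonstration.
run_fixation_with_delayed_mask(
    show_text=lambda: print("text visible"),
    show_mask=lambda: print("mask on after 50 ms"),
)
```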

8.
The amount of time viewers could process a scene during eye fixations was varied by a mask that appeared at a certain point in each eye fixation. The scene did not reappear until the viewer made an eye movement. The main finding in the studies was that in order to normally process a scene, viewers needed to see the scene for at least 150 ms during each eye fixation. This result is surprising because viewers can extract the gist of a scene from a brief 40- to 100-ms exposure. It also stands in marked contrast to reading, as readers need only to view the words in the text for 50 to 60 ms to read normally. Thus, although the same neural mechanisms control eye movements in scene perception and reading, the cognitive processes associated with each task drive processing in different ways.

9.
Two applications for the Macintosh that permit students to read the sentences in a text in any order also provide a record of reading behavior from which processing time and reading strategies can be determined. The applications differ: READIT! presents the sentences one at a time, whereas SELECT THE TEXT presents the entire passage with a mask, which resembles the moving window technique. Unique to these applications is that students may return to sentences to reread them any number of times and in any order. Because these applications allow students to reexamine parts of the text, the reading that they enable is more similar to normal reading than has been the case with previous methodologies for tracking student reading behavior. The applications are described, and a summary of the major results of the work in which we have used the applications is provided.

10.
康廷虎  张会 《心理科学》2020,(6):1312-1318
Eye-movement paradigms are among the most important methods in scene perception research: by recording changes in eye movements in real time during scene viewing, they can faithfully reveal the psychological mechanisms underlying scene information processing. Mental activity, however, is extremely complex, and the corresponding eye-movement measures are correspondingly diverse and complex. These measures can be analyzed along different dimensions, such as global versus local and temporal versus spatial. This article reviews existing classifications of eye-movement measures and proposes a classification of the measures used in scene perception research from the perspective of fixations and saccades. On this basis, the relevant measures are analyzed and introduced with respect to their defining criteria, psychological significance, and research applications. Finally, potential problems in the analysis and application of eye-movement measures are discussed, along with directions in which future research might be extended.
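Most fixation- and saccade-based measures of the kind surveyed above are derived from the same parsed event stream. The sketch below computes a few common global measures (fixation count, mean fixation duration, mean saccade amplitude) from a list of already-detected fixations; the simple tuple format and the approximation of saccade amplitude as the distance between successive fixation locations are assumptions made for illustration.

```python
from math import hypot

def summarize_eye_movements(fixations):
    """Compute global eye-movement measures from detected fixations.

    fixations -- non-empty list of (x_deg, y_deg, duration_ms) tuples
                 in temporal order; saccade amplitude is approximated
                 as the distance between successive fixation locations.
    """
    durations = [d for _, _, d in fixations]
    amplitudes = [
        hypot(x2 - x1, y2 - y1)
        for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])
    ]
    return {
        "fixation_count": len(fixations),
        "mean_fixation_duration_ms": sum(durations) / len(durations),
        "mean_saccade_amplitude_deg": (
            sum(amplitudes) / len(amplitudes) if amplitudes else 0.0
        ),
    }


# Three hypothetical fixations on a scene.
print(summarize_eye_movements([(2.0, 1.0, 230), (6.5, 1.2, 310), (6.8, 4.0, 185)]))
```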

11.
B Pavard  A Berthoz 《Perception》1977,6(5):529-540
In the present work, we have shown the effect of a vestibular stimulation on the velocity perception of a moving scene. The intensity of this effect is related to the amplitude of the cart acceleration, image velocity, spatial frequency of the visual stimulus, and the angle between the directions of cart and image movement. A simple model has been developed to determine whether the perception of visual movement is due to the geometric projection of the vestibular evaluation on the visual vector, or the inverse.

12.
Rushton SK  Bradshaw MF  Warren PA 《Cognition》2007,105(1):237-245
An object that moves is spotted almost effortlessly; it "pops out". When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion. Without the unique identifier of retinal motion an object moving relative to the scene should be difficult to locate. Using a search task, we investigated this proposition. Computer-rendered objects were moved and transformed in a manner consistent with movement of the observer. Despite the complex pattern of retinal motion, objects moving relative to the scene were found to pop out. We suggest the brain uses its sensitivity to optic flow to "stabilise" the scene, allowing the scene-relative movement of an object to be identified.

13.
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.

14.
Change blindness is the relative inability of normally sighted observers to detect large changes in scenes when the low-level signals associated with those changes are either masked or of extremely low magnitude. Change detection can be inhibited by saccadic eye movements, artificial saccades or blinks, and 'mud splashes'. We now show that change detection is also inhibited by whole image motion in the form of sinusoidal oscillations. The degree of disruption depends upon the frequency of oscillation, which at 3 Hz is equivalent to that produced by artificial blinks. Image motion causes the retinal image to be blurred and this is known to affect object recognition. However, our results are inconsistent with good change detection followed by a delay due to poor recognition of the changing object. Oscillatory motion can induce eye movements that potentially mask or inhibit the low-level signals related to changes in the scene, but we show that eye movements promote rather than inhibit change detection when the image is moving.

15.
The CHRNA4 gene is known to be associated with individual differences in attention. However, its associations with other cognitive functions remain to be elucidated. In the present study, we investigated the effects of genetic variations in CHRNA4 on rapid scene categorization by 100 healthy human participants. In Experiment 1, we also conducted the Attention Network Test (ANT) in order to examine whether the genetic effects could be accounted for by attention. CHRNA4 was genotyped as carrying the TT, CT, or CC allele. The scene categorization task required participants to judge whether the category of a scene image (natural or man-made) was consistent with a cue word displayed at the response phase. The target–mask stimulus onset asynchrony (SOA) ranged from 13 to 93 ms. In comparison with CC-allele carriers, CT- and TT-allele carriers responded more accurately at the long SOA (93 ms) only during natural-scene categorization. In contrast, we observed no consistent association between CHRNA4 and the ANT, and no intertask correlation between scene categorization and the ANT. To validate our natural-scene categorization results, Experiment 2, carried out with an independent sample of 100 participants and a different stimulus set, successfully replicated the association between CHRNA4 genotypes and natural-scene categorization accuracy at long SOAs (67 and 93 ms). Our findings demonstrate, for the first time, that genetic variations in CHRNA4 can moderately contribute to individual differences in natural-scene categorization performance.

16.
People are highly skilled at intercepting moving objects and are capable of remarkably accurate timing. The timing accuracy required depends upon the period of time for which contact with a moving target is possible--the "time window" for successful interception. Studies of performance in an experimental interception task that allows this time window to be manipulated suggest that people change aspects of their performance (movement time, MT, and movement speed) in response to changes in the time window. However, this research did not establish whether the observed changes in performance were the results of a response to the time window per se or of independent responses to the quantities defining the time window (the size and speed of a moving target). Experiment 1 was designed to resolve this issue. The speed and size of the target were both varied, resulting in variations in the time window; MT was the primary dependent measure. Predictions of the hypothesis that people respond directly to changes in the time window were verified. Predictions of the alternative hypothesis that responses to changes in target speed and size are independent of one another were not supported. Experiment 2 examined how the type of performance change observed in Experiment 1 was affected by changing the time available for executing the interception. The time available and the target speed were varied, and MT was again the primary dependent measure. MT was smaller when there was less time available, and the effect of target speed (and hence the time window) on MT was also smaller, becoming undetectable at the shortest available time (0.4 s). The results of the two experiments are interpreted as providing information about the "rule" used to preprogramme movement parameters in anticipatory interceptive actions.
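The "time window" manipulated in these experiments can be made concrete with a simple kinematic approximation: contact is possible for as long as the moving target overlaps the interception zone, so the window scales with the combined extent of target and zone and inversely with target speed. The formula and numbers below are a hedged simplification for illustration, not the authors' exact task geometry.

```python
def interception_time_window(target_length_m: float,
                             interception_zone_m: float,
                             target_speed_mps: float) -> float:
    """Approximate duration (s) for which interception is possible:
    the time during which the target overlaps the interception zone."""
    return (target_length_m + interception_zone_m) / target_speed_mps


# A 5 cm target crossing a 2 cm interception zone at 0.5 m/s gives a
# 140 ms window; doubling the speed halves the window.
print(interception_time_window(0.05, 0.02, 0.5))  # 0.14 s
print(interception_time_window(0.05, 0.02, 1.0))  # 0.07 s
```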

17.
A study of word recognition during reading under moving-window conditions
The moving window technique was used for a preliminary investigation of word recognition during Chinese reading. Eight university students read eight scientific articles. Under moving-window conditions, fixation times on words were influenced both by properties of the words themselves, such as word frequency and the number of characters in the word, and by higher-level comprehension processes, such as the word's importance for understanding the passage, sentence-boundary effects, and text difficulty. The moving window technique can thus, to some extent, be used to examine the continuous processing of words during reading in a relatively naturalistic way.

18.
When subjects are asked to determine where a fast-moving stimulus enters a window, they typically do not localize the stimulus at the edge, but at some later position within that window (Fröhlich effect). We report five experiments that explored this illusion. An attentional account is tested, assuming that the entrance of the stimulus in the window initiates a focus shift toward it. While this shift is under way, the stimulus moves into the window. Because the first phenomenal (i.e., explicitly reportable) representation of the stimulus will not be available before the end of the focus shift, the stimulus is perceived at some later position. In Experiment 1, we established the Fröhlich effect and showed that its size depends on stimulus parameters such as movement speed and movement direction. In Experiments 2 and 3, we examined the influence of eye movements and tested whether the effect changed when the stimuli were presented within a structural background or when they started from different eccentricities. In Experiments 4 and 5, specific predictions from the attentional model were tested: In Experiment 4 we showed that the processing of the moving stimulus benefits from a preceding peripheral cue indicating the starting position of the subsequent movement, which induces a preliminary focus shift to the position where the moving stimulus would appear. As a consequence the Fröhlich effect was reduced. Using a detection task in Experiment 5, we showed that feature information about the moving stimulus is lost when it falls into the critical interval of the attention shift. In conclusion, the present attentional account shows that selection mechanisms are not exclusively space based; rather, they can establish a spatial representation that is also used for perceptual judgment—that is, selection mechanisms can be space establishing as well.
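Under the attentional account sketched above, the size of the Fröhlich effect should be roughly the distance the stimulus travels while the focus shift is still under way, that is, stimulus speed multiplied by the shift latency. The sketch below expresses that prediction; the 70 ms latency is a hypothetical placeholder, not an estimate taken from the paper.

```python
def predicted_froehlich_shift(speed_deg_per_s: float,
                              attention_shift_ms: float) -> float:
    """Predicted mislocalization (deg) under the attentional account:
    the distance covered by the stimulus during the attention shift."""
    return speed_deg_per_s * (attention_shift_ms / 1000.0)


# Faster motion predicts a larger Froehlich effect for a fixed 70 ms shift.
for speed in (10.0, 20.0, 40.0):  # deg/s
    print(speed, "deg/s ->", predicted_froehlich_shift(speed, 70.0), "deg")
```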

19.
From Lisp machine to language lab
A general method is described for using a Lisp Machine to study reading as it takes place. The method involves simulating, on the machine's CRT, a moving window that passes through the text being read. The speed and direction of the window can be controlled by the reader, or test probes can be coordinated within the text viewed. The method allows reading to occur in an easily monitored environment, and the reader can respond to the material being read in a flexible manner. Suggestions for programming the Lisp Machine are provided which lead to improved overall execution speed and constancy of runtime for specific pieces of code in on-line experiments.

20.
Participants saw a standard scene of three objects on a desktop and then judged whether a comparison scene was either the same, except for the viewpoint of the scene, or different, when one or more of the objects either exchanged places or were rotated around their center. As in Nakatani, Pollatsek, and Johnson (2002), judgment times were longer when the rotation angles of the comparison scene increased, and the size of the rotation effect varied for different axes and was larger for same judgments than for different judgments. A second experiment, which included trials without the desktop, indicated that removing the desktop frame of reference mainly affected the y-axis rotation conditions (the axis going vertically through the desktop plane). In addition, eye movement analyses indicated that the process was far more than a simple analogue rotation of the standard scene. The total response latency was divided into three components: the initial eye movement latency, the first-pass time, and the second-pass time. The only indication of a rotation effect in the time to execute the first two components was for z-axis (plane of sight) rotations. Thus, for x- and y-axis rotations, rotation effects occurred only in the probability of there being a second pass and the time to execute it. The data are inconsistent either with an initial rotation of the memory representation of the standard scene to the orientation of the comparison scene or with a holistic alignment of the comparison scene prior to comparing it with the memory representation of the standard scene. Indeed, the eye movement analysis suggests that little of the increased response time for rotated comparison scenes is due to something like a time-consuming analogue process but is, instead, due to more comparisons on individual objects being made (possibly more double checking).
