Similar Literature
20 similar records found (search time: 15 ms)
1.
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variance in their size, shape, and location is unavoidable. This variance affects not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on them. To reduce the variance in AOI creation and modification and to provide a procedure for processing eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Users therefore need to create only one set of AOIs for the template, rather than a unique set for every individual stimulus. This change greatly reduces the error caused by variance in manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for advanced analysis approaches such as iMap. We have developed software (iTemplate) with a graphical user interface to make this analysis method available to researchers.
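The abstract describes the registration only as a linear transformation. A minimal sketch of one such transform is a least-squares 2-D affine fit; the landmark correspondences `src` and `dst` are assumed inputs for illustration, not names from the paper:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding points (N >= 3).
    Returns a (2, 3) matrix A such that dst ~= A @ [x, y, 1]^T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A ~= dst
    return A.T                                   # (2, 3)

def register_gaze(gaze, A):
    """Apply the fitted affine transform to raw gaze samples (M, 2)."""
    X = np.hstack([gaze, np.ones((gaze.shape[0], 1))])
    return X @ A.T
```

Once every recording is mapped into template coordinates this way, a single set of AOIs defined on the template applies to all stimuli.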

2.
To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between the primary (driving) task and secondary tasks (operating in-vehicle devices). In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver allocates attention among the tasks he or she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver's scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject's scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied entirely on manual identification of the focus of visual attention from a point of regard superimposed on a video of the recorded scene, failing to exploit the movement structure present in the raw recorded eye positions. Although effective, these methods are too time consuming for the large data sets required to identify subtle differences between drivers, across road conditions, and across levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in eye movement data processing by extending automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.
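The extension from fixation-only identification to a three-way categorization (fixation, smooth, saccadic) can be illustrated with a simple velocity-threshold scheme; the threshold values below are illustrative placeholders, not the article's calibrated ones:

```python
import numpy as np

def classify_gaze_samples(x, y, t, fix_thresh=10.0, sacc_thresh=100.0):
    """Label each interval between successive gaze samples as 'fixation',
    'smooth' (pursuit), or 'saccade' from point-to-point speed in deg/s.

    fix_thresh and sacc_thresh (deg/s) are illustrative assumptions.
    """
    speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    labels = np.full(speed.shape, 'smooth', dtype=object)  # mid-range speeds
    labels[speed < fix_thresh] = 'fixation'   # eye essentially still
    labels[speed > sacc_thresh] = 'saccade'   # ballistic jump
    return labels
```

Grouping the resulting runs of identical labels gives the events that reduce the scene-video search to a finite set of episodes.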

3.
This paper presents a novel three-dimensional (3-D) eye movement analysis algorithm for binocular eye tracking within virtual reality (VR). The user's gaze direction, head position, and orientation are tracked in order to allow recording of the user's fixations within the environment. Although the linear signal analysis approach is itself not new, its application to eye movement analysis in three dimensions advances traditional two-dimensional approaches, since it takes into account the six degrees of freedom of head movement and is resolution independent. Results indicate that the 3-D eye movement analysis algorithm can successfully be used for analysis of visual process measures in VR. Process measures not only can corroborate performance measures, but can also lead to discoveries of the reasons for performance improvements. In particular, analysis of users' eye movements in VR can potentially lead to further insights into the underlying cognitive processes of VR subjects.

4.
ASTEF: A simple tool for examining fixations
In human factors and ergonomics research, the analysis of eye movements has gained popularity as a method for obtaining information about an operator's cognitive strategies and for drawing inferences about an individual's cognitive state. For example, recent studies have shown that the distribution of eye fixations is sensitive to variations in mental workload: dispersed when workload is high, clustered when workload is low. Spatial statistics algorithms can be used to characterize the type of distribution and can be applied to fixations recorded during short epochs of time to assess online changes in the level of mental load experienced by an individual. To ease computation of the statistical index and to encourage research on the spatial properties of visual scanning, A Simple Tool for Examining Fixations (ASTEF) has been developed. The application implements functions for fixation visualization, management, and analysis, and includes a tool for fixation identification from raw gaze point data. Updated information can be obtained online at www.astef.info, where the installation package is freely downloadable.
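One common spatial statistic for the clustered-versus-dispersed distinction is the Clark-Evans nearest-neighbour index; whether ASTEF computes exactly this index is an assumption here, but it illustrates the idea:

```python
import numpy as np

def nearest_neighbor_index(points, area):
    """Clark-Evans nearest-neighbour index over fixation locations.

    points: (N, 2) fixation coordinates; area: area of the display region.
    R < 1 suggests clustering (low-workload pattern in the studies cited);
    R > 1 suggests dispersion (high-workload pattern).
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # ignore self-distances
    observed = d.min(axis=1).mean()              # mean nearest-neighbour distance
    expected = 0.5 / np.sqrt(n / area)           # expectation under complete
    return observed / expected                   # spatial randomness
```

Computed over successive short epochs of fixations, the index gives the online workload signal the abstract describes.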

5.
EMAS is a software system (written in VAX-11 FORTRAN) for the analysis of eye movement data recorded during the performance of figural tasks. Its main functions are: (1) Calibration of raw coordinates of eye movements to determine their actual position on the stimulus display. Different kinds of measurement distortion may be corrected. (2) Identification of eye fixations and the determination of their locations and durations. (3) Analysis of fixation sequences. The frequency of transitions of fixations among specified sectors of the stimulus display is computed. A sequential list is made of the successive fixations in which the fixated sector and the fixation duration are graphically indicated. (4) Plotting of raw or calibrated eye movement data and fixation points. The sequence of fixations in specified display sectors can also be plotted in real time. Applications of the programs to the Embedded Figures Test and the Hidden Figures Test are illustrated.

6.
The present study aimed to further explore the role of the head in configural body processing by comparing complete bodies with headless bodies and faceless heads (Experiment 1). A second aim was to further explore the role of the eye region in configural face processing; to this end, we conducted a second experiment with complete faces, eyeless faces, and eyes alone (Experiment 2). In addition, we used two manipulations of configural processing: stimulus inversion and scrambling. The current data clearly show an inversion effect for intact bodies presented with the head and for faces that include the eye region. Thus, the head and the eye region appear central to the configural processes probed by stimulus inversion. Furthermore, the behavioural and electrophysiological body inversion effect depends on the intact configuration of bodies and is associated with the N170, just as the face inversion effect depends on the intact face configuration. Hence, configural body processing depends not only on the presence of the head but on a complete representation of the human body that includes both body and head. Likewise, configural face processing relies on intact and complete face representations that include both face and eyes.

7.
This saccade-detection algorithm for eye movement time series builds on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and by using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds for detecting the beginning of a saccade are calculated automatically from the preceding acceleration data, and the thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration relative to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades that follow the main sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye movement data (obtained by EOG) recorded while driving a car. A second demonstration applies the algorithm to detecting microsleep episodes in eye movement data.
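Advantage (1) can be sketched as follows; the window length and multiplier `k` are illustrative assumptions, since the paper's exact adaptive rule is not reproduced in the abstract:

```python
import numpy as np

def detect_saccade_onsets(accel, window=50, k=3.0):
    """Flag sample indices where |acceleration| exceeds a threshold
    adapted from the preceding `window` samples (mean + k * std of the
    preceding absolute acceleration). Detections closer together than
    one window are collapsed onto the first.
    """
    onsets = []
    for i in range(window, len(accel)):
        recent = np.abs(accel[i - window:i])
        thresh = recent.mean() + k * recent.std()  # adapts to local noise level
        if abs(accel[i]) > thresh and (not onsets or i - onsets[-1] > window):
            onsets.append(i)
    return onsets
```

Because the threshold follows the local noise level, the same code works on clean laboratory recordings and on noisier EOG data without retuning fixed parameters.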

8.
Previous research has demonstrated that younger adults are surprisingly poor at detecting substantial changes to visual scenes. Little is known, however, about age differences in this phenomenon. In the 2 experiments reported here, older adults were slower than younger adults in detecting changes to simple visual stimuli. This age difference was beyond what would be expected given known age-related changes in processing speed. Examination of eye movement behavior during the search for change suggested that age-related changes in the useful field of view and degree of cautiousness play a significant role. Speed of processing and 3 age-related eye movement behaviors explained 85% of the variance in change detection latency, eliminating the effect of age.

9.
10.
康廷虎, 张会. 《心理科学》 (Psychological Science), 2020, (6): 1312-1318
The eye movement paradigm is one of the most important methods in scene perception research: by recording eye movement changes in real time during scene perception, it can faithfully reflect the internal psychological mechanisms of scene information processing. Human mental activity is extremely complex, however, and the corresponding eye movement measures are accordingly diverse and complex. Eye movement measures can be analyzed along different dimensions, such as global versus local and temporal versus spatial. This article reviews existing classifications of eye movement measures and proposes a classification of measures in scene perception from the perspective of fixations and saccades. On this basis, the corresponding measures are analyzed and introduced in terms of their defining criteria, psychological significance, and research applications. Finally, we discuss potential problems in the analysis and application of eye movement measures, as well as areas that future research might expand.

11.
12.
13.
A method for measuring horizontal eye movements in the msec range is described. Accurate measurement of horizontal eye movement over a linear range of 12° is achieved by processing the image of the eye illuminated with infrared light and with the head position fixed. The system has given very reliable results, and a resolution of 6 min of visual angle can be achieved with a character space of 45 min of arc. We also describe efficient numerical-data processing which allows the precise determination of the absolute position of the eye.

14.
Writers typically spend a certain proportion of time looking back over the text that they have written. This is likely to serve a number of different functions, which are currently poorly understood. In this article, we present two systems, ScriptLog+TimeLine and EyeWrite, that adopt different and complementary approaches to exploring this activity by collecting and analyzing combined eye movement and keystroke data from writers composing extended texts. ScriptLog+TimeLine is a system that is based on an existing keystroke-logging program and uses heuristic, pattern-matching methods to identify reading episodes within eye movement data. EyeWrite is an integrated editor and analysis system that permits identification of the words that the writer fixates and their location within the developing text. We demonstrate how the methods instantiated within these systems can be used to make sense of the large amount of data generated by eyetracking and keystroke logging in order to inform understanding of the cognitive processes that underlie written text production.

15.
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted to the quality of the eye movement data. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce a similar classification of other data, without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique to the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To show the practical utility of the proposed method in applications that employ eye movement classification algorithms, we provide an example in which the method is employed in an eye-movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared with current state-of-the-art event detection algorithms and can reach the performance of manual coding.
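The training step can be sketched with scikit-learn's `RandomForestClassifier` and a deliberately small per-sample feature set (speed and acceleration); the published method uses a much richer feature set, so this is only a minimal illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gaze_features(x, y, t):
    """Per-sample speed and absolute acceleration from raw gaze traces.
    A minimal two-feature set for illustration only."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    speed = np.hypot(dx, dy)
    accel = np.abs(np.gradient(speed, t))
    return np.column_stack([speed, accel])

def train_event_classifier(x, y, t, labels):
    """Fit a random forest on already-labelled samples (e.g. 0 = fixation,
    1 = saccade, from manual or algorithmic coding) so it can label new
    recordings without per-dataset parameter tuning."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(gaze_features(x, y, t), labels)
    return clf
```

Any existing hand-coded or algorithmically detected events can serve as `labels`, which is exactly the training regime the abstract describes.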

16.
A novel three-dimensional eye tracker is described and its performance evaluated. In contrast to previous devices based on conventional video standards, the present eye tracker is based on programmable CMOS image sensors, interfaced directly to digital processing circuitry to permit real-time image acquisition and processing. This architecture provides a number of important advantages, including image sampling rates of up to 400/sec, direct pixel addressing for preprocessing and acquisition, and hard-disk storage of relevant image data. The reconfigurable digital processing circuitry also facilitates in-line optimization of the front-end, time-critical processes. The primary acquisition algorithm for tracking the pupil and other eye features is designed around the generalized Hough transform. The tracker permits comprehensive measurement of eye movement (three degrees of freedom) and head movement (six degrees of freedom), and thus provides the basis for many types of vestibulo-oculomotor and visual research. The device has been qualified by the German Space Agency (DLR) and NASA for deployment on the International Space Station. It is foreseen that the device will be used together with appropriate stimulus generators as a general-purpose facility for visual and vestibular experiments. Initial verification studies with an artificial eye demonstrate a measurement resolution of better than 0.1° in all three components (system noise for each component measured as 0.006° H, 0.005° V, and 0.016° T). Over a range of ±20° of eye rotation, linearity was found to be <0.5% (H), <0.5% (V), and <2.0% (T). A comparison with the scleral search coil technique yielded near-equivalent values for the system noise and the thickness of Listing's plane.

17.
Video cameras provide a simple, noninvasive method for monitoring a subject’s eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Offline analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.

18.
Two experiments are reported that address the issue of coordination of the eyes, head, and hand during reaching and pointing. Movement initiation of the eyes, head, and hand were monitored in order to make inferences about the type of movement control used. In the first experiment, when subjects pointed with the finger to predictable or unpredictable locations marked by the appearance of a light, no differences between head and eye movement initiation were found. In the second experiment, when subjects pointed very fast with the finger, the head started to move before the eyes did. Conversely, when subjects pointed accurately, and thus more slowly, with the finger, the eyes started to move first, followed by the head and finger. When subjects were instructed to point to the same visual target only with their eyes and head, both fast and accurately, however, eye movement always started before head movement, regardless of speed-accuracy instructions. These results indicate that the behavior of the eye and head system can be altered by introducing arm movements. This, along with the variable movement initiation patterns, contradicts the idea that the eye, head, and hand system is controlled by a single motor program. The time of movement termination was also monitored, and across both experiments, the eyes always reached the target first, followed by the finger, and then the head. This finding suggests that movement termination patterns may be a fundamental control variable.

19.
This paper describes a combined instrument (eye tracker and target generator, both head mounted, with integrated data analysis) that tests parameters of saccadic eye movement and fixation control to give insight into the status of functional brain systems. Using three minilasers, the target generator projects three visual stimuli, a fixation point and two lateral stimuli, with programmable timing. The controller allows the selection of overlap, 200-msec gap, or remembered-saccade trials. Size, maximal velocity, and reaction time are determined for each primary saccade, and the numbers of prosaccades and antisaccades are counted. Further measures, such as the occurrence and latency of corrective saccades, may be evaluated off line with an interactive PC analysis program, and the eye position data can be transferred to a PC. Off-line analysis compares each observed variable with an age-matched control group (300 healthy control subjects, 7–70 years of age, tested in the overlap condition with prosaccade instructions and in the gap condition with antisaccades). The diagnostic results can be used to elaborate an individual optomotor training program.

20.
When the peripheral visual field is restricted or distorted, as occurs with certain spectacle lenses, the identification of objects in the periphery requires a coordinated head and eye movement. Initial experiments on the identification of peripheral images under such restrictions show that the degradation in performance is defined by a consistent additional delay in the time required to identify the image correctly. An analysis of the motor movements shows that performance is solely determined by movements of the head; eye movements are sufficiently precise and fast so they do not limit performance. A quantitative model of the identification task was developed and model simulations confirmed the experimental findings that head movement variables, specifically response latency and movement duration, uniquely determine identification performance. Hence, improved performance under these conditions must come from modifications in head-movement control either through training or adaptive processes.
