Similar Articles
20 similar articles found (search time: 187 ms)
1.
This article reports a calibration procedure that enables researchers to track movements of the eye while allowing relatively unrestricted head and/or body movement. The eye-head calibration algorithm calculates the fixation point from eye-position data acquired by a head-mounted eyetracker and corresponding head-position data acquired by a 3-D motion-tracking system. In a single experiment, we show that this procedure provides robust eye-position estimates while allowing free head movement. Although several companies offer ready-made systems for this purpose, there is no literature available that makes it possible for researchers to explore the details of the calibration procedures used by these systems. By making such details available, we hope to facilitate the development of cost-effective, nonproprietary eyetracking solutions.
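The core geometry of such an eye-head combination can be sketched in a simplified planar form: the eye-in-head angle from the eyetracker is added to the head yaw from the motion tracker, and the combined gaze ray is intersected with the display plane. This is a minimal 2-D illustration under assumed conventions, not the article's actual calibration algorithm; all names and the planar setup are assumptions.

```python
import math

def fixation_point_2d(head_pos, head_yaw, eye_yaw, screen_x):
    """Intersect the combined head + eye gaze ray with a screen plane at
    x = screen_x. head_pos is (x, y); angles are radians from the +x axis."""
    gaze = head_yaw + eye_yaw              # eye angle is measured in the head frame
    if math.cos(gaze) <= 0.0:
        raise ValueError("gaze ray points away from the screen")
    t = (screen_x - head_pos[0]) / math.cos(gaze)  # ray length to the plane
    return head_pos[1] + t * math.sin(gaze)        # vertical hit point on screen

straight = fixation_point_2d((0.0, 0.0), 0.0, 0.0, 1.0)       # looking straight ahead
turned = fixation_point_2d((0.0, 0.0), math.pi / 4, 0.0, 1.0)  # head turned 45 deg
```

A full 3-D implementation would replace the yaw angles with rotation matrices from the motion tracker, but the ray-plane intersection step is the same.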

2.
Eye tracking has become a popular research tool in infant and toddler research. How to appropriately select and use an eye tracker for data collection and analysis is an important question that infant eye movement researchers must consider. Following the workflow of eye tracker use, this article reviews and analyzes four issues involved in infant eye movement research: (1) choosing the right instrument; (2) calibrating appropriately; (3) improving data quality; and (4) analyzing and mining the data effectively. Corresponding practical recommendations are offered for each of these aspects.

3.
In eye movements, saccade trajectory deviation has often been used as a physiological operationalization of visual attention, distraction, or the visual system’s prioritization of different sources of information. However, there are many ways to measure saccade trajectories and to quantify their deviation. This may lead to noncomparable results and poses the problem of choosing a method that will maximize statistical power. Using data from existing studies and from our own experiments, we used principal components analysis to carry out a systematic quantification of the relationships among eight different measures of saccade trajectory deviation and their power to detect the effects of experimental manipulations, as measured by standardized effect size. We concluded that (1) the saccade deviation measure is a good default measure of saccade trajectory deviation, because it is somewhat correlated with all other measures and shows relatively high effect sizes for two well-known experimental effects; (2) more generally, measures made relative to the position of the saccade target are more powerful; and (3) measures of deviation based on the early part of the saccade are made more stable when they are based on data from an eyetracker with a high sampling rate.  Our recommendations may be of use to future eye movement researchers seeking to optimize the designs of their studies.
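A systematic comparison of this kind can be sketched as a principal components analysis over a saccades-by-measures matrix. The synthetic data, measure count, and variable names below are assumptions for illustration, not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 200 saccades x 8 deviation measures (synthetic, correlated).
base = rng.normal(size=(200, 1))
measures = base + 0.5 * rng.normal(size=(200, 8))

# Standardize each measure, then run PCA via the SVD.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance ratio per principal component
loadings = vt                     # rows: components; columns: the 8 measures
```

Inspecting `loadings` for the first component shows which deviation measures move together, which is the kind of relationship the article quantifies.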

4.
Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with the PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported as performance tests of GazeParser. These showed that the means and standard deviations of the errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on the participant. In gap/overlap and antisaccade tasks, the latencies and amplitudes of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results show that GazeParser performs adequately for use in psychological experiments.

5.
In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a recalibration procedure when necessary. This article also demonstrates how the disparity will vary across screen regions and participants and how each participant’s unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
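The two operations described here, measuring fixation-to-RFL disparity and removing a participant's systematic offset, can be sketched as follows. The function names, sample coordinates, and the simple mean-offset model of the error signature are illustrative assumptions, not the article's implementation:

```python
import numpy as np

def disparity(fixations, rfls):
    """Euclidean distance between recorded fixations and their implicit RFLs."""
    diff = np.asarray(fixations) - np.asarray(rfls)
    return np.hypot(diff[:, 0], diff[:, 1])

def error_signature(fixations, rfls):
    """Mean offset vector for one participant; subtracting it from subsequent
    fixations reduces that participant's systematic error."""
    return (np.asarray(fixations) - np.asarray(rfls)).mean(axis=0)

# Hypothetical recorded fixations vs. implicit RFL screen locations (pixels).
fix = np.array([[102.0, 98.0], [203.0, 99.0], [101.0, 202.0]])
rfl = np.array([[100.0, 100.0], [200.0, 100.0], [100.0, 200.0]])
d = disparity(fix, rfl)            # large values could trigger recalibration
sig = error_signature(fix, rfl)
corrected = fix - sig              # fixations with the signature removed
```

In practice the signature would be estimated per screen region rather than globally, since the abstract notes that the disparity varies across screen regions.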

6.
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for each individual stimulus. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphical user interface to make this analysis method available to researchers.
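The registration step can be sketched as a least-squares affine fit between corresponding landmarks on a stimulus and the template, then applied to the fixation coordinates. The landmark choice, function names, and transform below are assumptions for illustration, not iTemplate's actual API:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points (N, 2) onto dst (N, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params                                   # shape (3, 2)

def apply_affine(params, pts):
    """Map fixation coordinates into template space."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ params

# Hypothetical landmarks: stimulus corners and where they sit on the template.
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 50.0], [100.0, 50.0]])
dst = src * 2 + np.array([10.0, 5.0])              # template is scaled and shifted
T = fit_affine(src, dst)
fix_in_template = apply_affine(T, np.array([[50.0, 25.0]]))
```

Once every stimulus's fixations are mapped into template space this way, a single set of template AOIs suffices for the whole data set.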

7.
Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LEDs)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved.

8.
Eye movement data analyses are commonly based on the probability of occurrence of saccades and fixations (and their characteristics) in given regions of interest (ROIs). In this article, we introduce an alternative method for computing statistical fixation maps of eye movements, iMap, based on an approach inspired by methods used in functional magnetic resonance imaging. Importantly, iMap does not require the a priori segmentation of the experimental images into ROIs. With iMap, fixation data are first smoothed by convolution with a Gaussian kernel to generate three-dimensional fixation maps. This smoothing embodies the eyetracker's accuracy, but the Gaussian kernel can also be flexibly set to represent acuity or attentional constraints. In addition, the smoothed fixation data generated by iMap conform to the assumptions of the robust statistical random field theory (RFT) approach, which is applied thereafter to assess significant fixation spots and differences across the three-dimensional fixation maps. The RFT corrects for the multiple statistical comparisons generated by the numerous pixels constituting the digital images. To illustrate the processing steps of iMap, we provide sample analyses of real eye movement data from face, visual scene, and memory processing. The iMap MATLAB toolbox is editable and freely available for download online.
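The smoothing step can be sketched as a separable Gaussian convolution of a duration-weighted fixation map. The kernel width, map size, and function names below are arbitrary assumptions for illustration, not the iMap toolbox itself (which is implemented in MATLAB):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def smooth_fixation_map(fmap, sigma):
    """Separable Gaussian smoothing of a 2-D fixation map (rows, then columns)."""
    g = gaussian_kernel(sigma, radius=int(3 * sigma))
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, fmap)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

fmap = np.zeros((40, 40))
fmap[20, 20] = 1.0          # a single fixation, weighted by its duration
smoothed = smooth_fixation_map(fmap, sigma=2.0)
```

Choosing `sigma` to match the eyetracker's spatial accuracy, or the span of foveal acuity, is the flexibility the abstract describes.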

9.
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted to the quality of the eye movement data. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any events that have already been detected manually or algorithmically can be used to train a classifier to produce similar classifications of other data, without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique to the detection of fixations, saccades, and post-saccadic oscillations (PSOs). To show the practical utility of the proposed method for applications that employ eye movement classification algorithms, we provide an example in which the method is employed in an eye-movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
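The per-sample features that such a classifier consumes can be sketched as below. The feature set (speed and acceleration), the sampling rate, and the synthetic step trace are illustrative assumptions, and the random forest itself (e.g., scikit-learn's `RandomForestClassifier` trained on hand-coded labels) is omitted:

```python
import numpy as np

def sample_features(x, y, fs):
    """Per-sample features commonly fed to a gaze-event classifier:
    speed and acceleration, assuming x and y are in degrees."""
    dt = 1.0 / fs
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    speed = np.hypot(vx, vy)            # deg/s
    accel = np.gradient(speed, dt)      # deg/s^2
    return np.column_stack([speed, accel])

fs = 500.0                              # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
x = np.where(t < 0.05, 0.0, 5.0)        # a 5-deg step: fixation, saccade, fixation
y = np.zeros_like(t)
X = sample_features(x, y, fs)           # one feature row per raw gaze sample
```

A trained classifier would then assign each row of `X` a label such as fixation, saccade, or PSO, which is the sample-level classification the abstract describes.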

10.
康廷虎, 张会. 《心理科学》, 2020, (6): 1312-1318
The eye movement paradigm is one of the major methods in scene perception research: by recording eye movement changes in real time during scene perception, it can faithfully reveal the psychological mechanisms underlying scene information processing. However, mental activity is extremely complex, and the corresponding eye movement measures are accordingly diverse and complex. Eye movement measures can be analyzed along different dimensions, such as global versus local and temporal versus spatial. This article reviews existing classifications of eye movement measures and attempts to classify the measures used in scene perception from the perspective of fixations and saccades. On this basis, the corresponding measures are analyzed and introduced in terms of their defining criteria, psychological meaning, and research applications. Finally, potential problems in the analysis and application of eye movement measures are discussed, along with areas that future research might explore.

11.
Smooth pursuit eye movement (SPEM) abnormalities are some of the most consistently observed neurophysiological deficits associated with genetic risk for schizophrenia. SPEM has been traditionally assessed by infrared or video oculography using laboratory-based fixed-display systems. With growing interest in using SPEM measures to define phenotypes in large-scale genetic studies, there is a need for measurement instruments that can be used in the field. Here we test the reliability of a portable, head-mounted display (HMD) eye movement recording system and compare it with a fixed-display system. We observed comparable, modest calibration changes across trials between the two systems. The between-methods reliability for the most often used measure of pursuit performance, maintenance pursuit gain, was high (ICC = 0.96). This result suggests that the portable device is comparable with a lab-based system, which makes possible the collection of eye movement data in community-based and multicenter familial studies of schizophrenia.

12.
Video-based corneal-reflection-to-pupil-center systems are widely used in eye movement research. In this paper, an artificial eye drawn on a computer screen is presented. The artificial eye provides a way to simulate measurements of eye position in human subjects. The method allows testing video-based systems at the level of the signal and at the level of the calibration algorithm used to map the eye position parameters to stimulus space. In addition, the artificial eye can be used to evaluate specific hypotheses concerning the functioning or malfunctioning of the eye recorder and as an aid in developing data analysis programs.

13.
The movements that we make with our body vary continuously along multiple dimensions. However, many of the tools and techniques presently used for coding and analyzing hand gestures and other body movements yield categorical outcome variables. Focusing on categorical variables as the primary quantitative outcomes may mislead researchers or distort conclusions. Moreover, categorical systems may fail to capture the richness present in movement. Variations in body movement may be informative in multiple dimensions. For example, a single hand gesture has a unique size, height of production, trajectory, speed, and handshape. Slight variations in any of these features may alter how both the speaker and the listener are affected by gesture. In this paper, we describe a new method for measuring and visualizing the physical trajectory of movement using video. This method is generally accessible, requiring only video data and freely available computer software. This method allows researchers to examine features of hand gestures, body movement, and other motion, including size, height, curvature, and speed. We offer a detailed account of how to implement this approach, and we also offer some guidelines for situations where this approach may be fruitful in revealing how the body expresses information. Finally, we provide data from a small study on how speakers alter their hand gestures in response to different characteristics of a stimulus to demonstrate the utility of analyzing continuous dimensions of motion. By creating shared methods, we hope to facilitate communication between researchers from varying methodological traditions.
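Once a trajectory has been extracted from video as per-frame coordinates, continuous descriptors of the kind listed here (size, speed, path length) reduce to simple arithmetic on the coordinate series. The function name, feature choices, and synthetic gesture below are illustrative assumptions, not the authors' software:

```python
import numpy as np

def trajectory_features(x, y, fps):
    """Continuous descriptors of a tracked movement trajectory (pixel coords)."""
    step = np.hypot(np.diff(x), np.diff(y))        # distance moved per frame
    path_length = float(step.sum())                # total distance travelled
    size = (float(x.max() - x.min()), float(y.max() - y.min()))  # bounding box
    mean_speed = path_length * fps / len(step)     # pixels per second
    return {"path_length": path_length, "size": size, "mean_speed": mean_speed}

# A hypothetical gesture: a 1-s horizontal sweep of 300 px, sampled at 30 fps.
x = np.linspace(0.0, 300.0, 31)
y = np.full(31, 120.0)
feats = trajectory_features(x, y, fps=30)
```

Curvature and height could be added in the same style, e.g., from second differences of the coordinates and from the vertical extent relative to a body landmark.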

14.
隋雪, 任延涛. 《心理学报》, 2007, 39(1): 64-70
Eye movement recording was used to investigate the online processing involved in facial expression recognition. Basic facial expressions fall into three categories: positive, neutral, and negative. Experiment 1 examined the basic pattern of undergraduates' online processing when recognizing these three types of facial expressions; Experiment 2 used a masking technique to examine how much information from different facial regions contributes to expression recognition. The results showed that: (1) participants' online processing of different facial expressions shared a common pattern, with scan paths following an "inverted-V" shape; (2) recognition differed significantly across expression types, as reflected in both behavioral and eye movement measures; (3) masking different facial regions fundamentally changed the eye movement pattern, and masking affected the reaction times and accuracy of expression recognition; (4) expression recognition depended to different degrees on information from different facial regions, with information from the eyes playing a larger role. These results suggest that the online processing of different facial expressions shares a common course, but that the mental effort consumed differs across expression types; moreover, expression recognition depends more heavily on information from the eye region.

15.
People often behave differently when they know they are being watched. Here, we report the first investigation of whether such social presence effects also influence looking behavior, a popular measure of attention allocation. We demonstrate that wearing an eye tracker, an implied social presence, leads individuals to avoid looking at particular stimuli. These results demonstrate that an implied social presence, here an eye tracker, can alter looking behavior. These data provide a new manipulation of social attention, as well as a methodological challenge for researchers using eye tracking.

16.
In order to judge the degree of confidence one should have in the results of an experiment using eye movement records as data, it is necessary to have information about the quality of the eye movement data themselves. Suggestions are made for ways of assessing and reporting this information. The paper deals with three areas: characteristics of the eye movement signal, algorithms used in reducing the data, and accuracy of the eye position data. It is suggested that all studies involving eye movement data should report such information. Appendices include linear interpolation algorithms for mapping from the eye movement signal to stimulus space and a way of obtaining an index of accuracy for each data point.
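The idea behind a linear interpolation mapping from the raw eye movement signal to stimulus space can be sketched per axis: a raw value is located between two calibration points and projected onto the corresponding screen span. The calibration values below are hypothetical, and this is a one-segment illustration rather than the paper's appendix algorithm:

```python
def linear_map(raw, raw_lo, raw_hi, scr_lo, scr_hi):
    """Linearly interpolate a raw eye-signal value between two calibration
    points onto screen coordinates, one axis at a time."""
    frac = (raw - raw_lo) / (raw_hi - raw_lo)
    return scr_lo + frac * (scr_hi - scr_lo)

# Hypothetical calibration: raw signal 1200..1800 spans screen x = 0..1024.
x_screen = linear_map(1500, 1200, 1800, 0, 1024)
```

With a grid of calibration points, the same formula is applied piecewise between the two nearest points on each axis.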

17.
Event detection is the conversion of raw eye-tracking data into events—such as fixations, saccades, glissades, blinks, and so forth—that are relevant for researchers. In eye-tracking studies, event detection algorithms can have a serious impact on higher level analyses, although most studies do not accurately report their settings. We developed a data-driven eyeblink detection algorithm (Identification-Artifact Correction [I-AC]) for 50-Hz eye-tracking protocols. I-AC works by first correcting blink-related artifacts within pupil diameter values and then estimating blink onset and offset. Artifact correction is achieved with data-driven thresholds, and more reliable pupil data are output. Blink parameters are defined according to previous studies on blink-related visual suppression. Blink detection performance was tested with experimental data by visually checking the actual correspondence between I-AC output and participants’ eye images, recorded by the eyetracker simultaneously with gaze data. Results showed a 97% correct detection percentage.
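A data-driven threshold on pupil diameter, the general idea behind such artifact correction, can be sketched with a robust statistic: samples far below the typical diameter (or lost entirely) are flagged as blink-related. The median/MAD threshold and the multiplier `k` are illustrative assumptions in the spirit of the approach, not the published I-AC parameters:

```python
import numpy as np

def detect_blinks(pupil, k=3.0):
    """Flag blink-related samples whose pupil diameter falls k MADs below the
    median of the valid samples, or is missing (<= 0). k is illustrative."""
    valid = pupil[pupil > 0]
    med = np.median(valid)
    mad = np.median(np.abs(valid - med))
    threshold = med - k * mad            # data-driven, per recording
    return (pupil <= threshold) | (pupil <= 0)

# Hypothetical 50-Hz pupil trace (mm) with periodic blink artifacts.
pupil = np.array([4.0, 4.1, 4.0, 0.0, 0.0, 0.2, 4.0, 3.9] * 5)
mask = detect_blinks(pupil)
```

Blink onset and offset would then be estimated from the boundaries of each flagged run, after which the flagged samples can be interpolated to output more reliable pupil data.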

18.
In 3 experiments, the author examined how readers' eye movements are influenced by joint manipulations of a word's frequency and the syntactic fit of the word in its context. In the critical conditions of the first 2 experiments, a high- or low-frequency verb was used to disambiguate a garden-path sentence, while in the last experiment, a high- or low-frequency verb constituted a phrase structure violation. The frequency manipulation always influenced the early eye movement measures of first-fixation duration and gaze duration. The context manipulation had a delayed effect in Experiment 1, influencing only the probability of a regressive eye movement from later in the sentence. However, the context manipulation influenced the same early eye movement measures as the frequency effect in Experiments 2 and 3, though there was no statistical interaction between the effects of these variables. The context manipulation also influenced the probability of a regressive eye movement from the verb, though the frequency manipulation did not. These results are shown to confirm predictions emerging from the serial, staged architecture for lexical and integrative processing of the E-Z Reader 10 model of eye movement control in reading (Reichle, Warren, & McConnell, 2009). It is argued, more generally, that the results provide an important constraint on how the relationship between visual word recognition and syntactic attachment is treated in processing models.

19.
Previous research on language comprehension has used the eyes as a window into processing. However, these methods are entirely reliant upon using visual or orthographic stimuli that map onto the linguistic stimuli being used. The potential danger of this method is that the pictures used may not perfectly match the internal aspects of language processing. Thus, a method was developed in which participants listened to stories while wearing a head-mounted eyetracker. Preliminary results demonstrate that this method is uniquely suited to measure responses to stimuli in the absence of visual stimulation.

20.
In several research contexts it is important to obtain eye-tracking measures while presenting visual stimuli independently to each of the two eyes (dichoptic stimulation). However, the hardware that allows dichoptic viewing, such as mirrors, often interferes with high-quality eye tracking, especially when using a video-based eye tracker. Here we detail an approach to combining mirror-based dichoptic stimulation with video-based eye tracking, centered on the fact that some mirrors, although they reflect visible light, are selectively transparent to the infrared wavelength range in which eye trackers record their signal. Although the method we propose is straightforward, affordable (on the order of US$1,000) and easy to implement, for many purposes it makes for an improvement over existing methods, which tend to require specialized equipment and often compromise on the quality of the visual stimulus and/or the eye tracking signal. The proposed method is compatible with standard display screens and eye trackers, and poses no additional limitations on the quality or nature of the stimulus presented or the data obtained. We include an evaluation of the quality of eye tracking data obtained using our method, and a practical guide to building a specific version of the setup used in our laboratories.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号