Similar Articles
20 similar articles found.
1.
Decision researchers frequently analyze attention to individual objects to test hypotheses about underlying cognitive processes. Generally, fixations are assigned to objects using a method known as area of interest (AOI). Ideally, an AOI includes all fixations belonging to an object while fixations to other objects are excluded. Unfortunately, due to measurement inaccuracy and insufficient distance between objects, the distributions of fixations to objects may overlap, resulting in a signal detection problem. If the AOI is to include all fixations to an object, it will also likely include fixations belonging to other objects (false positives). In a survey, we find that many researchers report testing multiple AOI sizes when performing analyses, presumably trying to balance the proportion of true and false positive fixations. To test whether AOI size influences the measurement of object attention and conclusions drawn about cognitive processes, we reanalyze four published studies and conduct a fifth tailored to our purpose. We find that in studies in which we expected overlapping fixation distributions, analyses benefited from smaller AOI sizes (0° visual angle margin). In studies where we expected no overlap, analyses benefited from larger AOI sizes (>.5° visual angle margins). We conclude with a guideline for the use of AOIs in behavioral eye‐tracking research. Copyright © 2015 John Wiley & Sons, Ltd.
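For illustration, here is a minimal Python sketch of margin-based AOI assignment in the spirit of the trade-off described above; the names, the tie-breaking rule, and the pixels-per-degree conversion are assumptions, not the authors' procedure.

```python
# Hypothetical sketch of margin-based AOI assignment; names and the
# pixels-per-degree conversion are assumptions, not the paper's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AOI:
    name: str
    x: float   # left edge in pixels
    y: float   # top edge in pixels
    w: float
    h: float

def assign_fixation(fx: float, fy: float, aois: list[AOI],
                    margin_deg: float, px_per_deg: float) -> Optional[str]:
    """Return the AOI containing fixation (fx, fy), or None.

    Each AOI rectangle is grown on every side by `margin_deg` of visual
    angle (0 deg reproduces the tight AOI; >0.5 deg the enlarged one).
    """
    m = margin_deg * px_per_deg
    hits = [a for a in aois
            if a.x - m <= fx <= a.x + a.w + m
            and a.y - m <= fy <= a.y + a.h + m]
    if not hits:
        return None
    # With large margins AOIs may overlap; keep the AOI whose centre is closest.
    return min(hits, key=lambda a: ((a.x + a.w / 2 - fx) ** 2 +
                                    (a.y + a.h / 2 - fy) ** 2)).name
```

Shrinking `margin_deg` toward 0° trades false positives for misses, which is exactly the signal detection problem the abstract describes.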

2.
Event detection is a challenging stage in eye movement data analysis. A major drawback of current event detection methods is that parameters have to be adjusted based on eye movement data quality. Here we show that a fully automated classification of raw gaze samples as belonging to fixations, saccades, or other oculomotor events can be achieved using a machine-learning approach. Any already manually or algorithmically detected events can be used to train a classifier to produce similar classification of other data without the need for a user to set parameters. In this study, we explore the application of the random forest machine-learning technique for the detection of fixations, saccades, and post-saccadic oscillations (PSOs). In an effort to show the practical utility of the proposed method for applications that employ eye movement classification algorithms, we provide an example where the method is employed in an eye movement-driven biometric application. We conclude that machine-learning techniques lead to superior detection compared to current state-of-the-art event detection algorithms and can reach the performance of manual coding.
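As a hedged sketch of how such sample-level classification can be set up with a random forest (the feature set, labels, and sampling rate here are illustrative assumptions, not the study's implementation):

```python
# Minimal sketch of sample-level event classification with a random forest,
# in the spirit of the approach described above; the features and label
# coding (0 = fixation, 1 = saccade, 2 = PSO) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sample_features(x, y, fs):
    """Per-sample speed and acceleration from gaze coordinates in degrees."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    speed = np.hypot(vx, vy)                 # deg/s
    accel = np.gradient(speed) * fs          # deg/s^2
    return np.column_stack([speed, accel])

# Train on recordings that were already hand- or algorithm-coded,
# then predict labels for new, uncoded data (hypothetical variable names):
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(sample_features(x_coded, y_coded, fs=1000), coded_labels)
# predicted = clf.predict(sample_features(x_new, y_new, fs=1000))
```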

3.
彭晓玲  黄丹 《心理科学》2018,(2):498-503
This study investigated whether the emergence of the visual search advantage in children with autism spectrum disorder (ASD) is affected by task difficulty, and what mechanism underlies the advantage. Behavioral and eye movement data were collected while children with ASD and typically developing (TD) children completed visual search tasks at different difficulty levels. The results showed that in the high-difficulty task the accuracy of the ASD group was significantly higher than that of the TD group; the ASD group made significantly fewer revisits to the target stimulus and spent significantly less fixation time on the target and peripheral areas of interest than the TD group, and preferred to fixate the right side of the stimuli. The findings indicate that the emergence of the visual search advantage in children with ASD is affected by task difficulty and may be related to their enhanced perceptual capacity for distractor stimuli.

4.
The allocation of overt visual attention is investigated in a multi-task, dynamic situation: driving. The Expectancy–Value model of attention allocation stipulates that visual exploration depends on the expectancy and the value of the task-related information available in each Area Of Interest (AOI). We consider the approach to an intersection as a multi-task situation where two subtasks are involved: vehicle control and interactions with other drivers. Each of these subtasks is associated with some specific visual information present in the associated AOIs: the driver’s lane and the intersecting road at the intersection. An experiment was conducted in a driving simulator, coupled with a head-mounted eye-tracker. The Expectancy of the intersecting road’s AOI was manipulated through traffic density, and its Value through the priority rule at the intersection (stop, yield, or priority). The distribution of visual attention and the dynamics of visual exploration were analyzed for 20 participants, taking into account the dwell time in the AOIs associated with the driving subtasks, and the gaze transitions between the AOIs. The results suggest that visual attention to intersecting roads varied with the priority rule, and impacted the visual attention associated with the vehicle control subtask. In addition, a quantitative model was used to improve the understanding of the Expectancy and Value factors. The comparison of the data with the model’s predictions makes it possible to quantify the observed differences between the experimental factors. Finally, the results associated with the traffic density are discussed in relation to the nature of the relevant information while approaching the intersection.
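A toy illustration of the Expectancy-Value weighting idea follows; the multiplicative combination, the coefficients, and the AOI names are invented assumptions, not the quantitative model fitted in the study.

```python
# Toy Expectancy-Value weighting of AOIs: predicted dwell share is taken as
# proportional to expectancy * value and then normalized. All numbers and
# AOI names below are made-up assumptions for illustration only.
def predicted_dwell_share(aois: dict[str, tuple[float, float]]) -> dict[str, float]:
    """aois maps AOI name -> (expectancy, value); returns normalized dwell shares."""
    weights = {name: exp * val for name, (exp, val) in aois.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Example: a busy intersecting road (high expectancy) under a stop rule (high value).
print(predicted_dwell_share({"own_lane": (0.6, 0.5),
                             "intersecting_road": (0.8, 0.9)}))
```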

5.
Today, capturing the behavior of a human eye is considered a standard method for measuring the information-gathering process and thereby gaining insights into cognitive processes. Due to the dynamic character of most task environments, there is still no structured, automated approach for analyzing eye movement in combination with moving objects. In this article, we present a guideline for advanced gaze analysis, called IGDAI (Integration Guideline for Dynamic Areas of Interest). The application of IGDAI supports the definition of dynamic areas of interest and simplifies their combination with eye movement data. The first step of IGDAI defines the basic requirements for the experimental setup, including the integration of an eye tracker. The second step covers how to store task-environment information for dynamic AOI analysis. Implementation examples in XML are presented that fulfill the requirements for most dynamic task environments. The last step includes algorithms to combine the captured eye movement and the dynamic areas of interest. A verification study was conducted, presenting an air traffic controller environment to participants. The participants had to distinguish between different types of dynamic objects. The results show that in comparison to static areas of interest, IGDAI allows a faster and more detailed view of the distribution of eye movements.
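A rough Python sketch of the core idea of combining gaze with dynamic AOIs, i.e., storing object positions over time and hit-testing each gaze sample, is shown below; the data layout and class names are assumptions rather than IGDAI's XML schema or algorithms.

```python
# Hedged sketch: dynamic AOIs stored as timestamped keyframes, then each gaze
# sample is hit-tested against the object position valid at that time.
import bisect

class DynamicAOI:
    def __init__(self, name, keyframes):
        # keyframes: sorted list of (t, x, y, w, h) describing the moving object
        self.name = name
        self.keyframes = keyframes
        self.times = [k[0] for k in keyframes]

    def contains(self, t, gx, gy):
        i = bisect.bisect_right(self.times, t) - 1
        if i < 0:
            return False
        _, x, y, w, h = self.keyframes[i]   # nearest-earlier keyframe (no interpolation)
        return x <= gx <= x + w and y <= gy <= y + h

def map_gaze_to_aois(gaze, aois):
    """gaze: iterable of (t, gx, gy); returns a list of AOI names (or None)."""
    out = []
    for t, gx, gy in gaze:
        hit = next((a.name for a in aois if a.contains(t, gx, gy)), None)
        out.append(hit)
    return out
```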

6.
To investigate how the playback speed of instructional micro-videos affects learning outcomes and learning satisfaction, Experiment 1 used behavioral tests and eye tracking to examine the learning outcomes and visual attention of 62 undergraduates under normal-speed, 1.5x, and 2x playback. The results showed that as playback speed increased, (1) learning outcomes and learning satisfaction decreased, and (2) fixation time on the picture area and the number of transitions between picture and text decreased. Accelerated playback impaired learning, possibly because total learning time differed across conditions. Considering both practical settings and theoretical concerns, Experiment 2 equated video learning time across conditions and found that (1) compared with normal speed, 1.5x playback did not impair learning, and learning outcomes at 2x were higher than at 1.5x; (2) learning satisfaction and eye movement measures did not differ significantly across speeds. The study offers a new perspective for research on short-video playback speed and provides a reference for the design of instructional micro-videos.

7.
孙琪  任衍具 《心理科学》2014,37(2):265-271
Using object search in real-scene images as the experimental task, the authors manipulated scene context and target template and used eye tracking to divide the search process into an initiation stage, a scanning stage, and a verification stage, in order to examine how scene context and target template affect visual search. The results showed that scene context and target template act in different ways and at different time points: the two factors interactively affected search accuracy and response time; only scene context affected the duration of the initiation stage, whereas the two factors then interactively affected the durations of the scanning and verification stages as well as the main eye movement measures. On this basis the authors propose an interaction model of scene context and target template in visual search.

8.
Recording eye movement data with high quality is often a prerequisite for producing valid and replicable results and for drawing well-founded conclusions about the oculomotor system. Today, many aspects of data quality are often informally discussed among researchers but are very seldom measured, quantified, and reported. Here we systematically investigated how the calibration method, aspects of participants’ eye physiologies, the influences of recording time and gaze direction, and the experience of operators affect the quality of data recorded with a common tower-mounted, video-based eyetracker. We quantified accuracy, precision, and the amount of valid data, and found an increase in data quality when the participant indicated that he or she was looking at a calibration target, as compared to leaving this decision to the operator or the eyetracker software. Moreover, our results provide statistical evidence of how factors such as glasses, contact lenses, eye color, eyelashes, and mascara influence data quality. This method and the results provide eye movement researchers with an understanding of what is required to record high-quality data, as well as providing manufacturers with the knowledge to build better eyetrackers.
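The two data-quality measures discussed above are commonly computed roughly as follows; this sketch uses the conventional definitions (accuracy as mean offset from a known target, precision as RMS of sample-to-sample distances) and is an assumption about, not a copy of, the authors' exact computation.

```python
# Conventional accuracy and precision measures for gaze data, as a sketch.
import numpy as np

def accuracy_deg(gaze_xy: np.ndarray, target_xy: np.ndarray) -> float:
    """Mean angular offset (deg) between recorded gaze and the known target,
    for arrays of shape (n, 2) in degrees of visual angle."""
    return float(np.mean(np.linalg.norm(gaze_xy - target_xy, axis=1)))

def precision_rms_deg(gaze_xy: np.ndarray) -> float:
    """RMS of inter-sample distances (deg) during a steady fixation."""
    d = np.diff(gaze_xy, axis=0)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```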

9.
康廷虎  张会 《心理科学》2020,(6):1312-1318
The eye movement paradigm is one of the major methods in scene perception research: by recording eye movement changes in real time during scene perception, it can faithfully reflect the psychological mechanisms underlying scene information processing. However, mental activity is extremely complex, and the corresponding eye movement measures are accordingly diverse and complex. Eye movement measures can be analyzed along different dimensions, such as global versus local and temporal versus spatial. This article reviews existing classifications of eye movement measures and attempts to classify the measures used in scene perception from the perspective of fixations and saccades. On this basis, the corresponding measures are analyzed and introduced in terms of their defining criteria, psychological meaning, and research applications. Finally, potential problems in the analysis and application of eye movement measures and possible directions for future research are discussed.

10.
In recent years, researchers have exploited the high temporal resolution of eye tracking to characterize the eye movement patterns of different age groups during analogical reasoning and to infer the strategies they use. Eye movement studies of analogical reasoning have identified three typical strategies: the item-priority strategy, the structure-matching strategy, and the semantic-constraint strategy. Adults more often show the item-priority strategy, whereas children more often show the semantic-constraint strategy. Future research could refine the eye movement measures used for analogical reasoning, especially methods for computing global scanpaths, and should focus on the analogical reasoning eye movement patterns of special populations and on the interaction between analogical reasoning strategies and other cognitive abilities.

11.
Given that attention precedes an eye movement to a target, it becomes possible to use fixation sequences to probe the spatiotemporal dynamics of search. Applying this method to a realistic search task, we found eye movements directed to the geometric centers of progressively smaller groups of objects rather than accurate fixations to individual objects in a display. Such a binary search strategy is consistent with zoom-lens models positing an initially broad distribution of search, followed by a narrowing of this search region until only the target is selected. We also interpret this oculomotor averaging behavior as evidence for an initially parallel search analysis that becomes increasingly serial as the search process converges on the target.

12.
ProtoMatch is a software tool for integrating and analyzing fixed-location and movement eye gaze and cursor data. It provides a comprehensive collection of protocol analysis tools that support sequential data analyses for eye fixations and scanpaths as well as for cursor “fixations” (dwells at one location) and “cursorpaths” (movements between locations). ProtoMatch is modularized software that integrates both eye gaze and cursor protocols into a unified stream of data and provides an assortment of filters and analyses. ProtoMatch subsumes basic analyses (i.e., fixation duration, number of fixations, etc.) and introduces a method of objectively computing the similarity between scanpaths or cursorpaths using sequence alignment. The combination of filters, basic analyses, and sequence alignment in ProtoMatch provides researchers with a versatile system for performing both confirmatory and exploratory sequential data analyses (Sanderson & Fisher, 1994).
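A minimal sketch of scanpath comparison by sequence alignment over AOI labels, in the spirit of what ProtoMatch offers; this normalized edit-distance similarity is an illustrative stand-in, not ProtoMatch's actual alignment or scoring scheme.

```python
# Normalized similarity between two scanpaths encoded as strings of AOI labels,
# computed from the Levenshtein edit distance (a simple form of alignment).
def scanpath_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1] based on Levenshtein distance."""
    n, m = len(a), len(b)
    dist = list(range(m + 1))
    for i in range(1, n + 1):
        prev, dist[0] = dist[0], i
        for j in range(1, m + 1):
            cur = min(dist[j] + 1,                      # deletion
                      dist[j - 1] + 1,                  # insertion
                      prev + (a[i - 1] != b[j - 1]))    # substitution
            prev, dist[j] = dist[j], cur
    return 1.0 - dist[m] / max(n, m, 1)

print(scanpath_similarity("ABCCD", "ABCD"))   # two AOI-labelled scanpaths -> 0.8
```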

13.
Characteristics of visual tracking of motion in different directions   (Total citations: 1; self-citations: 0; citations by others: 1)
This study examined the characteristics of smooth pursuit eye movements in four directions (leftward, rightward, upward, and downward) and applied spectral analysis to the eye movement parameters of pursuit in each direction. The results showed that (1) differences between horizontal and vertical tracking were widespread, appearing in almost all eye movement parameters; (2) there were also differences between leftward and rightward tracking and between upward and downward tracking, mainly in the distributional structure of the data; and (3) saccade distance was a sensitive index of visual tracking. In addition, the directional differences were not consistent across eye movement parameters. This reflects the complexity of pursuit eye movements: different types of eye movements are interrelated, and this relationship awaits further study.
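For illustration, a spectral analysis of a pursuit trace of the kind mentioned above can be sketched as follows; the sampling rate and variable names are placeholder assumptions.

```python
# Amplitude spectrum of a 1-D eye position trace via FFT, as a rough sketch
# of the kind of spectral analysis applied to pursuit eye movement parameters.
import numpy as np

def position_spectrum(eye_pos: np.ndarray, fs: float):
    """Return (frequencies in Hz, amplitude spectrum) of a 1-D eye position trace."""
    x = eye_pos - np.mean(eye_pos)               # remove the DC offset
    amp = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, amp
```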

14.
This article reports a calibration procedure that enables researchers to track movements of the eye while allowing relatively unrestricted head and/or body movement. The eye-head calibration algorithm calculates fixation point based on eye-position data acquired by a head-mounted eyetracker and corresponding head-position data acquired by a 3-D motion-tracking system. In a single experiment, we show that this procedure provides robust eye-position estimates while allowing free head movement. Although several companies offer ready-made systems for this purpose, there is no literature available that makes it possible for researchers to explore the details of the calibration procedures used by these systems. By making such details available, we hope to facilitate the development of cost-effective, nonproprietary eyetracking solutions.
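A geometric sketch of the kind of eye-head combination such a calibration supports is given below: rotate the eye-in-head gaze direction by the tracked head pose and intersect the resulting ray with the screen plane. The coordinate conventions here are assumptions, not the article's algorithm.

```python
# Combine head pose with an eye-in-head gaze direction and intersect the
# resulting gaze ray with a screen plane assumed to lie at z = 0.
import numpy as np

def fixation_point(head_pos, head_rot, gaze_dir_head):
    """head_pos: (3,) head position; head_rot: (3, 3) rotation matrix from the
    motion tracker; gaze_dir_head: (3,) unit gaze direction in head coordinates.
    Returns the (x, y) intersection with the plane z = 0, or None."""
    d = head_rot @ np.asarray(gaze_dir_head, dtype=float)   # gaze in world coordinates
    p = np.asarray(head_pos, dtype=float)
    if abs(d[2]) < 1e-9:
        return None                                          # ray parallel to the screen
    t = -p[2] / d[2]
    if t < 0:
        return None                                          # screen is behind the eye
    hit = p + t * d
    return float(hit[0]), float(hit[1])
```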

15.
Eye tracking has become a popular research tool in infant and toddler research. How to choose and use an eye tracker appropriately for data collection and analysis is an important question for infant eye movement researchers. Starting from the workflow of eye tracker use, this article reviews and analyzes four issues involved in infant eye tracking research: (1) choosing the right instrument; (2) calibrating properly; (3) improving data quality; and (4) analyzing and mining the data effectively. Practical recommendations are provided for each of these issues.

16.
17.
18.
To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between primary (driving) and secondary (operating an in-vehicle device) tasks. In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver focuses on the various tasks he/she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver’s scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject’s scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied completely on the manual identification of the focus of visual attention from a point of regard superimposed on a video of a recorded scene, failing to utilize information regarding movement structure in the raw recorded eye positions. Although effective, these methods are too time consuming to apply to the large data sets required to identify subtle differences between drivers, road conditions, and levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in the processing of eye movement data by proposing a methodology for eye movement analysis that extends automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.

19.
Our eye movements are driven by a continuous trade-off between the need for detailed examination of objects of interest and the necessity to keep an overview of our surroundings. In consequence, behavioral patterns that are characteristic of our actions and their planning are typically manifested in the way we move our eyes to interact with our environment. Identifying such patterns from individual eye movement measurements is, however, highly challenging. In this work, we tackle the challenge of quantifying the influence of experimental factors on eye movement sequences. We introduce an algorithm for extracting sequence-sensitive features from eye movements and for the classification of eye movements based on the frequencies of small subsequences. Our approach is evaluated against the state of the art on a novel and very rich collection of eye movement data derived from four experimental settings, from static viewing tasks to highly dynamic outdoor settings. Our results show that the proposed method is able to classify eye movement sequences over a variety of experimental designs. The choice of parameters is discussed in detail with special focus on highlighting different aspects of general scanpath shape. Algorithms and evaluation data are available at: http://www.ti.uni-tuebingen.de/scanpathcomparison.html.
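A minimal sketch of the subsequence-frequency idea (n-gram counts over AOI-labelled scanpaths used as classification features); the vectorization details are assumptions, not the authors' implementation, which is available at the URL above.

```python
# Turn scanpaths into relative frequencies of short subsequences (n-grams
# over AOI labels), which can then feed any standard classifier.
from collections import Counter

def subsequence_counts(scanpath: str, n: int = 2) -> Counter:
    """Count the n-grams of AOI labels in one scanpath, e.g. 'ABAC' -> AB, BA, AC."""
    return Counter(scanpath[i:i + n] for i in range(len(scanpath) - n + 1))

def feature_vector(scanpath: str, vocabulary: list[str], n: int = 2) -> list[float]:
    """Relative frequency of each vocabulary n-gram in the scanpath."""
    counts = subsequence_counts(scanpath, n)
    total = max(sum(counts.values()), 1)
    return [counts[g] / total for g in vocabulary]

vocab = ["AB", "BA", "AC", "CA"]
print(feature_vector("ABACAB", vocab))   # -> [0.4, 0.2, 0.2, 0.2]
```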

20.
Ternary eye movement classification, which separates fixations, saccades, and smooth pursuit from the raw eye positional data, is extremely challenging. This article develops new and modifies existing eye-tracking algorithms for the purpose of conducting meaningful ternary classification. To this end, a set of qualitative and quantitative behavior scores is introduced to facilitate the assessment of classification performance and to provide means for automated threshold selection. Experimental evaluation of the proposed methods is conducted using eye movement records obtained from 11 subjects at 1000 Hz in response to a step-ramp stimulus eliciting fixations, saccades, and smooth pursuits. Results indicate that a simple hybrid method that incorporates velocity and dispersion thresholding yields robust classification performance. It is concluded that behavior scores are able to aid automated threshold selection for the algorithms capable of successful classification.
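A hedged sketch of a simple hybrid velocity-plus-dispersion rule of the kind evaluated above; the thresholds, window length, and the fixation/pursuit split are illustrative assumptions only.

```python
# Hybrid rule: velocity thresholding separates saccades, and a dispersion
# criterion then splits the remaining slow samples into fixations and pursuit.
import numpy as np

def ternary_classify(x, y, fs, sac_vt=70.0, disp_t=0.5, win=0.1):
    """Label each sample 'saccade', 'fixation', or 'pursuit'.
    x, y: gaze in degrees; fs: sampling rate (Hz); sac_vt: velocity threshold
    (deg/s); disp_t: dispersion threshold (deg); win: dispersion window (s)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    speed = np.hypot(np.gradient(x) * fs, np.gradient(y) * fs)
    labels = np.where(speed > sac_vt, "saccade", "slow").astype(object)
    half = max(int(win * fs) // 2, 1)
    for i in np.where(labels == "slow")[0]:
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        dispersion = (x[lo:hi].max() - x[lo:hi].min()) + (y[lo:hi].max() - y[lo:hi].min())
        labels[i] = "fixation" if dispersion <= disp_t else "pursuit"
    return labels
```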
