Similar Literature
20 similar records retrieved.
1.
Analyzing gaze behavior with dynamic stimulus material is of growing importance in experimental psychology; however, there is still a lack of efficient analysis tools that are able to handle dynamically changing areas of interest. In this article, we present DynAOI, an open-source tool that allows for the definition of dynamic areas of interest. It works automatically with animations that are based on virtual three-dimensional models. When one is working with videos of real-world scenes, a three-dimensional model of the relevant content needs to be created first. The recorded eye-movement data are matched with the static and dynamic objects in the model underlying the video content, thus creating static and dynamic areas of interest. A validation study asking participants to track particular objects demonstrated that DynAOI is an efficient tool for handling dynamic areas of interest.
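The core idea behind dynamic areas of interest can be sketched in a few lines: each gaze sample is tested against the screen region an object occupies at that moment. The Python sketch below only illustrates that idea and is not DynAOI's actual implementation (which works from virtual three-dimensional models); the `Box` class, the sample layout, and the moving-object track are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned screen rectangle in pixels (hypothetical AOI shape)."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

def dynamic_aoi_hits(gaze_samples, aoi_tracks):
    """Assign each gaze sample to the dynamic AOI it falls into.

    gaze_samples: list of (timestamp, x, y) tuples.
    aoi_tracks: dict mapping AOI name -> function(timestamp) -> Box or None,
                i.e. where that object is on screen at that moment.
    Returns a list of (timestamp, aoi_name or None).
    """
    hits = []
    for t, gx, gy in gaze_samples:
        label = None
        for name, box_at in aoi_tracks.items():
            box = box_at(t)  # the AOI's position at this point in time
            if box is not None and box.contains(gx, gy):
                label = name
                break
        hits.append((t, label))
    return hits

# Hypothetical usage: one object drifting rightwards across the screen.
moving_target = lambda t: Box(x=100 + 50 * t, y=300, w=80, h=80)
samples = [(0.0, 120, 330), (1.0, 160, 330), (2.0, 500, 500)]
print(dynamic_aoi_hits(samples, {"target": moving_target}))
# -> [(0.0, 'target'), (1.0, 'target'), (2.0, None)]
```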

2.
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.

3.
Although eye tracking has been used extensively to assess cognitions for static stimuli, recent research suggests that the link between gaze and cognition may be more tenuous for dynamic stimuli such as videos. Part of the difficulty in convincingly linking gaze with cognition is that in dynamic stimuli, gaze position is strongly influenced by exogenous cues such as object motion. However, tests of the gaze-cognition link in dynamic stimuli have been done on only a limited range of stimuli often characterized by highly organized motion. Also, analyses of cognitive contrasts between participants have mostly been limited to categorical contrasts among small numbers of participants, which may have limited the power to observe more subtle influences. We therefore tested for cognitive influences on gaze for screen-captured instructional videos, the contents of which participants were tested on. Between-participant scanpath similarity predicted between-participant similarity in responses on test questions, but with imperfect consistency across videos. We also observed that basic gaze parameters and measures of attention to centers of interest only inconsistently predicted learning, and that correlations between gaze and centers of interest defined by other-participant gaze and cursor movement did not predict learning. It therefore appears that the search for eye movement indices of cognition during dynamic naturalistic stimuli may be fruitful, but we also agree that the tyranny of dynamic stimuli is real, and that links between eye movements and cognition are highly dependent on task and stimulus properties.
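One common way to quantify between-participant scanpath similarity (not necessarily the measure used in this study) is to code each fixation by the area of interest it lands in and compare the resulting label strings with an edit distance. The sketch below illustrates that approach; the AOI-coded strings are made up for the example.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two AOI-coded scanpath strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(a: str, b: str) -> float:
    """Similarity in [0, 1]; 1 means identical fixation sequences."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Hypothetical scanpaths: each letter is the AOI of one fixation.
print(scanpath_similarity("AABCD", "ABBCD"))  # -> 0.8
```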

4.
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.

5.
The authors examined the ability of younger and older adults to detect changes in dynamic displays. Older and younger adults viewed displays containing numerous moving objects and were asked to respond when a new object was added to the display. Accuracy, response times, and eye movements were recorded. For both younger and older participants, the number of eye movements accounted for a large proportion of variance in transient detection performance. Participants who actively searched for the change performed significantly worse than did participants who employed a passive or covert scan strategy, indicating that passive scanning may be a beneficial strategy in certain dynamic environments. The cost of an active scan strategy was especially high for older participants in terms of both accuracy and response times. However, older adults who employed a passive or covert scan strategy showed greater improvement, relative to older active searchers, than did younger adults. These results highlight the importance of individual differences in scanning strategy in real-world dynamic, cluttered environments.

6.
We study how processing states alternate during information search tasks. Inference is carried out with a discriminative hidden Markov model (dHMM) learned from eye movement data, measured in an experiment consisting of three task types: (i) simple word search, (ii) finding a sentence that answers a question, and (iii) choosing a subjectively most interesting title from a list of ten titles. The results show that eye movements contain the information necessary for determining the task type. After training, the dHMM predicted the task for test data with 60.2% accuracy (pure chance 33.3%). Word search and subjective interest conditions were easier to predict than the question–answer condition. The dHMM that best fitted our data segmented each task type into three hidden states. The three processing states were identified by comparing the parameters of the dHMM states to the eye movement literature. A scanning type of eye behavior was observed at the beginning of the tasks. Next, participants tended to shift to states reflecting reading-type eye movements, and finally they ended the tasks in states that we termed decision states.
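The study fits a discriminative HMM; as a rough generative stand-in for the same idea, one can train one Gaussian HMM with three hidden states per task type on per-fixation feature vectors (e.g., fixation duration and saccade amplitude) and classify a new trial by which model assigns it the highest likelihood. The sketch below uses the hmmlearn package and synthetic data; it illustrates the general approach and is not the authors' model or code.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

def train_task_models(trials_by_task, n_states=3):
    """Fit one HMM per task type.

    trials_by_task: dict mapping task label -> list of 2-D arrays, each array
                    holding one trial's per-fixation features
                    (e.g. fixation duration, saccade amplitude).
    """
    models = {}
    for task, trials in trials_by_task.items():
        X = np.vstack(trials)               # concatenate trials into one matrix
        lengths = [len(t) for t in trials]  # trial boundaries for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100, random_state=0)
        m.fit(X, lengths)
        models[task] = m
    return models

def classify_trial(models, trial):
    """Pick the task whose HMM assigns the trial the highest log-likelihood."""
    return max(models, key=lambda task: models[task].score(trial))

# Hypothetical data: 2 features per fixation, a few toy trials per task.
rng = np.random.default_rng(0)
data = {
    "word_search": [rng.normal(0.2, 0.05, (30, 2)) for _ in range(5)],
    "reading":     [rng.normal(0.5, 0.05, (30, 2)) for _ in range(5)],
}
models = train_task_models(data)
print(classify_trial(models, rng.normal(0.5, 0.05, (30, 2))))  # likely "reading"
```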

7.
We investigated eye movements during long-term pictorial recall. Participants performed a perceptual encoding task in which they memorized 16 stimuli that were displayed in different areas on a computer screen. After the encoding phase the participants had to recall and visualize the images and answer specific questions about visual details of the stimuli. One week later the participants repeated the pictorial recall task. Interestingly, not only in the immediate recall task but also 1 week later, participants looked longer at the areas where the stimuli had been encoded. The major contribution of this study is to show that memory for pictorial objects, including their spatial location, is stable and robust over time.

8.
This article presents GazeAlyze, a software package written as a MATLAB (MathWorks Inc., Natick, MA) toolbox for the analysis of eye movement data. GazeAlyze was developed for the batch processing of multiple data files and was designed as a framework with extendable modules. GazeAlyze encompasses the main functions of the entire processing queue of eye movement data recorded with static visual stimuli. This includes detecting and filtering artifacts, detecting events, generating regions of interest, generating spreadsheets for further statistical analysis, and providing methods for the visualization of results, such as path plots and fixation heat maps. All functions can be controlled through graphical user interfaces. GazeAlyze includes functions for correcting eye movement data for the displacement of the head relative to the camera after calibration in fixed head mounts. The preprocessing and event detection methods in GazeAlyze are based on the software ILAB 3.6.8 (Gitelman, Behav Res Methods Instrum Comput, 34(4), 605-612, 2002). GazeAlyze is distributed free of charge under the terms of the GNU Public License and allows code modifications to be made so that the program's performance can be adjusted according to a user's scientific requirements.
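Event detection of the kind such toolboxes perform is often done with a simple velocity threshold (I-VT). The Python sketch below illustrates that general approach; it is not a port of GazeAlyze or ILAB, and the sampling rate, velocity threshold, and pixels-per-degree values are assumptions for the example.

```python
import numpy as np

def ivt_fixations(x, y, sample_rate_hz=500.0, velocity_threshold=30.0,
                  pixels_per_degree=35.0, min_duration_s=0.060):
    """Classify raw gaze samples into fixations with a velocity threshold (I-VT).

    x, y: 1-D arrays of gaze coordinates in pixels.
    Returns a list of (start_index, end_index) pairs, one per fixation.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Sample-to-sample velocity in degrees of visual angle per second.
    dist_px = np.hypot(np.diff(x), np.diff(y))
    velocity = (dist_px / pixels_per_degree) * sample_rate_hz
    is_fix = np.concatenate([[True], velocity < velocity_threshold])

    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if (i - start) / sample_rate_hz >= min_duration_s:
                fixations.append((start, i - 1))
            start = None
    if start is not None and (len(x) - start) / sample_rate_hz >= min_duration_s:
        fixations.append((start, len(x) - 1))
    return fixations

# Hypothetical trace: a stable fixation, a fast saccade, another fixation.
x = np.concatenate([np.full(100, 200.0), np.linspace(200, 600, 10), np.full(100, 600.0)])
y = np.full_like(x, 300.0)
print(ivt_fixations(x, y))  # -> [(0, 100), (110, 209)]
```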

9.
People are able to judge the current position of occluded moving objects. This operation is known as motion extrapolation. It has previously been suggested that motion extrapolation is independent of the oculomotor system. Here we revisited this question by measuring eye position while participants completed two types of motion extrapolation task. In one task, a moving visual target travelled rightwards, disappeared, then reappeared further along its trajectory. Participants discriminated correct reappearance times from incorrect (too early or too late) with a two-alternative forced-choice button press. In the second task, the target travelled rightwards behind a visible, rectangular occluder, and participants pressed a button at the time when they judged it should reappear. In both tasks, performance was significantly different under fixation as compared to free eye movement conditions. When eye movements were permitted, eye movements during occlusion were related to participants' judgements. Finally, even when participants were required to fixate, small changes in eye position around fixation (<2°) were influenced by occluded target motion. These results all indicate that overlapping systems control eye movements and judgements on motion extrapolation tasks. This has implications for understanding the mechanism underlying motion extrapolation.

10.
Grant and Spivey (2003) proposed that eye movement trajectories can influence spatial reasoning by way of an implicit eye-movement-to-cognition link. We tested this proposal and investigated the nature of this link by continuously monitoring eye movements and asking participants to perform a problem-solving task under free-viewing conditions while occasionally guiding their eye movements (via an unrelated tracking task), either in a pattern related to the problem’s solution or in unrelated patterns. Although participants reported that they were not aware of any relationship between the tracking task and the problem, those who moved their eyes in a pattern related to the problem’s solution were the most successful problem solvers. Our results support the existence of an implicit compatibility between spatial cognition and the eye movement patterns that people use to examine a scene.

11.
People are able to judge the current position of occluded moving objects. This operation is known as motion extrapolation. It has previously been suggested that motion extrapolation is independent of the oculomotor system. Here we revisited this question by measuring eye position while participants completed two types of motion extrapolation task. In one task, a moving visual target travelled rightwards, disappeared, then reappeared further along its trajectory. Participants discriminated correct reappearance times from incorrect (too early or too late) with a two-alternative forced-choice button press. In the second task, the target travelled rightwards behind a visible, rectangular occluder, and participants pressed a button at the time when they judged it should reappear. In both tasks, performance was significantly different under fixation as compared to free eye movement conditions. When eye movements were permitted, eye movements during occlusion were related to participants' judgements. Finally, even when participants were required to fixate, small changes in eye position around fixation (<2°) were influenced by occluded target motion. These results all indicate that overlapping systems control eye movements and judgements on motion extrapolation tasks. This has implications for understanding the mechanism underlying motion extrapolation.

12.
陈丽君, 郑雪. 《心理学报》 (Acta Psychologica Sinica), 2014, 46(3): 367-384
Using an eye tracker, this study examined two types of problem-finding situations (hidden and contradictory) with 20 undergraduates of high problem-finding ability and 20 of low ability as participants, exploring their eye-movement characteristics for problem finding overall and within four areas of interest, and the relationship between those characteristics and the quantity and quality of the problems found. The results showed that: (1) Differences in problem finding between students of different ability levels, across situations and their areas of interest, were reflected in eye-movement measures. Regressions (look-backs) were a sensitive indicator of problem-finding ability; the positive correlation between the number of regressions and the quantity and quality of problems found, together with the high-ability group's advantage on this measure, indicates that relating and integrating information plays a positive role in problem finding. (2) In hidden-problem finding, mean fixation durations were longer, reflecting greater cognitive processing difficulty. In regions that provided important information, participants invested more effort, shown by increases in fixation duration, fixation count, and pupil diameter. (3) There was a correspondence between the regions fixated and the regions in which problems were found, showing an "eyes follow the mind" pattern. In both the initial and final stages of problem finding, participants searched for information across regions, reflecting the search for problem cues and the final checking and evaluation, respectively. High-ability participants showed shorter fixation times within each stable fixation phase; this flexibility in switching between information sources reflects an advantage in information processing. Dynamic scanpath analysis revealed characteristics that single static measures could not capture.

13.
We argue that task requirements can be the determinant in generating different results in studies on visual object recognition. We investigated priming for novel visual objects in three implicit memory tasks. A study-test design was employed in which participants first viewed line drawings of unfamiliar objects and later made different decisions about structural aspects of the objects. Priming for both symmetric and asymmetric possible objects was observed in a task requiring a judgment of structural possibility. However, when the task was changed to one requiring a judgment of structural symmetry, only symmetric possible objects showed priming. Finally, in a matching task in which participants made a same-different judgment, only symmetric possible objects exhibited priming. These results suggest that an understanding of object representation will be most fruitful if it is based on careful analyses of both the task demands and their interaction(s) with encoding and retrieval processes.

14.
孙琪, 任衍具. 《心理科学》 (Journal of Psychological Science), 2014, 37(2): 265-271
Using object search in real-world scene images as the experimental task, we manipulated scene context and target template and used eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, in order to examine how scene context and target template influence the visual search process. The results showed that scene context and target template operated in different ways and at different points in time: the two factors interactively influenced search accuracy and response time, only scene context affected the duration of the initiation phase, and the two then interactively influenced the durations of the scanning and verification phases as well as the main eye-movement measures. On this basis, the authors propose an interaction model of scene context and target template in visual search.
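The three-phase decomposition used in this line of work (initiation, scanning, verification) can be approximated from fixation data: initiation runs until the eyes first leave the starting fixation, scanning ends with the first fixation on the target region, and verification runs from there until the manual response. The sketch below segments a single trial under those assumptions; it is an illustrative reconstruction, not the authors' analysis code, and the trial data are made up.

```python
def segment_search_phases(fixations, target_aoi, response_time):
    """Split one search trial into initiation, scanning, and verification times.

    fixations: list of dicts with 'onset', 'offset' (seconds from trial start)
               and 'aoi' (name of the region fixated, or None).
    target_aoi: name of the region containing the target object.
    response_time: time of the manual response, in seconds from trial start.
    Returns a dict of phase durations (seconds).
    """
    if not fixations:
        return {"initiation": response_time, "scanning": 0.0, "verification": 0.0}

    # Initiation: from trial onset until the first (central) fixation ends.
    initiation_end = fixations[0]["offset"]

    # Scanning: until the target region is first fixated (or the response, if never).
    first_on_target = next((f["onset"] for f in fixations[1:]
                            if f["aoi"] == target_aoi), response_time)

    return {
        "initiation": initiation_end,
        "scanning": max(first_on_target - initiation_end, 0.0),
        "verification": max(response_time - first_on_target, 0.0),
    }

# Hypothetical trial: central fixation, one scene fixation, then the target.
trial = [
    {"onset": 0.00, "offset": 0.25, "aoi": "center"},
    {"onset": 0.30, "offset": 0.55, "aoi": "counter"},
    {"onset": 0.60, "offset": 0.90, "aoi": "target"},
]
print(segment_search_phases(trial, "target", response_time=1.40))
# -> {'initiation': 0.25, 'scanning': 0.35, 'verification': 0.8}
```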

15.
陈艾睿, 董波, 方颖, 于长宇, 张明. 《心理学报》 (Acta Psychologica Sinica), 2014, 46(9): 1281-1288
The experiments combined the continuous flash suppression paradigm with a cueing paradigm and manipulated how direct-gaze and averted-gaze faces were presented, in order to examine how dynamic versus static cue types affect the gaze-cueing effect. The results showed that under suprathreshold conditions both types of cues produced a gaze-cueing effect, with dynamic cues inducing a larger effect; under unconscious conditions, only dynamic gaze cues induced a gaze-cueing effect. This indicates that eye movement is a necessary condition for the subliminal gaze-cueing effect, that eye movement enhances the suprathreshold gaze-cueing effect, and that the static gaze-cueing effect depends on awareness. The results support the interactive model of social perception and theory of mind.

16.
Performance of unimanual and bimanual multiphased prehensile movements
By manipulating task action demands in 2 experiments, the author investigated whether the context-dependent effects seen in unimanual multiphase movements are also present in bimanual movements. Participants (N = 14) in Experiment 1 either placed or tossed objects into targets. The results indicated that the intention to perform a subsequent action with an object could influence the performance of an earlier movement in a sequence in both unimanual and bimanual tasks. Furthermore, assimilation effects were found when the subsequent tasks being performed by the 2 hands were incongruent. In Experiment 2, the author investigated in 12 participants whether planning in a multiphase movement includes some representation of the accuracy demands of the subsequent task. The accuracy demands of a subsequent task did not appear to influence initial movement planning. Instead, the present results support the notion that it is the action requirements of the subsequent movement that lead to context-dependent effects.

17.
This study examines eye movements made by a patient with action disorganization syndrome (ADS) as everyday tasks are performed. Relative to both normal participants and control patients, the ADS patient showed normal time-locking of eye movements to the subsequent use of objects. However, there were proportionately more unrelated fixations, and more fixations concerned with locating objects irrelevant to the immediate action, compared with control participants. The data suggest a dissociation between normal eye movement patterns for control of visually guided actions such as reaching and grasping, and abnormal eye movements between object-related fixations. The implications for understanding ADS are discussed.

18.
The aim of the current study was to investigate subtle characteristics of social perception and interpretation in high-functioning individuals with autism spectrum disorders (ASDs), and to study the relation between watching and interpreting. As a novelty, we used an approach that combined moment-by-moment eye tracking and verbal assessment. Sixteen young adults with ASD and 16 neurotypical control participants watched a video depicting a complex communication situation while their eye movements were tracked. The participants also completed a verbal task with questions related to the pragmatic content of the video. We compared verbal task scores and eye movements between groups, and assessed correlations between task performance and eye movements. Individuals with ASD had more difficulty than the controls in interpreting the video, and during two short moments there were significant group differences in eye movements. Additionally, we found significant correlations between verbal task scores and moment-level eye movement in the ASD group, but not among the controls. We concluded that participants with ASD had slight difficulties in understanding the pragmatic content of the video stimulus and attending to social cues, and that the connection between pragmatic understanding and eye movements was more pronounced for participants with ASD than for neurotypical participants.

19.
In this paper we briefly describe preliminary data from two experiments that we carried out to investigate the relationship between visual encoding and memory for objects and their locations within scenes. In these experiments, we recorded participants' eye movements as they viewed a photograph of a cubicle with 12 objects positioned pseudo-randomly on a desk and shelves. After viewing the photograph, participants were taken to the actual cubicle, where they undertook two memory tests. Participants were asked to identify the 12 target objects (from the photograph) presented amongst 12 distractors. They were then required to place each of the objects in the location that it occupied in the photograph. These tests assessed participants' memory for the identity of the objects and for their locations. In Experiment 1, we assessed the influence of the encoding period and the test delay on object identity and location memory. In Experiment 2 we manipulated scanning behaviour during encoding by "boxing" some of the objects in the photograph. We showed that using boxes to change eye movement behaviour during encoding directly affected the nature of memory for the scene. The results of these studies indicate a fundamental relationship between visual encoding and memory for objects and their locations. We explain our findings in terms of the Visual Memory Model (Hollingworth & Henderson, 2002).

20.
Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to background information more than American participants do. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating-characteristic (ROC) curves showed that both sensitivity and response bias changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the shift in response bias differed with culture. The eye movement patterns were also similar across cultural groups: both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that Americans and Chinese use the same strategies in scene perception and memory.
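Sensitivity and response bias of the kind summarized by ROC analyses are commonly reported as the signal detection measures d′ and criterion c, computed from hit and false-alarm rates. The sketch below shows the standard formulas with a simple correction for extreme rates; the counts are made up, and this is not the authors' analysis.

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and criterion c (response bias).

    Rates are nudged by half a count (denominator plus one) so the
    z-transform stays finite when a rate would be exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: objects tested in studied vs. new background contexts.
print(dprime_and_bias(hits=40, misses=10, false_alarms=12, correct_rejections=38))
print(dprime_and_bias(hits=33, misses=17, false_alarms=15, correct_rejections=35))
```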
