Similar Documents
20 similar documents found (search time: 31 ms)
1.
Perceptual tasks such as object matching, mammogram interpretation, mental rotation, and satellite imagery change detection often require the assignment of correspondences to fuse information across views. We apply techniques developed for machine translation to the gaze data recorded from a complex perceptual matching task modeled after fingerprint examinations. The gaze data provide temporal sequences that the machine translation algorithm uses to estimate the subjects' assumptions of corresponding regions. Our results show that experts and novices have similar surface behavior, such as the number of fixations made or the duration of fixations. However, the approach applied to data from experts is able to identify more corresponding areas between two prints. The fixations that are associated with clusters that map with high probability to corresponding locations on the other print are likely to have greater utility in a visual matching task. These techniques address a fundamental problem in eye tracking research with perceptual matching tasks: Given that the eyes always point somewhere, which fixations are the most informative and therefore are likely to be relevant for the comparison task?
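The translation-style alignment described above can be sketched with a toy IBM-Model-1-style EM over paired region sequences. This is only an illustration under assumed data structures, not the authors' implementation; the region labels ("A1", "B1", ...) and the `align_regions` helper are hypothetical.

```python
# Toy EM estimation of p(region_b | region_a) from paired gaze sequences,
# in the spirit of IBM Model 1 from machine translation.
from collections import defaultdict

def align_regions(pairs, iterations=10):
    """Estimate a translation table t(b | a) from paired region sequences."""
    regions_b = {b for _, seq_b in pairs for b in seq_b}
    uniform = 1.0 / len(regions_b)
    prob = defaultdict(lambda: uniform)   # translation probabilities t(b | a)
    for _ in range(iterations):
        counts = defaultdict(float)       # expected co-occurrence counts
        totals = defaultdict(float)
        for seq_a, seq_b in pairs:        # E-step: fractional alignments
            for b in seq_b:
                norm = sum(prob[(a, b)] for a in seq_a)
                for a in seq_a:
                    frac = prob[(a, b)] / norm
                    counts[(a, b)] += frac
                    totals[a] += frac
        for (a, b), c in counts.items():  # M-step: renormalize per source region
            prob[(a, b)] = c / totals[a]
    return prob

# Three hypothetical trials; each pairs the regions fixated on print A
# with the regions fixated on print B during the same comparison.
trials = [(["A1", "A2"], ["B1", "B2"]),
          (["A1", "A3"], ["B1", "B3"]),
          (["A2", "A3"], ["B2", "B3"])]
p = align_regions(trials)
# A1 co-occurs with B1 in every trial, so p[("A1", "B1")] grows large.
```

With real gaze data, the sequences would be cluster labels for fixated locations on each print, and the learned table would rank candidate corresponding regions, which is how high-probability correspondences could be identified.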

2.
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.

3.
Because of limited peripheral vision, many visual tasks depend on multiple eye fixations. Good performance in such tasks demonstrates that some memory must survive from one fixation to the next. One factor that must influence performance is the degree to which multiple eye fixations interfere with the critical memories. In the present study, the amount of interference was measured by comparing visual discriminations based on multiple fixations to visual discriminations based on a single fixation. The procedure resembled partial report, but used a discrimination measure. In the prototype study, two lines were presented, followed by a single line and a cue. The cue pointed toward one of the positions of the first two lines. Observers were required to judge if the single line in the second display was longer or shorter than the cued line of the first display. These judgments were used to estimate a length threshold. The critical manipulation was to instruct observers either to maintain fixation between the lines of the first display or to fixate each line in sequence. The results showed an advantage for multiple fixations despite the intervening eye movements. In fact, thresholds for the multiple-fixation condition were nearly as good as those in a control condition where the lines were foveally viewed without eye movements. Thus, eye movements had little or no interfering effect in this task. Additional studies generalized the procedure and the stimuli. In conclusion, information about a variety of size and shape attributes was remembered with essentially no interference across eye fixations.

5.
Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: Two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen's kappa, the human coders' classifications agreed almost perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen's kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
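The gap the study describes, near-perfect sample-based kappa alongside diverging fixation counts, is easiest to see once the measure is written down. Below is a minimal sketch of sample-based Cohen's kappa; the label names and data are hypothetical, not taken from the study.

```python
# Sample-based Cohen's kappa between two coders' per-sample event labels.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two equal-length label sequences, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of samples given identical labels.
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_chance = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_obs - p_chance) / (1.0 - p_chance)

# Hypothetical per-sample labels ("fix" = fixation, "sac" = saccade).
coder1 = ["fix", "fix", "fix", "sac", "fix", "fix", "sac", "fix"]
coder2 = ["fix", "fix", "sac", "sac", "fix", "fix", "fix", "fix"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.33
```

Because kappa is computed sample by sample, two coders can score very highly on long recordings and still disagree on exactly where one fixation ends and the next begins, which is the event-level discrepancy the proposed RTO and RTD measures target.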

6.
Deaf individuals rely on facial expressions for emotional, social, and linguistic cues. In order to test the hypothesis that specialized experience with faces can alter typically observed gaze patterns, twelve hearing adults and twelve deaf early users of American Sign Language judged the emotion and identity of expressive faces (including whole faces, and isolated top and bottom halves), while accuracy and fixations were recorded. Both groups recognized individuals more accurately from top than bottom halves, and emotional expressions from bottom than top halves. Hearing adults directed the majority of fixations to the top halves of faces in both tasks, but fixated the bottom half slightly more often when judging emotion than identity. In contrast, deaf adults often split fixations evenly between the top and bottom halves regardless of task demands. These results suggest that deaf adults have habitual fixation patterns that may maximize their ability to gather information from expressive faces.

7.
Recent research suggests that the different components of eye movements (fixations, saccades) are not strictly separate but are interdependent processes. This argument rests on observations that gaze-step sizes yield unimodal distributions and exhibit power-law scaling, indicative of interdependent processes coordinated across timescales. The studies that produced these findings, however, employed complex tasks (visual search, scene perception). Thus, the question is whether the observed interdependence is a fundamental property of eye movements or emerges in the interplay between cognitive processes and complex visual stimuli. In this study, we used a simple eye movement task where participants moved their eyes in a prescribed sequence at several different paces. We outlined diverging predictions for this task for independence versus interdependence of fixational and saccadic fluctuations and tested these predictions by assessing the spectral properties of eye movements. We found no clear peak in the power spectrum attributable exclusively to saccadic fluctuations. Furthermore, changing the pace of the eye movement sequence yielded a global shift in scaling relations evident in the power spectrum, not just a localized shift for saccadic fluctuations. These results support the conclusion that fixations and saccades are interdependent processes.

8.
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only taught participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

9.
Handwriting is a complex motor process that involves the coordination of both the upper limb and the visual system. The gaze behavior that occurs during handwriting has been little studied. This study investigated the eye movements of adults during writing and reading tasks. Eye and handwriting movements were recorded for six different words over three different tasks. The analyses compared reading and handwriting of the same words (a between-condition comparison) and also compared the two handwriting tasks. Compared to reading, participants produced more fixations during the handwriting tasks, and the average fixation durations were longer. When reading, fixations fell mostly around the center of the word, whereas fixations during writing appeared to be made for each letter in a written word, were located around the base of the letters, and flowed in a left-to-right direction. Between the two writing tasks, more fixations were made when words were written individually than within sentences, yet fixation durations did not differ. Correlating the number of fixations with kinematic variables revealed that horizontal size and road length were strongly correlated with the number of fixations made by participants.

10.
Performance on three different tasks was compared: naming, lexical decision, and reading (with eye fixation times on a target word measured). We examined the word frequency effect for a common set of words for each task and each subject. Naming and reading (particularly gaze duration) yielded similar frequency effects for the target words. The frequency effect found in lexical decision was greater than that found in naming and in eye fixation times. In all tasks, there was a correlation between the frequency effect and average response time. In general, the results suggest that both the naming and the lexical decision tasks yield data about word recognition processes that are consistent with effects found in eye fixations during silent reading.

11.
This study examines eye movements made by a patient with action disorganization syndrome (ADS) as everyday tasks are performed. Relative to both normal participants and control patients, the ADS patient showed normal time-locking of eye movements to the subsequent use of objects. However, there were proportionately more unrelated fixations, and more fixations concerned with locating objects irrelevant to the immediate action, compared with control participants. The data suggest a dissociation between normal eye movement patterns for control of visually guided actions such as reaching and grasping, and abnormal eye movements between object-related fixations. The implications for understanding ADS are discussed.

12.
Developmental Coordination Disorder (DCD) is characterized by substantial difficulties with motor coordination to the extent that it has a clear impact on the daily functioning of those who suffer from the disorder. Laboratory-based research indicated impaired oculomotor control in individuals with DCD. However, it is not clear how these oculomotor problems contribute to control and coordination in daily tasks. This study explored differences and similarities in gaze behaviour during reading and cup stacking between young adults with DCD and their matched typically developing counterparts (TD; aged 20–23 years). Gaze behaviour was recorded using eye-tracking, and hand movements were registered using a digital camera. Results of the reading tasks demonstrated similar behaviour between the groups, apart from a lower number of characters recorded per fixation in the DCD group. In cup stacking, the individuals with DCD were slower than their counterparts when three cups had to be displaced to a central target using the dominant hand. The gaze strategy of individuals with DCD involved systematic fixations on the cup or target prior to the hand movement to that cup or target, whereas these alternating saccades between cup and target were less obvious in the TD group. In the bimanual stacking task, where a pyramid of six cups had to be built on a central target using both hands, both groups mainly fixated the central target for the whole duration of the task, without distinct differences in gaze behaviour and duration of performance between individuals with and those without DCD. In conclusion, gaze behaviour of young adults with DCD shows differences from that of their typically developing counterparts that may be related to underlying oculomotor deficits in some but not all daily tasks.

13.
Results are presented from an experiment in which subjects' eye movements were recorded while they carried out two visual tasks with similar material. One task was chosen to require close visual scrutiny; the second was less visually demanding. The oculomotor behaviour in the two tasks differed in three ways. (1) When scrutinizing, there was a reduction in the area of visual space over which stimulation influences saccadic eye movements. (2) When moving their eyes to targets requiring scrutiny, subjects were more likely to make a corrective saccade. (3) The duration of fixations on targets requiring scrutiny was increased. The results are discussed in relation to current theories of visual attention and the control of saccadic eye movements.

14.
Recent behavioral and computational research on eye movement control during scene viewing has focused on where the eyes move. However, fixations also differ in their durations, and when the eyes move may be another important indicator of perceptual and cognitive activity. Here we used a scene onset delay paradigm to investigate the degree to which individual fixation durations are under direct moment-to-moment control of the viewer's current visual scene. During saccades just prior to critical fixations, the scene was removed from view so that when the eyes landed, no scene was present. Following a manipulated delay period, the scene was restored to view. We found that one population of fixations was under the direct control of the current scene, increasing in duration as delay increased. A second population of fixations was relatively constant across delay. The pattern of data did not change whether delay duration was random or blocked, suggesting that the effects were not under the strategic control of the viewer. The results support a mixed control model in which the durations of some fixations proceed regardless of scene presence, whereas others are under the direct moment-to-moment control of ongoing scene analysis.

15.
Tatler BW, Wade NJ. Perception, 2003, 32(2): 167-184
Investigations of the ways in which the eyes move came to prominence in the 19th century, but techniques for measuring them more precisely emerged in the 20th century. When scanning a scene or text the eyes engage in periods of relative stability (fixations) interspersed with ballistic rotations (saccades). The saccade-and-fixate strategy, associated with voluntary eye movements, was first uncovered in the context of involuntary eye movements following body rotation. This pattern of eye movements is now referred to as nystagmus, and involves periods of slow eye movements, during which objects are visible, and rapid returns, when they are not; it is based on a vestibular reflex which attempts to achieve image stabilisation. Post-rotational nystagmus was reported in the late 18th century (by Wells), with afterimages used as a means of retinal stabilisation to distinguish between movement of the eyes and of the environment. Nystagmus was linked to vestibular stimulation in the 19th century, and Mach, Breuer, and Crum Brown all described its fast and slow phases. Wells and Breuer proposed that there was no visual awareness during the ballistic phase (saccadic suppression). The saccade-and-fixate strategy highlighted by studies of nystagmus was shown to apply to tasks like reading by Dodge, who used more sophisticated photographic techniques to examine oculomotor kinematics. The relationship between eye movements and perception, following earlier intuitions by Wells and Breuer, was explored by Dodge, and has been of fundamental importance in the direction of vision research over the last century.

16.
Event detection is the conversion of raw eye-tracking data into events—such as fixations, saccades, glissades, blinks, and so forth—that are relevant for researchers. In eye-tracking studies, event detection algorithms can have a serious impact on higher level analyses, although most studies do not accurately report their settings. We developed a data-driven eyeblink detection algorithm (Identification-Artifact Correction [I-AC]) for 50-Hz eye-tracking protocols. I-AC works by first correcting blink-related artifacts within pupil diameter values and then estimating blink onset and offset. Artifact correction is achieved with data-driven thresholds, and more reliable pupil data are output. Blink parameters are defined according to previous studies on blink-related visual suppression. Blink detection performance was tested with experimental data by visually checking the actual correspondence between I-AC output and participants' eye images, recorded by the eyetracker simultaneously with gaze data. Results showed a 97% correct detection percentage.
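The general shape of pupil-based blink detection can be sketched as follows. This is not the published I-AC algorithm, only an illustration of the idea; it assumes lost pupil samples are recorded as 0.0 and uses a hard-coded margin where I-AC derives its thresholds from the data.

```python
# Minimal sketch of blink detection in a pupil-diameter trace: find runs
# of signal loss and widen each window to absorb the artifactual
# shrinking/recovery of the measured pupil around the loss.

def detect_blinks(pupil, margin=2):
    """Return (onset, offset) sample indices of blink episodes."""
    n = len(pupil)
    blinks, i = [], 0
    while i < n:
        if pupil[i] == 0.0:               # signal loss: candidate blink
            start = i
            while i < n and pupil[i] == 0.0:
                i += 1
            # widen by a fixed margin to cover edge artifacts
            blinks.append((max(0, start - margin), min(n, i + margin)))
        else:
            i += 1
    return blinks

# Hypothetical 50-Hz trace: pupil shrinks, signal drops out, then recovers.
trace = [4.1, 4.0, 3.1, 0.0, 0.0, 0.0, 2.9, 3.9, 4.0, 4.1]
print(detect_blinks(trace))  # → [(1, 8)]
```

A data-driven version would estimate both the loss criterion and the margin from the trace itself, and could interpolate pupil values across the detected window to output the more reliable pupil data the abstract mentions.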

17.
How does the pattern of eye fixation vary as an informative part of a word is encountered? If the processing of information lags behind the movement of the eyes, then we should expect no variation in the pattern; but if processing is immediate, then the movements of the reader's eyes should correspond to the distribution of information being inspected. An experiment is reported which examined the ways that the text ahead of the point of current fixation can be used to guide the eyes to future fixations, by monitoring fixations during a sentence comprehension task. The patterns of eye fixations upon words with uneven distributions of information (where, for example, words predictable from the sight of their first few letters but not from their last few letters are defined as containing informative beginnings) were observed, and it was found that more and longer fixations were produced when subjects looked at the informative parts of words, particularly at the informative endings of words. The results support the suggestion that eye movements are under the moment-to-moment control of cognitive mechanisms.

18.
Prosodic attributes of speech, such as intonation, influence our ability to recognize, comprehend, and produce affect, as well as semantic and pragmatic meaning, in vocal utterances. The present study examines associations between auditory perceptual abilities and the perception of prosody, both pragmatic and affective. This association has not been previously examined. Ninety-seven participants (49 female and 48 male participants) with normal hearing thresholds took part in two experiments, involving both prosody recognition and psychoacoustic tasks. The prosody recognition tasks included a vocal emotion recognition task and a focus perception task requiring recognition of an accented word in a spoken sentence. The psychoacoustic tasks included a task requiring pitch discrimination and three tasks also requiring pitch direction (i.e., high/low, rising/falling, changing/steady pitch). Results demonstrate that psychoacoustic thresholds can predict 31% and 38% of affective and pragmatic prosody recognition scores, respectively. Psychoacoustic tasks requiring pitch direction recognition were the only significant predictors of prosody recognition scores. These findings contribute to a better understanding of the mechanisms underlying prosody recognition and may have an impact on the assessment and rehabilitation of individuals suffering from deficient prosodic perception.

19.
Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH; Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting post-saccadic oscillations (PSOs), and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
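The fixed-threshold approach mentioned in change (3) can be illustrated with the simplest velocity-based classifier. This sketch is not the MNH code; the sampling rate, threshold value, and the `classify_samples` helper are illustrative assumptions.

```python
# Illustrative fixed-threshold, velocity-based sample classifier:
# gaze is a list of (x, y) positions in degrees, sampled at `hz` Hz.

def classify_samples(gaze, hz=1000.0, vel_threshold=30.0):
    """Label each inter-sample interval 'sac' or 'fix' by comparing
    point-to-point velocity (deg/s) against a fixed threshold."""
    labels = []
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        velocity = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * hz
        labels.append("sac" if velocity > vel_threshold else "fix")
    return labels

# Steady fixation, a rapid 2-degree jump, then fixation again.
gaze = [(0.0, 0.0), (0.01, 0.0), (0.02, 0.0), (2.0, 0.0), (2.01, 0.0)]
print(classify_samples(gaze))  # → ['fix', 'fix', 'sac', 'fix']
```

Fixing `vel_threshold` for the whole recording trades the ONH's data-adaptive sensitivity for predictability; subsequent passes would then merge samples into events and, as in change (5), handle post-saccadic oscillations separately.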

20.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether scan paths of observers vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor scanning behavior of healthy participants while looking at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. Especially in sad facial expressions, participants more frequently directed the initial fixation to the eyes compared with all other expressions. In happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that both the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at regions that are most characteristic for each emotion.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)