Similar Articles (20 results)
1.
This study investigates the amount of ocular motility occurring in response to questions varying in the kind of cognitive process required for an answer. Fewer eye movements occur in response to questions designed to elicit visuo-spatial as compared to verbal-conceptual processes, a finding consistent with our earlier observation of more ‘stares’ occurring with the former than with the latter questions. The results are inconsistent with the traditional hypothesis that visual imagery involves an increase in scanning eye movements. The findings are interpreted in terms of a model postulating an interaction of the form of visual information processing and the type of cognitive activity subjects engage in.

2.
In the present experiments, we examined whether shifts of attention selectively interfere with the maintenance of both verbal and spatial information in working memory and whether the interference produced by eye movements is due to the attention shifts that accompany them. In Experiment 1, subjects performed either a spatial or a verbal working memory task, along with a secondary task requiring fixation or a secondary task requiring shifts of attention. The results indicated that attention shifts interfered with spatial, but not with verbal, working memory, suggesting that the interference is specific to processes within the visuospatial sketchpad. In Experiment 2, subjects performed a primary spatial working memory task, along with a secondary task requiring fixation, an eye movement, or an attention shift executed in the absence of an eye movement. The results indicated that both eye movements and attention shifts interfered with spatial working memory. Eye movements interfered to a much greater extent than shifts of attention, however, suggesting that eye movements may contribute a unique source of interference, over and above the interference produced by the attention shifts that accompany them.

3.
Computer classifiers have been successful at classifying various tasks using eye movement statistics. However, the question of human classification of task from eye movements has rarely been studied. Across two experiments, we examined whether humans could classify task based solely on the eye movements of other individuals. In Experiment 1, human classifiers were shown one of three sets of eye movements: Fixations, which were displayed as blue circles, with larger circles meaning longer fixation durations; Scanpaths, which were displayed as yellow arrows; and Videos, in which a neon green dot moved around the screen. There was an additional Scene manipulation in which eye movement properties were displayed either on the original scene where the task (Search, Memory, or Rating) was performed or on a black background in which no scene information was available. Experiment 2 used similar methods but only displayed Fixations and Videos with the same Scene manipulation. The results of both experiments showed successful classification of Search. Interestingly, Search was best classified in the absence of the original scene, particularly in the Fixation condition. Memory also was classified above chance with the strongest classification occurring with Videos in the presence of the scene. Additional analyses on the pattern of correct responses in these two conditions demonstrated which eye movement properties successful classifiers were using. These findings demonstrate conditions under which humans can extract information from eye movement characteristics in addition to providing insight into the relative success/failure of previous computer classifiers.
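As a point of reference for the computer classifiers mentioned above, the sketch below (Python; the scikit-learn pipeline, the feature set, and the synthetic data are all assumptions for illustration, not the classifiers used in prior work) shows how task identity might be predicted from per-trial summary statistics of eye movements.

```python
# Minimal sketch: predict viewing task (Search / Memory / Rating) from
# summary eye movement statistics, using scikit-learn and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_per_task = 100

# Hypothetical per-trial features: mean fixation duration (s),
# number of fixations, mean saccade amplitude (deg).
def simulate(means, n):
    return rng.normal(means, [0.05, 5.0, 1.0], size=(n, 3))

X = np.vstack([
    simulate([0.20, 40, 6.0], n_per_task),  # Search: short fixations, large saccades
    simulate([0.30, 25, 4.0], n_per_task),  # Memory
    simulate([0.35, 20, 3.5], n_per_task),  # Rating
])
y = np.repeat(["Search", "Memory", "Rating"], n_per_task)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} (chance = 0.33)")
```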

4.
5.
We study how processing states alternate during information search tasks. Inference is carried out with a discriminative hidden Markov model (dHMM) learned from eye movement data, measured in an experiment consisting of three task types: (i) simple word search, (ii) finding a sentence that answers a question, and (iii) choosing a subjectively most interesting title from a list of ten titles. The results show that eye movements contain the information necessary for determining the task type. After training, the dHMM predicted the task for test data with 60.2% accuracy (pure chance 33.3%). The word search and subjective interest conditions were easier to predict than the question–answer condition. The dHMM that best fitted our data segmented each task type into three hidden states. The three processing states were identified by comparing the parameters of the dHMM states to the literature on eye movement research. A scanning type of eye behavior was observed at the beginning of the tasks. Next, participants tended to shift to states reflecting reading-type eye movements, and finally they ended the tasks in states we termed decision states.
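The classification idea can be approximated with off-the-shelf tools. The following sketch (Python; it assumes the hmmlearn library, synthetic fixation-level features, and uses ordinary generative HMMs, one three-state model per task, rather than the paper's discriminative dHMM) labels a trial with the task whose model assigns it the highest log-likelihood.

```python
# Sketch: task classification from eye movement sequences with one
# 3-state Gaussian HMM per task type (a generative stand-in for the
# paper's discriminative HMM), using hmmlearn and synthetic data.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def fake_trials(n_trials, means):
    """Hypothetical trials: (n_fixations, 2) arrays of
    [fixation duration, saccade amplitude]."""
    return [rng.normal(means, 1.0, size=(rng.integers(20, 40), 2))
            for _ in range(n_trials)]

train = {
    "word_search": fake_trials(30, [0.2, 5.0]),
    "question_answer": fake_trials(30, [0.25, 3.0]),
    "subjective_interest": fake_trials(30, [0.3, 2.0]),
}

# Fit a 3-state model per task, mirroring the three processing states
# reported (scanning, reading, decision).
models = {}
for task, trials in train.items():
    X = np.concatenate(trials)
    lengths = [len(t) for t in trials]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
    m.fit(X, lengths)
    models[task] = m

def classify(trial):
    # Assign the task whose HMM gives the sequence the highest log-likelihood.
    return max(models, key=lambda task: models[task].score(trial))

print(classify(fake_trials(1, [0.2, 5.0])[0]))  # expected: word_search
```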

6.
Experimental analogues of post-traumatic stress disorder suggest that loading the visuospatial sketchpad of working memory with a concurrent task reduces the vividness and associated distress of predominantly visual images. The present experiments explicitly tested the hypothesis that interfering with the phonological loop could analogously reduce the vividness and emotional impact of auditory images. In Experiment 1, 30 undergraduates formed non-specific images of emotive autobiographical memories while performing a concurrent task designed to load either the visuospatial sketchpad (eye movements) or phonological loop (articulatory suppression). Participants reported their images to be primarily visual, corresponding to the greater dual-task disruption observed for eye movements. Experiment 2 instructed participants to form specifically visual or auditory images. As predicted, concurrent articulation reduced vividness and emotional intensity ratings of auditory images to a greater extent than did eye movements, whereas concurrent eye movements reduced ratings of visual images much more than did articulatory suppression. Such modality-specific dual-task interference could usefully contribute to the treatment and management of intrusive distressing images in both clinical and non-clinical settings.

7.
It is not known why people move their eyes when engaged in non-visual cognition. The current study tested the hypothesis that differences in saccadic eye movement rate (EMR) during non-visual cognitive tasks reflect different requirements for searching long-term memory. Participants performed non-visual tasks requiring relatively low or high long-term memory retrieval while eye movements were recorded. In three experiments, EMR was substantially lower for low-retrieval than for high-retrieval tasks, including in an eyes-closed condition in Experiment 3. Neither visual imagery nor between-task difficulty was related to EMR, although there was some evidence for a minor effect of within-task difficulty. Comparison of task-related EMRs to EMR during a no-task waiting period suggests that eye movements may be suppressed or activated depending on task requirements. We discuss a number of possible interpretations of saccadic eye movements during non-visual cognition and propose an evolutionary model that links these eye movements to memory search through an elaboration of circuitry involved in visual perception.

8.
Eye movement desensitization and reprocessing can reduce ratings of the vividness and emotionality of unpleasant memories; hence, it is commonly used to treat posttraumatic stress disorder. The present experiments compared three accounts of how eye movements produce these benefits. Participants rated unpleasant autobiographical memories before and after eye movements or an eyes-stationary control condition. In Experiment 1, eye movements produced benefits only when memories were held in mind during the movements, and eye movements increased arousal, contrary to an investigatory-reflex account. In Experiment 2, horizontal and vertical eye movements produced equivalent benefits, contrary to an interhemispheric-communication account. In Experiment 3, two other distractor tasks (auditory shadowing, drawing) produced benefits that were negatively correlated with working-memory capacity. These findings support a working-memory account of the eye movement benefits, in which the central executive is taxed when a person performs a distractor task while attempting to hold a memory in mind.

9.
Four dual-task experiments required a speeded manual choice response to a tone in close temporal proximity to a saccadic eye movement task. In Experiment 1, subjects made a saccade towards a single transient; in Experiment 2, a red and a green colour patch were presented to left and right, and the saccade was to whichever patch was the pre-specified target colour. There was some slowing of the eye movement, but neither task combination showed typical dual-task interference (the “psychological refractory effect”). However, more interference was observed when the direction of the saccade depended on whether a central colour patch was red or green, or when the saccade was directed towards the numerically higher of two large digits presented to the left and the right. Experiment 5 examined a vocal second task, for comparison. The findings might reflect the fact that eye movements can be directed by two separate brain systems: the superior colliculus and the frontal eye fields; commands from the latter but not the former may be delayed by simultaneous unrelated sensorimotor tasks.

10.
In the present study, second graders (n = 23), fourth graders (n = 16), sixth graders (n = 24) and adults (n = 21) read texts adapted from children’s science textbooks, either with the task of answering a “why” question presented as the title of the text or for comprehension, while their eye movements were recorded. Immediately after reading, readers answered a text memory question and an integration question. Second graders showed an effect of questions as increased processing during first-pass reading, whereas older readers showed the effect in later look-backs. For adult readers, questions also facilitated first-pass reading. Neither text memory nor integration question-answering was influenced by the reading task. The results indicate that questions increase the standards of coherence for text information and that even young readers modify their reading behaviour according to task demands.

11.
Response latencies to color are usually shorter than those to form, but sometimes this difference is absent or reversed. Three experiments investigated whether these differences result from differential susceptibility of color and form to extrafoveal processing. Experiment I showed shorter same-different latencies to color than to form when two stimuli were presented simultaneously. This difference disappeared or reversed when the two stimuli appeared sequentially. Experiment II, using a multistimulus matching task, found that differences between response latencies to form and color were minimal at the center of the display and increased peripherally. Experiment III showed that eye movements were more frequent in matching forms than colors. Tasks that produced many eye movements had long average latencies, but the relationship between eye movements and latency was not a simple one. There was evidence for both parallel and serial strategies in the use of the eyes to gather information. The results of these experiments are considered in relation to a theory of distributed attention.

12.
Two studies investigated the effects of eye movements on the rate of discovery and the vividness of visual images. Eye movements were manipulated by having three conditions: (1) the Ss were instructed to make eye movements while generating images to noun pairs; (2) the Ss were instructed not to make eye movements, but to think about making eye movements while generating images to noun pairs; (3) the Ss were instructed not to make eye movements and not to think about making eye movements while generating images to noun pairs. In addition, the ease of generating images was manipulated by using noun pairs that differed in their image-evoking capacity; five of the noun pairs consisted of high imagery-evoking nouns and five consisted of low imagery-evoking nouns. The two experiments were similar, with the exception that a between-groups design was used in Experiment 1, whereas Experiment 2 employed a within-Ss design. The results of both experiments showed highly significant effects of noun-pair type on both the rate of discovery and the vividness of images, with the fastest and most vivid images occurring to the high-imagery noun pairs. The effects of the eye-movement conditions on the rate of discovery and the vividness of the images were not significant in either experiment, and these findings are discussed in terms of the relationship of ocular activity to imagery.

13.
Three experiments examined the role of eye and limb movements in the maintenance of information in spatial working memory. In Experiment 1, reflexive saccades interfered with memory span for spatial locations but did not interfere with memory span for letters. In Experiment 2, three different types of eye movements (reflexive saccades, pro-saccades, and anti-saccades) interfered with working memory to the same extent. In all three cases, spatial working memory was much more affected than verbal working memory. The results of these two experiments suggest that eye movements interfere with spatial working memory primarily by disrupting processes localised in the visuospatial sketchpad. In Experiment 3, limb movements performed while maintaining fixation produced as much interference with spatial working memory as reflexive saccades. These results suggest that the interference produced by eye movements is not the result of their visual consequences. Rather, all spatially directed movements appear to have similar effects on visuospatial working memory.

14.
Eye-hand coordination is required to accurately perform daily activities that involve reaching, grasping and manipulating objects. Studies using aiming, grasping or sequencing tasks have shown a stereotypical temporal coupling pattern in which the eyes are directed to the object in advance of the hand movement, which may facilitate the planning and execution required for reaching. While the temporal coordination between the ocular and manual systems has been extensively investigated in adults, relatively little is known about the typical development of eye-hand coordination. Therefore, the current study addressed an important knowledge gap by characterizing the profile of eye-hand coupling in typically developing school-age children (n = 57) and in a cohort of adults (n = 30). Eye and hand movements were recorded concurrently during the performance of a bead-threading task which consists of four distinct movements: reach to bead, grasp, reach to needle, and thread. Results showed a moderate to high correlation between eye and hand latencies in children and adults, supporting the view that both movements were planned in parallel. Eye and reach latencies, latency differences, and dwell time during grasping and threading showed significant age-related differences, suggesting that eye-hand coupling becomes more efficient in adolescence. Furthermore, visual acuity, stereoacuity and accommodative facility were also found to be associated with the efficiency of eye-hand coordination in children. Results from this study can serve as reference values when examining eye and hand movements during the performance of fine motor skills in children with neurodevelopmental disorders.
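The temporal coupling measure described here comes down to comparing eye and hand movement onset times. Below is a minimal sketch (Python; the onset values, sample size, and the roughly 200 ms eye lead are invented for illustration) of computing the per-trial eye lead and the eye-hand latency correlation.

```python
# Sketch: quantify eye-hand coupling from movement onset latencies,
# using hypothetical per-trial onsets in seconds from the go signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials = 57

eye_latency = rng.normal(0.25, 0.04, n_trials)                 # eye onset
hand_latency = eye_latency + rng.normal(0.20, 0.05, n_trials)  # correlated hand onset

eye_lead = hand_latency - eye_latency              # per-trial eye lead
r, p = stats.pearsonr(eye_latency, hand_latency)   # coupling strength

print(f"mean eye lead: {eye_lead.mean() * 1000:.0f} ms")
print(f"eye-hand latency correlation: r = {r:.2f}, p = {p:.3g}")
```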

15.
Previous studies have shown that number processing can induce spatial biases in perception and action and can trigger the orienting of visuospatial attention. Few studies, however, have investigated how spatial processing and visuospatial attention influence number processing. In the present study, we used the optokinetic stimulation (OKS) technique to trigger eye movements and thus overt orienting of visuospatial attention. Participants were asked to stare at the OKS while performing parity judgements (Experiment 1) or number comparison (Experiment 2), two numerical tasks that differ in their demands on magnitude processing. Numerical stimuli were presented acoustically, and participants responded orally. We examined the effects of OKS direction (leftward or rightward) on number processing. The results showed that rightward OKS abolished the classic number size effect (i.e., faster reaction times for small than for large numbers) in the comparison task, whereas the parity task was unaffected by OKS direction. The effect of OKS highlights a link between visuospatial orienting and the processing of number magnitude that is complementary to the more established link between numerical and visuospatial processing. We suggest that the bidirectional link between numbers and space is embodied in the mechanisms subserving sensorimotor transformations for the control of eye movements and spatial attention.

16.
Previous research has suggested that questions eliciting visual imagery are associated with lower rates of saccadic eye movements than questions eliciting verbal processes. Two experiments reported here examined the roles of external visual stimulation and speech output in this effect. In both experiments, questions designed to elicit verbal-linguistic or visual-imaginal processing, and which required either syntactically complex or simple responses, were administered while eye movements were recorded by electrooculography. In Experiment 1, 42 subjects responded while viewing either the interviewer's face or a gray oval on a video monitor. Imaginal questions elicited a lower rate of eye movements than did verbal questions, regardless of the display on the monitor. In Experiment 2, 17 subjects responded in conditions of light and darkness. Imaginal questions elicited lower rates of eye movements in both light and dark. Neither cognitive mode nor speech output requirements interacted with stimulus conditions in either experiment. The failure of visual conditions to influence the verbal-imaginal difference in eye movement rate is viewed as inconsistent with a visual interference interpretation of the relationship of eye movements to cognitive activity. Alternative interpretations are discussed.

17.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour or category or that were unrelated, while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that the number of object presentations predicted target memory, with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by the number of target object presentations, not the number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.
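The regression step reported here can be sketched as follows (Python; statsmodels, the variable names, and the simulated data frame are assumptions, not the authors' analysis script): a logistic regression of recognition memory on number of presentations and total dwell time.

```python
# Sketch: logistic regression of object memory on viewing measures,
# using statsmodels/pandas and a hypothetical trial-level data frame.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400

df = pd.DataFrame({
    "presentations": rng.choice([2, 4], size=n),   # times the object appeared in search
    "total_dwell": rng.gamma(2.0, 0.3, size=n),    # summed fixation time on the object (s)
})
# Simulated outcome driven mainly by number of presentations.
logit_p = -1.0 + 0.6 * df["presentations"] + 0.1 * df["total_dwell"]
df["remembered"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("remembered ~ presentations + total_dwell", data=df).fit(disp=False)
print(model.summary())
```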

18.
Nearly all the self-talk cues studied so far have been self-statements. However, the findings of Senay, Albarracin, and Noguchi suggest that interrogative self-talk produces better task performance than declarative self-talk. Two of the experiments reported here were meant to replicate that study, but the expected differences were not confirmed. Experiment 3 showed that if a self-posed question about future behavior was answered positively, task performance was better than in groups exposed either to the self-statement ‘I will do it’ or to a negative answer following the question. However, these differences occurred only in those who self-reported awareness of the impact of self-talk on their thought processes. This effect and the possible reasons why between-group differences were not found in Experiments 1 and 2 are discussed. An alternative explanation for the results of Experiment 3, beyond the one stressing the impact of the internal answer, is also proposed.

19.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

20.
We studied the strategic (presumably cortical) control of ocular fixation in experiments that measured the fixation offset effect (FOE) while manipulating readiness to make reflexive or voluntary eye movements. The visual grasp reflex, which generates reflexive saccades to peripheral visual signals, reflects an opponent process in the superior colliculus (SC) between fixation cells at the rostral pole, whose activity helps maintain ocular position and increases when a stimulus is present at fixation, and movement cells, which generate saccades and are inhibited by rostral fixation neurons. Voluntary eye movements are controlled by movement and fixation cells in the frontal eye field (FEF). The FOE, a decrease in saccade latency when the fixation stimulus is extinguished, has been shown to reflect activity in the collicular eye movement circuitry and also to have an activity correlate in the FEF. Our manipulation of preparatory set to make reflexive or voluntary eye movements showed that when reflexive saccades were frequent and voluntary saccades were infrequent, the FOE was attenuated only for reflexive saccades. When voluntary saccades were frequent and reflexive saccades were infrequent, the FOE was attenuated only for voluntary saccades. We conclude that cortical processes related to task strategy are able to decrease fixation neuron activity even in the presence of a fixation stimulus, resulting in a smaller FOE. The dissociation in the effects of a fixation stimulus on reflexive and voluntary saccade latencies under the same strategic set suggests that the FOEs for these two types of eye movements may reflect a change in cellular activity in different neural structures, perhaps in the SC for reflexive saccades and in the FEF for voluntary saccades.
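Because the FOE is defined here as the drop in saccade latency when the fixation stimulus is extinguished, it reduces to a difference of condition means; a minimal sketch with invented latencies follows.

```python
# Sketch: compute the fixation offset effect (FOE) as the mean saccade
# latency difference between trials where the fixation stimulus stays on
# and trials where it is extinguished, using hypothetical latencies (ms).
import numpy as np

rng = np.random.default_rng(4)
fixation_on = rng.normal(240, 30, 100)   # fixation stimulus remains visible
fixation_off = rng.normal(200, 30, 100)  # fixation stimulus extinguished

foe = fixation_on.mean() - fixation_off.mean()
print(f"FOE = {foe:.0f} ms (larger = greater latency benefit from fixation offset)")
```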
