Similar Literature
20 similar documents found (search time: 15 ms)
1.
The performance of bimanual movements involving separate objects presents an obvious challenge to the visuo-motor system: visual feedback can only be obtained from one target at a time. To overcome this challenge, overt shifts in visual attention may occur so that visual feedback from both movements may be used directly (Bingham, Hughes, & Mon-Williams, 2008; Riek, Tresilian, Mon-Williams, Coppard, & Carson, 2003). Alternatively, visual feedback from both movements may be obtained in the absence of eye movements, presumably by covert shifts in attention (Diedrichsen, Nambisan, Kennerley, & Ivry, 2004). Given that the quality of information falls with increasing distance from the fixated point, can we obtain the level of information required to accurately guide each hand for precision grasping of separate objects without moving our eyes to fixate each target separately? The purpose of the current study was to examine how the temporal coordination between the upper limbs is affected by the quality of visual information available during the performance of a bimanual task. A total of 11 participants performed congruent and incongruent movements towards near and/or far objects. Movements were performed in natural, fixate-centre, fixate-left, and fixate-right vision conditions. Analyses revealed that the temporal aspects of both the transport and grasp phases of incongruent movements were similar across vision conditions, whereas the spatial aspects of grasp formation were influenced by the quality of visual feedback. We suggest that bimanual coordination of the temporal aspects of reach-to-grasp movements is not influenced solely by overt shifts in visual attention but instead by a combination of factors in a task-constrained way.

2.
There is considerable evidence that covert visual attention precedes voluntary eye movements to an intended location. What happens to covert attention when an involuntary saccadic eye movement is made? In agreement with other researchers, we found that attention and voluntary eye movements are tightly coupled in such a way that attention always shifts to the intended location before the eyes begin to move. However, we found that when an involuntary eye movement is made, attention first precedes the eyes to the unintended location and then switches to the intended location, with the eyes following this pattern a short time later. These results support the notion that attention and saccade programming are tightly coupled.

3.
Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified models of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts with lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains (equation solving, reading, and visual search) and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.
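The encoding-time relationship the abstract describes (longer encoding for lower-frequency objects and for objects farther from the fovea) can be sketched as a small function. The functional form below follows EMMA's published encoding-time equation, but the parameter values `K` and `k` are illustrative assumptions, not fitted values from the paper:

```python
import math

def emma_encoding_time(frequency, eccentricity, K=0.006, k=0.4):
    """Expected encoding time (seconds) for a visual object.

    EMMA models encoding time as increasing with the object's angular
    distance from fixation (eccentricity, in degrees of visual angle)
    and decreasing with its normalized frequency (0 < frequency <= 1).
    K and k are free scaling parameters; the defaults here are
    illustrative placeholders, not values fitted in the article.
    """
    return K * (-math.log(frequency)) * math.exp(k * eccentricity)

# A rare object far from fixation takes longer to encode than a
# frequent object near fixation.
far_rare = emma_encoding_time(frequency=0.01, eccentricity=5.0)
near_common = emma_encoding_time(frequency=0.5, eccentricity=1.0)
```

Because encoding time grows with eccentricity, the integrated model can predict when an eye movement (rather than a covert attention shift alone) pays off: moving the eyes reduces the eccentricity term for the next object to be encoded.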

4.
In the present experiments, we examined whether shifts of attention selectively interfere with the maintenance of both verbal and spatial information in working memory and whether the interference produced by eye movements is due to the attention shifts that accompany them. In Experiment 1, subjects performed either a spatial or a verbal working memory task, along with a secondary task requiring fixation or a secondary task requiring shifts of attention. The results indicated that attention shifts interfered with spatial, but not with verbal, working memory, suggesting that the interference is specific to processes within the visuospatial sketchpad. In Experiment 2, subjects performed a primary spatial working memory task, along with a secondary task requiring fixation, an eye movement, or an attention shift executed in the absence of an eye movement. The results indicated that both eye movements and attention shifts interfered with spatial working memory. Eye movements interfered to a much greater extent than shifts of attention, however, suggesting that eye movements may contribute a unique source of interference, over and above the interference produced by the attention shifts that accompany them.

5.
The premotor theory of attention predicts that motor movements, including manual movements and eye movements, are preceded by an obligatory shift of attention to the location of the planned response. We investigated whether the shifts of attention evoked by trained spatial cues (e.g., Dodd & Wilson, 2009) are obligatory by using an extreme prediction of the premotor theory: If individuals are trained to associate a color cue with a manual movement to the left or right, the shift of attention evoked by the color cue should also influence eye movements in an unrelated task. Participants were trained to associate an irrelevant color cue with left/right space via a training session in which directional responses were made. Experiment 1 showed that, posttraining, vertical saccades deviated in the direction of the trained response, despite the fact that the color cue was irrelevant. Experiment 2 showed that latencies of horizontal saccades were shorter when an eye movement had to be made in the direction of the trained response. These results demonstrate that the shifts of attention evoked by trained stimuli are obligatory, in addition to providing support for the premotor theory and for a connection between the attentional, motor, and oculomotor systems.

6.
When trying to remember verbal information from memory, people look at spatial locations that have been associated with visual stimuli during encoding, even when the visual stimuli are no longer present. It has been shown that such “eye movements to nothing” can influence retrieval performance for verbal information, but the mechanism underlying this functional relationship is unclear. More precisely, covert in comparison to overt shifts of attention could be sufficient to elicit the observed differences in retrieval performance. To test if covert shifts of attention explain the functional role of the looking-at-nothing phenomenon, we asked participants to remember verbal information that had been associated with a spatial location during an encoding phase. Additionally, during the retrieval phase, all participants solved an unrelated visual tracking task that appeared in either an associated (congruent) or an incongruent spatial location. Half the participants were instructed to look at the tracking task, half to shift their attention covertly (while keeping the eyes fixed). In two experiments, we found that memory retrieval depended on the location to which participants shifted their attention covertly. Thus, covert shifts of attention seem to be sufficient to cause differences in retrieval performance. The results extend the literature on the relationship between visuospatial attention, eye movements, and verbal memory retrieval and provide deep insights into the nature of the looking-at-nothing phenomenon.

7.
Becker, S. I. (2008). Acta Psychologica, 127(2), 324–339.
Previous studies indicate that priming affects attentional processes, facilitating processes of target detection and selection on repetition trials. However, the results are so far compatible with two different attentional views that propose entirely different mechanisms to account for priming. The priming of pop-out hypothesis explains priming by feature weighting processes that lead to more frequent selections of nontarget items on switch trials. According to the episodic retrieval account, switch trials conversely lead to temporal delays in retrieving priority rules that specify the target. The results from two eye tracking experiments clearly favour the priming of pop-out hypothesis: Switching the target and nontarget features leads to more frequent selection of nontargets, without affecting the time-course of saccades to a great extent. The results from two further control experiments demonstrate that the same results can be obtained in a visual search task that allows only covert attention shifts. This indicates that eye movements can reliably indicate covert attention shifts in visual search.

8.
We examined perceptual sequence learning by means of an adapted serial reaction time task in which eye movements were unnecessary for performing the sequence learning task. Participants had to respond to the identity of a target letter pair ("OX" or "XO") appearing in one of four locations. On the other locations, similar distractor letter pairs ("QY" or "YQ") were shown. While target identity changed randomly, target location was structured according to a deterministic sequence. To render eye movements superfluous, (1) stimulus letter pairs appeared around a fixation cross with a visual angle of 0.63°, which means that they appeared within the foveal visual area, and (2) the letter pairs were presented for only 100 ms, a period too short to allow proper eye movements. Reliable sequence knowledge was acquired under these conditions, as responses were both slower and less accurate when the trained sequence was replaced by an untrained sequence. These results support the notion that perceptual sequence learning can be based on shifts of attention without overt oculomotor movements.

9.
Attention and saccadic eye movements
Four threshold detection experiments addressed three issues concerning the relationship between movements of spatial attention and saccadic eye movements: (a) the time course of attention shifts with saccades, (b) the response of the two systems to changes in stimulus parameters, and (c) the relationship of attention to saccadic suppression. These issues bear on the more general question of the degree of independence between the saccadic and attentional movement systems. The results of these experiments support the contention that the mechanisms that shift attention are separate from those that control saccadic eye movements. Relevant events in the visual field periphery, however, will trigger both a saccade and an attention shift. The attentional response to such events does not appear to be under subjects' control. The implication of these results for theories of saccadic suppression is discussed.

10.
The “gap effect” refers to the finding that saccadic latencies are typically reduced when the fixation point is removed just prior to the presentation of a target. One explanation for this effect is that the removal of the fixation point causes the disengagement of covert attention and allows for extremely rapid movements of attention (express attentional shifts). However, previous research regarding express attentional shifts has yielded equivocal results. The present study used a variation of a peripheral cueing paradigm with a discrimination task (Experiment 1) and a detection task (Experiment 2) to further examine this issue. The results from eye movement and keypress latencies indicated that there were express attentional shifts with the discrimination task but not in the detection task. This pattern of results may have been due to differences in how attention was allocated between the two tasks. Thus, evidence for express attentional shifts was found, but only under certain conditions.

11.
Eye movements and the integration of visual memory and visual perception
Because visual perception has temporal extent, temporally discontinuous input must be linked in memory. Recent research has suggested that this may be accomplished by integrating the active contents of visual short-term memory (VSTM) with subsequently perceived information. In the present experiments, we explored the relationship between VSTM consolidation and maintenance and eye movements, in order to discover how attention selects the information that is to be integrated. Specifically, we addressed whether stimuli needed to be overtly attended in order to be included in the memory representation or whether covert attention was sufficient. Results demonstrated that in static displays in which the to-be-integrated information was presented in the same spatial location, VSTM consolidation proceeded independently of the eyes, since subjects made few eye movements. In dynamic displays, however, in which the to-be-integrated information was presented in different spatial locations, eye movements were directly related to task performance. We conclude that these differences are related to different encoding strategies. In the static display case, VSTM was maintained in the same spatial location as that in which it was generated. This could apparently be accomplished with covert deployments of attention. In the dynamic case, however, VSTM was generated in a location that did not overlap with one of the to-be-integrated percepts. In order to "move" the memory trace, overt shifts of attention were required.

12.
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia.
A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.

13.
Previous studies have demonstrated that working memory for spatial location can be significantly disrupted by concurrent eye or limb movement (Baddeley, 1986; Smyth, Pearson, & Pendleton, 1988). Shifts in attention alone can also interfere with spatial span (Smyth & Scholey, 1994), even with no corresponding movement of the eyes or limbs (Smyth, 1996). What is not clear from these studies is how the magnitudes of interference produced by different forms of spatial disrupter compare. Recently, it has been demonstrated that limb movements produce as much interference with spatial span as do reflexive saccades (Lawrence, Myerson, Oonk, & Abrams, 2001). In turn this has led to the hypothesis that all spatially directed movement can produce similar effects in visuo-spatial working memory. This paper reports the results of five experiments that contrasted the effects of concurrent eye movement, limb movement, and covert attention shifts on participants' working memory for sequences of locations. All conditions involving concurrent eye movement produced significantly greater reduction in span than equivalent limb movement or covert attention shifts with eyes fixated. It is argued that these results demonstrate a crucial role for oculomotor control processes during the rehearsal of location-specific representations in working memory.

14.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. 
The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.

15.
In a number of studies, we have demonstrated that the spatial-temporal coupling of eye and hand movements is optimal for the pickup of visual information about the position of the hand and the target late in the hand's trajectory. Several experiments designed to examine temporal coupling have shown that the eyes arrive at the target area concurrently with the hand achieving peak acceleration. Between the time the hand reached peak velocity and the end of the movement, increased variability in the position of the shoulder and the elbow was accompanied by a decreased spatial variability in the hand. Presumably, this reduction in variability was due to the use of retinal and extra-retinal information about the relative positions of the eye, hand and target. However, the hand does not appear to be a slave to the eye. For example, we have been able to decouple eye movements and hand movements using Müller-Lyer configurations as targets. Predictable bias, found in primary and corrective saccadic eye movements, was not found for hand movements, if on-line visual information about the target was available during aiming. That is, the hand remained accurate even when the eye had a tendency to undershoot or overshoot the target position. However, biases of the hand were evident, at least in the initial portion of an aiming movement, when vision of the target was removed and vision of the hand remained. These findings accent the versatility of human motor control and have implications for current models of visual processing and limb control.

16.
Recent work has demonstrated that horizontal saccadic eye movements enhance verbal episodic memory retrieval, particularly in strongly right-handed individuals. The present experiments test three primary assumptions derived from this research. First, horizontal eye movements should facilitate episodic memory for both verbal and non-verbal information. Second, the benefits of horizontal eye movements should only be seen when they immediately precede tasks that demand right- and left-hemisphere processing towards successful performance. Third, the benefits of horizontal eye movements should be most pronounced in the strongly right-handed. Two experiments confirmed these hypotheses: horizontal eye movements increased recognition sensitivity and decreased response times during a spatial memory test relative to both vertical eye movements and fixation. These effects were only seen when horizontal eye movements preceded episodic memory retrieval, and not when they preceded encoding (Experiment 1). Further, when eye movements preceded retrieval, they were only beneficial with recognition tests demanding a high degree of right- and left-hemisphere activity (Experiment 2). In both experiments the beneficial effects of horizontal eye movements were greatest for strongly right-handed individuals. These results support recent work suggesting increased interhemispheric brain activity induced by bilateral horizontal eye movements, and extend this literature to the encoding and retrieval of landmark shape and location information.

17.
To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between primary (driving) and secondary (operating an in-vehicle device) tasks. In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver focuses on the various tasks he/she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver’s scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject’s scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied completely on the manual identification of the focus of visual attention from a point of regard superimposed on a video of a recorded scene, failing to utilize information regarding movement structure in the raw recorded eye positions. Although effective, these methods are too time consuming to apply easily to the large data sets that would be required to identify subtle differences between drivers, road conditions, and levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in the processing of eye movement data by proposing a methodology for eye movement analysis that extends automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.
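The automated classification step described above (separating fixations from smooth and saccadic movements) can be illustrated with a minimal velocity-threshold sketch. The two-threshold scheme, the threshold values, and the function name below are assumptions for illustration, not the specific algorithm or parameters used in the article:

```python
def classify_eye_samples(velocities, saccade_thresh=100.0, smooth_thresh=10.0):
    """Label each eye-velocity sample (deg/s) as a fixation, a smooth
    (pursuit-like) movement, or a saccade using two velocity thresholds.

    Threshold values are illustrative placeholders; a real pipeline would
    calibrate them to the eye tracker's sampling rate and noise level.
    """
    labels = []
    for v in velocities:
        if v >= saccade_thresh:
            labels.append("saccade")   # fast ballistic movement
        elif v >= smooth_thresh:
            labels.append("smooth")    # slow tracking movement
        else:
            labels.append("fixation")  # eye effectively stationary
    return labels

# A slow sample, a mid-speed sample, and a fast sample:
labels = classify_eye_samples([2.0, 40.0, 350.0])
```

Once raw samples are labeled this way, the scene video only needs to be inspected at the identified fixation and smooth-movement episodes, which is the "finite search space" reduction the abstract describes.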

18.
Two experiments were conducted in an attempt to determine the conditions under which shifts in the starting position of a linear positioning response influenced the reproduction of the end location of movements of various lengths. In Experiment 1, response bias (i.e., shift in constant error) was affected by the direction of the shift in starting position between presentation and recall. For short (20 cm) and medium (50 cm) length movements, this relationship was evident regardless of hand used (left or right), direction of the movement (left to right or right to left), and length of the retention interval (5 or 45 s). However, no relation between response bias and the direction of the starting position shifts was apparent for long (80 cm) movements. The results of Experiment 2, in which more movement lengths were used, revealed a response bias that corresponded to shifts in starting position primarily during the first few reproductions of the two shortest movements (20 and 30 cm). However, no systematic bias was evident for any length movement after three reproduction attempts. Possible strategies used by subjects to reproduce the end location of movements of various lengths were discussed.

20.
Many studies have shown that covert visual attention precedes saccadic eye movements to locations in space. The present research investigated whether the allocation of attention is similarly affected by eye blinks. Subjects completed a partial-report task under blink and no-blink conditions. Experiment 1 showed that blinking facilitated report of the bottom row of the stimulus array: Accuracy for the bottom row increased and mislocation errors decreased under blink, as compared with no-blink, conditions, indicating that blinking influenced the allocation of visual attention. Experiment 2 showed that this was true even when subjects were biased to attend elsewhere. These results indicate that attention moves downward before a blink in an involuntary fashion. The eyes also move downward during blinks, so attention may precede blink-induced eye movements just as it precedes saccades and other types of eye movements.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号