Similar Documents
20 similar documents were retrieved (search time: 15 ms).
1.
Huettig F, Altmann GT. Cognition, 2005, 96(1): B23-B32
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, they spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632-1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
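The time-course claim here (fixation probability rising as the word unfolds) is typically quantified by binning gaze samples relative to word onset. The sketch below is a minimal, hypothetical illustration of that analysis; the sample data, bin size, and object labels are assumptions for illustration, not values from the study.

```python
import numpy as np

# Hypothetical eye-tracking samples: time from word onset (ms) and the object
# fixated at that sample; objects and timings are made up for illustration.
samples = [
    (0, "distractor"), (50, "distractor"), (100, "piano"), (150, "piano"),
    (200, "trumpet"), (250, "piano"), (300, "piano"), (350, "trumpet"),
]

def fixation_proportions(samples, objects, bin_ms=100, t_max=400):
    """Proportion of samples on each object within successive time bins:
    the kind of curve used to show fixations rising as the word unfolds."""
    edges = np.arange(0, t_max + bin_ms, bin_ms)
    out = {obj: [] for obj in objects}
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [obj for t, obj in samples if lo <= t < hi]
        n = max(len(in_bin), 1)  # avoid division by zero for empty bins
        for obj in objects:
            out[obj].append(in_bin.count(obj) / n)
    return out

print(fixation_proportions(samples, ["piano", "trumpet", "distractor"]))
```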

2.
The analysis of eye movement traces (i.e., the patterns of fixations in a search) is more complex than that of such parameters as mean fixation duration, and as a result, previous attempts have focused on a qualitative appraisal of the form of an eye movement trace. In this paper, the concept of the fixation map is introduced. Its application to the quantification of similarity of traces and the degree of coverage by fixations of a visual stimulus is discussed. The ability of fixation maps to aid in the understanding and communication of large numbers of eye movement traces is examined.
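The abstract does not spell out how a fixation map is computed; a common construction (assumed here, not taken from the paper) sums a Gaussian centred on each fixation, weighted by its duration, then compares or thresholds the resulting maps. A minimal Python sketch with made-up traces:

```python
import numpy as np

def fixation_map(fixations, width, height, sigma=30.0):
    """Build a duration-weighted fixation map by summing a Gaussian
    centred on each fixation (x, y, duration) over the stimulus grid."""
    ys, xs = np.mgrid[0:height, 0:width]
    fmap = np.zeros((height, width))
    for x, y, dur in fixations:
        fmap += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return fmap / fmap.sum() if fmap.sum() > 0 else fmap

def map_similarity(map_a, map_b):
    """Pearson correlation between two fixation maps as a similarity index."""
    return np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]

def coverage(fmap, threshold=0.1):
    """Proportion of the stimulus area whose map value exceeds a fraction
    of the map's peak: a crude coverage measure."""
    return float(np.mean(fmap >= threshold * fmap.max()))

# Example: two hypothetical traces over a 640 x 480 stimulus.
trace_1 = [(100, 120, 250), (300, 200, 180), (500, 400, 300)]
trace_2 = [(110, 130, 200), (320, 210, 220), (480, 380, 260)]
m1 = fixation_map(trace_1, 640, 480)
m2 = fixation_map(trace_2, 640, 480)
print(map_similarity(m1, m2), coverage(m1))
```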

3.
P. McLeod, J. Driver, and J. Crisp (1988) proposed the existence of a movement filter in the human early visual system. This filter preattentively segregates all moving stimuli in the visual field from all stationary stimuli (McLeod et al., 1988) and all stimuli moving in one direction from those moving in another (P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991). The primary experimental paradigm that provides evidence for the movement filter is the visual search task. McLeod et al. (1988) demonstrated that a target defined by motion and shape perceptually pops out of a conjunctive display. Four experiments are presented that demonstrate that the output of the movement filter may depend on global characteristics of the display. When a moving element perceptually groups with a static object, preattentive segregation does not occur. However, when the same element does not perceptually group with a static object, preattentive segregation occurs.

4.
A device for detecting departures from visual fixation is described. The principle is to present control targets to one or both eyes at the projected position of the blind spot. When fixation is maintained the control target or targets are not seen. Departures from fixation manifest themselves by the reappearance of the target or targets in the visual field. The system allows approximately 1 deg of movement or less away from the fixation point in either direction.

5.
Given the prevalence, quality, and low cost of web cameras, along with the remarkable human sensitivity to gaze, we examined the accuracy of eye tracking using only a web camera. Participants were shown web camera recordings of a person’s eyes moving 1°, 2°, or 3° of visual angle in one of eight radial directions (north, northeast, east, southeast, etc.), or no eye movement occurred at all. Observers judged whether an eye movement was made and, if so, its direction. Our findings demonstrate that for all saccades of any size or direction, observers can detect and discriminate eye movements significantly better than chance. Critically, the larger the saccade, the better the judgments, so that for eye movements of 3°, people can tell whether an eye movement occurred, and where it was going, at about 90% accuracy or better. This simple methodology of using a web camera and looking for eye movements offers researchers a simple, reliable, and cost-effective research tool that can be applied effectively both in studies where it is important that participants maintain central fixation (e.g., covert attention investigations) and in those where they are free or required to move their eyes (e.g., visual search).

6.
This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. The probe word was preceded by two simultaneous prime words presented for 150 ms (one foveal, i.e., at fixation; one parafoveal, i.e., 2.2 deg of visual angle from fixation), one of which was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for the probe threat words when primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias in the preferential, parallel processing of information that is important for adaptation.
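For readers wanting to reproduce the 2.2 deg parafoveal eccentricity on a display, the conversion from visual angle to screen units follows from size = 2·d·tan(θ/2). The viewing distance and pixel density below are assumptions for illustration; the abstract does not report them.

```python
import math

def deg_to_pixels(deg, viewing_distance_cm, pixels_per_cm):
    """On-screen extent (pixels) subtending `deg` degrees of visual angle
    at the given viewing distance: size = 2 * d * tan(deg / 2)."""
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(deg) / 2.0)
    return size_cm * pixels_per_cm

# Assumed setup (not stated in the abstract): 60 cm viewing distance and a
# monitor with 38 pixels per cm.
print(round(deg_to_pixels(2.2, 60.0, 38.0)))  # roughly 88 px offset for the parafoveal prime
```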

7.
The importance of vision for postural equilibrium has long been known; traditionally, this visual contribution to the control of posture has been analyzed primarily in terms of optical and retinal phenomena. Recently, however, there has been some suggestion that binocular and monocular fixation of identical stimuli have differential effects. Three experiments were conducted in order to measure self-generated movement (sway during quiet standing) of the body's center of gravity while field structure, ankle proprioception, and binocular/monocular fixation were varied. Field structure was varied from total darkness, to the presence of single and multiple LEDs in the dark, to full field structure (i.e., the richness of the feedback information was varied). Ankle proprioception was varied by changing foot position from side-by-side to heel-to-toe positions. Results indicate that (1) ankle-joint input is a significant factor in reducing sway, (2) binocular fixation attenuates sway relative to monocular fixation, under otherwise identical visual conditions, and (3) this difference persists in total darkness. Taken together, the data indicate that the visual influence on postural equilibrium results from a complex synergy that receives multimodal inputs. A simple optical/retinal explanation is not sufficient.

8.
Numerous studies have shown that the simultaneous execution of multiple actions is associated with performance costs. Here, we demonstrate that when highly automatic responses are involved, performance in single-response conditions can actually be worse than in dual-response conditions. Participants responded to peripheral visual stimuli with an eye movement (saccade), a manual key press, or both. To manipulate saccade automaticity, a central fixation cross either remained present throughout the trial (overlap condition, lower automaticity) or disappeared 200 ms before visual target onset (gap condition, greater automaticity). Crucially, single-response conditions yielded more performance errors than dual-response conditions (i.e., a dual-response benefit), especially in gap trials. This was due to difficulties associated with inhibiting saccades when only manual responses were required, suggesting that response inhibition (remaining fixated) can be even more resource-demanding than overt response execution (saccade to peripheral target).

9.
Some recent studies have demonstrated that the processing of color is favored by the nondominant hemisphere in English-speaking subjects. Single Chinese logographs are similarly favored in Japanese- as well as Chinese-speaking subjects. In the present study, it was hypothesized that more Stroop interference would occur in the nondominant hemisphere because the two processes involved in the Stroop effect (i.e., reading logographs and naming colors) are possibly localized in that hemisphere in Chinese-speaking subjects. Eighteen right-handed Chinese-English bilinguals served as subjects. There were three conditions in each visual field: interference, reading, and naming. Each slide was presented for 150 msec, preceded by a fixation dot. Subjects were asked to verbally report, as fast and as accurately as possible, either the color words or the color names, depending upon the condition. Reaction times and error rates were analyzed. As expected, more Stroop interference was obtained when color words were presented in the left visual field. This result is in direct contrast with that of Y.-C. Tsao, T. Feustel, and C. Soseos (1979, Brain and Language, 8, 367–371). In that study, more Stroop interference was obtained when the materials were presented in the right visual field in English-speaking subjects.

10.
Geometrical stimuli (48 6-item arrays of familiar forms, e.g., circle), tachistoscopically presented in the right or left visual field, were more accurately perceived in the right than left visual field by 15 college students. Targets about half the length of the displays exposed here were perceived with equal facility in both visual fields (Bryden, 1960). Results suggest that length of array might affect the difference in perceptual accuracy of forms shown in the right and left visual fields. Figures in the right visual field were predominantly processed from left to right, and forms in the left visual field from right to left. Since more symbols were identified in the right than left visual field, the left to right encoding sequence may be more efficient than a right to left movement. Limited experience of most Ss in reading symbols from left to right is probably only one factor. Extensive experience reading alphabetical material from left to right might have developed the physiological mechanism underpinning this sequence more than the one serving the opposite movement.

11.
Summary
1. The persistence of visual perception was investigated under conditions of visual fixation as well as eye movement. The Ss' task was to discriminate brief double light impulses; their responses were recorded as a function of the duration of the interstimulus interval. Based on these data the critical interstimulus interval was calculated, which yielded equal response frequencies for the perception of one or two stimuli upon presentation of double light pulses.
2. In the condition of visual fixation the two stimuli could not be discriminated until the mean value of the interstimulus interval exceeded 73 msec. In the condition with eye movements, when the first stimulus was presented in the parafoveal region of the retina before the beginning of the saccade and the second stimulus in the foveal region just after termination of the eye movement, this duration was shown to be statistically of the same magnitude (76 msec).
3. Possible alternative interpretations of this latter result, e.g., that it could be explained in terms of masking or saccadic suppression rather than visual persistence, were discussed; three control experiments were conducted to rule out such explanations.
4. The main result, the persistence of visual perception during voluntary eye movements, was discussed in relation to the problem of spatial and temporal stability of visual perception.
I thank Prof. Dr. H. W. Wendt for support in correcting the English translation.
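The critical interstimulus interval in point 1 is the ISI at which "one" and "two" responses are equally frequent. A minimal sketch of that calculation by linear interpolation, using made-up response proportions rather than the study's data:

```python
import numpy as np

# Hypothetical data: proportion of "two flashes" responses at each
# interstimulus interval (ms); the paper's exact values are not given here.
isis = np.array([20, 40, 60, 80, 100, 120], dtype=float)
p_two = np.array([0.05, 0.15, 0.35, 0.60, 0.85, 0.95])

def critical_isi(isis, p_two, criterion=0.5):
    """Interpolate the ISI at which 'one' and 'two' responses are equally
    frequent (P(two) = 0.5), assuming P(two) increases with ISI."""
    return float(np.interp(criterion, p_two, isis))

print(critical_isi(isis, p_two))  # ~72 ms for these made-up values
```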

12.
Recent research [e.g., Carrozzo, M., Stratta, F., McIntyre, J., & Lacquaniti, F. (2002). Cognitive allocentric representations of visual space shape pointing errors. Experimental Brain Research 147, 426-436; Lemay, M., Bertrand, C. P., & Stelmach, G. E. (2004). Pointing to an allocentric and egocentric remembered target. Motor Control, 8, 16-32] reported that egocentric and allocentric visual frames of reference can be integrated to facilitate the accuracy of goal-directed reaching movements. In the present investigation, we sought to specifically examine whether or not a visual background can facilitate the online, feedback-based control of visually-guided (VG), open-loop (OL), and memory-guided (i.e. 0 and 1000 ms of delay: D0 and D1000) reaches. Two background conditions were examined in this investigation. In the first background condition, four illuminated LEDs positioned in a square surrounding the target location provided a context for allocentric comparisons (visual background: VB). In the second condition, the target object was singularly presented against an empty visual field (no visual background: NVB). Participants (N=14) completed reaching movements to three midline targets in each background (VB, NVB) and visual condition (VG, OL, D0, D1000) for a total of 240 trials. VB reaches were more accurate and less variable than NVB reaches in each visual condition. Moreover, VB reaches elicited longer movement times and spent a greater proportion of the reaching trajectory in the deceleration phase of the movement. Supporting the benefit of a VB for online control, the proportion of endpoint variability explained by the spatial location of the limb at peak deceleration was less for VB as opposed to NVB reaches. These findings suggest that participants are able to make allocentric comparisons between a VB and target (visible or remembered) in addition to egocentric limb and VB comparisons to facilitate online reaching control.
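The online-control index described above (the proportion of endpoint variability explained by limb position at peak deceleration) is the R² from regressing reach endpoints on positions at peak deceleration across trials. A small illustrative sketch with simulated data, not the study's values:

```python
import numpy as np

# Hypothetical per-trial data (mm): limb position at peak deceleration and
# final endpoint along the primary movement axis; values are simulated.
rng = np.random.default_rng(0)
pos_peak_decel = rng.normal(180.0, 8.0, size=40)
endpoint = 0.6 * pos_peak_decel + rng.normal(0.0, 4.0, size=40) + 70.0

def r_squared(x, y):
    """Proportion of endpoint variance explained by limb position at peak
    deceleration: the squared correlation from a simple linear regression."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# A smaller R^2 is taken to indicate more online (feedback-based) correction
# late in the reach, as in the VB vs. NVB comparison described above.
print(r_squared(pos_peak_decel, endpoint))
```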

13.
Accommodation was measured by the laser scintillation technique while the S viewed a stationary fixation spot through a series of apertures in a screen located at various distances. The magnitude of accommodation was a compromise between the distance of the fixation spot and the screen. Accommodation was affected significantly by the interaction of the distance of the screen with aperture sizes of 1 and 4 deg and distance of the screen with its order of movement from near to far or far to near. The data are interpreted as implying the importance of the peripheral visual field and/or perceptual factors when conflicting cues to distance coexist in the visual field.

14.
The goal of this study was to determine whether a sensorimotor or cognitive encoding is used to encode a target position and save it into iconic memory. The methodology consisted of disrupting a manual aiming movement to a memorized visual target by displacing the visual field containing the target. The nature of the encoding was inferred from the nature and the size of the errors relative to a control. The target was presented either centrally or in the right periphery. Participants moved their hand from the left to the right of fixation. Black and white vertical stripes covered the whole visual field. The visual field was either stationary throughout the trial or was displaced to the right or left at the extinction of the target or at the start of the hand movement. In the latter case, the displacement of the visual field obviously could only be taken into account by the participant during the gesture. In this condition, our hypothesis was that the aiming error would follow the direction of visual field displacement. Results showed three major effects: (1) Vision of the hand during the gesture improved the final accuracy; (2) visual field displacement produced an underestimation of the target distance only when the hand was not visible during the gesture, and this error was always in the direction of the displacement; and (3) the effect of the stationary structured visual field on aiming precision when the hand was not visible depended on the distance to the target. These results suggest that a stationary structured visual field is used to support the memory of the target position. The structured visual field is more critical when the hand is not visible and when the target appears in peripheral rather than central vision. This suggests that aiming depends on memory of the relative peripheral position of the target (allocentric reference). However, in the present task, cognitive encoding does not maintain the "position" of the target in memory without reference to the environment. The systematic effect of the visual field displacement on the manual aiming suggests that the role of environmental reference frames in memory for position is not well understood. Some studies, in particular those of Giesbrecht and Dixon (1999) and Glover and Dixon (2001), suggested differing roles of the environment in the retention of the target position and the control of aiming movements toward the target. The present observations contribute to understanding the mechanism involved in locating and grasping objects with the hand.

15.
We studied the strategic (presumably cortical) control of ocular fixation in experiments that measured the fixation offset effect (FOE) while manipulating readiness to make reflexive or voluntary eye movements. The visual grasp reflex, which generates reflexive saccades to peripheral visual signals, reflects an opponent process in the superior colliculus (SC) between fixation cells at the rostral pole, whose activity helps maintain ocular position and increases when a stimulus is present at fixation, and movement cells, which generate saccades and are inhibited by rostral fixation neurons. Voluntary eye movements are controlled by movement and fixation cells in the frontal eye field (FEF). The FOE--a decrease in saccade latency when the fixation stimulus is extinguished--has been shown to reflect activity in the collicular eye movement circuitry and also to have an activity correlate in the FEF. Our manipulation of preparatory set to make reflexive or voluntary eye movements showed that when reflexive saccades were frequent and voluntary saccades were infrequent, the FOE was attenuated only for reflexive saccades. When voluntary saccades were frequent and reflexive saccades were infrequent, the FOE was attenuated only for voluntary saccades. We conclude that cortical processes related to task strategy are able to decrease fixation neuron activity even in the presence of a fixation stimulus, resulting in a smaller FOE. The dissociation in the effects of a fixation stimulus on reflexive and voluntary saccade latencies under the same strategic set suggests that the FOEs for these two types of eye movements may reflect a change in cellular activity in different neural structures, perhaps in the SC for reflexive saccades and in the FEF for voluntary saccades.
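As defined above, the FOE is simply the drop in mean saccade latency when the fixation stimulus is extinguished relative to when it remains present. A minimal sketch of that computation, using invented latencies and condition labels:

```python
import numpy as np

# Hypothetical saccade latencies (ms), grouped by whether the fixation
# stimulus remained present or was extinguished; values are made up.
latencies = {
    ("reflexive", "fixation_present"): np.array([245, 252, 238, 260, 249]),
    ("reflexive", "fixation_offset"):  np.array([215, 222, 208, 230, 219]),
    ("voluntary", "fixation_present"): np.array([310, 305, 322, 298, 315]),
    ("voluntary", "fixation_offset"):  np.array([285, 280, 297, 273, 290]),
}

def fixation_offset_effect(latencies, saccade_type):
    """FOE = mean latency with the fixation stimulus present minus mean
    latency when it is extinguished; larger values mean a larger FOE."""
    present = latencies[(saccade_type, "fixation_present")].mean()
    offset = latencies[(saccade_type, "fixation_offset")].mean()
    return present - offset

for kind in ("reflexive", "voluntary"):
    print(kind, fixation_offset_effect(latencies, kind))
```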

16.
Eye movement studies of reading in alphabetic scripts have found that older adults adopt a "risky" reading strategy to compensate for reading difficulties caused by the natural age-related decline of vision and cognition. The present study compared the eye movement patterns of young and older adults reading Chinese while the size of the spacing between characters was manipulated. The results showed that, compared with young adults, older adults read more slowly, with longer fixation durations and more regressions, consistent with findings from alphabetic scripts. More importantly, older adults made shorter forward saccades than young adults, and reducing the inter-character spacing caused markedly greater difficulty for older readers, whereas with enlarged spacing the two groups showed similar eye movement patterns. These results indicate that, in Chinese reading, older adults do not use the "risky" reading strategy reported for alphabetic scripts but instead adopt a more cautious one. A likely reason is that word spacing, an important visual cue in alphabetic reading, is a key factor underlying the difference in older readers' processing strategies across the two writing systems.

17.
Pairs of emotional (pleasant or unpleasant) and neutral scenes were presented peripherally (≥5° away from fixation) during a central letter-discrimination task. Selective attentional capture was assessed by means of eye movement orienting, i.e., probability of first fixating a scene and the time until first fixation. Static and dynamic visual saliency values of the scenes were computationally modelled. Results revealed selective orienting to both pleasant and unpleasant relative to neutral scenes. Importantly, such effects remained in the absence of visual saliency differences, even though saliency influenced eye movements. This suggests that selective attention to emotional scenes is genuinely driven by the processing of affective significance in extrafoveal vision.
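The two orienting measures named here (probability of first fixating a scene and time until first fixation) can be computed directly from trial records. A small illustrative sketch with hypothetical trials; field names and values are assumptions, not the study's data:

```python
import numpy as np

# Hypothetical trial records: which scene (emotional vs. neutral) received
# the first fixation after scene onset, and the latency of that fixation (ms).
trials = [
    {"first_fixated": "emotional", "latency_ms": 240},
    {"first_fixated": "neutral",   "latency_ms": 310},
    {"first_fixated": "emotional", "latency_ms": 255},
    {"first_fixated": "emotional", "latency_ms": 230},
    {"first_fixated": "neutral",   "latency_ms": 295},
]

def orienting_indices(trials, category):
    """Probability that a scene of the given category was fixated first,
    and the mean latency until its first fixation on those trials."""
    hits = [t for t in trials if t["first_fixated"] == category]
    prob = len(hits) / len(trials)
    latency = float(np.mean([t["latency_ms"] for t in hits])) if hits else float("nan")
    return prob, latency

print(orienting_indices(trials, "emotional"))
print(orienting_indices(trials, "neutral"))
```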

18.
Three experiments dealing with hemispheric specialization are presented. In Experiment 1, words and/or faces were presented tachistoscopically to the left or right of fixation. Words were more accurately identified in the right visual field and faces were more accurately identified in the left visual field. A forced choice error analysis for words indicated that errors made for word stimuli were most frequently visually similar words and this effect was particularly pronounced in the left visual field. Two additional experiments supported this finding. On the basis of the results, it was argued that word identification is a multistage process, with visual feature analysis carried out by the right hemisphere and identification and naming by the left hemisphere. In addition, Kinsbourne's attentional model of brain function was rejected in favor of an anatomical model which suggests that simultaneous processing of verbal and nonverbal information does not constrict the attention of either hemisphere.

19.
The visual behaviors and movement characteristics of pedestrians are related to potential safety hazards in their surroundings, such as approaching vehicles. This study primarily aimed to investigate the visual patterns and walking behaviors of pedestrians interacting with approaching vehicles. Field experiments were conducted at two uncontrolled crosswalks located on the Cuihua and Yanta roads in Xi’an, China. The visual performance of pedestrians was assessed using the eye tracking system from SensoMotoric Instruments (SMI). Moreover, motion trajectories of the pedestrians and approaching vehicles were obtained using an unmanned aerial vehicle. Subsequently, the visual attributes and movement trajectories of pedestrians and the motion trajectories of approaching vehicles were statistically analyzed. The results showed that approaching vehicles significantly distracted the fixation of crossing pedestrians, occupying 29.5% of the total duration of fixation; that is, pedestrians always directed more fixation points to the approaching vehicles than to other stimuli. As a vehicle approached, pedestrians’ fixation shifted from other areas of interest to the vehicle. Moreover, an increase in the velocity of the vehicle and a closer distance between the pedestrian and the vehicle resulted in a longer duration of fixation on the approaching vehicle and more saccades. However, the approaching vehicle’s velocity and the pedestrian-vehicle distance were not significantly associated with the pedestrians’ movement attributes. These findings provide insights into the crossing behavior of pedestrians during pedestrian-vehicle interactions, which could assist future researchers and policy makers.
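The 29.5% figure is a dwell-time proportion: the share of total fixation duration falling on the approaching-vehicle area of interest (AOI). A minimal sketch of that index with invented fixation records and AOI labels:

```python
# Hypothetical fixation records: duration (ms) and the area of interest (AOI)
# the fixation landed in; values and AOI labels are made up for illustration.
fixations = [
    {"aoi": "approaching_vehicle", "duration_ms": 320},
    {"aoi": "crosswalk",           "duration_ms": 410},
    {"aoi": "approaching_vehicle", "duration_ms": 180},
    {"aoi": "other",               "duration_ms": 260},
    {"aoi": "far_end_of_road",     "duration_ms": 150},
]

def dwell_proportion(fixations, aoi):
    """Share of total fixation duration spent on one AOI (e.g., the
    approaching vehicle), the index reported as 29.5% in this study."""
    total = sum(f["duration_ms"] for f in fixations)
    on_aoi = sum(f["duration_ms"] for f in fixations if f["aoi"] == aoi)
    return on_aoi / total if total else 0.0

print(dwell_proportion(fixations, "approaching_vehicle"))
```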

20.
Integrating pictorial information across eye movements
Six experiments are reported dealing with the types of information integrated across eye movements in picture perception. A line drawing of an object was presented in peripheral vision, and subjects made an eye movement to it. During the saccade, the initially presented picture was replaced by another picture that the subject was instructed to name as quickly as possible. The relation between the stimulus on the first fixation and the stimulus on the second fixation was varied. Across the six experiments, there was about 100-130 ms facilitation when the pictures were identical compared with a control condition in which only the target location was specified on the first fixation. This finding clearly implies that information about the first picture facilitated naming the second picture. Changing the size of the picture from one fixation to the next had little effect on naming time. This result is consistent with work on reading and low-level visual processes in indicating that pictorial information is not integrated in a point-by-point manner in an integrated visual buffer. Moreover, only about 50 ms of the facilitation for identical pictures could be attributed to the pictures having the same name. When the pictures represented the same concept (e.g., two different pictures of a horse), there was a 90-ms facilitation effect that could have been the result of either the visual or conceptual similarity of the pictures. However, when the pictures had different names, only visual similarity produced facilitation. Moreover, when the pictures had different names, there appeared to be inhibition from the competing names. The results of all six experiments are consistent with a model in which the activation of both the visual features and the name of the picture seen on the first fixation survive the saccade and combine with the information extracted on the second fixation to produce identification and naming of the second picture.
