Similar documents
 20 similar documents retrieved (search time: 31 ms)
1.
This study proposed and verified a new hypothesis on the relationship between gaze direction and visual attention: attentional bias by default gaze direction based on eye-head coordination. We conducted a target identification task in which visual stimuli appeared briefly to the left and right of a fixation cross. In Experiment 1, the direction of the participant’s head (aligned with the body) was manipulated to the left, front, or right relative to a central fixation point. In Experiment 2, head direction was manipulated to the left, front, or right relative to the body direction. This manipulation was based on results showing that bias of eye position distribution was highly correlated with head direction. In both experiments, accuracy was greater when the target appeared at a position where the eyes would potentially be directed. Consequently, eye-head coordination influences visual attention. That is, attention can be automatically biased toward the location where the eyes tend to be directed.

2.
A comprehensive model of gaze control must account for a number of empirical observations at both the behavioural and neurophysiological levels. The computational model presented in this article can simulate the coordinated movements of the eye, head, and body required to perform horizontal gaze shifts. In doing so it reproduces the predictable relationships between the movements performed by these different degrees of freedom (DOFs) in the primate. The model also accounts for the saccadic undershoot that accompanies large gaze shifts in the biological visual system. It further accounts for our perception of a stable external world despite frequent gaze shifts, and for the ability to perform accurate memory-guided and double-step saccades. The proposed model also simulates peri-saccadic compression: the mis-localization of a briefly presented visual stimulus towards the location that is the target for a saccade. At the neurophysiological level, the proposed model is consistent with the existence of cortical neurons tuned to the retinal, head-centred, body-centred, and world-centred locations of visual stimuli, and of cortical neurons that have gain-modulated responses to visual stimuli. Finally, the model also successfully accounts for peri-saccadic receptive field (RF) remapping, which results in reduced responses to stimuli in the current RF location and an increased sensitivity to stimuli appearing at the location that will be occupied by the RF after the saccade. The proposed model thus offers a unified explanation for this seemingly diverse range of phenomena. Furthermore, as the proposed model is an implementation of the predictive coding theory, it offers a single computational explanation for these phenomena and relates gaze shifts to a wider framework for understanding cortical function.
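For horizontal angles, the chain of reference frames this abstract invokes (retinal → head-centred → body-centred → world-centred) reduces to successive additions of eye-in-head, head-on-body, and body-in-world orientation. The sketch below is illustrative only, not the article's implementation; all function names and example values are assumptions:

```python
# Horizontal-angle (1-D) sketch of the reference-frame chain:
# retinal -> head-centred -> body-centred -> world-centred.

def to_head(retinal_deg, eye_in_head_deg):
    """Head-centred direction of a stimulus from its retinal angle."""
    return retinal_deg + eye_in_head_deg

def to_body(head_centred_deg, head_on_body_deg):
    """Body-centred direction from the head-centred one."""
    return head_centred_deg + head_on_body_deg

def to_world(body_centred_deg, body_in_world_deg):
    """World-centred direction from the body-centred one."""
    return body_centred_deg + body_in_world_deg

# Stimulus 10 deg right on the retina, eyes 5 deg left in the head,
# head 20 deg right on the body, body facing straight ahead:
h = to_head(10.0, -5.0)
w = to_world(to_body(h, 20.0), 0.0)
print(h, w)  # 5.0 25.0
```

Gain-modulated visual responses of the kind the abstract mentions are often interpreted as the cortical mechanism for computing such frame shifts.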

3.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (Stimulus Onset Asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together these results suggest that location-based auditory IOR is not dependent on either eye movements or saccade programming to sound locations.

4.
Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags between eye and hand onsets were small and initiation times were substantially correlated, suggesting simultaneous control and tight eye-hand coupling. With eyes and hand departing from different start locations (gaze aligned with the center of the range of possible target positions), time lags between eye and hand onsets were large and initiation times were largely uncorrelated, suggesting independent control and decoupling of eye and hand movements. Furthermore, initial gaze position strongly mediated manual reaching performance indexed by increments in movement time as a function of target distance. Experiment 2 confirmed the impact of target foveation in modulating the effect of target distance on movement time. Our findings reveal the operation of an overarching, flexible neural control system that tunes the operation and cooperation of saccadic and manual control systems depending on where the eyes look at target onset.

5.
In two experiments, visually perceived eye level (VPEL) was measured while subjects viewed two-dimensional displays that were either upright or pitched 20 degrees top-toward or 20 degrees top-away from them. In Experiment 1, it was demonstrated that binocular exposure to a pair of pitched vertical lines or to a pitched random dot pattern caused a substantial upward VPEL shift for the top-toward pitched array and a similarly large downward shift for the top-away array. On the other hand, the same pitches of a pair of horizontal lines (viewed binocularly or monocularly) produced much smaller VPEL shifts. Because the perceived pitch of the pitched horizontal line display was nearly the same as the perceived pitch of the pitched vertical line and dot array, the relatively small influence of pitched horizontal lines on VPEL cannot be attributed simply to an underestimation of their pitch. In Experiment 2, the effects of pitched vertical lines, dots, and horizontal lines on VPEL were again measured, together with their effects on resting gaze direction (in the vertical dimension). As in Experiment 1, vertical lines and dots caused much larger VPEL shifts than did horizontal lines. The effects of the displays on resting gaze direction were highly similar to their effects on VPEL. These results are consistent with the hypothesis that VPEL shifts caused by pitched visual arrays are due to the direct influence of these arrays on the oculomotor system and are not mediated by perceived pitch.

6.
H. Heuer, D. A. Owens. Perception, 1989, 18(3): 363-377
With horizontal gaze, the resting posture of binocular vergence typically corresponds to a distance of about 1 m. The effect of vertical direction of gaze on this basic resting posture was investigated. The dark vergence of twenty-four subjects was measured while they fixated a dim monocular light point at vertical directions ranging from -45 degrees (lowered) to +30 degrees (elevated). In one condition, gaze was varied by changes in eye position with the head held upright; in a second condition, gaze was varied by changes in head inclination with the eyes held in constant (horizontal) position with respect to the head. In both conditions, dark vergence shifted in the convergent (nearer) direction with lowered gaze and in the divergent (farther) direction with elevated gaze. The effect of varied eye inclination was larger, more variable across subjects, and more stable over time than that of varied head inclination. These findings indicate that multiple mechanisms contribute to gaze-related variations of the resting posture of the eyes. They may help to explain the variations of space perception and visual fatigue that are observed with different gaze inclinations.
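For orientation, the ~1 m resting distance of dark vergence reported above can be restated as a vergence angle. A minimal sketch, assuming a typical interpupillary distance of 6.4 cm (the abstract does not report IPD):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.064):
    """Vergence angle (degrees) for symmetric binocular fixation at a
    given viewing distance; ipd_m is the assumed interpupillary distance."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# A ~1 m resting distance corresponds to only a few degrees of
# convergence for a typical 6.4-cm interpupillary distance.
print(round(vergence_angle_deg(1.0), 2))  # 3.67
```

Nearer resting distances imply larger resting angles, so the convergent shifts with lowered gaze reported here correspond to increases in this angle.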

7.
In 2 experiments, we evaluated relations between postural activity and the amplitude of visually guided eye movements. Participants shifted their gaze to follow horizontal oscillation of a visible target. The target moved with amplitude 9° or 24°. In different experiments, the frequency of target oscillation was 0.5 Hz or 1.1 Hz. In both experiments, the variability of head and torso motion was reduced (in the anterior-posterior axis) when participants viewed moving targets, relative to sway during viewing of stationary targets. Sway variability was not influenced by the amplitude of target motion. The results are compatible with the hypothesis that postural activity was modulated relative to the demands of suprapostural visual tasks.

8.
Recent studies have shown that cueing eye gaze can affect the processing of visual information, a phenomenon called the gaze-orienting effect (visual-GOE). Emerging evidence has shown that cueing eye gaze also affects the processing of auditory information (auditory-GOE). However, it is unclear whether the auditory-GOE is modulated by emotion. We conducted three behavioural experiments to investigate whether cueing eye gaze influenced orientation judgements of a sound, and whether the effect was modulated by facial expressions. The study used four facial expressions (angry, fearful, happy, and neutral), manipulated the display type of the facial expressions, and varied the sequence of the gaze and emotional expressions. Participants were required to judge the sound's orientation after the facial expression and gaze cues. In all three experiments, the orientation judgement of the sound was influenced by gaze direction: judgements were faster when the face was oriented toward the target location (congruent trials) than when it was oriented away from the target location (incongruent trials). Modulation of the auditory-GOE by emotion was observed only when the gaze shift was followed by the facial expression (Experiment 3); there, the auditory-GOE was significantly greater for angry faces than for neutral faces. These findings indicate that the auditory-GOE is a widespread social phenomenon that can be modulated by facial expression, and that a gaze shift preceding the presentation of emotion is the key condition for emotional modulation in an auditory-target gaze-orienting task. Our findings suggest that the integration of facial expressions and eye gaze is context-dependent.

9.
Hu Zhonghua, Zhao Guang, Liu Qiang, Li Hong. Acta Psychologica Sinica (心理学报), 2012, 44(4): 435-445
Previous studies using visual search tasks have found that direct gaze is detected faster and more accurately than averted gaze, a phenomenon termed the "stare-in-the-crowd effect." Most researchers attribute this effect to direct gaze capturing more attention. However, easier matching of the search items under direct-gaze conditions could also make direct gaze faster to detect than averted gaze. In addition, previous studies have found that head orientation affects the detection of gaze direction, but the cause of this effect lacks experimental verification. The present study addressed these two questions using a visual search paradigm with eye tracking, dividing the visual search process of gaze detection into a preparation stage, a search stage, and a response stage. The results showed that the detection advantage for direct gaze appeared mainly in the search and response stages; in the search stage, the advantage derived from shorter scan paths, fewer fixated distractors, and shorter mean fixation durations on distractors; and head orientation influenced gaze detection only in the search stage. These results indicate that easier matching of the search items in direct-gaze detection than in averted-gaze detection also contributes to the "stare-in-the-crowd effect," and that head orientation affects only the search for gaze direction, not its verification.

10.
Two experiments using a modified Posner-type visual cueing paradigm tested the prediction that detecting the darker region of another's eyes triggers reflexive orienting of the observer in the direction of the gaze. A target was presented in the left or right visual field following a gaze cue with positive or negative image polarity (Experiment 1). In Experiment 2, the polarity of the eyes was manipulated independently of the negative polarity of the face (eye-positive or eye-negative image polarity conditions). The results showed that in the positive polarity condition, responses to the target presented at the side the eyes gazed toward were faster than to the target presented at the other side (Experiment 1), whereas in the negative polarity condition, the gaze-cueing effect was not found. In Experiment 2, a reversed gaze-cueing effect appeared in the eye-negative condition, whereas a typical gaze-cueing effect was obtained in the eye-positive condition. These findings suggest that the observer's reflexive orienting shifts toward the position indicated by the darker region of the other's eyes.

11.
Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. There were significant shifts in eye gaze as targets were heard toward semantic but not toward shape competitors. In Experiments 2–4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search.

12.
Research has shown that observers automatically align their attention with another's gaze direction. The present study investigates whether inferring another's attended location affects the observer's attention in the same way as observing their gaze direction. In two experiments, we used a laterally oriented virtual human head to prime one of two laterally presented targets. Experiment 1 showed that, in contrast to the agent with closed eyes, observing the agent with open eyes facilitated the observer's alignment of attention with the primed target location. Experiment 2, where either sunglasses or occluders concealed the agent's eye direction, showed that only the agent with the sunglasses facilitated the observer's alignment of attention with the target location. Taken together, the data demonstrate that head orientation alone is not sufficient to trigger a shift in the observer's attention, that gaze direction is crucial to this process, and that inferring the region to which another person is attending does facilitate the alignment of attention.

13.
Selection mechanisms in reading lexically ambiguous words   (total citations: 9; self-citations: 0; citations by others: 9)
Readers' eye movements were monitored as they read sentences containing lexically ambiguous words. The ambiguous words were either biased (one strongly dominant interpretation) or nonbiased. Readers' gaze durations were longer on nonbiased than biased words when the disambiguating information followed the target word. In Experiment 1, reading times on the disambiguating word did not differ whether the disambiguation followed the target word immediately or occurred several words later. In Experiment 2, prior disambiguation eliminated the long gaze durations on nonbiased target words but resulted in long gaze durations on biased target words if the context demanded the subordinate meaning. The results indicate that successful integration of one meaning with prior context terminates the search for alternative meanings of that word. This results in selective (single meaning) access when integration of a dominant meaning is fast (due to a biasing context) and identification of a subordinate meaning is slow (a strongly biased ambiguity with a low-frequency meaning).

14.
Synergistic interactions between visual and postural behaviors were observed in a previous study during a precise visual task (search for a specific target in a picture) performed while standing upright as steadily as possible. The goal of the present study was to confirm and extend these novel findings in a more ecological condition with no steadiness requirement. Twelve healthy young adults performed two visual tasks: a precise task and a control task (free-viewing). Center of pressure, lower back, neck, head, and eye movements were recorded during each task. Subjective cognitive workload was assessed after each task (NASA-TLX questionnaire). Pearson correlations and cross-correlations between eye movements (time series, characteristics of fixation) and center of pressure/body movements were used to test the synergistic model. As expected, significant negative Pearson correlations between eye and head-neck movement variables were observed only in searching. They indicated that larger precise gaze shifts were correlated with smaller head and neck movements. One cross-correlation coefficient (between the COP on the anterior-posterior axis and the eyes in the up/down direction) was also significantly stronger in searching than in free-viewing. These synergistic interactions likely required greater cognitive demand, as indicated by the greater NASA-TLX score in searching. Moreover, the Pearson correlations were no longer significant after controlling for the NASA-TLX global score (using partial correlations). This study provides new evidence of the existence of a synergistic process between visual and postural behaviors during visual search tasks.

15.
Mental images seem to have a size; the experimental problem was to map that image size onto a scale of physical measurement. To this end, two experiments were conducted to measure the size of mental images in degrees of visual angle. In Experiment 1, college students employed light pointers to indicate the horizontal extent of projected mental images of words (the letter string, not the referent). Imagined words covered about 1.0 degrees of visual angle per letter. In Experiment 2, a more objective eye-movement response was used to measure the visual angle size of imagined letter strings. Visual angle of eye movement was found to increase regularly as the letter distance between the fixation point and a probed letter position increased. Each letter occupied about 2.5 degrees of visual angle for the four-letter strings in the control/default size condition. Possible relations between eye movements and images are discussed.
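The degrees-of-visual-angle scale used in both experiments follows from simple geometry: an extent s viewed head-on at distance d subtends an angle of 2·atan(s / 2d). A quick sketch of that conversion; the 1 cm / 57 cm values are illustrative, not from the study:

```python
import math

def visual_angle_deg(extent_m, distance_m):
    """Visual angle (degrees) subtended by a physical extent viewed
    head-on at a given distance."""
    return math.degrees(2 * math.atan(extent_m / (2 * distance_m)))

# Rule of thumb: ~1 cm at ~57 cm subtends about 1 degree of visual angle.
print(round(visual_angle_deg(0.01, 0.57), 2))  # 1.01
```

This is the same conversion that lets a pointer position or an eye-movement amplitude be expressed in degrees, as done in the two experiments above.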

16.
Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In Experiment 3, the actor provided contradictory verbal and eye gaze clues about the location of the toy. Four- and 5-year-olds correctly used the actor's gaze cues to locate the toy, whereas 3-year-olds failed to do so. Results suggest that by 4 years of age, children begin to understand that eye gaze cues displayed by a deceiver can be informative about the true state of affairs.

17.
The purpose of this study was to clarify the properties of gaze and head movements during the forehand stroke in table tennis. Collegiate table tennis players (n = 12) performed forehand strokes toward a ball launched by a skilled experimenter, with ten trials per participant. Horizontal and vertical movements of the ball, gaze, head, and eye were analyzed from the image recorded by an eye tracking device. The results showed that participants did not always keep their gaze and head position on the ball throughout the entire ball path. Our results indicate that table tennis players tend to gaze at the ball in the initial ball-tracking phase. Furthermore, there was a significant negative correlation between eye and head position, especially in the vertical direction. This result suggests that horizontal VOR is suppressed more than vertical VOR during ball tracking in the table tennis forehand stroke. Finally, multiple regression analysis showed that the effect of head position on gaze position was significantly greater than that of eye position, indicating that gaze position during the forehand stroke is associated with head position rather than eye position. Taken together, head movements may play an important role in maintaining the ball in a constant egocentric direction during the table tennis forehand stroke.

18.
This study tested effects of gaze-movement angle and extraretinal eye movement information on performance in a locomotion control task. Subjects hovered in a virtual scene to maintain position against optically simulated gusts. Gaze angle was manipulated by varying the simulated camera pitch orientation. Availability of extraretinal information was manipulated via simulated-pursuit fixation. In Experiment 1, subjects performed better when the camera faced a location on the ground than when it pointed toward the horizon. Experiment 2 tested whether this gain was influenced by availability of appropriate eye movements. Subjects performed slightly better when the camera pointed at nearby than at distant terrain, both in displays that did and in displays that did not simulate pursuit fixation. This suggested that subjects could perform the task using geometric image transformations, with or without appropriate eye movements. Experiment 3 tested more rigorously the relative importance of gaze angle and extraretinal information over a greater range of camera orientations; although subjects could use image transformations alone to control position adequately with a distant point of regard, they required eye movements for optimal performance when viewing nearby terrain.

19.
The head, eye, and shoulder are each free to rotate around three mutually orthogonal axes. These three degrees of freedom allow a given gaze or pointing direction of the eye, head, or arm to be obtained in many different possible orientations. Unlike translations in three dimensions, three-dimensional (3-D) rotations are noncommutative. Therefore, the orientation of a rigid body following sequential rotations about two different axes depends on the order of the rotations. In this article, we demonstrate that only two degrees of freedom are used during orienting movements of the head and pointing movements of the arm. This provides a unique orientation of head and arm for each gaze or pointing direction despite the noncommutativity of three-dimensional rotations. This observation is in itself not new. We found, however, that (a) the two-dimensional "rotation surface," which describes the orientation of the head for all gaze directions, is curved, unlike the analogous flat plane for the eye. (b) The rotation surface for the head is curved differently than that for the arm. This result argues against the hypothesis that the orientations of head and arm are directly coupled during pointing. It also implies that the orientation of the eye in space during gaze shifts of the eye and head is not uniquely determined for a given direction of gaze. This finding argues against a perceptual basis for the reduction of rotational degrees of freedom.
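The noncommutativity this abstract builds on is easy to verify numerically: composing the same two 90° rotations in opposite orders leaves a reference vector pointing in different directions. A self-contained sketch using standard axis rotation matrices (the specific axes and angles are illustrative):

```python
import math

def rot_x(t):
    """3x3 rotation matrix about the x axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    """3x3 rotation matrix about the y axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

q = math.pi / 2          # two 90-degree rotations
v = [1.0, 0.0, 0.0]      # a reference direction

# Same two rotations, opposite orders: the end orientations differ.
a = apply(matmul(rot_x(q), rot_y(q)), v)
b = apply(matmul(rot_y(q), rot_x(q)), v)
print([round(x, 6) + 0.0 for x in a])  # [0.0, 1.0, 0.0]
print([round(x, 6) + 0.0 for x in b])  # [0.0, 0.0, -1.0]
```

Because order matters in this way, a system with all three rotational degrees of freedom free would reach many orientations for one gaze direction; the two-degrees-of-freedom constraint reported above is what makes the orientation unique.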

20.
Atypical processing of eye contact is one of the significant characteristics of individuals with autism, but the mechanism underlying atypical direct gaze processing is still unclear. This study used a visual search paradigm to examine whether facial context affects direct gaze detection in children with autism. Participants were asked to detect target gazes presented among distracters with different gaze directions. The target gazes were either direct or averted, and were presented either alone (Experiment 1) or within facial context (Experiment 2). As with typically developing children, children with autism were faster and more efficient at detecting direct gaze than averted gaze, whether the eyes were presented alone or within faces. In addition, face inversion distorted efficient direct gaze detection in typically developing children, but not in children with autism. These results suggest that children with autism use featural information to detect direct gaze, whereas typically developing children use configural information to detect direct gaze.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号