Similar Documents (20 results)
1.
We investigated the coupling between a speaker's and a listener's eye movements. Some participants talked extemporaneously about a television show whose cast members they were viewing on a screen in front of them. Later, other participants listened to these monologues while viewing the same screen. Eye movements were recorded for all speakers and listeners. According to cross-recurrence analysis, a listener's eye movements most closely matched a speaker's eye movements at a delay of 2 sec. Indeed, the more closely a listener's eye movements were coupled with a speaker's, the better the listener did on a comprehension test. In a second experiment, low-level visual cues were used to manipulate the listeners' eye movements, and these, in turn, influenced their latencies to comprehension questions. Just as eye movements reflect the mental state of an individual, the coupling between a speaker's and a listener's eye movements reflects the success of their communication.
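The cross-recurrence analysis described above can be sketched in a few lines: code each gaze stream as which on-screen cast member is fixated in each time bin, shift one stream against the other, and record how often the two streams match at each lag. The sketch below is a minimal illustration, assuming 250 ms bins and hypothetical names; it is not the authors' analysis code.

```python
# Categorical cross-recurrence between two gaze streams (illustrative).
import numpy as np

def cross_recurrence_profile(speaker, listener, max_lag):
    """Proportion of time bins with matching fixation targets when the
    listener stream is shifted back by 0..max_lag bins."""
    n = len(speaker)
    return np.array([np.mean(speaker[:n - lag] == listener[lag:])
                     for lag in range(max_lag + 1)])

# Demo: with 250 ms bins, a recurrence peak at lag 8 corresponds to the
# ~2 s speaker-listener delay reported above.
rng = np.random.default_rng(0)
speaker = rng.integers(0, 6, size=400)   # which of 6 cast members is fixated
listener = np.roll(speaker, 8)           # simulated listener trails by 8 bins
profile = cross_recurrence_profile(speaker, listener, max_lag=20)
print("peak lag (bins):", int(profile.argmax()))   # -> 8
```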

2.
In two experiments, we examined the relation between gaze control and recollective experience in the context of face recognition. In Experiment 1, participants studied a series of faces while their eye movements were eliminated either during study, during test, or during both. Subsequently, they made remember/know judgements for each recognized test face. The preclusion of eye movements impaired explicit recollection without affecting familiarity-based recognition. In Experiment 2, participants examined unfamiliar faces under two study conditions (similarity vs. difference judgements) while their eye movements were registered. Similarity and difference judgements produced opposite effects on remember/know responses, with no systematic effects on eye movements. However, face recollection was related to eye movements, in that remember responses were associated with more frequent refixations than know responses. These findings suggest that saccadic eye movements mediate the nature of recollective experience, and that explicit recollection reflects a greater consistency between study and test fixations than does familiarity-based face recognition.

3.
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
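The contrast between component motion (what observers perceived) and the vector average, or pattern motion (what the eyes tracked), can be made concrete with a worked example. The directions and speeds below are illustrative assumptions, since the abstract does not give the grating parameters.

```python
# Component vs. vector-average (pattern) motion, with illustrative velocities.
import numpy as np

def direction_deg(v):
    """Direction of a 2-D velocity vector in degrees, counterclockwise from rightward."""
    return float(np.degrees(np.arctan2(v[1], v[0])) % 360)

v1 = np.array([1.0, 0.0])   # grating in one eye drifts rightward (0 deg)
v2 = np.array([0.0, 1.0])   # grating in the other eye drifts upward (90 deg)

pattern = (v1 + v2) / 2     # vector average, tracked by reflexive eye movements
print(direction_deg(v1), direction_deg(v2))   # 0.0 90.0  (perceived components)
print(direction_deg(pattern))                 # 45.0      (pattern motion)
```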

4.
When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer at event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading.

5.
Can eye movements tell us whether people will remember a scene? In order to investigate the link between eye movements and memory encoding and retrieval, we asked participants to study photographs of real-world scenes while their eye movements were being tracked. We found eye gaze patterns during study to be predictive of subsequent memory for scenes. Moreover, gaze patterns during study were more similar to gaze patterns during test for remembered than for forgotten scenes. Thus, eye movements are indeed indicative of scene memory. In an explicit test for context effects of eye movements on memory, we found recognition rate to be unaffected by the disruption of spatial and/or temporal context of repeated eye movements. Therefore, we conclude that eye movements cue memory by selecting and accessing the most relevant scene content, regardless of its spatial location within the scene or the order in which it was selected.
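One simple way to quantify the study-test gaze similarity mentioned above is to bin each viewing's fixations into a coarse spatial grid and correlate the resulting fixation maps. The grid size and the correlation measure are illustrative assumptions; the abstract does not specify the metric the authors used.

```python
# Study-test gaze similarity via correlated fixation maps (illustrative).
import numpy as np

def fixation_map(fixations, grid=(8, 8)):
    """Histogram of (x, y) fixations given in normalized [0, 1] screen coordinates."""
    hist, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                                bins=grid, range=[[0, 1], [0, 1]])
    return hist.ravel()

def gaze_similarity(study_fix, test_fix):
    """Pearson correlation between study and test fixation maps."""
    return float(np.corrcoef(fixation_map(study_fix), fixation_map(test_fix))[0, 1])

# Demo: test fixations that revisit the studied regions yield high similarity.
rng = np.random.default_rng(1)
study = rng.random((40, 2))
test = np.clip(study + rng.normal(0, 0.05, study.shape), 0, 1)
print(round(gaze_similarity(study, test), 2))
```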

6.
In a number of studies, we have demonstrated that the spatial-temporal coupling of eye and hand movements is optimal for the pickup of visual information about the position of the hand and the target late in the hand's trajectory. Several experiments designed to examine temporal coupling have shown that the eyes arrive at the target area concurrently with the hand achieving peak acceleration. Between the time the hand reached peak velocity and the end of the movement, increased variability in the position of the shoulder and the elbow was accompanied by decreased spatial variability in the hand. Presumably, this reduction in variability was due to the use of retinal and extra-retinal information about the relative positions of the eye, hand and target. However, the hand does not appear to be a slave to the eye. For example, we have been able to decouple eye movements and hand movements using Müller-Lyer configurations as targets. Predictable bias, found in primary and corrective saccadic eye movements, was not found for hand movements if on-line visual information about the target was available during aiming. That is, the hand remained accurate even when the eye had a tendency to undershoot or overshoot the target position. However, biases of the hand were evident, at least in the initial portion of an aiming movement, when vision of the target was removed and vision of the hand remained. These findings highlight the versatility of human motor control and have implications for current models of visual processing and limb control.

7.
Coordinated control of eye and hand movements in dynamic reaching
In the present study, we integrated two recent, at first sight contradictory findings regarding the question of whether saccadic eye movements can be generated to a newly presented target during an ongoing hand movement. Saccades were measured during so-called adaptive and sustained pointing conditions. In the adaptive pointing condition, subjects had to direct both their gaze and arm movements to a displaced target location. The results showed that the eyes could fixate the new target during pointing. In addition, these corrective saccades were temporally coupled with changes in arm movement trajectories when reaching to the new target. In the sustained pointing condition, however, the same subjects had to point to the initial target while trying to shift their gaze to a new target that appeared during pointing. It was found that the eyes could not fixate the new target before the hand reached the initial target location. Together, the results indicate that ocular gaze is always forced to follow the target intended by a manual arm movement. A neural mechanism is proposed that couples ocular gaze to the target of an arm movement. Specifically, the mechanism includes a reach neuron layer in addition to the well-known saccadic layer in the primate superior colliculus. Such a tight, sub-cortical coupling of ocular gaze to the target of a reaching movement can explain the contrasting behavior of the eyes depending on whether the eye and hand share the same target position or attempt to move to different locations.

8.
In the present study, we examined whether eye movements facilitate retention of visuo-spatial information in working memory. In two experiments, participants memorised the sequence of the spatial locations of six digits across a retention interval. In some conditions, participants were free to move their eyes during the retention interval, but in others they either were required to remain fixated or were instructed to move their eyes exclusively to a selection of the memorised locations. Memory performance was no better when participants were free to move their eyes during the memory interval than when they fixated a single location. Furthermore, the results demonstrated a primacy effect in the eye movement behaviour that corresponded with the memory performance. We conclude that overt eye movements do not provide a benefit over covert attention for rehearsing visuo-spatial information in working memory.

9.
While it is frequently advantageous to be able to use our hands independently, many actions demand that we use our hands co-operatively. In this paper we present two experiments that examine functional binding between the limbs during the execution of bimanual reach-to-grasp movements. The first experiment examines the effect of gaze direction on unimanual and bimanual reaches. Even when subjects' eye movements are restricted during bimanual reaches so that they may only foveate one target object, the limbs remain tightly synchronized to a common movement duration. In contrast, grip aperture is independently scaled to the size of the target for each hand. The second experiment demonstrates, however, that the independent scaling of grip aperture is task dependent. If the two target objects are unified so that they appear to be part of a single object, grip apertures become more similar across the hands (i.e., peak aperture to the large target object is reduced while peak aperture to the small target object is increased). These results suggest that the coupling of the limbs can operate at a functional level.

10.
Two experiments addressed the coupling between eye movements and the cognitive processes underlying enumeration. Experiment 1 compared eye movements in a counting task with those in a "look" task, in which subjects were told to look at each dot in a pattern once and only once. Experiment 2 presented the same dot patterns to every subject twice, to measure the consistency with which dots were fixated between and within subjects. In both experiments, the number of fixations increased linearly with the number of objects to be enumerated, consistent with tight coupling between eye movements and enumeration. However, analyses of fixation locations showed that subjects tended to look at dots in dense, central regions of the display and tended not to look at dots in sparse, peripheral regions of the display, suggesting a looser coupling between eye movements and enumeration. Thus, the eyes do not mirror the enumeration process very directly.

11.
When people interpret language, they can reduce the ambiguity of linguistic expressions by using information about perspective: the speaker's, their own, or a shared perspective. In order to investigate the mental processes that underlie such perspective taking, we tracked people's eye movements while they were following instructions to manipulate objects. The eye fixation data in two experiments demonstrate that people do not restrict the search for referents to mutually known objects. Eye movements indicated that addressees considered objects as potential referents even when the speaker could not see those objects, requiring addressees to use mutual knowledge to correct their interpretation. Thus, people occasionally use an egocentric heuristic when they comprehend. We argue that this egocentric heuristic is successful in reducing ambiguity, though it could lead to a systematic error.

12.
The aim of the current study was to investigate subtle characteristics of social perception and interpretation in high-functioning individuals with autism spectrum disorders (ASDs), and to study the relation between watching and interpreting. As a novel approach, we combined moment-by-moment eye tracking with verbal assessment. Sixteen young adults with ASD and 16 neurotypical control participants watched a video depicting a complex communication situation while their eye movements were tracked. The participants also completed a verbal task with questions related to the pragmatic content of the video. We compared verbal task scores and eye movements between groups, and assessed correlations between task performance and eye movements. Individuals with ASD had more difficulty than the controls in interpreting the video, and during two short moments there were significant group differences in eye movements. Additionally, we found significant correlations between verbal task scores and moment-level eye movements in the ASD group, but not among the controls. We concluded that participants with ASD had slight difficulties in understanding the pragmatic content of the video stimulus and in attending to social cues, and that the connection between pragmatic understanding and eye movements was more pronounced for participants with ASD than for neurotypical participants.

13.
Based on their Chinese reading proficiency, primary school children in grades 3 to 5 were divided, within each grade, into high-, medium-, and low-proficiency groups of the same age. Each group read five short passages matched to its grade's reading level, in order to examine whether the influence of reading proficiency on eye-movement fixation patterns during reading differs developmentally within a given age. Recordings of the children's eye movements showed that the fixation patterns of 9-year-olds were most strongly affected by the readers' own reading proficiency, those of 10-year-olds less so, and that by age 11, as children's basic oculomotor behaviour matured, this influence disappeared. These results indicate that the development of readers' eye-movement fixation patterns during reading is driven by the interaction between language-processing skills and improving oculomotor coordination.

14.
The face inversion effect is the finding that inverted faces are more difficult to recognize than other inverted objects. The present study explored the possibility that eye movements have a role in producing the face inversion effect. In Experiment 1, we demonstrated that the faces used here produce a robust face inversion effect when compared with another homogeneous set of objects (antique radios). In Experiment 2, participants' eye movements were monitored while they learned a set of faces and during a recognition test. Although we clearly found a face inversion effect, the same features of a face were fixated during learning and during the recognition test, whether the face was right side up or upside down. Thus, the face inversion effect is not a result of a different pattern of eye movements during the viewing of the face.

15.
Recent studies have documented substantial variability among typical listeners in how gradiently they categorize speech sounds, and this variability in categorization gradience may link to how listeners weight different cues in the incoming signal. The present study tested the relationship between categorization gradience and cue weighting across two sets of English contrasts, each varying orthogonally in two acoustic dimensions. Participants performed a four-alternative forced-choice identification task in a visual world paradigm while their eye movements were monitored. We found that (a) greater categorization gradience derived from behavioral identification responses corresponds to larger secondary cue weights derived from eye movements; (b) the relationship between categorization gradience and secondary cue weighting is observed across cues and contrasts, suggesting that categorization gradience may be a consistent within-individual property in speech perception; and (c) listeners who showed greater categorization gradience tend to adopt a buffered processing strategy, especially when cues arrive asynchronously in time.
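A common way to estimate cue weights and categorization gradience from identification responses alone is logistic regression on the standardized acoustic dimensions: the coefficients index relative cue weighting, and shallower slopes correspond to more gradient categorization. The sketch below simulates a binary version of such a task with hypothetical data; it is only an illustration of the idea, since in the study itself the task was four-alternative and the secondary cue weights were derived from eye movements.

```python
# Cue weights from identification responses via logistic regression
# (hypothetical data; a simplified binary stand-in for the actual task).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
primary = rng.normal(size=n)     # e.g. a standardized primary acoustic cue
secondary = rng.normal(size=n)   # e.g. a standardized secondary cue
# Simulated listener with a 3:1 cue weighting; smaller overall weights
# would produce shallower slopes, i.e. more gradient categorization.
p = 1 / (1 + np.exp(-(3.0 * primary + 1.0 * secondary)))
response = rng.random(n) < p

model = LogisticRegression().fit(np.column_stack([primary, secondary]), response)
w1, w2 = model.coef_[0]
print(f"primary {w1:.2f}, secondary {w2:.2f}, "
      f"relative secondary weight {w2 / (w1 + w2):.2f}")
```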

16.
The question investigated was whether eye movements accompanied by abnormal retinal image movements, that is, movements at a different rate or in a different direction than the eye movement, or both, predictably lead to perceived movement. Observers reported whether they saw a visual target move when the target's movement was either dependent on and simultaneous with their eye movements or independent of them. In the main experiment, observations were made when the ratio between eye and target movement (em/tm) was 2/5, 1/5, 1/10, 1/20, and 0. All these ratios were tested with the target moving in the same direction as (H+), opposite to (H−), and at right angles to (V+, V−) the movement of the eyes. Eye movements, target movements, and reports of target movement were recorded. Results indicate that a discrepancy between eye and target movement greater than 20% predictably leads to perceived target movement, whereas a discrepancy of 5% or less rarely does. The results are interpreted as support for the operation of a compensatory mechanism during eye movements.
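The reported result amounts to a simple threshold rule on the relative discrepancy between target and eye movement. Below is a minimal sketch, assuming each tested ratio expresses the target's displacement as a proportion of the eye's (so 1/5 = 20% and 1/20 = 5%); the rule and this interpretation are illustrative, not the authors' analysis.

```python
# Threshold rule implied by the results above (illustrative). Discrepancies
# above ~20% reliably produced perceived target movement; 5% or less rarely
# did; intermediate values were not reliably reported either way.
def predicts_perceived_motion(ratio, threshold=0.20):
    """True when target movement exceeds `threshold` of the eye movement."""
    return ratio > threshold

for ratio in (2 / 5, 1 / 5, 1 / 10, 1 / 20, 0):
    print(f"discrepancy {ratio:.0%}: perceived -> {predicts_perceived_motion(ratio)}")
```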

17.
People are unable to accurately report on their own eye movements most of the time. Can this be explained as a lack of attention to the objects we fixate? Here, we elicited eye-movement errors using the classic oculomotor capture paradigm, in which people tend to look at sudden onsets even when they are irrelevant. In the first experiment, participants were able to report their own errors on about a quarter of the trials on which they occurred. The aim of the second experiment was to assess what differentiates errors that are detected from those that are not. Specifically, we estimated the relative influence of two possible factors: how long the onset distractor was fixated (dwell time), and a measure of how much attention was allocated to the onset distractor. Longer dwell times were associated with awareness of the error, but the measure of attention was not. The effect of the distractor identity on target discrimination reaction time was similar whether or not the participant was aware they had fixated the distractor. The results suggest that both attentional and oculomotor capture can occur in the absence of awareness, and have important implications for our understanding of the relationship between attention, eye movements, and awareness.

18.
We asked participants to make simple risky choices while we recorded their eye movements. We built a complete statistical model of the eye movements and found very little systematic variation in eye movements over the time course of a choice or across the different choices. The only exceptions were finding more (of the same) eye movements when choice options were similar, and an emerging gaze bias in which people looked more at the gamble they ultimately chose. These findings are inconsistent with prospect theory, the priority heuristic, or decision field theory. However, the eye movements made during a choice have a large relationship with the final choice, and this is mostly independent from the contribution of the actual attribute values in the choice options. That is, eye movements tell us not just about the processing of attribute values but also are independently associated with choice. The pattern is simple—people choose the gamble they look at more often, independently of the actual numbers they see—and this pattern is simpler than predicted by decision field theory, decision by sampling, and the parallel constraint satisfaction model. © 2015 The Authors. Journal of Behavioral Decision Making published by John Wiley & Sons Ltd.

19.
Primates, including humans, communicate using facial expressions, vocalizations, and often a combination of the two modalities. For humans, such bimodal integration is best exemplified by speech-reading: humans readily use facial cues to enhance speech comprehension, particularly in noisy environments. Studies of the eye movement patterns of human speech-readers have revealed, unexpectedly, that they predominantly fixate on the eye region of the face as opposed to the mouth. Here, we tested the evolutionary basis for such a behavioral strategy by examining the eye movements of rhesus monkey observers as they viewed vocalizing conspecifics. Under a variety of listening conditions, we found that rhesus monkeys predominantly focused on the eye region rather than the mouth, and that fixations on the mouth were tightly correlated with the onset of mouth movements. These eye movement patterns of rhesus monkeys are strikingly similar to those reported for humans observing the visual components of speech. The data therefore suggest that the sensorimotor strategies underlying bimodal speech perception may have a homologous counterpart in a closely related primate ancestor.

20.
In this paper we briefly describe preliminary data from two experiments that we have carried out to investigate the relationship between visual encoding and memory for objects and their locations within scenes. In these experiments, we recorded participants' eye movements as they viewed a photograph of a cubicle with 12 objects positioned pseudo-randomly on a desk and shelves. After viewing the photograph, participants were taken to the actual cubicle, where they undertook two memory tests. Participants were asked to identify the 12 target objects (from the photograph) presented amongst 12 distractors. They were then required to place each of the objects in the location that it occupied in the photograph. These tests assessed participants' memory for the identity of the objects and their locations. In Experiment 1, we assessed the influence of the encoding period and the test delay on object identity and location memory. In Experiment 2, we manipulated scanning behaviour during encoding by "boxing" some of the objects in the photograph. We showed that using boxes to change eye movement behaviour during encoding directly affected the nature of memory for the scene. The results of these studies indicate a fundamental relationship between visual encoding and memory for objects and their locations. We explain our findings in terms of the Visual Memory Model (Hollingworth & Henderson, 2002).
