Similar Documents
20 similar documents found (search time: 15 ms)
1.
Previous research shows that directed actions can unconsciously influence higher-order cognitive processing, helping learners to retain knowledge and guiding problem solvers to useful insights (e.g. Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2008). Gesturing makes learning last. Cognition, 106, 1047-1058; Thomas, L. E., & Lleras, A. (2007). Moving eyes and moving thought: on the spatial compatibility between eye movements and cognition. Psychonomic Bulletin and Review, 14, 663-668). We examined whether overt physical movement is necessary for these embodied effects on cognition, or whether covert shifts of attention are sufficient to influence cognition. We asked participants to try to solve Duncker’s radiation problem while occasionally directing them, via an unrelated digit-tracking task, to shift their attention (while keeping their eyes fixed) in a pattern related to the problem’s solution, to move their eyes in this pattern, or to keep their eyes and their attention fixed in the center of the display. Although they reported being unaware of any relationship between the digit-tracking task and the radiation problem, participants in both the eye-movement and attention-shift groups were more likely to solve the problem than were participants who maintained fixation. Our results show that by shifting attention in a pattern compatible with a problem’s solution, we can aid participants’ insight even in the absence of overt physical movements.

2.
Although the development of number-line estimation ability is well documented, little is known of the processes underlying successful estimators’ mappings of numerical information onto spatial representations during these tasks. We tracked adults’ eye movements during a number-line estimation task to investigate the processes underlying number-to-space translation, with three main results. First, eye movements were strongly related to the target number’s location, and early processing measures directly predicted later estimation performance. Second, fixations and estimates were influenced by the size of the first number presented, indicating that adults calibrate their estimates online. Third, adults’ number-line estimates demonstrated patterns of error consistent with the predictions of psychophysical models of proportion estimation, and eye movement data predicted the specific error patterns we observed. These results support proportion-based accounts of number-line estimation and suggest that adults’ translation of numerical information into spatial representations is a rapid, online process.
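The abstract above does not name the specific psychophysical model, but a standard candidate in this literature is the power model of proportion judgment. As a minimal sketch (the exponent `beta` and range are illustrative assumptions, not values from the paper), the model predicts the characteristic over- then underestimation pattern on a bounded number line:

```python
def predicted_estimate(x, upper=1000, beta=0.7):
    """One-cycle power model of proportion estimation: the judged
    proportion p = x^beta / (x^beta + (upper - x)^beta), mapped back
    onto the 0..upper number line."""
    p = x**beta / (x**beta + (upper - x)**beta)
    return p * upper

# With beta < 1 the model predicts overestimation below the midpoint,
# underestimation above it, and no error at the midpoint or endpoints.
for target in (100, 250, 500, 750, 900):
    print(target, round(predicted_estimate(target), 1))
```

Fitting `beta` per participant to observed estimates is one way such models are compared against estimation error patterns.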

3.
This paper presents a novel three-dimensional (3-D) eye movement analysis algorithm for binocular eye tracking within virtual reality (VR). The user’s gaze direction, head position, and orientation are tracked in order to allow recording of the user’s fixations within the environment. Although the linear signal analysis approach is itself not new, its application to eye movement analysis in three dimensions advances traditional two-dimensional approaches, since it takes into account the six degrees of freedom of head movements and is resolution independent. Results indicate that the 3-D eye movement analysis algorithm can successfully be used for analysis of visual process measures in VR. Process measures not only can corroborate performance measures, but also can lead to discoveries of the reasons for performance improvements. In particular, analysis of users’ eye movements in VR can potentially lead to further insights into the underlying cognitive processes of VR subjects.
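The abstract does not give the algorithm itself, but the core geometric step it implies (combining tracked head pose with an eye-local gaze direction to locate a fixation in the environment) can be sketched. This is a simplified illustration, not the paper's method: it uses yaw-only head rotation and a single wall plane, whereas the full algorithm handles all six degrees of freedom. All function names are hypothetical.

```python
import math

def rotate_y(v, yaw):
    """Rotate a 3-vector about the vertical (y) axis by `yaw` radians."""
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

def gaze_point_on_wall(head_pos, head_yaw, gaze_dir_local, wall_z):
    """Cast the eye-local gaze direction into world coordinates using
    the tracked head pose, then intersect the ray with the plane
    z = wall_z to recover the 3-D point of regard."""
    d = rotate_y(gaze_dir_local, head_yaw)  # world-space gaze direction
    if d[2] == 0:
        return None  # ray runs parallel to the wall
    t = (wall_z - head_pos[2]) / d[2]
    if t < 0:
        return None  # wall is behind the viewer
    return tuple(p + t * di for p, di in zip(head_pos, d))

# A viewer at eye height 1.6 m, looking straight ahead (+z forward),
# fixates the wall two metres in front at the same height.
print(gaze_point_on_wall((0.0, 1.6, 0.0), 0.0, (0.0, 0.0, 1.0), 2.0))
```

Binocular tracking additionally allows depth from vergence; here depth comes solely from the ray-plane intersection.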

4.
The change blindness phenomenon suggests that visual representations retained across saccades are very limited. In this paper we sought to specify the kind of information that is in fact retained. We investigated targeting performance for saccadic eye movements, since one need for visual representations across eye and body positions may be to guide coordinated movements. We examined saccades in the context of an ongoing sensory motor task in order to make stronger generalizations about natural visual functioning and deployment of attention. Human subjects copied random patterns of coloured blocks on a computer display. Their eye movement pattern was consistent from block to block, including a precise saccade to a previously-placed, neighbouring block during each additional block placement. This natural, consistent eye movement allowed the previously-placed, neighbouring block to serve as an implicit target without instructions to the subject. On random trials, we removed the target object from the display during a preceding saccade, so that observers were required to make the targeting saccade without a currently visible target. Targeting performance was excellent, and appeared to be influenced by spatial information that was not visible during the preceding fixation. Subjects were generally unaware of the disappearance and reappearance of the target. We conclude that spatial information about visual targets is retained across eye movements and used to guide subsequent movements.

5.
Eye movements in Sally-Anne false-belief tasks appear to reflect the ability to implicitly monitor the mental states of other individuals (theory of mind, or ToM). It has recently been proposed that an early-developing, efficient, and automatically operating ToM system subserves this ability. Surprisingly absent from the literature, however, is an empirical test of the influence of domain-general executive processing resources on this implicit ToM system. In the study reported here, a dual-task method was employed to investigate the impact of executive load on eye movements in an implicit Sally-Anne false-belief task. Under no-load conditions, adult participants displayed eye movement behavior consistent with implicit belief processing, whereas evidence for belief processing was absent for participants under cognitive load. These findings indicate that the cognitive system responsible for implicitly tracking beliefs draws at least minimally on executive processing resources. Thus, even the most low-level processing of beliefs appears to reflect a capacity-limited operation.

6.
The aim of the current study was to investigate subtle characteristics of social perception and interpretation in high-functioning individuals with autism spectrum disorders (ASDs), and to study the relation between watching and interpreting. As a novelty, we used an approach that combined moment-by-moment eye tracking and verbal assessment. Sixteen young adults with ASD and 16 neurotypical control participants watched a video depicting a complex communication situation while their eye movements were tracked. The participants also completed a verbal task with questions related to the pragmatic content of the video. We compared verbal task scores and eye movements between groups, and assessed correlations between task performance and eye movements. Individuals with ASD had more difficulty than the controls in interpreting the video, and during two short moments there were significant group differences in eye movements. Additionally, we found significant correlations between verbal task scores and moment-level eye movement in the ASD group, but not among the controls. We concluded that participants with ASD had slight difficulties in understanding the pragmatic content of the video stimulus and attending to social cues, and that the connection between pragmatic understanding and eye movements was more pronounced for participants with ASD than for neurotypical participants.

7.
Given the prevalence, quality, and low cost of web cameras, along with the remarkable human sensitivity to gaze, we examined the accuracy of eye tracking using only a web camera. Participants were shown web camera recordings of a person’s eyes moving 1°, 2°, or 3° of visual angle in one of eight radial directions (north, northeast, east, southeast, etc.), or no eye movement occurred at all. Observers judged whether an eye movement was made and, if so, its direction. Our findings demonstrate that for all saccades of any size or direction, observers can detect and discriminate eye movements significantly better than chance. Critically, the larger the saccade, the better the judgments, so that for eye movements of 3°, people can tell whether an eye movement occurred, and where it was going, at about 90% or better. This simple methodology of using a web camera and looking for eye movements offers researchers a simple, reliable, and cost-effective research tool that can be applied effectively both in studies where it is important that participants maintain central fixation (e.g., covert attention investigations) and in those where they are free or required to move their eyes (e.g., visual search).

8.
When trying to remember verbal information from memory, people look at spatial locations that have been associated with visual stimuli during encoding, even when the visual stimuli are no longer present. It has been shown that such “eye movements to nothing” can influence retrieval performance for verbal information, but the mechanism underlying this functional relationship is unclear. More precisely, covert in comparison to overt shifts of attention could be sufficient to elicit the observed differences in retrieval performance. To test if covert shifts of attention explain the functional role of the looking-at-nothing phenomenon, we asked participants to remember verbal information that had been associated with a spatial location during an encoding phase. Additionally, during the retrieval phase, all participants solved an unrelated visual tracking task that appeared in either an associated (congruent) or an incongruent spatial location. Half the participants were instructed to look at the tracking task, half to shift their attention covertly (while keeping the eyes fixed). In two experiments, we found that memory retrieval depended on the location to which participants shifted their attention covertly. Thus, covert shifts of attention seem to be sufficient to cause differences in retrieval performance. The results extend the literature on the relationship between visuospatial attention, eye movements, and verbal memory retrieval and provide deep insights into the nature of the looking-at-nothing phenomenon.

9.
The authors investigated the relation between hand kinematics and eye movements in 2 variants of a rhythmical Fitts's task in which eye movements were necessary or not necessary. P. M. Fitts's (1954) law held in both conditions with similar slope and marginal differences in hand-kinematic patterns and movement continuity. Movement continuity and eye-hand synchronization were more directly related to movement time than to task index of difficulty. When movement time was decreased to fewer than 350 ms, eye-hand synchronization switched from continuous monitoring to intermittent control. The 1:1 frequency ratio with stable π/6 relative phase changed to 1:3 and 1:5 frequency ratios with less stable phase relations. The authors conclude that eye and hand movements in a rhythmical Fitts's task are dynamically synchronized to produce the best behavioral performance.

10.
This experiment investigated whether there are age differences in implicit learning of non-spatially arranged sequential patterns. We tested 12 young and 12 old participants for five sessions each in a non-spatial alternating serial reaction time (ASRT) task, in which predictable pattern events alternated with random, unpredictable ones. People of both ages were able to learn the sequence, but older people showed less pattern sensitivity than younger ones. Neither group was able to exhibit declarative knowledge of the pattern or to discriminate between pattern and random sequences on a recognition test, suggesting that the learning was indeed implicit. These findings indicate that the age deficits previously observed in the learning of spatial sequences are not due solely to age-related deficits in visuo-spatial attention or control of eye movements, but rather reflect a more general deficit in the ability to learn subtle sequential regularities.

11.
Participants’ eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent’s concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent’s name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.

12.
The ability to attribute mental states to others is crucial for social competency. To assess mentalizing abilities, in false-belief tasks participants attempt to identify an actor's belief about an object's location as opposed to the object's actual location. Passing this test on explicit measures is typically achieved by 4 years of age, but recent eye movement studies reveal registration of others' beliefs by 7 to 15 months. Consequently, a 2-path mentalizing system has been proposed, consisting of a late developing, cognitively demanding component and an early developing, implicit/automatic component. To date, investigations on the implicit system have been based on single-trial experiments only or have not examined how it operates across time. In addition, no study has examined the extent to which participants are conscious of the belief states of others during these tasks. Thus, the existence of a distinct implicit mentalizing system is yet to be demonstrated definitively. Here we show that adults engaged in a primary unrelated task display eye movement patterns consistent with mental state attributions across a sustained temporal period. Debriefing supported the hypothesis that this mentalizing was implicit. It appears there indeed exists a distinct implicit mental state attribution system.

13.
Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation.

14.
Can eye movements tell us whether people will remember a scene? In order to investigate the link between eye movements and memory encoding and retrieval, we asked participants to study photographs of real-world scenes while their eye movements were being tracked. We found eye gaze patterns during study to be predictive of subsequent memory for scenes. Moreover, gaze patterns during study were more similar to gaze patterns during test for remembered than for forgotten scenes. Thus, eye movements are indeed indicative of scene memory. In an explicit test for context effects of eye movements on memory, we found recognition rate to be unaffected by the disruption of spatial and/or temporal context of repeated eye movements. Therefore, we conclude that eye movements cue memory by selecting and accessing the most relevant scene content, regardless of its spatial location within the scene or the order in which it was selected.

15.
In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target, for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants’ response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants’ visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.

16.
Research has shown that implicitly guiding attention via visual cues or unrelated tasks can increase the likelihood of solving insight problems. We examined whether following another person making specific skin-crossing saccades could induce similar attentional shifts and increase solution rates for Duncker's (1945) radiation problem. We presented 150 participants with one of three 30-s eye movement patterns from another problem solver: (a) focusing solely on the central tumour; (b) naturally making skin-crossing saccades between the outside area and the tumour from multiple angles; or (c) making deliberate skin-crossing saccades between the outside area and the tumour from multiple angles. Following another person making skin-crossing saccades increased the likelihood of solving the radiation problem. Our results demonstrate that another person's eye movements can promote attentional shifts that trigger insight problem solving.

17.
In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a re-calibration procedure when necessary. This article also demonstrates how the disparity will vary across screen regions and participants and how each participant’s unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant.
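The disparity-monitoring idea described above can be sketched in a few lines. This is only an illustration of the logic, not the article's implementation: the window size, threshold, and class interface are assumptions, and a real system would also weight disparities by screen region, as the abstract notes accuracy varies across regions.

```python
from collections import deque
from statistics import mean

class CalibrationMonitor:
    """Track disparities between recorded fixations and implicit
    required fixation locations (RFLs).  Flag the tracker for
    recalibration when the recent average disparity drifts past a
    threshold, and accumulate a per-participant mean offset (an
    'error signature') that can be subtracted from later samples
    as a systematic-error correction."""

    def __init__(self, window=10, threshold=1.5):
        self.recent = deque(maxlen=window)   # recent disparity magnitudes
        self.threshold = threshold           # e.g. degrees of visual angle
        self.offsets = []                    # (dx, dy) fixation-minus-RFL

    def record(self, fixation, rfl):
        dx, dy = fixation[0] - rfl[0], fixation[1] - rfl[1]
        self.offsets.append((dx, dy))
        self.recent.append((dx * dx + dy * dy) ** 0.5)

    def needs_recalibration(self):
        """True once a full window of disparities averages above threshold."""
        return (len(self.recent) == self.recent.maxlen
                and mean(self.recent) > self.threshold)

    def error_signature(self):
        """Mean systematic offset, to subtract from raw gaze samples."""
        xs, ys = zip(*self.offsets)
        return mean(xs), mean(ys)
```

Each time the task design guarantees the participant fixated an implicit RFL, `record()` is called with the tracker's fixation and the RFL's coordinates; the experiment software can then poll `needs_recalibration()` between trials.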

18.
This study examined influences of social context on movement parameters in a pick-and-place task. Participants' motion trajectories were recorded while they performed sequences of natural movements either working side-by-side with a partner or alone. It was expected that movement parameters would be specifically adapted to the joint condition to overcome the difficulties arising from the requirement to coordinate with another person. To disentangle effects based on participants' effort to coordinate their movements from effects merely due to the other's presence, a condition was included where only one person performed the task while being observed by the partner. Results indicate that participants adapted their movements temporally and spatially to the joint action situation: Overall movement duration was shorter, and mean and maximum velocity was higher when actually working together than when working alone. Pick-to-place trajectories were also shifted away from the partner in spatial coordinates. The partner's presence as such did not have an impact on movement parameters. These findings are interpreted as evidence for the use of implicit strategies to facilitate movement coordination in joint action tasks.

19.
Acta Psychologica, 2013, 142(3), 394-401
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised since hand movements were closely linked to information from ball trajectory. Results are interpreted relative to the two-visual system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from kinematics of the throwing action and from ball flight trajectory.

20.
A new eye-movement-contingent probe task is presented in which readers’ eye movements are monitored as they read sentences and respond to a probe word; the timing of the display of the probe word is dependent on fixation of a target word. The present study examined semantic priming effects. The target word was either related (doctor) or unrelated (lawyer) to the probe word (nurse), and the probe appeared 120, 250, 500, or 750 msec after the reader first fixated on the target word. When the probe word appeared (in the location of the target word), the rest of the sentence disappeared until the participant named the probe word. Then the sentence reappeared, and the participant continued reading the sentence. Naming times to the probe word were recorded, as was sentence reading time and the eye movement behavior relative to the onset of the probe word. Priming effects were observed, since probe reaction time to related probes was faster than that to unrelated probes. Ways in which this paradigm can be used to study various issues in language processing are discussed.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号