Similar Literature
20 similar records found (search time: 15 ms)
1.
In three experiments, we used eyetracking to investigate the time course of biases in looking behaviour during visual decision making. Our study replicated and extended prior research by Shimojo, Simion, Shimojo, and Scheier (2003), and Simion and Shimojo (2006). Three groups of participants performed forced-choice decisions in a two-alternative free-viewing condition (Experiment 1a), a two-alternative gaze-contingent window condition (Experiment 1b), and an eight-alternative free-viewing condition (Experiment 1c). Participants viewed photographic art images and were instructed to select the one that they preferred (preference task), or the one that they judged to be photographed most recently (recency task). Across experiments and tasks, we demonstrated a robust bias towards the chosen item in gaze duration, gaze frequency, or both. The present gaze bias effect was less task specific than those reported previously. Importantly, in the eight-alternative condition we demonstrated a very early gaze bias effect, which rules out a postdecision response-related explanation.

2.
In the present study we considered the two factors that have been advocated for playing a role in emotional attention: perception of gaze direction and facial expression of emotions. Participants performed an oculomotor task in which they had to make a saccade towards one of the two lateral targets, depending on the colour of the fixation dot which appeared at the centre of the computer screen. At different time intervals (stimulus onset asynchronies, SOAs: 50, 100, 150 ms) following the onset of the dot, a picture of a human face (gazing either to the right or to the left) was presented at the centre of the screen. The gaze direction of the face could be congruent or incongruent with respect to the location of the target, and the expression could be neutral or angry. In Experiment 1 the facial expressions were presented randomly in a single block, whereas in Experiment 2 they were shown in separate blocks. Latencies for correct saccades and percentage of errors (saccade direction errors) were considered in the analyses. Results showed that incongruent trials produced a significantly higher percentage of saccade direction errors than congruent trials, thus confirming that gaze direction, even when task-irrelevant, interferes with the accuracy of the observer’s oculomotor behaviour. The angry expression was found to hold attention for a longer time than the neutral one, producing delayed saccade latencies. This was particularly evident at 100 ms SOA and for incongruent trials. Emotional faces may then exert a modulatory effect on overt attention mechanisms.

3.
Filik, R. (2008). Cognition, 106(2), 1038–1046.
Readers typically experience processing difficulty when they encounter a word that is anomalous within the local context, such as 'The mouse picked up the dynamite...'. The research reported here demonstrates that by placing a sentence in a fictional scenario that is already well known to the reader (e.g., a Tom and Jerry cartoon, as a context for the example sentence above), the difficulty usually associated with these pragmatic anomalies can be immediately eliminated, as reflected in participants' eye movement behaviour. This finding suggests that readers can rapidly integrate information from their common ground, specifically, their cultural knowledge, whilst interpreting incoming text, and provides further evidence that incoming words are immediately integrated within the global discourse.

4.
Perceived gaze in faces is an important social cue that influences spatial orienting of attention. In three experiments, we examined whether the social relevance of gaze direction modulated spatial interference in response selection, using three different stimuli: faces, isolated eyes, and symbolic eyes (Experiments 1, 2, and 3, respectively). Each experiment employed a variant of the spatial Stroop paradigm in which face location and gaze direction were put into conflict. Results showed a reverse congruency effect between face location to the right or left of fixation and gaze direction only for stimuli with a social meaning to participants (Experiments 1 and 2). The opposite was observed for the nonsocial stimuli used in Experiment 3. Results are explained as facilitation in response to eye contact.

5.
When making a decision, people spend longer looking at the option they ultimately choose compared to other options—termed the gaze bias effect—even during their first encounter with the options (Glaholt & Reingold, 2009a, 2009b; Schotter, Berry, McKenzie, & Rayner, 2010). Schotter et al. (2010) suggested that this is because people selectively encode decision-relevant information about the options, online during the first encounter with them. To extend their findings and test this claim, we recorded subjects' eye movements as they made judgements about pairs of images (i.e., which one was taken more recently or which one was taken longer ago). We manipulated whether both images were presented in the same colour content (e.g., both in colour or both in black-and-white) or whether they differed in colour content, and the extent to which colour content was a reliable cue to the relative recentness of the images. We found that the magnitude of the gaze bias effect decreased when the colour content cue was not reliable during the first encounter with the images, but found no modulation of the gaze bias effect during the remaining time in the trial. These data suggest people do selectively encode decision-relevant information online.

6.
ABSTRACT

Can eye movements tell us whether people will remember a scene? In order to investigate the link between eye movements and memory encoding and retrieval, we asked participants to study photographs of real-world scenes while their eye movements were being tracked. We found eye gaze patterns during study to be predictive of subsequent memory for scenes. Moreover, gaze patterns during study were more similar to gaze patterns during test for remembered than for forgotten scenes. Thus, eye movements are indeed indicative of scene memory. In an explicit test for context effects of eye movements on memory, we found recognition rate to be unaffected by the disruption of spatial and/or temporal context of repeated eye movements. Therefore, we conclude that eye movements cue memory by selecting and accessing the most relevant scene content, regardless of its spatial location within the scene or the order in which it was selected.

7.
The present study examines the extent to which attentional biases in contamination fear commonly observed in obsessive-compulsive disorder (OCD) are specific to disgust or fear cues, as well as the components of attention involved. Eye tracking was used to provide greater sensitivity and specificity than afforded by traditional reaction time measures of attention. Participants high (HCF; n = 23) and low (LCF; n = 25) in contamination fear were presented with disgusted, fearful, or happy faces paired with neutral faces for 3 s trials. Evidence of both vigilance and maintenance-based biases for threat was found. The high group oriented attention to fearful faces but not disgusted faces compared to the low group. However, the high group maintained attention on both disgusted and fearful expressions compared to the low group, a pattern consistent across the 3 s trials. The implications of these findings for conceptualizing emotional factors that moderate attentional biases in contamination-based OCD are discussed.

8.
Analyses carried out on a large corpus of eye movement data were used to comment on four contentious theoretical issues. The results provide no evidence that word frequency and word predictability have early interactive effects on inspection time. Contrary to some earlier studies, in these data there is little evidence that properties of a prior word generally spill over and influence current processing. In contrast, there is evidence that both the frequency and the predictability of a word in parafoveal vision influence foveal processing. In the case of predictability, the direction of the effect suggests that more predictable parafoveal words produce longer foveal fixations. Finally, there is evidence that information about word class modulates processing over a span greater than a single word. The results support the notion of distributed parallel processing.

9.
Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.

10.
The time-course of representing others’ perspectives remains unresolved across the currently available models of ToM processing. We report two visual-world studies investigating how knowledge about a character’s basic preferences (e.g. Tom’s favourite colour is pink) and higher-order desires (his wish to keep this preference secret) compete to influence online expectations about subsequent behaviour. Participants’ eye movements around a visual scene were tracked while they listened to auditory narratives. While clear differences in anticipatory visual biases emerged between conditions in Experiment 1, post-hoc analyses testing the strength of the relevant biases suggested a discrepancy in the time-course of predicting appropriate referents within the different contexts. Specifically, predictions to the target emerged very early when there was no conflict between the character’s basic preferences and higher-order desires, but appeared to be relatively delayed when comprehenders were provided with conflicting information about that character’s desire to keep a secret. However, a second experiment demonstrated that this apparent ‘cognitive cost’ in inferring behaviour based on higher-order desires was in fact driven by low-level features between the context sentence and visual scene. Taken together, these results suggest that healthy adults are able to make complex higher-order ToM inferences without the need to call on costly cognitive processes. Results are discussed relative to previous accounts of ToM and language processing.

11.
A comprehensive model of gaze control must account for a number of empirical observations at both the behavioural and neurophysiological levels. The computational model presented in this article can simulate the coordinated movements of the eye, head, and body required to perform horizontal gaze shifts. In doing so it reproduces the predictable relationships between the movements performed by these different degrees of freedom (DOFs) in the primate. The model also accounts for the saccadic undershoot that accompanies large gaze shifts in the biological visual system. It can also account for our perception of a stable external world despite frequent gaze shifts and the ability to perform accurate memory-guided and double-step saccades. The proposed model also simulates peri-saccadic compression: the mis-localization of a briefly presented visual stimulus towards the location that is the target for a saccade. At the neurophysiological level, the proposed model is consistent with the existence of cortical neurons tuned to the retinal, head-centred, body-centred, and world-centred locations of visual stimuli and cortical neurons that have gain-modulated responses to visual stimuli. Finally, the model also successfully accounts for peri-saccadic receptive field (RF) remapping which results in reduced responses to stimuli in the current RF location and an increased sensitivity to stimuli appearing at the location that will be occupied by the RF after the saccade. The proposed model thus offers a unified explanation for this seemingly diverse range of phenomena. Furthermore, as the proposed model is an implementation of the predictive coding theory, it offers a single computational explanation for these phenomena and relates gaze shifts to a wider framework for understanding cortical function.

12.
People look longer at things that they choose than at things they do not choose. How much of this tendency—the gaze bias effect—is due to a liking effect compared to the information encoding aspect of the decision-making process? Do these processes compete under certain conditions? We monitored eye movements during a visual decision-making task with four decision prompts: like, dislike, older, and newer. The gaze bias effect was present during the first dwell in all conditions except the dislike condition, when the preference to look at the liked item and the goal to identify the disliked item compete. Colour content (whether a photograph was colour or black-and-white), not decision type, influenced the gaze bias effect in the older/newer decisions because colour is a relevant feature for such decisions. These interactions appear early in the eye movement record, indicating that gaze bias is influenced during information encoding.
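The first-dwell gaze-bias measure described above can be sketched computationally. The data layout below (per-trial lists of dwells, each with an item id, duration, and chosen flag) is a hypothetical illustration of the idea, not the authors' actual analysis pipeline:

```python
# Minimal sketch of a first-dwell gaze-bias measure (hypothetical data layout).
# Each trial is a list of dwells in viewing order: (item_id, dwell_ms, was_chosen).

def first_dwell_bias(trials):
    """Mean first-dwell duration (ms) on chosen vs. non-chosen items."""
    chosen, other = [], []
    for trial in trials:
        seen = set()
        for item_id, dwell_ms, was_chosen in trial:
            if item_id in seen:          # only the first dwell on each item counts
                continue
            seen.add(item_id)
            (chosen if was_chosen else other).append(dwell_ms)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(chosen), mean(other)

trials = [
    [("A", 420, True), ("B", 300, False), ("A", 380, True)],  # re-dwell on A ignored
    [("C", 510, True), ("D", 290, False)],
]
chosen_ms, other_ms = first_dwell_bias(trials)
print(chosen_ms - other_ms)  # → 170.0 (positive = gaze bias toward the chosen item)
```

With real eye-tracking output, each dwell would first be derived by grouping consecutive fixations that land on the same item's region of interest.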

13.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

14.
Purpose: We examined links between the kinematics of an opponent’s actions and the visual search behaviors of badminton players responding to those actions.
Method: A kinematic analysis of international standard badminton players (n = 4) was undertaken as they completed a range of serves. Video of these players serving was used to create a life-size temporal occlusion test to measure anticipation responses. Expert (n = 8) and novice (n = 8) badminton players anticipated serve location while wearing an eye movement registration system.
Results: During the execution phase of the opponent’s movement, the kinematic analysis showed between-shot differences in distance traveled and peak acceleration at the shoulder, elbow, wrist and racket. Experts were more accurate at responding to the serves compared to novice players. Expert players fixated on the kinematic locations that were most discriminating between serve types more frequently and for a longer duration compared to novice players. Moreover, players were generally more accurate at responding to serves when they fixated vision upon the discriminating arm and racket kinematics.
Conclusions: Findings extend previous literature by providing empirical evidence that expert athletes’ visual search behaviors and anticipatory responses are inextricably linked to the opponent action being observed.

15.
Adults use gaze and voice signals as cues to the mental and emotional states of others. We examined the influence of voice cues on children’s judgments of gaze. In Experiment 1, 6-year-olds, 8-year-olds, and adults viewed photographs of faces fixating the center of the camera lens and a series of positions to the left and right and judged whether gaze was direct or averted. On each trial, participants heard the participant-directed voice cue (e.g., “I see you”), an object-directed voice cue (e.g., “I see that”), or no voice. In 6-year-olds, the range of directions of gaze leading to the perception of eye contact (the cone of gaze) was narrower for trials with object-directed voice cues than for trials with participant-directed voice cues or no voice. This effect was absent in 8-year-olds and adults, both of whom had a narrower cone of gaze than 6-year-olds. In Experiment 2, we investigated whether voice cues would influence adults’ judgments of gaze when the task was made more difficult by limiting the duration of exposure to the face. Adults’ cone of gaze was wider than in Experiment 1, and the effect of voice cues was similar to that observed in 6-year-olds in Experiment 1. Together, the results indicate that object-directed voice cues can decrease the width of the cone of gaze, allowing more adult-like judgments of gaze in young children, and that voice cues may be especially effective when the cone of gaze is wider because of immaturity (Experiment 1) or limited exposure (Experiment 2).

16.
Young infants produce a variety of spontaneous arm and leg movements in the first few months of life. Coordination of leg joints has been extensively investigated, whereas arm joint coordination has mainly been investigated in the sitting position in the context of early reaching and grasping. The current study investigated arm and leg joint coordination of movements produced in the supine position in 10 full-term infants aged 6, 12 and 18 weeks. Longitudinal comparisons within limbs (intralimb) as well as between limbs (interlimb, ipsilateral and contralateral) were made, as well as an exploration of differences in the development for boys and girls. The relationship between the joint angles was examined by measuring pair-wise cross-correlation functions for the angular displacement curves of the leg (hip, knee and ankle) and arm (shoulder, elbow and wrist) joints of both the right and left side. Both the arms and legs were found to follow a similar pattern of intralimb coordination, although the leg joints were more tightly coupled than the arm joints, particularly the proximal with the middle joint. In support of earlier findings, differences in the development of the right and left side were identified. In addition, gender differences in joint coordination were found for both intralimb and interlimb coordination. This contrasts with the view that gender differences in motor development may be primarily a result of environmental influences.
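The pair-wise cross-correlation measure described above can be illustrated with a short sketch: correlate two joint-angle displacement curves at a range of lags and report the peak correlation and the lag at which it occurs. The sine waves below are synthetic stand-ins for real hip/knee angle recordings, and the function and variable names are ours, not the study's:

```python
import numpy as np

def peak_cross_correlation(a, b, max_lag):
    """Return (best_r, best_lag); positive lag means curve b trails curve a."""
    best_r, best_lag = -2.0, 0
    for lag in range(-max_lag, max_lag + 1):
        # Align a[i] with b[i + lag] and correlate the overlapping samples.
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:len(b) + lag]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag

t = np.linspace(0, 2 * np.pi, 200)
hip = np.sin(t)            # hypothetical hip-angle displacement curve
knee = np.sin(t - 0.3)     # knee trails the hip by ~0.3 rad (~9-10 samples)
r, lag = peak_cross_correlation(hip, knee, max_lag=20)
print(round(r, 2), lag)
```

A tight coupling between two joints, as reported for the leg, would show up here as a peak correlation close to 1 at a small, consistent lag.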

17.
In this study, we examine the convergent validity of a measure of maternal looming derived using a motion capture system, and the temporal coordination between maternal loom and infant gaze using an event-based bootstrapping procedure. The sample comprised 26 mothers diagnosed with postpartum depression, 43 nondepressed mothers, and their 4-month-old infants. Mother-infant interactions were recorded during a standard face-to-face setting using video cameras and a motion capture system. First, results showed that maternal looming was correlated with a globally coded measure of maternal overriding. Maternal overriding is an intrusive behavior occurring when the mother re-directs the infant’s attention to parent-led activities. Thus, this result confirms that maternal looming can be considered a spatial intrusion in early interactions. Second, results showed that compared to nondepressed dyads, depressed dyads were more likely to coordinate maternal loom and infant gaze in a Loom-in-Gaze pattern. We discuss the use of automated measurement for analyzing mother-infant interactions, and how the Loom-in-Gaze pattern can be interpreted as a disturbance in infant self-regulation.

18.
Reading fluency is often indexed by performance on rapid automatized naming (RAN) tasks, which are known to reflect speed of access to lexical codes. We used eye tracking to investigate visual influences on naming fluency. Specifically, we examined how visual crowding affects fluency in a RAN-letters task on an item-by-item basis, by systematically manipulating the interletter spacing of items, such that upcoming letters in the array were viewed in the fovea, parafovea, or periphery relative to a given fixated letter. All lexical information was kept constant. Nondyslexic readers’ gaze durations were longer in foveal than in parafoveal and peripheral trials, indicating that visual crowding slows processing even for fluent readers. Dyslexics’ gaze durations were longer in foveal and parafoveal trials than in peripheral trials. Our results suggest that for dyslexic readers, influences of crowding on naming speed extend to a broader visual span (to parafoveal vision) than that for nondyslexic readers, but do not extend as far as peripheral vision. The findings extend previous research by elucidating the different visual spans within which crowding operates for dyslexic and nondyslexic readers in an online fluency task.

19.
Okamoto-Barth, S., & Kawai, N. (2006). Cognition, 101(3), B42–B50.
The present study investigated how anticipation of a target's appearance affects human attention to gaze cues provided by a schematic face. Subjects in a 'catch' group received a high number of 'catch' trials, in which no target stimulus appeared. Subjects in the control group did not receive any catch trials. As in previous studies, both groups showed a facilitation effect to the cued location during shorter stimulus onset asynchrony (SOA). In both groups, an analysis of eye movements confirmed that subjects' eyes remained on the fixation point, ruling out the possibility that the facilitation effect was due to shifting eye movements (saccades) as opposed to a shift in covert attention. But while the control group's response time (RT) decreased as SOA increased, the catch group's RT had a U-shaped pattern and the facilitation effect to the cued location was reversed at the longest SOA (1005 ms). These results suggest that subjects in the catch group disengaged their attention during long SOAs because they expected the trial to be a catch trial. This disengagement of attention during long SOAs results in a delay before attention can be re-focused to the previous location regardless of cue validity, a phenomenon resembling IOR (inhibition of return). Unlike conventional IOR, we suggest that this "IOR"-like phenomenon caused by an unpredictive central gaze cue is likely to be mediated by an endogenous mechanism.

20.
It has been found that Western observers cannot inhibit their gaze to the eye region when observing face stimuli, even when told to avoid doing so, because of the importance of the eye region. However, studies indicate that the nose region is more important for face processing among Eastern observers. We used the “don’t look” paradigm with Eastern observers, in which participants were told to avoid fixating on a specific region (eyes, nose, or mouth). The results extend previous findings, as both the eye and nose regions attracted their gaze. Interestingly, the fixation behaviors differed for the eyes and nose over time: reflexive saccades to the eyes were observed alongside persistent fixation on the nose. The nose region may attract gaze more strongly than previously thought.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · 京ICP备09084417号