Similar Documents
20 similar documents found.
1.
Contemporary research literature indicates that eye movements during the learning and testing phases can predict and affect future recognition processes. Nevertheless, only partial information exists regarding eye movements in the various components of recognition processes: Hits, Correct rejections, Misses and False Alarms (FA). In an attempt to address this issue, participants in this study viewed human faces in a yes/no recognition memory paradigm. They were divided into two groups: one group that carried out the testing phase immediately after the learning phase (n = 30) and another group with a 15-minute delay between phases (n = 28). The results showed that the Immediate group had a lower FA rate than the Delay group, and that no Hit rate differences were observed between the two groups. Eye movements differed between the recognition processes in the learning and the testing phases, and this pattern interacted with the group type. Hence, eye movement measures seem to track memory accuracy during both learning and testing phases, and this pattern also interacts with the length of delay between learning and testing. This pattern of results suggests that eye movements are indicative of present and future recognition processes.
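The four recognition outcomes named above (Hits, Misses, False Alarms, Correct rejections) and a standard sensitivity summary can be computed with a short signal-detection sketch. The function name and data layout below are illustrative, not taken from the study:

```python
from statistics import NormalDist

def recognition_rates(responses):
    """Compute hit rate, false-alarm rate, and d' for a yes/no
    recognition test.
    responses: list of (is_old, said_yes) pairs, where is_old marks
    studied items and said_yes marks "old" responses."""
    hits   = sum(1 for old, yes in responses if old and yes)
    misses = sum(1 for old, yes in responses if old and not yes)
    fas    = sum(1 for old, yes in responses if not old and yes)
    crs    = sum(1 for old, yes in responses if not old and not yes)
    hit_rate = hits / (hits + misses)
    fa_rate = fas / (fas + crs)
    # d' (sensitivity): separation of the old/new evidence
    # distributions, via the inverse normal CDF of each rate
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    return hit_rate, fa_rate, d_prime

# Example: 8 hits, 2 misses, 3 false alarms, 7 correct rejections
responses = ([(True, True)] * 8 + [(True, False)] * 2 +
             [(False, True)] * 3 + [(False, False)] * 7)
hit_rate, fa_rate, d_prime = recognition_rates(responses)
```

On this pattern of results, the Delay group's higher FA rate with an unchanged Hit rate would show up as a lower d' than the Immediate group's.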

2.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with “real-world” tasks and research utilizing the visual-world paradigm are also briefly discussed.

3.
In two experiments, participants solved anagram problems while their eye movements were monitored. Each problem consisted of a circular array of five letters: a scrambled four-letter solution word containing three consonants and one vowel, and an additional randomly-placed distractor consonant. Viewing times on the distractor consonant compared to the solution consonants provided an online measure of knowledge of the solution. Viewing times on the distractor consonant and the solution consonants were indistinguishable early in the trial. In contrast, several seconds prior to the response, viewing times on the distractor consonant decreased in a gradual manner compared to viewing times on the solution consonants. Importantly, this pattern was obtained across both trials in which participants reported the subjective experience of insight and trials in which they did not. These findings are consistent with the availability of partial knowledge of the solution prior to such information being accessible to subjective phenomenal awareness.

4.
The comparison of fractions is a difficult task that can often be facilitated by separately comparing components (numerators and denominators) of the fractions—that is, by applying so-called component-based strategies. The usefulness of such strategies depends on the type of fraction pair to be compared. We investigated the temporal organization and the flexibility of strategy deployment in fraction comparison by evaluating sequences of eye movements in 20 young adults. We found that component-based strategies could account for the response times and the overall number of fixations observed for the different fraction pairs. The analysis of eye movement sequences showed that the initial eye movements in a trial were characterized by stereotypical scanning patterns indicative of an exploratory phase that served to establish the kind of fraction pair presented. Eye movements that followed this phase adapted to the particular type of fraction pair and indicated the deployment of specific comparison strategies. These results demonstrate that participants employ eye movements systematically to support strategy use in fraction comparison. Participants showed a remarkable flexibility to adapt to the most efficient strategy on a trial-by-trial basis. Our results confirm the value of eye movement measurements in the exploration of strategic adaptation in complex tasks.
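A minimal sketch of what component-based strategy selection might look like procedurally; the three rules and their names below are an illustration of the general idea, not the authors' model:

```python
from fractions import Fraction

def compare_by_components(a_num, a_den, b_num, b_den):
    """Decide which of two fractions is larger, preferring cheap
    component-based rules and falling back to a holistic comparison.
    Returns (winner, strategy), winner being 'left' or 'right'."""
    if a_den == b_den:
        # Same denominators: the larger numerator wins
        strategy = "numerator comparison"
        left_larger = a_num > b_num
    elif a_num == b_num:
        # Same numerators: the smaller denominator wins
        strategy = "denominator comparison"
        left_larger = a_den < b_den
    else:
        # No shared component: compare full magnitudes
        strategy = "holistic comparison"
        left_larger = Fraction(a_num, a_den) > Fraction(b_num, b_den)
    return ("left" if left_larger else "right"), strategy
```

For example, 3/7 vs. 5/7 resolves by numerator comparison alone, while 3/4 vs. 5/7 requires the holistic fallback; the exploratory scanning phase described above would correspond to detecting which branch applies.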

5.
Two experiments explored the mapping between language and mental representations of visual scenes. In both experiments, participants viewed, for example, a scene depicting a woman, a wine glass and bottle on the floor, an empty table, and various other objects. In Experiment 1, participants concurrently heard either ‘The woman will put the glass on the table’ or ‘The woman is too lazy to put the glass on the table’. Subsequently, with the scene unchanged, participants heard that the woman ‘will pick up the bottle, and pour the wine carefully into the glass.’ Experiment 2 was identical except that the scene was removed before the onset of the spoken language. In both cases, eye movements after ‘pour’ (anticipating the glass) and at ‘glass’ reflected the language-determined position of the glass, as either on the floor, or moved onto the table, even though the concurrent (Experiment 1) or prior (Experiment 2) scene showed the glass in its unmoved position on the floor. Language-mediated eye movements thus reflect the real-time mapping of language onto dynamically updateable event-based representations of concurrently or previously seen objects (and their locations).

6.
The growing popularity of mobile-phone technology has led to changes in the way people—particularly younger people—communicate. A clear example of this is the advent of Short Message Service (SMS) language, which includes orthographic abbreviations (e.g., omitting vowels, as in wk, week) and phonetic respelling (e.g., using u instead of you). In the present study, we examined the pattern of eye movements during reading of SMS sentences (e.g., my hols wr gr8), relative to normally written sentences, in a sample of skilled “texters”. SMS sentences were created by using (mostly) orthographic or phonological abbreviations. Results showed that there is a reading cost—both at a local level and at a global level—for individuals who are highly expert in SMS language. Furthermore, phonological abbreviations resulted in a greater cost than orthographic abbreviations.

7.
Earlier studies have shown that eye movements during retrieval of disturbing images about past events reduce their vividness and emotionality, which may be due to both tasks competing for working memory resources. This study examined whether eye movements reduce vividness and emotionality of visual distressing images about feared future events: “flashforwards”. A non-clinical sample was asked to select two images of feared future events, which were self-rated for vividness and emotionality. These images were retrieved while making eye movements or without a concurrent secondary task, and then vividness and emotionality were rated again. Relative to the no-dual task condition, eye movements while thinking of future-oriented images resulted in decreased ratings of image vividness and emotional intensity. Apparently, eye movements reduce vividness and emotionality of visual images about past and future feared events. This is in line with a working memory account of the beneficial effects of eye movements, which predicts that any task that taxes working memory during retrieval of disturbing mental images will be beneficial.

8.
Remembering the past and imagining the future both rely on complex mental imagery. We considered the possibility that constructing a future scene might tap a component of mental imagery that is not as critical for remembering past scenes. Whereas visual imagery plays an important role in remembering the past, we predicted that spatial imagery plays a crucial role in imagining the future. For the purpose of teasing apart the different components underpinning scene construction in the two experiences of recalling episodic memories and shaping novel future events, we used a paradigm that might selectively affect one of these components (i.e., the spatial). Participants performed concurrent eye movements while remembering the past and imagining the future. These concurrent eye movements selectively interfere with spatial imagery, while sparing visual imagery. Eye movements prevented participants from imagining complex and detailed future scenes, but had no comparable effect on the recollection of past scenes. Similarities between remembering the past and imagining the future are coupled with some differences. The present findings uncover another fundamental divergence between the two processes.

9.
Observers can visually track multiple objects that move independently even if the scene containing the moving objects is rotated in a smooth way. Abrupt scene rotations make tracking more difficult but not impossible. For nonrotated, stable dynamic displays, the strategy of looking at the targets' centroid has been shown to be of importance for visual tracking. But which factors determine successful visual tracking in a nonstable dynamic display? We report two eye tracking experiments that present evidence for centroid looking. Across abrupt viewpoint changes, gaze on the centroid is more stable than gaze on targets, indicating a process of realigning targets as a group. Further, we show that the relative importance of centroid looking increases with object speed.

10.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour or category, or were unrelated, while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.

11.
The eye movements of Finnish first and second graders were monitored as they read sentences where polysyllabic words were either hyphenated at syllable boundaries, alternatingly coloured (every second syllable black, every second red) or had no explicit syllable boundary cues (e.g., ta-lo vs. talo vs. talo = “house”). The results showed that hyphenation at syllable boundaries slows down reading of first and second graders even though syllabification by hyphens is very common in Finnish reading instruction, as all first-grade textbooks include hyphens at syllable boundaries. When hyphens were positioned within a syllable (t-alo vs. ta-lo), beginning readers were even more disrupted. Alternate colouring did not affect reading speed, no matter whether colours signalled syllable structure or not. The results show that beginning Finnish readers prefer to process polysyllabic words via syllables rather than letter by letter. At the same time they imply that hyphenation encourages sequential syllable processing, which slows down the reading of children who are already capable of parallel syllable processing or of recognising words directly via the whole-word route.

12.
13.
14.
Two eye movement experiments are reported that examine the influence of sentence context on morphological processing. English compound words which vary in beginning lexeme frequency (Experiment 1) and ending lexeme frequency (Experiment 2) were embedded into sentence contexts that were either predictive of the compound word or were neutral with respect to the compound. A predictable sentence context reduced the effect of beginning lexeme frequency on first fixation and single fixation durations. However, sentence context did not modify effects of beginning and ending lexeme frequency in later fixation measures. These results further support the theoretical position that morphology plays a role at multiple levels within readers' mental lexicons. In addition, these results suggest that access to early morpho-orthographic processes can be influenced by sentence context, a finding that suggests an interactive relationship between sentence context and word recognition.

15.
Eye movement desensitization and reprocessing can reduce ratings of the vividness and emotionality of unpleasant memories; hence it is commonly used to treat posttraumatic stress disorder. The present experiments compared three accounts of how eye movements produce these benefits. Participants rated unpleasant autobiographical memories before and after eye movements or an eyes-stationary control condition. In Experiment 1, eye movements produced benefits only when memories were held in mind during the movements, and eye movements increased arousal, contrary to an investigatory-reflex account. In Experiment 2, horizontal and vertical eye movements produced equivalent benefits, contrary to an interhemispheric-communication account. In Experiment 3, two other distractor tasks (auditory shadowing, drawing) produced benefits that were negatively correlated with working-memory capacity. These findings support a working-memory account of the eye movement benefits in which the central executive is taxed when a person performs a distractor task while attempting to hold a memory in mind.

16.
Differences in eye movement patterns are often found when comparing passive viewing paradigms to actively engaging in everyday tasks. Arguably, investigations into visuomotor control should therefore be most useful when conducted in settings that incorporate the intrinsic link between vision and action. We present a study that compares oculomotor behaviour and hazard reaction times across a simulated driving task and a comparable, but passive, video-based hazard perception task. We found that participants scanned the road less during the active driving task and fixated closer to the front of the vehicle. Participants were also slower to detect the hazards in the driving task. Our results suggest that the interactivity of simulated driving places greater demand upon the visual and attentional systems than does simply viewing driving movies. We offer insights into why these differences occur and explore the possible implications of such findings within the wider context of driver training and assessment.

17.
Cognitive-behavioral therapy for anxiety disorders typically involves exposure to the conditioned stimulus (CS). Despite its status as an effective and primary treatment, many patients do not show clinical improvement or relapse. Contemporary learning theory suggests that treatment may be optimized by adding techniques that aim at revaluating the aversive consequence (US) of the feared stimulus. This study tested whether US devaluation via a dual task (imagining the US while making eye movements) decreases conditioned fear. Following fear acquisition, one group recalled the US while making eye movements (EM) and one group merely recalled the US (RO). Next, during a test phase, all participants were re-presented the CSs. Dual tasking, relative to the control condition, decreased memory vividness and emotionality. Moreover, reductions in self-reported fear, US expectancy, and CS unpleasantness were observed only in the dual-task condition; skin conductance responses were unaffected. Findings provide the first evidence that the dual task decreases conditioned fear and suggest it may be a valuable addition to exposure therapy.

18.
Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.

19.
Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified models of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts with lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains (equation solving, reading, and visual search) and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.
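EMMA's central visual-encoding assumption, that encoding time grows with an object's rarity and its eccentricity from the fovea, can be sketched as a one-line function. The functional form follows the published model, but the parameter values below are illustrative defaults, not calibrated fits:

```python
import math

def emma_encoding_time(frequency, eccentricity, K=0.006, k=0.4):
    """Sketch of EMMA's visual encoding time:
        T_enc = K * (-log f) * exp(k * eps)
    frequency: normalized frequency of the object/word, in (0, 1];
    eccentricity: angular distance (degrees of visual angle) from
    the current fixation to the object;
    K, k: free scaling parameters (illustrative values here).
    Rarer objects (lower f) and more eccentric objects take longer
    to encode; a maximally frequent object (f = 1) encodes
    instantly in this idealization."""
    return K * (-math.log(frequency)) * math.exp(k * eccentricity)
```

Because encoding can outlast saccade preparation, a cognitive model using this quantity can decide whether attention shifts alone suffice or an eye movement is also launched, which is how EMMA links attention shifts to overt eye-movement behavior.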

20.
Visual input is frequently disrupted by eye movements, blinks, and occlusion. The visual system must be able to establish correspondence between objects visible before and after a disruption. Current theories hold that correspondence is established solely on the basis of spatiotemporal information, with no contribution from surface features. In five experiments, we tested the relative contributions of spatiotemporal and surface feature information in establishing object correspondence across saccades. Participants generated a saccade to one of two objects, and the objects were shifted during the saccade so that the eyes landed between them, requiring a corrective saccade to fixate the target. To correct gaze to the appropriate object, correspondence must be established between the remembered saccade target and the target visible after the saccade. Target position and surface feature consistency were manipulated. Contrary to existing theories, surface features and spatiotemporal information both contributed to object correspondence, and the relative weighting of the two sources of information was governed by the demands of the task. These data argue against a special role for spatiotemporal information in object correspondence, indicating instead that the visual system can flexibly use multiple sources of relevant information.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号