Similar Articles
20 similar articles found.
1.
Contemporary research literature indicates that eye movements during the learning and testing phases can predict and affect future recognition processes. Nevertheless, only partial information exists regarding eye movements in the various components of recognition processes: Hits, Correct rejections, Misses and False Alarms (FA). In an attempt to address this issue, participants in this study viewed human faces in a yes/no recognition memory paradigm. They were divided into two groups: one group that carried out the testing phase immediately after the learning phase (n = 30) and another group with a 15-minute delay between phases (n = 28). The results showed that the Immediate group had a lower FA rate than the Delay group, and that no Hit rate differences were observed between the two groups. Eye movements differed between the recognition processes in the learning and the testing phases, and this pattern interacted with the group type. Hence, eye movement measures seem to track memory accuracy during both learning and testing phases, and this pattern also interacts with the length of delay between learning and testing. This pattern of results suggests that eye movements are indicative of present and future recognition processes.

2.

Can eye movements tell us whether people will remember a scene? In order to investigate the link between eye movements and memory encoding and retrieval, we asked participants to study photographs of real-world scenes while their eye movements were being tracked. We found eye gaze patterns during study to be predictive of subsequent memory for scenes. Moreover, gaze patterns during study were more similar to gaze patterns during test for remembered than for forgotten scenes. Thus, eye movements are indeed indicative of scene memory. In an explicit test for context effects of eye movements on memory, we found recognition rate to be unaffected by the disruption of spatial and/or temporal context of repeated eye movements. Therefore, we conclude that eye movements cue memory by selecting and accessing the most relevant scene content, regardless of its spatial location within the scene or the order in which it was selected.

3.
Visual information processing is guided by an active mechanism generating saccadic eye movements to salient stimuli. Here we investigate the specific contribution of saccades to memory encoding of verbal and spatial properties in a serial recall task. In the first experiment, participants moved their eyes freely without specific instruction. We demonstrate the existence of qualitative differences in eye-movement strategies during verbal and spatial memory encoding. While verbal memory encoding was characterized by shifting the gaze to the to-be-encoded stimuli, saccadic activity was suppressed during spatial encoding. In the second experiment, participants were required to suppress saccades by fixating centrally during encoding or to make precise saccades onto the memory items. Active suppression of saccades had no effect on memory performance, but tracking the upcoming stimuli decreased memory performance dramatically in both tasks, indicating a resource bottleneck between display-controlled saccadic control and memory encoding. We conclude that optimized encoding strategies for verbal and spatial features underlie memory performance in serial recall, but such strategies operate only at an involuntary level and do not support memory encoding when they are explicitly required by the task.

4.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour or category, or were unrelated, while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.

5.
Search targets are typically remembered much better than other objects even when they are viewed for less time. However, targets have two advantages that other objects in search displays do not have: They are identified categorically before the search, and finding them represents the goal of the search task. The current research investigated the contributions of both of these types of information to the long-term visual memory representations of search targets. Participants completed either a predefined search or a unique-object search in which targets were not defined with specific categorical labels before searching. Subsequent memory results indicated that search target memory was better than distractor memory even following ambiguously defined searches and when the distractors were viewed significantly longer. Superior target memory appears to result from a qualitatively different representation from those of distractor objects, indicating that decision processes influence visual memory.

6.
Two experiments were conducted to investigate how linguistic information influences attention allocation in visual search and memory for words. In Experiment 1, participants searched for the synonym of a cue word among five words. The distractors included one antonym and three unrelated words. In Experiment 2, participants were asked to judge whether the five words presented on the screen comprised a valid sentence. The relationships among words were sentential, semantically related, or unrelated. A memory recognition task followed. Results in both experiments showed that linguistically related words produced better memory performance. We also found significant interactions between linguistic relation conditions and memorization on eye-movement measures, indicating that good memory for words relied on frequent and long fixations during search in the unrelated condition but to a much lesser extent in linguistically related conditions. We conclude that semantic and syntactic associations attenuate the link between overt attention allocation and subsequent memory performance, suggesting that linguistic relatedness can somewhat compensate for a relative lack of attention during word search.

7.
Eye movement desensitization and reprocessing can reduce ratings of the vividness and emotionality of unpleasant memories; hence it is commonly used to treat posttraumatic stress disorder. The present experiments compared three accounts of how eye movements produce these benefits. Participants rated unpleasant autobiographical memories before and after eye movements or an eyes stationary control condition. In Experiment 1, eye movements produced benefits only when memories were held in mind during the movements, and eye movements increased arousal, contrary to an investigatory-reflex account. In Experiment 2, horizontal and vertical eye movements produced equivalent benefits, contrary to an interhemispheric-communication account. In Experiment 3, two other distractor tasks (auditory shadowing, drawing) produced benefits that were negatively correlated with working-memory capacity. These findings support a working-memory account of the eye movement benefits in which the central executive is taxed when a person performs a distractor task while attempting to hold a memory in mind.

8.
Visual input is frequently disrupted by eye movements, blinks, and occlusion. The visual system must be able to establish correspondence between objects visible before and after a disruption. Current theories hold that correspondence is established solely on the basis of spatiotemporal information, with no contribution from surface features. In five experiments, we tested the relative contributions of spatiotemporal and surface feature information in establishing object correspondence across saccades. Participants generated a saccade to one of two objects, and the objects were shifted during the saccade so that the eyes landed between them, requiring a corrective saccade to fixate the target. To correct gaze to the appropriate object, correspondence must be established between the remembered saccade target and the target visible after the saccade. Target position and surface feature consistency were manipulated. Contrary to existing theories, surface features and spatiotemporal information both contributed to object correspondence, and the relative weighting of the two sources of information was governed by the demands of the task. These data argue against a special role for spatiotemporal information in object correspondence, indicating instead that the visual system can flexibly use multiple sources of relevant information.

9.
Posttraumatic stress disorder (PTSD) is effectively treated with eye movement desensitization and reprocessing (EMDR), with patients making eye movements during recall of traumatic memories. Many therapists have replaced eye movements with bilateral beeps, but there are no data on the effects of beeps. Experimental studies suggest that eye movements may be beneficial because they tax working memory, especially the central executive component, but the presence/degree of taxation has not been assessed directly. Using discrimination Reaction Time (RT) tasks, we found that eye movements slow down RTs to auditory cues (experiment I), but binaural beeps do not slow down RTs to visual cues (experiment II). In an arguably more sensitive “Random Interval Repetition” task using tactile stimulation, working memory taxation of beeps and eye movements were directly compared. RTs slowed down during beeps, but the effects were much stronger for eye movements (experiment III). The same pattern was observed in a memory experiment with healthy volunteers (experiment IV): vividness of negative memories was reduced after both beeps and eye movements, but effects were larger for eye movements. Findings support a working memory account of EMDR and suggest that effects of beeps on negative memories are inferior to those of eye movements.

10.
Recent research put forward the hypothesis that eye movements are integrated in memory representations and are reactivated during later recall. However, “looking back to nothing” during recall might be a consequence of spatial memory retrieval. Here, we aimed at distinguishing between the effects of spatial and oculomotor information on perceptual memory. Participants’ task was to judge whether a morph looked more like the first or the second of two previously presented faces. Crucially, faces and morphs were presented in a way that the morph reactivated oculomotor and/or spatial information associated with one of the previously encoded faces. Perceptual face memory was largely influenced by these manipulations. We considered a simple computational model that expresses these biases as a linear combination of recency, saccade, and location, and it provided an excellent fit (4.3% error). Surprisingly, saccades did not play a role. The results suggest that spatial and temporal rather than oculomotor information biases perceptual face memory.

11.

Objective: Because working memory (WM) has a limited capacity, cognitive reactions to persuasive information held in WM might be disturbed by taxing it through other means; in this study, by inducing voluntary eye movements (EMi). This is expected to influence persuasion.

Methods: Participants (N = 127) listened to an auditory persuasive message on fruit and vegetable consumption that was either framed positively or negatively. Half of them were asked to keep following a regularly moving dot on their screen with their eyes. At pretest, cognitive self-affirmation inclination (CSAI) was assessed as an individual difference to test possible moderation effects.

Results: The EMi significantly lowered the quality of the mental images that participants reported to have of the persuasive outcomes. With regard to self-reported fruit and vegetable consumption after two weeks, EMi significantly lowered consumption when CSAI was high but it significantly increased consumption when CSAI was low.

Conclusions: The results confirm our earlier findings that induced EM can influence persuasion. Although it remains unclear whether the effects of EMi were caused by disturbing mental images of persuasive outcomes, by self-regulative reactions to these images, or both, the WM account may provide new theoretical as well as practical angles on persuasion.

12.
Displays of eye movements may convey information about cognitive processes but require interpretation. We investigated whether participants were able to interpret displays of their own or others' eye movements. In Experiments 1 and 2, participants observed an image under three different viewing instructions. Then they were shown static or dynamic gaze displays and had to judge whether the display showed their own or someone else's eye movements, and which instruction it reflected. Participants were capable of recognizing the instruction reflected in their own and someone else's gaze display. Instruction recognition was better for dynamic displays, and only this condition yielded above-chance performance in recognizing the display as one's own or another person's (Experiments 1 and 2). Experiment 3 revealed that order information in the gaze displays facilitated instruction recognition when transitions between fixated regions distinguish one viewing instruction from another. Implications of these findings are discussed.

13.
Viewing position effects are commonly observed in reading, but they have only rarely been investigated in object perception or in the realistic context of a natural scene. In two experiments, we explored where people fixate within photorealistic objects and the effects of this landing position on recognition and subsequent eye movements. The results demonstrate an optimal viewing position: objects are processed more quickly when fixation is in the centre of the object. Viewers also prefer to saccade to the centre of objects within a natural scene, even when making a large saccade. A central landing position is associated with an increased likelihood of making a refixation, a result that differs from previous reports and suggests that multiple fixations within objects, within scenes, occur for a range of reasons. These results suggest that eye movements within scenes are systematic and are made with reference to an early parsing of the scene into constituent objects.

14.
Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.

15.
It is not known why people move their eyes when engaged in non-visual cognition. The current study tested the hypothesis that differences in saccadic eye movement rate (EMR) during non-visual cognitive tasks reflect different requirements for searching long-term memory. Participants performed non-visual tasks requiring relatively low or high long-term memory retrieval while eye movements were recorded. In three experiments, EMR was substantially lower for low-retrieval than for high-retrieval tasks, including in an eyes closed condition in Experiment 3. Neither visual imagery nor between-task difficulty was related to EMR, although there was some evidence for a minor effect of within-task difficulty. Comparison of task-related EMRs to EMR during a no-task waiting period suggests that eye movements may be suppressed or activated depending on task requirements. We discuss a number of possible interpretations of saccadic eye movements during non-visual cognition and propose an evolutionary model that links these eye movements to memory search through an elaboration of circuitry involved in visual perception.

16.
Horizontal eye movement is an essential component of the psychological intervention “eye movement desensitization and reprocessing” (EMDR) used in posttraumatic stress disorder. A hypothesized mechanism of action is an overload of the visuospatial sketchpad and/or the phonological loop of the working memory.

17.
For a long time, researchers have worked to uncover the relationship between human eye movements and mental activity. Numerous studies have shown that Rapid Eye Movements (REM) during sleep, and their cycles, play an important role in the formation and consolidation of memory. In the waking state, eye movements serve to acquire visual information; when a memory task is involved, they constitute the concrete actions of the encoding stage. Eye movements are also involved in the process of problem solving. Overall, the relationship between eye movements and memory is bidirectional, and appropriate eye-movement training may improve memory performance.

18.
When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer for event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading.

19.
Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory (LTM) and working memory (WM) requirements. Experiment 1 yielded significantly higher eye movement rate (EMR) in tasks requiring LTM search than in a WM task requiring maintenance of information. Experiment 2 manipulated accessibility of items in study-test episodic tasks using the levels of processing paradigm. EMR was high in episodic recall irrespective of item accessibility. Experiment 3 examined the functional significance of saccades in LTM tasks. Voluntary saccadic suppression produced no evidence that saccades contribute to task performance. We discuss the apparent epiphenomenal nature of spontaneous saccades from an evolutionary perspective and outline a neuroanatomical model of the link between the saccadic and memory systems.

20.
There is a large body of evidence suggesting that words learnt early in life are recognised and produced faster than words learnt later in life, even when other variables are controlled. This is known as the Age of Acquisition (AoA) effect. However, one aspect of AoA requires closer scrutiny, namely the method of obtaining the AoA measures. In the majority of studies, adult participants were asked to estimate the age at which they learnt a given word. Morrison, Chappell, and Ellis (1997) proposed a new method for obtaining objective-AoA data. They asked children to name objects, and the age at which a given word appeared with 75% or greater frequency was considered the AoA of that word. Although this method is more valid than adult ratings, it has only rarely been used. The main aim of this work is to provide objective-AoA norms in Spanish for a set of 175 object names following the procedures used by Morrison et al. The relationships among objective-AoA, estimated-AoA, and other psycholinguistic variables (name agreement, familiarity, visual complexity, word length, etc.) obtained from a previous study are also analysed. Finally, the similarity of objective- and estimated-AoA measures was examined using data from several languages. A cluster analysis and a multidimensional-scaling analysis revealed that the estimated-AoA measures in a language correlated more with the estimated-AoA measures of the other languages than with the objective measures in the same language. The results suggest that it would be desirable to always use objective-AoA norms because they are less skewed by familiarity.
