Similar Documents
1.
2.
Two experiments examined how the accessibility of visual imagery task information affects eye movements during imagery processing. The results showed that under low-accessibility conditions, the eye movements made during the imagery task replicated those of the perceptual task; as the accessibility of the imagery task information increased, mean fixation duration, mean saccade amplitude, and mean saccade duration changed in a regular manner; and oculomotor control and the level of task-information accessibility influenced imagery eye movements through different mechanisms. These results support the view that eye movements play a functional role in visual imagery.

3.
Can eye movements tell us whether people will remember a scene? In order to investigate the link between eye movements and memory encoding and retrieval, we asked participants to study photographs of real-world scenes while their eye movements were being tracked. We found eye gaze patterns during study to be predictive of subsequent memory for scenes. Moreover, gaze patterns during study were more similar to gaze patterns during test for remembered than for forgotten scenes. Thus, eye movements are indeed indicative of scene memory. In an explicit test for context effects of eye movements on memory, we found recognition rate to be unaffected by the disruption of spatial and/or temporal context of repeated eye movements. Therefore, we conclude that eye movements cue memory by selecting and accessing the most relevant scene content, regardless of its spatial location within the scene or the order in which it was selected.

4.
Most conceptions of episodic memory hold that reinstatement of encoding operations is essential for retrieval success, but the specific mechanisms of retrieval reinstatement are not well understood. In three experiments, we used saccadic eye movements as a window for examining reinstatement in scene recognition. In Experiment 1, participants viewed complex scenes, while number of study fixations was controlled by using a gaze-contingent paradigm. In Experiment 2, effects of stimulus saliency were minimized by directing participants' eye movements during study. At test, participants made remember/know judgments for each recognized stimulus scene. Both experiments showed that remember responses were associated with more consistent study-test fixations than false rejections (Experiments 1 and 2) and know responses (Experiment 2). In Experiment 3, we examined the causal role of gaze consistency on retrieval by manipulating participants' expectations during recognition. After studying name and scene pairs, each test scene was preceded by the same or different name as during study. Participants made more consistent eye movements following a matching, rather than mismatching, scene name. Taken together, these findings suggest that explicit recollection is a function of perceptual reconstruction and that event memory influences gaze control in this active reconstruction process.

5.
Previous research showed that the eyes revisit the location in which the stimulus was encoded when visual or verbal information is retrieved from memory. A recent study showed that this behavior still occurs 1 week after encoding, suggesting that visual, spatial and linguistic information is tightly associated with the oculomotor trace and stored as an integrated memory representation. However, it is yet unclear whether looking behavior simply remains stable between encoding and recall or whether it changes over time in a more fine-tuned manner. Here, we investigate the time course of looking behavior during recall in multiple sessions across 1 week. Participants encoded visual objects presented in one of four locations on the computer screen. In five sessions during the week after encoding, they performed a visual memory recall task. During retrieval, participants looked back to the encoding location, but only in the recall sessions within 1 day of encoding. We discuss different explanations for the temporal dynamics of looking behavior during recall and consider the role of eye movements in memory.

6.
Using the task of imagining an athlete completing a triple jump, two experiments systematically manipulated the information-accessibility level of the imagery task, the locations of eye fixations, and participants' knowledge and skill levels for the triple jump, to examine whether changes in eye movements during visual imagery arise from differences in representations built through knowledge learning or through skill training. Experiment 1 used undergraduates who had no professional triple-jump skills and little knowledge of the sport; the results showed that, when completing imagery tasks with high information accessibility, fixations were shorter, saccade amplitudes were larger, and saccade frequency was lower. Experiment 2 manipulated the knowledge-learning and skill-training representation levels of the imagery task, using participants who had either studied the sport academically or received professional skill training; the results showed that as participants' knowledge acquisition and skill-representation abilities increased, the eye-movement differences between imagery tasks at different information-accessibility levels disappeared. However, knowledge learning and skill training differed in mean saccade duration: skill-trained participants had shorter mean saccade durations than knowledge-learning participants, a difference reaching marginal significance, whereas mean fixation duration and mean saccade amplitude showed no differences.

7.
Time is grounded in various ways, and previous studies point to a “mental time line” with the past associated with the left side and the future with the right. In this study, we investigated whether spontaneous eye movements on a blank screen would follow a mental timeline during encoding, free recall, and recognition of past and future items. In all three stages of processing, gaze position was more rightward during future items compared to past items. Moreover, horizontal gaze position during encoding predicted horizontal gaze position during free recall and recognition. We conclude that the mental time line and the stored gaze position during encoding assist memory retrieval of past versus future items. Our findings highlight the spatial nature of temporal representations.

8.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

9.
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.

11.
Eye movements during mental imagery are not epiphenomenal but assist the process of image generation. Commands to the eyes for each fixation are stored along with the visual representation and are used as a spatial index in a motor‐based coordinate system for the proper arrangement of parts of an image. In two experiments, subjects viewed an irregular checkerboard or color pictures of fish and were subsequently asked to form mental images of these stimuli while keeping their eyes open. During the perceptual phase, one group of subjects was requested to maintain fixation on the screen's center, whereas another group was free to inspect the stimuli. During the imagery phase, all of these subjects were free to move their eyes. A third group of subjects (in Experiment 2) was free to explore the pattern but was requested to maintain central fixation during imagery. For subjects free to explore the pattern, the percentage of time spent fixating a specific location during perception was highly correlated with the time spent on the same (empty) locations during imagery. The order of scanning of these locations during imagery was correlated with the original order during perception. The strength of relatedness of these scanpaths and the vividness of each image predicted performance accuracy. Subjects who fixed their gaze centrally during perception did the same spontaneously during imagery. Subjects free to explore during perception, but maintaining central fixation during imagery, showed a decreased ability to recall the pattern. We conclude that the eye scanpaths during visual imagery reenact those of perception of the same visual scene and that they play a functional role.

12.
We investigated eye movements during preschool children's pictorial recall of seen objects. Thirteen 3‐ to 4‐year‐old children completed a perceptual encoding and a pictorial recall task. First, they were exposed to 16 pictorial objects, which were positioned in one of four distinct areas on the computer screen. Subsequently, they had to recall these pictorial objects from memory in order to respond to specific questions about visual details. We found that children spent more time fixating the areas in which the pictorial objects were previously displayed. We conclude that as early as age 3–4 years old, children show specific eye movements when they recall pictorial contents of previously seen objects.

13.
Recent work has demonstrated that horizontal saccadic eye movements enhance verbal episodic memory retrieval, particularly in strongly right-handed individuals. The present experiments test three primary assumptions derived from this research. First, horizontal eye movements should facilitate episodic memory for both verbal and non-verbal information. Second, the benefits of horizontal eye movements should only be seen when they immediately precede tasks that demand both right- and left-hemisphere processing for successful performance. Third, the benefits of horizontal eye movements should be most pronounced in the strongly right-handed. Two experiments confirmed these hypotheses: horizontal eye movements increased recognition sensitivity and decreased response times during a spatial memory test relative to both vertical eye movements and fixation. These effects were only seen when horizontal eye movements preceded episodic memory retrieval, and not when they preceded encoding (Experiment 1). Further, when eye movements preceded retrieval, they were only beneficial with recognition tests demanding a high degree of both right- and left-hemisphere activity (Experiment 2). In both experiments, the beneficial effects of horizontal eye movements were greatest for strongly right-handed individuals. These results support recent work suggesting that bilateral horizontal eye movements increase interhemispheric brain activity, and they extend this literature to the encoding and retrieval of landmark shape and location information.

14.
Research with brief presentations of scenes has indicated that scene context facilitates object identification. In the present experiments we used a paradigm in which an object in a scene is "wiggled"--drawing both attention and an eye fixation to itself--and then named. Thus the effect of scene context on object identification can be examined in a situation in which the target object is fixated and hence is fully visible. Experiment 1 indicated that a scene background that was episodically consistent with a target object facilitated the speed of naming. In Experiments 2 and 3, we investigated the time course of scene background information acquisition using display changes contingent on eye movements to the target object. The results from Experiment 2 were inconclusive; however, Experiment 3 demonstrated that scene background information present only on either the first or second fixation on a scene significantly affected naming time. Thus background information appears to be both extracted and able to affect object identification continuously during scene viewing.

15.
Previewing scenes briefly makes finding target objects more efficient when viewing is through a gaze-contingent window (windowed viewing). In contrast, showing a preview of a randomly arranged search display does not benefit search efficiency when viewing during search is of the full display. Here, we tested whether a scene preview is beneficial when the scene is fully visible during search. Scene previews, when presented, were 250 ms in duration. During search, the scene was either fully visible or windowed. A preview always provided an advantage, in terms of decreasing the time to initially fixate and respond to targets and in terms of the total number of fixations. In windowed visibility, a preview reduced the distance of fixations from the target position until at least the fourth fixation. In full visibility, previewing reduced the distance of the second fixation but not of later fixations. The gist information derived from the initial glimpse of a scene allowed for placement of the first one or two fixations at information-rich locations, but when nonfoveal information was available, subsequent eye movements were only guided by online information.

16.
A horizontally moving target was followed by rotation of the eyes alone or by a lateral movement of the head. These movements resulted in the retinal displacement of a vertically moving target from its perceived path, the amplitude of which was determined by the phase and amplitude of the object motion and of the eye or head movements. In two experiments, we tested the prediction from our model of spatial motion (Swanston, Wade, & Day, 1987) that perceived distance interacts with compensation for head movements, but not with compensation for eye movements with respect to a stationary head. In both experiments, when the vertically moving target was seen at a distance different from its physical distance, its perceived path was displaced relative to that seen when there was no error in perceived distance, or when it was pursued by eye movements alone. In a third experiment, simultaneous measurements of eye and head position during lateral head movements showed that errors in fixation were not sufficient to require modification of the retinal paths determined by the geometry of the observation conditions in Experiments 1 and 2.

17.
Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

18.
We investigated eye movements during long-term pictorial recall. Participants performed a perceptual encoding task, in which they memorized 16 stimuli that were displayed in different areas on a computer screen. After the encoding phase the participants had to recall and visualize the images and answer specific questions about visual details of the stimuli. One week later the participants repeated the pictorial recall task. Interestingly, not only in the immediate recall task but also 1 week later, participants looked longer at the areas where the stimuli were encoded. The major contribution of this study is to show that memory for pictorial objects, including their spatial location, is stable and robust over time.

20.

While previous research has shown that during mental imagery participants look back to areas visited during encoding, it is unclear what happens when information presented during encoding is incongruent. To investigate this research question, we presented 30 participants with incongruent audio-visual associations (e.g. the image of a car paired with the sound of a cat) and later asked them to create a congruent mental representation based on the auditory cue (e.g. to create a mental representation of a cat while hearing the sound of a cat). The results revealed that participants spent more time in the areas where they previously saw the object and that incongruent audio-visual information during encoding did not appear to interfere with the generation and maintenance of mental images. This finding suggests that eye movements can be flexibly employed during mental imagery depending on the demands of the task.

