Similar Literature
1.
Most conceptions of episodic memory hold that reinstatement of encoding operations is essential for retrieval success, but the specific mechanisms of retrieval reinstatement are not well understood. In three experiments, we used saccadic eye movements as a window for examining reinstatement in scene recognition. In Experiment 1, participants viewed complex scenes, while the number of study fixations was controlled by using a gaze-contingent paradigm. In Experiment 2, effects of stimulus saliency were minimized by directing participants' eye movements during study. At test, participants made remember/know judgments for each recognized stimulus scene. Both experiments showed that remember responses were associated with more consistent study-test fixations than false rejections (Experiments 1 and 2) and know responses (Experiment 2). In Experiment 3, we examined the causal role of gaze consistency in retrieval by manipulating participants' expectations during recognition. After studying name and scene pairs, each test scene was preceded by the same or a different name as during study. Participants made more consistent eye movements following a matching, rather than mismatching, scene name. Taken together, these findings suggest that explicit recollection is a function of perceptual reconstruction and that event memory influences gaze control in this active reconstruction process.
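To make the notion of study-test fixation consistency concrete, the sketch below computes the proportion of test fixations that land within a fixed radius of any fixation made on the same scene at study. The radius, the coordinates, and the function name are illustrative assumptions; this is a minimal stand-in, not the scoring procedure used in these experiments.

```python
# A minimal sketch (not the authors' exact measure) of one way to quantify
# study-test fixation consistency: the proportion of test fixations that land
# within a given radius of any fixation made on the same scene at study.
# Fixations are assumed to be lists of (x, y) pixel coordinates.
import math

def fixation_overlap(study_fix, test_fix, radius=50.0):
    """Proportion of test fixations within `radius` pixels of a study fixation."""
    if not test_fix:
        return 0.0
    hits = 0
    for tx, ty in test_fix:
        if any(math.hypot(tx - sx, ty - sy) <= radius for sx, sy in study_fix):
            hits += 1
    return hits / len(test_fix)

# Hypothetical example: higher overlap would accompany "remember" responses.
study = [(120, 340), (400, 220), (610, 480)]
test_remember = [(130, 335), (395, 230), (600, 470)]
test_know = [(50, 60), (700, 100), (300, 500)]
print(fixation_overlap(study, test_remember))  # close to 1.0
print(fixation_overlap(study, test_know))      # lower
```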

2.
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.

3.
A gaze-contingent short-term memory paradigm was used to obtain forgetting functions for realistic objects in scenes. Experiment 1 had observers freely view nine-item scenes. After observers' gaze left a predetermined target, they could fixate 1–7 intervening nontargets before the scene was replaced by a spatial probe at the target location. The task was then to select the target from four alternatives. A steep recency benefit was found over the 1–2 intervening object range that declined into an above-chance prerecency asymptote over the remainder of the forgetting function. In Experiment 2, we used sequential presentation and variable delays to explore the contributions of decay and extrafoveal processes to these behaviors. We conclude that memory for objects in scenes, when serialized by fixation sequence, shows recency and prerecency effects that are similar to those for isolated objects presented sequentially over time. We discuss these patterns in the context of the serial order memory literature and object file theory.

4.
Past research has shown that change detection performance is often more efficient for target objects that are semantically incongruent with a surrounding scene context than for target objects that are semantically congruent with the scene context. One account of these findings is that attention is attracted to objects whose identity conflicts with the meaning of the scene, perhaps as a violation of expectancies created by earlier recruitment of scene gist information. An alternative account of the performance benefit for incongruent objects is that attention is more apt to linger on incongruent objects, perhaps because identifying these objects is more difficult due to conflicting information from the scene context. In the current experiment, we present natural scenes in a change detection task while monitoring eye movements. We find that eye gaze is attracted to incongruent objects relatively early during scene processing.

5.
6.
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.

7.
In this study, we investigated the immediate and persisting effects of object location changes on gaze control during scene viewing. Participants repeatedly inspected a randomized set of naturalistic scenes for later questioning. On the seventh presentation, an object was shown at a new location, whereas the change was reversed for all subsequent presentations of the scene. We tested whether deviations from stored scene representations would modify eye movements to the changed regions and whether these effects would persist. We found that changed objects were looked at longer and more often, regardless of whether the change could be reported. These effects were most pronounced immediately after the change occurred and quickly leveled off once a scene remained unchanged. However, participants continued to perform short validation checks on changed scene regions, which implies a persistent modulation of eye movement control beyond the occurrence of object location changes.

8.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes that contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group throughout the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects were of high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, whereas toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.

9.
The reported experiments aimed to investigate whether a person and his or her gaze direction, presented in the context of a naturalistic scene, cause perception, memory, and attention to be biased in typically developing adolescents and high-functioning adolescents with autism spectrum disorder (ASD). A novel computerized image manipulation program presented a series of photographic scenes, each containing a person. The program enabled participants to laterally maneuver the scenes behind a static window, the borders of which partially occluded the scenes. The gaze direction of the person in the scenes spontaneously cued attention of both groups in the direction of gaze, affecting judgments of preference (Experiment 1a) and causing memory biases (Experiment 1b). Experiment 2 showed that the gaze direction of a person cues visual search accurately to the exact location of gaze in both groups. These findings suggest that biases in preference, memory, and attention are caused by another person's gaze direction when viewed in a complex scene in adolescents with and without ASD.

10.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices.

11.
Previous studies comparing eye movements between humans and their closest relatives, chimpanzees, have revealed similarities and differences between the species in terms of where individuals fixate their gaze during free viewing of a naturalistic scene, including social stimuli (e.g. body and face). However, those results were somewhat confounded by the fact that gaze behavior is influenced by low-level stimulus properties (e.g., color and form) and by high-level processes such as social sensitivity and knowledge about the scene. Given the known perceptual and cognitive similarities between chimpanzees and humans, it is expected that such low-level effects do not play a critical role in explaining the high-level similarities and differences between the species. However, there is no quantitative evidence to support this assumption. To estimate the effect of local stimulus saliency on such eye-movement patterns, this study used a well-established bottom-up saliency model. In addition, to elucidate the cues that the viewers use to guide their gaze, we presented scenes in which we had manipulated various stimulus properties. As expected, the saliency model did not fully predict the fixation patterns actually observed in chimpanzees and humans. In addition, both species used multiple cues to fixate socially significant areas such as the face. There was no evidence suggesting any differences between chimpanzees and humans in their responses to low-level saliency. Therefore, this study found a substantial amount of similarity in the perceptual mechanisms underlying gaze guidance in chimpanzees and humans and thereby offers a foundation for direct comparisons between them.
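For readers unfamiliar with how a bottom-up saliency model is scored against observed fixations, one common metric is normalized scanpath saliency (NSS): the mean z-scored saliency value at fixated locations. The sketch below computes it on a random placeholder map with hypothetical fixation coordinates; it does not reproduce the saliency model or the analysis used in this study.

```python
# A sketch of one common way to score how well a saliency map predicts observed
# fixations: normalized scanpath saliency (NSS), the mean z-scored saliency value
# at fixated pixels. The saliency map below is a random placeholder, not the
# bottom-up model used in the study.
import numpy as np

def nss(saliency_map, fixations):
    """Mean z-scored saliency at fixated (row, col) locations."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for r, c in fixations]))

rng = np.random.default_rng(0)
saliency = rng.random((480, 640))                 # placeholder saliency map
fixations = [(100, 200), (240, 320), (400, 500)]  # hypothetical (row, col) fixations
print(nss(saliency, fixations))  # ~0 for unrelated fixations; >0 if fixations favor salient regions
```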

12.
Eyes move over visual scenes to gather visual information. Studies have found heavy-tailed distributions in measures of eye movements during visual search, which raises questions about whether these distributions are pervasive in eye movements, and whether they arise from intrinsic or extrinsic factors. Three different measures of eye movement trajectories were examined during visual foraging of complex images, and all three were found to exhibit heavy tails: Spatial clustering of eye movements followed a power law distribution, saccade length distributions were lognormally distributed, and the speeds of slow, small amplitude movements occurring during fixations followed a 1/f spectral power law relation. Images were varied to test whether the spatial clustering of visual scene information is responsible for heavy tails in eye movements. Spatial clustering of eye movements and saccade length distributions were found to vary with image type and task demands, but no such effects were found for eye movement speeds during fixations. Results showed that heavy-tailed distributions are general and intrinsic to visual foraging, but some of them become aligned with visual stimuli when required by task demands. The potentially adaptive value of heavy-tailed distributions in visual foraging is discussed.
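The sketch below illustrates, on simulated data, two of the distributional checks described above: a lognormal fit to saccade amplitudes and an estimate of the spectral slope of a within-fixation speed signal, where a slope near -1 on log-log axes corresponds to a 1/f relation. The simulated values and parameter choices are assumptions for illustration, not the authors' analysis pipeline.

```python
# A hedged sketch, on synthetic data, of two of the distributional checks the
# abstract describes: (1) fitting a lognormal to saccade amplitudes and
# (2) estimating the spectral slope of within-fixation eye speeds, where a
# slope near -1 on log-log axes corresponds to a 1/f relation. Real analyses
# would use recorded eye-movement data, not simulated values.
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(1)

# (1) Lognormal fit to simulated saccade amplitudes (degrees of visual angle).
saccade_amps = rng.lognormal(mean=1.0, sigma=0.6, size=2000)
shape, loc, scale = stats.lognorm.fit(saccade_amps, floc=0)
print("lognormal sigma ~", round(shape, 2), "median ~", round(scale, 2))

# (2) Spectral slope of a simulated within-fixation speed signal.
speeds = np.cumsum(rng.normal(size=4096))        # placeholder random-walk (1/f^2-like) signal
freqs, power = signal.welch(speeds, fs=1000.0)   # power spectral density
mask = freqs > 0
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), 1)
print("spectral slope ~", round(slope, 2))       # 1/f noise would give a slope near -1
```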

13.
Using eye movement recording and partial scene recognition, this study examined how two conditions in symmetric scenes of virtual buildings, uniform object orientation and salient object orientation, affect the establishment of an intrinsic reference frame. The results showed that: (1) when all buildings in the scene had orientations and those orientations were uniform, participants were equally likely to establish the intrinsic reference frame on the basis of object orientation or of the symmetry axis; (2) when only one building in the scene had an orientation and the remaining objects had none (the orientation-salient condition), participants tended to establish the intrinsic reference frame on the basis of the symmetry axis. The influence of object orientation on the establishment of an intrinsic reference frame is thus limited and unstable.

14.
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual–spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers' eye movements while they remembered a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval should impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy, even on the trials in which no probe occurred. These findings support models of working memory in which visual–spatial selection mechanisms contribute to the maintenance of object representations.

15.
The present study employed a saccade-contingent change paradigm to investigate the effect of spatial frequency filtering on fixation durations during scene viewing. Subjects viewed grayscale scenes while encoding them for a later memory test. During randomly chosen saccades, the scene was replaced with an alternate version that remained throughout the critical fixation that followed. In Experiment 1, during the critical fixation, the scene could be changed to a high-pass or a low-pass spatial frequency filtered version. Under both conditions, fixation durations increased, and the low-pass condition produced a greater effect than the high-pass condition. In subsequent experiments, we manipulated the familiarity of scene information during the critical fixation by flipping the filtered scenes upside down or horizontally. Under these conditions, we observed lengthening of fixation durations but no difference between the high-pass and low-pass conditions, suggesting that the filtering effect is related to the mismatch between information extracted within the critical fixation and the ongoing scene representation in memory. We also conducted control experiments that tested the effect of changes to scene orientation (Experiment 2a) and the addition of color to a grayscale scene (Experiment 2b). Fixation distribution analysis suggested two effects on the distribution of fixation durations: a fast-acting effect that was sensitive to all transsaccadic changes tested and a later effect in the tail of the distribution that was likely tied to the processing of scene information. These findings are discussed in the context of theories of oculomotor control during scene viewing.
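One simple way to separate a fast-acting effect from an effect confined to the tail is to compare quantiles of the fixation-duration distributions across conditions. The sketch below does this on simulated durations; it is an assumed illustration of the general logic, not necessarily the distributional analysis reported in the study.

```python
# A minimal sketch (not necessarily the authors' exact analysis) of a quantile
# comparison of fixation durations between a baseline and a change condition:
# a shift already present in the early quantiles is consistent with a fast-acting
# effect, while a difference confined to the late quantiles points to an effect
# in the tail of the distribution. Durations below are simulated, in milliseconds.
import numpy as np

rng = np.random.default_rng(2)
baseline = rng.gamma(shape=4.0, scale=60.0, size=3000)        # ~240 ms fixations
changed = rng.gamma(shape=4.0, scale=60.0, size=3000) + 30.0  # uniform 30 ms slowdown

for q in [0.1, 0.3, 0.5, 0.7, 0.9]:
    b, c = np.quantile(baseline, q), np.quantile(changed, q)
    print(f"q={q:.1f}  baseline={b:6.1f} ms  changed={c:6.1f} ms  diff={c - b:5.1f} ms")
```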

16.
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1–3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

17.
Boundary extension (BE) is a memory error for close-up views of scenes in which participants tend to remember a picture of a scene as depicting a more wide-angle view than what was actually displayed. However, some experiments have yielded data that indicate a normalized memory of the views depicted in a set of scenes, suggesting that memory for the previously studied scenes has become drawn toward the average view in the image set. In previous studies, normalization is only found when the retention interval is very long or when the stimuli no longer appear to represent a spatial expanse. In Experiment 1, we examine whether normalization can influence results for scenes depicting a partial view of space and when the memory test occurs immediately following the study block by manipulating the degree of difference between studied close-up and wide-angle scenes. In Experiment 2, normalization is induced in a set of scenes by creating conditions expected to lead to memory interference, suggesting that this may be the cause of view normalization. Based on the multi-source model of BE, these scenes should be extended during perception (Intraub, H. (2010). Rethinking scene perception: A multisource model. Psychology of Learning and Motivation, 52, 231–265). In Experiment 3, we show that BE is indeed observable if the same scenes are tested differently, supporting the notion that BE is primarily a perceptual phenomenon while normalization is a memory effect.

18.
People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics (let alone their semantic congruity) processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent objects than at congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants remembered few of the incongruent objects, and no more of the congruent ones. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.

19.
Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to background information more than American participants do. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver operating characteristic (ROC) curves showed that both sensitivity and response bias changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the shift in response bias differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both American and Chinese participants use the same strategies in scene perception and memory.
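For reference, sensitivity and response bias in such recognition analyses are commonly summarized as d' and the criterion c, both derived from hit and false-alarm rates. The sketch below computes them from made-up rates; it is a generic signal-detection illustration and does not reproduce the study's ROC analysis.

```python
# A sketch of the standard signal-detection quantities behind such an analysis:
# sensitivity (d') and response bias (criterion c) computed from hit and
# false-alarm rates. The rates below are made-up illustrations, not data from
# the study.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical: same-context test vs. new-context test.
print(d_prime_and_criterion(0.80, 0.20))  # higher sensitivity, neutral bias
print(d_prime_and_criterion(0.70, 0.25))  # lower sensitivity, shifted bias
```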
