Similar Articles
20 similar articles were retrieved (search time: 46 ms)
1.
The role of attention in memory for objects in natural scenes was investigated using a visual memory task. In Experiment 1, participants were asked to memorize six cued (to be attended) objects in a natural scene and were subsequently tested on one of the cued objects. Four types of test scene images were created by jumbling different sections of the scene's background: attended sections changed, unattended sections changed, both sections changed, and both unchanged. In Experiment 2, the procedure was the same as that of the jumble condition, except that the scenes to be memorized were also jumbled. Results showed that jumbling attended sections reduced memory performance, whereas jumbling unattended sections did not, irrespective of the regularity of the scene to be memorized. This finding suggests that attention plays an important role in the mental construction of a natural scene representation and leads to enhanced visual memory.

2.
Boundary extension (BE) is a memory error in which observers remember more of a scene than they actually viewed. This error reflects one's prediction that a scene naturally continues and is driven by scene schema and contextual knowledge. In two separate experiments we investigated the necessity of context and scene schema in BE. In Experiment 1, observers viewed scenes that contained either semantically consistent or inconsistent objects, as well as objects on white backgrounds. In both types of scenes and in the no-background condition there was a BE effect; critically, semantic inconsistency in scenes reduced the magnitude of BE. In Experiment 2, when we used abstract shapes instead of meaningful objects, there was no BE effect. We suggest that although scene schema is necessary to elicit BE, contextual consistency is not required.

3.
Participants' eye movements were monitored in two scene viewing experiments that manipulated the task-relevance of scene stimuli and their availability for extrafoveal processing. In both experiments, participants viewed arrays containing eight scenes drawn from two categories. The arrays of scenes were either viewed freely (Free Viewing) or in a gaze-contingent viewing mode in which extrafoveal preview of the scenes was restricted (No Preview). In Experiment 1a, participants memorized the scenes from one category that was designated as relevant, and in Experiment 1b, participants chose their preferred scene from within the relevant category. We examined first fixations on scenes from the relevant category compared to the irrelevant category (Experiments 1a and 1b), and those on the chosen scene compared to other scenes not chosen within the relevant category (Experiment 1b). A survival analysis was used to estimate the first discernible influence of task relevance on the distribution of first-fixation durations. In the Free Viewing condition in Experiment 1a, the influence of task relevance occurred as early as 81 ms from the start of fixation. In contrast, the corresponding value in the No Preview condition was 254 ms, demonstrating the crucial role of extrafoveal processing in enabling direct control of fixation durations in scene viewing. First-fixation durations were also influenced by whether or not the scene was eventually chosen (Experiment 1b), but this effect occurred later and affected fewer fixations than the effect of scene category, indicating that the time course of scene processing is an important variable mediating direct control of fixation durations.
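The survival analysis mentioned above is essentially a divergence-point estimate over distributions of first-fixation durations. The sketch below is a minimal illustration of that idea rather than the authors' actual procedure: it bootstraps the difference between two empirical survival curves and reports the earliest 1-ms bin whose confidence interval excludes zero. The grid range, bootstrap settings, and simulated durations are all assumptions; published divergence-point procedures typically add further constraints (e.g., runs of consecutive significant bins).

```python
import numpy as np

def survival_curve(durations_ms, grid):
    """Proportion of first fixations still ongoing at each time point in grid."""
    d = np.asarray(durations_ms, dtype=float)
    return (d[None, :] > grid[:, None]).mean(axis=1)

def divergence_point(relevant, irrelevant, n_boot=1000, alpha=0.05, seed=0):
    """Earliest time (ms) at which the two survival curves reliably differ.

    Bootstraps the curve difference and returns the first 1-ms bin whose
    confidence interval excludes zero (a simplified stand-in for published
    divergence-point procedures).
    """
    rng = np.random.default_rng(seed)
    grid = np.arange(0.0, 600.0, 1.0)
    diffs = np.empty((n_boot, grid.size))
    for b in range(n_boot):
        r = rng.choice(relevant, size=len(relevant), replace=True)
        i = rng.choice(irrelevant, size=len(irrelevant), replace=True)
        diffs[b] = survival_curve(r, grid) - survival_curve(i, grid)
    lo = np.percentile(diffs, 100 * alpha / 2, axis=0)
    hi = np.percentile(diffs, 100 * (1 - alpha / 2), axis=0)
    significant = (lo > 0) | (hi < 0)
    return int(grid[np.argmax(significant)]) if significant.any() else None

# Example with simulated first-fixation durations (hypothetical data, in ms)
rng = np.random.default_rng(1)
relevant = rng.gamma(shape=6.0, scale=40.0, size=500)
irrelevant = rng.gamma(shape=6.0, scale=48.0, size=500)
print(divergence_point(relevant, irrelevant))
```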

4.
Theories of object recognition, scene perception, and neural representation of scenes imply that jumbling a coherent scene should reduce change detection. However, evidence from the change detection literature questions whether jumbling affects change detection. The experiments reported here demonstrate that jumbling does, in fact, reduce change detection. In Experiments 1 and 2, change detection was better for normal scenes than for jumbled scenes. In Experiment 3, inversion failed to interfere with change detection, demonstrating that the disruption of surface and object continuity inherent to jumbling is responsible for reduced change detection. These findings provide a crucial commonality between change detection research and theories of scene perception and neural representation. We also discuss why previous research may have failed to find effects of jumbling.
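For concreteness, "jumbling" a scene is commonly implemented by cutting the photograph into a grid of tiles and shuffling them, which disrupts surface and object continuity while preserving local image content. The sketch below shows one such manipulation; the 3 x 3 grid, the Pillow-based implementation, and the file names are illustrative assumptions, not the procedure used in these experiments.

```python
import random
from PIL import Image

def jumble_scene(path, rows=3, cols=3, seed=0):
    """Cut a scene photograph into a rows x cols grid and shuffle the tiles.

    A generic illustration of 'jumbling'; the published studies may have used
    different grid sizes or constrained which sections were rearranged.
    """
    img = Image.open(path)
    w, h = img.size
    tile_w, tile_h = w // cols, h // rows
    boxes = [(c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
             for r in range(rows) for c in range(cols)]
    tiles = [img.crop(b) for b in boxes]
    random.Random(seed).shuffle(tiles)
    out = Image.new(img.mode, (tile_w * cols, tile_h * rows))
    for box, tile in zip(boxes, tiles):
        out.paste(tile, box[:2])
    return out

# jumble_scene("kitchen.jpg").save("kitchen_jumbled.jpg")  # hypothetical file names
```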

5.
Recent evidence suggests that spatial frequency (SF) processing of simple and complex visual patterns is flexible. The use of spatial scale in scene perception seems to be influenced by people's expectations. However, as yet there is no direct evidence for top-down attentional effects on flexible scale use in scene perception. In two experiments we provide such evidence. We presented participants with low- and high-pass SF filtered scenes and cued their attention to the relevant scale. In Experiment 1, we subsequently presented them with hybrid scenes (both low- and high-pass scenes present). We observed that participants reported detecting the cued component of hybrids. To explore whether this might be due to decision biases, in Experiment 2 we replaced hybrids with images containing meaningful scenes at the uncued SFs and noise at the cued SFs (invalid cueing). We found that participants performed poorly on invalid cueing trials. These findings are consistent with top-down attentional modulation of early spatial frequency processing in scene perception.
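The low-pass, high-pass, and hybrid scenes described here can be approximated with standard image filtering. The sketch below assumes grayscale images and an illustrative Gaussian cutoff (rather than the cycles-per-image filters typically reported) and shows how a hybrid combines the low SFs of one scene with the high SFs of another; all parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sf_versions(gray_scene, sigma=8.0):
    """Return low-pass and high-pass versions of a grayscale scene.

    sigma (in pixels) stands in for the spatial-frequency cutoffs used in the
    literature; the exact filter parameters here are illustrative only.
    """
    img = np.asarray(gray_scene, dtype=float)
    low = gaussian_filter(img, sigma)   # low spatial frequencies
    high = img - low                    # residual high spatial frequencies
    return low, high

def hybrid(scene_a, scene_b, sigma=8.0):
    """Combine the low SFs of one scene with the high SFs of another."""
    low_a, _ = sf_versions(scene_a, sigma)
    _, high_b = sf_versions(scene_b, sigma)
    return low_a + high_b

# Example with random arrays standing in for real scene photographs
rng = np.random.default_rng(0)
a, b = rng.random((256, 256)), rng.random((256, 256))
hybrid_scene = hybrid(a, b)
```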

6.
Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target), partial scenes, and two control primes were used. Partial scenes excluded the target objects' locations, but these areas could be predicted. Full and partial scenes produced equal performance facilitation. In Experiment 2, task-irrelevant partial scene primes were also tested. These primes did not facilitate performance (i.e., simple scene previews did not help). Experiment 3 showed that a partial prime's utility depended on the area of the scene that would be tested; the task-irrelevant primes used in Experiment 2 were useful for other distance judgments. Experiment 4 showed that partial scene facilitation is not limited to the area immediately surrounding the prime. The study demonstrated that perceived and mentally extrapolated layouts are equally effective.

7.
What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.

8.
Using eye-movement recording and partial-scene recognition, this study examined how two conditions in symmetric scenes of virtual buildings, uniform object orientation and salient object orientation, affect the establishment of an intrinsic reference frame. The results showed that (1) when all buildings in the scene had orientations and those orientations were uniform, participants were no more likely to establish the intrinsic reference frame on the objects' orientation than on the axis of symmetry; (2) when only one building in the scene had an orientation and the remaining objects had none (the orientation-salient condition), participants tended to establish the intrinsic reference frame on the axis of symmetry. The influence of object orientation on the establishment of an intrinsic reference frame is therefore limited and unstable.

9.
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0°–360° in 36° increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.

10.
In two experiments, participants were trained to recognize a playground scene from four vantage points and were subsequently asked to recognize the playground from a novel perspective between the four learned viewing perspectives, as well as from the trained perspectives. In both experiments, people recognized the novel view more efficiently than the views they had recently used to learn the scene. Additionally, in Experiment 2, participants who viewed a novel stimulus on their very first test trial correctly recognized it more quickly (and also tended to recognize it more accurately) than did participants whose first test trial was a familiar view of the scene. These findings call into question the idea that scenes are recognized by comparing them with single previous experiences, and they support a growing body of literature on the existence of psychological mechanisms that combine spatial information from multiple views of a scene.

11.
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.

12.
Observers have difficulty detecting visual changes. However, they are unaware of this inability, suggesting that people do not have an accurate understanding of visual processes. We explored whether this error is related to participants' beliefs about the roles of intention and scene complexity in detecting changes. In Experiment 1, participants had a higher failure rate for detecting changes in an incidental change detection task than in an intentional change detection task. This effect of intention was greatest for complex scenes. However, participants predicted equal levels of change detection for both types of changes across scene complexity. In Experiment 2, emphasizing the differences between intentional and incidental tasks allowed participants to make predictions that were less inaccurate. In Experiment 3, using more sensitive measures and accounting for individual differences did not further improve predictions. These findings suggest that adults do not fully understand the roles of intention and scene complexity in change detection.

13.
Research with brief presentations of scenes has indicated that scene context facilitates object identification. In the present experiments we used a paradigm in which an object in a scene is "wiggled" (drawing both attention and an eye fixation to itself) and then named. Thus the effect of scene context on object identification can be examined in a situation in which the target object is fixated and hence is fully visible. Experiment 1 indicated that a scene background that was episodically consistent with a target object facilitated the speed of naming. In Experiments 2 and 3, we investigated the time course of scene background information acquisition using display changes contingent on eye movements to the target object. The results from Experiment 2 were inconclusive; however, Experiment 3 demonstrated that scene background information present only on either the first or second fixation on a scene significantly affected naming time. Thus background information appears to be both extracted and able to affect object identification continuously during scene viewing.

14.
Inverting scenes interferes with visual perception and memory on many tasks. Might scene inversion eliminate boundary extension (BE) for briefly presented photographs? In Experiment 1, an upright or inverted photograph (133, 258, or 383 ms) was followed by a 258-ms masked interval and a test photograph showing the identical view. Test photographs were rated as "same", "closer", or "farther away" (5-point scale). BE was just as great for inverted as for upright views at the 133 and 383 ms durations, but surprisingly was greater for inverted views at the 258 ms duration. In Experiment 2, 258-ms views yielded greater BE when the study photographs were always tested in the opposite orientation, indicating that the difference in BE was related to encoding. Results suggest that scene construction beyond the view boundaries occurs rapidly and is not impeded by scene inversion, but that changes in the relative quality of visual details available for upright and inverted views may sometimes yield increased BE for inverted scenes.
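As a worked example of how such ratings are commonly scored (an assumption about the coding convention, not a detail taken from this abstract), the boundary scale can be coded from -2 (much closer) to +2 (much farther away). Because each test view is identical to the studied view, a mean rating reliably below zero, identical views judged as "closer", is the signature of boundary extension.

```python
import numpy as np

# Assumed coding of the 5-point boundary judgement scale:
#   -2 = much closer, -1 = slightly closer, 0 = same,
#   +1 = slightly farther away, +2 = much farther away

def be_score(ratings):
    """Mean signed boundary rating; negative values indicate boundary extension."""
    return float(np.mean(ratings))

upright_133ms = [-1, 0, -1, -2, 0, -1]   # hypothetical ratings for one condition
print(be_score(upright_133ms))            # -0.833..., i.e., boundary extension
```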

15.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although enhanced processing of the scene was required with the present material, such implicit semantic learning can nevertheless take place when the category is task-irrelevant.

16.
Objects are rarely viewed in isolation, and so how they are perceived is influenced by the context in which they are viewed and their interaction with other objects (e.g., whether objects are colocated for action). We investigated the combined effects of action relations and scene context on an object decision task. Experiment 1 investigated whether the benefit of positioning objects so that they interact is enhanced when objects are viewed within contextually congruent scenes. The results indicated that scene context influenced perception of nonaction-related objects (e.g., monitor and keyboard), but had no effect on responses to action-related objects (e.g., bottle and glass), which were processed more rapidly. In Experiment 2, we reduced the saliency of the object stimuli and found that, under these circumstances, scene context influenced responses to action-related objects. We discuss the data in terms of relatively late effects of scene processing on object perception.

17.
Most conceptions of episodic memory hold that reinstatement of encoding operations is essential for retrieval success, but the specific mechanisms of retrieval reinstatement are not well understood. In three experiments, we used saccadic eye movements as a window for examining reinstatement in scene recognition. In Experiment 1, participants viewed complex scenes while the number of study fixations was controlled using a gaze-contingent paradigm. In Experiment 2, effects of stimulus saliency were minimized by directing participants' eye movements during study. At test, participants made remember/know judgments for each recognized stimulus scene. Both experiments showed that remember responses were associated with more consistent study-test fixations than false rejections (Experiments 1 and 2) and know responses (Experiment 2). In Experiment 3, we examined the causal role of gaze consistency in retrieval by manipulating participants' expectations during recognition. After studying name and scene pairs, each test scene was preceded by either the same name as during study or a different one. Participants made more consistent eye movements following a matching, rather than mismatching, scene name. Taken together, these findings suggest that explicit recollection is a function of perceptual reconstruction and that event memory influences gaze control in this active reconstruction process.
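Study-test fixation consistency of the kind reported here can be quantified in several ways; the sketch below uses a simple grid-based overlap (Jaccard) measure, an illustrative stand-in rather than the authors' measure, with the scene size, grid resolution, and fixation coordinates chosen arbitrarily.

```python
def fixated_cells(fixations_xy, scene_size=(800, 600), grid=(8, 6)):
    """Map fixation coordinates onto a coarse grid and return the set of
    visited cells. Scene size and grid resolution are illustrative."""
    w, h = scene_size
    cols, rows = grid
    cells = set()
    for x, y in fixations_xy:
        cells.add((min(int(x / w * cols), cols - 1),
                   min(int(y / h * rows), rows - 1)))
    return cells

def fixation_overlap(study_fix, test_fix, **kwargs):
    """Jaccard overlap between the regions fixated at study and at test."""
    a, b = fixated_cells(study_fix, **kwargs), fixated_cells(test_fix, **kwargs)
    return len(a & b) / len(a | b) if (a | b) else 0.0

study = [(120, 80), (400, 300), (650, 500)]   # hypothetical fixation coordinates
test = [(130, 90), (390, 310), (200, 450)]
print(fixation_overlap(study, test))
```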

18.
Participants viewed slides depicting ordinary routines (e.g., going grocery shopping) and later received a recognition test. In Experiment 1, recognition confidence was higher for high-schema-relevant than for low-schema-relevant items. In Experiment 2, participants viewed slide sequences that sometimes contained a cause scene (e.g., a woman taking an orange from the bottom of a pile) but not an effect scene (oranges on the floor), or an effect scene but not a cause scene. Participants mistook new cause scenes for old when they had viewed the effect; false alarms to cause scenes and high-schema-relevant items increased with retention interval. Experiment 3 showed that the backward inference effect was accompanied by false explicit recollection, whereas false alarms to high-schema-relevant foils were based on familiarity. This suggests that the two types of inferential errors are produced by different underlying mechanisms.

19.
20.
The reported experiments aimed to investigate whether a person and his or her gaze direction, presented in the context of a naturalistic scene, cause perception, memory, and attention to be biased in typically developing adolescents and high-functioning adolescents with autism spectrum disorder (ASD). A novel computerized image manipulation program presented a series of photographic scenes, each containing a person. The program enabled participants to laterally maneuver the scenes behind a static window, the borders of which partially occluded the scenes. The gaze direction of the person in the scenes spontaneously cued the attention of both groups in the direction of gaze, affecting judgments of preference (Experiment 1a) and causing memory biases (Experiment 1b). Experiment 2 showed that the gaze direction of a person cues visual search accurately to the exact location of gaze in both groups. These findings suggest that biases in preference, memory, and attention are caused by another person's gaze direction when viewed in a complex scene in adolescents with and without ASD.
