Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

2.
The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No evidence of a facilitation effect was observed in the predictive condition compared to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to categorize was different from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that although the present material required enhanced processing of the scene for learning to emerge, such implicit semantic learning can nevertheless take place when the category is task irrelevant.

3.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

4.
Previewing scenes briefly makes finding target objects more efficient when viewing is through a gaze-contingent window (windowed viewing). In contrast, showing a preview of a randomly arranged search display does not benefit search efficiency when viewing during search is of the full display. Here, we tested whether a scene preview is beneficial when the scene is fully visible during search. Scene previews, when presented, were 250 ms in duration. During search, the scene was either fully visible or windowed. A preview always provided an advantage, in terms of decreasing the time to initially fixate and respond to targets and in terms of the total number of fixations. In windowed visibility, a preview reduced the distance of fixations from the target position until at least the fourth fixation. In full visibility, previewing reduced the distance of the second fixation but not of later fixations. The gist information derived from the initial glimpse of a scene allowed for placement of the first one or two fixations at information-rich locations, but when nonfoveal information was available, subsequent eye movements were only guided by online information.

5.
Many experiments have shown that knowing a target's visual features improves search performance over knowing only the target name. Other experiments have shown that scene context can facilitate object search in natural scenes. In this study, we investigated how scene context and target features affect search performance. We examined two possible sources of information from scene context (the scene's gist and the scene's visual details) and how they potentially interact with target-feature information. Prior to commencing search, participants were shown a scene and a target cue depicting either a picture or the category name (or a no-information control). Using eye movement measures, we investigated how the target features and scene context influenced two components of search: early attentional guidance processes and later verification processes involved in the identification of the target. We found that both scene context and target features improved guidance, but that target features also improved the speed of target recognition. Furthermore, we found that a scene's visual details played an important role in improving guidance, much more so than did the scene's gist alone.

6.
Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.

7.
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.

8.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at either the same orientation as at learning or a different one, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group were unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and they have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

9.
Eye movements and picture processing during recognition
Eye movements were monitored during a recognition memory test of previously studied pictures of full-color scenes. The test scenes were identical to the originals, had an object deleted from them, or included a new object substituted for an original object. In contrast to a prior report (Parker, 1978), we found no evidence that object deletions or substitutions could be recognized on the basis of information acquired from the visual periphery. Deletions were difficult to recognize peripherally, and the eyes were not attracted to them. Overall, the amplitude of the average saccade to the critical object during the memory test was less than 4.5 degrees of visual angle in all conditions and averaged 4.1 degrees across conditions. We conclude that information about object presence and identity in a scene is limited to a relatively small region around the current fixation point.

10.
The current study investigated from how large a region around their current point of gaze viewers can take in information when searching for objects in real-world scenes. Visual span size was estimated using the gaze-contingent moving window paradigm. Experiment 1 featured window radii measuring 1, 3, 4, 4.7, 5.4, and 6.1°. Experiment 2 featured six window radii measuring between 5 and 10°. Each scene occupied a 24.8 × 18.6° field of view. Inside the moving window, the scene was presented in high resolution. Outside the window, the scene image was low-pass filtered to impede the parsing of the scene into constituent objects. Visual span was defined as the window size at which object search times became indistinguishable from search times in the no-window control condition; this occurred with windows measuring 8° and larger. Notably, as long as central vision was fully available (window radii ≥ 5°), the distance traversed by the eyes through the scene to the search target was comparable to baseline performance. However, to move their eyes to the target, viewers made shorter saccades, requiring more fixations to cover the same image space, and thus more time. Moreover, a gaze-data based decomposition of search time revealed disruptions in specific subprocesses of search. In addition, nonlinear mixed models analyses demonstrated reliable individual differences in visual span size and parameters of the search time function.

11.
How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or absent. Visual search efficiency does not change after hundreds of trials through an unchanging scene (Experiment 1). Memory search, in contrast, begins inefficiently but becomes efficient with practice. Given a choice between vision and memory, observers choose vision (Experiments 2 and 3). However, if forced to use their memory on some trials, they learn to use memory on all trials, even when reliable visual information remains available (Experiment 4). The results suggest that observers make a pragmatic choice between vision and memory, with a strong bias toward visual search even for memorized stimuli.

12.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

13.
孙琪, 任衍具. 《心理科学》 [Journal of Psychological Science], 2014, 37(2): 265-271
Using object search in images of real-world scenes as the task, we manipulated scene context and target template and used eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, in order to examine how scene context and target template influence the visual search process. The results showed that scene context and target template operated in different ways and at different time points: the two factors interactively affected search accuracy and response time; only scene context affected the duration of the initiation phase, after which the two interactively affected the durations of the scanning and verification phases as well as the main eye movement measures. On this basis, the authors propose an interactive model of scene context and target template in visual search.

14.
15.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

16.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

17.
Eye movements were monitored while participants performed a change detection task with images of natural scenes. An initial and a modified scene image were displayed in alternation, separated by a blank interval (flicker paradigm). In the modified image, a single target object was changed either by deleting that object from the scene or by rotating that object 90 degrees in depth. In Experiment 1, fixation position at detection was more likely to be in the target object region than in any other region of the scene. In Experiment 2, participants detected scene changes more accurately, with fewer false alarms, and more quickly when allowed to move their eyes in the scene than when required to maintain central fixation. These data suggest a major role for fixation position in the detection of changes to natural scenes across discrete views.

18.
In contextual cueing, the position of a search target is learned over repeated exposures to a visual display. The strength of this effect varies across stimulus types. For example, real-world scene contexts give rise to larger search benefits than contexts composed of letters or shapes. We investigated whether such differences in learning can be at least partially explained by the degree of semantic meaning associated with a context independently of the nature of the visual information available (which also varies across stimulus types). Chess boards served as the learning context as their meaningfulness depends on the observer's knowledge of the game. In Experiment 1, boards depicted actual game play, and search benefits for repeated boards were 4 times greater for experts than for novices. In Experiment 2, search benefits among experts were halved when less meaningful randomly generated boards were used. Thus, stimulus meaningfulness independently contributes to learning context-target associations.

19.

20.
What controls how long the eyes remain fixated during scene perception? We investigated whether fixation durations are under the immediate control of the quality of the current scene image. Subjects freely viewed photographs of scenes in preparation for a later memory test while their eye movements were recorded. Using the saccade-contingent display change method, scenes were degraded (Experiment 1) or enhanced (Experiment 2) via blurring (low-pass filtering) during predefined saccades. Results showed that fixation durations immediately after a display change were influenced by the degree of blur, with a monotonic relationship between degree of blur and fixation duration. The results also demonstrated that fixation durations can be both increased and decreased by changes in the degree of blur. The results suggest that fixation durations in scene viewing are influenced by the ease of processing of the image currently in view. The results are consistent with models of saccade generation in scenes in which moment-to-moment difficulty in visual and cognitive processing modulates fixation durations.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号