Similar Articles
20 similar articles found (search time: 828 ms)
1.
A scene prime can induce a mental representation of layout that is functional in the sense that it facilitates the processing of depth relations in a subsequent same-scene target. Five experiments indicated that the representation can consist of separate and independent functional regions. In each experiment, primes with as many as four unrelated regions facilitated spatial processing within each region. The prime representations were functional despite structural discontinuity at region borders. The results indicate a limitation in the importance of structural constraint in representations of scene layout. However, when structural disruption occurred within regions that were perceived (Experiment 5), spatial processing was slowed. The results suggest that scene representation is more top down than is scene perception; the effects of structural disruption were overcome within representations, but not within perception.

2.
Sanocki and Epstein (1997) provided evidence that an immediate prior experience of a scene, as a prime, can induce representations of its spatial layout, facilitating the subsequent spatial processing of objects in the target scene. In their experiments, observers responded to target scenes by indicating which of two critical objects was closer in the pictorial space. Reaction times to target scenes that were preceded by same-scene primes without the critical objects were faster than reaction times to target scenes that were preceded by different-scene or control primes (geometrical figures). By manipulating the nature of the prime and the interval between prime and target, and by cueing the position of the critical objects, we obtained evidence that the facilitating effect of the same-scene primes can also be explained by the sudden appearance of the critical objects in the target scene. In same-scene conditions, the critical objects cause a local onset, whereas in different-scene and control conditions the entire target scene causes a global onset. As a result, the local onset in the same-scene condition produces a shift of attention towards the critical objects, resulting in faster processing of the critical objects.

3.
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

4.

5.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

6.
Global transsaccadic change blindness during scene perception (total citations: 1; self-citations: 0; cited by others: 1)
Each time the eyes are spatially reoriented via a saccadic eye movement, the image falling on the retina changes. How visually specific are the representations that are functional across saccades during active scene perception? This question was investigated with a saccade-contingent display-change paradigm in which pictures of complex real-world scenes were globally changed in real time during eye movements. The global changes were effected by presenting each scene as an alternating set of scene strips and occluding gray bars, and by reversing the strips and bars during specific saccades. The results from two experiments demonstrated a global transsaccadic change-blindness effect, suggesting that point-by-point visual representations are not functional across saccades during complex scene perception.

7.
When novel scenes are encoded, the representations of scene layout are generally viewpoint specific. Past studies of scene recognition have typically required subjects to explicitly study and encode novel scenes, but in everyday visual experience, it is possible that much scene learning occurs incidentally. Here, we examine whether implicitly encoded scene layouts are also viewpoint dependent. We used the contextual cuing paradigm, in which search for a target is facilitated by implicitly learned associations between target locations and novel spatial contexts (Chun & Jiang, 1998). This task was extended to naturalistic search arrays with apparent depth. To test viewpoint dependence, the viewpoint of the scenes was varied from training to testing. Contextual cuing and, hence, scene context learning decreased as the angular rotation from training viewpoint increased. This finding suggests that implicitly acquired representations of scene layout are viewpoint dependent.

8.
Nine experiments examined the means by which visual memory for individual objects is structured into a larger representation of a scene. Participants viewed images of natural scenes or object arrays in a change detection task requiring memory for the visual form of a single target object. In the test image, 2 properties of the stimulus were independently manipulated: the position of the target object and the spatial properties of the larger scene or array context. Memory performance was higher when the target object position remained the same from study to test. This same-position advantage was reduced or eliminated following contextual changes that disrupted the relative spatial relationships among contextual objects (context deletion, scrambling, and binding change) but was preserved following contextual change that did not disrupt relative spatial relationships (translation). Thus, episodic scene representations are formed through the binding of objects to scene locations, and object position is defined relative to a larger spatial representation coding the relative locations of contextual objects.

9.
Amnesia is a deficit in relational memory (total citations: 8; self-citations: 0; cited by others: 8)
Eye movements were monitored to assess memory for scenes indirectly (implicitly). Two eye movement–based memory phenomena were observed: (a) the repetition effect, a decrease in sampling of previously viewed scenes compared with new scenes, reflecting memory for those scenes, and (b) the relational manipulation effect, an increase in viewing of the regions where manipulations of relations among scene elements had occurred. In normal control subjects, the relational manipulation effect was expressed only in the absence of explicit awareness of the scene manipulations. Thus, memory representations of scenes contain information about relations among elements of the scenes, at least some of which is not accessible to verbal report. But amnesic patients with severe memory impairment failed to show the relational manipulation effect. Their failure to show any demonstrable memory for relations among the constituent elements of scenes suggests that amnesia involves a fundamental deficit in relational (declarative) memory processing.

10.
In four experiments, we examined whether watching a scene from the perspective of a camera rotating across it allowed participants to recognize or identify the scene's spatial layout. Completing a representational momentum (RM) task, participants viewed a smoothly animated display and then indicated whether test probes were in the same position as they were in the final view of the animation. We found RM anticipations for the camera's movement across the scene, with larger distortions resulting from camera rotations that brought objects into the viewing frame compared with camera rotations that took objects out of the viewing frame. However, the RM task alone did not lead to successful recognition of the scene's map or identification of spatial relations between objects. Watching a scene from a rotating camera's perspective and making position judgments is not sufficient for learning spatial layout.

11.
12.
The present study employed a saccade-contingent change paradigm to investigate the effect of spatial frequency filtering on fixation durations during scene viewing. Subjects viewed grayscale scenes while encoding them for a later memory test. During randomly chosen saccades, the scene was replaced with an alternate version that remained throughout the critical fixation that followed. In Experiment 1, during the critical fixation, the scene could be changed to a high-pass or a low-pass spatial frequency filtered version. Under both conditions, fixation durations increased, and the low-pass condition produced a greater effect than the high-pass condition. In subsequent experiments, we manipulated the familiarity of scene information during the critical fixation by flipping the filtered scenes upside down or horizontally. Under these conditions, we observed lengthening of fixation durations but no difference between the high-pass and low-pass conditions, suggesting that the filtering effect is related to the mismatch between information extracted within the critical fixation and the ongoing scene representation in memory. We also conducted control experiments that tested the effect of changes to scene orientation (Experiment 2a) and the addition of color to a grayscale scene (Experiment 2b). Fixation distribution analysis suggested two effects on the distribution of fixation durations: a fast-acting effect that was sensitive to all transsaccadic changes tested and a later effect in the tail of the distribution that was likely tied to the processing of scene information. These findings are discussed in the context of theories of oculomotor control during scene viewing.

13.
Facilitatory scene priming is the positive effect of a scene prime on the immediately subsequent spatial processing of a related target, relative to control primes. In the present experiments, a large set of scenes were presented, each several times. The accuracy of a relational spatial-layout judgment was the main measure (which of two probes in a scene was closer?). The effect of scene primes on sensitivity was near zero for the first presentation of a scene; advantages for scene primes occurred only after two or three presentations. In addition, a bias effect emerged in reaction times for novel scenes. These results imply that facilitatory scene priming requires learning and is top-down in nature. Scene priming may require the consolidation of interscene relations in a memory representation.

14.
Recognition memory for previously seen multiobject scenes was examined for different types of contextual arrangements between objects in the scenes. It was found that organized scenes with novel but possible interobject relations were recognized more accurately than either organized scenes with familiar interobject relations or unorganized scenes with impossible interobject relations. This finding was obtained for adults, 8- to 10-year-old children, and 5- to 8-year-old children who indicated concrete-operational ability in Piaget's conservation-of-liquid-quantity task. The results were interpreted in conjunction with a two-stage model of scene processing involving the formation of a schema to represent a scene (Stage 1), and the operation of the schema in governing the further processing of detailed information in the scene (Stage 2). It was concluded that preoperational children can form schemata to represent organized scenes (Stage 1), but it is not until the emergence of concrete operations that these schemata become operational with respect to guiding the further processing of information in the scene (Stage 2).

15.
Eye movements were monitored while participants performed a change detection task with images of natural scenes. An initial and a modified scene image were displayed in alternation, separated by a blank interval (flicker paradigm). In the modified image, a single target object was changed either by deleting that object from the scene or by rotating that object 90 degrees in depth. In Experiment 1, fixation position at detection was more likely to be in the target object region than in any other region of the scene. In Experiment 2, participants detected scene changes more accurately, with fewer false alarms, and more quickly when allowed to move their eyes in the scene than when required to maintain central fixation. These data suggest a major role for fixation position in the detection of changes to natural scenes across discrete views.

16.
Current models of visual perception suggest that, during scene categorization, low spatial frequencies (LSF) are rapidly processed and activate plausible interpretations of visual input. This coarse analysis would be used to guide subsequent processing of high spatial frequencies (HSF). The present study aimed to further examine how information from LSF and HSF interact and influence each other during scene categorization. In a first experimental session, participants had to categorize LSF and HSF filtered scenes belonging to two different semantic categories (artificial vs. natural). In a second experimental session, we used hybrid scenes made by combining the LSF and HSF of two different scenes that were semantically similar or dissimilar. Half of the participants categorized the LSF scene in hybrids, and the other half categorized the HSF scene in hybrids. Stimuli were presented for 30 or 100 ms. Session 1 results showed better performance for LSF than for HSF scene categorization. In Session 2, scene categorization was faster when participants attended to and categorized the LSF scene rather than the HSF scene in hybrids. The semantic interference of a semantically dissimilar HSF scene on LSF scene categorization was greater than the semantic interference of a semantically dissimilar LSF scene on HSF scene categorization, irrespective of exposure duration. These results suggest an LSF advantage for scene categorization, and highlight the prominent role of HSF information when there is uncertainty about the visual stimulus, in order to disentangle alternative interpretations.

17.
We investigated the role of visual experience in the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally blind, and late blind participants. We first established that spatial updating occurs in sighted individuals for haptic scenes of novel objects. All participants were required to recognise a previously learned haptic scene of novel objects presented at the same or a different orientation as at learning, while they either remained in the same position or moved to a new position relative to the scene. Scene rotation incurred a cost in recognition performance in all groups. However, overall haptic scene recognition performance was worse in the congenitally blind group. Moreover, unlike the late blind or sighted groups, the congenitally blind group was unable to compensate for the cost of scene rotation with observer motion. Our results suggest that vision plays an important role in representing and updating spatial information encoded through touch, and have important implications for the role of vision in the development of neuronal areas involved in spatial cognition.

18.
Perceiving Real-World Viewpoint Changes (total citations: 10; self-citations: 0; cited by others: 10)
Retinal images vary as observers move through the environment, but observers seem to have little difficulty recognizing objects and scenes across changes in view. Although real-world view changes can be produced both by object rotations (orientation changes) and by observer movements (viewpoint changes), research on recognition across views has relied exclusively on display rotations. However, research on spatial reasoning suggests a possible dissociation between orientation and viewpoint. Here we demonstrate that scene recognition in the real world depends on more than the retinal projection of the visible array; viewpoint changes have little effect on detection of layout changes, but equivalent orientation changes disrupt performance significantly. Findings from our three experiments suggest that scene recognition across view changes relies on a mechanism that updates a viewer-centered representation during observer movements, a mechanism not available for orientation changes. These results link findings from spatial tasks to work on object and scene recognition and highlight the importance of considering the mechanisms underlying recognition in real environments.

19.
Recent studies in scene perception suggest that much of what observers believe they see is not retained in visual memory. Depending on the roles they play in organizing the perception of a scene, different visual properties may require different amounts of attention to be incorporated into a mental representation of the scene. The goal of this study was to compare how three visual properties of scenes, colour, object position, and object presence, are encoded in visual memory. We used a variation on the change detection "flicker" task and measured the time to detect scene changes when (1) a cue was provided regarding the type of change and (2) no cue was provided. We hypothesized that cueing would enhance the processing of visual properties that require more attention to be encoded into scene representations, whereas cueing would not have an effect for properties that are readily or automatically encoded in visual memory. In Experiment 1, we found that there was a cueing advantage for colour changes, but not for position or presence changes. In Experiment 2, we found the same cueing effect regardless of whether the colour change altered the configuration of the scene or not. These results are consistent with the idea that properties that typically help determine the configuration of the scene, for example, position and presence, are better encoded in scene representations than are surface properties such as colour.

20.
This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object token or the target object rotated in depth. In Experiments 1 and 2, object memory performance was more accurate when the test object alternatives were displayed within the original scene than when they were displayed in isolation, demonstrating object-to-scene binding. Experiment 3 tested the hypothesis that episodic scene representations are formed through the binding of object representations to scene locations. Consistent with this hypothesis, memory performance was more accurate when the test alternatives were displayed within the scene at the same position originally occupied by the target than when they were displayed at a different position.


Copyright © 北京勤云科技发展有限公司 · 京ICP备09084417号