Similar Articles
20 similar articles found (search time: 31 ms)
1.
The effects of emotion on memory are often described in terms of trade-offs: People often remember central, emotional information at the expense of background details. The present experiment examined the effects of aging and encoding instructions on participants' ability to remember the details of central emotional objects and the backgrounds on which those objects were placed. When young and older adults passively viewed scenes, both age groups showed strong emotion-induced trade-offs. They were able to remember the visual details as well as the general theme of the emotional object, but they had difficulties remembering the visual specifics of the scene background. Age differences emerged, however, when participants were given encoding instructions that emphasized elaborative encoding of the entire scene. With these instructions, young adults overcame the trade-offs (i.e., they no longer showed impairing effects of emotion), whereas older adults continued to show good memory for the emotional object but poor memory for its background. These results suggest that aging impairs the ability to flexibly disengage attention from the negative arousing elements of scenes, preventing the successful encoding of nonemotional aspects of the environment.

2.
Change blindness (cited 1 time: 0 self-citations, 1 by others)
Although at any instant we experience a rich, detailed visual world, we do not use such visual details to form a stable representation across views. Over the past five years, researchers have focused increasingly on 'change blindness' (the inability to detect changes to an object or scene) as a means to examine the nature of our representations. Experiments using a diverse range of methods and displays have produced strikingly similar results: unless a change to a visual scene produces a localizable change or transient at a specific position on the retina, generally, people will not detect it. We review theory and research motivating work on change blindness and discuss recent evidence that people are blind to changes occurring in photographs, in motion pictures and even in real-world interactions. These findings suggest that relatively little visual information is preserved from one view to the next, and question a fundamental assumption that has underlain perception research for centuries: namely, that we need to store a detailed visual representation in the mind/brain from one view to the next.

3.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object-label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes that contained either an inconsistent object (e.g., soap on a breakfast table) or a consistent one (e.g., soap in a bathroom). Perceptual features such as saliency, centre distance, and scene clutter affected looking times in the toddler group throughout the viewing period, whereas looking times in adults were affected only by centre distance, and only early in viewing. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye-movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.

4.
Arrington JG, Levin DT, Varakin DA. Perception, 2006, 35(12): 1665-1678
It has recently been demonstrated that people often fail to detect between-view changes in their visual environment. This phenomenon, called 'change blindness' (CB), occurs whenever the perceptual transient that usually accompanies a change is somehow blocked or made less salient. In the well-known flicker paradigm, the transient is blocked by inserting a blank screen between the original and changed scenes. We tested whether transients that do not involve the appearance or disappearance of visual objects would also produce CB; specifically, whether the appearance or disappearance of color information, and increments or decrements in luminance, could cause CB. In three experiments, subjects searched for changes in natural scenes. We found that both color transients and luminance transients significantly reduced change detection (by approximately 30%) relative to a no-transient condition.
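The flicker paradigm mentioned above amounts to a simple trial loop: the original and changed scenes alternate, separated by a blank mask that swallows the change transient. A minimal sketch follows; the timings, scene names, and `show` callback are illustrative assumptions, not parameters from the study:

```python
import time

def flicker_trial(original, changed, show, n_cycles=2,
                  scene_ms=240, blank_ms=80):
    """Alternate an original and a changed scene with a blank mask
    between them, as in the classic flicker paradigm (assumed timings)."""
    blank = None  # stands in for a uniform gray field in a real experiment
    for _ in range(n_cycles):
        for frame in (original, blank, changed, blank):
            show(frame)
            time.sleep((blank_ms if frame is blank else scene_ms) / 1000)

# Record the frame sequence instead of drawing to a display:
frames = []
flicker_trial("scene_A", "scene_A_changed", frames.append, n_cycles=2)
print(frames[:4])  # ['scene_A', None, 'scene_A_changed', None]
```

Because the blank appears at every scene transition, the change never produces an isolated retinal transient, which is what makes detection so hard.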

5.
We present a computational framework for attention-guided visual scene exploration in sequences of RGB-D data. For this, we propose a visual object candidate generation method to produce object hypotheses about the objects in the scene. An attention system is used to prioritise the processing of visual information by (1) localising candidate objects, and (2) integrating an inhibition of return (IOR) mechanism grounded in spatial coordinates. This spatial IOR mechanism naturally copes with camera motions and inhibits objects that have already been the target of attention. Our approach provides object candidates which can be processed by higher cognitive modules such as object recognition. Since objects are basic elements for many higher level tasks, our architecture can be used as a first layer in any cognitive system that aims at interpreting a stream of images. We show in the evaluation how our framework finds most of the objects in challenging real-world scenes.
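The key idea of a spatially grounded IOR mechanism is that inhibited locations are stored in world coordinates rather than image coordinates, so they remain valid when the camera moves. A minimal sketch of that idea, assuming a fixed inhibition radius and 3D world-frame points (not the authors' implementation):

```python
import math

class SpatialIOR:
    """Inhibition-of-return keyed to world coordinates, so inhibited
    locations survive camera motion. Radius and units are assumptions."""

    def __init__(self, radius=0.15):
        self.radius = radius   # inhibition radius, e.g. in metres
        self.inhibited = []    # world-frame (x, y, z) points already attended

    def attend(self, point):
        """Mark a world-frame location as attended (and thus inhibited)."""
        self.inhibited.append(point)

    def is_inhibited(self, point):
        return any(math.dist(point, p) <= self.radius
                   for p in self.inhibited)

    def next_target(self, candidates):
        """Return the first candidate not suppressed by IOR, or None."""
        for c in candidates:
            if not self.is_inhibited(c):
                return c
        return None

ior = SpatialIOR()
ior.attend((0.0, 0.0, 1.0))
# The first candidate falls inside the inhibited region and is skipped:
print(ior.next_target([(0.01, 0.0, 1.0), (0.5, 0.2, 1.2)]))
# -> (0.5, 0.2, 1.2)
```

Because the stored points are in the world frame, re-projecting the camera does not invalidate them; an image-coordinate IOR map would have to be warped on every camera motion instead.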

6.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

7.
8.
Our intuition that we richly represent the visual details of our environment is illusory. When viewing a scene, we seem to use detailed representations of object properties and interobject relations to achieve a sense of continuity across views. Yet, several recent studies show that human observers fail to detect changes to objects and object properties when localized retinal information signaling a change is masked or eliminated (e.g., by eye movements). However, these studies changed arbitrarily chosen objects which may have been outside the focus of attention. We draw on previous research showing the importance of spatiotemporal information for tracking objects by creating short motion pictures in which objects in both arbitrary locations and the very center of attention were changed. Adult observers failed to notice changes in both cases, even when the sole actor in a scene transformed into another person across an instantaneous change in camera angle (or “cut”).

9.
Views of natural scenes unfold over time, and objects of interest that were present a moment ago tend to remain present. While visual crowding places a fundamental limit on object recognition in cluttered scenes, most studies of crowding have suffered from the limitation that they typically involved static scenes. The role of temporal continuity in crowding has therefore been unaddressed. We investigated intertrial effects upon crowding in visual scenes, showing that crowding is considerably diminished when objects remain constant on consecutive visual search trials. Repetition of both the target and distractors decreases the critical distance for crowding from flankers. More generally, our results show how object continuity through between-trial priming releases objects that would otherwise be unidentifiable due to crowding. Crowding, although it is a significant bottleneck on object recognition, can be mitigated by statistically likely temporal continuity of the objects. Crowding therefore depends not only on what is momentarily present, but also on what was previously attended.

10.
It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test the extent to which scene and object selective areas are sensitive to perceived distance information independently from their category-selectivity and retinotopic location. We conducted two studies that used a distance illusion (i.e., the Ponzo lines) and showed that scene regions (the parahippocampal place area, PPA, and transverse occipital sulcus, TOS) are biased toward perceived distal stimuli, whereas the lateral occipital (LO) object region is biased toward perceived proximal stimuli. These results suggest that the ventral visual cortex plays a role in representing distance information, extending recent findings on the sensitivity of these regions to location information. More broadly, our findings imply that distance information is inherent to object recognition.

11.
Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category—that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

12.
Observers can store thousands of object images in visual long-term memory with high fidelity, but the fidelity of scene representations in long-term memory is not known. Here, we probed scene-representation fidelity by varying the number of studied exemplars in different scene categories and testing memory using exemplar-level foils. Observers viewed thousands of scenes over 5.5 hr and then completed a series of forced-choice tests. Memory performance was high, even with up to 64 scenes from the same category in memory. Moreover, there was only a 2% decrease in accuracy for each doubling of the number of studied scene exemplars. Surprisingly, this degree of categorical interference was similar to the degree previously demonstrated for object memory. Thus, although scenes have often been defined as a superset of objects, our results suggest that scenes and objects may be entities at a similar level of abstraction in visual long-term memory.
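The reported "2% decrease per doubling" implies a log-linear relation between accuracy and the number of studied exemplars. A back-of-the-envelope sketch: the slope comes from the abstract, while the baseline accuracy for a single studied exemplar is an illustrative assumption, not a figure from the study:

```python
import math

def predicted_accuracy(n_exemplars, baseline=0.96, drop_per_doubling=0.02):
    """Accuracy falls linearly with log2 of the number of studied
    exemplars per category (baseline of 0.96 is assumed for illustration)."""
    return baseline - drop_per_doubling * math.log2(n_exemplars)

for n in (1, 4, 16, 64):
    print(n, round(predicted_accuracy(n), 2))
# 1 0.96
# 4 0.92
# 16 0.88
# 64 0.84
```

Going from 1 to 64 exemplars is six doublings, so even the worst case costs only about 12 percentage points, which is why performance remains high at 64 scenes per category.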

13.
Beyond perceiving patterns of motion in simple dynamic displays, we can also perceive higher-level properties, such as causality, as when we see one object collide with another. Although causality is a seemingly high-level property, its perception (like the perception of faces or speech) often appears to be automatic, irresistible, and driven by highly constrained, stimulus-driven rules. Here, in an exploration of such rules, we demonstrate that perceptual grouping and attention can both influence the perception of causality in ambiguous displays. We first report several types of grouping effects, based on connectedness, proximity, and common motion. We further suggest that such grouping effects are mediated by the allocation of attention, and we directly demonstrate that causal perception can be strengthened or attenuated on the basis of where observers are attending, independent of fixation. Like Michotte, we find that the perception of causality is mediated by strict visual rules. Beyond Michotte, we find that these rules operate not only over discrete objects, but also over perceptual groups, constrained by the allocation of attention.

14.
Coherent visual experience of dynamic scenes requires not only that the visual system segment scenes into component objects but that these object representations persist, so that an object can be identified as the same object from an earlier time. Object files (OFs) are visual representations thought to mediate such abilities: OFs lie between lower level sensory processing and higher level recognition, and they track salient objects over time and motion. OFs have traditionally been studied via object-specific preview benefits (OSPBs), in which discriminations of an object's features are speeded when an earlier preview of those features occurred on the same object, as opposed to on a different object, beyond general displaywide priming. Despite its popularity, many fundamental aspects of the OF framework remain unexplored. For example, although OFs are thought to be involved primarily in online visual processing, we do not know how long such representations persist; previous studies found OSPBs for up to 1500 msec but did not test longer durations. We explored this issue using a modified object-reviewing paradigm and found that robust OSPBs persist for more than five times longer than has previously been tested: for at least 8 sec, and possibly much longer. Object files may be the "glue" that makes visual experience coherent not just in online moment-by-moment processing, but on the scale of seconds that characterizes our everyday perceptual experiences. These findings also bear on research in infant cognition, where OFs are thought to explain infants' abilities to track and enumerate small sets of objects over longer durations.

15.
16.
Picture perception and ordinary perception of real objects differ in several respects. Two of their main differences are: (1) depicted objects are not perceived as present, and (2) we cannot perceive significant spatial shifts as we move with respect to them. Some special illusory pictures escape these visual effects of ordinary picture perception. First, trompe l'oeil paintings violate (1): the depicted object looks, even momentarily, like a present object. Second, anamorphic paintings violate (2): they allow viewers to appreciate spatial shifts resulting from movement. However, anamorphic paintings do not violate (1): they are still perceived as clearly pictorial, that is, nonpresent. What about the relation between trompe l'oeil paintings and (2)? Do trompe l'oeils allow us to perceive spatial shifts? Nobody has previously focused on this aspect of trompe l'oeil perception. I offer a first speculation about this question. I suggest that, if we follow the most recent theories in philosophy and vision science about the mechanisms of picture perception, then the only plausible answer, in line with phenomenological intuitions, is that, differently from nonillusory, usual picture perception, and similarly to ordinary perception, trompe l'oeil perception does allow us to perceive spatial shifts resulting from movement. I also discuss the philosophical implications of this claim.

17.
In cluttered scenes, some object boundaries may not be marked by image cues. In such cases, the boundaries must be defined top-down as a result of object recognition. Here we ask if observers can retain the boundaries of several recognized objects in order to segment an unfamiliar object. We generated scenes consisting of neatly stacked objects, and the objects themselves consisted of neatly stacked coloured blocks. Because the blocks were stacked the same way within and across objects, there were no visual cues indicating which blocks belonged to which objects. Observers were trained to recognize several objects and we tested whether they could segment a novel object when it was surrounded by these familiar, studied objects. The observer's task was to count the number of blocks comprising the target object. We found that observers were able to accurately count the target blocks when the target was surrounded by up to four familiar objects. These results indicate that observers can use the boundaries of recognized objects in order to accurately segment, top-down, a novel object.

18.
Visual processing breaks the world into parts and objects, allowing us not only to examine the pieces individually, but also to perceive the relationships among them. There is work exploring how we perceive spatial relationships within structures with existing representations, such as faces, common objects, or prototypical scenes. But strikingly, there is little work on the perceptual mechanisms that allow us to flexibly represent arbitrary spatial relationships, e.g., between objects in a novel room, or the elements within a map, graph or diagram. We describe two classes of mechanism that might allow such judgments. In the simultaneous class, both objects are selected concurrently. In contrast, we propose a sequential class, where objects are selected individually over time. We argue that this latter mechanism is more plausible even though it violates our intuitions. We demonstrate that shifts of selection do occur during spatial relationship judgments that feel simultaneous, by tracking selection with an electrophysiological correlate. We speculate that static structure across space may be encoded as a dynamic sequence across time. Flexible visual spatial relationship processing may serve as a case study of more general visual relation processing beyond space, to other dimensions such as size or numerosity.

19.
Studies of object-based attention have demonstrated poorer performance in dividing attention between two objects in a scene than in focusing attention on a single object. However, objects often are composed of several parts, and parts are central to theories of object recognition. Are parts also important for visual attention? That is, can attention be limited in the number of parts processed simultaneously? We addressed this question in four experiments. In Experiments 1 and 2, participants reported two attributes that appeared on the same part or on different parts of a single multipart object. Participants were more accurate in reporting attributes on the same part than attributes on different parts. This part-based effect was not influenced by the spatial distance between the parts, ruling out a simple spatial-attention interpretation of our results. A control study revealed a reliable effect of spatial distance, demonstrating that our spatial manipulation was sufficient to observe shifts of spatial attention. The absence of a distance effect in Experiments 1 and 2 therefore suggests that part-based attention does not rely entirely on simple shifts of spatial attention. Finally, in Experiment 4 we found evidence for part-based attention using stimuli controlled for the distance between the parts of an object. The results of these experiments indicate that visual attention can selectively process the parts of an object. We discuss the relationship between parts and objects and the locus of part-based attentional selection.

20.
Objects can control the focus of attention, allowing features on the same object to be selected more easily than features on different objects. In the present experiments, we investigated the perceptual processes that contribute to such object-based attentional effects. Previous research has demonstrated that object-based effects occur for single-region objects but not for multiple-region objects under some conditions (Experiment 1, Watson & Kramer, 1999). Such results are surprising, because most objects in natural scenes are composed of multiple regions. Previous findings could therefore limit the usefulness of an object-based selection mechanism. We explored the generality of these single-region selection results by manipulating the extent to which different (i.e., multiple) regions of a single object perceptually grouped together. Object-based attentional effects were attenuated when multiple regions did not group into a single perceptual object (Experiment 1). However, when multiple regions grouped together based on (1) edge continuation (Experiments 2 and 3) or (2) part and occlusion cues (Experiment 4), we observed object-based effects. Our results suggest that object-based attention is a robust process that can select multiple-region objects, provided the regions of such objects cohere on the basis of perceptual grouping cues.
