Similar Articles
 20 similar articles found (search time: 46 ms)
1.
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost of using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.

2.
Change blindness, the failure to detect visual changes that occur during a disruption, has increasingly been used to infer the nature of internal representations. If every change were detected, detailed representations of the world would have to be stored and accessible. However, because many changes are not detected, visual representations might not be complete, and access to them might be limited. Using change detection to infer the completeness of visual representations requires an understanding of the reasons for change blindness. This article provides empirical support for one such reason: change blindness resulting from the failure to compare retained representations of both the pre- and postchange information. Even when unaware of changes, observers still retained information about both the pre- and postchange objects on the same trial.

3.
4.
What is the nature of the representation formed during the viewing of natural scenes? We tested two competing hypotheses regarding the accumulation of visual information during scene viewing. The first holds that coherent visual representations disintegrate as soon as attention is withdrawn from an object and thus that the visual representation of a scene is exceedingly impoverished. The second holds that visual representations do not necessarily decay upon the withdrawal of attention, but instead can be accumulated in memory from previously attended regions. Target objects in line drawings of natural scenes were changed during a saccadic eye movement away from those objects. Three findings support the second hypothesis. First, changes to the visual form of target objects (token substitution) were successfully detected, as indicated by both explicit and implicit measures, even though the target object was not attended when the change occurred. Second, these detections were often delayed until well after the change. Third, changes to semantically inconsistent target objects were detected better than changes to semantically consistent objects.

5.
Recognition of everyday objects can be facilitated by top-down predictions. We have proposed that these predictions are derived from rudimentary image information, or gist, extracted rapidly from the low spatial frequencies (LSFs) (Bar, Journal of Cognitive Neuroscience, 15, 600–609, 2003). Because of the coarse nature of LSF representations, we hypothesized here that such predictions can accommodate changes in viewpoint as well as facilitate the recognition of visually similar objects. In a repetition-priming task, we indeed observed significant facilitation of target recognition that was primed by LSF objects across moderate viewpoint changes, as well as across visually similar exemplars. These results suggest that the LSF representations are specific enough to activate accurate predictions, yet flexible enough to overcome small changes in visual appearance. Such gist representations facilitate object recognition by accommodating changes in visual appearance due to viewing conditions, and help generalize from familiar to novel exemplars.

6.
Viewpoint-dependent recognition of familiar faces (total citations: 5; self-citations: 0; citations by others: 5)
Troje NF, Kersten D. Perception, 1999, 28(4): 483-487
The question whether object representations in the human brain are object-centered or viewer-centered has motivated a variety of experiments with divergent results. A key issue concerns the visual recognition of objects seen from novel views. If recognition performance depends on whether a particular view has been seen before, it can be interpreted as evidence for a viewer-centered representation. Earlier experiments used unfamiliar objects to provide the experimenter with complete control over the observer's previous experience with the object. In this study, we tested whether human recognition shows viewpoint dependence for the highly familiar faces of well-known colleagues and for the observer's own face. We found that observers are poorer at recognizing their own profile, whereas there is no difference in response time between frontal and profile views of other faces. This result shows that extensive experience and familiarity with one's own face is not sufficient to produce viewpoint invariance. Our result provides strong evidence for viewer-centered representations in human visual recognition even for highly familiar objects.

7.
In manual search tasks designed to assess infants' knowledge of the object concept, why does search for objects hidden by darkness precede search for objects hidden by visible occluders by several months? A graded representations account explains this décalage by proposing that the conflicting visual input from occluders directly competes with object representations, whereas darkness merely weakens representations. This study tests the prediction that representations of objects hidden by darkness are strong enough for infants to bind auditory cues to them and support search, whereas representations of objects hidden by occluders are not. Six-and-a-half-month-olds were presented with audible or silent objects that remained visible, became hidden by darkness, or became hidden by a visible occluder. Search required engaging in the same means-end action in all conditions. As predicted, auditory cues increased search when objects were hidden by darkness but not when they were hidden by a visible occluder. Results are discussed in the context of different facets of object concept development highlighted by graded representations perspectives and core knowledge perspectives and in relation to other work on multimodal object representations.

8.
Object drawing can be supported by a number of cognitive resources, each making available visual information about the object being drawn. These resources include perceptual input, short-term visual memory, and long-term visual memory. Each of these resources has the potential to make available distinct forms of visual representation, including viewpoint-specific and viewpoint-independent representations, object-specific and category representations, and separate representations of object colour. We review neuropsychological and developmental evidence supporting these claims, including evidence that the same drawing can reflect the influence of multiple forms of visual representation. Seven experiments are then reported, investigating object drawing by 4- to 6-year-old children, to confirm the support for drawing provided by different forms of visual representation. Young children are selected for investigation because their drawing is relatively unconstrained by culturally determined norms which, in our culture, dictate that objects should be drawn just as they appear from the vantage point of the drawer. To distinguish the support provided by object and category representations, the experiments exploit the privileged links between count nouns as object labels, and representations of object categories. In addition, pre-established representations, visual or otherwise, are precluded from influencing drawing by asking the children to draw novel objects, and by creating novel count nouns with which to label the objects. The results reveal how viewpoint-specific perceptual representations, object-specific representations of shape and of colour, and category representations of shape can each impact on object drawing, and in some circumstances on the same drawing. It appears that simple drawing tasks have the potential to reveal some of the distinct types of representation able to support visual cognition.

9.
A behavioral and computational treatment of change detection is reported. The behavioral task was to judge whether a single object substitution change occurred between two "flickering" 9-object scenes. Detection performance was found to vary with the similarity of the changing objects; object changes violating orientation and category yielded the fastest and most accurate detection responses. To account for these data, the BOLAR model was developed, which uses color, orientation, and scale selective filters to compute the visual dissimilarity between the pre- and postchange objects from the behavioral study. Relating the magnitude of the BOLAR difference signals to change detection performance revealed that object pairs estimated as visually least similar were the same object pairs most easily detected by observers. The BOLAR model advances change detection theory by (1) demonstrating that the visual similarity between the change patterns can account for much of the variability in change detection behavior, and (2) providing a computational technique for quantifying these visual similarity relationships for real-world objects.
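The abstract does not give the model's equations, so the following is only a minimal sketch of the general idea behind a filter-bank dissimilarity measure of this kind, not the actual BOLAR implementation. Every function name, filter parameter, and pooling choice below is an illustrative assumption, and the color channels BOLAR includes are omitted in this grayscale toy.

```python
import numpy as np

def gabor_kernel(size, theta, sigma, wavelength):
    """Odd-symmetric Gabor filter at orientation theta and scale sigma (illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # carrier axis rotated by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.sin(2 * np.pi * xr / wavelength)
    return envelope * carrier

def convolve_same(image, kernel):
    """Circular FFT convolution; adequate for a toy similarity measure."""
    padded = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

def filter_energies(image, n_orientations=4, sigmas=(2.0, 4.0)):
    """Pool rectified filter-bank responses into one feature vector per image."""
    feats = []
    for sigma in sigmas:
        size = int(6 * sigma) | 1              # odd kernel width spanning ~3 sigma
        for i in range(n_orientations):
            theta = np.pi * i / n_orientations
            k = gabor_kernel(size, theta, sigma, wavelength=2 * sigma)
            feats.append(np.abs(convolve_same(image, k)).sum())
    return np.array(feats)

def filter_dissimilarity(img_a, img_b):
    """Euclidean distance between pooled filter-energy vectors."""
    return float(np.linalg.norm(filter_energies(img_a) - filter_energies(img_b)))
```

On this construction, a vertical and a horizontal grating excite different orientation channels and so score a positive dissimilarity, while identical images score zero, mirroring the paper's claim that larger filter-difference signals go with more detectable changes.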

10.
Acta Psychologica, 2013, 142(2): 168-176
In a one-shot change detection task, we investigated the relationship between semantic properties (high consistency, i.e., diagnosticity, versus inconsistency with regard to gist) and perceptual properties (high versus low salience) of objects in guiding attention in visual scenes and in constructing scene representations. To produce the change, an object was added or deleted in either the right or left half of coloured drawings of daily-life events. Diagnostic object deletions were more accurately detected than inconsistent ones, indicating rapid inclusion into early scene representation for the most predictable objects. Detection was faster and more accurate for high salience than for low salience changes. An advantage was found for diagnostic object changes in the high salience condition, although it was limited to additions when considering response speed. For inconsistent objects of high salience, deletions were detected faster than additions. These findings may indicate that objects are primarily selected on a perceptual basis with subsequent and supplementary effect of semantic consistency, in the sense of facilitation due to object diagnosticity or lengthening of processing time due to inconsistency.

11.
12.
Four experiments examined the effects of encoding time on object identification priming and recognition memory. After viewing objects in a priming phase, participants identified objects in a rapid stream of non-object distracters; display times were gradually increased until the objects could be identified (Experiments 1-3). Participants also made old/new recognition judgments about previously viewed objects (Experiment 4). Reliable priming for object identification occurred with 150 ms of encoding and reached a maximum after about 300 ms of encoding time. In contrast, reliable recognition judgments occurred with 75 ms of encoding and continued to improve for encoding times of up to 1200 ms. These results suggest that recognition memory may be based on multiple levels of object representation, from rapidly activated representations of low-level features to semantic knowledge associated with the object. In contrast, priming in this object identification task may be tied specifically to the activation of representations of object shape.

13.
Rossion B, Pourtois G. Perception, 2004, 33(2): 217-236
Theories of object recognition differ in whether they take object representations to be mediated by shape alone or by shape together with surface details. In particular, it has been suggested that color information may be helpful for recognizing objects only in very special cases, but not during basic-level object recognition in good viewing conditions. In this study, we collected normative data (naming agreement, familiarity, complexity, and imagery judgments) for Snodgrass and Vanderwart's object database of 260 black-and-white line drawings, and then compared the data to exactly the same shapes but with added gray-level texture and surface details (set 2), and color (set 3). Naming latencies were also recorded. Whereas the addition of texture and shading without color only slightly improved naming agreement scores for the objects, the addition of color information unambiguously improved naming accuracy and speeded correct response times. As shown in previous studies, the advantage provided by color was larger for objects with a diagnostic color, and structurally similar shapes, such as fruits and vegetables, but was also observed for man-made objects with and without a single diagnostic color. These observations show that basic-level 'everyday' object recognition in normal conditions is facilitated by the presence of color information, and support a 'shape + surface' model of object recognition, for which color is an integral part of the object representation. In addition, the new stimuli (sets 2 and 3) and the corresponding normative data provide valuable materials for a wide range of experimental and clinical studies of object recognition.

14.
The representation of visual information inside the focus of attention is more precise than the representation of information outside the focus of attention. We found that the visual system can compensate for the cost of withdrawing attention by pooling noisy local features and computing summary statistics. The location of an individual object is a local feature, whereas the center of mass of several objects (centroid) is a summary feature representing the mean object location. Three experiments showed that withdrawing attention degraded the representation of individual positions more than the representation of the centroid. It appears that information outside the focus of attention can be represented at an abstract level that lacks local detail, but nevertheless carries a precise statistical summary of the scene. The term ensemble features refers to a broad class of statistical summary features that we propose collectively make up the representation of information outside the focus of attention.
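The pooling claim above can be illustrated with a toy simulation (not from the paper; the noise model and all numbers are illustrative assumptions): if each remembered object location carries independent noise, the centroid of the remembered locations is far more precise than any individual location, because independent errors average out roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(7)

n_objects = 100                                     # illustrative scene size
true_xy = rng.uniform(0.0, 10.0, size=(n_objects, 2))
noise = rng.normal(0.0, 1.0, size=(n_objects, 2))   # per-object encoding noise
remembered_xy = true_xy + noise

# Error of individual object positions: mean distance to the true location.
individual_error = np.linalg.norm(noise, axis=1).mean()

# Error of the summary statistic: distance between remembered and true centroids.
centroid_error = np.linalg.norm(remembered_xy.mean(axis=0) - true_xy.mean(axis=0))
```

With many objects the centroid error is substantially smaller than the per-object error, matching the abstract's point that a precise statistical summary can survive even when local detail is degraded.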

15.
Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.

16.
Each eye movement introduces changes in the retinal location of objects. How a stable spatiotopic representation emerges from such variable input is an important question for the study of vision. Researchers have classically probed human observers' performance in a task requiring a location judgment about an object presented at different locations across a saccade. Correct performance on this task requires realigning or remapping retinal locations to compensate for the saccade. A recent study showed that performance improved with longer presaccadic viewing time, suggesting that accurate spatiotopic representations take time to build up. The first goal of the study was to replicate that finding. Two experiments, one an exact replication and the second a modified version, failed to replicate improved performance with longer presaccadic viewing time. The second goal of this study was to examine the role of attention in constructing spatiotopic representations, as theoretical and neurophysiological accounts of remapping have proposed that only attended targets are remapped. A third experiment thus manipulated attention with a spatial cueing paradigm and compared transsaccadic location performance of attended versus unattended targets. No difference in spatiotopic performance was found between attended and unattended targets. Although only negative results are reported, they might nevertheless suggest that spatiotopic representations are relatively stable over time.

17.
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. 
A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.

18.
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention.

19.
20.
Biederman I, Cooper EE. Perception, 1991, 20(5): 585-593
The magnitude of priming on naming reaction times and on the error rates, resulting from the perception of a briefly presented picture of an object approximately 7 min before the primed object, was found to be independent of whether the primed object was originally viewed in the same hemifield, left-right or upper-lower, or in the same left-right orientation. Performance for same-name, different-exemplar images was worse than for identical images, indicating that not only was there priming from block one to block two, but that some of the priming was visual, rather than purely verbal or conceptual. These results provide evidence for complete translational and reflectional invariance in the representation of objects for purposes of visual recognition. Explicit recognition memory for position and orientation was above chance, suggesting that the representation of objects for recognition is independent of the representations of the location and left-right orientation of objects in space.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号