Similar Articles
20 similar articles found (search time: 31 ms)
1.
Recent empirical results suggest that there is a decrement in dividing attention between two objects in a scene compared with focusing attention on a single object. However, objects can be made of individual parts. Is there a decrement for dividing attention across different parts of a single object? We addressed this question in two experiments. In Experiment 1, we demonstrated that attention can exhibit part-based selection—that is, the subjects were more accurate in reporting two attributes from the same part of an object than they were in reporting attributes from different parts of an object. In Experiment 2, we demonstrated that part-based attentional decrements occurred simultaneously with object-based attentional decrements. The results from Experiment 2 demonstrated that part-based attention is evident at the same time as objects are processed as coherent wholes. Our results imply that there is an attentional mechanism that can select either objects or their parts.

2.
Visual working memory for global, object, and part-based information (total citations: 1; self-citations: 0; other citations: 1)
We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.

3.
Visual cuing is one paradigm often used to study object- and space-based visual selective attention. A primary finding is that shifts of attention within an object can be accomplished faster than equidistant shifts between objects. The present study used a visual cuing paradigm to examine how an object's size (i.e., internal distance) and shape influence object- and space-based visual selective attention. The first two experiments manipulated object size and compared attentional shift performance with objects where the within-object distance between cued and uncued target locations was either equal to the between-object distance (1:1 ratio condition) or three times the between-object distance (3:1 ratio condition). Within-object shifts took longer for the larger objects, but an advantage over between-object shifts was still evident. Influences associated with the shapes of the larger objects suggested by the results of the first two experiments were tested and rejected in Experiment 3. Overall, the results indicate that within-object shifts of attention become slower as the within-object distance increases, but nevertheless are still accomplished faster than between-object shifts.

4.
Visual cuing studies have been widely used to demonstrate and explore contributions from both object- and location-based attention systems. A common finding has been a response advantage for shifts of attention occurring within an object, relative to shifts of an equal distance between objects. The present study examined this advantage for within-object shifts in terms of engage and disengage operations within the object- and location-based attention systems. The rationale was that shifts of attention between objects require object-based attention to disengage from one object before shifting to another, something that is not required for shifts of attention within an object or away from a location. One- and two-object displays were used to assess object-based contributions related to disengaging and engaging attention within, between, into, and out of objects. The results suggest that the "object advantage" commonly found in visual cuing experiments in which shifts of attention are required is primarily due to disengage operations associated with object-based attention.

5.
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. 
The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
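The shroud cycle described above (compete, win, scan, collapse, reset, move on) can be caricatured in a few lines. This is a toy sketch under strong simplifying assumptions: static saliences, instantaneous winner-take-all, and one reset per object. The function names and the reduction of shroud dynamics to a `max` over salience values are illustrative choices, not ARTSCAN's published equations.

```python
# Toy sketch of shroud-style competition with reset and inhibition of return.
# All names and dynamics here are illustrative assumptions, not the ARTSCAN model.

def compete_for_shroud(saliences, inhibited=frozenset()):
    """Return the index of the winning surface (the 'shroud'),
    ignoring surfaces already inhibited (inhibition of return)."""
    candidates = {i: s for i, s in enumerate(saliences) if i not in inhibited}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def scan_scene(saliences):
    """Sequentially attend each surface: win, scan, collapse, reset, repeat."""
    inhibited = set()
    order = []
    while True:
        winner = compete_for_shroud(saliences, frozenset(inhibited))
        if winner is None:
            break
        order.append(winner)   # shroud persists while the object is scanned
        inhibited.add(winner)  # shroud collapse -> reset signal + IOR
    return order

print(scan_scene([0.3, 0.9, 0.5]))  # → [1, 2, 0]: most salient surface first
```

The IOR set is what lets the toy search visit every object exactly once instead of locking onto the single most salient surface forever.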

6.
The capacity of visual short-term memory is not a fixed number of objects (total citations: 2; self-citations: 0; other citations: 2)
Luck and Vogel (1997) have reported several striking results in support of the view that visual short-term memory (VSTM) has a fixed capacity of four objects, irrespective of how many relevant features those objects comprise. However, more recent studies have challenged this account, indicating only a weak effect of the number of objects once other factors are more evenly equated across conditions. Here, we employed a symmetry manipulation to verify object segmentation in our displays and to demonstrate that, when spatial and masking factors are held constant, the number of objects per se has no effect on VSTM. Instead, VSTM capacity may reflect the number of object "parts" or feature conjunctions in a given display.

7.
Visual objects are high-level primitives that are fundamental to numerous perceptual functions, such as guidance of attention. We report that objects warp visual perception of space in such a way that spatial distances within objects appear to be larger than spatial distances in ground regions. When two dots were placed inside a rectangular object, they appeared farther apart from one another than two dots with identical spacing outside of the object. To investigate whether this effect was object based, we measured the distortion while manipulating the structure surrounding the dots. Object displays were constructed with a single object, multiple objects, a partially occluded object, and an illusory object. Nonobject displays were constructed to be comparable to object displays in low-level visual attributes. In all cases, the object displays resulted in a more powerful distortion of spatial perception than comparable non-object-based displays. These results suggest that perception of space within objects is warped.

8.
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia.
A new explanation of crowding shows how the cortical magnification factor, among other variables, can cause multiple object surfaces to share a single surface-shroud resonance, thereby preventing recognition of the individual objects.

9.
Kyle R. Cave, Visual Cognition, 2013, 21(3–5), 467–487
Experiments using spatial cues and spatial probes provide strong evidence for an attention mechanism that chooses a location and selects all information at that location. This selection process can work very quickly; so quickly that selection probably begins before segmentation and grouping. It can be implemented in a neural network simply and efficiently without temporal binding. In conjunction with this spatial attention, however, temporal binding can potentially enhance visual selection in complex scenes. First, it would allow a target object to be selected without also selecting a superimposed distractor. Second, it could maintain representations of objects after attention has moved to another object. Third, it could allow multiple parts of an object or scene to be selected, segmented, and analysed simultaneously. Thus, temporal synchrony should be more likely to appear during tasks with overlapping targets and distractors, and tasks that require that multiple objects or multipart objects be analysed and remembered simultaneously.

10.
11.
Part representation is not only critical to object perception but also plays a key role in a number of basic visual cognition functions, such as figure-ground segregation, allocation of attention, and memory for shapes. Yet, virtually nothing is known about the development of part representation. If parts are fundamental components of object shape representation early in life, then the infant visual system should give priority to parts over other aspects of objects. We tested this hypothesis by examining whether part shapes are more salient than cavity shapes to infants. Five-month-olds were habituated to a stimulus that contained a part and a cavity. In a subsequent novelty preference test, 5-month-olds exhibited a preference for the cavity shape, indicating that part shapes were more salient than cavity shapes during habituation. The differential processing of part versus cavity contours in infancy is consistent with theory and empirical findings in the literature on adult figure-ground perception and indicates that basic aspects of part-based object processing are evident early in life.

12.
The performance of bimanual movements involving separate objects presents an obvious challenge to the visuo-motor system: Visual feedback can only be obtained from one target at a time. To overcome this challenge, overt shifts in visual attention may occur so that visual feedback from both movements may be used directly (Bingham, Hughes, & Mon-Williams, 2008; Riek, Tresilian, Mon-Williams, Coppard, & Carson, 2003). Alternatively, visual feedback from both movements may be obtained in the absence of eye movements, presumably by covert shifts in attention (Diedrichsen, Nambisan, Kennerley, & Ivry, 2004). Given that the quality of information falls with increasing distance from the fixated point, can we obtain the level of information required to accurately guide each hand for precision grasping of separate objects without moving our eyes to fixate each target separately? The purpose of the current study was to examine how the temporal coordination between the upper limbs is affected by the quality of visual information available during the performance of a bimanual task. A total of 11 participants performed congruent and incongruent movements towards near and/or far objects. Movements were performed in natural, fixate-centre, fixate-left, and fixate-right vision conditions. Analyses revealed that the transport phase of incongruent movements was similar across vision conditions for the temporal aspects of both the transport and grasp, whereas the spatial aspects of grasp formation were influenced by the quality of visual feedback. We suggest that bimanual coordination of the temporal aspects of reach-to-grasp movements is not influenced solely by overt shifts in visual attention but instead is influenced by a combination of factors in a task-constrained way.

13.
Some things look more complex than others. For example, a crenulate and richly organized leaf may seem more complex than a plain stone. What is the nature of this experience—and why do we have it in the first place? Here, we explore how object complexity serves as an efficiently extracted visual signal that the object merits further exploration. We algorithmically generated a library of geometric shapes and determined their complexity by computing the cumulative surprisal of their internal skeletons—essentially quantifying the “amount of information” within each shape—and then used this approach to ask new questions about the perception of complexity. Experiments 1–3 asked what kind of mental process extracts visual complexity: a slow, deliberate, reflective process (as when we decide that an object is expensive or popular) or a fast, effortless, and automatic process (as when we see that an object is big or blue)? We placed simple and complex objects in visual search arrays and discovered that complex objects were easier to find among simple distractors than simple objects were among complex distractors—a classic search asymmetry indicating that complexity is prioritized in visual processing. Next, we explored the function of complexity: Why do we represent object complexity in the first place? Experiments 4–5 asked subjects to study serially presented objects in a self-paced manner (for a later memory test); subjects dwelled longer on complex objects than simple objects—even when object shape was completely task-irrelevant—suggesting a connection between visual complexity and exploratory engagement. Finally, Experiment 6 connected these implicit measures of complexity to explicit judgments. Collectively, these findings suggest that visual complexity is extracted efficiently and automatically, and even arouses a kind of “perceptual curiosity” about objects that encourages subsequent attentional engagement.
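The "cumulative surprisal" idea can be illustrated with a toy stand-in. The study above computes surprisal over internal shape skeletons; the sketch below instead scores a polygon by the surprisal of its contour turning angles under an assumed prior that favors shallow turns. The angle prior, the functions, and the contour-based simplification are all illustrative assumptions, not the authors' algorithm.

```python
import math

# Toy "complexity as cumulative surprisal" score for a closed polygon.
# Assumption: turning angle a at each vertex has prior p(a) ∝ exp(-|a|)
# on [-pi, pi], so sharp or frequent turns carry more information.

def turning_angles(points):
    """Signed exterior turning angle (radians) at each vertex of a closed polygon."""
    n = len(points)
    angles = []
    for i in range(n):
        ax, ay = points[i - 1]
        bx, by = points[i]
        cx, cy = points[(i + 1) % n]
        h1 = math.atan2(by - ay, bx - ax)          # incoming heading
        h2 = math.atan2(cy - by, cx - bx)          # outgoing heading
        d = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        angles.append(d)
    return angles

def complexity(points):
    """Cumulative surprisal: -sum log p(angle) over all vertices."""
    z = 2.0 * (1.0 - math.exp(-math.pi))  # normalizer of exp(-|a|) over [-pi, pi]
    return sum(abs(a) + math.log(z) for a in turning_angles(points))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# An 8-vertex star with alternating outer/inner radii: more and sharper turns.
star = [( (2.0 if k % 2 == 0 else 0.5) * math.cos(math.pi * k / 4),
          (2.0 if k % 2 == 0 else 0.5) * math.sin(math.pi * k / 4) )
        for k in range(8)]
assert complexity(star) > complexity(square)  # spikier shape, more surprisal
```

For any simple closed polygon the signed turns sum to ±2π, so the score grows both with the number of vertices and with how much the contour wiggles beyond that minimum, which loosely matches the paper's intuition that "richly organized" shapes carry more information.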

14.
Previous research has identified multiple features of individual objects that are capable of guiding visual attention. However, in dynamic multi-element displays not only individual object features but also changing spatial relations between two or more objects might signal relevance. Here we report a series of experiments that investigated the hypothesis that reduced inter-object spacing guides visual attention toward the corresponding objects. Our participants discriminated between different probes that appeared on moving objects while we manipulated spatial proximity between the objects at the moment of probe onset. Indeed, our results confirm that there is a bias toward temporarily close objects, which persists even when such a bias is harmful for the actual task (Experiments 1a and 1b). Remarkably, this bias is mediated by oculomotor processes. Controlling for eye-movements reverses the pattern of results (Experiment 2a), whereas the location of the gaze tends toward the temporarily close objects under free viewing conditions (Experiment 2b). Taken together, our results provide insights into the interplay of attentional and oculomotor processes during dynamic scene processing. Thereby, they also add to the growing body of evidence showing that within dynamic perception, attentional and oculomotor processes act conjointly and are hardly separable.

15.
Attention operates perceptually on items in the environment, and internally on objects in visuospatial working memory. In the present study, we investigated whether spatial and temporal constraints affecting endogenous perceptual attention extend to internal attention. A retro-cue paradigm in which a cue is presented beyond the range of iconic memory and after stimulus encoding was used to manipulate shifts of internal attention. Participants' memories were tested for colored circles (Experiments 1, 2, 3a, 4) or for novel shapes (Experiment 3b) and their locations within an array. In these experiments, the time to shift internal attention (Experiments 1 and 3) and the eccentricity of encoded objects (Experiments 2–4) were manipulated. Our data showed that, unlike endogenous perceptual attention, internal shifts of attention are not modulated by stimulus eccentricity. Across several timing parameters and stimuli, we found that shifts of internal attention require a minimum quantal amount of time regardless of the object eccentricity at encoding. Our findings are consistent with the view that internal attention operates on objects whose spatial information is represented in relative terms. Although endogenous perceptual attention abides by the laws of space and time, internal attention can shift across spatial representations without regard for physical distance.

16.
Given Leonardo's constraint that 2 opaque objects cannot be seen in the same direction, how are the regions of objects occluded to 1 eye included in perception? To answer this question, the authors presented 3-dimensional stimuli, similar to the ones that concerned Leonardo, and measured the visual directions of their monocular and binocular regions. When the distance between near and far objects was large, the nonfixated object was seen as double and blurry. Leonardo's constraint was met by seeing the near object as double and transparent or the distant object as double and superimposed. When the distance between near and far objects was small, the constraint was met by a perceptual displacement and compression of parts of the nonfixated object.

17.
We present a computational framework for attention-guided visual scene exploration in sequences of RGB-D data. For this, we propose a visual object candidate generation method to produce object hypotheses about the objects in the scene. An attention system is used to prioritise the processing of visual information by (1) localising candidate objects, and (2) integrating an inhibition of return (IOR) mechanism grounded in spatial coordinates. This spatial IOR mechanism naturally copes with camera motions and inhibits objects that have already been the target of attention. Our approach provides object candidates which can be processed by higher cognitive modules such as object recognition. Since objects are basic elements for many higher level tasks, our architecture can be used as a first layer in any cognitive system that aims at interpreting a stream of images. We show in the evaluation how our framework finds most of the objects in challenging real-world scenes.
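The key design choice above, grounding IOR in spatial (world) coordinates rather than image coordinates, can be sketched minimally: if inhibited locations live in the world frame, camera motion changes where an object projects in the image but not whether it has already been attended. The class name, the fixed suppression radius, and the first-free-candidate selection rule below are illustrative assumptions, not the authors' API.

```python
import math

# Hedged sketch of an inhibition-of-return (IOR) map anchored in world
# coordinates, in the spirit of the RGB-D framework described above.

class SpatialIOR:
    def __init__(self, radius=0.25):
        self.radius = radius   # suppression radius in world units (e.g., metres)
        self.visited = []      # world-frame 3-D points already attended

    def inhibit(self, point):
        self.visited.append(point)

    def is_inhibited(self, point):
        # A candidate is suppressed if it lies near any previously attended point.
        return any(math.dist(point, v) <= self.radius for v in self.visited)

    def next_target(self, candidates):
        """Return the first candidate outside all inhibited regions (and
        inhibit it), or None when every candidate has been attended."""
        for c in candidates:
            if not self.is_inhibited(c):
                self.inhibit(c)
                return c
        return None

ior = SpatialIOR(radius=0.25)
cup, bowl = (1.0, 0.2, 0.5), (1.3, 0.2, 0.5)
assert ior.next_target([cup, bowl]) == cup
# Because IOR lives in world coordinates, the cup stays inhibited even if
# the camera moves and the same world point is re-observed:
assert ior.next_target([cup, bowl]) == bowl
assert ior.next_target([cup, bowl]) is None
```

An image-coordinate IOR map would need to be warped on every camera motion; storing visited points in the world frame sidesteps that entirely, at the cost of requiring registered depth data.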

18.
Object names are a major component of early vocabularies and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment‐to‐moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image‐level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head‐mounted eye tracking, the present study objectively measured individual differences in the moment‐to‐moment variability of visual instances of the same object, from infants’ first‐person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants’ everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.

19.
An implicit assumption of studies in the attentional literature has been that global and local levels of attention are involved in object recognition. To investigate this assumption, a divided attention task was used in which hierarchical figures were presented to prime the subsequent discrimination of target objects at different levels of category identity (basic and subordinate). Target objects were identified among distractor objects that varied in their degree of visual similarity to the targets. Hierarchical figures were also presented at different sizes and as individual global and local elements in order to investigate whether attention-priming effects on object discrimination were due to grouping/parsing operations or spatial extent. The results showed that local processing primed subordinate object discriminations when the objects were visually similar. Global processing primed basic object discriminations, regardless of the similarity of the distractors, and subordinate object discriminations when the objects were visually dissimilar. It was proposed that global and local processing aids the selection of perceptual attributes of objects that are diagnostic for recognition and that selection is based on two mechanisms: spatial extent and grouping/parsing operations.

20.
Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., “spinach”; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.


Copyright © Beijing Qinyun Technology Development Co., Ltd. | 京ICP备09084417号