Similar Literature
20 similar documents found (search time: 31 ms)
1.
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

2.
3.
Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual perception can directly determine how the information is going to be selected, consolidated, and maintained in VWM. We demonstrate the validity of this hypothesis by investigating what kinds of perceptual information can be stored as integrated objects in VWM. Three criteria for object-based storage are introduced: (a) automatic selection of task-irrelevant features, (b) synchronous consolidation of multiple features, and (c) stable maintenance of feature conjunctions. The results show that the outputs of parallel perception meet all three criteria, as opposed to the outputs of serial attentive processing, which fail all three criteria. These results indicate that (a) perception and VWM are not two sequential processes, but are dynamically intertwined; (b) there are dissociated mechanisms in VWM for storing information identified at different stages of perception; and (c) the integrated object representations in VWM originate from the "preattentive" or "proto" objects created by parallel perception. These results suggest how visual perception, attention, and VWM can be explained by a unified framework.

4.
Recent research suggests that information held in working memory can facilitate subsequent attentional processing. Here, we explore the negative corollary of this conception: Under which circumstances does information in working memory disrupt subsequent processing? Seventy participants performed visual discriminations in a dual-task paradigm. They were asked to judge colors or shapes in an online attention task under three different working-memory conditions: Same, Switch, or Unknown. In the Same condition, participants selectively maintained one visual feature in working memory, from the same dimension as in the online attention task. In the Switch condition, participants selectively maintained one visual feature in working memory, but had to focus on another visual dimension in the online attention task. In the Unknown condition, participants could not predict which visual feature would be relevant for the working-memory task. We found that irrelevant features in the online attention task were particularly difficult to ignore in the Switch condition, that is, when the irrelevant features belong to a visual dimension that is simultaneously prioritized in selective working memory. The findings are consistent with accounts in terms of neural overlap between working-memory and attention circuits, and suggest that mechanisms of selection, rather than resource limitations, critically determine the extent of visual interference.

5.
We examined two theories of visual search: resource depletion, grounded in a static, built-in brain architecture, with attention seen as a limited depletable resource, and system reconfiguration, in which the visual system is dynamically reconfigured from moment to moment so as to optimize performance on the task at hand. In a dual-task paradigm, a search display was preceded by a visual discrimination task and was followed, after a stimulus onset asynchrony (SOA) governed by a staircase procedure, by a pattern mask. Search efficiency, as indexed by the slope of the function relating critical SOA to number of distractors, was impaired under dual-task conditions for tasks that were performed efficiently (shallow search slope) when done singly, but not for tasks performed inefficiently (steep slope) when done singly. These results are consistent with system reconfiguration, but not with resource depletion, models and point to a dynamic, rather than a static, architecture of the visual system.
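A minimal sketch of the efficiency measure described above: the slope of the line relating critical SOA to the number of distractors, estimated with an ordinary least-squares fit. All numbers are hypothetical, not data from the study.

```python
# Minimal sketch (hypothetical values): search efficiency as the slope of
# critical SOA against the number of distractors.
import numpy as np

set_sizes = np.array([4, 8, 12, 16])           # number of distractors per display

# Hypothetical critical SOAs (ms) produced by the staircase procedure
soa_single_task = np.array([45, 52, 60, 66])   # task performed alone (shallow slope)
soa_dual_task   = np.array([48, 70, 95, 118])  # same task under dual-task load

def search_slope(set_sizes, critical_soa):
    """Slope (ms per distractor) of the line relating critical SOA to set size."""
    slope, intercept = np.polyfit(set_sizes, critical_soa, 1)
    return slope

print("single-task slope:", search_slope(set_sizes, soa_single_task))
print("dual-task slope:  ", search_slope(set_sizes, soa_dual_task))
```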

6.
Several recent theories of visual information processing have postulated that errors in recognition may result not only from a failure in feature extraction, but also from a failure to correctly join features after they have been correctly extracted. Errors that result from incorrectly integrating features are called conjunction errors. The present study uses conjunction errors to investigate the principles used by the visual system to integrate features. The research tests whether the visual system is more likely to integrate features located close together in visual space (the location principle) or whether the visual system is more likely to integrate features from stimulus items that come from the same perceptual group or object (the perceptual group principle). In four target-detection experiments, stimuli were created so that feature integration by the location principle and feature integration by the perceptual group principle made different predictions for performance. In all of the experiments, the perceptual group principle predicted feature integration even though the distance between stimulus items and retinal eccentricity were strictly controlled.

7.
Data from visual-search tasks are typically interpreted to mean that searching for targets defined by feature differences does not require attention and thus can be performed in parallel, whereas searching for other targets requires serial allocation of attention. The question addressed here was whether a parallel-serial dichotomy would be obtained if data were collected using a variety of targets representing each of several kinds of defining features. Data analyses included several computations in addition to search rate: (1) target-absent to target-present slope ratios; (2) two separate data transformations to control for errors; (3) minimum reaction time; and (4) slopes of standard deviation as a function of set size. Some targets showed strongly parallel or strongly serial search, but there was evidence for several intermediate search classes. Sometimes, for a given target-distractor pair, the results depended strongly on which character was the target and which was the distractor. Implications from theories of visual search are discussed.
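For illustration, the sketch below shows how a few of the analyses listed above might be computed: the search slope, the target-absent to target-present slope ratio, and the slope of the RT standard deviation against set size. The reaction-time values are made up and do not come from the study.

```python
# Illustrative sketch (hypothetical reaction-time data): search slopes, the
# absent/present slope ratio, and the slope of RT standard deviation vs. set size.
import numpy as np

set_sizes = np.array([4, 8, 16, 32])

# Hypothetical mean RTs (ms) and RT standard deviations per set size
rt_present = np.array([520, 560, 640, 800])
rt_absent  = np.array([530, 610, 770, 1090])
sd_present = np.array([60, 70, 95, 140])

def slope(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

present_slope = slope(set_sizes, rt_present)   # ms per item, target-present trials
absent_slope  = slope(set_sizes, rt_absent)    # ms per item, target-absent trials
sd_slope      = slope(set_sizes, sd_present)   # growth of RT variability with set size

print(f"present slope: {present_slope:.1f} ms/item")
print(f"absent slope:  {absent_slope:.1f} ms/item")
print(f"absent/present slope ratio: {absent_slope / present_slope:.2f}")
print(f"SD slope: {sd_slope:.1f} ms/item")
```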

8.

9.
In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.

10.
Confirmation bias has recently been reported in visual search, where observers given a perceptual rule to test (e.g., "Is the p on a red circle?") preferentially search stimuli that could confirm the rule (Rajsic, Wilson, & Pratt, Journal of Experimental Psychology: Human Perception and Performance, 41(5), 1353–1364, 2015). In this study, we compared the ability of concrete and abstract visual templates to guide attention using the visual confirmation bias. Experiment 1 showed that confirmatory search tendencies do not result from simple low-level priming, as they occurred when color templates were verbally communicated. Experiment 2 showed that confirmation bias did not occur when targets had to be reported in terms of the absence of a feature (i.e., reporting whether a target was on a nonred circle). Experiment 3 showed that confirmatory search also did not occur when search prompts referred to a set of visually heterogeneous features (i.e., reporting whether a target was on a colorful circle, regardless of the color). Together, these results show that the confirmation bias likely results from a matching heuristic, such that visual codes involved in representing the search goal prioritize stimuli possessing these features.

11.

Theories of visual attention hypothesize that target selection depends upon matching visual inputs to a memory representation of the target – i.e., the target or attentional template. Most theories assume that the template contains a veridical copy of target features, but recent studies suggest that target representations may shift "off veridical" from actual target features to increase target-to-distractor distinctiveness. However, these studies have been limited to simple visual features (e.g., orientation, color), which leaves open the question of whether similar principles apply to complex stimuli, such as a face depicting an emotion, the perception of which is known to be shaped by conceptual knowledge. In three studies, we find confirmatory evidence for the hypothesis that attention modulates the representation of an emotional face to increase target-to-distractor distinctiveness. This occurs over and above strong pre-existing conceptual and perceptual biases in the representation of individual faces. The results are consistent with the view that visual search accuracy is determined by the representational distance between the target template in memory and distractor information in the environment, not the veridical target and distractor features.


12.
The representation of visual information inside the focus of attention is more precise than the representation of information outside the focus of attention. We found that the visual system can compensate for the cost of withdrawing attention by pooling noisy local features and computing summary statistics. The location of an individual object is a local feature, whereas the center of mass of several objects (centroid) is a summary feature representing the mean object location. Three experiments showed that withdrawing attention degraded the representation of individual positions more than the representation of the centroid. It appears that information outside the focus of attention can be represented at an abstract level that lacks local detail, but nevertheless carries a precise statistical summary of the scene. The term ensemble features refers to a broad class of statistical summary features that we propose collectively make up the representation of information outside the focus of attention.
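The statistical point behind the centroid result can be illustrated with a small simulation (all parameters assumed, not taken from the study): because the centroid is the mean of the item locations, independent position noise averages out, so the summary can remain precise even when each individual location is represented coarsely.

```python
# Minimal simulation (assumed parameters): pooling noisy local positions into a
# centroid. The error of the mean shrinks roughly as 1/sqrt(n_items), which is
# why a summary statistic can stay precise while individual positions are noisy.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_trials, noise_sd = 8, 10_000, 1.0

true_positions = rng.uniform(-5, 5, size=n_items)
true_centroid = true_positions.mean()

# Each position is encoded with independent Gaussian noise on every trial
noisy = true_positions + rng.normal(0, noise_sd, size=(n_trials, n_items))

individual_error = np.abs(noisy[:, 0] - true_positions[0]).mean()
centroid_error = np.abs(noisy.mean(axis=1) - true_centroid).mean()

print(f"mean error of one position: {individual_error:.3f}")
print(f"mean error of the centroid: {centroid_error:.3f}")
```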

13.
The model presented here is an attempt to explain the results from a number of different studies in visual attention, including parallel feature searches and serial conjunction searches, variations in search slope with variations in feature contrast and individual subject differences, attentional gradients triggered by cuing, feature-driven spatial selection, split attention, inhibition of distractor locations, and flanking inhibition. The model is implemented in a neural network consisting of a hierarchy of spatial maps. Attentional gates control the flow of information from each level of the hierarchy to the next. The gates are jointly controlled by a Bottom-Up System favoring locations with unique features and a Top-Down System favoring locations with features designated as target features. Because the gating of each location depends on the features present there, the model is called FeatureGate.
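The gating idea can be caricatured in a few lines. The sketch below is only an illustration of joint bottom-up and top-down control of location gates, not the published FeatureGate implementation; the display, the feature coding, and the equal weighting of the two systems are all assumptions.

```python
# Illustrative sketch only (not the published FeatureGate model): one gating
# step in which each location's gate reflects a bottom-up term (how much its
# feature deviates from the display) plus a top-down term (match to the
# designated target feature).
import numpy as np

rng = np.random.default_rng(1)

# An 8x8 display of binary feature values, e.g. 0 = green, 1 = red
features = rng.choice([0.0, 1.0], size=(8, 8), p=[0.8, 0.2])
target_feature = 1.0

# Bottom-up system: locations whose feature deviates from the display average stand out
bottom_up = np.abs(features - features.mean())

# Top-down system: locations matching the designated target feature are favored
top_down = (features == target_feature).astype(float)

gates = bottom_up + top_down      # joint control of each location's gate
gates = gates / gates.sum()       # normalize to a gating distribution

attended = np.unravel_index(gates.argmax(), gates.shape)
print("most strongly gated location:", attended)
```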

14.
We live in a dynamic environment in which objects change location over time. To maintain stable object representations, the visual system must determine how newly sampled information relates to existing object representations: the correspondence problem. Spatiotemporal information is clearly an important factor that the visual system takes into account when solving the correspondence problem, but is feature information irrelevant, as some theories suggest? The Ternus display provides a context in which to investigate solutions to the correspondence problem. Two sets of three horizontally aligned disks, shifted by one position, were presented in alternation. Depending on how correspondence is resolved, these displays are perceived either as one disk "jumping" from one end of the row to the other (element motion) or as a set of three disks shifting back and forth together (group motion). We manipulated a feature (e.g., color) of the disks such that, if features were taken into account by the correspondence process, the manipulation would bias the resolution of correspondence toward one version or the other. Features determined correspondence, whether they were luminance-defined or not. Moreover, correspondence could be established on the basis of similarity, when features were not identical between alternations. Finally, the more strongly the feature information supported a particular correspondence solution, the more it dominated spatiotemporal information.

15.
Current theories assume that there is substantial overlap between visual working memory (VWM) and visual attention, such that active representations in VWM automatically act as an attentional set, resulting in attentional biases towards objects that match the mnemonic content. Most evidence for this comes from visual search tasks in which a distractor similar to the memorized item interferes with the detection of a simultaneous target. Here we provide additional evidence using one of the most popular paradigms in the literature for demonstrating an active attentional set: the contingent spatial orienting paradigm of Folk and colleagues. This paradigm allows memory-based attentional biases to be attributed more directly to spatial orienting. Experiment 1 demonstrated a memory-contingent spatial attention effect for colour but not for shape contents of VWM. Experiment 2 tested the hypothesis that the placeholders used for spatial cueing interfered with shape processing, and showed that memory-based attentional capture for shape returned when the placeholders were removed. The results of the present study are consistent with earlier findings from distractor interference paradigms, and provide additional evidence that biases in spatial orienting contribute to memory-based influences on attention.

16.
Models of human visual processing start with an initial stage of parallel, independent processing of different physical attributes or features (e.g., color, orientation, motion). A second stage in these models is a temporally serial mechanism (visual attention) that combines or binds information across feature dimensions. Evidence for this serial mechanism is based on experimental results for visual search. I conducted a study of visual search accuracy that carefully controlled for low-level effects: physical similarity of target and distractor, element eccentricity, and eye movements. The larger set-size effects in visual search accuracy for briefly flashed conjunction displays, compared with feature displays, are quantitatively predicted by a simple model in which each feature dimension is processed independently with inherent neural noise and information is combined linearly across feature dimensions. The data are not predicted by a temporally serial mechanism or by a hybrid model with temporally serial and noisy processing. The results do not support the idea that a temporally serial mechanism, visual attention, binds information across feature dimensions, and show that the conjunction-feature dichotomy is due to the noisy independent processing of features in the human visual system.
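A rough simulation of the class of model described, under assumed parameters rather than the author's fitted values: each feature dimension is coded with independent Gaussian noise, evidence is summed linearly across dimensions for conjunction displays, and the item with the largest evidence is chosen. Even without a serial mechanism, conjunction accuracy falls more steeply with set size than feature accuracy, because the conjunction decision pools noise from two dimensions.

```python
# Rough sketch (assumed parameters): independent noisy feature channels,
# linear combination across dimensions, max-evidence choice of the target.
import numpy as np

rng = np.random.default_rng(2)
n_trials, d, sigma = 20_000, 2.0, 1.0   # d: target-distractor difference per dimension

def accuracy(set_size, conjunction):
    """Proportion of trials on which the max-evidence choice lands on the target (item 0)."""
    color = np.zeros(set_size)
    orient = np.zeros(set_size)
    color[0] = orient[0] = d                        # item 0 is the target
    if conjunction:
        half = 1 + (set_size - 1) // 2
        color[1:half] = d                           # distractors sharing the target's color
        orient[half:] = d                           # distractors sharing the target's orientation
        # Both dimensions are informative, so noisy evidence is summed across them.
        evidence = (color + rng.normal(0, sigma, (n_trials, set_size)) +
                    orient + rng.normal(0, sigma, (n_trials, set_size)))
    else:
        # Feature display: distractors differ from the target on color alone,
        # so only that (noisy) dimension is monitored.
        evidence = color + rng.normal(0, sigma, (n_trials, set_size))
    return np.mean(evidence.argmax(axis=1) == 0)

for n in (4, 8, 16):
    print(f"set size {n:2d}   feature: {accuracy(n, False):.2f}   conjunction: {accuracy(n, True):.2f}")
```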

17.
This study examined how 1 symbol is selected to control the allocation of attention when several symbols appear in the visual field. In Experiments 1-3, the critical target feature was color, and it was found that uninformative central arrows that matched the color of the target were selected and produced unintentional shifts of attention (i.e., involuntary, initiated slowly, producing long-lasting facilitatory effects). Experiment 4 tested whether such selection is the result of an attentional filter or of a competition bias due to a match of incoming information against integrated object representations stored in working memory. Here, the critical feature was shape and color was irrelevant, but matching color arrows were still selected. Thus, features of objects in working memory will bias the selection of symbols in the visual field, and such selected symbols are capable of producing unintentional shifts of attention.

18.
Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect is capacity demanding, and therefore manipulated the set size of the display. The results indicated a clear demand on cognitive processing capacity: the magnitude of the effect decreased for the larger set size. Consequently, in Experiment 2, we investigated whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, and therefore manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

19.
Binding in short-term visual memory
The integration of complex information in working memory, and its effect on capacity, shape the limits of conscious cognition. The literature conflicts on whether short-term visual memory represents information as integrated objects. A change-detection paradigm using objects defined by color with location or shape was used to investigate binding in short-term visual memory. Results showed that features from the same dimension compete for capacity, whereas features from different dimensions can be stored in parallel. Binding between these features can occur, but focused attention is required to create and maintain the binding over time, and this integrated format is vulnerable to interference. In the proposed model, working memory capacity is limited both by the independent capacity of simple feature stores and by demands on attention networks that integrate this distributed information into complex but unified thought objects.

20.
It is widely assumed that the separable features of visual objects, such as their colors and shapes, require attention to be integrated. However, the evidence in favor of this claim comes from experiments in which the colors and shapes of objects would have to be integrated and then also subjected to an arbitrary, instruction-based, stimulus-response (S-R) translation in order to have an observable effect. This raises the possibility that attention is not required for feature integration, per se, but is only required when color-shape conjunctions must undergo an arbitrary S-R translation. The present study conducted a more specific test and found strong evidence in favor of feature integration in the absence of attention. The implications of these results are discussed.
