Similar Articles
20 similar articles were found.
1.
This study investigates the separate activation versus coactivation issue for redundant targets in a simple letter-detection paradigm with latency as the dependent variable. The results of a one-response visual-search task are reported. Since, on single-target trials, only the target was presented, with no accompanying noise element, no “distraction decrement” caused by irrelevant noise elements (Grice et al., 1984) was to be expected. The data obtained showed a clear redundant-signal effect. Subsequent detailed analysis of the latency data using Miller’s (1982) procedure indicated that the results were consistent with a separate activation model and failed to provide convincing evidence in favor of coactivation models. A further analysis of the data indicated that, in the present study, the separate channels were negatively correlated for a range of fast RTs and positively correlated for intermediate and larger RTs. No evidence in favor of Grice et al.’s (1984) distraction-decrement hypothesis was found. The conclusions of this study are that (1) a separate activation model summarizes the essential features of information processing in this simple visual search task, and (2) no convincing evidence in favor of coactivation in visual search tasks has been reported in the literature up to now.
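For readers unfamiliar with Miller's (1982) procedure, the sketch below illustrates the race model inequality test it is based on, written in Python. The data, grid, and variable names are hypothetical; the study's actual analysis may differ in binning and statistical testing.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times evaluated on a time grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_redundant, rt_single_a, rt_single_b, t_grid):
    """Miller's (1982) race model inequality: under separate activation,
    P(RT <= t | redundant) <= P(RT <= t | single A) + P(RT <= t | single B).
    Returns the signed difference; positive values indicate violations,
    usually interpreted as evidence for coactivation."""
    bound = ecdf(rt_single_a, t_grid) + ecdf(rt_single_b, t_grid)
    return ecdf(rt_redundant, t_grid) - np.minimum(bound, 1.0)

# Hypothetical RT samples (ms) for illustration only.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)
rt_b = rng.normal(430, 60, 200)
rt_red = rng.normal(390, 55, 200)
grid = np.linspace(250, 650, 41)
violations = race_model_violation(rt_red, rt_a, rt_b, grid)
print("max violation:", violations.max())  # values <= 0 are consistent with separate activation
```

Systematic positive violations at fast RTs are the usual signature of coactivation; their absence, as reported in this study, is consistent with a separate activation account.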

2.
Serial and parallel search in pattern vision?
S. B. Steinman, Perception, 1987, 16(3), 389–398
The nature of the processing of combinations of stimulus dimensions in human vision has recently been investigated. A study is reported in which visual search for suprathreshold positional information (vernier offsets, stereoscopic disparity, lateral separation, and orientation) was examined. The initial results showed that reaction times for visual search for conjunctions of stereoscopic disparity and either vernier offsets or orientation were independent of the number of distracting stimuli displayed, suggesting that disparity was searched in parallel with vernier offsets or orientation. Conversely, reaction times for detection of conjunctions of vernier offsets and orientation, or lateral separation and each of the other positional judgements, were related linearly to the number of distractors, suggesting serial search. However, practice had a significant effect upon the results, indicative of a shift in the mode of search from serial to parallel for all conjunctions tested as well as for single features. This suggests a reinterpretation of these and perhaps other studies that use the Treisman visual search paradigm, in terms of perceptual segregation of the visual field by disparity, motion, color, and pattern features such as collinearity, orientation, lateral separation, or size.

3.
The primate visual system responds to shapes, colors, and various other features more strongly in some brain areas than others. However, it remains unclear how these features are bound together so that an object with all its attributes is perceived. A patient (R.M.) with bilateral parietal-occipital lesions has been shown previously to miscombine the shape and color of items, making errors known as illusory conjunctions (ICs). In this study, we examined the effects of a third feature (motion) on this patient's IC rates. R.M. was presented with two letters that moved in different ways. He often reported seeing the shape of one of the letters with the other letter's motion. His performance on the same task with three features shows that correctly combining two features did not necessarily lead to correctly binding the third. These data support modularity of feature representations in the human brain and provide supporting evidence that spatial representations associated with the parietal lobe are necessary for normal feature integration.

4.
Increasing perceptual load reduces the processing of visual stimuli outside the focus of attention, but the mechanism underlying these effects remains unclear. Here we tested an account attributing the effects of perceptual load to modulations of visual cortex excitability. In contrast to stimulus competition accounts, which propose that load should affect simultaneous, but not sequential, stimulus presentations, the visual excitability account makes the novel prediction that load should affect detection sensitivity for both simultaneous and sequential presentations. Participants fixated a stimulus stream, responding to targets defined by either a color (low load) or color and orientation conjunctions (high load). Additionally, detection sensitivity was measured for a peripheral critical stimulus (CS) presented occasionally. Increasing load at fixation reduced sensitivity to the peripheral CSs; this effect was similar regardless of whether CSs were presented simultaneously with central stimuli or during the (otherwise empty) interval between them. Controls ruled out explanations of the results in terms of strategic task prioritization. These findings support a cortical excitability account for perceptual load, challenging stimulus competition accounts.
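Detection sensitivity for a peripheral critical stimulus in paradigms like this is commonly expressed as d' computed from hit and false-alarm rates; the snippet below shows that standard signal-detection computation in Python. The rates are invented for illustration and are not values reported in the study.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates for illustration: sensitivity drops under high central load.
print(d_prime(0.85, 0.10))  # low load
print(d_prime(0.70, 0.12))  # high load
```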

5.
Peripheral cues reduce reaction times (RTs) to targets at the cued location with short cue-target SOAs (cueing benefits) but increase RTs at long SOAs (cueing costs or inhibition of return). In detection tasks, cueing costs occur at shorter SOAs and are larger than in identification tasks. To account for effects of task, detection cost theory claims that the integration of cue and target into an object file makes it more difficult to detect the target as a new event, which is the principal task requirement in detection tasks. The integration of cue and target is expected to increase when cue and target are similar. We provided evidence for detection cost theory in the modified spatial cueing paradigm. Two types of cues (onset, color) were paired with two types of targets (onset, color) in separate blocks of trials. In the identification task, we found cueing benefits with matching (i.e., similar) cue-target pairs (onset-onset, color-color) and no cueing effects with nonmatching cue-target pairs (onset-color, color-onset), which replicates previous work. In the detection task, cueing effects with matching cues were reduced and even turned into cueing costs for onset cues with onset targets, suggesting that cue-target integration made it more difficult to detect targets at the cued location as new events. In contrast, the results for nonmatching cue-target pairs were not affected by task. Furthermore, the pattern of false alarms in the detection task provides a measure of similarity that may explain the size of cueing benefits and costs.

6.
Feature-integration theory postulates that a lapse of attention will allow letter features to change position and to recombine as illusory conjunctions (Treisman & Paterson, 1984). To study such errors, we used a set of uppercase letters known to yield illusory conjunctions in each of three tasks. The first, a bar-probe task, showed whole-character mislocations but not errors based on feature migration and recombination. The second, a two-alternative forced-choice detection task, allowed subjects to focus on the presence or absence of subletter features and showed illusory conjunctions based on feature migration and recombination. The third was also a two-alternative forced-choice detection task, but we manipulated the subjects' knowledge of the shape of the stimuli: In the case-certain condition, the stimuli were always in uppercase, but in the case-uncertain condition, the stimuli could appear in either upper- or lowercase. Subjects in the case-certain condition produced illusory conjunctions based on feature recombination, whereas subjects in the case-uncertain condition did not. The results suggest that when subjects can view the stimuli as feature groups, letter features regroup as illusory conjunctions; when subjects encode the stimuli as letters, whole items may be mislocated, but subletter features are not. Thus, illusory conjunctions reflect the subject's processing strategy, rather than the architecture of the visual system.

7.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.

8.
Multisensory integration of nonspatial features between vision and touch was investigated by examining the effects of redundant signals of visual and tactile inputs. In the present experiments, visual letter stimuli and/or tactile letter stimuli were presented, which participants were asked to identify as quickly as possible. The results of Experiment 1 demonstrated faster reaction times for bimodal stimuli than for unimodal stimuli (the redundant-signals effect, RSE). The RSE was due to coactivation of figural representations from the visual and tactile modalities. This coactivation did not occur for a simple stimulus detection task (Experiment 2) or for bimodal stimuli with the same semantic information but different physical stimulus features (Experiment 3). The findings suggest that the integration process might occur at a relatively early stage of object identification, prior to the decision level.

9.
C. Casco & G. Ganis, Perception, 1999, 28(1), 89–108
A series of experiments was conducted to determine whether apparent motion tends to follow the similarity rule (i.e. is attribute-specific) and to investigate the underlying mechanism. Stimulus duration thresholds were measured during a two-alternative forced-choice task in which observers detected either the location or the motion direction of target groups defined by the conjunction of size and orientation. Target element positions were randomly chosen within a nominally defined rectangular subregion of the display (target region). The target region was presented either statically (followed by a 250 ms duration mask) or dynamically, displaced by a small distance (18 min of arc) from frame to frame. In the motion display, the position of both target and background elements was changed randomly from frame to frame within the respective areas to abolish spatial correspondence over time. Stimulus duration thresholds were lower in the motion than in the static task, indicating that target detection in the dynamic condition does not rely on the explicit identification of target elements in each static frame. Increasing the distractor-to-target ratio was found to reduce detectability in the static, but not in the motion task. This indicates that the perceptual segregation of the target is effortless and parallel with motion but not with static displays. The pattern of results holds regardless of the task or search paradigm employed. The detectability in the motion condition can be improved by increasing the number of frames and/or by reducing the width of the target area. Furthermore, parallel search in the dynamic condition can be conducted with both short-range and long-range motion stimuli. Finally, apparent motion of conjunctions is insufficient on its own to support location decision and is disrupted by random visual noise. Overall, these findings show that (i) the mechanism underlying apparent motion is attribute-specific; (ii) the motion system mediates temporal integration of feature conjunctions before they are identified by the static system; and (iii) target detectability in these stimuli relies upon a nonattentive, cooperative, directionally selective motion mechanism that responds to high-level attributes (conjunction of size and orientation).

10.
Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed-accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed.
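The response-signal SAT procedure is usually summarized by fitting a shifted exponential approach to an asymptote, with the asymptote indexing discrimination and the rate and intercept indexing processing dynamics. The sketch below fits such a curve in Python; the functional form is the conventional one, but the lag and d' values are invented for illustration and are not taken from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, lam, beta, delta):
    """Shifted exponential approach to an asymptote, a standard SAT summary:
    d'(t) = lam * (1 - exp(-beta * (t - delta))) for t > delta, else 0.
    lam = asymptotic accuracy, beta = rate, delta = intercept."""
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Hypothetical processing times (s) and d' values for illustration only.
lags = np.array([0.1, 0.2, 0.35, 0.5, 0.8, 1.2, 2.0])
dprime = np.array([0.1, 0.7, 1.4, 1.9, 2.4, 2.6, 2.7])
params, _ = curve_fit(sat_curve, lags, dprime, p0=[2.5, 3.0, 0.05])
print("asymptote, rate, intercept:", params)
```

Set-size effects on the asymptote would indicate limits on discrimination, whereas effects on the rate or intercept would indicate slower processing dynamics, which is the contrast the study relies on.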

11.
A model is proposed for identification and response selection of cross-dimensional conjunctive stimuli. The model assumes that the formation of conjunction representations involves processes similar to those used in response selection for single-feature targets. It predicts that discrimination between conjunctive targets leads to separate competitions in each of the relevant component dimensions and that detection of a predefined single conjunctive target is done at the conjunctive map level. Experiments 1 and 2 support these two sets of predictions. Experiment 3 demonstrates that responses to conjunctions of features within the orientation dimension are qualitatively different from those for cross-dimensional conjunctive targets. It is speculated that line-orientation conjunctions are handled by the visual object-recognition system, whereas cross-dimensional conjunctions, as exemplified by the model, may be performed by a different system that is closely associated with response selection processes.

12.
Recent auditory research using sequentially presented, spatially fixed tones has found evidence that, as in vision for simultaneous, spatially distributed objects, attention appears to be important for the integration of perceptual features that enable the identification of auditory events. The present investigation extended these findings to arrays of simultaneously presented, spatially distributed musical tones. In the primary tasks, listeners were required to search for specific cued conjunctions of values for the features of pitch and instrument timbre. In secondary tasks, listeners were required to search for a single cued value of either the pitch or the timbre feature. In the primary tasks, listeners made frequent errors in reporting the presence or absence of target conjunctions. Probability modeling, derived from the visual search literature, revealed that the error rates in the primary tasks reflected the relatively infrequent failure to correctly identify pitch or timbre features, plus the far more frequent illusory conjunction of separately presented pitch and timbre features. Estimates of illusory conjunction rate ranged from 23% to 40%. Thus, a process must exist in audition that integrates separately registered features. The implications of the results for the processing of isolated auditory features, as well as auditory events defined by conjunctions of features, are discussed.
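The probability modeling mentioned here generally partitions conjunction-report errors into failures to register a feature and illusory conjunctions of correctly registered features. The sketch below is a deliberately simplified illustration of that partition under an independence assumption; it is not the authors' fitted model, and the parameter values are invented.

```python
def predicted_false_conjunction_rate(p_pitch, p_timbre, p_ic):
    """Illustrative decomposition (assumptions only):
    p_pitch, p_timbre = probabilities of correctly registering each feature,
    p_ic = probability that correctly registered features are miscombined
    (an illusory conjunction). Returns the predicted rate of reporting a
    target conjunction that was never actually presented as a bound pair."""
    return p_pitch * p_timbre * p_ic  # binding error despite correct feature registration

# Hypothetical parameter values for illustration.
print(predicted_false_conjunction_rate(0.90, 0.85, 0.30))  # ~0.23
```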

13.
Illusory conjunctions inside and outside the focus of attention
This article addresses 2 questions that arise from the finding that visual scenes are first parsed into visual features: (a) the accumulation of location information about objects during their recognition and (b) the mechanism for the binding of the visual features. The first 2 experiments demonstrated that when 2 colored letters were presented outside the initial focus of attention, illusory conjunctions between the color of one letter and the shape of the other were formed only if the letters were less than 1 degree apart. Separation greater than 2 degrees resulted in fewer conjunction errors than expected by chance. Experiments 3 and 4 showed that inside the spread of attention, illusory conjunctions between the 2 letters can occur regardless of the distance between them. In addition, these experiments demonstrated that the span of attention can expand or shrink like a spotlight. The results suggest that features inside the focus of attention are integrated by an expandable focal attention mechanism that conjoins all features that appear inside its focus. Visual features outside the focus of attention may be registered with coarse location information prior to their integration. Alternatively, a quick and imprecise shift of attention to the periphery may lead to illusory conjunctions among adjacent stimuli.

14.
Age differences in the redundant-signals effect and coactivation of visual dimensions were investigated in 2 experiments. In Experiment 1 the task required the conjoining of dimensions, whereas in Experiment 2 the spatial separation of dimensions was manipulated. Although coactivation was evident for both age groups when the redundant dimensions occurred at the same location, older adults showed more evidence for coactivation, perhaps because of compensation for declines in perceptual processing. When the redundant dimensions were separated, neither age group showed evidence for coactivation. These findings indicate that the coactive processing of redundant visual dimensions is spared in healthy older adults and that for both groups, attention must be focused on both dimensions for coactivation to occur.

15.

An abundance of recent empirical data suggest that repeatedly allocating visual attention to task-relevant and/or reward-predicting features in the visual world engenders an attentional bias for these frequently attended stimuli, even when they become task irrelevant and no longer predict reward. In short, attentional selection in the past hinders voluntary control of attention in the present. But do such enduring attentional biases rely on a history of voluntary, goal-directed attentional selection, or can they be generated through involuntary, effortless attentional allocation? An abrupt visual onset triggers such a reflexive allocation of covert spatial attention to its location in the visual field, automatically modulating numerous aspects of visual perception. In this Registered Report, we asked whether a selection history that has been reflexively and involuntarily derived (i.e., through abrupt-onset cueing) also interferes with goal-directed attentional control, even in the complete absence of exogenous cues. To build spatially distinct histories of exogenous selection, we presented abrupt-onset cues twice as often at one of two task locations, and as expected, these cues reflexively modulated visual processing: task accuracy increased, and response times (RTs) decreased, when the cue appeared near the target’s location, relative to that of the distractor. Upon removal of these cues, however, we found no evidence that exogenous selection history modulated task performance: task accuracy and RTs at the previously most-cued and previously least-cued sides were statistically indistinguishable. Thus, unlike voluntarily directed attention, involuntary attentional allocation may not be sufficient to engender historically contingent selection biases.


16.
If semantic representations are based on particular types of perceptual features, then category knowledge that arises from multimodal sensory experiences should rely on distinct and common sensory brain regions depending on the features involved. Using a similarity-based generation-and-comparison task, we found that semantic categories activated cortical areas associated with taste and smell, biological motion, and visual processing. Fruit names specifically activated medial orbitofrontal regions associated with taste and smell. Labels for body parts and clothing activated lateral temporal occipitoparietal areas associated with perceiving the human body. More generally, visually biased categories activated ventral temporal regions typically ascribed to visual object recognition, whereas functional categories activated lateral frontotemporal areas previously associated with the representation of usage properties. These results indicate that semantic categories that are distinguished by particular perceptual properties rely on distinct cortical regions, whereas semantic categories that rely on similar types of features depend on common brain areas.

17.
Accurate mental representation of visual stimuli requires retaining not only the individual features but also the correct relationship between them. This associative process of binding is mediated by working memory (WM) mechanisms. The present study re-examined reports of WM-related binding deficits with aging. In Experiment 1, 31 older and 31 younger adults completed a visual change detection task with feature–location relations presented either simultaneously or sequentially; the paradigm was also designed specifically to minimize the impact of lengthy retention intervals, elaborative rehearsal, and processing demands of multi-stimulus probes. In Experiment 2, 38 older and 42 younger adults completed a modified task containing both feature–location relations and feature–feature conjunctions. In Experiment 1, although feature–location binding was more difficult with sequential than with simultaneous presentation, the effect was independent of age. In Experiment 2, while older adults were overall slower and less accurate than young adults, there were no age-specific deficits in WM binding. Overall, after controlling for methodological factors, there was no evidence of an age-related visual WM binding deficit for surface or location features. However, unlike younger adults, older adults appeared less able to restrict processing of irrelevant features, consistent with reported declines with age in strategic capacities of WM.

18.
J. F. Delvenne, Cognition, 2005, 96(3), B79–B88
Visual short-term memory (VSTM) and attention are both thought to have a capacity limit of four items [e.g. Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279-281; Pylyshyn, Z. W., & Storm, R. W. (1988). Tracking multiple independent targets: evidence for a parallel tracking mechanism. Spatial Vision, 3, 179-197.]. Using the multiple object visual tracking paradigm (MOT), it has recently been shown that twice as many items can be simultaneously attended when they are separated between two visual fields compared to when they are all presented within the same hemifield [Alvarez, G. A., & Cavanagh, P. (2004). Independent attention resources for the left and right visual hemifields (Abstract). Journal of Vision, 4(8), 29a.]. Does VSTM capacity also increase when the items to be remembered are distributed between the two visual fields? The current paper investigated this central issue in two different tasks, namely a color and a spatial location change detection task, in which the items were displayed either in the two visual fields or in the same hemifield. The data revealed that only memory capacity for spatial locations, and not colors, increased when the items were separated between the two visual fields. These findings support the view of VSTM as a chain of capacity-limited operations where the spatial selection of stimuli, which dominates in both spatial location VSTM and MOT, occupies the first place and shows independence between the two fields.
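Capacity in change detection tasks of this kind is often estimated with Cowan's K; a minimal sketch follows, assuming a single-probe design with hit and false-alarm rates. The formula is the standard one, but the numbers are illustrative and not taken from the paper.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of VSTM capacity for single-probe change detection:
    K = N * (H - FA), where N is the number of items in the memory array."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical data: 6 items, 80% hits, 15% false alarms -> K = 3.9
print(cowan_k(6, 0.80, 0.15))
```

Comparing K across within-hemifield and across-hemifield displays is one straightforward way to express the bilateral advantage reported for spatial locations.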

19.
Two experiments are reported in which “same”-“different” reaction times (RTs) were collected to pairs of stimuli. In Experiment 1 stimuli were matrix patterns, and in Experiment 2 stimuli were digits. In both experiments, the pairs were presented simultaneously (discrimination task) and successively (memory task) for a set of nine simple and a set of nine complex stimuli. The following results were obtained: discrimination RTs were longer than memory RTs; RTs to complex stimuli were longer than RTs to simple stimuli; “same” RTs were faster than “different” RTs across all conditions except simple pattern discrimination, for which “different” RTs were faster than “same” RTs; and discrimination RTs for complex patterns were longer than would be predicted from the other conditions. Some evidence was obtained that the form of encoding for both patterns and digits in the memory task was visual. These results are discussed in terms of encoding and comparison strategies.

20.
Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target-defining dimension was color, a large effect of color repetition was seen as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target—the effect of motion direction repetition was this time larger than for color repetition. Finally, when neither was task relevant, and the target-defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and motion; these two features show independent and additive priming effects, most likely reflecting that the two features are processed at separate processing sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.
