Similar documents
20 similar documents found (search time: 0 ms)
1.
It has been reported that visual search is interfered with by a concurrent spatial working memory (WM) load, although not by a nonspatial WM load. In this study, the effects of individual differences in spatial and nonspatial WM on visual search were examined. Two visual search conditions were used: a conjunction search condition comprising two features (color and shape) and a disjunction condition comprising only one feature (color or shape). 96 participants (42 men, 54 women, M age = 20.9 yr., SD = 3.5) took part in this study. The participants were divided into high and low WM groups based on their spatial and nonspatial WM test scores. Statistically significant group differences in the conjunction search rate were observed for spatial WM but not for nonspatial WM. These results suggest a relationship between visual search and individual spatial WM ability, but this does not hold for nonspatial WM.

2.
The visual system has been proposed to be divided into two processing streams, the ventral and the dorsal. The ventral pathway is thought to be involved in object identification, whereas the dorsal pathway processes information regarding the spatial locations of objects and the spatial relationships among objects. Several studies on working memory (WM) processing have further suggested that there is a dissociable domain-dependent functional organization within the prefrontal cortex for processing of spatial and nonspatial visual information. The auditory system has also been proposed to be organized into two domain-specific processing streams, similar to those seen in the visual system. Recent studies on auditory WM have further suggested that maintenance of nonspatial and spatial auditory information activates a distributed neural network including temporal, parietal, and frontal regions, but the magnitude of activation within these areas shows a different functional topography depending on the type of information being maintained. The dorsal prefrontal cortex, specifically an area of the superior frontal sulcus (SFS), has been shown to exhibit greater activity for spatial than for nonspatial auditory tasks. Conversely, ventral frontal regions have been shown to be more recruited by nonspatial than by spatial auditory tasks. It has also been shown that the magnitude of this dissociation depends on the cognitive operations required during WM processing. Moreover, there is evidence that within the nonspatial domain in the ventral prefrontal cortex, there is an across-modality dissociation during maintenance of visual and auditory information. Taken together, human neuroimaging results on both the visual and auditory sensory systems support the idea that the prefrontal cortex is organized according to the type of information being maintained in WM.

3.
Even though it is undisputed that prior information regarding the location of a target affects visual selection, the issue of whether information regarding nonspatial features, such as color and shape, has similar effects has been a matter of debate since the early 1980s. In the study described in this article, measures derived from signal detection theory were used to show that perceptual sensitivity is affected by a top-down set for spatial information but not by a top-down set for nonspatial information. This indicates that knowing where the target singleton is affects perceptual selectivity but that knowing what it is does not help selectivity. Furthermore, perceptual sensitivity can be enhanced by nonspatial features, but only through a process related to bottom-up priming. These findings have important implications for models of visual selection.
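A worked sketch of the signal detection measure mentioned in this abstract: perceptual sensitivity is standardly computed as d′ = z(hit rate) − z(false-alarm rate). The hit and false-alarm rates below are invented for illustration, not data from the study.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Perceptual sensitivity d': z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative pattern only: a valid spatial set raises the hit rate
# without raising false alarms, which shows up as a higher d'.
print(round(d_prime(0.85, 0.15), 2))  # spatial set    → 2.07
print(round(d_prime(0.70, 0.15), 2))  # nonspatial set → 1.56
```

Because d′ corrects the hit rate for the false-alarm rate, it separates a genuine sensitivity change from a mere shift in response criterion, which is why the study could attribute the spatial-set benefit to perceptual selectivity.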

4.
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory–visual interaction, using an auditory–visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

5.
Is the quality of information obtained from simple auditory and visual signals diminished when both modalities must be attended to simultaneously? This question was investigated in an experiment in which subjects made forced-choice judgments of the location of simple light and tone signals presented in focused- and divided-attention conditions. The data are compared with the predictions of a model that describes the largest performance decrement to be expected in the divided-attention condition on the basis of nonattentional factors. The results of this comparison suggest that the difference in performance between focused- and divided-attention conditions is attributable solely to the increased opportunity to confuse signal with noise as the number of modalities is increased. Thus, there appears to be no evidence that dividing attention between modalities affects the quality of the stimulus representations of individual light and tone signals.

6.
Recent research has found visual object memory can be stored as part of a larger scene representation rather than independently of scene context. The present study examined how spatial and nonspatial contextual information modulate visual object memory. Two experiments tested participants’ visual memory by using a change detection task in which a target object's orientation was either the same as it appeared during initial viewing or changed. In addition, we examined the effect of spatial and nonspatial contextual manipulations on change detection performance. The results revealed that visual object representations can be maintained reliably after viewing arrays of objects. Moreover, change detection performance was significantly higher when either spatial or nonspatial contextual information remained the same in the test image. We concluded that while processing complex visual stimuli such as object arrays, visual object memory can be stored as part of a comprehensive scene representation, and both spatial and nonspatial contextual changes modulate visual memory retrieval and comparison.

7.
Past studies of simultaneous attention to pairs of visual stimuli have used the “dual-task” paradigm to show that identification of the direction of a change in luminance, whether incremental or decremental, is “capacity-limited,” while simple detection of these changes is governed by “capacity-free” processes. On the basis of that finding, it has been suggested that the contrast between identification and detection reflects different processes in the sensory periphery, namely the responses of magno- and parvocellular receptors. The present study questions that assertion and investigates the contribution of central processing in resource limitation by applying the dual task to a situation in which one stimulus is auditory and one is visual. The results are much the same as before, with identification demonstrating the tradeoff in performance generally attributed to a limited capacity but detection showing no loss compared with single-task controls. This implies that limitations on resources operate at a central level of processing rather than in the auditory and visual peripheries.

8.
In visual search, 30-40% of targets with a prevalence rate of 2% are missed, compared to 7% of targets with a prevalence rate of 50% (Wolfe, Horowitz, & Kenner, 2005). This "low-prevalence" (LP) effect is thought to occur because participants make motor errors, change their response criteria, and/or quit their search too soon. We investigate whether colour and spatial cues, known to improve visual search when the target has a high prevalence (HP), benefit search when the target is rare. Experiments 1 and 2 showed that although knowledge of the target's colour reduces miss errors overall, it does not eliminate the LP effect, as more targets were missed at LP than at HP. Furthermore, detection of a rare target is significantly impaired if it appears in an unexpected colour, more so than if the prevalence of the target is high (Experiment 2). Experiment 3 showed that, if a rare target is exogenously cued, target detection is improved but still impaired relative to high-prevalence conditions. Furthermore, if the cue is absent or invalid, the percentage of missed targets increases. Participants were given the option to correct motor errors in all three experiments, which reduced but did not eliminate the LP effect. The results suggest that although valid colour and spatial cues improve target detection, participants still miss more targets at LP than at HP. Furthermore, invalid cues at LP are very costly in terms of miss errors. We discuss our findings in relation to current theories and applications of LP search.

9.
People are able to judge the current position of occluded moving objects. This operation is known as motion extrapolation. It has previously been suggested that motion extrapolation is independent of the oculomotor system. Here we revisited this question by measuring eye position while participants completed two types of motion extrapolation task. In one task, a moving visual target travelled rightwards, disappeared, then reappeared further along its trajectory. Participants discriminated correct reappearance times from incorrect (too early or too late) with a two-alternative forced-choice button press. In the second task, the target travelled rightwards behind a visible, rectangular occluder, and participants pressed a button at the time when they judged it should reappear. In both tasks, performance was significantly different under fixation as compared to free eye movement conditions. When eye movements were permitted, eye movements during occlusion were related to participants' judgements. Finally, even when participants were required to fixate, small changes in eye position around fixation (<2°) were influenced by occluded target motion. These results all indicate that overlapping systems control eye movements and judgements on motion extrapolation tasks. This has implications for understanding the mechanism underlying motion extrapolation.

10.
Three experiments tested for developmental changes in attention to simple auditory and visual signals. Subjects pressed a single button in response to the onset (Experiment 1) or offset (Experiment 2) of either a tone or a light. During one block of trials subjects knew which stimulus would come on or go off on each trial (precue condition), whereas during the other block of trials no precue was provided. In both experiments subjects as young as 4 years old responded more rapidly with precues, indicating that they were able to allocate their attention to the indicated modality. Experiment 3 utilized a choice reaction paradigm (in which subjects pressed different buttons in response to the onset of the light and the tone) in order to examine their attention allocation when no precues were provided. It was found that the adults and 7-year-olds tended to allocate their attention to vision rather than audition when no precue was provided. The results with the 4-year-olds were not entirely consistent, but suggested a similar biasing of attention to vision on their part as well.

11.
To interpret our environment, we integrate information from all our senses. For moving objects, auditory and visual motion signals are correlated and provide information about the speed and the direction of the moving object. We investigated at what level the auditory and the visual modalities interact and whether the human brain integrates only motion signals that are ecologically valid. We found that the sensitivity for identifying motion was improved when motion signals were provided in both modalities. This improvement in sensitivity can be explained by probability summation. That is, auditory and visual stimuli are combined at a decision level, after the stimuli have been processed independently in the auditory and the visual pathways. Furthermore, this integration is direction blind and is not restricted to ecologically valid motion signals.
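The probability-summation account in this abstract makes a concrete quantitative prediction: if the two modalities are detected independently and combined only at the decision level, bimodal performance is 1 − (1 − p_A)(1 − p_V). A minimal sketch, with detection rates invented for illustration:

```python
def prob_summation(p_auditory: float, p_visual: float) -> float:
    """Probability that at least one of two independent channels detects the signal."""
    return 1 - (1 - p_auditory) * (1 - p_visual)

# Two mediocre unimodal detectors yield a better bimodal detection rate
# without any sensory-level integration of the two signals.
print(round(prob_summation(0.6, 0.6), 2))  # → 0.84
```

The logic of the study is that if observed bimodal sensitivity does not exceed this independent-channels prediction, there is no need to posit integration earlier than the decision stage.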

12.
Four experiments investigated the representation and integration in memory of spatial and nonspatial relations. Subjects learned two-dimensional spatial arrays in which critical pairs of object names were semantically related (Experiment 1), semantically and episodically related (Experiment 2), or just episodically related (Experiments 3a and 3b). Episodic relatedness was established in a paired-associate learning task that preceded array learning. After learning an array, subjects participated in two tasks: item recognition, in which the measure of interest was priming; and distance estimation. Priming in item recognition was sensitive to the Euclidean distance between object names and, for neighbouring locations, to nonspatial relations. Errors in distance estimations varied as a function of distance but were unaffected by nonspatial relations. These and other results indicated that nonspatial relations influenced the probability of encoding spatial relations between locations but did not lead to distorted spatial memories.

13.
ABSTRACT

In a variety of contexts, arbitrarily associating one’s self with a stimulus improves performance relative to stimuli that are not self-associated, implying enhanced processing of self-associated stimuli (“self-relevance” effects). Self-relevance has been proposed to influence diverse aspects of cognition, including the perceptual prioritization of self-relevant stimuli (“self-prioritization” effects). We sought to elucidate the mechanisms of self-prioritization by using a visual search paradigm. In three experiments, subjects learned two stimulus-label combinations (SELF and OTHER), and then searched for one of those stimuli (cued by the label) on each trial, with a variable number of distractors present on each trial. We hypothesized that, if self-relevance enhances the perceptual salience of the stimuli pre-attentively, then the self-relevance of a target should result in improved search efficiency. In three experiments using conjunction-defined (Experiments 1–2) and feature-defined (Experiment 3) targets, we found that self-relevant targets were associated with overall faster responses than non-self-relevant targets (an intercept effect). However, the slopes of the set size by reaction time (RT) function were never significantly different between the self-relevant and non-self-relevant conditions, counter to the hypothesis that self-prioritization is pre-attentive. These results constitute novel evidence that self-relevance affects visual search performance, but they also cast doubt on the possibility that self-relevance enhances the perceptual salience of a target in a manner similar to physical manipulations. We propose that the self-relevance of a stimulus alters processing only after the self-relevant item has been attended.
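The intercept/slope distinction in this abstract can be made concrete with the standard linear search model, RT = intercept + slope × set size. The sketch below fits that line with ordinary least squares; the RTs are hypothetical numbers chosen to show an intercept-only effect, not data from the study.

```python
def fit_line(set_sizes, rts):
    """Ordinary least-squares fit of RT on set size: returns (intercept_ms, slope_ms_per_item)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts)) / \
            sum((x - mx) ** 2 for x in set_sizes)
    return my - slope * mx, slope

# Hypothetical mean RTs (ms) at set sizes 4, 8, 12:
self_rts  = [600, 800, 1000]   # self-relevant target
other_rts = [650, 850, 1050]   # non-self-relevant target

print(fit_line([4, 8, 12], self_rts))   # → (400.0, 50.0)
print(fit_line([4, 8, 12], other_rts))  # → (450.0, 50.0): equal slopes (same search
                                        #   efficiency); self is faster only at the intercept
```

An equal slope means each added distractor costs the same time in both conditions, so the self-relevance advantage cannot reflect a more efficient (pre-attentive) search; it must arise at a stage whose cost is constant across set sizes.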

14.
Event-related brain potentials (ERPs) were recorded from subjects as they attended to one diagonal of a visual display. The task was to respond only to memory set items (targets) at the attended diagonal and to ignore stimuli at the other diagonal. The probability that the display contained either an attended or an unattended target was 0.30. The spatial separation between attended and unattended stimuli was 1.6°. The ERP elicited by stimuli at the unattended diagonal contained a sequence of phasic components. The early N170 and P250 components were elicited by the onset of the display and the later components N480 and P550 by the offset of the display. The presence of masks delayed N170 and P250. The ERPs elicited by attended non-targets, in addition, contained an increased N350 (Cz, Fz) and P410 (P3a, Pz, Cz). The ERPs elicited by attended and unattended non-targets started to differ after 200 msec. This finding suggested that selection is relatively late if selection must be based on a conjunction of features (location and orientation) and if the spatial separation between attended and unattended stimuli is small. Memory-set size affected the ERPs after 250 msec. The ERPs elicited by attended stimuli contained a broadly distributed (Fz, Cz, Pz) negative endogenous component. The amplitude of this component was related to memory-set size. Finally, the ERPs elicited by attended targets contained a large P3b (Pz, Oz) with a peak latency around 600 msec. The ERP results suggested the existence of three processing stages: (1) orienting to the attended stimuli; (2) controlled search, and (3) target decision.

15.
Short-term memory for the timing of irregular sequences of signals has been said to be more accurate when the signals are auditory than when they are visual. No support for this contention was obtained when the signals were beeps versus flashes (Experiments 1 and 3) nor when they were sets of spoken versus typewritten digits (Experiments 4 and 5). On the other hand, support was obtained both for beeps versus flashes (Experiments 2 and 5) and for repetitions of a single spoken digit versus repetitions of a single typewritten digit (Experiment 6) when the subjects silently mouthed a nominally irrelevant item during sequence presentation. Also, the timing of sequences of auditory signals, whether verbal (Experiment 7) or nonverbal (Experiments 8 and 9), was more accurately remembered when the signals within each sequence were identical. The findings are considered from a functional perspective.

16.
The authors hypothesized that during a gap in a timed signal, the time accumulated during the pregap interval decays at a rate proportional to the perceived salience of the gap, influenced by sensory acuity and signal intensity. When timing visual signals, albino (Sprague-Dawley) rats, which have poor visual acuity, stopped timing irrespective of gap duration, whereas pigmented (Long-Evans) rats, which have good visual acuity, stopped timing for short gaps but reset timing for long gaps. Pigmented rats stopped timing during a gap in a low-intensity visual signal and reset after a gap in a high-intensity visual signal, suggesting that memory for time in the gap procedure varies with the perceived salience of the gap, possibly through an attentional mechanism.

17.
Previous studies have found that a nonspecific visual event occurring at the fovea 50–150 msec after the onset of a peripheral target delayed the initiation of the saccade to that target. The present studies replicated and extended this finding by studying the effects of both visual and auditory warning signals, by examining the effects of onset and offset warning on manual response latency, and by investigating the effects of presenting the warning events in the periphery of the visual field. The results indicated that the interfering effects occur with visual but not auditory stimuli, with saccades but not motor responses, and when the visual warning event occurs either foveally or in the subject’s periphery. Implications for the processes involved are discussed.

18.
The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the right visual hemifield. This replicated Du and Abrams and also revealed a difference between hemifields in the time course of this effect. Our second experiment suggested that this asymmetry is moderated by the tuning of attentional control settings: when the target was easier to detect, the asymmetry was attenuated. Our third experiment showed that this asymmetry is also present during singleton detection: a color singleton distractor produced a larger capture effect in the left hemifield than in the right hemifield. Finally, our fourth experiment suggested that this asymmetry is moderated by the salience of the attention-capturing distractor: when the distractor was not salient, the asymmetry was attenuated. These results suggest that there are boundary conditions on the observed hemifield asymmetry in the contingent capture of attention and that several underlying brain systems might be involved.

19.
Grouping effects on spatial attention in visual search.
In visual search tasks, spatial attention selects the locations containing a target or a distractor with one of the target's features, implying that spatial attention is driven by target features (M.-S. Kim & K. R. Cave, 1995). The authors measured the effects of location-based grouping processes in visual search. In searches for a color-shape combination (conjunction search), spatial probes indicated that a cluster of same-color or same-shape elements surrounding the target were grouped and selected together. However, in searches for a shape target (feature search), evidence for grouping by an irrelevant feature dimension was weaker or nonexistent. Grouping processes aided search for a visual target by selecting groups of locations that shared a common feature, although there was little or no grouping by an irrelevant feature when the target was defined by a unique salient feature.

20.
We examined the effect of an irrelevant visual transient on the decision where to look for a hidden object. Participants also performed a conventional ‘inhibition of return’ localization task. In Experiments 1 and 2 the two tasks were blocked, and in Experiments 3 and 4 they were randomly interleaved. In every experiment there was a bias to select the cued location in the spatial decision task. This facilitory effect was greatest when the cue occurred at a strategically unfavored location and even occurred for participants who reported strategically selecting a non-cued location, indicating that the facilitory effect is automatic and independent of other strategic biases. Inhibition of return was observed only when the tasks were blocked and the localization task preceded the decision task. The findings suggest that spatial decisions engage different attentional control settings than those engaged when detecting visual transients, and that this attentional mode affects the processing of visual transients such that they do not inhibit the subsequent speeded detection of onset targets.
