Similar Articles
20 similar articles found (search time: 31 ms)
1.
Abstract

We tested two subjects following damage to right parietal cortex to see if their failure to detect a left visual stimulus in the presence of a simultaneous right stimulus (visual extinction) could be modulated by perceptual grouping between the left and right stimuli. Subjects performed a simple detection task for brief displays in which items could appear in the left or right visual field, both fields, or neither field. On trials in which items appeared in both fields, we found that left omissions (extinction errors) were dramatically reduced when the two items formed a good perceptual group, either on the basis of Gestalt factors such as similarity and symmetry (Experiment 1), or by forming a familiar configuration (Experiment 2). We suggest that extinction may be a spatially specific exaggeration of a normal attention limitation, in which the contralesional item is disadvantaged in the competition for selection. However, this obstacle to selection can be overcome if, as a result of grouping, ipsilesional and contralesional items become allies rather than competitors for selection.

2.
Our visual system groups objects with similar features, such as colour, orientation, or shape. We argue that similarity grouping is nothing more than global selection of a certain feature value, such as red, horizontal, or circular. This account makes the striking prediction that only one feature group can be created at a time. Here we provide the most direct evidence yet for this proposal, using a number estimation task that forces simultaneous processing of all objects. Multiple similarity cues failed to produce grouping in this task, in contrast to proximity, connectivity, and common region, which all showed strong grouping effects.

3.
Space-based accounts of visual attention assume that we select a limited spatial region independent of the number of objects it contains. In contrast, object-based accounts suggest that we select objects independent of their location. We investigated the boundary conditions on the selection modes of attention in a series of tachistoscopic visual search tasks, where the nature of capacity limitations on search was examined. Observers had to search for a horizontally oriented target ellipse among differently oriented distractor ellipses. Across four experiments, we orthogonally manipulated target-distractor (TD) similarity and distractor-distractor (DD) similarity. Each experiment consisted of a two-way design: Firstly, with a central cue, we indicated the spatial extent of the relevant search area. Secondly, we varied the number and spatial proximity of items in the display. Performance could be accounted for in terms of capacity limited object-based attention, assuming also that the spatial proximity of items enhances performance when there is high DD-similarity (and grouping). In addition, the cueing effect interacted with spatial proximity when DD-similarity was high, suggesting that grouping was influenced by attention. We propose that any capacity limits on visual search are due to object-based attention, and that the formation of perceptual objects and object groups is also subject to attentional modulation.

4.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments have the implication that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).

5.
In two experiments we examined the perceived grouping of grids of equidistant dots, which are rapidly modulated over time so that alternate rows or columns are presented out of phase. In Experiment 1, we report that observers were able to group the grids consistent with the temporal modulation reliably, even at contrasts/frequencies for which flicker was not detectable. Moreover, flicker thresholds decreased with stimulus duration, whilst grouping thresholds did not change. In Experiment 2, we examined the impact of visual transients, by measuring performance when either a mask or a contrast ramp was presented before and after the stimulus. Performance dropped substantially for both conditions, but remained significantly above chance. The results are discussed in relation to the role of temporal correlations in stimulus modulations and visual transients in grouping.

6.
Using a response competition paradigm, we investigated the ability to ignore target response-compatible, target response-incompatible, and neutral visual and auditory distractors presented during a visual search task. The perceptual load model of attention (e.g., Lavie & Tsal, 1994) states that task-relevant processing load determines irrelevant distractor processing in such a way that increasing processing load prevents distractor processing. In three experiments, participants searched sets of one (easy search) or six (hard search) similar items. In Experiment 1, visual distractors influenced reaction time (RT) and accuracy only for easy searches, following the perceptual load model. Surprisingly, auditory distractors yielded larger distractor compatibility effects (median RT for incompatible trials minus median RT for compatible trials) for hard searches than for easy searches. In Experiments 2 and 3, consistent RT benefits with response-compatible auditory distractors and RT costs with response-incompatible auditory distractors occurred only for hard searches. We suggest that auditory distractors are processed regardless of visual perceptual load but that the ability to inhibit cross-modal influence from auditory distractors is reduced under high visual load.
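The distractor compatibility effect defined in this abstract is a simple difference of medians. A minimal sketch of the computation, using hypothetical reaction-time values (not data from the study):

```python
from statistics import median

# Hypothetical reaction times (ms) for one participant; illustrative only.
rt_incompatible = [620, 655, 610, 640, 700]
rt_compatible = [580, 560, 605, 575, 590]

# Compatibility effect: median RT on incompatible trials minus
# median RT on compatible trials, as defined in the abstract.
effect = median(rt_incompatible) - median(rt_compatible)
print(effect)  # a positive value indicates interference from incompatible distractors
```

A larger effect for hard than for easy searches is the pattern the abstract reports for auditory distractors.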

7.
This study examines whether an improved intertask coordination skill is acquired during extensive dual-task training and whether it can be transferred to a new dual-task situation. Participants practised a visual–manual task and an auditory–vocal task. These tasks were trained in two groups matched in dual-task performance measures before practice: a single-task practice group and a hybrid practice group (including single-task and dual-task practice). After practice, the single-task practice group was transferred to the same dual-task situation as that for the hybrid practice group (Experiment 1), both groups were transferred to a dual-task situation with a new visual task (Experiment 2), and both groups were transferred to a dual-task situation with a new auditory task matched in task difficulty (Experiment 3). The results show a dual-task performance advantage in the hybrid practice group over the single-task practice group in the practised dual-task situation (Experiment 1), the manipulated visual-task situation (Experiment 2), and the manipulated auditory-task situation (Experiment 3). In all experiments, the dual-task performance advantage was consistently found for the auditory task only. These findings suggest that extended dual-task practice improves the skill to coordinate two tasks, which may be defined as an accelerated switching operation between both tasks. This skill is relatively robust against changes of the component visual and auditory tasks. We discuss how the finding of task coordination could be integrated in present models of dual-task research.

8.
Thirty children and 5 adults participated in two experiments designed to compare visual processing in normal and reading disabled children. The children were aged 8, 10, and 12 years. In Experiment 1, subjects were asked to detect the temporal order of two briefly presented stimuli. In Experiment 2, subjects sorted cards containing bracket stimuli that did or did not produce perceptual grouping effects. Poor readers required more time to make accurate temporal order judgments and showed stronger perceptual grouping effects. For both good and poor readers, the amount of time necessary to make a correct temporal order judgment decreased, and perceptual grouping effects became weaker with age. However, the magnitude of the difference between the groups did not lessen with age. These results suggest that there are visual processing differences between good and poor readers that do not appear to resolve by age 12.

9.
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target.
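The search slopes quoted in this abstract (ms/item) come from fitting a line to the RT × set size function. A minimal sketch of that least-squares fit, using made-up numbers rather than the study's data:

```python
# Hypothetical mean RTs (ms) at several set sizes (e.g., number of labeled regions).
set_sizes = [4, 8, 12, 16]
mean_rts = [520.0, 540.0, 560.0, 580.0]  # a perfectly linear toy case

# Least-squares slope = cov(x, y) / var(x); the slope in ms/item
# indexes search efficiency (shallower = more efficient).
n = len(set_sizes)
mx = sum(set_sizes) / n
my = sum(mean_rts) / n
slope = (
    sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    / sum((x - mx) ** 2 for x in set_sizes)
)
print(slope)
```

On these toy values the fit gives 5 ms/item, the order of magnitude the abstract reports for scene search in Experiment 1.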

10.
Grouping effects on spatial attention in visual search.
In visual search tasks, spatial attention selects the locations containing a target or a distractor with one of the target's features, implying that spatial attention is driven by target features (M.-S. Kim & K. R. Cave, 1995). The authors measured the effects of location-based grouping processes in visual search. In searches for a color-shape combination (conjunction search), spatial probes indicated that a cluster of same-color or same-shape elements surrounding the target were grouped and selected together. However, in searches for a shape target (feature search), evidence for grouping by an irrelevant feature dimension was weaker or nonexistent. Grouping processes aided search for a visual target by selecting groups of locations that shared a common feature, although there was little or no grouping by an irrelevant feature when the target was defined by a unique salient feature.

12.
One important task for the visual system is to group image elements that belong to an object and to segregate them from other objects and the background. We here present an incremental grouping theory (IGT) that addresses the role of object-based attention in perceptual grouping at a psychological level and, at the same time, outlines the mechanisms for grouping at the neurophysiological level. The IGT proposes that there are two processes for perceptual grouping. The first process is base grouping and relies on neurons that are tuned to feature conjunctions. Base grouping is fast and occurs in parallel across the visual scene, but not all possible feature conjunctions can be coded as base groupings. If there are no neurons tuned to the relevant feature conjunctions, a second process called incremental grouping comes into play. Incremental grouping is a time-consuming and capacity-limited process that requires the gradual spread of enhanced neuronal activity across the representation of an object in the visual cortex. The spread of enhanced neuronal activity corresponds to the labeling of image elements with object-based attention.

13.
Under natural viewing conditions, a large amount of information reaches our senses, and the visual system uses attention and perceptual grouping to reduce the complexity of stimuli in order to make real-time perception possible. Prior studies have shown that attention and perceptual grouping operate in synergy; exogenous attention is deployed not only to the cued item, but also to the entire group. Here, we investigated how attention and perceptual grouping operate during the formation and dissolution of groups. Our results showed that reaction times are higher in the presence of perceptual groups than they are for ungrouped stimuli. On the other hand, attentional benefits of perceptual grouping were observed during both the formation and the dissolution of groups. The dynamics were similar during group formation and dissolution, showing a gradual effect that takes approximately half a second to reach its maximum level. In the case of group dissolution, the attentional benefits persisted for about a quarter of a second after dissolution of the group. Taken together, our results reveal the dynamics of how attention and grouping work in synergy during the transient period when groups form or dissolve.

15.
Selective attention has been intensively studied using the Stroop task. Evidence suggests that Stroop interference in a color-naming task arises partly because of visual attention sharing between color and word: Removing the target color after 150 msec reduces interference (Neumann, 1986). Moreover, removing both the color and the word simultaneously reduces interference less than does removing the color only (La Heij, van der Heijden, & Plooij, 2001). These findings could also be attributed to Gestalt grouping principles, such as common fate. We report three experiments in which the role of Gestalt grouping was further investigated. Experiment 1 replicated the reduced interference, using words and color patches. In Experiment 2, the color patch was not removed but only repositioned (<2 degrees) after 100 msec, which also reduced interference. In Experiment 3, the distractor was repositioned while the target remained stationary, again reducing interference. These results indicate a role for Gestalt grouping in selective attention.

16.
CLUSTERS PRECEDE SHAPES IN PERCEPTUAL ORGANIZATION
Abstract: Does perceptual grouping require attention? Recent controversy on this question may be caused by a conflation of two aspects of grouping: element clustering (determining which elements belong together) and shape formation (determining cluster boundaries). In Experiment 1, observers enumerated diamonds that were drawn with either lines or dots. These two types of stimuli were subitized (enumerated rapidly and accurately in the range from one to three items) equally well, suggesting that clustering dots into countable entities did not demand attention. In contrast, when target diamonds were enumerated among distractor squares in Experiment 2, only line-drawn items could be subitized. We propose that clustering and shape formation not only involve different perceptual processes, but play different functional roles in vision.

17.
Jiang and Wagner (2004) demonstrated that individual target-distractor associations were learned in contextual cuing. We examined whether individual associations can be learned in efficient visual searches that do not involve attentional deployment to individual search items. In Experiment 1, individual associations were not learned during the efficient search tasks. However, in Experiment 2, where additional exposure duration of the search display was provided by presenting placeholders marking future locations of the search items, individual associations were successfully learned in the efficient search tasks and transferred to inefficient search. Moreover, Experiment 3 demonstrated that a concurrent task requiring attention does not affect the learning of the local visual context. These results clearly showed that attentional deployment is not necessary for learning individual locations and clarified how the human visual system extracts and preserves regularity in complex visual environments for efficient visual information processing.

18.
Inhibition of return (IOR) facilitates visual search by discouraging the reinspection of recently processed items. We investigated whether IOR operates across two consecutive searches of the same display for different targets. In Experiment 1, we demonstrated that IOR is present within each of the two searches. In Experiment 2, we found no evidence for IOR across searches. In Experiment 3, we showed that IOR is present across the two searches when the first search is interrupted, suggesting that the completion of the search is what causes the resetting of IOR. We concluded that IOR is a partially flexible process that can be reset when the task completes, but not necessarily when it changes. When resetting occurs, this flexibility ensures that the inhibition of previously visited locations does not interfere with the new search.

19.
Two experiments were conducted to determine whether the auditory and visual systems process simultaneously presented pairs of alphanumeric information differently. In Experiment 1, different groups of subjects were given extensive practice recalling pairs of superimposed visual or auditory digits in simultaneous order (the order of arrival) or successive order (one member of each digit pair in turn, followed by the other pair member). For auditory input, successive order of recall was more accurate, particularly for the last two of three pairs presented, whereas for visual input, simultaneous order of recall was more accurate. In Experiment 2, subjects were cued to recall in one or the other order either immediately before or after stimulus input. Recall order results were the same as for Experiment 1, and precuing did not facilitate recall in either order for either modality. These results suggest that processing in the auditory system can only occur successively across time, whereas in the visual system processing can only occur simultaneously in space.

20.
Dorsal stream visual encoding was studied in three experiments, by examining effects of peripheral landmark cues on eye movements. Stimulus features and task structure were tailored to physiological and functional characterisations of the dorsal visual stream. Sub-discriminable peripheral stimuli served as landmark cue stimuli. In Experiments 1 and 2, orienting behaviour in response to cues and targets differed for participants with relatively low and relatively high peripheral contrast thresholds. In Experiment 1, low-, but not high-threshold participants oriented towards landmark cues that could not be discriminated consciously. However, in Experiment 3, high-, but not low-threshold participants oriented towards near-threshold cues. Hence, under appropriate conditions both groups of participants oriented in response to brief, low-contrast, peripheral information. We propose that landmark cueing may provide a useful tool for measuring individual differences in dorsal stream processing and dynamic aspects of visual functioning and awareness.
