Similar articles
20 similar articles were retrieved (search time: 218 ms).
1.
In the present study, participants identified the location of a visual target presented in a rapidly masked, changing sequence of visual distractors. In Experiment 1, we examined performance when a high tone, embedded in a sequence of low tones, was presented in synchrony with the visual target and observed that the high tone improved visual target identification, relative to a condition in which a low tone was synchronized with the visual target, thus replicating Vroomen and de Gelder's (2000, Experiment 1) findings. In subsequent experiments, we presented a single visual, auditory, vibrotactile, or combined audiotactile cue with the visual target and found similar improvements in participants' performance regardless of cue type. These results suggest that crossmodal perceptual organization may account for only a part of the improvement in participants' visual target identification performance reported in Vroomen and de Gelder's original study. Moreover, in contrast with many previous crossmodal cuing studies, our results also suggest that visual cues can enhance visual target identification performance. Alternative accounts for these results are discussed in terms of enhanced saliency, the presence of a temporal marker, and attentional capture by oddball stimuli as potential explanations for the observed performance benefits.

2.
Visual dominance and attention: the Colavita effect revisited
Under many conditions, humans display a robust tendency to rely more on visual information than on other forms of sensory information. Colavita (1974) illustrated this visual dominance effect by showing that naive observers typically fail to respond to clearly suprathreshold tones if these are presented simultaneously with a visual target flash. In the present study, we demonstrate that visual dominance influences performance under more complex stimulation conditions and address the role played by attention in mediating this effect. In Experiment 1, we show the Colavita effect in the simple speeded detection of line drawings and naturalistic sounds, whereas in Experiment 2 we demonstrate visual dominance when the task targets (auditory, visual, or bimodal combinations) are embedded among continuous streams of irrelevant distractors. In Experiments 3-5, we address the consequences of varying the probability of occurrence of targets in each sensory modality. In Experiment 6, we further investigate the role played by attention on visual dominance by manipulating perceptual load in either the visual or the auditory modality. Our results demonstrate that selective attention to a particular sensory modality can modulate--although not completely reverse--visual dominance as illustrated by the Colavita effect.

3.
People often move in synchrony with auditory rhythms (e.g., music), whereas synchronization of movement with purely visual rhythms is rare. In two experiments, this apparent attraction of movement to auditory rhythms was investigated by requiring participants to tap their index finger in synchrony with an isochronous auditory (tone) or visual (flashing light) target sequence while a distractor sequence was presented in the other modality at one of various phase relationships. The obtained asynchronies and their variability showed that auditory distractors strongly attracted participants' taps, whereas visual distractors had much weaker effects, if any. This asymmetry held regardless of the spatial congruence or relative salience of the stimuli in the two modalities. When different irregular timing patterns were imposed on target and distractor sequences, participants' taps tended to track the timing pattern of auditory distractor sequences when they were approximately in phase with visual target sequences, but not the reverse. These results confirm that rhythmic movement is more strongly attracted to auditory than to visual rhythms. To the extent that this is an innate proclivity, it may have been an important factor in the evolution of music.

4.
Behavioral studies of multisensory integration in motion perception have focused on the particular case of visual and auditory signals. Here, we addressed a new case: audition and touch. In Experiment 1, we tested the effects of an apparent motion stream presented in an irrelevant modality (audition or touch) on the perception of apparent motion streams in the other modality (touch or audition, respectively). We found significant congruency effects (lower performance when the direction of motion in the irrelevant modality was incongruent with the direction of the target) for the two possible modality combinations. This congruency effect was asymmetrical, with tactile motion distractors having a stronger influence on auditory motion perception than vice versa. In Experiment 2, we used auditory motion targets and tactile motion distractors while participants adopted one of two possible postures: arms uncrossed or arms crossed. The effects of tactile motion on auditory motion judgments were replicated in the arms-uncrossed posture, but they dissipated in the arms-crossed posture. The implications of these results are discussed in light of current findings regarding the representation of tactile and auditory space.

5.
Using a response competition paradigm, we investigated the ability to ignore target response-compatible, target response-incompatible, and neutral visual and auditory distractors presented during a visual search task. The perceptual load model of attention (e.g., Lavie & Tsal, 1994) states that task-relevant processing load determines irrelevant distractor processing in such a way that increasing processing load prevents distractor processing. In three experiments, participants searched sets of one (easy search) or six (hard search) similar items. In Experiment 1, visual distractors influenced reaction time (RT) and accuracy only for easy searches, following the perceptual load model. Surprisingly, auditory distractors yielded larger distractor compatibility effects (median RT for incompatible trials minus median RT for compatible trials) for hard searches than for easy searches. In Experiments 2 and 3, consistent RT benefits with response-compatible auditory distractors and RT costs with response-incompatible auditory distractors occurred only for hard searches. We suggest that auditory distractors are processed regardless of visual perceptual load but that the ability to inhibit cross-modal influence from auditory distractors is reduced under high visual load.

6.
To investigate the effect of semantic congruity on audiovisual target responses, participants detected a semantic concept that was embedded in a series of rapidly presented stimuli. The target concept appeared as a picture, an environmental sound, or both; and in bimodal trials, the audiovisual events were either consistent or inconsistent in their representation of a semantic concept. The results showed faster detection latencies to bimodal than to unimodal targets and a higher rate of missed targets when visual distractors were presented together with auditory targets, in comparison to auditory targets presented alone. The findings of Experiment 2 showed a cross-modal asymmetry, such that visual distractors were found to interfere with the accuracy of auditory target detection, but auditory distractors had no effect on either the speed or the accuracy of visual target detection. The biased-competition theory of attention (Desimone & Duncan, Annual Review of Neuroscience, 18, 1995; Duncan, Humphreys, & Ward, Current Opinion in Neurobiology, 7, 255–261, 1997) was used to explain the findings because, when the saliency of the visual stimuli was reduced by the addition of a noise filter in Experiment 4, visual interference on auditory target detection was diminished. Additionally, the results showed faster and more accurate target detection when semantic concepts were represented in a visual rather than an auditory format.

7.
Five experiments were conducted to investigate how subsyllabic, syllabic, and prosodic information is processed in Cantonese monosyllabic word production. A picture-word interference task was used in which a target picture and a distractor word were presented simultaneously or sequentially. In the first three experiments with visually presented distractors, null effects on naming latencies were found when the distractor and the picture name shared the onset, the rhyme, the tone, or both the onset and tone. However, significant facilitation effects were obtained when the target and the distractor shared the rhyme + tone (Experiment 2), the segmental syllable (Experiment 3), or the syllable + tone (Experiment 3). Similar results were found in Experiments 4 and 5 with spoken rather than visual distractors. Moreover, a significant facilitation effect was observed in the rhyme-related condition in Experiment 5, and this effect was not affected by the degree of phonological overlap between the target and the distractor. These results are interpreted in an interactive model, which allows feedback to be sent from the subsyllabic level to the lexical level during the phonological encoding stage in Cantonese word production.

8.
The present study demonstrates that incongruent distractor letters at a constant distance from a target letter produce more response competition and negative priming when they share a target’s color than when they have a different color. Moreover, perceptual grouping by means of color attenuated the effects of spatial proximity. For example, when all items were presented in the same color, near distractors produced more response competition and negative priming than far distractors (Experiment 3A). However, when near distractors were presented in a different color and far distractors were presented in the same color as the target, the response competition × distractor proximity interaction was eliminated and the proximity × negative priming interaction was reversed (Experiment 3B). A final experiment demonstrated that distractors appearing on the same object as a selected target produced comparable amounts of response competition and negative priming whether they were near or far from the target. This suggests that the inhibitory mechanisms of visual attention can be directed to perceptual groups/objects in the environment and not only to unsegmented regions of visual space.

9.
Four experiments were conducted to determine whether or not the presence and placement of distractors in a rapid serial auditory stream have any influence on the emergence of the auditory attentional blink (AB). Experiment 1 revealed that the presence of distractors is necessary to produce the auditory AB. In Experiments 2 and 3, the auditory AB was reduced when the distractor immediately following the probe was replaced by silence but not when the distractor following the target was replaced by silence. Finally, in Experiment 4, only a very small auditory AB was found to remain when all distractors following the probe were replaced by silence. These results suggest that the auditory AB is affected both by the overwriting of the probe by the distractors following it and by a reduction in discriminability generated by all of the distractors presented in the sequence.

10.
The relationship between semantic-syntactic and phonological levels in speaking was investigated using a picture naming procedure with simultaneously presented visual or auditory distractor words. Previous results with auditory distractors have been used to support the independent stage model (e.g., H. Schriefers, A. S. Meyer, & W. J. M. Levelt, 1990), whereas results with visual distractors have been used to support an interactive view (e.g., P. A. Starreveld & W. La Heij, 1996b). Experiment 1 demonstrated that with auditory distractors, semantic effects preceded phonological effects, whereas the reverse pattern held for visual distractors. Experiment 2 indicated that the results for visual distractors followed the auditory pattern when distractor presentation time was limited. Experiment 3 demonstrated an interaction between phonological and semantic relatedness of distractors for auditory presentation, supporting an interactive account of lexical access in speaking.

11.
Auditory redundancy gains were assessed in two experiments in which a simple reaction time task was used. In each trial, an auditory stimulus was presented to the left ear, to the right ear, or simultaneously to both ears. The physical difference between auditory stimuli presented to the two ears was systematically increased across experiments. No redundancy gains were observed when the stimuli were identical pure tones or pure tones of different frequencies (Experiment 1). A clear redundancy gain and evidence of coactivation were obtained, however, when one stimulus was a pure tone and the other was white noise (Experiment 2). Experiment 3 employed a two-alternative forced choice localization task and provided evidence that dichotically presented pure tones of different frequencies are apparently integrated into a single percept, whereas a pure tone and white noise are not fused. The results extend previous findings of redundancy gains and coactivation with visual and bimodal stimuli to the auditory modality. Furthermore, at least within this modality, the results indicate that redundancy gains do not emerge when redundant stimuli are integrated into a single percept.

12.
Attentional capture in serial audiovisual search tasks
The phenomenon of attentional capture has typically been studied in spatial search tasks. Dalton and Lavie recently demonstrated that auditory attention can also be captured by a singleton item in a rapidly presented tone sequence. In the experiments reported here, we investigated whether these findings extend cross-modally to sequential search tasks using audiovisual stimuli. Participants searched a stream of centrally presented audiovisual stimuli for targets defined on a particular dimension (e.g., duration) in a particular modality. Task performance was compared in the presence versus absence of a unique singleton distractor. Irrelevant auditory singletons captured attention during visual search tasks, leading to interference when they coincided with distractors but to facilitation when they coincided with targets. These results demonstrate attentional capture by auditory singletons during nonspatial visual search.

13.
Recalling information involves the process of discriminating between relevant and irrelevant information stored in memory. Not infrequently, the relevant information needs to be selected from among a series of related possibilities. This is likely to be particularly problematic when the irrelevant possibilities not only are temporally or contextually appropriate, but also overlap semantically with the target or targets. Here, we investigate the extent to which purely perceptual features that discriminate between irrelevant and target material can be used to overcome the negative impact of contextual and semantic relatedness. Adopting a distraction paradigm, it is demonstrated that when distractors are interleaved with targets presented either visually (Experiment 1) or auditorily (Experiment 2), a within-modality semantic distraction effect occurs; semantically related distractors impact upon recall more than do unrelated distractors. In the semantically related condition, the number of intrusions in recall is reduced, while the number of correctly recalled targets is simultaneously increased by the presence of perceptual cues to relevance (color features in Experiment 1 or speaker’s gender in Experiment 2). However, as is demonstrated in Experiment 3, even presenting semantically related distractors in a language and a sensory modality (spoken Welsh) distinct from that of the targets (visual English) is insufficient to eliminate false recalls completely or to restore correct recall to levels seen with unrelated distractors. Together, these results show how semantic and nonsemantic discriminability shape patterns of both erroneous and correct recall.

14.
Perceptual judgments can be affected by expectancies regarding the likely target modality. This has been taken as evidence for selective attention to particular modalities, but alternative accounts remain possible in terms of response priming, criterion shifts, stimulus repetition, and spatial confounds. We examined whether attention to a sensory modality would still be apparent when these alternatives were ruled out. Subjects made a speeded detection response (Experiment 1), an intensity or color discrimination (Experiment 2), or a spatial discrimination response (Experiments 3 and 4) for auditory and visual targets presented in a random sequence. On each trial, a symbolic visual cue predicted the likely target modality. Responses were always more rapid and accurate for targets presented in the expected versus unexpected modality, implying that people can indeed selectively attend to the auditory or visual modalities. When subjects were cued to both the probable modality of a target and its likely spatial location (Experiment 4), separable modality-cuing and spatial-cuing effects were observed. These studies introduce appropriate methods for distinguishing attention to a modality from the confounding factors that have plagued previous normal and clinical research.

15.
For many years, researchers have argued that we have separate attentional resources for the processing of information impinging on each of our sensory receptor systems. However, a number of recent studies have demonstrated the existence of shared attentional resources for the processing of auditory, visual and tactile stimuli. In the present study, we examined whether there are also common attentional resources for the processing of chemosensory stimuli. Participants made speeded (left vs. right) footpedal discrimination responses to an unpredictable sequence of visual stimuli and chemosensory stimuli, the latter presented to either nostril. The participants' attention was directed to one or the other modality by means of a symbolic auditory cue (high or low tone) at the start of each trial, which predicted the likely modality for the upcoming target on the majority (80%) of trials. Participants responded more rapidly when the target occurred in the expected modality than when it occurred in the unexpected modality, implying the existence of shared attentional resources for the processing of chemosensory and visual stimuli.

16.
Inhibitory mechanisms in visual-auditory cross-modal processing of Chinese lexical information
A selective-recognition procedure was used to examine the inhibitory mechanisms at work when visual-auditory cross-modal information and visual unimodal information are processed during Chinese word recognition. The results showed that for overall "no" recognition responses to visual words, performance was better under unimodal interference than under cross-modal interference. During visual word processing, the efficiency of inhibiting external distractor material was not affected by the modality in which the interfering stimuli were presented. Inhibition efficiency was, however, affected by the semantic relatedness of the distractor material: distractors belonging to the same semantic category as the target material were more difficult to inhibit than distractors from a different category.

17.
We investigated how both objective and subjective organizations affect perceptual organization and how this perceptual organization, in turn, influences observers’ performance in a localization search task. Two groups of observers viewing exactly the same stimuli (objective organization) performed in significantly different ways, depending on how they were induced to parse the display (subjective organization). In Experiments 1 and 2, the observers were asked to describe the location of a tilted target among a varying number of vertical or horizontal distractors. Subjective organization was induced by instructing observers to parse the display into either three horizontal regions (rows) or three vertical regions (columns). The position of the target was critical: location performance, as assessed by reaction time and errors, was consistently impaired at the locations adjacent to the boundaries defining the regions, producing what we refer to as the subjective boundary effect. Furthermore, the extent of this effect depended on whether the stimulus-driven and conceptually driven information concurred or conflicted. This made location information more or less accessible. In Experiment 1, the strength of objective grouping was a function of the proximity of the items (near or far conditions) and their orientation in a 6×6 matrix. In Experiment 2, the strength of objective grouping was a function of similarity of color (items were color coded by rows or by columns) and the orientation of the items in a 9×9 matrix. The subjective boundary effect was more pronounced when the display promoted grouping in the direction orthogonal to that of the task (e.g., when observers parsed by rows but vertical distractors were closer together [Experiment 1] or color coded [Experiment 2] to induce global columns). In contrast, this effect decreased when the direction of both objective and subjective organizations was parallel (e.g., when observers parsed by rows and horizontal distractors were closer together [Experiment 1] or were color coded [Experiment 2] to induce global rows). A localization search task proved to be an ideal forum in which objective and subjective organizations interacted. We discuss how these results indicate that observers’ performance in a localization task was determined by the interaction of objective and subjective organizations, and that the resulting perceptual organization constrained coarse location information.

18.
Using a cue-target paradigm, we investigated the interaction between endogenous and exogenous orienting in cross-modal attention. A peripheral (exogenous) cue was presented after a central (endogenous) cue with a variable time interval. The endogenous and exogenous cues were presented in one sensory modality (auditory in Experiment 1 and visual in Experiment 2) whereas the target was presented in another modality. Both experiments showed a significant endogenous cuing effect (longer reaction times in the invalid condition than in the valid condition). However, exogenous cuing produced a facilitatory effect in both experiments in response to the target when endogenous cuing was valid, but it elicited a facilitatory effect in Experiment 1 and an inhibitory effect in Experiment 2 when endogenous cuing was invalid. These findings indicate that endogenous and exogenous cuing can co-operate in orienting attention to the crossmodal target. Moreover, the interaction between endogenous and exogenous orienting of attention is modulated by the modality relationship between the cue and the target.

19.
Inhibition of return (IOR) and emotional stimuli both guide attentional biases and improve search efficiency, but whether the two interact has so far remained unclear. This study used a cue-target paradigm with emotional stimuli presented in the visual and auditory modalities to examine the interaction between the processing of emotional stimuli and IOR. In Experiment 1, emotional stimuli were presented either as unimodal visual faces or as emotionally congruent audiovisual pairs. Experiment 2 presented emotionally incongruent audiovisual stimuli to further examine whether the effect of congruent audiovisual emotional stimuli on IOR was driven by the congruent emotional stimulus in the auditory channel, that is, whether the auditory emotional stimulus was actually processed. The results showed that emotionally congruent audiovisual stimuli attenuated IOR, whereas there was no interaction between emotionally incongruent stimuli and IOR, and IOR did not differ significantly between unimodal and bimodal presentation. These findings indicate that IOR at the same processing stage is affected only when emotionally congruent stimuli are presented audiovisually, further supporting the perceptual inhibition account of IOR.

20.
张明, 桑汉斌, 鲁柯, 王爱君. 《心理学报》 (Acta Psychologica Sinica), 2021, 53(7): 681-693
An individual's response to a stimulus is influenced not only by the stimulus itself but also by preceding stimuli, such that the response to the stimulus in the current trial is affected by the preceding trial; this is known as trial history. The present study used a cue-neutral cue-target paradigm to examine the effect of the validity of the preceding trial on cross-modal nonspatial inhibition of return. Experiment 1 examined the influence of trial history on cross-modal nonspatial inhibition of return by manipulating cue validity across two consecutive trials. To reduce the influence of trial history on cross-modal nonspatial inhibition of return, Experiment 2 lengthened the inter-trial interval and examined whether the contribution of trial history was thereby reduced. The results showed that when the cue in the preceding trial was invalid, the inhibition-of-return effect in the current trial was significantly smaller than when the preceding trial was valid, and this influence varied with the modalities of the cue and the target within a trial. Moreover, lengthening the inter-trial interval effectively reduced the influence of the preceding trial on the current trial. The study therefore shows that trial history can affect cross-modal nonspatial inhibition of return and that this influence can be reduced by increasing the inter-trial interval.
