Similar articles
20 similar articles found (search time: 93 ms)
1.
Using a spatial task-switching paradigm and controlling the salience of visual and auditory stimuli, this study examined how bottom-up attention influences the visual dominance effect. The results showed that visual and auditory stimulus salience significantly modulated visual dominance. In Experiment 1, visual dominance was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, visual dominance was further reduced but still present. These results support the biased-competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

2.
Previous studies of multisensory integration have often stressed the beneficial effects that may arise when information concerning an event arrives via different sensory modalities at the same time, as exemplified by research on the redundant target effect (RTE). By contrast, studies of the Colavita visual dominance effect (e.g., [Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409–412]) highlight the inhibitory consequences of the competition between signals presented simultaneously in different sensory modalities instead. Although both the RTE and the Colavita effect are thought to occur at early sensory levels and the stimulus conditions under which they are typically observed are very similar, the interplay between these two opposing behavioural phenomena (facilitation vs. competition) has yet to be addressed empirically. We hypothesized that the dissociation may reflect two of the fundamentally different ways in which humans can perceive concurrent auditory and visual stimuli. In Experiment 1, we demonstrated both multisensory facilitation (RTE) and the Colavita visual dominance effect using exactly the same audiovisual displays, by simply changing the task from a speeded detection task to a speeded modality discrimination task. Meanwhile, in Experiment 2, the participants exhibited multisensory facilitation when responding to visual targets and multisensory inhibition when responding to auditory targets while keeping the task constant. These results therefore indicate that both multisensory facilitation and inhibition can be demonstrated in reaction to the same bimodal event.
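The statistical-facilitation account that the RTE is usually tested against can be illustrated with a small simulation. This is a minimal sketch with made-up Gaussian RT parameters, not any study's actual model: on a redundant audiovisual trial the response is triggered by whichever unimodal process finishes first, so the mean of min(V, A) is faster than either unimodal mean even without true integration.

```python
import random

random.seed(1)

def unimodal_rt(mu, sigma):
    # Crude RT model: Gaussian "decision" time (floored at 100 ms)
    # plus a fixed 150 ms motor component. Parameters are invented.
    return max(100.0, random.gauss(mu, sigma)) + 150.0

n = 10_000
rt_visual   = [unimodal_rt(350, 50) for _ in range(n)]
rt_auditory = [unimodal_rt(330, 50) for _ in range(n)]

# Race model: a redundant trial ends when the faster of two
# independent unimodal processes finishes.
rt_redundant = [min(unimodal_rt(350, 50), unimodal_rt(330, 50))
                for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
print(mean(rt_visual), mean(rt_auditory), mean(rt_redundant))
```

The redundant-condition mean comes out below both unimodal means purely through statistical facilitation, which is why RTE studies need a stronger test (such as Miller's race-model inequality) before attributing the gain to integration.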

3.
Perceptual judgments can be affected by expectancies regarding the likely target modality. This has been taken as evidence for selective attention to particular modalities, but alternative accounts remain possible in terms of response priming, criterion shifts, stimulus repetition, and spatial confounds. We examined whether attention to a sensory modality would still be apparent when these alternatives were ruled out. Subjects made a speeded detection response (Experiment 1), an intensity or color discrimination (Experiment 2), or a spatial discrimination response (Experiments 3 and 4) for auditory and visual targets presented in a random sequence. On each trial, a symbolic visual cue predicted the likely target modality. Responses were always more rapid and accurate for targets presented in the expected versus unexpected modality, implying that people can indeed selectively attend to the auditory or visual modalities. When subjects were cued to both the probable modality of a target and its likely spatial location (Experiment 4), separable modality-cuing and spatial-cuing effects were observed. These studies introduce appropriate methods for distinguishing attention to a modality from the confounding factors that have plagued previous normal and clinical research.

4.
When two masked targets (T1 and T2) are visually or auditorily presented in rapid succession, processing of T1 produces an attentional blink (AB), that is, a transient impairment of T2 identification. The present study was conducted to compare the relative impact of masking T1 and T2 between vision and audition. Within a rapidly presented sequence, each of the two verbal targets, discriminated by their offset (Experiment 1) or their onset (Experiment 2), could be followed by either a single item, acting as a mask, or a blank gap. Masking of T2 appeared to be necessary for the occurrence of the AB in both the visual and the auditory modality. However, whereas masking of T1 affected the expression of the visual AB in both experiments, the same effect was observed in the auditory modality only when the targets varied at the onset. These results provide further evidence that processing auditory and visual information is restricted by similar attentional limitations but also suggest that these limits are constrained by properties specific to each sensory system.

5.
Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10 s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

6.
Advancing age is associated with decrements in selective attention. It was recently hypothesized that age-related differences in selective attention depend on sensory modality. The goal of the present study was to investigate the role of sensory modality in age-related vulnerability to distraction, using a response interference task. To this end, 16 younger (mean age = 23.1 years) and 24 older (mean age = 65.3 years) adults performed four response interference tasks, involving all combinations of visual and auditory targets and distractors. The results showed that response interference effects differ across sensory modalities, but not across age groups. These results indicate that sensory modality plays an important role in vulnerability to distraction, but not in age-related distractibility by irrelevant spatial information.

7.
Using simple figures as visual stimuli and short pure tones as auditory stimuli, participants were instructed to attend to different modalities (visual, auditory, or audiovisual) so as to induce different attentional states (selective vs. divided attention), and the effect of attention on multisensory integration was examined. Participants responded to bimodal targets fastest and most accurately only under divided attention. A race-model analysis showed that this processing advantage for bimodal targets arose from the integration of the audiovisual stimuli. These results indicate that multisensory integration occurs only under divided attention.
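The race-model (竞争模型) analysis referred to in this abstract is typically implemented as a test of Miller's race-model inequality: if the empirical CDF of bimodal RTs exceeds the sum of the two unimodal CDFs at any time point, statistical facilitation alone cannot explain the bimodal advantage, and integration is inferred. A minimal sketch with toy RT values (not the study's data):

```python
from bisect import bisect_right

def ecdf(sample, t):
    """Empirical CDF: fraction of RTs in `sample` that are <= t."""
    s = sorted(sample)
    return bisect_right(s, t) / len(s)

def race_model_violations(rt_av, rt_v, rt_a, times):
    """Time points where Miller's inequality F_AV(t) <= F_V(t) + F_A(t)
    is violated; violations are taken as evidence of integration."""
    return [t for t in times
            if ecdf(rt_av, t) > ecdf(rt_v, t) + ecdf(rt_a, t)]

# Toy data: bimodal RTs faster than a race between the unimodal
# processes would allow.
rt_v  = [320, 340, 360, 380, 400]
rt_a  = [310, 330, 350, 370, 390]
rt_av = [250, 260, 270, 280, 290]

print(race_model_violations(rt_av, rt_v, rt_a, times=range(240, 420, 10)))
```

In real analyses the CDFs are evaluated at matched RT quantiles rather than a fixed time grid, but the inequality being tested is the same.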

8.
It is well known that stimuli grab attention to their location, but do they also grab attention to their sensory modality? The modality shift effect (MSE), the observation that responding to a stimulus leads to reaction time benefits for subsequent stimuli in the same modality, suggests that this may be the case. If noninformative cue stimuli, which do not require a response, also lead to benefits for their modality, this would suggest that the effect is automatic. We investigated the time-course of the visuotactile MSE and the difference between the effects of cues and targets. In Experiment 1, when visual and tactile tasks and stimulus locations were matched, uninformative cues did not lead to reaction time benefits for targets in the same modality. However, the modality of the previous target led to a significant MSE. Only stimuli that require a response, therefore, appear to lead to reaction time benefits for their modality. In Experiment 2, increasing attention to the cue stimuli attenuated the effect of the previous target, but the cues still did not lead to a MSE. In Experiment 3, a MSE was demonstrated between successive targets, and this effect decreased with increasing intertrial intervals. Overall, these studies demonstrate how cue- and target-induced effects interact and suggest that modalities do not automatically capture attention as locations do; rather, the MSE is more similar to other task repetition effects.

9.
郑晓丹, 岳珍珠. 《心理科学》 (Psychological Science), 2022, 45(6): 1329–1336
Using real-world objects, we examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to semantically highly related visual stimuli than to weakly related ones, whereas no priming effect was found with visual primes. Experiment 2 found that the crossmodal priming effect disappeared 900 ms after prime onset. Our findings demonstrate that experience-based audiovisual semantic relatedness can facilitate visual selective attention.

11.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.
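Establishing that participants could not "detect it in a simple 2AFC task" amounts to showing chance-level sensitivity. A standard signal-detection conversion from 2AFC proportion correct to d' (assuming an unbiased observer; this is a generic sketch, not the paper's reported analysis) is d' = √2 · z(pc), where pc = 0.5 corresponds to chance:

```python
from math import sqrt
from statistics import NormalDist

def two_afc_dprime(p_correct):
    """Convert 2AFC proportion correct to d' via d' = sqrt(2) * z(pc).
    pc = 0.5 (chance) maps to d' = 0; assumes an unbiased observer."""
    return sqrt(2) * NormalDist().inv_cdf(p_correct)

print(two_afc_dprime(0.5))    # chance performance -> 0.0
print(two_afc_dprime(0.76))   # roughly 1.0
```

Reporting d' rather than raw percent correct makes the "no awareness" claim comparable across tasks with different chance levels.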

12.
Interactions Between Exogenous Auditory and Visual Spatial Attention
Six experiments were carried out to investigate the issue of cross-modality between exogenous auditory and visual spatial attention employing Posner's cueing paradigm in detection, localization, and discrimination tasks. Results indicated cueing in detection tasks with visual or auditory cues and visual targets but not with auditory targets (Experiment 1). In the localization tasks, cueing was found with both visual and auditory targets. Inhibition of return was apparent only in the within-modality conditions (Experiment 2). This suggests that it is important whether the attention system is activated directly (within a modality) or indirectly (between modalities). Increasing the cue validity from 50% to 80% influenced performance only in the localization task (Experiment 4). These findings are interpreted as indicative of modality-specific but interacting attention mechanisms. The results of Experiments 5 and 6 (up/down discrimination tasks) also show cross-modal cueing but not with visual cues and auditory targets. Furthermore, there was no inhibition of return in any condition. This suggests that some cueing effects might be task dependent.

13.
A series of six experiments explored the dominance of vision over audition reported by Colavita (1974). We first confirmed the existence of visual dominance in a paradigm somewhat different from Colavita’s: Mean reaction time (RT) to a light was found to be faster than to a simultaneously presented tone, even though the stimuli were equated in subjective intensity and even though RT to the tone presented alone was faster than to the light presented alone. Additional experiments showed that when subjects did not have to respond to light, tone RT was equal or faster (intersensory facilitation) when a light was present than when it was not. These findings suggest that sensory or perceptual processing of the tone is not affected by the light, i.e., that visual dominance is nonsensory in locus and depends on the relevance of the light stimulus. This interpretation was reinforced by other findings which showed that the degree of visual dominance was sensitive to the probability of light, tone, and light-plus-tone trials and to instructions to attend to a specific modality, but was not sensitive to the intensity of the light.

14.
Using a cue-target paradigm, we investigated the interaction between endogenous and exogenous orienting in cross-modal attention. A peripheral (exogenous) cue was presented after a central (endogenous) cue with a variable time interval. The endogenous and exogenous cues were presented in one sensory modality (auditory in Experiment 1 and visual in Experiment 2) whereas the target was presented in another modality. Both experiments showed a significant endogenous cuing effect (longer reaction times in the invalid condition than in the valid condition). However, exogenous cuing produced a facilitatory effect in both experiments in response to the target when endogenous cuing was valid, but it elicited a facilitatory effect in Experiment 1 and an inhibitory effect in Experiment 2 when endogenous cuing was invalid. These findings indicate that endogenous and exogenous cuing can co-operate in orienting attention to the crossmodal target. Moreover, the interaction between endogenous and exogenous orienting of attention is modulated by the modality between the cue and the target.

15.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

16.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.
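Switch costs and congruency effects of the kind reported in these switching studies are simple RT contrasts: switch minus repetition trials, and incongruent minus congruent trials, within each modality. A minimal sketch over hypothetical trial records (field names and RT values are invented for illustration):

```python
def mean_rt(trials, **filters):
    """Mean RT over trials matching all keyword filters."""
    sel = [t["rt"] for t in trials
           if all(t[k] == v for k, v in filters.items())]
    return sum(sel) / len(sel)

# Hypothetical trial records: target modality, whether the modality
# switched from the previous trial, and crossmodal congruency.
trials = [
    {"modality": "visual",   "switch": False, "congruent": True,  "rt": 420},
    {"modality": "visual",   "switch": True,  "congruent": True,  "rt": 450},
    {"modality": "auditory", "switch": False, "congruent": True,  "rt": 460},
    {"modality": "auditory", "switch": True,  "congruent": True,  "rt": 530},
    {"modality": "visual",   "switch": False, "congruent": False, "rt": 440},
    {"modality": "auditory", "switch": False, "congruent": False, "rt": 520},
]

# Modality switch cost: switch minus repetition trials, per modality.
switch_cost_aud = (mean_rt(trials, modality="auditory", switch=True)
                   - mean_rt(trials, modality="auditory", switch=False))
# Congruency effect: incongruent minus congruent trials.
congruency_aud = (mean_rt(trials, modality="auditory", congruent=False)
                  - mean_rt(trials, modality="auditory", congruent=True))
print(switch_cost_aud, congruency_aud)
```

An "asymmetric" switch cost then simply means this contrast differs between the auditory and visual subsets of trials.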

17.
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

18.
Six experiments demonstrated cross-modal influences from the auditory modality on the visual modality at an early level of perceptual organization. Participants had to detect a visual target in a rapidly changing sequence of visual distractors. A high tone embedded in a sequence of low tones improved detection of a synchronously presented visual target (Experiment 1), but the effect disappeared when the high tone was presented before the target (Experiment 2). Rhythmically based or order-based anticipation was unlikely to account for the effect because the improvement was unaffected by whether there was jitter (Experiment 3) or a random number of distractors between successive targets (Experiment 4). The facilitatory effect was greatly reduced when the tone was less abrupt and part of a melody (Experiments 5 and 6). These results show that perceptual organization in the auditory modality can have an effect on perceptibility in the visual modality.

19.
The long-term modality effect is the advantage in recall of the last of a list of auditory to-be-remembered (TBR) items compared with the last of a list of visual TBR items when the list is followed by a filled retention interval. If the auditory advantage is due to echoic sensory memory mechanisms, then recall of the last auditory TBR item should be substantially reduced when it is followed by a redundant, not-to-be-recalled auditory suffix. Contrary to this prediction, Experiment 1 demonstrated that a redundant auditory suffix does not significantly reduce recall of the last auditory TBR item. In Experiment 2 a nonredundant auditory suffix produced a large reduction in recall of the last auditory item. Redundancy is not the only factor controlling the effectiveness of a suffix, however. Experiment 3 demonstrated that a nonredundant visual suffix does not reduce recall of the last auditory TBR item. These results are discussed in reference to a retrieval account of the long-term modality effect.

20.
An initial act of self‐control that impairs subsequent acts of self‐control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli, and it too has been robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task and those in the other group completed an auditory attention regulation task; all participants then completed the handgrip task again. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self‐control resources, and then completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号