Similar Documents
20 similar documents found (search time: 31 ms)
1.
Using a spatial task-switching paradigm in which the salience of visual and auditory stimuli was controlled, this study examined the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly modulated visual dominance. In Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, visual dominance was further reduced but still present. These results support the biased-competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and enjoy a processing advantage during multisensory integration.

2.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with different efficiency depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

3.
There is now convincing evidence that an involuntary shift of spatial attention to a stimulus in one modality can affect the processing of stimuli in other modalities, but inconsistent findings across different paradigms have led to controversy. Such inconsistencies have important implications for theories of cross-modal attention. The authors investigated why orienting attention to a visual event sometimes influences responses to subsequent sounds and why it sometimes fails to do so. They examined visual-cue-on-auditory-target effects in two paradigms--implicit spatial discrimination (ISD) and orthogonal cuing (OC)--that have yielded conflicting findings in the past. Consistent with previous research, visual cues facilitated responses to same-side auditory targets in the ISD paradigm but not in the OC paradigm. Furthermore, in the ISD paradigm, visual cues facilitated responses to auditory targets only when the targets were presented directly at the cued location, not when they appeared above or below the cued location. This pattern of results confirms recent claims that visual cues fail to influence responses to auditory targets in the OC paradigm because the targets fall outside the focus of attention.

4.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

5.
The present study examined the effects of cue-based preparation and cue-target modality mapping in crossmodal task switching. In two experiments, we randomly presented lateralized visual and auditory stimuli simultaneously. Subjects were asked to make a left/right judgment for a stimulus in only one of the modalities. Prior to each trial, the relevant stimulus modality was indicated by a visual or auditory cue. The cueing interval was manipulated to examine preparation. In Experiment 1, we used a corresponding mapping of cue-modality and stimulus modality, whereas in Experiment 2 the mapping of cue and stimulus modalities was reversed. We found reduced modality-switch costs with a long cueing interval, showing that attention shifts to stimulus modalities can be prepared, irrespective of cue-target modality mapping. We conclude that perceptual processing in crossmodal switching can be biased in a preparatory way towards task-relevant stimulus modalities.

6.
The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes.
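Weber fractions and points of subjective equality (PSE) of the kind measured above are typically read off a fitted psychometric function. The sketch below is a minimal illustration, not the authors' analysis: it assumes a cumulative-Gaussian model, a crude grid-search fit, and entirely hypothetical choice proportions; the function names and grid ranges are made up for the example.

```python
import math

def norm_cdf(x, mu, sigma):
    # Cumulative Gaussian psychometric function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(numerosities, p_more):
    """Grid-search fit of a cumulative Gaussian to choice proportions.

    Returns (pse, weber_fraction): the PSE is the 50% point (mu),
    and the Weber fraction is taken as sigma / PSE.
    """
    best_err, best_mu, best_sigma = float("inf"), None, None
    for mu in (m / 10.0 for m in range(80, 241)):        # candidate PSEs: 8.0–24.0
        for sigma in (s / 10.0 for s in range(5, 101)):  # candidate slopes: 0.5–10.0
            err = sum((norm_cdf(x, mu, sigma) - p) ** 2
                      for x, p in zip(numerosities, p_more))
            if err < best_err:
                best_err, best_mu, best_sigma = err, mu, sigma
    return best_mu, best_sigma / best_mu

# Hypothetical data: comparison numerosities vs. proportion "more" responses
xs = [8, 10, 12, 14, 16, 18, 20]
ps = [0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.98]
pse, wf = fit_psychometric(xs, ps)
```

With these toy data, the fitted PSE lands near 13.6, so a PSE above the reference numerosity would indicate that the comparison stream is underestimated relative to it, which is how visual-versus-auditory overestimation is diagnosed in this kind of design.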

7.
This experiment investigated the effect of modality on temporal discrimination in children aged 5 and 8 years and adults using a bisection task with visual and auditory stimuli ranging from 200 to 800 ms. In the first session, participants were required to compare stimulus durations with standard durations presented in the same modality (within-modality session), and in the second session in different modalities (cross-modal session). Psychophysical functions were orderly in all age groups, with the proportion of long responses (judgement that a duration was more similar to the long than to the short standard) increasing with the stimulus duration, although functions were flatter in the 5-year-olds than in the 8-year-olds and adults. Auditory stimuli were judged to be longer than visual stimuli in all age groups. The statistical results and a theoretical model suggested that this modality effect was due to differences in the pacemaker speed of the internal clock. The 5-year-olds also judged visual stimuli as more variable than auditory ones, indicating that their temporal sensitivity was lower in the visual than in the auditory modality.
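The pacemaker-speed account can be illustrated with a small scalar-timing simulation. This is a hedged sketch, not the authors' model: the pacemaker rates, noise level, and geometric-mean bisection rule are assumptions chosen only to show why a faster clock makes the same physical duration yield more "long" responses.

```python
import random

def p_long(duration_ms, rate, short_std=200, long_std=800,
           noise_sd=40.0, n_trials=2000, rng=None):
    """Proportion of 'long' classifications under a scalar-timing sketch.

    Accumulated pulses = rate * duration + Gaussian noise; a trial is
    classified 'long' when the count exceeds the bisection point, taken
    here as the geometric mean of the standards at baseline rate 1.0.
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    bisection_point = (short_std * long_std) ** 0.5  # 400 for 200/800 ms
    longs = 0
    for _ in range(n_trials):
        pulses = rate * duration_ms + rng.gauss(0.0, noise_sd)
        if pulses > bisection_point:
            longs += 1
    return longs / n_trials

# Hypothetical rates: the auditory pacemaker runs ~8% faster than the visual one
p_aud = p_long(400, rate=1.08)
p_vis = p_long(400, rate=1.00)
```

At the 400 ms midpoint the visual simulation hovers near chance while the faster auditory clock pushes well past it, reproducing the qualitative "sounds are judged longer than lights" pattern without any change to the decision rule.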

8.
The present study examined cross-modal selective attention using a task-switching paradigm. In a series of experiments, we presented lateralized visual and auditory stimuli simultaneously and asked participants to make a spatial decision according to either the visual or the auditory stimulus. We observed consistent cross-modal interference in the form of a spatial congruence effect. This effect was asymmetrical, with higher costs when responding to auditory than to visual stimuli. Furthermore, we found stimulus-modality-shift costs, indicating a persisting attentional bias towards the attended stimulus modality. We discuss our findings with respect to visual dominance, directed-attention accounts, and the modality-appropriateness hypothesis.

9.
康冠兰, 罗霄骁. 《心理科学》2020, (5): 1072-1078
Multisensory interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another modality. It involves two main aspects: how inputs from different senses are integrated, and how crossmodal conflicts are controlled. This paper reviews the behavioral and neural mechanisms of audiovisual information integration and conflict control, and discusses how attention modulates both. Future research should investigate the brain-network mechanisms of audiovisual crossmodal processing and examine crossmodal integration and conflict control in special populations, to help reveal the mechanisms underlying their cognitive and social dysfunction.

10.
唐晓雨, 孙佳影, 彭姓. 《心理学报》2020, 52(3): 257-268
Using a cue-target paradigm, this study manipulated two variables, target type (visual, auditory, audiovisual) and cue validity (valid cue, neutral condition, invalid cue), across three experiments to examine how divided attention between modalities affects audiovisual inhibition of return (IOR). Experiment 1 (auditory stimuli presented on the left/right) found that, under divided attention between the two modalities, visual targets produced a significant IOR effect whereas audiovisual targets did not. Experiment 2 (auditory stimuli on the left/right) and Experiment 3 (auditory stimuli at the center) found that, under selective attention to the visual modality, both visual and audiovisual targets produced significant IOR effects, with no significant difference between them. These results indicate that divided attention between modalities weakens the audiovisual IOR effect.

11.
A study of ERP attention components across sensory modalities
罗跃嘉, 魏景汉. 《心理学报》1997, 30(2): 195-201
Using a "cross-modal delayed response" paradigm designed to improve the purity of the unattended condition, this study investigated event-related potential (ERP) components of cross-modal attention, taking as the main object of analysis the early difference negativity (Nd1) obtained by subtracting the unattended ERP from the attended ERP. Participants were 12 healthy young adults. The results showed that: (1) for auditory deviant stimuli and for visual standard and deviant stimuli, attention enhanced the N1 amplitude at scalp sites over the corresponding modality-specific sensory areas; the onset of Nd1 coincided with the attended N1 and preceded the unattended N1, supporting early-selection theories of attention; (2) the Nd1 elicited by auditory and visual deviant stimuli peaked over their respective primary sensory projection areas, whereas the Nd1 elicited by standard stimuli peaked at frontal sites, indicating that cross-modal selective attention processes deviant stimuli mainly in modality-specific regions, while standard stimuli are processed mainly at a supramodal level. On this basis, and with respect to the long-standing debate on selective attention, the authors further propose that the timing of attentional selection can vary with stimulus conditions and that the locus of the information filter is plastic.

12.
Unexpected stimuli are often able to distract us from a task at hand. The present study seeks to explore some of the mechanisms underpinning this phenomenon. Studies of involuntary attention capture using the oddball task have repeatedly shown that infrequent auditory changes in a series of otherwise repeating sounds trigger an automatic response to the novel or deviant stimulus. This attention capture has been shown to disrupt participants' behavioral performance in a primary task, even when distractors and targets are asynchronous and presented in distinct sensory modalities. This distraction effect is generally considered a by-product of the capture of attention by the novel or deviant stimulus, but the exact cognitive locus of this effect and the interplay between attention capture and target processing have remained relatively ignored. The present study reports three behavioral experiments using a cross-modal oddball task to examine whether the distraction triggered by auditory novelty affects the processing of the target stimuli. Our results showed that variations in the demands placed on the visual analysis (Experiment 1) or categorical processing of the target (Experiment 2) did not affect distraction. Instead, the cancellation of distraction by an irrelevant visual stimulus presented immediately before the visual target (Experiment 3) suggested that distraction originated in the shifts of attention occurring between attention capture and the onset of target processing. Possible accounts of these shifts are discussed.

13.
Stelmach, Herdman, and McNeil (1994) suggested recently that the perceived duration for attended stimuli is shorter than that for unattended ones. In contrast, the attenuation hypothesis (Thomas & Weaver, 1975) suggests the reverse relation between directed attention and perceived duration. We conducted six experiments to test the validity of the two contradictory hypotheses. In all the experiments, attention was directed to one of two possible stimulus sources. Experiments 1 and 2 employed stimulus durations from 70 to 270 msec. A stimulus appeared in either the visual or the auditory modality. Stimuli in the attended modality were rated as longer than stimuli in the unattended modality. Experiment 3 replicated this finding using a different psychophysical procedure. Experiments 4-6 showed that the finding applies not only to stimuli from different sensory modalities but also to stimuli appearing at different locations within the visual field. The results of all six experiments support the assumption that directed attention prolongs the perceived duration of a stimulus.

14.
This study examined implicit learning in a cross-modal condition, where visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction-time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction-time difference, p < 0.004). The learning rates over time differed both between modalities and between stimuli within modalities, although there was no correlation with global error rates or reaction-time differences between the stimulus types. These results demonstrate a modeling method that is well suited to extracting detailed information about the success of implicit learning from high-variability data. They further show a cross-modal implicit learning effect, which extends the understanding of the implicit learning system and highlights the possibility for information to be processed in a cross-modal representation without conscious processing.
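The model selection reported above rests on Akaike weights, which convert AIC differences into the relative likelihood of each candidate model. A minimal sketch of that computation, assuming hypothetical AIC values (the numbers below are illustrative, not from the study):

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values.

    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC). Weights sum to 1 and give the
    relative support for each model within the candidate set.
    """
    best = min(aics)
    rel_likelihoods = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Hypothetical AICs for five candidate models; the first (lowest AIC) wins
weights = akaike_weights([1012.4, 1016.8, 1020.1, 1023.5, 1025.0])
```

With an AIC margin of a few points over the runner-up, the winning model's weight comes out in the high 0.8s, which is the kind of evidence ratio the abstract's reported weight of 0.87 reflects.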

15.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

16.
Inhibitory mechanisms in audiovisual crossmodal processing of Chinese words
Using a selective recognition method, this study examined the inhibitory mechanisms at work during Chinese word processing under audiovisual crossmodal versus visual unimodal interference. The results showed that, for overall "no" recognition responses to visual words, performance was better under unimodal interference than under crossmodal interference. During visual word processing, the efficiency of inhibiting external distractor materials was not affected by the input modality of the interfering stimuli, but it was affected by their semantic relatedness: distractors belonging to the same semantic category as the target materials were harder to inhibit than distractors from a different category.

17.
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Penney, Gibbon, & Meck, 2000). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: poor performance in the auditory condition was primarily related to boredom, whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that (1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and (2) different factors drive individual differences when testing across modalities.

18.
Implicit memory is often thought to reflect an influence of past experience on perceptual processes, yet priming effects are found when the perceptual format of stimuli changes between study and test episodes. Such cross-modal priming effects have been hypothesized to depend upon stimulus recoding processes whereby a stimulus presented in one modality is converted to other perceptual formats. The present research examined recoding accounts of cross-modal priming by testing patients with verbal production deficits that presumably impair the conversion of visual words into auditory/phonological forms. The patients showed normal priming in a visual stem completion task following visual study (Experiment 1), but showed impairments following auditory study in both implicit (Experiment 2) and explicit (Experiment 3) stem completion. The results are consistent with the hypothesis that verbal production processes contribute to the recoding of visual stimuli and support cross-modal priming. The results also indicate that shared processes contribute to both explicit memory and cross-modal implicit memory.

19.
Selective attention requires the ability to focus on relevant information and to ignore irrelevant information. The ability to inhibit irrelevant information has been proposed to be the main source of age-related cognitive change (e.g., Hasher & Zacks, 1988). Although age-related distraction by irrelevant information has been extensively demonstrated in the visual modality, studies involving auditory and cross-modal paradigms have revealed a mixed pattern of results. A comparative evaluation of these paradigms according to sensory modality suggests a twofold trend: Age-related distraction is more likely (a) in unimodal than in cross-modal paradigms and (b) when irrelevant information is presented in the visual modality, rather than in the auditory modality. This distinct pattern of age-related changes in selective attention may be linked to the reliance of the visual and auditory modalities on different filtering mechanisms. Distractors presented through the auditory modality can be filtered at both central and peripheral neurocognitive levels. In contrast, distractors presented through the visual modality are primarily suppressed at more central levels of processing, which may be more vulnerable to aging. We propose the hypothesis that age-related distractibility is modality dependent, a notion that might need to be incorporated in current theories of cognitive aging. Ultimately, this might lead to a more accurate account for the mixed pattern of impaired and preserved selective attention found in advancing age.

20.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N=56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect, with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object, and it provides a first validation of the multimodal stimulus set.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号