Similar literature
20 similar documents retrieved (search time: 31 ms)
1.
This article reports a detailed examination of timing in the vibrotactile modality and a comparison with the visual and auditory modalities. Three experiments investigated human timing in the vibrotactile modality. In Experiment 1, a staircase threshold procedure with a standard duration of 1,000 ms revealed a difference threshold of 160.35 ms for vibrotactile stimuli, which was significantly higher than that for auditory stimuli (103.25 ms) but not significantly lower than that obtained for visual stimuli (196.76 ms). In Experiment 2, verbal estimation revealed a significant slope difference between vibrotactile and auditory timing, but not between vibrotactile and visual timing: both vibrations and lights were judged as shorter than sounds, and this difference was greater at longer durations than at shorter ones. In Experiment 3, performance on a temporal generalization task showed characteristics consistent with the predictions of scalar expectancy theory (SET; Gibbon, 1977), with both mean accuracy and scalar variance exhibited. The results were modelled using the modified Church and Gibbon model (MCG; derived by Wearden, 1992, from Church & Gibbon, 1982). The model gave an excellent fit to the data, and the parameter values obtained were compared with those for visual and auditory temporal generalization. The pattern of results suggests that timing in the vibrotactile modality conforms to SET and that the internal clock speed for vibrotactile stimuli is significantly slower than that for auditory stimuli, which is logically consistent with the significant differences in difference threshold that were obtained.
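For orientation only (this arithmetic is not part of the abstract; it is simply derived from the numbers reported above), the thresholds can be expressed as Weber fractions relative to the 1,000-ms standard, and the scalar property at the heart of SET is the claim that such a ratio stays roughly constant across durations:

```latex
% Weber fractions implied by the reported difference thresholds (standard = 1000 ms)
\[
\mathrm{WF} = \frac{\Delta t}{t_{\mathrm{standard}}}, \qquad
\mathrm{WF}_{\mathrm{auditory}} = \frac{103.25}{1000} \approx 0.10, \quad
\mathrm{WF}_{\mathrm{vibrotactile}} = \frac{160.35}{1000} \approx 0.16, \quad
\mathrm{WF}_{\mathrm{visual}} = \frac{196.76}{1000} \approx 0.20.
\]
% Scalar property assumed by SET: timing variability grows in proportion to the
% timed duration, so the coefficient of variation is approximately constant:
\[
\frac{\sigma(t)}{\mu(t)} \approx \text{constant}.
\]
```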

2.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with different efficiency depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., the effect of temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

3.
Using a spatial task-switching paradigm and manipulating the salience of visual and auditory stimuli, this study examined the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly modulated visual dominance: in Experiment 1, the visual dominance effect was markedly weakened when the auditory stimulus was highly salient; in Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further reduced but still present. The results support biased competition theory: in cross-modal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

4.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5–1.0 s) or long (4.0–8.0 s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.

5.
康冠兰, 罗霄骁. 《心理科学》, 2020, (5): 1072-1078
Crossmodal (multisensory) interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another modality. It mainly involves two aspects: how inputs from different sensory modalities are integrated, and how conflict between crossmodal signals is controlled. This paper reviews the behavioral and neural mechanisms of audiovisual crossmodal integration and conflict control and discusses how attention affects both. Future research should investigate the brain-network mechanisms of audiovisual crossmodal processing and examine crossmodal integration and conflict control in special populations, in order to help reveal the mechanisms underlying their cognitive and social impairments.

6.
Nonspatial attentional shifts between audition and vision (cited 2 times: 0 self-citations, 2 by others)
This study investigated nonspatial shifts of attention between the visual and auditory modalities. The authors provide evidence that the modality of a stimulus (S1) affects the processing of a subsequent stimulus (S2), depending on whether the two share the same modality. For both vision and audition, the onset of S1 summoned attention exogenously to its modality, causing a delay in processing S2 in a different modality. This finding undermines the notion that auditory stimuli have a stronger and more automatic alerting effect than visual stimuli (M. I. Posner, M. J. Nissen, & R. M. Klein, 1976). The results are consistent with other recent studies showing cross-modal attentional limitations. The authors suggest that such cross-modal limitations can be produced by simply presenting S1 and S2 in different modalities and that central processing mechanisms are also, at least partially, modality dependent.

7.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modalities with either a spatial-location or a spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus sets (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

8.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.
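A quick unit conversion (mine, not the authors') puts the reported limit on a millisecond scale: an alternation rate of 4 Hz corresponds to a period of 250 ms per audio-visual cycle.

```latex
% Cycle duration corresponding to the reported ~4 Hz crossmodal matching limit
\[
T = \frac{1}{f} = \frac{1}{4\ \mathrm{Hz}} = 0.25\ \mathrm{s} = 250\ \mathrm{ms}.
\]
```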

9.
It has been proposed that the perception of very short durations is governed by sensory mechanisms, whereas the perception of longer durations depends on cognitive capacities. Four duration discrimination tasks (modalities: visual, auditory; base durations: 100 ms and 1000 ms) were used to study the relation between time perception, age, sex, and cognitive abilities (alertness, visual and verbal working memory, general fluid reasoning) in 100 subjects aged between 21 and 84 years. Temporal acuity was higher (Weber fractions were lower) for longer stimuli and for the auditory modality. Age was related to the visual 100 ms condition only, with lower temporal acuity in older participants. Alertness was significantly related to auditory and visual Weber fractions for shorter stimuli only. Additionally, visual working memory was a significant predictor for shorter visual stimuli. These results indicate that both alertness and working memory are associated with temporal discrimination of very brief durations.

10.
Functional magnetic resonance imaging (fMRI) was used to examine differences between children (9–12 years) and adults (21–31 years) in the distribution of brain activation during word processing. Orthographic, phonologic, semantic and syntactic tasks were used in both the auditory and visual modalities. Our two principal results were consistent with the hypothesis that development is characterized by increasing specialization. Our first analysis compared activation in children versus adults separately for each modality. Adults showed more activation than children in the unimodal visual areas of middle temporal gyrus and fusiform gyrus for processing written word forms and in the unimodal auditory areas of superior temporal gyrus for processing spoken word forms. Children showed more activation than adults for written word forms in posterior heteromodal regions (Wernicke's area), presumably for the integration of orthographic and phonologic word forms. Our second analysis compared activation in the visual versus auditory modality separately for children and adults. Children showed primarily overlap of activation in brain regions for the visual and auditory tasks. Adults showed selective activation in the unimodal auditory areas of superior temporal gyrus when processing spoken word forms and selective activation in the unimodal visual areas of middle temporal gyrus and fusiform gyrus when processing written word forms.

12.
This experiment investigated the effect of modality on temporal discrimination in children aged 5 and 8 years and in adults, using a bisection task with visual and auditory stimuli ranging from 200 to 800 ms. In the first session, participants were required to compare stimulus durations with standard durations presented in the same modality (within-modality session), and in the second session in different modalities (cross-modal session). Psychophysical functions were orderly in all age groups, with the proportion of long responses (judgements that a duration was more similar to the long than to the short standard) increasing with stimulus duration, although the functions were flatter in the 5-year-olds than in the 8-year-olds and adults. Auditory stimuli were judged to be longer than visual stimuli in all age groups. The statistical results and a theoretical model suggested that this modality effect was due to differences in the pacemaker speed of the internal clock. The 5-year-olds' judgements of visual stimuli were also more variable than their judgements of auditory stimuli, indicating that their temporal sensitivity was lower in the visual than in the auditory modality.
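For readers less familiar with how bisection data are summarized, the sketch below shows one common way to quantify such psychophysical functions. It is purely illustrative: the logistic form, the example durations, and the response proportions are my assumptions, not data or methods from this study.

```python
# Illustrative sketch: fit a logistic psychometric function to temporal-bisection
# data and read off the bisection point (the duration judged "long" 50% of the
# time). The durations and response proportions are made-up example values,
# not data from the study summarized above.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, bp, slope):
    """Proportion of 'long' responses as a function of probe duration t (ms).
    bp    : bisection point, where p('long') = 0.5
    slope : shallowness of the function; larger values mean poorer sensitivity."""
    return 1.0 / (1.0 + np.exp(-(t - bp) / slope))

# Probe durations between the 200-ms and 800-ms standards, and hypothetical
# proportions of 'long' responses for one participant.
durations = np.array([200, 300, 400, 500, 600, 700, 800], dtype=float)
p_long = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

(bp, slope), _ = curve_fit(logistic, durations, p_long, p0=[500.0, 100.0])
print(f"bisection point ~ {bp:.0f} ms, slope ~ {slope:.0f} ms")
```

A flatter fitted function (larger slope parameter) corresponds to the poorer temporal sensitivity described for the 5-year-olds above.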

13.
This research deals with individual differences in the ability to focus and divide attention. Eighty-five subjects performed visual search and auditory detection tasks in three conditions: single channel, focused attention, and divided attention. Reaction time (RT) was fastest in the single channel condition, intermediate in the focused attention condition, and slowest in the divided attention condition, and these effects were much stronger in the auditory than in the visual task. Correlations among RTs in the three conditions were very high within modality (>.88) and lower between modalities (.5 to .6). The correlational data were well fit by a model that included separate factors for the visual and auditory tasks. Measures from the three attentional conditions within each modality loaded equally on these factors. The data provided no evidence for distinct abilities to divide or focus attention.

14.
Previous research suggests that there are significant differences in the operation of reference memory for stimuli of different modalities, with visual temporal entries appearing to be more durable than auditory entries (Ogden, Wearden, & Jones, 2008, 2010). Ogden et al. (2008, 2010) demonstrated that when participants were required to store multiple auditory temporal standards over a period of delay there was significant systematic interference to the representation of the standard, characterized by shifts in the location of peak responding. No such performance deterioration was observed when multiple visually presented durations were encoded and maintained. The current article explored whether this apparent modality-based difference in reference memory operation is unique to temporal stimuli or whether similar characteristics are also apparent when nontemporal stimuli are encoded and maintained. The modified temporal generalization method developed in Ogden et al. (2008) was employed; however, standards and comparisons varied by pitch (auditory) and physical line length (visual) rather than duration. Pitch and line length generalization results indicated that increasing memory load led to more variable responding and reduced recognition of the standard; however, there was no systematic shift in the location of peak responding. Comparison of the results of this study with those of Ogden et al. (2008, 2010) suggests that although performance deterioration as a consequence of increases in memory load is common to auditory temporal and nontemporal stimuli and visual nontemporal stimuli, systematic interference is unique to auditory temporal processing.
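To make the "location of peak responding" idea concrete, here is a minimal simulation sketch of a generalization gradient in the spirit of scalar timing. It is not the authors' modified generalization method or the MCG model; the ratio decision rule, noise level, and threshold are simplifying assumptions chosen only for illustration.

```python
# Minimal scalar-timing sketch of a temporal generalization task: a noisy memory
# sample of a 400-ms standard is compared with each probe duration, and "yes,
# that was the standard" is given when the relative mismatch is small. The
# parameter values are illustrative assumptions, not fitted estimates.
import numpy as np

rng = np.random.default_rng(seed=1)

STANDARD = 400.0   # ms, the remembered standard duration
CV = 0.15          # coefficient of variation of the memory for the standard
THRESHOLD = 0.20   # tolerated relative mismatch before responding "no"
N_TRIALS = 10_000  # simulated trials per probe duration

def p_yes(probe_ms: float) -> float:
    """Proportion of 'standard' responses for one probe duration."""
    memory = rng.normal(STANDARD, CV * STANDARD, N_TRIALS)
    mismatch = np.abs(memory - probe_ms) / memory
    return float(np.mean(mismatch < THRESHOLD))

# Generalization gradient: 'standard' responding peaks near 400 ms and falls
# off on either side of the standard.
for probe in (250, 325, 400, 475, 550):
    print(f"probe {probe} ms -> p('standard') = {p_yes(probe):.2f}")
```

In this kind of plot, "systematic interference" would show up as the peak of the gradient drifting away from the standard, whereas mere noise would only flatten it.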

15.
Training people on temporal discrimination can substantially improve performance not only in the trained modality but also in untrained modalities. A pretest–training–posttest design was used to investigate whether consolidation plays a crucial role for training effects within the trained modality and for their transfer to another modality. In the pretest, both auditory and visual discrimination performance was assessed. In the training phase, participants performed only the auditory task. After a consolidation interval of either 5 min or 24 h, participants were again tested in both the auditory and visual tasks. Irrespective of the consolidation interval, performance improved from the pretest to the posttest in both modalities. Most importantly, the training effect for the trained auditory modality was independent of the consolidation interval, whereas the transfer effect to the visual modality was larger after 24 h than after 5 min. This finding shows that transfer effects benefit from extended consolidation.

16.
A study of attentional ERP components across sensory modalities (cited 5 times: 2 self-citations, 3 by others)
罗跃嘉, 魏景汉. 《心理学报》, 1997, 30(2): 195-201
Using a "cross-modal delayed response" paradigm with improved purity of the non-attended condition, this study examined the event-related potential (ERP) components of cross-modal attention, taking as the main measure the early difference negativity (Nd1) obtained by subtracting the non-attended ERP from the attended ERP. Participants were 12 healthy young adults. The results showed that: (1) for auditory deviant stimuli and for visual standard and deviant stimuli, attention increased the N1 amplitude at scalp sites over the corresponding modality-specific sensory regions; the onset of Nd1 coincided with the attended N1 and preceded the non-attended N1, supporting early-selection accounts of attention; (2) the Nd1 elicited by auditory and visual deviant stimuli peaked over their respective primary sensory projection areas, whereas the Nd1 elicited by standard stimuli peaked over frontal sites, indicating that cross-modal selective attention processes deviant stimuli mainly in modality-specific regions, whereas standard stimuli are processed mainly at the level of the sensory channel as a whole. On this basis, and with respect to the long-standing debate about selective attention, the authors further propose that the time course of attentional selection can vary with stimulus conditions and that the locus of the information filter is flexible.

17.
Auditory stimuli usually have longer subjective durations than visual ones of the same real duration, although performance on many timing tasks is similar in form across modalities. One suggestion is that auditory and visual stimuli are initially timed by different mechanisms but are later converted into some common, amodal duration code. The present study investigated this using a temporal generalization interference paradigm. In test blocks, people decided whether or not comparison durations were equal to a standard that was 400 ms on average. Test blocks alternated with interference blocks in which durations were systematically shorter or longer than those in the test blocks, and interference was found in the direction of the durations in the interference blocks, even when the interfering blocks used stimuli in a different modality from the test block. This provides what may be the first direct experimental evidence for a “common code” for durations initially presented in different modalities at some level of the human timing system.

18.
The testing effect refers to improved memory after retrieval practice and has been researched primarily with visual stimuli. In two experiments, we investigated whether the testing effect can be replicated when the to-be-learned information is presented auditorily, or visually + auditorily. Participants learned Swahili-English word pairs in one of three presentation modalities – visual, auditory, or visual + auditory. This was manipulated between participants in Experiment 1 and within participants in Experiment 2. All participants studied the word pairs during three study trials. Half of the participants practiced recalling the English translations in response to the Swahili cue word twice before the final test, whereas the other half simply studied the word pairs twice more. Results indicated an improvement in final test performance in the repeated test condition, but only in the visual presentation modality (Experiments 1 and 2) and in the visual + auditory presentation modality (Experiment 2). This suggests that the benefits of practiced retrieval may be limited to information presented in a visual modality.

19.
Involuntary listening aids seeing: evidence from human electrophysiology (cited 3 times: 0 self-citations, 3 by others)
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

20.
Inhibitory mechanisms in visual-auditory cross-modal processing of Chinese lexical information (cited 2 times: 0 self-citations, 2 by others)
A selective recognition procedure was used to examine the inhibitory mechanisms at work during Chinese word processing under visual-auditory cross-modal versus visual unimodal conditions. The results showed that, for overall "no" recognition responses to visual words, performance was better under unimodal interference than under cross-modal interference. During visual word processing, the efficiency of inhibiting external distractor material was not affected by the modality in which the distractor stimuli were presented. Inhibition efficiency was, however, affected by the semantic relatedness of the distractor material: distractors belonging to the same semantic category as the target material were harder to inhibit than distractors from a different category.
