Similar Documents
20 similar documents found (search time: 812 ms)
1.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.
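As a sketch of how such switch costs and congruency effects are typically computed from trial-level RTs (illustrative Python, not this study's analysis pipeline; all column names and RT values are fabricated for the example):

```python
# Illustrative sketch (not the study's analysis code) of how modality
# switch costs and congruency effects are derived from trial-level RTs.
# All column names and RT values here are fabricated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
trials = pd.DataFrame({
    "modality":  rng.choice(["visual", "auditory"], n),
    "switch":    rng.choice([False, True], n),   # modality switch vs. repeat
    "congruent": rng.choice([False, True], n),   # spatial S-R congruency
})
# Fabricated RT model: auditory targets slower, switches slower,
# incongruent trials slower, with an extra congruency penalty for
# auditory targets (the kind of asymmetry reported above).
trials["rt"] = (450.0
                + 40 * (trials["modality"] == "auditory")
                + 35 * trials["switch"]
                + 25 * ~trials["congruent"]
                + 30 * (~trials["congruent"] & (trials["modality"] == "auditory"))
                + rng.normal(0, 30, n))

# Switch cost per target modality: mean RT on switch minus repeat trials.
switch_cost = trials.groupby("modality").apply(
    lambda g: g.loc[g["switch"], "rt"].mean() - g.loc[~g["switch"], "rt"].mean())

# Congruency effect per modality: mean RT incongruent minus congruent.
congruency = trials.groupby("modality").apply(
    lambda g: g.loc[~g["congruent"], "rt"].mean() - g.loc[g["congruent"], "rt"].mean())

print("switch cost (ms):\n", switch_cost)
print("congruency effect (ms):\n", congruency)
```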

2.
Associating crossmodal auditory and visual stimuli is an important component of perception, with the posterior superior temporal sulcus (pSTS) hypothesized to support this. However, recent evidence has argued that the pSTS serves to associate two stimuli irrespective of modality. To examine the contribution of pSTS to crossmodal recognition, participants (N = 13) learned 12 abstract, non-linguistic pairs of stimuli over 3 weeks. These paired associates comprised four types: auditory–visual (AV), auditory–auditory (AA), visual–auditory (VA), and visual–visual (VV). At week four, participants were scanned using magnetoencephalography (MEG) while performing a correct/incorrect judgment on pairs of items. Using an implementation of synthetic aperture magnetometry that computes real statistics across trials (SAMspm), we directly contrasted crossmodal (AV and VA) with unimodal (AA and VV) pairs from stimulus-onset to 2 s in theta (4–8 Hz), alpha (9–15 Hz), beta (16–30 Hz), and gamma (31–50 Hz) frequencies. We found pSTS showed greater desynchronization in the beta frequency for crossmodal compared with unimodal trials, suggesting greater activity during the crossmodal pairs, which was not influenced by congruency of the paired stimuli. Using a sliding window SAM analysis, we found the timing of this difference began in a window from 250 to 750 ms after stimulus-onset. Further, when we directly contrasted all sub-types of paired associates from stimulus-onset to 2 s, we found that pSTS seemed to respond to dynamic, auditory stimuli, rather than crossmodal stimuli per se. These findings support an early role for pSTS in the processing of dynamic, auditory stimuli, and do not support claims that pSTS is responsible for associating two stimuli irrespective of their modality.
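As a rough illustration of the kind of band-limited power contrast described here (a generic sensor-level sketch using SciPy, not the source-level SAM beamformer analysis the study actually ran; the sampling rate and epoch arrays are hypothetical stand-ins):

```python
# Generic sketch of a band-power contrast (bandpass filter + Hilbert
# envelope), standing in for the study's SAM beamformer analysis.
# Sampling rate and epoch data are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 600.0  # sampling rate in Hz (assumed for this example)

def band_power(epochs, low, high, fs=FS):
    """Mean band-limited power per epoch; epochs: (n_trials, n_samples)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1))
    return (envelope ** 2).mean(axis=-1)

# Fabricated stand-in epochs for crossmodal (AV/VA) vs. unimodal (AA/VV)
# pairs, covering 0-2 s after stimulus onset.
crossmodal = np.random.randn(120, int(2 * FS))
unimodal = np.random.randn(120, int(2 * FS))

beta_cross = band_power(crossmodal, 16, 30)   # beta band, 16-30 Hz
beta_uni = band_power(unimodal, 16, 30)

# Desynchronization = lower band power; the study reports greater beta
# desynchronization (i.e., crossmodal < unimodal) in pSTS.
print("beta power, crossmodal - unimodal:", beta_cross.mean() - beta_uni.mean())
```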

3.
The present study examined the effects of cue-based preparation and cue-target modality mapping in crossmodal task switching. In two experiments, we randomly presented lateralized visual and auditory stimuli simultaneously. Subjects were asked to make a left/right judgment for a stimulus in only one of the modalities. Prior to each trial, the relevant stimulus modality was indicated by a visual or auditory cue. The cueing interval was manipulated to examine preparation. In Experiment 1, we used a corresponding mapping of cue-modality and stimulus modality, whereas in Experiment 2 the mapping of cue and stimulus modalities was reversed. We found reduced modality-switch costs with a long cueing interval, showing that attention shifts to stimulus modalities can be prepared, irrespective of cue-target modality mapping. We conclude that perceptual processing in crossmodal switching can be biased in a preparatory way towards task-relevant stimulus modalities.

4.
Sandhu R, Dyson BJ. Acta Psychologica, 2012, 140(2): 111-118
Competition between the senses can lead to modality dominance, where one sense influences multi-modal processing to a greater degree than another. Modality dominance can be influenced by task demands, speeds of processing, contextual influence and practice. To resolve previous discrepancies in these factors, we assessed modality dominance in an audio-visual paradigm controlling for the first three factors while manipulating the fourth. Following a uni-modal task in which auditory and visual processing were equated, participants completed a pre-practice selective attention bimodal task in which the congruency relationship and task-relevant modality changed across trials. Participants were given practice in one modality prior to completing a post-practice selective attention bimodal task similar to the first. The effects of practice were non-specific: participants were faster post-practice than pre-practice. Congruent stimuli, relative to incongruent stimuli, also led to increased processing efficiency. RT data tended to reveal symmetric modality switching costs, whereas the error rate data tended to reveal asymmetric modality switching costs in which switching from auditory to visual processing was particularly costly. The data suggest that when a number of safeguards are put in place to equate auditory and visual responding as far as possible, evidence for an auditory advantage can arise.

5.
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.
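A standard formalization of "the modality with greater discriminability dominates" (offered here as a sketch; the abstract states the principle only verbally) is reliability-weighted cue combination, in which each modality's weight is its inverse variance:

```latex
% Reliability-weighted (maximum-likelihood) cue combination: a sketch of
% the discriminability account, not a formula taken from the paper.
\[
  \hat{s} \;=\; w_V\,\hat{s}_V + w_A\,\hat{s}_A,
  \qquad
  w_V \;=\; \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_A^{2}},
  \qquad
  w_A \;=\; 1 - w_V
\]
% When visual-spatial noise sigma_V is small, w_V -> 1 (visual dominance);
% degrading the visual stimulus raises sigma_V and shifts weight toward
% audition, matching the spatial findings. The duration results above are
% the exception: auditory dominance persisted even with matched
% discriminability, which this scheme does not predict.
```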

6.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N = 56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect, with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object and provides a first validation of the multimodal stimulus set.

7.
The present study examined cross-modal selective attention using a task-switching paradigm. In a series of experiments, we presented lateralized visual and auditory stimuli simultaneously and asked participants to make a spatial decision according to either the visual or the auditory stimulus. We observed consistent cross-modal interference in the form of a spatial congruence effect. This effect was asymmetrical, with higher costs when responding to auditory than to visual stimuli. Furthermore, we found stimulus-modality-shift costs, indicating a persisting attentional bias towards the attended stimulus modality. We discuss our findings with respect to visual dominance, directed-attention accounts, and the modality-appropriateness hypothesis.

8.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly modulated visual dominance: in Experiment 1, visual dominance was markedly weakened when the auditory stimulus was highly salient. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus had low salience, visual dominance was further reduced but still present. The results support biased competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage in multisensory integration.

9.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

10.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier (e.g., perceptual) processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent compared to incongruent face–voice pairs in both the Attend Face and in the Attend Voice condition. Moreover, when attending to faces, emotionally congruent bimodal stimuli were more efficiently processed than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

11.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5–1.0 s) or long (4.0–8.0 s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.
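A common way to quantify temporal bisection performance (a generic sketch, not this study's analysis; the durations and response proportions below are invented) is to fit a psychometric function to the proportion of "long" responses and read off the bisection point (PSE) and a Weber-ratio sensitivity index:

```python
# Generic sketch of temporal bisection analysis: fit a logistic
# psychometric function to the proportion of "long" responses.
# Durations and proportions below are illustrative, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, pse, slope):
    """Probability of responding 'long' at comparison duration d (s)."""
    return 1.0 / (1.0 + np.exp(-(d - pse) / slope))

durations = np.array([0.5, 0.58, 0.67, 0.75, 0.83, 0.92, 1.0])  # short range
p_long = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97])   # hypothetical

(pse, slope), _ = curve_fit(logistic, durations, p_long, p0=[0.75, 0.1])

# Difference limen: half the distance between the 25% and 75% points.
d25 = pse - slope * np.log(3.0)   # logistic(d25) = 0.25
d75 = pse + slope * np.log(3.0)   # logistic(d75) = 0.75
weber_ratio = (d75 - d25) / (2.0 * pse)  # lower = better time sensitivity

print(f"PSE = {pse:.3f} s, Weber ratio = {weber_ratio:.3f}")
```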

12.
郑晓丹, 岳珍珠. 心理科学, 2022, 45(6): 1329-1336
Using real-world objects, we examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to highly related visual stimuli than to weakly related ones, whereas no priming effect was found with visual primes. Experiment 2 found that the crossmodal priming effect disappeared once the prime had been presented for 900 ms. Our study demonstrates that experience-based audiovisual semantic relatedness can facilitate visual selective attention.

13.
In this study, an extended pacemaker-counter model was applied to crossmodal temporal discrimination. In three experiments, subjects discriminated between the durations of a constant standard stimulus and a variable comparison stimulus. In congruent trials, both stimuli were presented in the same sensory modality (i.e., both visual or both auditory), whereas in incongruent trials, each stimulus was presented in a different modality. The model accounts for the finding that temporal discrimination depends on the presentation order of the sensory modalities. Nevertheless, the model fails to explain why temporal discrimination was much better in congruent than in incongruent trials. The discussion considers ways of accommodating the model to this and other shortcomings.
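A minimal simulation sketch of the pacemaker-counter idea (not the authors' extended model; the modality-specific pacemaker rates below are illustrative assumptions) shows why crossmodal comparisons are harder: if an attention-gated switch lets auditory stimuli drive the pacemaker at a higher effective rate than visual stimuli, equal physical durations yield systematically unequal counts in incongruent trials.

```python
# Pacemaker-counter (pulse-accumulation) simulation sketch. The rates
# are hypothetical, not parameters from the study.
import numpy as np

rng = np.random.default_rng(0)
RATE = {"auditory": 60.0, "visual": 50.0}  # pulses/s (assumed)

def accumulated_pulses(duration_s, modality):
    """Poisson pulse count accumulated over a stimulus of given duration."""
    return rng.poisson(RATE[modality] * duration_s)

def judge_longer(std_dur, cmp_dur, std_mod, cmp_mod):
    """True if the comparison stimulus is judged longer than the standard."""
    return accumulated_pulses(cmp_dur, cmp_mod) > accumulated_pulses(std_dur, std_mod)

# Congruent (auditory/auditory) vs. incongruent (auditory standard,
# visual comparison): here 0.5 s auditory and 0.6 s visual both yield
# ~30 expected pulses, so the incongruent judgment collapses to chance.
n = 10_000
congruent = np.mean([judge_longer(0.5, 0.6, "auditory", "auditory") for _ in range(n)])
incongruent = np.mean([judge_longer(0.5, 0.6, "auditory", "visual") for _ in range(n)])
print(f"P('comparison longer'): congruent={congruent:.2f}, incongruent={incongruent:.2f}")
```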

14.
Nonspatial attentional shifts between audition and vision
This study investigated nonspatial shifts of attention between visual and auditory modalities. The authors provide evidence that the modality of a stimulus (S1) affected the processing of a subsequent stimulus (S2) depending on whether they shared the same modality. For both vision and audition, the onset of S1 summoned attention exogenously to its modality, causing a delay in processing S2 in a different modality. That undermines the notion that auditory stimuli have a stronger and more automatic alerting effect than visual stimuli (M. I. Posner, M. J. Nissen, & R. M. Klein, 1976). The results are consistent with other recent studies showing cross-modal attentional limitation. The authors suggest that such cross-modal limitation can be produced by simply presenting S1 and S2 in different modalities and that central processing mechanisms are also, at least partially, modality dependent.

15.
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory–visual interaction, using an auditory–visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

16.
Strybel TZ, Vatakis A. Perception, 2004, 33(9): 1033-1048
Unimodal auditory and visual apparent motion (AM) and bimodal audiovisual AM were investigated to determine the effects of crossmodal integration on motion perception and direction-of-motion discrimination in each modality. To determine the optimal stimulus onset asynchrony (SOA) ranges for motion perception and direction discrimination, we initially measured unimodal visual and auditory AM using one of four durations (50, 100, 200, or 400 ms) and ten SOAs (40-450 ms). In the bimodal conditions, auditory and visual AM were measured in the presence of temporally synchronous, spatially displaced distractors that were either congruent (moving in the same direction) or conflicting (moving in the opposite direction) with respect to target motion. Participants reported whether continuous motion was perceived and its direction. With unimodal auditory and visual AM, motion perception was affected differently by stimulus duration and SOA in the two modalities, while the opposite was observed for direction of motion. In the bimodal audiovisual AM condition, discriminating the direction of motion was affected only in the case of an auditory target. The perceived direction of auditory but not visual AM was reduced to chance levels when the crossmodal distractor direction was conflicting. Conversely, motion perception was unaffected by the distractor direction and, in some cases, the mere presence of a distractor facilitated movement perception.

17.
An auditory attention-switching paradigm was combined with a judgment-switching paradigm to examine the interaction of a varying auditory attention component and a varying judgment component. Participants heard two dichotically presented stimuli—one spoken by a female speaker and one spoken by a male speaker. In each trial, the stimuli were a spoken letter and a spoken number. A visual explicit cue at the beginning of each trial indicated the auditory attention criterion (speaker sex/ear) to identify the target stimulus (Experiment 1) or the judgment that had to be executed (Experiment 2). Hence, the attentional selection criterion switched independently between speaker sexes (or between ears), while the judgment alternated between letter categorization and number categorization. The data indicate that auditory attention criterion and judgment were not processed independently, regardless of whether the attention criterion or the judgment was cued. The partial repetition benefits of the explicitly cued component suggested a hierarchical organization of the auditory attention component and the judgment component within the task set. We suggest that the hierarchy arises due to the explicit cuing of one component rather than due to a “natural” hierarchy of auditory attention component and judgment component.

18.
To what extent is simultaneous visual and auditory perception subject to capacity limitations and attentional control? Two experiments addressed this question by asking observers to recognize test tones and test letters under selective and divided attention. In Experiment 1, both stimuli occurred on each trial, but subjects were cued in advance to process just one or both of the stimuli. In Experiment 2, subjects processed one stimulus and then the other or processed both stimuli simultaneously. Processing time was controlled using a backward recognition masking task. A significant, but small, attention effect was found in both experiments. The present positive results weaken the interpretation that previous attentional effects were due to the particular duration judgment task that was employed. The answer to the question addressed by the experiments appears to be that the degree of capacity limitations and attentional control during visual and auditory perception is small but significant.

19.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments have the implication that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
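In scalar expectancy theory terms, the two routes of influence reported here can be stated compactly (a sketch of the standard formalism, not equations from the paper):

```latex
% Sketch: a context-driven change in pacemaker rate rescales perceived
% duration multiplicatively, while temporal ventriloquism shifts
% perceived on- and offsets additively.
\[
  \hat{t} \;\propto\; r \,\bigl[(t_{\mathrm{off}} + \delta_{\mathrm{off}})
                              - (t_{\mathrm{on}} + \delta_{\mathrm{on}})\bigr],
  \qquad
  \mathrm{SD}(\hat{t}) \;=\; w\,\mathbb{E}[\hat{t}]
\]
% r: internal clock (pacemaker) rate, modulated by auditory context;
% delta_on, delta_off: audiovisual shifts of the perceived on- and offsets;
% the second equation is the scalar property (constant Weber fraction w).
```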

20.
Shifting attention is an effortful control process and incurs a cost on the cognitive system. Previous research suggests that rewards, such as monetary gains, will selectively enhance the ability to shift attention when this demand for control is explicitly cued. Here, we hypothesized that prospective monetary gains will selectively enhance the ability to shift attention even when control demand is unpredictable and not cued beforehand in a modality shift paradigm. In two experiments we found that target detection was indeed facilitated by reward signals when an unpredictable shift of attention was required. In these crossmodal trials the target stimulus was preceded by an unpredictive stimulus directing attention to the opposite modality (i.e., visual–auditory or auditory–visual). Importantly, there was no reward effect in ipsimodal trials (i.e., visual–visual or auditory–auditory). Furthermore, the absence of the latter effect could not be explained in terms of physical limits in speed of responding. Potential motivation of monetary rewards thus selectively translates into motivational intensity when control (i.e., switching) is demanded in unpredictable ways.
