Similar articles
1.
Two experiments investigated the effect of test modality (visual or auditory) on source memory and event-related potentials (ERPs). Test modality influenced source monitoring such that source memory was better when the source and test modalities were congruent. Test modality had less of an influence when alternative information (i.e., cognitive operations) could be used to inform source judgments in Experiment 2. Test modality also affected ERP activity. Variation in parietal ERPs suggested that this activity reflects activation of sensory information, which can be attenuated when the sensory information is misleading. Changes in frontal ERPs support the hypothesis that frontal systems are used to evaluate source-specifying information present in the memory trace.

2.
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features that optimally explain the unisensory features arising in individual sensory modalities. The model qualitatively accounts for several important aspects of multisensory perception: (a) it integrates information from multiple sensory sources in a way that leads to superior performance in, for example, categorization tasks; (b) its performance suggests that multisensory training leads to better learning than unisensory training, even when testing is conducted in unisensory conditions; (c) its multisensory representations are modality invariant; and (d) it predicts "missing" sensory representations in modalities whose input is absent. Our rational analysis indicates that all of these aspects emerge as part of the optimal solution to the problem of learning to represent complex multisensory environments.
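For intuition, a minimal linear-Gaussian toy of this idea follows (a hedged sketch only, not the paper's model, which is Bayesian nonparametric and induces the number of latent features from data). Shared binary features z generate both visual and auditory features, so inferring z from visual input alone yields a modality-invariant code and a prediction for the absent auditory representation; the names and dimensions (K, Wv, Wa, sigma) are illustrative assumptions.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(0)
    K, Dv, Da = 3, 4, 4                    # latent features; visual/auditory dims
    Wv = rng.normal(size=(Dv, K))          # how latent features express visually
    Wa = rng.normal(size=(Da, K))          # ... and auditorily
    sigma = 0.5                            # sensory noise (assumed)
    z_true = np.array([1, 0, 1])           # ground-truth multisensory features
    v = Wv @ z_true + rng.normal(scale=sigma, size=Dv)   # visual input only

    # Exact posterior over all 2^K binary feature vectors, given vision alone.
    zs = np.array(list(product([0, 1], repeat=K)))
    loglik = -((v - zs @ Wv.T) ** 2).sum(axis=1) / (2 * sigma ** 2)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()

    z_mean = post @ zs       # modality-invariant feature estimate
    a_pred = Wa @ z_mean     # "missing" auditory representation predicted from vision

Because the same latent features serve both modalities, training with both inputs sharpens the posterior over z, which is one way to see why multisensory training can help even in unisensory tests.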

3.
Stelmach, Herdman, and McNeil (1994) suggested recently that the perceived duration for attended stimuli is shorter than that for unattended ones. In contrast, the attenuation hypothesis (Thomas & Weaver, 1975) suggests the reverse relation between directed attention and perceived duration. We conducted six experiments to test the validity of the two contradictory hypotheses. In all the experiments, attention was directed to one of two possible stimulus sources. Experiments 1 and 2 employed stimulus durations from 70 to 270 msec. A stimulus appeared in either the visual or the auditory modality. Stimuli in the attended modality were rated as longer than stimuli in the unattended modality. Experiment 3 replicated this finding using a different psychophysical procedure. Experiments 4-6 showed that the finding applies not only to stimuli from different sensory modalities but also to stimuli appearing at different locations within the visual field. The results of all six experiments support the assumption that directed attention prolongs the perceived duration of a stimulus.

4.
Martino G, Marks LE (2000). Perception, 29(6), 745-754.
At each moment, we experience a melange of information arriving at several senses, and often we focus on inputs from one modality and 'reject' inputs from another. Does input from a rejected sensory modality modulate one's ability to make decisions about information from a selected one? When the modalities are vision and hearing, the answer is "yes", suggesting that vision and hearing interact. In the present study, we asked whether similar interactions characterize vision and touch. As with vision and hearing, results obtained in a selective attention task show cross-modal interactions between vision and touch that depend on the synesthetic relationship between the stimulus combinations. These results imply that similar mechanisms may govern cross-modal interactions across sensory modalities.

5.
The brain often integrates multisensory sources of information in a way that is close to optimal according to Bayesian principles. Since sensory modalities are grounded in different, body-relative frames of reference, multisensory integration requires accurate transformations of information. We have shown experimentally, for example, that a rotating tactile stimulus on the palm of the right hand can influence the judgment of ambiguously rotating visual displays. Most significantly, this influence depended on palm orientation: when the palm faced upwards, a clockwise rotation on the palm yielded a clockwise visual judgment bias; when it faced downwards, the same clockwise rotation yielded a counterclockwise bias. Thus, tactile rotation cues biased visual rotation judgment in a head-centered reference frame. Recently, we have developed a modular, multimodal arm model that can mimic aspects of such experiments. The model co-represents the state of an arm in several modalities, including a proprioceptive, joint-angle modality as well as head-centered orientation and location modalities. Each modality represents each limb or joint separately. Sensory information from the different modalities is exchanged via local forward and inverse kinematic mappings. Also, re-afferent sensory feedback is anticipated and integrated via Kalman filtering. Information across modalities is integrated probabilistically via Bayesian plausibility estimates, continuously maintaining a consistent global arm state estimate. This architecture is thus able to model the described effect of posture-dependent motion cue integration: tactile and proprioceptive sensory information may yield top-down biases on visual processing. Equally, such information may influence top-down visual attention, expecting particular arm-dependent motion patterns. Current research implements such effects on visual processing and attention.
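For two independent Gaussian cues, the "close to optimal" integration invoked here reduces to precision-weighted averaging, which is equivalent to a single Kalman update. A minimal sketch; the fuse helper and the rotation-speed numbers are illustrative assumptions, not components of the authors' arm model.

    import numpy as np

    def fuse(means, variances):
        # Precision-weighted fusion of independent Gaussian cue estimates.
        w = 1.0 / np.asarray(variances, dtype=float)    # precision = reliability
        mu = float((w * np.asarray(means, dtype=float)).sum() / w.sum())
        return mu, float(1.0 / w.sum())

    # Hypothetical visual and tactile estimates of rotation speed (deg/s):
    mu, var = fuse(means=[2.0, 6.0], variances=[4.0, 1.0])
    print(mu, var)   # 5.2, 0.8 -- pulled toward the more reliable tactile cue

The fused estimate is biased toward the more reliable cue and is more precise than either cue alone, which is the signature of near-optimal integration the abstract describes.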

6.
Previous work documented that sensorimotor adaptation transfers between sensory modalities: when subjects adapt with one arm to a visuomotor distortion while responding to visual targets, they also appear to be adapted when they are subsequently tested with auditory targets. Vice versa, when they adapt to an auditory-motor distortion while pointing to auditory targets, they appear to be adapted when they are subsequently tested with visual targets. It was therefore concluded that visuomotor and auditory-motor adaptation use the same adaptation mechanism. Furthermore, it has been proposed that sensory information from the trained modality is weighted more heavily than sensory information from an untrained one, because transfer between sensory modalities is incomplete. The present study tested these hypotheses for dual-arm adaptation. One arm adapted to an auditory-motor distortion and the other to an oppositely directed auditory-motor or visuomotor distortion. We found that both arms adapted significantly. However, compared to reference data on single-arm adaptation, adaptation in the dominant arm was reduced, indicating interference from the non-dominant to the dominant arm. We further found that arm-specific aftereffects of adaptation, which reflect recalibration of sensorimotor transformation rules, were stronger or equally strong when targets were presented in the previously adapted rather than the non-adapted sensory modality, even when one arm adapted visually and the other auditorily. The findings are discussed with respect to a recently published schematic model of sensorimotor adaptation.

7.
郑晓丹, 岳珍珠 (2022). 心理科学 [Psychological Science], 45(6), 1329-1336.
Using real-world objects, this study examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to highly related visual stimuli than to weakly related ones, whereas no priming effect emerged with visual primes. Experiment 2 found that the crossmodal priming effect disappeared when the prime had been presented for 900 ms. The study demonstrates that experience-based audiovisual semantic relatedness can facilitate visual selective attention.

8.
The existence of a rostrocaudal gradient of medial temporal lobe (MTL) activation during memory encoding has historically received support from positron emission tomography studies, but less so from functional MRI (fMRI) studies. More recently, fMRI studies have demonstrated that characteristics of the stimuli can affect the location of activation seen in the MTL when those stimuli are encoded. The current study tested the hypothesis that MTL activation during memory encoding is related to the modality of stimulus presentation. Subjects encoded auditorily or visually presented words in an fMRI novelty paradigm. Imaging and analysis parameters were optimized to minimize susceptibility artifact in the anterior MTL. Greater activation was observed in the anterior than in the posterior MTL for both modalities of stimulus presentation. The results indicate that anterior MTL activation occurred during encoding independent of stimulus modality, and they support the hypothesis that verbal-semantic memory processing occurs in the anterior MTL. The authors suggest that technical factors are critical for observing the rostrocaudal gradient in MTL memory activation.

9.
Two experiments were conducted that examined information integration and rule-based category learning, using stimuli that contained auditory and visual information. The results suggest that it is easier to perceptually integrate information within a single sensory modality than across modalities. Conversely, it is easier to perform a disjunctive rule-based task when information comes from different sensory modalities rather than from the same modality. Quantitative model-based analyses suggested that the information-integration deficit for across-modality stimulus dimensions was due to an increase in the use of hypothesis-testing strategies to solve the task and to an increase in random responding. The modeling also suggested that the across-modality advantage for disjunctive rule-based category learning was due to a greater reliance on disjunctive hypothesis-testing strategies, as opposed to unidimensional hypothesis-testing strategies and random responding.
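The quantitative model-based analyses mentioned here compare decision strategies of roughly the following kinds. A schematic sketch; the stimulus coding, criteria, and weights are illustrative placeholders, not fitted values from the study.

    import numpy as np

    # A stimulus is a pair of normalized dimension values,
    # e.g., (auditory pitch, visual brightness) in [0, 1].
    def unidimensional_rule(x, c=0.5):
        return x[0] > c                         # verbalizable, one dimension

    def disjunctive_rule(x, c0=0.5, c1=0.5):
        return (x[0] > c0) or (x[1] > c1)       # verbalizable, two criteria

    def information_integration(x, w=(0.6, 0.4), c=0.5):
        return w[0] * x[0] + w[1] * x[1] > c    # pre-decisional weighted combination

    def random_responding(x, rng=np.random.default_rng(3)):
        return bool(rng.random() > 0.5)         # guessing baseline

    stimulus = (0.7, 0.2)
    print(unidimensional_rule(stimulus), disjunctive_rule(stimulus),
          information_integration(stimulus), random_responding(stimulus))

Fitting such strategy models to each participant's responses is what licenses conclusions like "the across-modality deficit reflects more hypothesis testing and more random responding".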

10.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

11.
How do people automatize their dual-task performance through bottleneck bypassing (i.e., accomplish parallel processing of the central stages of two tasks)? In the present work we addressed this question by evaluating the impact of sensory–motor modality compatibility, the similarity in modality between the stimulus and the consequences of the response. We hypothesized that incompatible sensory–motor modalities (e.g., visual–vocal) create conflicts within modality-specific working memory subsystems, and therefore predicted that tasks producing such conflicts would be performed less automatically after practice. To probe for automaticity, we used a transfer psychological refractory period (PRP) procedure: participants were first trained on a visual task (Exp. 1) or an auditory task (Exp. 2) by itself, which was later presented as Task 2 along with an unpracticed Task 1. The Task 1–Task 2 sensory–motor modality pairings were either compatible (visual–manual and auditory–vocal) or incompatible (visual–vocal and auditory–manual). In both experiments we found converging indicators of bottleneck bypassing (small dual-task interference and a high rate of response reversals) for compatible sensory–motor modalities, but indicators of bottlenecking (large dual-task interference and few response reversals) for incompatible sensory–motor modalities. Relatedly, the proportion of individuals able to bypass the bottleneck was high for compatible modalities but very low for incompatible modalities. We propose that dual-task automatization is within reach when the tasks rely on codes that do not compete within a working memory subsystem.

12.
To construct a coherent percept of the world, the brain continuously combines information across multiple sensory modalities. Simple stimuli from different modalities are usually assumed to be processed in distinct brain areas. However, there is growing evidence that simultaneous stimulation of multiple modalities can influence the activity in unimodal sensory areas and improve or impair performance in unimodal tasks. Do these effects reflect a genuine cross-modal integration of sensory signals, or are they due to changes in the perceiver's ability to locate the stimulus in time and space? We used a behavioral measure to differentiate between these explanations. Our results demonstrate that, under certain circumstances, a noninformative flash of light can have facilitative or detrimental effects on a simple tactile discrimination. The effect of the visual flash mimics that produced by a constant tactile pedestal stimulus. These findings reveal that sensory signals from different modalities can be integrated, even for perceptual judgments within a single modality.

13.
In two experiments, we studied the temporal dynamics of feature integration with auditory (Experiment 1) and audiovisual (Experiment 2) stimuli and manual responses. Consistent with previous observations, performance was better when the second of two consecutive stimuli shared all or none of the features of the first than when only one of the features overlapped. Comparable partial-overlap costs were obtained for combinations of stimulus features and responses. These effects decreased systematically with increasing time between the two stimulus-and-response events, and the rate of decrease was comparable for unimodal and multimodal bindings. Overall effect size reflected the degree of task relevance of the dimension or modality of the respective feature, but the effects of relevance and of temporal delay did not interact. This suggests that the processing of stimuli on task-relevant sensory modalities and feature dimensions is facilitated by task-specific attentional sets, whereas the temporal dynamics might reflect that bindings "decay" or become more difficult to access over time.

14.
The sensory modality of a task and the modality of a retroactive interfering activity were systematically covaried in order to test Connolly and Jones' and Pick's translation models of intersensory functioning. Forty 10-year-old boys and girls were asked to recall distance and location cues of length under intrasensory and intersensory task conditions (visual and kinesthetic). Visual and kinesthetic interpolated activities were used in an attempt to provide modality-specific interference with the recall of length under the various sensory task conditions. Results of the data analyses provided no support for the Connolly and Jones model of modality-specific storage with translation. Rather, the findings of the study were interpreted as supportive of Pick's hypothesis, which emphasizes the coding of stimulus information (regardless of modality of input) into a form specific to whatever modality is specialized for detection of the information.

15.
When simultaneous presentation of odor and taste cues precedes illness, rats acquire robust aversions to both conditioned stimuli. This phenomenon, referred to as taste-potentiated odor aversion (TPOA), requires information processing from two sensory modalities. Whether similar or different brain networks are activated when TPOA memory is retrieved by the odor or by the taste presentation remains an unsolved question. By means of Fos mapping, we investigated the neuronal substrate underlying TPOA retrieval elicited by either the odor or the taste conditioned stimulus. Whatever the sensory modality used to reactivate TPOA memory, a significant change in Fos expression was observed in the hippocampus, the basolateral nucleus of the amygdala (BLA), and the medial and orbitofrontal cortices. Moreover, only the odor presentation elicited significantly higher Fos immunoreactivity in the piriform cortex, the entorhinal cortex, and the insular cortex. Lastly, the BLA was differentially activated according to the stimulus used to induce TPOA retrieval, with higher Fos expression induced by the odor than by the taste in this nucleus. The present study indicates that even though they share some brain regions, the cerebral patterns induced by the odor and by the taste are different. The data are discussed in view of the relevance of each conditioned stimulus for reactivating TPOA memory and of the involvement of the different labeled brain areas in information processing and TPOA retrieval.

16.
In this study, an extended pacemaker-counter model was applied to crossmodal temporal discrimination. In three experiments, subjects discriminated between the durations of a constant standard stimulus and a variable comparison stimulus. In congruent trials, both stimuli were presented in the same sensory modality (i.e., both visual or both auditory), whereas in incongruent trials, each stimulus was presented in a different modality. The model accounts for the finding that temporal discrimination depends on the presentation order of the sensory modalities. Nevertheless, the model fails to explain why temporal discrimination was much better in congruent than in incongruent trials. The discussion considers possibilities for accommodating the model to this and other shortcomings.
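A pacemaker-counter model of this general kind can be simulated in a few lines: a pacemaker emits pulses at some rate, a counter accumulates them, and the judgment compares the counts. The modality-specific rates below are illustrative assumptions (a faster auditory pacemaker is one common way such models capture modality effects), not the fitted parameters of the extended model.

    import numpy as np

    rng = np.random.default_rng(1)
    RATE_HZ = {"auditory": 60.0, "visual": 45.0}   # assumed pacemaker rates

    def counts(duration_s, modality, n=100_000):
        # Pulses arrive as a Poisson process; the counter sums them.
        return rng.poisson(RATE_HZ[modality] * duration_s, size=n)

    def p_comparison_longer(standard, comparison):
        return (counts(*comparison) > counts(*standard)).mean()

    # Congruent trial vs. an incongruent one with the same durations:
    print(p_comparison_longer((0.5, "auditory"), (0.6, "auditory")))  # well above .5
    print(p_comparison_longer((0.5, "auditory"), (0.6, "visual")))    # below .5

With these assumed rates, the incongruent comparison accumulates fewer pulses despite being physically longer, illustrating how a rate mismatch between modalities can both bias judgments and degrade discrimination.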

17.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly affected visual dominance: in Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was markedly weakened; in Experiment 2, when the auditory stimulus was highly salient and the visual stimulus had low salience, the visual dominance effect was further weakened but still present. The results support the biased competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore hold a processing advantage in multisensory integration.

18.
The present study examined the effects of cue-based preparation and cue-target modality mapping in crossmodal task switching. In two experiments, we randomly presented lateralized visual and auditory stimuli simultaneously. Subjects were asked to make a left/right judgment for a stimulus in only one of the modalities. Prior to each trial, the relevant stimulus modality was indicated by a visual or auditory cue. The cueing interval was manipulated to examine preparation. In Experiment 1, we used a corresponding mapping of cue modality and stimulus modality, whereas in Experiment 2 the mapping of cue and stimulus modalities was reversed. We found reduced modality-switch costs with a long cueing interval, showing that attention shifts to stimulus modalities can be prepared, irrespective of cue-target modality mapping. We conclude that perceptual processing in crossmodal switching can be biased in a preparatory way towards task-relevant stimulus modalities.

19.
Subjects matched successively presented stimuli within and across modalities. In conditions in which they were informed of the modalities of the two stimuli, no differences in matching performance were obtained between the four types of match (visual-visual, auditory-auditory, visual-auditory, and auditory-visual). Thus, there appeared to be no difference between the modalities in ability to perceive or retain the particular stimuli used. In conditions in which subjects were informed of the modality of the first stimulus but only of the modality in which the second stimulus would appear on 80% of trials, there was again no significant difference between auditory-auditory and visual-visual matching. However, auditory-visual matching was much faster than visual-auditory matching when the second stimulus appeared in the unexpected modality. The results suggest that subjects prepare for both possible types of match when uncertain of the second stimulus modality and that the cross-modal asymmetry reflects the additional attentional load that this incurs.
