Similar articles
1.
Orienting attention involuntarily to the location of a sensory event influences responses to subsequent stimuli that appear in different modalities with one possible exception: orienting attention involuntarily to a sudden light sometimes fails to affect responses to subsequent sounds (e.g., Spence & Driver, 1997). Here we investigated the effects of involuntary attention to a brief flash on the processing of subsequent sounds in a design that eliminates stimulus-response compatibility effects and criterion shifts as confounding factors. In addition, the neural processes mediating crossmodal attention were studied by recording event-related brain potentials. Our data show that orienting attention to the location of a spatially nonpredictive visual cue modulates behavioural and neural responses to subsequent auditory targets when the stimulus onset asynchrony is short (between 100 and 300 ms). These findings are consistent with the hypothesis that involuntary shifts of attention are controlled by supramodal brain mechanisms rather than by modality-specific ones.

2.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning as well as symmetrically across modalities via crossmodal learning to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues, which is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and the precision of the orientation responses produced by symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
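The weighted cue combination and per-sample Hebbian reweighting described in this abstract can be sketched as follows (a minimal illustration; the function names, learning rate, and decay term are hypothetical, as the abstract does not give the circuit's actual equations):

```python
# Sketch: combine two directional cues with learned weights, then update the
# weights sample-by-sample via a Hebbian (pre * post) correlation rule.

def combine(w_v, w_a, x_v, x_a):
    """Normalized weighted combination of visual (x_v) and auditory (x_a) cues."""
    return (w_v * x_v + w_a * x_a) / (w_v + w_a)

def hebbian_step(w, pre, post, lr=0.01, decay=0.001):
    """One Hebbian update: weight grows with pre/post correlation, minus decay."""
    return w + lr * pre * post - decay * w

w_v, w_a = 0.5, 0.5                               # start with equal weighting
samples = [(0.9, 0.7), (0.8, 0.2), (1.0, 0.9)]    # toy directional estimates
for x_v, x_a in samples:
    y = combine(w_v, w_a, x_v, x_a)               # orientation response
    w_v = hebbian_step(w_v, x_v, y)               # visual weight tracks correlation
    w_a = hebbian_step(w_a, x_a, y)               # auditory weight tracks correlation
print(w_v, w_a)
```

In this toy run the cleaner visual cue correlates more strongly with the combined response, so its weight grows faster, which mirrors the reliability-based weighting the abstract describes.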

3.
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory–visual interaction, using an auditory–visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

4.
In this study, we addressed how the particular context of stimulus congruency influences audiovisual interactions. We combined an audiovisual congruency task with a proportion-of-congruency manipulation. In Experiment 1, we demonstrated that the perceived duration of a visual stimulus is modulated by the actual duration of a synchronously presented auditory stimulus. In the following experiments, we demonstrated that this crossmodal congruency effect is modulated by the proportion of congruent trials between (Exp. 2) and within (Exp. 4) blocks. In particular, the crossmodal congruency effect was reduced in the context with a high proportion of incongruent trials. This effect was attributed to changes in participants' control set as a function of the congruency context, with greater control applied in the context where the majority of the trials were incongruent. These data contribute to the ongoing debate concerning crossmodal interactions and attentional processes. In sum, context can provide a powerful cue for selective attention to modulate the interaction between stimuli from different sensory modalities.

5.
Traditional studies of spatial attention consider only a single sensory modality at a time (e.g. just vision, or just audition). In daily life, however, our spatial attention often has to be coordinated across several modalities. This is a non-trivial problem, given that each modality initially codes space in entirely different ways. In the last five years, there has been a spate of studies on crossmodal attention. These have demonstrated numerous crossmodal links in spatial attention, such that attending to a particular location in one modality tends to produce corresponding shifts of attention in other modalities. The spatial coordinates of these crossmodal links illustrate that the internal representation of external space depends on extensive crossmodal integration. Recent neuroscience studies are discussed that suggest possible brain mechanisms for the crossmodal links in spatial attention.

6.
Models of information processing tasks such as character identification often do not consider the nature of the initial sensory representation from which task-relevant information is extracted. An important component of this representation is temporal inhibition, in which the response to a stimulus may inhibit, or in some cases facilitate, processing of subsequent stimuli. Three experiments demonstrate the existence of temporal inhibitory processes in information processing tasks such as character identification and digit recall. An existing information processing model is extended to account for these effects, based in part on models from the detection literature. These experiments also discriminate between candidate neural mechanisms of the temporal inhibition. Implications for the transient deficit theory of dyslexia are discussed.

7.
郑晓丹  岳珍珠 《心理科学》2022,45(6):1329-1336
Using real-world everyday objects, we examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to highly related visual stimuli than to weakly related ones, whereas no priming effect was found with visual primes. Experiment 2 found that the crossmodal priming effect disappeared 900 ms after prime onset. Our study demonstrates that audiovisual semantic relatedness grounded in prior experience can facilitate visual selective attention.

8.
Previous research has demonstrated distinct neural correlates for maintenance of abstract, relational versus concrete, sensory information in working memory (WM). Storage of spatial relations in WM results in suppression of posterior sensory regions, which suggests that sensory information is task-irrelevant when relational representations are maintained in WM. However, the neural mechanisms by which abstract representations are derived from sensory information remain unclear. Here, using electroencephalography, we investigated the role of alpha oscillations in deriving spatial relations from a sensory stimulus and maintaining them in WM. Participants encoded two locations into WM, then after an initial maintenance period, a cue indicated whether to convert the spatial information to another sensory representation or to a relational representation. Results revealed that alpha power increased over posterior electrodes when sensory information was converted to a relational representation, but not when the information was converted to another sensory representation. Further, alpha phase synchrony between posterior and frontal regions increased for relational compared to sensory trials during the maintenance period. These results demonstrate that maintaining spatial relations and locations in WM rely on distinct neural oscillatory patterns.

9.
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain “know” which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

10.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited no matter whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that some of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

11.
A brief, vivid phase of auditory sensory storage that outlasts the stimulus could be used in perception in two ways: First, all of the neural activity resulting from the stimulus, including that of the sensory store, could contribute to a sensation of growing loudness; second, the sensory store could permit the continued extraction of information about the sound's acoustic properties. This study includes a task for which these two processes lead to different predictions; a third prediction is based on the two processes combined. The task required loudness judgments for two brief tones presented with a variable intertone interval. The results of Experiments 1-3 were as one would expect if both the growth of sensation and information extraction contributed to the pattern of loudness judgments. Experiment 4 strengthened the two-process account by demonstrating the separability of the two processes. Approaches to mathematical modeling of these results are discussed.

12.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to be dependent on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as their individual physical properties.
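Multisensory enhancement of the kind reported above is conventionally quantified against the best unisensory response using Stein and Meredith's multisensory enhancement index (the abstract does not name the metric used in this study, so treating it as this standard index is an assumption; the spike counts below are toy values):

```python
def enhancement_index(multisensory, visual, auditory):
    """Percent change of the multisensory response relative to the
    strongest unisensory response (standard enhancement index)."""
    best_unisensory = max(visual, auditory)
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

# Toy spike counts for one neuron: combined response vs. each modality alone.
# 18 > 10 + 6, so this example is also super-additive in the sense used above.
print(enhancement_index(18, 10, 6))
```

A positive index indicates enhancement (the spatially congruent case in the abstract), a negative index indicates response depression.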

13.
Crossmodal correspondences are a feature of human perception in which two or more sensory dimensions are linked together; for example, high-pitched noises may be more readily linked with small than with large objects. However, no study has yet systematically examined the interaction between different visual–auditory crossmodal correspondences. We investigated how the visual dimensions of luminance, saturation, size, and vertical position can influence decisions when matching particular visual stimuli with high-pitched or low-pitched auditory stimuli. For multidimensional stimuli, we found a general pattern of summation of the individual crossmodal correspondences, with some exceptions that may be explained by Garner interference. These findings have applications for the design of sensory substitution systems, which convert information from one sensory modality to another.
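The reported summation of individual correspondences can be sketched as a simple additive model in which each visual dimension casts a signed vote for the high- or low-pitched match. The signs below follow the pitch–size, pitch–brightness, and pitch–elevation pairings described in this literature; the equal weighting and feature names are purely illustrative, not the paper's fitted model:

```python
# Hypothetical additive model: each visual feature votes for the
# high-pitched (+1) or low-pitched (-1) auditory match; votes simply sum.
CORRESPONDENCE = {
    "bright": +1, "dark": -1,           # luminance: high pitch <-> bright
    "small": +1, "large": -1,           # size: high pitch <-> small
    "high_position": +1, "low_position": -1,  # elevation: high pitch <-> high
}

def predicted_match(features):
    """Sum the signed votes of a stimulus's features and pick the match."""
    score = sum(CORRESPONDENCE[f] for f in features)
    return "high_pitch" if score > 0 else "low_pitch" if score < 0 else "tie"

print(predicted_match(["bright", "small", "low_position"]))  # 2 votes vs. 1
```

Such a purely additive scheme is exactly what the Garner-interference exceptions in the abstract would violate, since interference implies the dimensions are not processed independently.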

14.
Response-related mechanisms of multitasking were studied by analyzing simultaneous processing of responses in different modalities (i.e., crossmodal action). Participants responded to a single auditory stimulus with a saccade, a manual response (single-task conditions), or both (dual-task condition). We used a spatially incompatible stimulus-response mapping for one task, but not for the other. Critically, inverting these mappings varied temporal task overlap in dual-task conditions while keeping spatial incompatibility across responses constant. Unlike previous paradigms, temporal task overlap was manipulated without utilizing sequential stimulus presentation, which might induce strategic serial processing. The results revealed dual-task costs, but these were not affected by an increase of temporal task overlap. This finding is evidence for parallel response selection in multitasking. We propose that crossmodal action is processed by a central mapping-selection mechanism in working memory and that the dual-task costs are mainly caused by mapping-related crosstalk.

15.
Synchronized neural activity in the frequency range above 20 Hz, the gamma-band, has been proposed as a signature of temporal feature binding. Here we suggest that selective attention facilitates synchronization of neural activity. Selective attention can be guided by bottom-up, stimulus-driven, or top-down, task-driven processes. Both processes cause stimuli to be processed preferentially. While bottom-up processes might facilitate synchronization of neurons due to the salience of the stimulus, top-down processes may bias information selection by facilitating synchronization of neurons coding a certain location in space and/or of neurons related to the processing of certain features. Animal as well as human EEG studies support the notion of a link between induced gamma-band responses and attentive, sensory stimulus processing.

16.
Dedicated and intrinsic models of time perception
Two general frameworks have been articulated to describe how the passage of time is perceived. One emphasizes that the judgment of the duration of a stimulus depends on the operation of dedicated neural mechanisms specialized for representing the temporal relationships between events. Alternatively, the representation of duration could be ubiquitous, arising from the intrinsic dynamics of nondedicated neural mechanisms. In such models, duration might be encoded directly through the amount of activation of sensory processes or as spatial patterns of activity in a network of neurons. Although intrinsic models are neurally plausible, we highlight several issues that must be addressed before we dispense with models of duration perception that are based on dedicated processes.

17.
In a visual-tactile interference paradigm, subjects judged whether tactile vibrations arose on a finger or thumb (upper vs. lower locations), while ignoring distant visual distractor lights that also appeared in upper or lower locations. Incongruent visual distractors (e.g. a lower light combined with upper touch) disrupt such tactile judgements, particularly when appearing near the tactile stimulus (e.g. on the same side of space as the stimulated hand). Here we show that actively wielding tools can change this pattern of crossmodal interference. When such tools were held in crossed positions (connecting the left hand to the right visual field, and vice versa), the spatial constraints on crossmodal interference reversed, so that visual distractors in the other visual field now disrupted tactile judgements most for a particular hand. This phenomenon depended on active tool-use, developing with increased experience in using the tool. We relate these results to recent physiological and neuropsychological findings.

18.
Selective attention is usually considered an egocentric mechanism, biasing sensory information based on its behavioural relevance to oneself. This study provides evidence for an equivalent allocentric mechanism that allows passive observers to selectively attend to information from the perspective of another person. In a negative priming task, participants reached for a red target stimulus whilst ignoring a green distractor. Distractors located close to their hand were inhibited strongly, consistent with an egocentric frame of reference. When participants took turns with another person, the pattern of negative priming shifted to an allocentric frame of reference: locations close to the hand of the observed agent (but far away from the participant’s hand) were inhibited strongly. This suggests that witnessing another’s action leads the observer to simulate the same selective attention mechanisms such that they effectively perceive their surroundings from the other person’s perspective.

19.
In first-order Pavlovian conditioning, learning is acquired by pairing a conditioned stimulus (CS) with an intrinsically motivating unconditioned stimulus (US; e.g., food or shock). In higher-order Pavlovian conditioning (sensory preconditioning and second-order conditioning), the CS is paired with a stimulus that has motivational value that is acquired rather than intrinsic. This review describes some of the ways higher-order conditioning paradigms can be used to elucidate substrates of learning and memory, primarily focusing on fear conditioning. First-order conditioning, second-order conditioning, and sensory preconditioning allow for the controlled demonstration of three distinct forms of memory, the neural substrates of which can thus be analyzed. Higher-order conditioning phenomena allow one to distinguish more precisely between processes involved in transmission of sensory or motor information and processes involved in the plasticity underlying learning. Finally, higher-order conditioning paradigms may also allow one to distinguish between processes involved in behavioral expression of memory retrieval versus processes involved in memory retrieval itself.

20.
Recently, it was proposed that the Simon effect would result not only from two interfering processes, as classical dual-route models assume, but from three processes. It was argued that priming from the spatial code to the nonspatial code might facilitate the identification of the nonspatial stimulus feature in congruent Simon trials. In the present study, the authors provide evidence that the identification of the nonspatial information can be facilitated by the activation of an associated spatial code. In three experiments, participants first associated centrally presented animal and fruit pictures with spatial responses. Subsequently, participants decided whether laterally presented letter strings were words (animal, fruit, or other words) or nonwords; stimulus position could be congruent or incongruent to the associated spatial code. As hypothesized, animal and fruit words were identified faster at congruent than at incongruent stimulus positions from the association phase. The authors conclude that the activation of the spatial code spreads to the nonspatial code, resulting in facilitated stimulus identification in congruent trials. These results speak to the assumption of a third process involved in the Simon task.
