Similar Documents
1.
Two different models (convergent and parallel) potentially describe how recognition memory, the ability to detect the re-occurrence of a stimulus, is organized across different senses. To contrast these two models, rats with or without perirhinal cortex lesions were compared across various conditions that controlled available information from specific sensory modalities. Intact rats not only showed visual, tactile, and olfactory recognition, but also overcame changes in the types of sensory information available between object sampling and subsequent object recognition, e.g., between sampling in the light and recognition in the dark, or vice versa. Perirhinal lesions severely impaired object recognition whenever visual cues were available, but spared olfactory recognition and tactile-based object recognition when tested in the dark. The perirhinal lesions also blocked the ability to recognize an object sampled in the light and then tested for recognition in the dark, or vice versa. The findings reveal parallel recognition systems for different senses reliant on distinct brain areas, e.g., perirhinal cortex for vision, but also show that: (1) recognition memory for multisensory stimuli involves competition between sensory systems and (2) perirhinal cortex lesions produce a bias to rely on vision, despite the presence of intact recognition memory systems serving other senses.

2.
Left-Hemisphere Dominance for Motion Processing in Deaf Signers
Evidence from neurophysiological studies in animals as well as humans has demonstrated robust changes in neural organization and function following early-onset sensory deprivation. Unfortunately, the perceptual consequences of these changes remain largely unexplored. The study of deaf individuals who have been auditorily deprived since birth and who rely on a visual language (i.e., American Sign Language, ASL) for communication affords a unique opportunity to investigate the degree to which perception in the remaining, intact senses (e.g., vision) is modified as a result of altered sensory and language experience. We studied visual motion perception in deaf individuals and compared their performance with that of hearing subjects. Thresholds and reaction times were obtained for a motion discrimination task, in both central and peripheral vision. Although deaf and hearing subjects had comparable absolute scores on this task, a robust and intriguing difference was found regarding relative performance for left-visual-field (LVF) versus right-visual-field (RVF) stimuli: Whereas hearing subjects exhibited a slight LVF advantage, the deaf exhibited a strong RVF advantage. Thus, for deaf subjects, the left hemisphere may be specialized for motion processing. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing, in the case of ASL) are recruited (or "captured") by the left, language-dominant hemisphere.

3.
ABSTRACT

Motor learning, in particular motor adaptation, is driven by information from multiple senses. For example, when arm control is faulty, vision, touch, and proprioception can all report on the arm's movements and help guide the adjustments necessary for correcting motor error. In recent years we have learned a lot about how the brain integrates information from multiple senses for the purpose of perception. However, less is known about how multisensory data guide motor learning. Most models of, and studies on, motor learning focus almost exclusively on the ensuing changes in motor performance without exploring the implications for sensory plasticity. Nor do they consider how discrepancies in sensory information (e.g., vision and proprioception) related to hand position may affect motor learning. Here, we discuss research from our lab and others that shows how motor learning paradigms affect proprioceptive estimates of hand position, and how even the mere discrepancy between visual and proprioceptive feedback can affect learning and plasticity. Our results suggest that sensorimotor learning mechanisms do not rely exclusively on motor plasticity and motor memory, and that sensory plasticity, in particular proprioceptive recalibration, plays a unique and important role in motor learning.

4.
The development of neuroimaging methods has had a significant impact on the study of the human brain. Functional MRI, with its high spatial resolution, provides investigators with a method to localize the neuronal correlates of many sensory and cognitive processes. Magneto- and electroencephalography, in turn, offer excellent temporal resolution, allowing the exact time course of neuronal processes to be investigated. Applying these methods to multisensory processing, many research laboratories have been successful in describing cross-sensory interactions and their spatio-temporal dynamics in the human brain. Here, we review data from selected neuroimaging investigations showing how vision can influence and interact with other senses, namely audition, touch, and olfaction. We highlight some of the similarities and differences in the cross-processing of the different sensory modalities and discuss how different neuroimaging methods can be applied to answer specific questions about multisensory processing. Edited by Marie-Hélène Giard and Mark Wallace.

5.
To understand the development of sensory processes, it is necessary not only to look at the maturation of each of the sensory systems in isolation, but also to study the development of the nervous system's capacity to integrate information across the different senses. It is through such multisensory integration that a coherent perceptual gestalt of the world comes to be generated. In the adult brain, multisensory convergence and integration take place at a number of brainstem and cortical sites, where individual neurons have been found that respond to multisensory stimuli with patterns of activation that depend on the nature of the stimulus complex and the intrinsic properties of the neuron. Parallels between the responses of these neurons and multisensory behavior and perception suggest that they are the substrates that underlie these cognitive processes. In both cat and monkey models, the development of these multisensory neurons and the appearance of their integrative capacity is a gradual postnatal process. For subcortical structures (i.e., the superior colliculus) this maturational process appears to be gated by the appearance of functional projections from regions of association cortex. The slow postnatal maturation of multisensory processes, coupled with its dependency on functional corticotectal connections, suggested that the development of multisensory integration may be tied to sensory experiences acquired during postnatal life. In support of this, eliminating experience in one sensory modality (i.e., vision) during postnatal development severely compromises the integration of multisensory cues. Research is ongoing to better elucidate the critical developmental antecedents for the emergence of normal multisensory capacity. Edited by Marie-Hélène Giard and Mark Wallace. This revised version was published in May 2004 with corrections to Fig. 1.

6.
Maintaining balance is fundamentally a multisensory process, with visual, haptic, and proprioceptive information all playing an important role in postural control. The current project examined the interaction between such sensory inputs, manipulating visual (presence versus absence), haptic (presence versus absence of contact with a stable or unstable finger support surface), and proprioceptive (varying stance widths, including shoulder-width stance, Chaplin [heels together, feet splayed at approximately 60°] stance, feet-together stance, and tandem stance) information. Analyses of mean velocity of the Centre of Pressure (CoP) revealed significant interactions between these factors, with stability gains observed as a function of increasing sensory information (e.g., visual, haptic, visual + haptic), although the nature of these gains was modulated by the proprioceptive information and the reliability of the haptic support surface (i.e., unstable versus stable finger supports). Subsequent analyses on individual difference parameters (e.g., height, leg length, weight, and areas of base of support) revealed that these variables were significantly related to postural measures across experimental conditions. These findings are discussed relative to their implications for multisensory postural control, and with respect to inverted pendulum models of balance.
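The inverted pendulum models mentioned at the end of this abstract treat the body as a rigid rod pivoting about the ankles; a minimal linearized sketch (a standard textbook form, not taken from the abstract itself) is:

I\ddot{\theta}(t) \approx mgh\,\theta(t) + \tau(t)

where \theta is the small sway angle, I the moment of inertia about the ankle axis, m body mass, h the height of the centre of mass, g gravitational acceleration, and \tau(t) the corrective ankle torque driven by the visual, haptic, and proprioceptive feedback manipulated in this study.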

7.
Martino G, Marks LE (2000). Perception, 29(6), 745-754.
At each moment, we experience a melange of information arriving at several senses, and often we focus on inputs from one modality and 'reject' inputs from another. Does input from a rejected sensory modality modulate one's ability to make decisions about information from a selected one? When the modalities are vision and hearing, the answer is "yes", suggesting that vision and hearing interact. In the present study, we asked whether similar interactions characterize vision and touch. As with vision and hearing, results obtained in a selective attention task show cross-modal interactions between vision and touch that depend on the synesthetic relationship between the stimulus combinations. These results imply that similar mechanisms may govern cross-modal interactions across sensory modalities.

8.
ABSTRACT— Although it is estimated that as many as 4% of people experience some form of enhanced cross talk between (or within) the senses, known as synaesthesia, very little is understood about the level of information processing required to induce a synaesthetic experience. In work presented here, we used a well-known multisensory illusion called the McGurk effect to show that synaesthesia is driven by late, perceptual processing, rather than early, unisensory processing. Specifically, we tested 9 linguistic-color synaesthetes and found that the colors induced by spoken words are related to what is perceived (i.e., the illusory combination of audio and visual inputs) and not to the auditory component alone. Our findings indicate that color-speech synaesthesia is triggered only when a significant amount of information processing has occurred and that early sensory activation is not directly linked to the synaesthetic experience.

9.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from that elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to be dependent on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron, whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented with a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as their individual physical properties.
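The response enhancement described here is conventionally quantified with the multisensory interactive index of Meredith and Stein; the abstract does not state the formula, but the standard definition is:

\mathrm{ME} = \frac{\mathrm{CM} - \mathrm{SM}_{\max}}{\mathrm{SM}_{\max}} \times 100\%

where CM is the response to the combined visual-auditory stimulus and SM_max is the stronger of the two unisensory responses; ME > 0 indicates multisensory enhancement, ME < 0 depression.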

10.
Previous studies of multisensory integration have often stressed the beneficial effects that may arise when information concerning an event arrives via different sensory modalities at the same time, as exemplified by research on the redundant target effect (RTE). By contrast, studies of the Colavita visual dominance effect (e.g., [Colavita, F. B. (1974). Human sensory dominance. Perception & Psychophysics, 16, 409–412]) highlight instead the inhibitory consequences of the competition between signals presented simultaneously in different sensory modalities. Although both the RTE and the Colavita effect are thought to occur at early sensory levels, and the stimulus conditions under which they are typically observed are very similar, the interplay between these two opposing behavioural phenomena (facilitation vs. competition) has yet to be addressed empirically. We hypothesized that the dissociation may reflect two of the fundamentally different ways in which humans can perceive concurrent auditory and visual stimuli. In Experiment 1, we demonstrated both multisensory facilitation (RTE) and the Colavita visual dominance effect using exactly the same audiovisual displays, by simply changing the task from a speeded detection task to a speeded modality discrimination task. Meanwhile, in Experiment 2, the participants exhibited multisensory facilitation when responding to visual targets and multisensory inhibition when responding to auditory targets while the task was kept constant. These results therefore indicate that both multisensory facilitation and inhibition can be demonstrated in reaction to the same bimodal event.
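For context, redundant-target facilitation of the kind measured in Experiment 1 is typically tested against Miller's race-model inequality, which the abstract presupposes but does not state:

P(\mathrm{RT} \le t \mid AV) \le P(\mathrm{RT} \le t \mid A) + P(\mathrm{RT} \le t \mid V) \quad \text{for all } t

Violation of this bound at some time t implies that the auditory and visual signals coactivate a common response process rather than merely racing independently.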

11.
ABSTRACT

The perceptual brain is designed around multisensory input. Areas once thought dedicated to a single sense are now known to work with multiple senses. It has been argued that the multisensory nature of the brain reflects a cortical architecture for which task, rather than sensory system, is the primary design principle. This supramodal thesis is supported by recent research on human echolocation and multisensory speech perception. In this review, we discuss the behavioural implications of a supramodal architecture, especially as they pertain to auditory perception. We suggest that the architecture implies a degree of perceptual parity between the senses and that cross-sensory integration occurs early and completely. We also argue that a supramodal architecture implies that perceptual experience can be shared across modalities and that this sharing should occur even without bimodal experience. We finish by briefly suggesting areas of future research.

12.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13–15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
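The maximum-likelihood estimation model against which performance was assessed predicts that unbiased Gaussian cues are combined by inverse-variance weighting; in the standard formulation (not spelled out in the abstract itself):

\hat{S}_{AH} = w_A \hat{S}_A + w_H \hat{S}_H, \qquad w_A = \frac{\sigma_H^{2}}{\sigma_A^{2} + \sigma_H^{2}}, \quad w_H = \frac{\sigma_A^{2}}{\sigma_A^{2} + \sigma_H^{2}}

\sigma_{AH}^{2} = \frac{\sigma_A^{2}\,\sigma_H^{2}}{\sigma_A^{2} + \sigma_H^{2}} \le \min\left(\sigma_A^{2}, \sigma_H^{2}\right)

Here \hat{S}_A and \hat{S}_H are the auditory and haptic size estimates with variances \sigma_A^2 and \sigma_H^2; the bimodal variance falls below the better unimodal variance, which is the reduction in perceptual uncertainty the authors tested for.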

13.
It is often assumed that the spatial senses (vision, hearing, and the tactual senses) operate as distinct and independent modalities and, moreover, that vision is crucial to the development of spatial abilities. However, well-controlled studies of blind persons with adequate experience show that they can function usefully in space. In other words, vision is not a necessary condition for spatial awareness. On the other hand, though the blind may be equal or even superior to the sighted when performing spatial tasks within the body space, they may be deficient, either developmentally or absolutely, in tasks which involve events at a distance from the body, principally in auditory localization. One possible explanation of the differences between blind and sighted (McKinney, 1964; Attneave & Benson, 1969; Warren, 1970) is that vision is the primary spatial reference, and inputs from other modalities are fitted to a visual map. Several criticisms of this theory are adduced, and an alternative theory derived from Sherrington (1947), in which all sensory inputs map on to efferent patterns, is sketched.

14.
Although sensory perception and neurobiology are traditionally investigated one modality at a time, real world behaviour and perception are driven by the integration of information from multiple sensory sources. Mounting evidence suggests that the neural underpinnings of multisensory integration extend into early sensory processing. This article examines the notion that neocortical operations are essentially multisensory. We first review what is known about multisensory processing in higher-order association cortices and then discuss recent anatomical and physiological findings in presumptive unimodal sensory areas. The pervasiveness of multisensory influences on all levels of cortical processing compels us to reconsider thinking about neural processing in unisensory terms. Indeed, the multisensory nature of most, possibly all, of the neocortex forces us to abandon the notion that the senses ever operate independently during real-world cognition.

15.
Multisensory Information in the Control of Complex Motor Actions
ABSTRACT— For many of the complex motor actions we perform, perceptual information is available from several different senses including vision, touch, hearing, and the vestibular system. Here I discuss the use of multisensory information for the control of motor action in three particular domains: aviation, sports, and driving. It is shown that performers in these domains use information from multiple senses—frequently with beneficial effects on performance but sometimes with dangerous consequences. Applied psychologists have taken advantage of our natural tendency to integrate sensory information by designing multimodal displays that compensate for situations in which information from one or more of our senses is unreliable or is unattended due to distraction.

16.
The aim of this systematic review was to integrate and assess evidence for the effectiveness of multisensory stimulation (i.e., stimulating at least two of the following sensory systems: visual, auditory, and somatosensory) as a possible rehabilitation method after stroke. Evidence was considered with a focus on low-level, perceptual (visual, auditory, and somatosensory) deficits, as well as higher-level, cognitive, sensory deficits. We searched the electronic databases Scopus and PubMed for articles published before May 2015. Studies were included that evaluated the effects of multisensory stimulation on patients with low- or higher-level sensory deficits caused by stroke. Twenty-one studies were included in this review, and their quality was assessed on eight elements: randomization, inclusion of a control patient group, blinding of participants, blinding of researchers, follow-up, group size, reporting of effect sizes, and reporting of time post-stroke. Twenty of the twenty-one included studies demonstrate beneficial effects on low- and/or higher-level sensory deficits after stroke. Notwithstanding these beneficial effects, the quality of the studies is insufficient to support a valid conclusion that multisensory stimulation can be successfully applied as an effective intervention. A valuable and necessary next step would be to set up well-designed randomized controlled trials to examine the effectiveness of multisensory stimulation as an intervention for low- and/or higher-level sensory deficits after stroke. Finally, we consider the potential mechanisms of multisensory stimulation for rehabilitation to guide this future research.

17.
Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a "what" task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a "where" task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a "where" interference task was embedded in a "what" primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.

18.
Stillman JA (2002). Perception, 31(12), 1491-1500.
On the face of it, basic tactile sensation might seem the only essential sensory requirement for the delivery of foods and beverages to the digestive system. In practice, however, the appropriate delivery of raw materials for the maintenance and repair of the body requires complex sensory and cognitive processes, such that flavour sensation arguably constitutes the pre-eminent example of an integrated multicomponent perceptual experience. To raise the profile of the chemical senses amongst researchers in other perceptual domains, I review here the contribution of various sense modalities to the flavour of foods and beverages. Further, in the light of these multisensory inputs, the physiological and psychophysical research summarised in this paper invites optimism that novel ways will be found to intervene when nutritional status is compromised either by specific dietary restraints, or by taste and smell disorders.

19.
In multistable perception, the brain alternates between several perceptual explanations of ambiguous sensory signals. It is unknown whether multistable processes can interact across the senses. In the study reported here, we presented subjects with unisensory (visual or tactile), spatially congruent visuotactile, and spatially incongruent visuotactile apparent motion quartets. Congruent stimulation induced pronounced visuotactile interactions, as indicated by increased dominance times for both vision and touch, and an increased percentage bias for the percept already dominant under unisensory stimulation. Thus, the joint evidence from vision and touch stabilizes the more likely perceptual interpretation and thereby decelerates the rivalry dynamics. Yet the temporal dynamics depended also on subjects' attentional focus and was generally slower for tactile than for visual reports. Our results support Bayesian approaches to perceptual inference, in which the probability of a perceptual interpretation is determined by combining visual, tactile, or visuotactile evidence with modality-specific priors that depend on subjects' attentional focus. Critically, the specificity of visuotactile interactions for spatially congruent stimulation indicates multisensory rather than cognitive-bias mechanisms.
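Schematically, the Bayesian account invoked here can be written as follows (notation assumed for illustration, not quoted from the paper):

P(\text{percept} \mid E_V, E_T) \propto P(E_V, E_T \mid \text{percept}) \, P(\text{percept} \mid \text{attention})

where E_V and E_T denote the visual and tactile evidence and the prior is modality-specific and attention-dependent; congruent bimodal evidence sharpens the likelihood of the currently dominant interpretation, which stabilizes it and slows the rivalry alternations.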
