Similar Documents
20 similar documents found.
1.
ABSTRACT

The perceptual brain is designed around multisensory input. Areas once thought dedicated to a single sense are now known to work with multiple senses. It has been argued that the multisensory nature of the brain reflects a cortical architecture for which task, rather than sensory system, is the primary design principle. This supramodal thesis is supported by recent research on human echolocation and multisensory speech perception. In this review, we discuss the behavioural implications of a supramodal architecture, especially as they pertain to auditory perception. We suggest that the architecture implies a degree of perceptual parity between the senses and that cross-sensory integration occurs early and completely. We also argue that a supramodal architecture implies that perceptual experience can be shared across modalities and that this sharing should occur even without bimodal experience. We finish by briefly suggesting areas of future research.

2.
ABSTRACT

J. J. Gibson (1966) rejected many classical assumptions about perception but retained 1 that dates back to classical antiquity: the assumption of separate senses. We suggest that Gibson's retention of this assumption compromised his novel concept of perceptual systems. We argue that lawful, 1:1 specification of the animal–environment interaction, which is necessary for perception to be direct, cannot exist in individual forms of ambient energy, such as light or sound. We argue that specification exists exclusively in emergent, higher order patterns that extend across different forms of ambient energy. These emergent, higher order patterns constitute the global array. If specification exists exclusively in the global array, then direct perception cannot be based upon detection of patterns that are confined to individual forms of ambient energy and, therefore, Gibson's argument for the existence of several distinct perceptual systems cannot be correct. We argue that the senses function as a single, irreducible perceptual system that is sensitive exclusively to patterns in the global array. That is, rather than distinct perceptual systems there exists only 1 perceptual system.

3.
The perception of distance in open fields has been widely studied with static observers. However, we and the world around us are in continuous relative motion, and our perceptual experience is shaped by the complex interactions between our senses and the perception of our self-motion. This poses interesting questions about how our nervous system integrates this multisensory information to resolve specific tasks of daily life, for example, distance estimation. This study provides new evidence about how visual and motor self-motion information affects our perception of distance, and a hypothesis about how these two sources of information can be integrated to calibrate the estimation of distance. This model accounts for the biases found when visual and proprioceptive information is inconsistent.

4.
ABSTRACT

One important contribution of Carol Fowler's direct approach to speech perception is its account of multisensory perception. This supramodal account proposes a speech function that detects supramodal information available across audition, vision, and touch. This detection allows for the recovery of articulatory primitives that provide the basis of a common currency shared between modalities as well as between perception and production. Common currency allows for perceptual experience to be shared between modalities and supports perceptually guided speaking as well as production-guided perception. In this report, we discuss the contribution and status of the supramodal approach relative to recent research in multisensory speech perception. We argue that the approach has helped motivate a multisensory revolution in perceptual psychology. We then review the new behavioral and neurophysiological research on (a) supramodal information, (b) cross-sensory sharing of experience, and (c) perceptually guided speaking as well as production-guided speech perception. We conclude that Fowler's supramodal theory has fared quite well in light of this research.

5.
In multistable perception, the brain alternates between several perceptual explanations of ambiguous sensory signals. It is unknown whether multistable processes can interact across the senses. In the study reported here, we presented subjects with unisensory (visual or tactile), spatially congruent visuotactile, and spatially incongruent visuotactile apparent motion quartets. Congruent stimulation induced pronounced visuotactile interactions, as indicated by increased dominance times for both vision and touch, and an increased percentage bias for the percept already dominant under unisensory stimulation. Thus, the joint evidence from vision and touch stabilizes the more likely perceptual interpretation and thereby decelerates the rivalry dynamics. Yet the temporal dynamics also depended on subjects' attentional focus and were generally slower for tactile than for visual reports. Our results support Bayesian approaches to perceptual inference, in which the probability of a perceptual interpretation is determined by combining visual, tactile, or visuotactile evidence with modality-specific priors that depend on subjects' attentional focus. Critically, the specificity of visuotactile interactions for spatially congruent stimulation indicates multisensory rather than cognitive-bias mechanisms.
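The Bayesian account described in this abstract can be illustrated with a toy computation (all numbers below are hypothetical, not from the study): the posterior probability of each perceptual interpretation is proportional to the product of the sensory likelihood and a modality-specific prior, so congruent evidence from vision and touch sharpens the already-dominant percept.

```python
def posterior(likelihoods, priors):
    """p(interpretation | evidence) is proportional to likelihood x prior."""
    unnorm = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical numbers: congruent visual and tactile evidence both favour
# interpretation A; the prior could reflect the subject's attentional focus.
vis, tac, prior = [0.7, 0.3], [0.6, 0.4], [0.5, 0.5]
p_vision_only = posterior(vis, prior)
p_bimodal = posterior([v * t for v, t in zip(vis, tac)], prior)
# Joint evidence raises the probability of the already-dominant percept,
# consistent with the increased dominance times reported above.
```

Because the joint likelihood concentrates on the interpretation both senses favour, the rivalry between interpretations is slowed, which is the stabilizing effect the abstract describes.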

6.
Although sensory perception and neurobiology are traditionally investigated one modality at a time, real-world behaviour and perception are driven by the integration of information from multiple sensory sources. Mounting evidence suggests that the neural underpinnings of multisensory integration extend into early sensory processing. This article examines the notion that neocortical operations are essentially multisensory. We first review what is known about multisensory processing in higher-order association cortices and then discuss recent anatomical and physiological findings in presumptive unimodal sensory areas. The pervasiveness of multisensory influences on all levels of cortical processing compels us to reconsider thinking about neural processing in unisensory terms. Indeed, the multisensory nature of most, possibly all, of the neocortex forces us to abandon the notion that the senses ever operate independently during real-world cognition.

7.
It is tempting to think that one's perceptual evidence comprises just what issues from perceiving with each of the respective sensory modalities. However, empirical, rational, and phenomenological considerations show that one's perceptual evidence can outstrip what one possesses due to perceiving with each separate sense. Some novel perceptual evidence stems from the coordinated use of multiple senses. This paper argues that some perceptual evidence in this respect is distinctively multisensory.

8.
Color conveys critical information about the flavor of food and drink by providing clues as to edibility, flavor identity, and flavor intensity. Despite the fact that more than 100 published papers have investigated the influence of color on flavor perception in humans, surprisingly little research has considered how cognitive and contextual constraints may mediate color–flavor interactions. In this review, we argue that the discrepancies demonstrated in previously published color–flavor studies may, at least in part, reflect differences in the sensory expectations that different people generate as a result of their prior associative experiences. We propose that color–flavor interactions in flavor perception cannot be understood solely in terms of the principles of multisensory integration (the currently dominant theoretical framework) but that the role of higher-level cognitive factors, such as expectations, must also be considered.

9.
To understand the development of sensory processes, it is necessary not only to look at the maturation of each of the sensory systems in isolation, but also to study the development of the nervous system's capacity to integrate information across the different senses. It is through such multisensory integration that a coherent perceptual gestalt of the world comes to be generated. In the adult brain, multisensory convergence and integration take place at a number of brainstem and cortical sites, where individual neurons have been found that respond to multisensory stimuli with patterns of activation that depend on the nature of the stimulus complex and the intrinsic properties of the neuron. Parallels between the responses of these neurons and multisensory behavior and perception suggest that they are the substrates that underlie these cognitive processes. In both cat and monkey models, the development of these multisensory neurons and the appearance of their integrative capacity is a gradual postnatal process. For subcortical structures (i.e., the superior colliculus) this maturational process appears to be gated by the appearance of functional projections from regions of association cortex. The slow postnatal maturation of multisensory processes, coupled with its dependency on functional corticotectal connections, suggests that the development of multisensory integration may be tied to sensory experiences acquired during postnatal life. In support of this, eliminating experience in one sensory modality (i.e., vision) during postnatal development severely compromises the integration of multisensory cues. Research is ongoing to better elucidate the critical developmental antecedents for the emergence of normal multisensory capacity.
Edited by Marie-Hélène Giard and Mark Wallace. This revised version was published in May 2004 with corrections to Fig. 1.

10.
Stillman JA, Perception, 2002, 31(12): 1491–1500
On the face of it, basic tactile sensation might seem the only essential sensory requirement for the delivery of foods and beverages to the digestive system. In practice, however, the appropriate delivery of raw materials for the maintenance and repair of the body requires complex sensory and cognitive processes, such that flavour sensation arguably constitutes the pre-eminent example of an integrated multicomponent perceptual experience. To raise the profile of the chemical senses amongst researchers in other perceptual domains, I review here the contribution of various sense modalities to the flavour of foods and beverages. Further, in the light of these multisensory inputs, the physiological and psychophysical research summarised in this paper invites optimism that novel ways will be found to intervene when nutritional status is compromised either by specific dietary restraints, or by taste and smell disorders.

11.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.

12.
Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13–15 years of age, and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory as that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.
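The maximum-likelihood estimation (MLE) model referenced in this abstract makes a concrete quantitative prediction: the optimal bimodal estimate is an inverse-variance-weighted average of the unisensory estimates, and its variance is lower than that of either sense alone. A minimal sketch of that prediction (the function name and the numbers are illustrative, not taken from the study):

```python
import math

def mle_integrate(means, sigmas):
    """Inverse-variance (reliability-weighted) combination of unisensory
    estimates, as prescribed by the maximum-likelihood estimation model."""
    inv_vars = [1 / s**2 for s in sigmas]
    total = sum(inv_vars)
    weights = [iv / total for iv in inv_vars]
    combined_mean = sum(w * m for w, m in zip(weights, means))
    combined_sd = math.sqrt(1 / total)  # never exceeds the best unisensory sd
    return combined_mean, combined_sd

# Hypothetical object-size estimates (cm), with audition noisier than touch.
size, sd = mle_integrate(means=[10.0, 11.0], sigmas=[2.0, 1.0])
# The bimodal sd falls below the best unisensory sd (1.0): integrating the
# two cues reduces perceptual uncertainty, which is the signature of optimal
# integration the study tests for.
```

Developmentally, the empirical question is whether observed bimodal discrimination thresholds reach this predicted `combined_sd`; sub-optimal integrators show bimodal thresholds no better than their best single sense.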

13.
Multisensory Information in the Control of Complex Motor Actions
ABSTRACT— For many of the complex motor actions we perform, perceptual information is available from several different senses including vision, touch, hearing, and the vestibular system. Here I discuss the use of multisensory information for the control of motor action in three particular domains: aviation, sports, and driving. It is shown that performers in these domains use information from multiple senses—frequently with beneficial effects on performance but sometimes with dangerous consequences. Applied psychologists have taken advantage of our natural tendency to integrate sensory information by designing multimodal displays that compensate for situations in which information from one or more of our senses is unreliable or is unattended due to distraction.

14.
This paper addresses the nature of touch or ‘tactual perception’. I argue that touch encompasses a wide range of perceptual achievements, that treating it as a number of separate senses will not work, and that the permissive conception we are left with is so permissive that it is unclear how touch might be distinguished from the other senses. I conclude that no criteria will succeed in individuating touch. Although I do not rule out the possibility that this also applies to other senses, I suggest that the heterogeneity of touch makes it both distinctive and particularly problematic.

15.
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features that optimally explain the unisensory features arising in individual sensory modalities. The model qualitatively accounts for several important aspects of multisensory perception: (a) it integrates information from multiple sensory sources in such a way that it leads to superior performances in, for example, categorization tasks; (b) its performances suggest that multisensory training leads to better learning than unisensory training, even when testing is conducted in unisensory conditions; (c) its multisensory representations are modality invariant; and (d) it predicts "missing" sensory representations in modalities when the input to those modalities is absent. Our rational analysis indicates that all of these aspects emerge as part of the optimal solution to the problem of learning to represent complex multisensory environments.

16.
Attentional bottlenecks force animals to deeply process only a selected fraction of sensory inputs. This motivates a unifying central-peripheral dichotomy (CPD), which separates multisensory processing into functionally defined central and peripheral senses. Peripheral senses (e.g., human audition and peripheral vision) select a fraction of the sensory inputs by orienting animals’ attention; central senses (e.g., human foveal vision) allow animals to recognize the selected inputs. Originally used to understand human vision, CPD can be applied to multisensory processes across species. I first describe key characteristics of central and peripheral senses, such as the degree of top-down feedback and density of sensory receptors, and then show CPD as a framework to link ecological, behavioral, neurophysiological, and anatomical data and produce falsifiable predictions.

17.
We live in a world rich in sensory information, and consequently the brain is challenged with deciphering which cues from the various sensory modalities belong together. Determinations regarding the relatedness of sensory information appear to be based, at least in part, on the spatial and temporal relationships between the stimuli. Stimuli that are presented in close spatial and temporal correspondence are more likely to be associated with one another and thus 'bound' into a single perceptual entity. While there is a robust literature delineating behavioral changes in perception induced by multisensory stimuli, maturational changes in multisensory processing, particularly in the temporal realm, are poorly understood. The current study examines the developmental progression of multisensory temporal function by analyzing responses on an audiovisual simultaneity judgment task in 6- to 23-year-old participants. The overarching hypothesis for the study was that multisensory temporal function will mature with increasing age, with the developmental trajectory for this change being the primary point of inquiry. Results indeed reveal an age-dependent decrease in the size of the 'multisensory temporal binding window', the temporal interval within which multisensory stimuli are likely to be perceptually bound, with changes occurring over a surprisingly protracted time course that extends into adolescence.

18.
Crossmodal correspondences refer to the tendency to associate a pair of features across different senses; specifically, consumers can associate the color of packaging with a certain flavor label for packaged foods after repeated exposure to the packaging of mainstream, everyday products. We conducted two studies to examine how the incongruency between packaging color and flavor labeling influences consumers' evaluations of a food product and their perceptions of a brand. The results revealed that the participants liked a food product less when its packaging color was incongruent with its flavor label, but the magnitude of this color–flavor incongruency effect decreased after participants repeatedly searched for these products on the shelves of a virtual supermarket. Participants also considered the brand of packaged foods to be more innovative when the products' packaging colors were incongruent with flavor labels, and the magnitude of this color–flavor incongruency effect on brand perception was not influenced by their experience of searching for a product in virtual reality. Together, these results suggested that crossmodal congruency is an important factor to consider in packaging design and can be used as a marketing tool to increase product likability and attract consumers' attention.

19.
ABSTRACT— Although it is estimated that as many as 4% of people experience some form of enhanced cross talk between (or within) the senses, known as synaesthesia, very little is understood about the level of information processing required to induce a synaesthetic experience. In work presented here, we used a well-known multisensory illusion called the McGurk effect to show that synaesthesia is driven by late, perceptual processing, rather than early, unisensory processing. Specifically, we tested 9 linguistic-color synaesthetes and found that the colors induced by spoken words are related to what is perceived (i.e., the illusory combination of audio and visual inputs) and not to the auditory component alone. Our findings indicate that color-speech synaesthesia is triggered only when a significant amount of information processing has occurred and that early sensory activation is not directly linked to the synaesthetic experience.

20.
Previous research has shown that sounds facilitate perception of visual patterns appearing immediately after the sound but impair perception of patterns appearing after some delay. Here we examined the spatial gradient of the fast crossmodal facilitation effect and the slow inhibition effect in order to test whether they reflect separate mechanisms. We found that crossmodal facilitation is only observed at visual field locations overlapping with the sound, whereas crossmodal inhibition affects the whole hemifield. Furthermore, we tested whether multisensory perceptual learning with misaligned audio-visual stimuli reshapes crossmodal facilitation and inhibition. We found that training shifts crossmodal facilitation towards the trained location without changing its range. By contrast, training narrows the range of inhibition without shifting its position. Our results suggest that crossmodal facilitation and inhibition reflect separate mechanisms that can both be reshaped by multisensory experience even in adult humans. Multisensory links seem to be more plastic than previously thought.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号