Similar Articles
1.
Dyslexia has been associated with a problem in visual–audio integration mechanisms. Here, we investigate for the first time the contribution of unisensory cues to multisensory audio–visual integration in 32 dyslexic children by modelling the results with a Bayesian approach. Non-linguistic stimuli were used. Children performed a temporal task: they had to report whether the middle of three stimuli was closer in time to the first one or to the last one presented. Children with dyslexia, compared with typical children, exhibited poorer unimodal thresholds, requiring greater temporal separation between items for correct judgements, while their multisensory thresholds were well predicted by the Bayesian model. This result suggests that the multisensory deficit in dyslexia is due to impaired audio and visual inputs rather than to impaired multisensory processing per se. We also observed that poorer temporal skills correlated with lower reading skills in dyslexic children, suggesting that this temporal capability is linked to reading ability.
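The excerpt does not give the model's equations, but in the standard Bayesian (maximum-likelihood) account of cue combination the bimodal threshold is predicted directly from the unimodal ones. A minimal sketch of that prediction, with hypothetical threshold values:

```python
import numpy as np

def predicted_bimodal_threshold(t_a, t_v):
    """Under maximum-likelihood cue combination, thresholds scale with
    the standard deviation of each cue's estimate, so the predicted
    audio-visual threshold satisfies:
        t_av**2 = (t_a**2 * t_v**2) / (t_a**2 + t_v**2)
    The combined threshold is always below the better unimodal one.
    """
    return np.sqrt((t_a**2 * t_v**2) / (t_a**2 + t_v**2))

# Hypothetical unimodal temporal thresholds (ms), not data from the study:
print(predicted_bimodal_threshold(120.0, 150.0))  # ~93.7 ms
```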

2.
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain “know” which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

3.
Saccade generation involves a continuous selection between competing targets at different locations. This competition has mostly been investigated in the visual context, and it is well known that a visual distractor can interfere with a saccade toward a visual target. Here, we investigated whether multimodal, audio-visual targets confer stronger resilience against visual distraction. Saccades to audio-visual targets had shorter latencies than saccades to unisensory stimuli. This facilitation exceeded the level that could be explained by simple probability summation, indicating that multisensory integration had occurred. The magnitude of inhibition induced by a visual distractor was comparable for saccades to unisensory and multisensory targets, but the duration of the inhibition was shorter for multimodal targets. We conclude that multisensory integration can allow a saccade plan to be reestablished more rapidly following saccadic inhibition.
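The abstract does not specify the statistical test, but the usual benchmark for ruling out simple probability summation in redundant-target studies is the race-model inequality. A sketch of that comparison, assuming arrays of saccadic latencies per condition:

```python
import numpy as np

def race_model_bound(rt_a, rt_v, t):
    """Probability-summation (race-model) upper bound on the
    probability of a response by time t:
        P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t)
    """
    return min(1.0, np.mean(rt_a <= t) + np.mean(rt_v <= t))

def violates_race_model(rt_av, rt_a, rt_v):
    """True if the audio-visual latency distribution beats the bound at
    any tested quantile, implying genuine multisensory integration."""
    probe_times = np.quantile(np.concatenate([rt_av, rt_a, rt_v]),
                              np.arange(0.05, 1.0, 0.05))
    return any(np.mean(rt_av <= t) > race_model_bound(rt_a, rt_v, t)
               for t in probe_times)
```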

4.
In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial attention towards a location where to-be-remembered visual stimuli were or were not presented (cued/uncued trials, respectively). The results suggest that the effect of peripheral visual cues in biasing the access of information into VSWM depends on the size of the attentional focus, whereas auditory cues had no direct effect on VSWM. Finally, spatially congruent multisensory cues produced a larger attentional effect in VSWM than unimodal visual cues, likely as a consequence of multisensory integration. This latter result sheds new light on the interplay between spatial attention and VSWM, pointing to the special role played by multisensory (audiovisual) cues.

5.
Tolhurst DJ, Tadmor Y. Perception, 2000, 29(9): 1087-1100
We have developed a protocol for testing experimentally the hypothesis that the human visual system is optimised for making visual discriminations amongst natural scenes. Visual stimuli were made by gradual blending of the Fourier spectra of digitised photographs of natural scenes. The statistics of the stimuli were made unnatural to varying degrees by changing the overall slopes of the amplitude spectra of the stimuli. Thresholds were measured for discriminating small amounts of spectral blending at different spectral slopes. We found that thresholds were lowest when the spectral slope was natural; thresholds increased when the slopes were either shallower or steeper than natural. A number of spurious cues were considered, such as differences in mean luminance, overall spectral power, or contrast between test and reference stimuli. Control experiments were performed to remove such spurious cues, and the discrimination thresholds were still lowest for the stimuli that were most natural. Thus, these experiments provide experimental support for the idea that the human visual system is optimised for processing natural visual information.
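Natural scenes have amplitude spectra that fall off with spatial frequency roughly as 1/f. The kind of slope manipulation described above can be sketched as follows (the function and its details are illustrative, not the authors' code): multiply the amplitude spectrum by a power of frequency while leaving the phase spectrum intact.

```python
import numpy as np

def shift_spectral_slope(image, d_alpha):
    """Steepen (d_alpha > 0) or flatten (d_alpha < 0) the overall slope
    of an image's amplitude spectrum by multiplying it by f**(-d_alpha),
    keeping the phase spectrum unchanged. d_alpha = 0 returns the
    original (roughly 1/f, 'natural') statistics.
    """
    spectrum = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0  # leave the DC term (mean luminance) untouched
    return np.real(np.fft.ifft2(spectrum * f ** (-d_alpha)))
```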

6.
Multisensory tools are commonly employed within educational settings (e.g. Carter & Stephenson, 2012), and there is a growing body of literature advocating the benefits of presenting children with multisensory rather than unisensory information for learning (Baker & Jordan, 2015; Jordan & Baker, 2011). This is the case even when the informative cues are only arbitrarily related (Broadbent, White, Mareschal, & Kirkham, 2017). However, the delayed retention of learning following exposure to multisensory compared with unisensory cues has not been evaluated, and it has important implications for the utility of multisensory educational tools. This study examined the retention of incidental categorical learning in 5-, 7- and 9-year-olds (N = 181) using either unisensory or multisensory cues. Retention of learning was significantly greater following multisensory cue exposure than with unisensory information when category knowledge was tested after a 24-hour delay. No age-related changes were found, suggesting that multisensory information can facilitate the retention of learning across this age range.

7.
This study explores the influence of bilingualism on the cognitive processing of language and music. Specifically, we investigate how infants learning a non-tone language perceive linguistic and musical pitch, and how bilingualism affects cross-domain pitch perception. Dutch monolingual and bilingual infants of 8–9 months participated in the study. All infants had Dutch as one of their first languages; the other first languages, which varied among bilingual families, were neither tone nor pitch-accent languages. In two experiments, infants were tested on the discrimination of a lexical (N = 42) or a violin (N = 48) pitch contrast via a visual habituation paradigm. The two contrasts shared identical pitch contours but differed in timbre. Non-tone-language-learning infants did not discriminate the lexical contrast regardless of their ambient language environment. When perceiving the violin contrast, bilingual but not monolingual infants demonstrated robust discrimination. We attribute bilingual infants' heightened sensitivity in the musical domain to the enhanced acoustic sensitivity that stems from a bilingual environment. The distinct perceptual patterns between language and music, and the influence of acoustic salience on perception, suggest both divergence and association in processing during the first year of life. The results indicate that the perception of music may engage both a neural network shared with language processing and a unique network distinct from other cognitive functions.

8.
The current study addressed whether audiovisual (AV) speech can improve speech perception in older and younger adults in a noisy environment. Event-related potentials (ERPs) were recorded to investigate age-related differences in the processes underlying AV speech perception. Participants performed an object categorization task in three conditions: auditory-only (A), visual-only (V), and AV speech. Both age groups showed an equivalent behavioral AV speech benefit over unisensory trials. ERP analyses revealed an amplitude reduction of the auditory P1 and N1 on AV speech trials relative to the summed unisensory (A + V) response in both age groups. These amplitude reductions are interpreted as an indication of multisensory efficiency, as fewer neural resources were recruited to achieve better performance. Of interest, the observed P1 amplitude reduction was larger in older adults. Younger and older adults also showed an earlier auditory N1 on AV speech relative to A and A + V trials, an effect that was again greater in the older adults. The degree of multisensory latency shift was predicted by basic auditory functioning (i.e., higher hearing thresholds were associated with larger latency shifts) in both age groups. Together, the results show not only that AV speech processing is intact in older adults, but that the facilitation of neural responses occurs earlier and to a greater extent in older than in younger adults. Thus, older adults appear to benefit more from additional visual speech cues than younger adults do, possibly to compensate for unisensory inputs impoverished by sensory aging.
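The A + V comparison above is the standard additive model for ERP studies of multisensory integration: the audiovisual response is compared with the sum of the unisensory responses, and mean amplitude is measured in component latency windows. A minimal sketch (array shapes and windows are assumptions, not the study's parameters):

```python
import numpy as np

def additive_model_difference(erp_av, erp_a, erp_v):
    """Difference wave AV - (A + V); negative values in the P1/N1
    windows correspond to the amplitude reductions reported above.
    ERPs: arrays of shape (n_channels, n_timepoints)."""
    return erp_av - (erp_a + erp_v)

def mean_amplitude(erp, times, window):
    """Mean amplitude within a latency window in seconds,
    e.g. roughly (0.04, 0.08) for P1 or (0.08, 0.14) for N1."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[:, mask].mean(axis=1)
```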

9.
This study presents the first evidence that preschool children perform more accurately in a numerical matching task when given multisensory rather than unisensory information about number. Three- to 5-year-old children learned to play a numerical matching game on a touchscreen computer, which asked them to match a sample numerosity with a numerically equivalent choice numerosity. Samples consisted of a series of visual squares on some trials, a series of auditory tones on other trials, and synchronized squares and tones on still other trials. Children performed at chance on this matching task when provided with either type of unisensory sample, but improved significantly when provided with multisensory samples. There was no speed–accuracy tradeoff between unisensory and multisensory trial types. Thus, these findings suggest that intersensory redundancy may improve young children’s abilities to match numerosities.

10.
The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch–brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

11.
Brain and Cognition, 2014, 84(3): 271-278
Pitch is derived by the auditory system through complex spectrotemporal processing. Pitch extraction is thought to depend both on spectral cues arising from lower harmonics that are resolved by cochlear filters in the inner ear, and on temporal cues arising from the pattern of action potentials contained in the cochlear output. Adults are capable of extracting pitch in the absence of robust spectral cues, taking advantage of the temporal cues that remain. However, recent behavioral evidence suggests that infants have difficulty discriminating between stimuli with different pitches when resolvable spectral cues are absent. In the current experiments, we used the mismatch negativity (MMN) component of the event-related potential, derived from electroencephalographic (EEG) recordings, to examine a cortical representation of pitch discrimination for iterated rippled noise (IRN) stimuli in 4- and 8-month-old infants. IRN stimuli are pitch-evoking sounds generated by repeatedly adding a segment of white noise to itself at a constant delay. We created IRN stimuli (delays of 5 and 6 ms, creating pitch percepts of 200 and 167 Hz) and high-pass filtered them to remove all resolvable spectral pitch cues. In Experiment 1, we did not find EEG evidence that infants could detect the change in the pitch of these IRN stimuli. However, in Experiment 2, after a brief period of pitch-priming during which we added a sine wave component to the IRN stimulus at its perceived pitch, infants did show a significant MMN in response to pitch changes in the IRN stimuli with the sine waves removed. This suggests (1) that infants can use temporal cues to process pitch, although such processing is not mature, and (2) that a short amount of pitch-priming experience can alter pitch representations in auditory cortex during infancy.
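The IRN generation procedure lends itself to a compact sketch: delay a noise segment by a fixed interval and add it back, iterating. The iteration count, gain, and filter cutoff below are illustrative assumptions, not the study's parameters; the 5-ms delay matches the 200-Hz pitch percept described above (pitch ≈ 1/delay).

```python
import numpy as np
from scipy.signal import butter, sosfilt

def iterated_rippled_noise(duration, delay, n_iter=16, gain=1.0, fs=44100):
    """Iterated rippled noise: white noise is repeatedly delayed by
    `delay` seconds and added to itself, producing a pitch percept at
    ~1/delay Hz (0.005 s -> 200 Hz, 0.006 s -> ~167 Hz)."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(int(duration * fs))
    d = int(round(delay * fs))
    for _ in range(n_iter):
        x[d:] += gain * x[:-d].copy()     # delay-and-add stage
        x /= np.max(np.abs(x))            # keep the amplitude bounded
    return x

# High-pass filtering removes resolved spectral cues, leaving only the
# temporal pitch cues (the 2-kHz cutoff here is an assumption).
sos = butter(4, 2000, btype="highpass", fs=44100, output="sos")
stimulus = sosfilt(sos, iterated_rippled_noise(1.0, 0.005))
```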

12.
康冠兰, 罗霄骁. 心理科学 (Journal of Psychological Science), 2020, (5): 1072-1078
Cross-modal information interaction refers to the set of processes by which information from one sensory modality interacts with, and influences, information from another sensory modality. It mainly comprises two aspects: how inputs from different sensory modalities are integrated, and how conflicts between cross-modal signals are controlled. This paper reviews the behavioral and neural mechanisms of audiovisual cross-modal integration and conflict control, and discusses the influence of attention on both. Future work should investigate the brain-network mechanisms of audiovisual cross-modal processing, and examine cross-modal integration and conflict control in special populations to help reveal the mechanisms underlying their cognitive and social dysfunction.

13.
Gallace A, Auvray M, Spence C. Perception, 2007, 36(7): 1003-1018
Research has shown that a variety of different sensory manipulations, including visual illusions, transcutaneous nerve stimulation, vestibular caloric stimulation, optokinetic stimulation, and prism adaptation, can all influence people's performance on spatial tasks such as line bisection. It has been suggested that these manipulations may act upon the 'higher-order' levels of representation used to code spatial information. We investigated whether we could influence haptic line bisection in normal participants crossmodally by varying the visual background that participants viewed. In Experiment 1, participants haptically bisected wooden rods while looking at a variant of the Oppel–Kundt visual illusion. Haptic bisection judgments were influenced by the orientation of the visual illusion (in line with previous unimodal visual findings). In Experiment 2, haptic bisection judgments were also influenced by the presence of a leftward- or rightward-moving visual background. In Experiments 3 and 4, the position of the to-be-bisected stimuli was varied with respect to the participant's body midline. The results confirmed an effect of optokinetic stimulation, but not of the Oppel–Kundt illusion, on participants' tactile bisection errors, suggesting that the two manipulations might affect haptic processing differently. Taken together, these results suggest that the 'higher-order' levels of spatial representation upon which perceptual judgments and/or motor responses are based may have multisensory or amodal characteristics.

14.
How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory features that optimally explain the unisensory features arising in individual sensory modalities. The model qualitatively accounts for several important aspects of multisensory perception: (a) it integrates information from multiple sensory sources in a way that leads to superior performance in, for example, categorization tasks; (b) its performance suggests that multisensory training leads to better learning than unisensory training, even when testing is conducted in unisensory conditions; (c) its multisensory representations are modality invariant; and (d) it predicts "missing" sensory representations in modalities when the input to those modalities is absent. Our rational analysis indicates that all of these aspects emerge as part of the optimal solution to the problem of learning to represent complex multisensory environments.
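The paper's Bayesian nonparametric model is beyond the scope of a short excerpt, but property (d), predicting a missing modality from a shared latent representation, can be illustrated with a deliberately simplified linear-Gaussian stand-in (all names and dimensions below are toy assumptions, not the paper's model):

```python
import numpy as np

# Toy stand-in for a latent multisensory representation: both
# modalities are noisy linear readouts of the same latent features z.
rng = np.random.default_rng(1)
K, Da, Dv, N = 3, 8, 10, 500
W_a = rng.standard_normal((Da, K))   # auditory loadings
W_v = rng.standard_normal((Dv, K))   # visual loadings
z = rng.standard_normal((N, K))      # shared latent features
x_a = z @ W_a.T + 0.1 * rng.standard_normal((N, Da))
x_v = z @ W_v.T + 0.1 * rng.standard_normal((N, Dv))

# Infer z from audition alone (least squares), then 'predict' the
# missing visual representation -- property (d) in the abstract.
z_hat = np.linalg.lstsq(W_a, x_a.T, rcond=None)[0].T
x_v_pred = z_hat @ W_v.T
print(np.corrcoef(x_v.ravel(), x_v_pred.ravel())[0, 1])  # close to 1
```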

15.
To interact successfully with a rich and ambiguous visual environment, the human brain learns to differentiate visual stimuli and to produce the same response to subsets of these stimuli despite their physical differences. Although this visual categorization function is traditionally investigated from a unisensory perspective, its early development is inherently constrained by multisensory inputs. In particular, an early-maturing sensory system such as olfaction is ideally suited to support the immature visual system in infancy by providing stability and familiarity in a rapidly changing visual environment. Here, we test the hypothesis that rapid visual categorization of human faces, highly salient visual signals for the young infant brain, is shaped by another highly relevant human-related input from the olfactory system, the mother's body odor. We observe that a right-hemispheric neural signature of single-glance face categorization from natural images is significantly enhanced in the maternal versus a control odor context in individual 4-month-old infant brains. The lack of a difference between odor conditions for the common brain response elicited by both face and non-face images rules out a mere enhancement of arousal or visual attention in the maternal odor context. These observations show that face-selective neural activity in infancy is mediated by the presence of a (maternal) body odor, providing strong support for multisensory inputs driving category acquisition in the developing human brain, with important implications for our understanding of human perceptual development.

16.
Benefits of multisensory learning
Studies of learning, and in particular perceptual learning, have focused on learning of stimuli consisting of a single sensory modality. However, our experience in the world involves constant multisensory stimulation. For instance, visual and auditory information are integrated in performing many tasks that involve localizing and tracking moving objects. Therefore, it is likely that the human brain has evolved to develop, learn and operate optimally in multisensory environments. We suggest that training protocols that employ unisensory stimulus regimes do not engage multisensory learning mechanisms and, therefore, might not be optimal for learning. However, multisensory-training protocols can better approximate natural settings and are more effective for learning.

17.
Multisensory integration is a process whereby information converges from different sensory modalities to produce a response that is different from those elicited by the individual modalities presented alone. A neural basis for multisensory integration has been identified within a variety of brain regions, but the most thoroughly examined model has been that of the superior colliculus (SC). Multisensory processing in the SC of anaesthetized animals has been shown to depend on the physical parameters of the individual stimuli presented (e.g., intensity, direction, velocity) as well as their spatial relationship. However, it is unknown whether these stimulus features are important, or evident, in the awake behaving animal. To address this question, we evaluated the influence of the physical properties of sensory stimuli (visual intensity, direction, and velocity; auditory intensity and location) on sensory activity and multisensory integration of SC neurons in awake, behaving primates. Monkeys were trained to fixate a central visual fixation point while visual and/or auditory stimuli were presented in the periphery. Visual stimuli were always presented within the contralateral receptive field of the neuron, whereas auditory stimuli were presented at either ipsi- or contralateral locations. Many of the SC neurons responsive to these sensory stimuli (n = 66/84; 76%) had stronger responses when the visual and auditory stimuli were combined at contralateral locations than when the auditory stimulus was located on the ipsilateral side. This trend was significant across the population of auditory-responsive neurons. In addition, some SC neurons (n = 31) were presented with a battery of tests in which the quality of one stimulus of a pair was systematically manipulated. A small proportion of these neurons (n = 8/31; 26%) showed preferential responses to stimuli with specific physical properties, and these preferences were not significantly altered when multisensory stimulus combinations were presented. These data demonstrate that multisensory processing in the awake behaving primate is influenced by the spatial congruency of the stimuli as well as by their individual physical properties.
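The abstract's definition of multisensory integration, a combined response differing from the unisensory ones, is conventionally quantified in the SC literature with an enhancement index; the excerpt does not state this paper's exact metric, so the following is a generic sketch:

```python
def multisensory_enhancement(resp_av, resp_a, resp_v):
    """Conventional enhancement index from the SC literature:
        ME = 100 * (CM - SM_max) / SM_max
    where CM is the response to the combined stimulus and SM_max is
    the larger of the two unisensory responses. Positive values
    indicate enhancement, negative values indicate depression.
    """
    sm_max = max(resp_a, resp_v)
    return 100.0 * (resp_av - sm_max) / sm_max

# Hypothetical firing rates (spikes/s), purely for illustration:
print(multisensory_enhancement(24.0, 10.0, 8.0))  # +140.0 (% enhancement)
```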

18.
Individuals with developmental dyslexia (DD) may experience other speech-related processing deficits in addition to reading problems. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to benefit from lip-read information that disambiguates noise-masked speech, and we show in a separate group of adults with DD that these deficits persist into adulthood. The deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.

19.
Audio-visual simultaneity judgments
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous-versus-successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions, using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). Participants in all three experiments were more likely to report the stimuli as simultaneous when the stimuli originated from the same spatial position than when they came from different positions, demonstrating that the perception of multisensory simultaneity depends on the relative spatial positions from which stimuli are presented.
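For the staircase procedure mentioned for Experiment 3, a minimal 1-up/1-down sketch over stimulus onset asynchrony is shown below; the start value, step size, and stopping rule are illustrative assumptions, not the study's settings:

```python
def staircase_soa(respond, start=300.0, step=30.0, n_reversals=8):
    """Minimal 1-up/1-down staircase over stimulus onset asynchrony
    (SOA, ms). `respond(soa)` returns True if the participant judged
    the audio-visual pair 'successive'. The staircase converges on the
    SOA at which 'successive' and 'simultaneous' responses are equally
    likely, estimated as the mean of the reversal points."""
    soa, reversals, last_dir = start, [], None
    while len(reversals) < n_reversals:
        successive = respond(soa)
        direction = -1 if successive else +1  # successive -> shrink the SOA
        if last_dir is not None and direction != last_dir:
            reversals.append(soa)             # record a reversal point
        last_dir = direction
        soa = max(0.0, soa + direction * step)
    return sum(reversals) / len(reversals)
```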
