Similar Documents
20 similar documents found.
1.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
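Tracking transitional probabilities of this kind can be stated concretely as estimating TP(b | a) = count(a→b) / count(a) over a familiarization stream. A minimal sketch (the stream, triplet structure, and function name below are invented for illustration, not taken from the study):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Estimate TP(b | a) = count(a, b) / count(a) from a list of elements."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy familiarization stream built from two "triplets", ABC and DEF:
stream = list("ABCDEFABCABCDEF")
tps = transitional_probabilities(stream)
print(tps[("A", "B")])  # within-triplet transition: 1.0
print(tps[("C", "D")])  # between-triplet transition: ~0.67, a learnable dip
```

A learner sensitive to these statistics can exploit exactly this contrast between high within-unit and lower between-unit transitional probabilities.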

2.
Multisensory information has been shown to modulate attention in infants and facilitate learning in adults, by enhancing the amodal properties of a stimulus. However, it remains unclear whether this translates to learning in a multisensory environment across middle childhood, and particularly in the case of incidental learning. One hundred and eighty‐one children aged between 6 and 10 years participated in this study using a novel Multisensory Attention Learning Task (MALT). Participants were asked to respond to the presence of a target stimulus whilst ignoring distractors. Correct target selection resulted in the movement of the target exemplar to either the upper left or right screen quadrant, according to category membership. Category membership was defined either by visual‐only, auditory‐only or multisensory information. As early as 6 years of age, children demonstrated greater performance on the incidental categorization task following exposure to multisensory audiovisual cues compared to unisensory information. These findings provide important insight into the use of multisensory information in learning, and particularly on incidental category learning. Implications for the deployment of multisensory learning tasks within education across development will be discussed.

3.
Immediate serial recall of visually presented verbal stimuli is impaired by the presence of irrelevant auditory background speech, the so-called irrelevant speech effect. Two of the three main accounts of this effect place restrictions on when it will be observed, limiting its occurrence either to items processed by the phonological loop (the phonological loop hypothesis) or to items that are not too dissimilar from the irrelevant speech (the feature model). A third, the object-oriented episodic record (O-OER) model, requires only that the memory task involves seriation. The present studies test these three accounts by examining whether irrelevant auditory speech will interfere with a task that does not involve the phonological loop, does not use stimuli that are compatible with those to be remembered, but does require seriation. Two experiments found that irrelevant speech led to lower levels of performance in a visual statistical learning task, offering more support for the O-OER model and posing a challenge for the other two accounts.

4.
To determine the relationship between individual exponents for cross-modal stimulus matches in both directions of matching, and to assess the transitivity of individual exponents, subjects were asked to adjust both the judgment and criterion stimuli. Experiment 1 involved four continua paired in 12 ordered combinations; Experiment 2 involved five continua and 20 ordered combinations. Two subjects served in each experiment. The individual exponent of the power function for matches of A to B was close to the inverse of the exponent for matches of B to A, but there was systematic deviation, indicating the presence of a small regression effect. Transitivity of individual exponents was determined by forming ratios of exponents from matches involving other continuum pairs to predict obtained exponents; the mean of the distribution of deviations of obtained from predicted values was −.02 log units, and the standard deviation was .15 log units, indicating that, on average, predicted values were close to obtained exponents—that is, transitivity of exponents holds for individuals.
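The inverse and transitivity predictions follow directly from Stevens' power law; a sketch of the standard derivation (the notation is mine, not the paper's):

```latex
% Stevens' power law for continuum $X$: $\psi_X = k_X \varphi_X^{\,n_X}$.
% A cross-modal match sets $\psi_A = \psi_B$, so matches of A to B follow a
% power function of the criterion stimulus with exponent
e_{AB} = \frac{n_A}{n_B}
% from which the two properties tested above follow at once:
e_{BA} = \frac{n_B}{n_A} = \frac{1}{e_{AB}},
\qquad
e_{AB}\, e_{BC} = \frac{n_A}{n_B}\cdot\frac{n_B}{n_C} = \frac{n_A}{n_C} = e_{AC}.
```

The reported regression effect shows up in this notation as the product e_AB · e_BA deviating systematically from 1.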

5.
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable‐level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14‐month‐olds’ abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real‐world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants’ abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages.
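Segmentation by transitional probability is commonly operationalized as positing word boundaries where the forward TP dips. A toy sketch under invented assumptions (the syllables, word inventory, and threshold are illustrative, not the study's stimuli):

```python
from collections import Counter

def transition_probs(seq):
    """TP(b | a) = count(a, b) / count(a), estimated from the stream."""
    pairs, firsts = Counter(zip(seq, seq[1:])), Counter(seq[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

def segment_by_tp_dips(syllables, tps, threshold):
    """Insert a word boundary wherever the forward TP falls below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Two invented trisyllabic "words" concatenated in varying order:
golabu, timera = ["go", "la", "bu"], ["ti", "me", "ra"]
order = [golabu, timera, golabu, golabu, timera, timera, golabu, timera]
stream = [syl for word in order for syl in word]
tps = transition_probs(stream)
# Within-word TPs are 1.0; TPs across word boundaries are at most 0.75 here,
# so a threshold between those levels recovers the words:
print(segment_by_tp_dips(stream[:6], tps, threshold=0.9))  # ['golabu', 'timera']
```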

6.
7.
Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames model (1988), where familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that learning is weaker in the visual modality than in the auditory modality at this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased while visual SL did not change across this age range. The results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, we find that auditory SL precedes the development of visual SL, consistent with recent work comparing SL across modalities in older children.

8.
Attentional bottlenecks force animals to deeply process only a selected fraction of sensory inputs. This motivates a unifying central-peripheral dichotomy (CPD), which separates multisensory processing into functionally defined central and peripheral senses. Peripheral senses (e.g., human audition and peripheral vision) select a fraction of the sensory inputs by orienting animals’ attention; central senses (e.g., human foveal vision) allow animals to recognize the selected inputs. Originally used to understand human vision, CPD can be applied to multisensory processes across species. I first describe key characteristics of central and peripheral senses, such as the degree of top-down feedback and density of sensory receptors, and then show CPD as a framework to link ecological, behavioral, neurophysiological, and anatomical data and produce falsifiable predictions.

9.
A central issue in sequence learning is whether learning operates on stimulus-independent abstract elements, or whether surface features are integrated, resulting in stimulus-dependent learning. Using the serial reaction-time (SRT) task, we test whether a previously presented sequence is transferrable from one domain to another. Contrary to previous artificial grammar learning studies, there is a mapping between pre- and post-transfer stimuli, but, contrary to previous SRT studies, the mapping is not obvious. In the pre-transfer training phase, participants perform a dot-counting task in which the location of the dots follows a predefined sequence. In the test phase, participants perform an auditory SRT task in which the spatial organization of the response locations is either the same as the spatial sequence in the training phase, or not. Sequence learning is compared to two control conditions: one with non-sequential random-dot counting in the training phase, and one with no training phase. Results show that sequential training proactively interferes with later sequence learning, regardless of whether the sequence is the same or different in the two phases. The results argue for the existence of a general sequence processor with limited capacity, and suggest that sequence structures and sequenced elements are integrated into a single sequential representation.

10.
11.
It is common to judge the duration of an audiovisual event, and yet it remains controversial how the judgment of duration is affected by signals from other modalities. We used an oddball paradigm to examine the effect of sound on the judgment of visual duration and that of a visual object on the judgment of an auditory duration. In a series of standards and oddballs, the participants compared the duration of the oddballs to that of the standards. Results showed asymmetric cross-modal effects, supporting the auditory dominance hypothesis: a sound extends the perceived visual duration, whereas a visual object has no effect on perceived auditory duration. The possible mechanisms (pacemaker or mode switch) proposed in the Scalar Expectancy Theory [Gibbon, J., Church, R. M., & Meck, W. H. (1984). Scalar timing in memory. In J. Gibbon & L. Allan (Eds.), Annals of the New York Academy of Sciences: Vol. 423. Timing and time perception (pp. 52–77). New York: New York Academy of Sciences] were examined using different standard durations. We conclude that sound increases the perceived visual duration by accelerating the pulse rate in the visual pacemaker.
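On the pacemaker account, perceived duration is the accumulated pulse count read out against a calibrated baseline rate, so a faster pacemaker lengthens the percept multiplicatively, an effect that grows with the standard duration. A minimal sketch with invented parameter values (not fitted to the study):

```python
# Toy pacemaker-accumulator readout in the spirit of Scalar Expectancy Theory.
def perceived_duration_ms(true_ms, pulse_rate_hz, calibration_hz=50.0):
    """Accumulate pulses at pulse_rate_hz, read them out at the baseline rate."""
    pulses = pulse_rate_hz * (true_ms / 1000.0)
    return 1000.0 * pulses / calibration_hz

visual_only = perceived_duration_ms(500, pulse_rate_hz=50.0)  # baseline pacemaker
with_sound = perceived_duration_ms(500, pulse_rate_hz=55.0)   # sound speeds it up
print(visual_only, with_sound)  # 500.0 vs 550.0 ms: the stretch scales with duration
```

A mode-switch (latency) mechanism would instead predict a constant additive shift independent of the standard duration, which is what testing different standard durations can distinguish.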

12.
Five subjects matched pairs of auditory and vibrotactile stimuli on intensity, making judgments of suprathreshold magnitudes. Slope values were steeper when the cross-modal lingual vibrotactile standard stimulus was at 100 Hz than at 250 or 400 Hz. Slope values also became steeper at about 25- to 30-dB SL, so frequency appears to be an important parameter to control in such research.

13.
According to embodied cognition, bodily interactions with our environment shape the perception and representation of our body and the surrounding space, that is, peripersonal space. To investigate the adaptive nature of these spatial representations, we introduced a multisensory conflict between vision and proprioception in an immersive virtual reality. During individual bimanual interaction trials, we gradually shifted the visual hand representation. As a result, participants unknowingly shifted their actual hands to compensate for the visual shift. We then measured the adaptation to the invoked multisensory conflict by means of a self-localization and an external localization task. While effects of the conflict were observed in both tasks, the effects systematically interacted with the type of localization task and the available visual information while performing the localization task (i.e., the visibility of the virtual hands). The results imply that the localization of one’s own hands is based on a multisensory integration process, which is modulated by the saliency of the currently most relevant sensory modality and the involved frame of reference. Moreover, the results suggest that our brain strives for consistency between its body and spatial estimates, thereby adapting multiple, related frames of reference, and the spatial estimates within, due to a sensory conflict in one of them.

14.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning, as well as symmetrically across modalities via crossmodal learning, to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights for a weighted combination of auditory and visual spatial target directional cues, which is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arise from a relatively narrow receptive field, while auditory directional cues are noisy and intermittent but arise from a relatively wide receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the accuracy and the precision of the orientation responses produced by symmetric crossmodal learning alone. We also demonstrate that symmetric crossmodal learning improves multisensory responses compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
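The core idea of learning a reliability-sensitive weighted combination of cues can be illustrated compactly. The sketch below is not the paper's Hebbian circuit: it substitutes a plain delta rule and assumes the true target direction is available as an error signal during training, with all noise levels invented:

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])  # [auditory, visual] weights, updated sample by sample
eta = 0.01                # learning rate (illustrative)

for _ in range(5000):
    target = rng.uniform(-1.0, 1.0)          # true target direction
    cues = np.array([
        target + rng.normal(0.0, 0.30),      # auditory: noisy, wide receptive field
        target + rng.normal(0.0, 0.05),      # visual: precise, narrow receptive field
    ])
    estimate = w @ cues                      # fused direction -> wheel velocities
    w += eta * (target - estimate) * cues    # delta rule: reduce orientation error

print(w.round(2))  # the more reliable visual cue ends up with the larger weight
```

Under these assumptions the weights converge towards the least-squares solution, which down-weights the noisier auditory cue, mirroring the qualitative outcome the circuit is designed to learn online.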

15.
A recent report in Consciousness and Cognition provided evidence from a study of the rubber hand illusion (RHI) that supports the multisensory principle of inverse effectiveness (PoIE). I describe two methods of assessing the principle of inverse effectiveness (‘a priori’ and ‘post-hoc’), and discuss how the post-hoc method is affected by the statistical artefact of ‘regression towards the mean’. I identify several cases where this artefact may have affected particular conclusions about the PoIE, and relate these to the historical origins of ‘regression towards the mean’. Although the conclusions of the recent report may not have been grossly affected, some of the inferential statistics were almost certainly biased by the methods used. I conclude that, unless such artefacts are fully dealt with in the future, and unless the statistical methods for assessing the PoIE evolve, strong evidence in support of the PoIE will remain lacking.
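Regression towards the mean can be made concrete with a short simulation (entirely invented data, not the report's): when a noisy baseline is subtracted from an independent noisy remeasurement, units with low measured baselines mechanically show the largest apparent gains, even with no true multisensory benefit at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(0.0, 1.0, n)          # each unit's true responsiveness
uni = ability + rng.normal(0.0, 1.0, n)    # noisy unisensory measurement
multi = ability + rng.normal(0.0, 1.0, n)  # noisy multisensory measurement,
                                           # with NO true multisensory gain built in
gain = multi - uni
print(np.corrcoef(uni, gain)[0, 1].round(2))  # ~ -0.50: spurious 'inverse
                                              # effectiveness' from noise alone
```

This is the shape of the post-hoc artefact: correlating gain with the measured baseline guarantees that extreme scores regress back towards the mean on remeasurement.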

16.
This study examined implicit learning in a cross-modal condition, where visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The effect of implicit learning was extracted by fitting five different models to the data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p < 0.004). The learning rates over time differed both between modalities and between stimuli within modalities, although there was no correlation with global error rates or with reaction time differences between the stimulus types. These results demonstrate a modeling method that is well suited to extracting detailed information about the success of implicit learning from highly variable data. They further show a cross-modal implicit learning effect, which extends our understanding of the implicit learning system and highlights the possibility that information can be processed in a cross-modal representation without conscious processing.
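Akaike weights convert a set of AIC values into relative model probabilities via w_i = exp(−Δ_i/2) / Σ_j exp(−Δ_j/2), with Δ_i = AIC_i − min AIC. A small sketch (the five AIC values are invented stand-ins, not the study's fits):

```python
import numpy as np

def akaike_weights(aics):
    """Relative likelihoods exp(-delta/2), normalized over the candidate set."""
    aics = np.asarray(aics, dtype=float)
    rel = np.exp(-(aics - aics.min()) / 2.0)
    return rel / rel.sum()

print(akaike_weights([1012.3, 1008.1, 1004.2, 1009.7, 1011.0]).round(3))
# -> the lowest-AIC model carries most of the weight, analogous to the
#    winning model's 0.87 reported above
```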

17.
The brain can process and integrate information from different sensory modalities. Compared with a single sensory modality, individuals respond faster to target signals presented simultaneously in different sensory modalities. A major theoretical explanation for this phenomenon is the coactivation model, which holds that stimuli from different modalities converge and are integrated in specific brain regions, such as the intraparietal sulcus, the superior temporal sulcus, and prefrontal cortical areas. The integrated signal is stronger and can trigger a response more quickly, but it remains unclear at which stage of cognitive processing this integration occurs. When individuals process task switches occurring across sensory modalities, the cost of a modality-related task switch is smaller than the sum of the cross-modal switch cost and the task switch cost, providing evidence that modality-related switch costs stem from inertia of, and interference between, task sets. When switching between unimodal and multimodal conditions, the cross-modal switch cost decreases or even disappears, because concurrent multisensory integration offsets part of the cost; this phenomenon supports the coactivation model. However, how multisensory integration affects the neural processing of task switching remains unclear. Future research could combine multisensory integration paradigms with classic task-switching paradigms to determine the processing mechanism of cross-modal switching and the stage at which multisensory integration occurs.
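The sub-additivity finding can be written out explicitly; the cost definitions below are standard, and the millisecond values are invented for illustration:

```latex
% Switch costs defined from mean reaction times (RT):
%   C_{task} = RT(\text{task switch, modality repeat}) - RT(\text{both repeat})
%   C_{mod}  = RT(\text{modality switch, task repeat}) - RT(\text{both repeat})
%   C_{both} = RT(\text{task and modality switch})     - RT(\text{both repeat})
% The finding described above is sub-additivity of the combined cost:
C_{both} < C_{task} + C_{mod}
% e.g. with invented costs $C_{task} = 80$ ms and $C_{mod} = 60$ ms, an observed
% $C_{both} = 100$ ms $< 140$ ms would indicate that concurrent multisensory
% integration offsets part of the combined cost.
```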

18.
The purpose of the present study was to investigate possible effects of exposure upon suprathreshold psychological responses when auditory magnitude estimation and cross-modal matching with audition as the standard are conducted within the same experiment. Four groups of 10 subjects each, aged 18 to 23 years, were employed. During the cross-modal matching task, the subjects in Groups 1 and 2 adjusted a vibrotactile stimulus presented to the dorsal surface of the tongue, and the subjects in Groups 3 and 4 adjusted a vibrotactile stimulus presented to the thenar eminence of the right hand, to match binaurally presented auditory stimuli. The magnitude-estimation task was conducted before the cross-modal matching task for Groups 1 and 3, and the cross-modal matching task was conducted before the magnitude-estimation task for Groups 2 and 4. The psychophysical methods of magnitude estimation and cross-modal matching showed no effect of one upon the other when used in the same experiment.

19.
The ability to solve tactual oddity problems, and transfer of oddity learning across the visual and the tactual modalities, was studied in 3- to 8-year-old children (N = 294). Oddity tasks consisting of one odd and two equal objects were made from stimuli that were easily discriminated visually and tactually. The results showed that tactual oddity learning increased gradually with age. The growth in tactual performance begins later than visual, suggesting that children are more adept at encoding visual stimulus invariances or relational properties than tactual ones. Bidirectional cross-modal transfer of oddity learning was found, supporting the suggestion that such transfer occurs when training and transfer oddity tasks share a common vehicle dimension. The cross-modal effect also shows that oddity learning is independent of a specific modality-labeled perceptual context. Our results are consistent with the view that development of oddity learning depends on a single rather than a dual process, and that the oddity relation may be treated as an amodal stimulus feature.

20.
Implicit learning and statistical learning: one phenomenon, two approaches
The domain-general learning mechanisms elicited in incidental learning situations are of potential interest in many research fields, including language acquisition, object knowledge formation and motor learning. They have been the focus of studies on implicit learning for nearly 40 years. Stemming from a different research tradition, studies on statistical learning, carried out in the past 10 years following the seminal studies by Saffran and collaborators, appear to be closely related, and the similarity between the two approaches is strengthened further by their recent evolution. However, implicit learning and statistical learning research favor different interpretations, focusing on the formation of chunks and on statistical computations, respectively. We examine these differing approaches and suggest that this divergence opens up a major theoretical challenge for future studies.
