Similar Literature (20 results)
1.
Implicit statistical learning (ISL) is exclusive to neither a particular sensory modality nor a single domain of processing. Even so, differences in perceptual processing may substantially affect learning across modalities. In three experiments, statistically equivalent auditory and visual familiarizations were presented under different timing conditions that either facilitated or disrupted temporal processing (fast or slow presentation rates). We find an interaction of rate and modality of presentation: At fast rates, auditory ISL was superior to visual. However, at slow presentation rates, the opposite pattern of results was found: Visual ISL was superior to auditory. Thus, we find that changes to presentation rate differentially affect ISL across sensory modalities. Additional experiments confirmed that this modality-specific effect was not due to cross-modal interference or attentional manipulations. These findings suggest that ISL is rooted in modality-specific, perceptually based processes.

2.
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
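To make the triplet-alignment manipulation concrete, the sketch below generates statistically structured auditory and visual streams whose triplet boundaries either coincide or are offset by one element. The triplet inventories, stream length, and one-element shift are illustrative assumptions, not materials from the study.

```python
import random

# Hypothetical triplet inventories standing in for the study's actual
# syllables and shapes.
AUDITORY_TRIPLETS = [("pa", "bi", "ku"), ("ti", "go", "la"), ("do", "ra", "fe")]
VISUAL_TRIPLETS = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]

def make_stream(triplets, n_triplets, rng):
    """Concatenate randomly ordered triplets (no immediate repeats), so
    transitional probabilities are 1.0 within a triplet and lower across
    triplet boundaries."""
    stream, prev = [], None
    for _ in range(n_triplets):
        triplet = rng.choice([t for t in triplets if t is not prev])
        stream.extend(triplet)
        prev = triplet
    return stream

rng = random.Random(0)
audio = make_stream(AUDITORY_TRIPLETS, 100, rng)

# Aligned condition: visual triplet boundaries coincide with auditory ones.
visual_aligned = make_stream(VISUAL_TRIPLETS, 100, rng)

# Misaligned condition: offset the visual stream by one element so the
# audio and visual triplet boundaries never line up.
visual_misaligned = visual_aligned[1:] + visual_aligned[:1]
```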

3.
Differences in paired-associate learning for auditory versus visual modalities were investigated, and within each modality the anticipation and study-test methods of item presentation were compared. Extant reports regarding these two sensory modalities and the two learning methods had been inconsistent. In this study of 40 university students, the learning of CVC-CVC nonsense syllable pairs was significantly better with the visual than with the auditory modality. The study-test method was significantly superior to the anticipation method in the visual mode. With auditory presentations, however, acquisition levels for both methods were the same. Significant interactions were observed between sensory modalities and methods of presentation. At present, the retention interval theory (Izawa 1972–1979b) appears to account best for the varied findings with respect to the two methods of presentation.

4.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

5.
The present study investigated modality-specific differences in processing of temporal information in the subsecond range. For this purpose, participants performed auditory and visual versions of a rhythm perception task and three different duration discrimination tasks to allow for a direct, systematic comparison across both sensory modalities. Our findings clearly indicate higher temporal sensitivity in the auditory than in the visual domain irrespective of the type of timing task. To further evaluate whether there is evidence for a common modality-independent timing mechanism or for multiple modality-specific mechanisms, we used structural equation modeling to test three different theoretical models. Neither a single modality-independent timing mechanism nor two independent modality-specific timing mechanisms fitted the empirical data. Rather, the data are well described by a hierarchical model with modality-specific visual and auditory temporal processing at a first level and a modality-independent processing system at a second level of the hierarchy.
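In factor-analytic terms, the winning hierarchical model can be sketched as follows; the notation is illustrative, not the authors' own. The score on timing task i in modality m loads on a modality-specific first-level factor F_m, and both modality factors load on a single general timing factor G:

```latex
% First level: task scores load on modality-specific timing factors.
x_i^{(m)} = \lambda_i^{(m)} F_m + \varepsilon_i^{(m)},
    \qquad m \in \{\mathrm{auditory}, \mathrm{visual}\}
% Second level: both modality factors load on a general factor G.
F_m = \gamma_m G + \zeta_m
```

The two rejected models are constrained versions of this sketch: a single modality-independent mechanism collapses the two first-level factors into G itself, while fully independent modality-specific mechanisms drop G and force the two first-level factors to be uncorrelated.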

6.

How do people automatize their dual-task performance through bottleneck bypassing (i.e., accomplish parallel processing of the central stages of two tasks)? In the present work we addressed this question, evaluating the impact of sensory–motor modality compatibility—the similarity in modality between the stimulus and the consequences of the response. We hypothesized that incompatible sensory–motor modalities (e.g., visual–vocal) create conflicts within modality-specific working memory subsystems, and therefore predicted that tasks producing such conflicts would be performed less automatically after practice. To probe for automaticity, we used a transfer psychological refractory period (PRP) procedure: Participants were first trained on a visual task (Exp. 1) or an auditory task (Exp. 2) by itself, which was later presented as Task 2, along with an unpracticed Task 1. The Task 1–Task 2 sensory–motor modality pairings were either compatible (visual–manual and auditory–vocal) or incompatible (visual–vocal and auditory–manual). In both experiments we found converging indicators of bottleneck bypassing (small dual-task interference and a high rate of response reversals) for compatible sensory–motor modalities, but indicators of bottlenecking (large dual-task interference and few response reversals) for incompatible sensory–motor modalities. Relatedly, the proportion of individuals able to bypass the bottleneck was high for compatible modalities but very low for incompatible modalities. We propose that dual-task automatization is within reach when the tasks rely on codes that do not compete within a working memory subsystem.


7.
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Penney, Gibbon, & Meck, 2000). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention, indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom, whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that (1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and (2) different factors drive individual differences when testing across modalities.

8.
It is suggested that the distinction between global versus local processing styles exists across sensory modalities. Activation of one way of processing in one modality should affect processing styles in a different modality. In 12 studies, auditory, haptic, gustatory or olfactory global versus local processing was induced, and participants were tested with a measure of their global versus local visual attention; the content of this measure was unrelated to the inductions. In a different set of 4 studies, the effect of local versus global visual processing on the way people listen to a poem or touch, taste, and smell objects was examined. In all experiments, global/local processing in 1 modality shifted to global/local processing in the other modality. A final study found more pronounced shifts when compatible processing styles were induced in 2 rather than 1 modality. Moreover, the study explored mediation by relative right versus left hemisphere activation as measured with the line bisection task and accessibility of semantic associations. It is concluded that the effects reflect procedural rather than semantic priming effects that occurred outside participants' awareness. Because global/local processing has been shown to affect higher order processing, future research may activate processing styles in other sensory modalities to produce similar effects. Furthermore, because global/local processing is triggered by a variety of real world variables, one may explore effects on sensory modalities other than vision. The results are consistent with the global versus local processing model, a systems account (GLOMOsys; Förster & Dannenberg, 2010).

9.
In two experiments, subjects identified temporal patterns. The patterns consisted of eight dichotomous (left-right) elements, e.g. LLRRLRLR, continuously repeated until the subject was able to identify the pattern. In Experiment 1, a pattern was presented either in a single modality (auditory, tactual, or visual) or simultaneously in two modalities (compatible presentation). In Experiment 2, one pattern was simultaneously presented in two modalities, but the pattern was presented in one modality and the complement of the pattern (the complement of LLRRLRLR is RRLLRLRL) was presented in the second modality. Therefore, opposite spatial elements appeared in each modality (incompatible presentation).

The results indicated that the rate of pattern identification was the same for compatible and incompatible presentation. These methods produced better performance than individual modality presentation at fast presentation rates (2 elements/sec. and faster), although individual modality presentation was better at slower rates. This suggests that when a pattern is presented in two modalities, the pattern in each modality is integrated, not the particular spatial elements in each modality. Furthermore, the rate of pattern identification using individual modalities did not predict the difficulty using pairs of modalities. These results demonstrate the Gestalt nature of pattern perception; the pattern is perceptually salient, and the performance of pairs of modalities depends on the inherent properties of the individual modalities.

10.
Statistical learning (SL), sensitivity to probabilistic regularities in sensory input, has been widely implicated in cognitive and perceptual development. Little is known, however, about the underlying mechanisms of SL and whether they undergo developmental change. One way to approach these questions is to compare SL across perceptual modalities. While a decade of research has compared auditory and visual SL in adults, we present the first direct comparison of visual and auditory SL in infants (8–10 months). Learning was evidenced in both perceptual modalities but with opposite directions of preference: Infants in the auditory condition displayed a novelty preference, while infants in the visual condition showed a familiarity preference. Interpreting these results within the Hunter and Ames model (1988), where familiarity preferences reflect a weaker stage of encoding than novelty preferences, we conclude that learning is weaker in the visual modality than in the auditory modality at this age. In addition, we found evidence of different developmental trajectories across modalities: Auditory SL increased while visual SL did not change across this age range. The results suggest that SL is not an abstract, amodal ability; for the types of stimuli and statistics tested, we find that auditory SL precedes the development of visual SL, a pattern consistent with recent work comparing SL across modalities in older children.

11.
Methodological biases may help explain the modality effect, which is superior recall of auditory recency (end of list) items relative to visual recency items. In 1985 Nairne and McNabb used a counting procedure to reduce methodological biases, and they produced modality-like effects, such that recall of tactile recency items was superior to recall of visual recency items. The present study extended Nairne and McNabb's counting procedure and controlled several variables which may have enhanced recall of tactile end items or disrupted recall of visual end items in their study. Although the results of the present study indicated general serial position effects across tactile, visual, and auditory presentation modalities, the tactile condition showed lower recall for the initial items in the presentation list than the other two conditions. Moreover, recall of the final list item did not differ across the three presentation modalities; modality effects were not found. These results did not replicate the findings of Nairne and McNabb, or much of the past research showing superior recall of auditory recency items. Implications of these findings are discussed.

12.
Common processing systems involved during reading and listening were investigated. Semantic, phonological, and physical systems were examined using an experimental procedure that involved simultaneous presentation of two words: one visual and one auditory. Subjects were instructed to attend to only one modality and to make responses on the basis of words presented in that modality. Influence of unattended words on semantic and phonological decisions indicated that these processing systems are common to the two modalities. Decisions in the physical task were based on modality-specific codes operating prior to the convergence of information from the two modalities.

13.
This paper reports a study which examined an interaction between action planning and processing of perceptual information in two different sensory modalities. In line with the idea that action planning consists in representing the action’s sensory outcomes, it was assumed that different types of actions should be coupled with different modalities. A visual and auditory oddball paradigm was combined with two types of actions: pointing and knocking (unrelated to the perceptual task). Results showed an interactive effect between the action type and the sensory modality of the oddballs, with impaired detection of auditory oddballs for knocking (congruent) action, as compared to a pointing (incongruent) action. These findings reveal that action planning can interact with modality-specific perceptual processing and that preparing an action presumably binds the respective perceptual features with an action plan, thereby making these features less available for other tasks.

14.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

15.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because solely auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

16.
Acta Psychologica, 2013, 143(1), 58-64
The current study investigates whether probabilistic categorization on the Weather Prediction task involves a single, modality/domain-general learning mechanism or whether there are modality/domain differences. The same probabilistic categorization task was used in three modalities/domains and two modes of presentation. Cues consisted of visual, auditory-verbal or auditory-nonverbal stimuli, and were presented either sequentially or simultaneously. Results show that while there was no general difference in performance across modalities/domains, the mode of presentation affected them differently. In the visual modality, simultaneous presentation had a general advantage over sequential presentation, while in the auditory conditions there was an initial advantage for simultaneous presentation, which disappeared and, in the non-verbal condition, gave way to a sequential advantage in the later stages of learning. Data suggest that there are strong peripheral modality effects; however, there are no signs of the modality/domain of stimuli centrally affecting categorization.
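For illustration, the sketch below generates trials with the probabilistic cue-outcome structure typical of Weather Prediction tasks. The four cue probabilities are classic values from this literature, and averaging the probabilities of the displayed cues is a simplification, so neither detail necessarily matches this study's design.

```python
import random

# Illustrative P(rain | cue) values for the four cues.
CUE_PROBS = {"cue1": 0.8, "cue2": 0.6, "cue3": 0.4, "cue4": 0.2}

def make_trial(rng):
    """Draw a random nonempty cue subset and a probabilistic outcome."""
    cues = [c for c in CUE_PROBS if rng.random() < 0.5]
    if not cues:  # ensure at least one cue is displayed
        cues = [rng.choice(list(CUE_PROBS))]
    p_rain = sum(CUE_PROBS[c] for c in cues) / len(cues)  # simplified rule
    outcome = "rain" if rng.random() < p_rain else "sunshine"
    return cues, outcome

rng = random.Random(1)
trials = [make_trial(rng) for _ in range(200)]
```

Cues in such a sketch could then be rendered as shapes, spoken words, or nonverbal sounds, presented either all at once or one after another, mirroring the modality and presentation-mode manipulations described above.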

17.
Serial recall of lip-read, auditory, and audiovisual memory lists with and without a verbal suffix was examined. Recency effects were the same in the three presentation modalities. The disrupting effect of a suffix was largest when it was presented in the same modality as the list items. The results suggest that abstract linguistic as well as modality-specific codes play a role in memory for auditory and visual speech.

18.
Tracking an audio-visual target involves integrating spatial cues about target position from both modalities. Such sensory cue integration is a developmental process in the brain involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian learning-based adaptive neural circuit for multi-modal cue integration. The circuit temporally correlates stimulus cues within each modality via intramodal learning as well as symmetrically across modalities via crossmodal learning to independently update modality-specific neural weights on a sample-by-sample basis. It is realised as a robotic agent that must orient towards a moving audio-visual target. It continuously learns the best possible weights required for a weighted combination of auditory and visual spatial target directional cues that is directly mapped to robot wheel velocities to elicit an orientation response. Visual directional cues are noise-free and continuous but arising from a relatively narrow receptive field while auditory directional cues are noisy and intermittent but arising from a relatively wider receptive field. Comparative trials in simulation demonstrate that concurrent intramodal learning improves both the overall accuracy and precision of the orientation responses of symmetric crossmodal learning. We also demonstrate that symmetric crossmodal learning improves multisensory responses as compared to asymmetric crossmodal learning. The neural circuit also exhibits multisensory effects such as sub-additivity, additivity and super-additivity.
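A minimal sketch of the weighted cue combination and sample-by-sample weight update described above. The agreement-based Hebbian rule, the learning rate, and the differential-drive mapping to wheel velocities are assumptions for illustration, not the paper's implementation.

```python
import random

w_audio, w_visual = 0.5, 0.5  # modality-specific weights (assumed start values)
ETA = 0.01                    # learning rate (assumed)

def orient_step(audio_cue, visual_cue):
    """Combine directional cues (each in -1..1, 0 = straight ahead) into a
    turn command; Hebbian-style, a weight grows when its cue agrees with
    the combined estimate and shrinks when it conflicts."""
    global w_audio, w_visual
    turn = w_audio * audio_cue + w_visual * visual_cue
    w_audio = max(1e-3, w_audio + ETA * audio_cue * turn)
    w_visual = max(1e-3, w_visual + ETA * visual_cue * turn)
    total = w_audio + w_visual       # renormalise so the combination stays
    w_audio /= total                 # convex and the turn command bounded
    w_visual /= total
    base = 1.0                       # forward speed (assumed)
    return base + turn, base - turn  # (left_wheel, right_wheel) velocities

# Toy run: noisy, intermittent audio versus a clean visual cue.
rng = random.Random(2)
for _ in range(100):
    audio = 0.3 + rng.gauss(0, 0.2) if rng.random() < 0.7 else 0.0
    left, right = orient_step(audio, visual_cue=0.3)
```

Over such a run, the noisier, intermittent modality correlates less consistently with the combined estimate, so its weight drifts downward, which is one simple way to realise the reliability-weighted integration the paper reports.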

19.
Two experiments were conducted that examined information integration and rule-based category learning, using stimuli that contained auditory and visual information. The results suggest that it is easier to perceptually integrate information within these sensory modalities than across modalities. Conversely, it is easier to perform a disjunctive rule-based task when information comes from different sensory modalities, rather than from the same modality. Quantitative model-based analyses suggested that the information integration deficit for across-modality stimulus dimensions was due to an increase in the use of hypothesis-testing strategies to solve the task and to an increase in random responding. The modeling also suggested that the across-modality advantage for disjunctive, rule-based category learning was due to a greater reliance on disjunctive hypothesis-testing strategies, as opposed to unidimensional hypothesis-testing strategies and random responding.

20.
The present study investigated the effects of presentation modality on the verbal learning performance of 26 older adults and 26 younger counterparts. A multitrial free-recall paradigm was implemented incorporating three modalities: Auditory, Visual, and simultaneous Auditory plus Visual. Older subjects learned fewer words than younger subjects, but their rate of learning was similar to that of the younger group. The visual presentation of objects (with or without the simultaneous auditory presentation of names) resulted in better learning, recall, and retrieval of information than the auditory presentation alone.
