Similar Articles
1.
This study investigated the effects of stimulus presentation modality on working memory performance in elementary school-age children ages 7–13. The experimental paradigm implemented a multitrial learning task incorporating three presentation modalities: auditory, visual, and simultaneous auditory plus visual. The first experiment compared the learning and memory performance of older and younger elementary school children. The second experiment compared verbal learning and memory performance in elementary school children with major depressive disorder (MDD) to the performance of nondepressed children. All participants benefited from the pictorial presentation of information during learning and recall, as compared to auditory presentation alone. Both age and socioeconomic status affected working memory performance in typically developing children. Children with depression demonstrated a more passive learning style during auditory list acquisition. The present study supports the pictorial superiority hypothesis in verbal learning tasks and the theory that working memory matures during the elementary school years. Furthermore, the current results indicate that complex working memory measures are not entirely independent of previous experience.

2.
Evidence for picture superiority in verbal learning following moderate to severe closed head injury (CHI) was found in a study involving 31 participants with CHI and 31 noninjured participants. A multitrial free-recall paradigm was implemented incorporating three modalities: auditory, visual, and simultaneous auditory plus visual. Participants with moderate to severe CHI learned fewer words and at a slower rate than the noninjured participants. The visual presentation of objects (with or without the simultaneous auditory presentation of their names) resulted in better learning than the auditory presentation alone.

3.
Lists of digits 5 and 7 items in length were presented to second graders, sixth graders, and low-IQ sixth graders in either the visual or the auditory modality. Half of the auditory lists were followed by the redundant, nonrecalled, auditorily presented word “recall,” which served as a list suffix. The second graders made the most errors in the ordered recall task, followed by the low-IQ sixth graders and the normal sixth graders, in that order. The size of the modality and suffix effects for the various groups seemed to indicate that, for the younger subjects, a larger proportion of the recall after auditory presentation comes from the Prelinguistic Auditory Store than for the older subjects.

4.
Auditory text presentation improves learning with pictures and texts. With sequential text–picture presentation, cognitive models of multimedia learning explain this modality effect in terms of greater visuo-spatial working memory load with visual as compared to auditory texts. Visual texts are assumed to demand the same working memory subsystem as pictures, while auditory texts make use of an additional cognitive resource. We provide two alternative assumptions that relate to more basic processes: first, acoustic-sensory information causes a retention advantage for auditory over visual texts that occurs whether or not a picture is presented; second, eye movements during reading hamper visuo-spatial rehearsal. Two experiments applying elementary procedures provide first evidence for these assumptions. Experiment 1 demonstrates that, regarding text recall, the auditory advantage is independent of visuo-spatial working memory load. Experiment 2 reveals worse matrix recognition performance after reading text requiring eye movements than after listening or reading without eye movements.

5.
Two experiments with 5- and 7-year-old children tested the hypotheses that auditory attention is used to (a) monitor a TV program for important visual content, and (b) semantically process program information through language to enhance comprehension and visual attention. A direct measure of auditory attention was the latency of the child's restoration of gradually degraded sound quality. Restoration of auditory clarity did not vary as a function of looking. Restoration of visual clarity was faster when looking than when not looking. Restoration was faster for visual than auditory degrades, but audiovisual degrades were restored most rapidly of all, suggesting that dual modality presentation maximizes children's attention. Narration enhanced visual attention and comprehension including comprehension of visually presented material. Auditory comprehension did not depend on looking, suggesting that children can semantically process verbal content without looking at the TV. Auditory attention did not differ with the presence or absence of narration, but did predict auditory comprehension best while visual attention predicted visual comprehension best. In the absence of narration, auditory attention predicted visual comprehension, suggesting its monitoring function. Visual attention indexed overall interest and appeared to be most critical for comprehension in the absence of narration.

6.
During presentation of auditory and visual lists of words, different groups of subjects generated words that either rhymed with the presented words or that were associates. Immediately after list presentation, subjects recalled either the presented or the generated words. After presentation and test of all lists, a final free recall test and a recognition test were given. Visual presentation generally produced higher recall and recognition than did auditory presentation for both encoding conditions. The results are not consistent with explanations of modality effects in terms of echoic memory or greater temporal distinctiveness of auditory items. The results are more in line with the separate-streams hypothesis, which argues for different kinds of input processing for auditory and visual items.

7.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because solely auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

8.
Modality effects and the structure of short-term verbal memory
The effects of auditory and visual presentation upon short-term retention of verbal stimuli are reviewed, and a model of the structure of short-term memory is presented. The main assumption of the model is that verbal information presented to the auditory and visual modalities is processed in separate streams that have different properties and capabilities. Auditory items are automatically encoded in both the A (acoustic) code, which, in the absence of subsequent input, can be maintained for some time without deliberate allocation of attention, and a P (phonological) code. Visual items are retained in both the P code and a visual code. Within the auditory stream, successive items are strongly associated; in contrast, in the visual modality, it is simultaneously presented items that are strongly associated. These assumptions about the structure of short-term verbal memory are shown to account for many of the observed effects of presentation modality.

9.
Two experiments testing immediate ordered recall are presented; in these experiments, subjects engaged in repetitive speech (“articulatory suppression”) during a visual presentation in order to prevent auditory recoding of the stimuli. In both experiments, a simultaneous presentation produced results that suggested the use of visual short-term memory, whereas a sequential presentation did not. In Experiment 1, visual confusion errors occurred more often than would be expected by chance for a simultaneous presentation but not for a sequential presentation. In Experiment 2, recall from visual short-term memory was expected to suffer more when subjects wrote a prefix than when they spoke a prefix; this effect occurred for a simultaneous presentation but not for a sequential presentation. These results suggest the existence of a visual short-term store that retains a simultaneous presentation but not a sequential presentation.

10.
This study examined the relation between a subjective and a behavioral measure of the vividness of auditory imagery as well as the disposition towards hallucination in normal subjects. In addition to the Launay-Slade Hallucination Scale, subjects (57 university students) completed the Betts questionnaire, in which they rated the vividness of their experienced mental images, and performed a behavioral task aimed at measuring auditory imagery. The task consisted of a perception and an imagery condition in which subjects had to indicate the odd one of three everyday sounds. Performance on the behavioral task did not correlate with scores on the Auditory or Visual subscales of the Betts. In addition, neither scores on the behavioral measure nor the Auditory subscale of the Betts correlated significantly with hallucinatory predisposition as rated on the Launay-Slade Hallucination Scale. In contrast, the Visual subscale of the Betts did correlate with scores on the Launay-Slade Hallucination Scale, consistent with previous research. We conclude that there is no straightforward relationship between imagery vividness and hallucinatory experiences and that subjective and objective indices of imagery vividness measure different aspects of mental function.

11.
Implicit statistical learning (ISL) is exclusive to neither a particular sensory modality nor a single domain of processing. Even so, differences in perceptual processing may substantially affect learning across modalities. In three experiments, statistically equivalent auditory and visual familiarizations were presented under different timing conditions that either facilitated or disrupted temporal processing (fast or slow presentation rates). We find an interaction of rate and modality of presentation: At fast rates, auditory ISL was superior to visual. However, at slow presentation rates, the opposite pattern of results was found: Visual ISL was superior to auditory. Thus, we find that changes to presentation rate differentially affect ISL across sensory modalities. Additional experiments confirmed that this modality-specific effect was not due to cross-modal interference or attentional manipulations. These findings suggest that ISL is rooted in modality-specific, perceptually based processes.

12.
Visual dominance was investigated in a motor learning task with the criterion movement being in the lateral plane of the body. The criterion movement was a 10-in. abduction of the arm. All subjects received four presentation trials for the criterion movement in each of the following conditions: dominant rotated arm, dominant unrotated arm, nondominant rotated arm, and nondominant unrotated arm. Three independent groups of 10 college-age subjects differed according to sensory stimuli given during presentation trials. The Kinesthetic group was blindfolded for presentation trials. The Visual and Kinesthetic group was unblindfolded for presentation trials. The Alternating group was blindfolded on half of the presentation trials and unblindfolded on the other half. All subjects carried out five blindfolded reproduction trials for each of the four conditions. Absolute error for the length of the reproduced movements was measured and no significant difference between groups was found. This suggests that visual dominance is reduced in movements outside the frontal plane when focal vision is not used. Planned comparison testing indicated the Alternating group was significantly more accurate than the Visual and Kinesthetic group.

13.
Bonobos (Pan paniscus; n = 4), chimpanzees (Pan troglodytes; n = 12), gorillas (Gorilla gorilla; n = 8), and orangutans (Pongo pygmaeus; n = 6) were presented with 2 cups (1 baited) and given visual or auditory information about their contents. Visual information consisted of letting subjects look inside the cups. Auditory information consisted of shaking the cup so that the baited cup produced a rattling sound. Subjects correctly selected the baited cup both when they saw and when they heard the food. Nine individuals were above chance in both visual and auditory conditions. More important, subjects as a group selected the baited cup when only the empty cup was either shown or shaken, which means that subjects chose correctly without having seen or heard the food (i.e., inference by exclusion). Control tests showed that subjects were not more attracted to noisy cups, did not avoid shaken noiseless cups, and did not learn to use auditory information as a cue during the study. It is concluded that subjects understood that the food caused the noise, not simply that the noise was associated with the food.

14.
Research on the effects of context and task on learning and memory has included approaches that emphasize processes during learning (e.g., Craik & Tulving, 1975) and approaches that emphasize a match of conditions during learning with conditions during a later test of memory (e.g., Morris, Bransford, & Franks, 1977; Proteau, 1992; Tulving & Thomson, 1973). We investigated the effects of auditory context on learning and retrieval in three experiments on memorized music performance (a form of serial recall). Auditory feedback (presence or absence) was manipulated while pianists learned musical pieces from notation and when they later played the pieces from memory. Auditory feedback during learning significantly improved later recall. However, auditory feedback at test did not significantly affect recall, nor was there an interaction between conditions at learning and test. Auditory feedback in music performance appears to be a contextual factor that affects learning but is relatively independent of retrieval conditions.

15.
Analogous auditory and visual central-incidental learning tasks were administered to 24 second-, fourth-, and sixth-grade and college-age subjects to study the effects of modality of presentation on memory for central and incidental stimulus materials. There was no strong evidence to indicate that modality of presentation was an important factor in the development of selective attention. Central task learning increased with age for both auditory and visual presentations; incidental learning declined at the oldest age level for both auditory and visual tasks. The serial position analysis revealed that the observed developmental increase in recall performance was due primarily to differences in the initial serial positions. The use of active strategies for focusing attention on the relevant stimulus materials seemed to be the crucial determinant of level of performance.

16.
Auditory and visual presentations of verbal material were compared in a single patient with an auditory verbal S.T.M. deficit. A Peterson short-term forgetting experiment and an immediate memory span task are reported. Striking differences in performance related to modality of input were obtained. Auditory short-term forgetting was more rapid, whereas with visual presentation short-term decay functions were relatively normal. With visual presentation there was no evidence of acoustic confusion errors but there was some evidence of visual confusion errors. The findings are interpreted in terms of a separate post-perceptual visual S.T.M. system.

17.
To determine the relations of auditory discrimination and intelligence to reading achievement in first grade, the California Test of Mental Maturity, the California Achievement Test (reading), and the Buktenica Modification of the Wepman Auditory Discrimination Test were administered to 78 first-grade students. Correlations suggested a stronger relation between auditory discrimination and reading than between IQ and reading as measured here; however, the range of reading scores was restricted. Results support Wepman's developmental theory. The methodological approach includes simultaneous consideration of the effects of auditory discrimination and intelligence.

18.
Auditory perception of English minimal pairs was tested with or without a noise background. Each subject was interviewed after the test to collect information regarding their early experience of learning English as a foreign language. The study was designed to examine the differential effects of beginning English study at three starting ages and for two learning durations, and to determine how childhood experience of English learning (which is not mandatory in public elementary schools) affected the ability of university students to distinguish English minimal pairs. Results showed that age effects were salient only in the noise-background condition. Without the interference of background noise, most subjects performed well enough to obliterate any potential differences.

19.
In two experiments, we investigated how auditory–motor learning influences performers’ memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory–motor (normal performance), and weakly coupled auditory–motor (performing along with auditory recordings). Pianists’ recognition of the learned melodies was better following auditory-only or auditory–motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory–motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory–motor learning. These findings suggest that motor learning can aid performers’ auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
