Similar Literature
20 similar documents retrieved
1.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because only auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation; the former applies to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

2.
Two experiments investigated alternatives to split-attention instructional designs. It was assumed that because a learner has a limited working memory capacity, any increase in cognitive resources required to process split-attention materials decreases resources available for learning. Using computer-based instructional material consisting of diagrams and text, Experiment 1 attempted to ameliorate split-attention effects by increasing effective working memory size through auditory presentation of the text. Auditory presentation of text proved superior to visual-only presentation, but not when the text was presented in both auditory and visual forms. In that case, the visual form was redundant and imposed a cognitive load that interfered with learning. Experiment 2 ameliorated split-attention effects by using colour coding to reduce the cognitive load induced by searching the text for diagrammatic referents. Mental load rating scales provided evidence in both experiments that alternatives to split-attention instructional designs were effective due to reductions in cognitive load. Copyright © 1999 John Wiley & Sons, Ltd.

3.
Two studies are reported that tested the assumption that learning is improved by presenting text and pictures compared to text only, when the text conveys non-spatial rather than spatial information. In Experiment 1, 59 students learned with text containing either visual or spatial contents, both accompanied by the same pictures. The results confirmed the expected interference between the processing of spatial text contents and pictures: Learners who received text containing spatial information showed worse text and picture recall than learners who received text containing visual information. In Experiment 2, 85 students were randomly assigned to one of four conditions, which resulted from a 2×2 between-participants design, with picture presentation (with vs without) and text contents (visual vs spatial) as between-participants factors. Again the results confirmed the expected interference between processing of spatial text information and pictures, because beneficial effects of adding pictures to text were observed only when the texts conveyed visual information. Importantly, when no pictures were present, no differences were observed between learners with either visual or spatial text contents, indicating that the observed effects are not caused by absolute differences between the two texts, such as their difficulty. The implications of these results are discussed.

4.
When we read narrative texts such as novels and newspaper articles, we segment information presented in such texts into discrete events, with distinct boundaries between those events. But do our eyes reflect this event structure while reading? This study examines whether eye movements during the reading of discourse reveal how readers respond online to event structure. Participants read narrative passages as we monitored their eye movements. Several measures revealed that event structure predicted eye movements. In two experiments, we found that both early and overall reading times were longer for event boundaries. We also found that regressive saccades were more likely to land on event boundaries, but that readers were less likely to regress out of an event boundary. Experiment 2 also demonstrated that tracking event structure carries a working memory load. Eye movements provide a rich set of online data to test the cognitive reality of event segmentation during reading.

5.
It was investigated whether the beneficial effect of picture presentation might be influenced by the content conveyed through text and pictures and the way information is distributed between them. Ninety-nine students learnt in five between-subjects learning conditions (i.e., text with spatial contents plus pictures, text with visual contents plus pictures, only text with spatial contents, only text with visual contents, only picture) about a tourist centre and a holiday farm. Results showed that pictures (i.e., maps) were beneficial for learning if spatial knowledge had to be acquired, but did not support learning when non-spatial, visual knowledge had to be acquired. Furthermore, a high overlap of spatial information in text and picture was helpful, which can be explained by the assumption that learning is a text-guided process. On the other hand, regarding non-spatial visual information, a high text-picture overlap did not influence learning, probably because text alone was sufficient for acquiring visual knowledge. The implications of these findings are discussed.

6.
7.
The dual-task paradigm was used to show how visuospatial working memory and the phonological loop are involved in processing scientific texts and illustrations presented via computer. In Experiment 1, two presentation formats were compared: text-only and text-with-illustrations. With a concurrent tapping task, the beneficial effect of illustrations disappeared, while a concurrent articulatory task impaired performance similarly in both presentation formats. An analysis of individual differences revealed that this pattern of results was present in high, but not low, spatial span subjects. These results support the selective involvement of visuospatial working memory in processing illustrated texts. In Experiment 2, the text-only presentation format was compared to an illustrations-only format. The concurrent articulatory task selectively impaired text-only processing, compared with processing illustrations only. In addition, this pattern of results was found for high, but not low, digit span subjects. These results suggest that individual differences define the extent to which the two subsystems of working memory are involved in learning from multimedia. These two subsystems would be mainly involved in the maintenance of a visual trace of illustrations and of a verbatim representation of linguistic information, respectively, these representations being the basis for higher-level comprehension processes. Copyright © 2002 John Wiley & Sons, Ltd.

8.
This study examined how prior knowledge and working memory capacity (WMC) influence the effect of a reading perspective on online text processing. In Experiment 1, 47 participants read and recalled two texts of different familiarity from a given perspective while their eye movements were recorded. The participants' WMC was assessed with the reading span test. The results suggest that if the reader has prior knowledge related to the text contents and a high WMC, relevant text information can be encoded into memory without extra processing time. In Experiment 2, baseline processing times showed whether readers slow down when processing relevant information or read through it faster. The results are discussed in the light of different working memory theories.

9.
Repeated scene layouts facilitate observers' search for target items (the contextual cueing effect). Using a dual-task paradigm, this study added a spatial working memory task to the learning phase (Experiment 2a) and the test phase (Experiment 2b) of a visual search task and compared performance with a single-task baseline (Experiment 1), in order to examine how spatial working memory load affects the learning of contextual cues and the expression of the contextual cueing effect in real-world scene search. The results showed that spatial load increased the magnitude of the contextual cueing effect in the learning phase but weakened it in the test phase, while leaving the explicit retrieval of contextual cues unaffected. These findings indicate that both the learning of contextual cues and the expression of the cueing effect in real-world scenes are constrained by limited working memory resources, whereas the explicit nature of contextual cue retrieval remains unchanged.

10.
Three experiments assessed the role of verbal and visuo-spatial working memory in supporting long-term repetition priming for written words. In Experiment 1, two priming tasks (word stem completion and category-exemplar production) were combined with three levels of load on working memory: (1) no memory load, (2) a memory load that involved storing a string of six digits, and (3) a memory load that involved storing a graphic shape. Experiments 2 and 3 compared the effects of a verbal (Experiment 2) or a visual (Experiment 3) working memory load at encoding on both an implicit test (word stem completion) and an explicit test (cued recall). The results showed no effect of memory load in any of the implicit memory tests, suggesting that priming does not rely on working memory resources. By contrast, loading working memory at encoding caused a significant disruptive effect on the explicit memory test for words when the load was verbal but not visual.

11.
Experimental analogues of post-traumatic stress disorder suggest that loading the visuospatial sketchpad of working memory with a concurrent task reduces the vividness and associated distress of predominantly visual images. The present experiments explicitly tested the hypothesis that interfering with the phonological loop could analogously reduce the vividness and emotional impact of auditory images. In Experiment 1, 30 undergraduates formed non-specific images of emotive autobiographical memories while performing a concurrent task designed to load either the visuospatial sketchpad (eye movements) or phonological loop (articulatory suppression). Participants reported their images to be primarily visual, corresponding to the greater dual-task disruption observed for eye movements. Experiment 2 instructed participants to form specifically visual or auditory images. As predicted, concurrent articulation reduced vividness and emotional intensity ratings of auditory images to a greater extent than did eye movements, whereas concurrent eye movements reduced ratings of visual images much more than did articulatory suppression. Such modality-specific dual-task interference could usefully contribute to the treatment and management of intrusive distressing images in both clinical and non-clinical settings.

12.
Three experiments examined the role of eye and limb movements in the maintenance of information in spatial working memory. In Experiment 1, reflexive saccades interfered with memory span for spatial locations but did not interfere with memory span for letters. In Experiment 2, three different types of eye movements (reflexive saccades, pro-saccades, and anti-saccades) interfered with working memory to the same extent. In all three cases, spatial working memory was much more affected than verbal working memory. The results of these two experiments suggest that eye movements interfere with spatial working memory primarily by disrupting processes localised in the visuospatial sketchpad. In Experiment 3, limb movements performed while maintaining fixation produced as much interference with spatial working memory as reflexive saccades. These results suggest that the interference produced by eye movements is not the result of their visual consequences. Rather, all spatially directed movements appear to have similar effects on visuospatial working memory.

13.
Two experiments examined how interruptions impact reading and how interruption lags and the reader's spatial memory affect the recovery from such interruptions. Participants read paragraphs of text and were interrupted unpredictably by a spoken news story while their eye movements were monitored. Time made available for consolidation prior to responding to the interruption did not aid reading resumption. However, providing readers with a visual cue that indicated the interruption location did aid task resumption substantially in Experiment 2. Taken together, the findings show that the recovery from interruptions during reading draws on spatial memory resources and can be aided by processes that support spatial memory. Practical implications are discussed.

14.
We investigated how a picture fosters learning from text, both with self-paced presentation and with short presentation before the text. In an experiment, participants (N = 114) learned about the structure and functioning of a pulley system in one of six conditions: text only; presentation of the picture before the text for 150 milliseconds, for 600 milliseconds, for 2 seconds, or for a self-paced duration; or self-paced concurrent presentation of text and picture. Presenting the picture for a self-paced study time, both before and concurrently with the text, fostered recall and comprehension and sped up text processing compared with presenting text only. Moreover, even inspecting the picture for only 600 milliseconds or 2 seconds improved comprehension and yielded faster reading of subsequent text about the spatial structure of the system compared with text only. These findings suggest that pictures, even if attended to for only a short time, may yield a spatial mental scaffold that allows for integration with verbal information, thereby fostering comprehension. Copyright © 2013 John Wiley & Sons, Ltd.

15.

16.
Procedural text conveys information about a series of steps to be performed. This study examined the role of verbal and visuo-spatial working memory (WM) in the comprehension and execution of assembly instructions, as a function of format (text, images, multimedia) and task complexity (three or five steps). One hundred and eight participants read and executed 27 instructions to assemble a LEGO™ object, in single- and dual-task conditions. Study times and errors during assembly were measured. Participants processed pictorial and multimedia instructions faster than text instructions, and made fewer errors when executing multimedia instructions. The dual task affected text-only and picture-only presentations more than multimedia presentation. A verbal secondary task caused more errors with text-only and picture-only presentations, and a spatial secondary task also caused interference with text-only instructions. Overall, these results support the multimedia advantage, and the role of both verbal and visuo-spatial WM, in understanding instructions. Copyright © 2016 John Wiley & Sons, Ltd.

17.
When learning with multimedia, text and pictures are assumed to be integrated with each other. Arndt, Schüler, and Scheiter (Learning & Instruction, 35, 62–72, 2015) confirmed text–picture integration for sentence recognition but not for picture recognition. The current paper investigates the underlying reasons for the latter finding. Two experiments are reported in which subjects memorized text–picture stimuli that differed in the specificity of information contained in either the sentences or the pictures. In a subsequent picture recognition test, subjects showed no integration effect after a 30-minute delay (Experiments 1 and 2), but did show one after a 1-week delay (Experiment 2). Furthermore, eye-tracking data showed that participants sufficiently processed the pictures during learning (Experiment 1). This data pattern supports the assumption that after a short delay participants had available a short-lived pictorial surface representation, which masked the integration effect for picture recognition. Copyright © 2015 John Wiley & Sons, Ltd.

18.
In three studies, eye movements of participants were recorded while they viewed a single-slide multimedia presentation about how car brakes work. Some of the participants saw an integrated presentation in which each segment of words was presented near its corresponding area of the diagram (integrated group, Experiments 1 and 3) or an integrated presentation that also included additional labels identifying each part (integrated-with-labels group, Experiment 2), whereas others saw a separated presentation in which the words were presented as a paragraph below the diagrams (separated group, Experiments 1 and 2) or as a legend below the diagrams (legend group, Experiment 3). On measures of cognitive processing during learning, the integrated groups made significantly more eye movements from text to diagram and vice versa (integrative transitions; d = 1.65 in Experiment 1, d = 0.85 in Experiment 2, and d = 1.44 in Experiment 3) and significantly more eye movements from the text to the corresponding part of the diagram (corresponding transitions; d = 2.02 in Experiment 1 and d = 1.35 in Experiment 3) than the separated groups. On measures of learning outcome, the integrated groups significantly outperformed the separated groups on transfer test score in Experiment 1 (d = 0.80) and Experiment 2 (d = 0.73) but not in Experiment 3 (d = 0.35). Spatial contiguity encourages more attempts to integrate words and pictures and enables more successful integration during learning, which can result in meaningful learning outcomes.
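For readers unfamiliar with the effect sizes reported in the entry above, the d values are Cohen's d. As a general reference (the standard definition for two independent groups, not a formula taken from the cited study), it can be written as

d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

where \bar{X}_1 and \bar{X}_2 are the group means, s_1 and s_2 the group standard deviations, and n_1 and n_2 the group sizes. By Cohen's conventional benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large), the eye-movement transition effects reported above (d > 1) are large.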

19.
20.
Two experiments are reported which test the hypothesis that during reading subjects maintain in working memory a record of the spatial locations of items and that this code is used to guide reinspections. In the first experiment, the premisses of short syllogisms were read, one word at a time, under three presentation conditions: (a) in correct temporal order and in appropriate sequential spatial locations; (b) in correct temporal order but in random spatial locations; (c) in correct temporal order and in a single central location. The time to respond to possible conclusions of the syllogisms was measured. Solution times were longer in conditions (b) and (c) relative to (a). In the second experiment, eye movements were recorded as subjects judged the soundness of auditorily presented conclusions following visual presentation of the premisses of the syllogisms. Non-random eye movements, directed to locations previously occupied by text in the display, took place during the solution phase.
