Similar Documents (20 results)
1.
Two studies are reported that tested the assumption that learning is improved by presenting text and pictures compared to text only when the text conveys non-spatial rather than spatial information. In Experiment 1, 59 students learned with text containing either visual or spatial contents, both accompanied by the same pictures. The results confirmed the expected interference between the processing of spatial text contents and pictures: Learners who received text containing spatial information showed worse text and picture recall than learners who received text containing visual information. In Experiment 2, 85 students were randomly assigned to one of four conditions, which resulted from a 2×2 between-participants design, with picture presentation (with vs. without) and text contents (visual vs. spatial) as between-participants factors. Again the results confirmed the expected interference between the processing of spatial text information and pictures, because beneficial effects of adding pictures to text were observed only when the texts conveyed visual information. Importantly, when no pictures were present, no differences were observed between learners with either visual or spatial text contents, indicating that the observed effects are not caused by absolute differences between the two texts, such as their difficulty. The implications of these results are discussed.

2.
Auditory text presentation improves learning with pictures and texts. With sequential text–picture presentation, cognitive models of multimedia learning explain this modality effect in terms of greater visuo-spatial working memory load with visual as compared to auditory texts. Visual texts are assumed to demand the same working memory subsystem as pictures, while auditory texts make use of an additional cognitive resource. We provide two alternative assumptions that relate to more basic processes: First, acoustic-sensory information causes a retention advantage for auditory over visual texts, which occurs whether or not a picture is presented. Second, eye movements during reading hamper visuo-spatial rehearsal. Two experiments applying elementary procedures provide first evidence for these assumptions. Experiment 1 demonstrates that, regarding text recall, the auditory advantage is independent of visuo-spatial working memory load. Experiment 2 reveals worse matrix recognition performance after reading text requiring eye movements than after listening or reading without eye movements. Copyright © 2008 John Wiley & Sons, Ltd.

3.
We investigated how a picture fosters learning from text, both with self-paced presentation and with short presentation before text. In an experiment, participants (N = 114) learned about the structure and functioning of a pulley system in one of six conditions: text only, picture presentation for 150 milliseconds, 600 milliseconds, or 2 seconds, or self-paced before text, or self-paced concurrent presentation of text and picture. Presenting the picture for self-paced study time, both before and concurrently with text, fostered recall and comprehension and sped up text processing compared with presenting text only. Moreover, even inspecting the picture for only 600 milliseconds or 2 seconds improved comprehension and yielded faster reading of subsequent text about the spatial structure of the system compared with text only. These findings suggest that pictures, even if attended for a short time only, may yield a spatial mental scaffold that allows for the integration with verbal information, thereby fostering comprehension. Copyright © 2013 John Wiley & Sons, Ltd.

4.
Text–picture integration is one of the most important cognitive processes when reading illustrated text. There is empirical evidence that text–picture integration takes place when learning with pictures combined with single sentences. The present experiment investigated whether text–picture integration also takes place when the single sentences are embedded into longer text segments and hence when materials become more complex. In a within-subjects design, 43 participants read an illustrated story, in which the different combinations of general and specific sentences and pictures, respectively, were embedded. In line with previous findings, participants were more likely to falsely recognize specific versions of the sentences after having studied their general versions combined with specific pictures. Thus, the experiment shows that text–picture integration also occurs when learners have to read longer text passages combined with pictures.

5.
In a series of four experiments, we examined the impact of disfluency in multimedia learning by testing contrasting predictions derived from disfluency theory and cognitive load theory against each other. Would a less legible text be beneficial to learning when accompanied by pictures, and what would be the role of less legible pictures? Students (N = 308) learned with text and pictures that were either easy-to-read (i.e., fluent) or harder-to-read (i.e., disfluent) about how a toilet flush works (Experiments 1–3) and about how lightning develops (Experiment 4). In line with disfluency theory, a disfluent text led to better performance in the transfer test and to more invested mental effort in Experiment 1. However, these beneficial effects could not be replicated in Experiments 2, 3, and 4, leaving open questions regarding the stability and generalizability of the disfluency effect, and thus raising concerns regarding its impact for educational practice. Copyright © 2014 John Wiley & Sons, Ltd.

6.
Children have a bias to trust spoken testimony, yet early readers have an even stronger bias to trust print. Here, we ask how enduring the influence of printed testimony is: Can the learning be applied to new scenarios? Using hybrid pictures that were more dominant in one animal species (e.g., squirrel) than another (e.g., rabbit), we examined 3–6-year-olds' (N = 130) acceptance of an unexpected, non-dominant label suggested only orally or via print. Consistent with previous findings, early readers, but not pre-readers, accepted printed labels more frequently than spoken ones. Children were then presented with identical but unlabelled hybrid exemplars and frequently applied the non-dominant labels to these. Despite early readers' greater prior acceptance of printed labels, oral suggestions, once accepted, retained a greater influence. The findings highlight potential implications for educators regarding how knowledge is applied to new scenarios: for early readers, unexpected information from text may be fragile, whereas greater confidence might be placed in such information gained from spoken testimony.

7.
8.
When learning with multimedia, text and pictures are assumed to be integrated with each other. Arndt, Schüler, and Scheiter (Learning & Instruction, 35, 62–72, 2015) confirmed the process of text–picture integration for sentence recognition, not, however, for picture recognition. The current paper investigates the underlying reasons for the latter finding. Two experiments are reported, in which subjects memorized text–picture stimuli that differed in the specificity of information contained in either the sentences or the pictures. In a subsequent picture recognition test, subjects showed no integration effect after a 30-minute delay (Experiments 1 and 2), but did show one after a 1-week delay (Experiment 2). Furthermore, eye-tracking data showed that participants sufficiently processed the pictures during learning (Experiment 1). This data pattern speaks in favor of the assumption that after a short delay participants had available a short-lived pictorial surface representation, which masked the integration effect for pictorial recognition. Copyright © 2015 John Wiley & Sons, Ltd.

9.
This study aimed to examine how different forms (still pictures vs. animations) of seductive illustrations impact text-and-graphic learning processes, perceptions, and outcomes. An eye-tracking experiment with three groups (static, dynamic, and control) was conducted with 60 college and graduate students while they learned with PowerPoint slides about infant motor development milestones. Prior knowledge, learning performance, learning perception, and visual attention were assessed by achievement tests, self-rated scales, and eye-tracking measures. Analysis of variance and t-test results showed that, under a low task-load condition, no seductive details effect was found for learning achievement, but one was found for learning process and perception. Decreased attention to the relevant pictures was found in both experimental groups. Having processed the seductive animations more deeply and intensively, the dynamic group perceived more distraction than the static group. Lag sequential analysis results revealed different visual transitional patterns for the groups, providing a deeper understanding of the process underlying seductive details effects.

10.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because texts and pictures can be attended to simultaneously only when the text is auditory. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

11.
Sight-word instruction can be a useful supplement to phonics-based methods under some circumstances. Nonetheless, few studies have evaluated the conditions under which pictures may be used successfully to teach sight-word reading. In this study, we extended prior research by examining two potential strategies for reducing the effects of overshadowing when using picture prompts. Five children with developmental disabilities and two typically developing children participated. In the first experiment, the therapist embedded sight words within pictures but gradually faded in the pictures as needed using a least-to-most prompting hierarchy. In the second experiment, the therapist embedded text-to-picture matching within the sight-word reading sessions. Results suggested that these strategies reduced the interference typically observed with picture prompts and enhanced performance during teaching sessions for the majority of participants. Text-to-picture matching also accelerated mastery of the sight words relative to a condition under which the therapist presented text without pictures.

12.
Two experiments were conducted examining the effects of partial picture adjuncts on young children's coding of information that was implied in sentences. In the two most critical conditions of these studies, subjects were presented sentences specifying a subject, an action, and a direct object with the instrument used to carry out the action not specified in the sentence (e.g., The workman dug a hole in the ground). Implicit-sentence-only subjects received only the sentences, whereas the implicit sentence + partial picture subjects also viewed a partial picture depicting the action in the sentence minus the implied instrument. The main hypothesis was that subsequent recall of the sentences given the implied instrument as a cue would be facilitated by the partial pictures provided at study, since they would lead the children to infer the instrument. That occurred with 6- to 7-year-old children, but not with preschool children. Consistent with the conclusion that the partial pictures prompted 6- to 7-year-olds to infer the instruments, implicit sentence + partial picture subjects recalled as much as subjects in two other conditions, one in which subjects were explicitly told the instruments at study and one in which subjects saw the instruments depicted in pictures at study. In contrast, preschool subjects who heard explicit sentences containing the instruments outperformed subjects who heard implicit sentences even when the implicit sentences were accompanied by pictures depicting the instruments. This failure of complete pictures to facilitate preschoolers' recall of information implied in sentences contrasts with the many demonstrations of prose learning facilitation when picture and sentence contents explicitly and completely overlap. In summary, there were developmental differences in whether (a) partial pictures significantly facilitated inferencing (and subsequent cued recall) and (b) complete pictures containing information not explicitly stated in sentences promoted cued recall of the sentences.

13.
Background: Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort spent. Aims: The aim of this study was to test the generalisability of the modality and cueing effect in a classroom setting. Sample: The participants were 111 second-year students from the Department of Education at the University of Gent in Belgium (age between 19 and 25 years). Method: The participants studied a web-based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self-report measures of mental effort were administered. Results: Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Conclusions: Only a weak cueing effect and even a reverse modality effect have been found, indicating that both effects do not easily generalise to non-laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner-paced, as opposed to the system-paced instructions used in earlier research.

14.
Integrating pictorial information across eye movements
Six experiments are reported dealing with the types of information integrated across eye movements in picture perception. A line drawing of an object was presented in peripheral vision, and subjects made an eye movement to it. During the saccade, the initially presented picture was replaced by another picture that the subject was instructed to name as quickly as possible. The relation between the stimulus on the first fixation and the stimulus on the second fixation was varied. Across the six experiments, there was about 100–130 ms facilitation when the pictures were identical compared with a control condition in which only the target location was specified on the first fixation. This finding clearly implies that information about the first picture facilitated naming the second picture. Changing the size of the picture from one fixation to the next had little effect on naming time. This result is consistent with work on reading and low-level visual processes in indicating that pictorial information is not integrated in a point-by-point manner in an integrated visual buffer. Moreover, only about 50 ms of the facilitation for identical pictures could be attributed to the pictures having the same name. When the pictures represented the same concept (e.g., two different pictures of a horse), there was a 90-ms facilitation effect that could have been the result of either the visual or conceptual similarity of the pictures. However, when the pictures had different names, only visual similarity produced facilitation. Moreover, when the pictures had different names, there appeared to be inhibition from the competing names. The results of all six experiments are consistent with a model in which the activation of both the visual features and the name of the picture seen on the first fixation survive the saccade and combine with the information extracted on the second fixation to produce identification and naming of the second picture.

15.
Background: The Material Appropriate Processing (MAP) framework suggests that the influence of a text adjunct on the learning and transfer of textual information will be moderated by the overlap between the type of processing induced by the adjunct and by the organisation of the text. The two types of processing are item-specific processing and relational processing. Although complementary types of processing have been found to produce superior concept learning effects in previous research, there is some question as to the effects of complementary but unbalanced processing. Aims: This study examined the effects of different combinations of two types of text adjuncts (i.e., elaborative activities) and two types of text on learning concepts from text. The four combined treatments differed as to the degree to which they were balanced and/or complementary. Sample: Participants were 80 undergraduate students who were enrolled in a Year 3 education paper. Methods: Students studied a passage that included adjuncts which asked them either to: (a) create personal examples of target concepts, or (b) contrast the target concepts. In addition, two versions of text were paired with these adjuncts: specific-only text and specific/relational text. Subjects took a criterion test that consisted of cued recall of definitions, free recall of text, classification of novel examples, and problem solving. Results: Best performance occurred in the condition that included balanced and complementary processing of text/adjunct information, and worst performance occurred in the condition that included non-complementary processing. Conclusion: Although these results are consistent with a MAP perspective, they are equivocal about the potential interfering effects of complementary but unbalanced processing on the learning of concepts from text.

16.
The effects of differences in study processing on free recall of picture names and on generalization in picture identification were investigated. Experience with degraded pictures produced poorer subsequent free recall of picture names than did naming intact pictures. For the test of picture identification, pictures that were identical to a studied picture, pictures that shared a name with a studied picture (same name), and new test pictures were presented, and the amount of clarification required to identify a picture was measured. Experience with degraded pictures produced better subsequent identification of identical test pictures but poorer later identification of same-name test pictures than did naming intact pictures. The importance of these episodic effects for theories of concept learning and theories of memory is discussed. It is argued that distinctions between memory systems (e.g., episodic-semantic) must be couched in terms of a theory of concept learning and that the data are inconsistent with a simple distinction.

17.
Visuo-manual interaction in visual short-term memory (VSTM) has received little investigation, despite its importance in everyday tasks requiring the coordination of visual perception and manual action. This study examines the influence of a manual action performed during stimulus learning on a subsequent VSTM test for object appearance. The memory display comprised a sequence of briefly presented 1/f noise discs (i.e., possessing spectral properties akin to natural images), wherein each new stimulus was presented at a unique screen location. Participants either did (or did not) perform a concurrent manual action (spatial tapping) task requiring that a hand-held stylus be moved to a position on a touch tablet that corresponded (or did not correspond) to the screen position of each new stimulus as it appeared. At test, a single stimulus was presented, either at one of the original screen positions or at a new position. Two factors were examined: the execution (or otherwise) of spatial tapping at a corresponding or non-corresponding position, and the presentation of test stimuli either at their original spatial positions or at new positions. We find that spatial tapping at corresponding positions elevates VSTM performance by more than 15%, but this occurs only when stimulus positions are matched from memory to test display. Our findings suggest that multimodal attentional focus during stimulus encoding (incorporating visual, spatial, and manual components) leads to stronger, more robust memory representations. We posit several possible explanations for this effect.

18.
Presentation of irrelevant additional information hampers learning. However, using a word-learning task, recent research demonstrated that an initial negative effect of mismatching pictures on learning no longer occurred once learners gained task experience. It is unclear, however, whether learners consciously suppressed attention to the content of the mismatching pictures. Therefore, we examined the effects of a picture location change towards the end of the learning phase: for half of the participants, the picture location was changed after they gained task experience. If participants only ignore the location of mismatching pictures, word learning in the mismatched condition should be hampered after the location change. Changing the location of the mismatching pictures did not affect recall in the mismatched condition, but, surprisingly, the location change did hamper learning in the matched condition. In sum, it seems that participants learned to ignore the content, and not just the location, of the irrelevant information.

19.
Based on cognitive load theory, two experiments investigated the conditions under which audiovisual-based instruction may be an effective or an ineffective instructional technique. Results from Experiment 1 indicated that visual-with-audio presentations were superior to equivalent visual-only presentations. In this experiment, neither the auditory nor the visual material could be understood in isolation. Both sources of information were interrelated and were essential to render the material intelligible. In contrast, Experiment 2 demonstrated that a non-essential explanatory text, presented aurally alongside similar written text contained in a diagram, hindered learning. This result was obtained because, compared with a diagram-only format, the aural material was unnecessary and therefore created a redundancy effect. Differences between groups were stronger when information was high in complexity. It was concluded that the effectiveness of multimedia instruction depends very much on how and when auditory information is used. Copyright © 2003 John Wiley & Sons, Ltd.

20.
The dual-task paradigm was used to show how visuospatial working memory and the phonological loop are involved in processing scientific texts and illustrations presented via computer. In Experiment 1, two presentation formats were compared: text-only and text-with-illustrations. With a concurrent tapping task, the beneficial effect of illustrations disappeared, while a concurrent articulatory task impaired performance similarly in both presentation formats. An analysis of individual differences revealed that this pattern of results was present in high, but not low, spatial span subjects. These results support the selective involvement of visuospatial working memory in processing illustrated texts. In Experiment 2, the text-only presentation format was compared to an illustrations-only format. The concurrent articulatory task selectively impaired text-only processing, compared with processing illustrations only. In addition, this pattern of results was found for high, but not low, digit span subjects. These results suggest that individual differences define the extent to which the two subsystems of working memory are involved in learning from multimedia. These two subsystems would be mainly involved in the maintenance of a visual trace of illustrations and of a verbatim representation of linguistic information, respectively, these representations being the basis for higher-level comprehension processes. Copyright © 2002 John Wiley & Sons, Ltd.
