Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Three experiments tested the idea that auditory presentation facilitates temporal recall, whereas spatial recall is better when the input modality is visual. Lists of words were presented in which the temporal and spatial orders were independent, and instructions to the subjects determined whether recall would be given in a spatial or a temporal order. In all three experiments, a significant interaction between input modality and type of recall was found: visual presentation yielded superior recall in the spatial conditions, and auditory presentation yielded superior recall in the temporal conditions. The present results contradict an earlier study by Murdock, which showed that auditory presentation resulted in better performance than visual presentation in a nominally spatial task. An explanation for the discrepancies between the results of that study and the present one is offered.

2.
During presentation of auditory and visual lists of words, different groups of subjects generated words that either rhymed with the presented words or that were associates. Immediately after list presentation, subjects recalled either the presented or the generated words. After presentation and test of all lists, a final free recall test and a recognition test were given. Visual presentation generally produced higher recall and recognition than did auditory presentation for both encoding conditions. The results are not consistent with explanations of modality effects in terms of echoic memory or greater temporal distinctiveness of auditory items. The results are more in line with the separate-streams hypothesis, which argues for different kinds of input processing for auditory and visual items.

3.
In two experiments, presentation modality of a list of items and encoding task were varied, and subjects judged the frequency with which certain words had been presented in the list. In Experiment 1, auditory presentation led to higher judgements of frequency than did visual presentation when subjects counted the consonants in the words but not when they rated imageability or when they kept a running count of the number of presentations of each word. In Experiment 2, encoding questions about the rhyme or spelling patterns of target words produced opposite effects for auditory and visual items. The results are interpreted as indicating that cross-modal translation during encoding produces a bias towards higher-frequency judgements and may also produce better frequency discrimination.

4.
Schedules of presentation and temporal distinctiveness in human memory   Cited: 1 (self-citations: 0, by others: 1)
Recency, in remembering a series of events, reflects the simple fact that memory is vivid for what has just happened but deteriorates over time. Theories based on distinctiveness, an alternative to the multistore model, assert that the last few events in a series are well remembered because their times of occurrence are more highly distinctive than those of earlier items. Three experiments examined the role of temporal and ordinal factors in auditorily and visually presented lists that were temporally organized by distractor materials interpolated between memory items. With uniform distractor periods, the results were consistent with Glenberg's (1987) temporal distinctiveness theory. When the procedure was altered so that distractor periods became progressively shorter from the beginning to the end of the list, the results were consistent for only the visual modality; the auditory modality produced a different and unpredicted (by the theory) pattern of results, thus falsifying the claim that the auditory modality derives more benefit from temporal information than the visual modality. We distinguish serial order information from specifically temporal information, arguing that the former may be enhanced by auditory presentation but that the two modalities are more nearly equal with respect to the latter.

5.

Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.

6.
The temporal coding assumption is that time of presentation is coded more accurately for auditory events than for visual events. This assumption has been used to explain the modality effect, in which recall of recent auditory events is superior to recall of recent visual events. We tested the temporal coding assumption by examining the coding and reproduction of quintessentially temporal stimuli: rhythms. The rhythms were produced by sequences of short and long auditory stimuli or short and long visual stimuli; in either case, the task was to reproduce the temporal sequence. The results from four experiments demonstrated reproduction of auditory rhythms superior to that of visual rhythms. We conclude that speech-based explanations of modality effects cannot accommodate these findings, whereas the findings are consistent with explanations based on the temporal coding assumption.

7.
Patients with unilateral (left or right) medial temporal lobe lesions and normal control (NC) volunteers participated in two experiments, both using a duration bisection procedure. Experiment 1 assessed discrimination of auditory and visual signal durations ranging from 2 to 8 s in the same test session. Patients and NC participants judged auditory signals as longer than equivalent-duration visual signals. The difference between auditory and visual time discrimination was equivalent for the three groups, suggesting that a unilateral temporal lobe resection does not modulate the modality effect. To document interval-timing abilities after temporal lobe resection for different duration ranges, Experiment 2 investigated the discrimination of brief (50-200 ms) auditory durations in the same patients. Overall, patients with right temporal lobe resection were found to have more variable duration judgments across both signal modality and duration range. These findings suggest the involvement of the right temporal lobe at the level of the decision process in temporal discriminations.

8.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

9.
Dissociations between a motor response and the subject's verbal report have been reported from various experiments that investigated special experimental effects (e.g., metacontrast or induced motion). To examine whether similar dissociations can also be observed under standard experimental conditions, we compared reaction times (RT) and temporal order judgments (TOJ) to visual and auditory stimuli of three intensity levels. Data were collected from six subjects, each of whom served for nine sessions. The results showed a strong, highly significant modality dissociation: While RTs to auditory stimuli were shorter than RTs to visual stimuli, the TOJ data indicated longer processing times for auditory than for visual stimuli. This pattern was found over the whole range of intensities investigated. Light intensity had similar effects on RT and TOJ, while there was a marginally significant tendency of tone intensity to affect RT more strongly than TOJ. It is concluded that modality dissociation is an example of "direct parameter specification", where the pathway from stimulus to response in the simple RT experiment is (at least partially) separate from the pathway that leads to a conscious, reportable representation. Two variants of this notion and alternatives to it are discussed.

10.
Previous research (Garber & Pisoni, 1991; Pisoni & Garber, 1990) has demonstrated that subjective familiarity judgments for words are not differentially affected by the modality (visual or auditory) in which the words are presented, suggesting that participants base their judgments on fairly abstract, modality-independent representations in memory. However, in a recent large-scale study in Japanese (Amano, Kondo, & Kakehi, 1995), marked modality effects on familiarity ratings were observed. The present research further examined possible modality differences in subjective ratings and their implications for word recognition. Specially selected words were presented to participants for frequency judgments. In particular, participants were asked how frequently they read, wrote, heard, or said a given spoken or printed word. These ratings were then regressed against processing times in auditory and visual lexical decision and naming tasks. Our results suggest modality dependence for some lexical representations.

11.
In 3 experiments, the authors simulated air traffic controllers giving pilots navigation instructions of various lengths. Participants either heard or read the instructions; repeated either all, a reduced form, or none of the instructions; and then followed them by clicking on the specified locations in a space represented by grids on a computer screen. Execution performance for visual presentation was worse than it was for auditory presentation on the longer messages. Repetition of the instructions generally lowered execution performance for longer messages, which required more output, especially with the visual modality, which required phonological recoding from visual input to spoken output. An advantage for reduced over full repetition for visual but not for auditory presentation was attributed to an enhanced visual scanning process.

12.
Two experiments investigated the nature of the code in which lip-read speech is processed. In Experiment 1 subjects repeated words, presented with lip-read and masked auditory components out of synchrony by 600 ms. In one condition the lip-read input preceded the auditory input, and in the second condition the auditory input preceded the lip-read input. Direction of the modality lead did not affect the accuracy of report. Unlike auditory/graphic letter matching (Wood, 1974), the processing code used to match lip-read and auditory stimuli is insensitive to the temporal ordering of the input modalities. In Experiment 2, subjects were presented with two types of lists of colour names: in one list some words were heard, and some read; the other list consisted of heard and lip-read words. When asked to recall words from only one type of input presentation, subjects confused lip-read and heard words more frequently than they confused heard and read words. The results indicate that lip-read and heard speech share a common, non-modality specific, processing stage that excludes graphically presented phonological information.

13.
Prior research has established that performance in short-term memory tasks using auditory rhythmic stimuli is frequently superior to that in tasks using visual stimuli. In five experiments, the reasons for this were explored further. In a same-different task, pairs of brief rhythms were presented in which each rhythm was visual or auditory, resulting in two same-modality conditions and two cross-modality conditions. Three different rates of presentation were used. The results supported the temporal advantage of the auditory modality in short-term memory, which was quite robust at the quickest presentation rates. This advantage tended to decay as the presentation rate was slowed down, consistent with the view that, with time, the temporal patterns were being recoded into a more generic form.

14.
The context effect in implicit memory is the finding that presentation of words in meaningful context reduces or eliminates repetition priming compared to words presented in isolation. Virtually all of the research on the context effect has been conducted in the visual modality but preliminary results raise the question of whether context effects are less likely in auditory priming. Context effects in the auditory modality were systematically examined in five experiments using the auditory implicit tests of word-fragment and word-stem completion. The first three experiments revealed the classical context effect in auditory priming: Words heard in isolation produced substantial priming, whereas there was little priming for the words heard in meaningful passages. Experiments 4 and 5 revealed that a meaningful context is not required for the context effect to be obtained: Words presented in an unrelated audio stream produced less priming than words presented individually and no more priming than words presented in meaningful passages. Although context effects are often explained in terms of the transfer-appropriate processing (TAP) framework, the present results are better explained by Masson and MacLeod's (2000) reduced-individuation hypothesis.

15.
Accuracy of temporal coding: Auditory-visual comparisons   Cited: 1 (self-citations: 0, by others: 1)
Three experiments were designed to decide whether temporal information is coded more accurately for intervals defined by auditory events or for those defined by visual events. In the first experiment, the irregular-list technique was used, in which a short list of items was presented, the items all separated by different interstimulus intervals. Following presentation, the subject was given three items from the list, in their correct serial order, and was asked to judge the relative interstimulus intervals. Performance was indistinguishable whether the items were presented auditorily or visually. In the second experiment, two unfilled intervals were defined by three nonverbal signals in either the auditory or the visual modality. After delays of 0, 9, or 18 sec (the latter two filled with distractor activity), the subjects were directed to make a verbal estimate of the length of one of the two intervals, which ranged from 1 to 4 sec and from 10 to 13 sec. Again, performance was not dependent on the modality of the time markers. The results of Experiment 3, which was procedurally similar to Experiment 2 but with filled rather than empty intervals, showed significant modality differences in one measure only. Within the range of intervals employed in the present study, our results provide, at best, only modest support for theories that predict more accurate temporal coding in memory for auditory, rather than visual, stimulus presentation.

16.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because solely auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

17.
Learning and retrieval rate of words presented auditorily and visually   Cited: 11 (self-citations: 0, by others: 11)
Mode of presentation (visual or auditory) of a multitrial free recall test is stressed as an important factor in improving the diagnosis of certain neurological patients. For further use in neuropsychological research, an experiment was carried out using normal subjects, in which the effects of presentation mode and order of modality were investigated. There were no differential effects of these variables on several parameters, such as the number of words recalled and the learning curve. The time needed for the responses in immediate recall was the same in both auditory and visual conditions. In delayed recall, however, the interresponse times were significantly shorter when words had been presented auditorily than when presented visually. The results are discussed in light of further application in the field of neuropsychology.

18.
Observers judged whether a periodically moving visual display (point-light walker) had the same temporal frequency as a series of auditory beeps that in some cases coincided with the apparent footsteps of the walker. Performance in this multisensory judgment was consistently better for upright point-light walkers than for inverted point-light walkers or scrambled control stimuli, even though the temporal information was the same in the three types of stimuli. The advantage with upright walkers disappeared when the visual "footsteps" were not phase-locked with the auditory events (and instead offset by 50% of the gait cycle). This finding indicates there was some specificity to the naturally experienced multisensory relation, and that temporal perception was not simply better for upright walkers per se. These experiments indicate that the gestalt of visual stimuli can substantially affect multisensory judgments, even in the context of a temporal task (for which audition is often considered dominant). This effect appears to be constrained by the ecological validity of the particular pairings.

19.
The purpose of the current study was to examine the performance of children with and without ADHD in time reproduction tasks involving varying durations and modalities. Twenty children with ADHD and 20 healthy controls completed time reproduction tasks in three modalities (auditory, visual, and a unique combined auditory/visual condition) and six durations (1 second, 4 seconds, 12 seconds, 24 seconds, 48 seconds, and 60 seconds). Consistent with our predictions, we found main effects of group (participants with ADHD were significantly less accurate than those without ADHD), duration (accuracy decreased as temporal duration increased), and modality (responses in the combined condition were more accurate than those in the auditory condition, which in turn were more accurate than those in the visual condition). Furthermore, predicted interactions between group and duration (the discrepancy in performance between the two groups grew as temporal duration increased), and group and modality (the modality effect was greater for participants with ADHD) were supported. A marginal, nonsignificant interaction between group, modality, and duration was also found. These findings are discussed in relation to current theory on the nature of cognitive deficits evident in individuals with ADHD, and methodological limitations are noted.

20.
Effects of presentation modality and response format were investigated using visual and auditory versions of the word stem completion task. Study presentation conditions (visual, auditory, non-studied) were manipulated within participants, while test conditions (visual/written, visual/spoken, auditory/written, auditory/spoken, recall-only) were manipulated between participants. Results showed evidence for same modality and cross modality priming on all four word stem completion tasks. Words from the visual study list led to comparable levels of priming across all test conditions. In contrast, words from the auditory study list led to relatively low levels of priming in the visual/written test condition and high levels of priming in the auditory/spoken test condition. Response format was found to influence priming performance following auditory study in particular. The findings confirm and extend previous research and suggest that, for implicit memory studies that require auditory presentation, it may be especially beneficial to use spoken rather than written responses.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号