Similar Articles
20 similar articles found.
1.
Two experiments demonstrated that Ss are capable of making within-modality memory discriminations in both visual and auditory modalities. In Experiment I Ss studied mixed lists of pictures and labels representing common objects and were subsequently required to judge whether the original presentation was pictorial or verbal. The high level of performance achieved on this task was unaffected by degree of categorical relatedness of items within method of presentation or by instructions to produce visual images when items were presented verbally. In Experiment II Ss demonstrated the ability to remember whether a sentence was originally presented by a male or a female speaker. Some strategies by which within-modality discrimination in memory might be accomplished are discussed.

2.
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments examined the nature of auditory and visual memory. Experiments 1–3 evaluated the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

3.
In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.

4.
Ss in the dichotic listening task prefer, and are more accurate with, the channel-by-channel order of recall: recalling all information presented to one ear followed by the information presented to the other ear. Current explanations for these effects rely upon the fact that the two messages are presented over separate input channels. The present study tested this hypothesis directly by presenting two messages simultaneously to a single auditory channel. Groups of Ss were instructed to recall two pairs of digits presented simultaneously over a single channel (1) in any order (free recall), (2) in the order received (simultaneous order), or (3) in sequential groups (successive order). Ss in the free recall task preferred recalling items in successive order. However, Ss instructed to recall in simultaneous order were as accurate as those instructed to recall in successive order. These data imply that accuracy with, but not preference for, the successive order of recall depends upon whether input is to one or two channels.

5.
The present research was designed to investigate the proposition that repressors, operationally defined by the conjunction of low anxiety and high defensiveness, are particularly adept at avoiding the processing of information when motivated to do so. Four groups of participants (nondefensive-low anxious, high anxious, repressors, and defensive-high anxious) were administered a dichotic listening task involving neutral or negative affective words presented in the unattended ear. Participants shadowed the material presented to the attended ear and simultaneously responded to a probe task presented on a video monitor. Results revealed that repressors made significantly fewer shadowing errors than high anxious and defensive-high anxious participants and marginally significantly fewer shadowing errors than low anxious participants for both neutral and negative words. High anxious participants, however, were later able to recognize the negative words that had been presented to the unattended ear at well above chance levels, whereas the recognition memory of repressors for such negative unattended words was at chance levels. In addition, repressors' responses to a postexperiment questionnaire indicated a significantly greater number of distracting thoughts during the experiment relative to other participants. Repressors, it seems, are indeed skillful at avoidant information processing, and this capacity may well be related to the emotional memory deficits they have displayed in previous research.

6.
In 2 experiments, we evaluated the ability of amnesic patients to exhibit long-lasting perceptual priming after a single exposure to pictures. Ss named pictures as quickly as possible on a single occasion, and later named the same pictures mixed with new pictures. In Experiment 1, amnesic patients exhibited fully intact priming effects lasting at least 7 days. In Experiment 2, the priming effect for both groups was shown to depend on both highly specific visual information and on less visual, more conceptual information. In contrast, recognition memory was severely impaired in the patients, as assessed by both accuracy and response time. The results provide the first report of a long-lasting priming effect in amnesic patients, based on a single encounter, which occurs as strongly in the patients as in normal Ss. Together with other recent findings, the results suggest that long-lasting priming and recognition memory depend on separate brain systems.

7.
The irrelevant speech effect (ISE) is defined as a decrement in visually presented digit-list short-term memory performance due to exposure to irrelevant auditory material. Perhaps the most successful theoretical explanation of the effect is the changing state hypothesis. This hypothesis explains the effect in terms of confusion between amodal serial order cues, and represents a view based on the interference caused by the processing of similar order information of the visual and auditory materials. An alternative view suggests that the interference occurs as a consequence of the similarity between the visual and auditory contents of the stimuli. An important argument for the former view is the observation that the ISE is almost exclusively observed in tasks that require memory for serial order. However, most short-term memory tasks require that both item and order information be retained in memory. An ideal task to investigate the sensitivity of maintenance of serial order to irrelevant speech would be one that calls upon order information but not item information. One task that is particularly suited to address this issue is serial recognition. In a typical serial recognition task, a list of items is presented and then probed by the same list in which the order of two adjacent items has been transposed. Due to the re-presentation of the encoding string, serial recognition requires primarily the serial order to be maintained while the content of the presented items is deemphasized. In demonstrating a highly significant ISE of changing versus steady-state auditory items in a serial recognition task, the present finding lends support for and extends previous empirical findings suggesting that irrelevant speech has the potential to interfere with the coding of the order of the items to be memorized.

8.
Four experiments tested the hypothesis that objects toward which individuals hold attitudes that are highly accessible from memory (i.e., attitude-evoking objects) are more likely to attract attention when presented in a visual display than objects involving less accessible attitudes. In Experiments 1 and 2, Ss were more likely to notice and report such attitude-evoking objects. Experiment 3 yielded evidence of incidental attention; Ss noticed attitude-evoking objects even when the task made it beneficial to ignore the objects. Experiment 4 demonstrated that inclusion of attitude-evoking objects as distractor items interfered with Ss' performance of a visual search task. Apparently, attitude-evoking stimuli attract attention automatically. Thus, accessible attitudes provide the functional benefit of orienting an individual's visual attention toward objects with potential hedonic consequences.

9.
Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes--tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed.

10.
Many animals have been tested for conceptual discriminations using two-dimensional images as stimuli, and many of these species appear to transfer knowledge from 2D images to analogous real-life objects. We tested an American black bear for picture-object recognition using a two-alternative forced choice task. She was presented with four unique sets of objects and corresponding pictures. The bear showed generalization from both objects to pictures and pictures to objects; however, her transfer was superior when transferring from real objects to pictures, suggesting that bears can recognize visual features from real objects within photographic images during discriminations.

11.
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

12.
Summary: Word lists of fifteen items were presented to eye or to ear, with recall either immediately, or after a visual task, or after an auditory one. Instructions were to recall the last items first. An intervening task using the same modality greatly reduced recall of the last items presented, whereas a visual task did not do so for acoustically presented items. An auditory task reduced visual memory. These results suggest a specific auditory memory for recent events, overwritten by subsequent auditory events. This research was supported by the Medical Research Council. The experimental work was performed at the Applied Psychology Unit, Cambridge.

13.
Trigrams were presented visually or auditorily and followed by a 12 s retention interval filled with shadowing numbers or letters. Auditory memory letters followed by letter shadowing were recalled less well than auditory memory letters followed by number shadowing or visual memory letters followed by either type of shadowing. The latter three conditions did not differ among themselves. An analysis of the recall intrusions suggested that forgetting of auditory memory letters followed by letter shadowing was caused mainly by a confusion between covert rehearsals and shadowing activity, while forgetting in the other three conditions was caused primarily by proactive interference from earlier memory trials.

14.
Auditory text presentation improves learning with pictures and texts. With sequential text-picture presentation, cognitive models of multimedia learning explain this modality effect in terms of greater visuo-spatial working memory load with visual as compared to auditory texts. Visual texts are assumed to demand the same working memory subsystem as pictures, while auditory texts make use of an additional cognitive resource. We provide two alternative assumptions that relate to more basic processes: First, acoustic-sensory information causes a retention advantage for auditory over visual texts which occurs no matter if a picture is presented or not. Second, eye movements during reading hamper visuo-spatial rehearsal. Two experiments applying elementary procedures provide first evidence for these assumptions. Experiment 1 demonstrates that, regarding text recall, the auditory advantage is independent of visuo-spatial working memory load. Experiment 2 reveals worse matrix recognition performance after reading text requiring eye movements than after listening or reading without eye movements. Copyright © 2008 John Wiley & Sons, Ltd.

15.
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because solely auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that, for texts longer than one sentence, the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.

16.
Introduction: Previous studies using semantically related words revealed more accurate memory when the items were encoded visually rather than auditorily and when mental images were created during encoding. However, how the level of memory distortion is affected by the creation of different mental imagery formats or by techniques that should suppress generation of mental images has rarely been investigated. Objective: The aim of the present studies was to investigate the ways in which the encoding strategy affects the accuracy of memory reports for two presentation formats of semantically related words: verbal and pictorial. Method: In experiment 1, the participants were asked to memorize either pictures or their verbal equivalents (words) from the same category, using one of two encoding strategies: uttering the words or counting backwards. In experiment 2, pictorially or auditorily presented material was encoded together with the creation of either visual or auditory mental images of the items. The results of the experimental groups were compared to control groups that received no specific instruction. Results: Higher levels of false recognition, together with lower rates of correct recognition, were observed for words, presented either visually or auditorily, relative to pictures. Moreover, self-generation of additional code during the processing of information favored the reduction of false recognitions. Conclusion: Encoding strategies that engaged dual coding reduced false recognition. The results are discussed within the distinctiveness heuristic phenomenon.

17.
Dichotic listening (DL) and visual half-field (VHF) testing were used to study hemisphere asymmetry in a developmental perspective. Five-, 8-, and 11-year-old children were presented lists of fused words using a DL technique in Experiment 1, and 8- and 11-year-old children were presented pictures of common objects using a VHF technique in Experiment 2. In both experiments, measures of identification, free recall, and recognition of the words/pictures were employed. The results revealed effects of ear input (right-ear advantage) and half-field presentation (right visual half-field advantage) for all age groups, although the magnitude of this lateralization effect differed between the three memory measures. The results are discussed in relation to developmental aspects of language laterality, and in relation to the clinical utility of non-invasive lateralization techniques.

18.
19.
Two experiments investigated the effect of eye-closure on visual and auditory memory under conditions based on the retrieval of item-specific information. Experiment 1 investigated visual recognition memory for studied, perceptually similar and unrelated items. It was found that intermittent eye-closure increased memory for studied items and decreased memory for related items. This finding was reflected by enhanced item-specific and reduced gist memory. Experiment 2 used the Deese-Roediger-McDermott (DRM) paradigm to assess auditory recognition memory for studied, related and unrelated words that had (vs. had not) been accompanied by pictures during encoding. Pictures but not eye-closure produced a picture superiority effect by enhancing memory for studied items. False memory was reduced by pictures but not eye-closure. Methodological and theoretical considerations are discussed in relation to existing explanations of eye-closure and retrieval strategies.

20.
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4, they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.
