Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
In two related experiments on recognition, one on touch and one on audition, accuracy rates were obtained from congenitally blind subjects and compared with those for normally sighted subjects. In Exp. 1, 5 blind subjects inspected, i.e., handled, 150 common objects and were tested after a delay of 7 days. In Exp. 2, 9 blind subjects listened to 194 naturalistic sounds and were also tested after a 7-day delay. Accuracy of tactile recognition for the blind was 89.4%, while it was 87.9% for the normally sighted. Sound recognition by blind subjects was 76.6%, and for the normally sighted it was 78.4%. Neither difference was statistically significant.
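The abstract reports only group percentages and a null result. Purely as an illustration of how such a comparison might be tested, the sketch below runs a pooled two-proportion z-test on the reported rates. The trial counts follow from the abstract for the blind groups (5 subjects x 150 objects; 9 subjects x 194 sounds) but are simply assumed equal for the sighted groups, and the test ignores the clustering of trials within subjects that a proper analysis would model.

    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z(p1, n1, p2, n2):
        """Pooled two-proportion z-test; returns z and a two-tailed p-value."""
        x1, x2 = p1 * n1, p2 * n2              # approximate hit counts
        p_pool = (x1 + x2) / (n1 + n2)         # pooled proportion under H0
        se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, 2 * norm.sf(abs(z))

    # Touch: 5 blind subjects x 150 objects = 750 trials; sighted n assumed equal.
    print(two_proportion_z(0.894, 750, 0.879, 750))    # z ~ 0.9, p ~ .36
    # Audition: 9 blind subjects x 194 sounds = 1746 trials; sighted n assumed equal.
    print(two_proportion_z(0.766, 1746, 0.784, 1746))  # z ~ -1.3, p ~ .20

Under these assumed sample sizes, neither difference approaches significance, consistent with the abstract's conclusion.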

2.
3.
Long-term memory of haptic, visual, and cross-modality information was investigated. In Experiment 1, subjects briefly explored 40 commonplace objects visually or haptically and then received a recognition test with categorically similar foils in the same or the alternative modality both immediately and after 1 week. Recognition was best for visual input and test, with haptic memory still apparent after a week's delay. Recognition was poorest in the cross-modality conditions, with performance on the haptic-visual and visual-haptic cross-modal conditions being nearly identical. Visual and haptic information decayed at similar rates across a week's delay. In Experiment 2, subjects simultaneously viewed and handled the same objects, and transfer was tested in a successive cue-modality paradigm. Performance with the visual modality again exceeded that with the haptic modality. Furthermore, initial errors on the haptic test were often corrected when followed by the visual presentation, both immediately and after 1 week. However, visual test errors were corrected by haptic cuing on the immediate test only. These results are discussed in terms of shared information between the haptic and visual modalities, and the ease of transfer between these modalities immediately and after a substantial delay.

4.
A rhesus monkey was tested in an auditory list memory task with blocked and mixed retention delays. Each list of four natural or environmental sounds (from a center speaker) was followed by a retention delay (0, 1, 2, 10, 20, or 30 sec) and then by a recognition test (from two side speakers). The monkey had been tested for 12 years in tasks with blocked delays. An earlier (4 years prior) blocked-delay test was repeated, with virtually identical results. The results from the mixed-delay test were likewise similar. Thus, the peculiarities of blocked-delay testing, such as delay predictability or differences in list spacing, apparently do not alter this monkey's memory for auditory lists. It is concluded from this and other evidence that the monkey's serial position functions reflect mnemonic processes that change with changes in retention delay and are not artifacts of the blocked-delay procedure. The nature of the monkey's auditory memory is discussed.

5.
When Ss attend to one auditory message, they have no permanent memory for a second auditory message received simultaneously. Generally, it has been argued that a similar effect would occur crossmodally. This hypothesis was tested in the present experiment for messages presented to visual and auditory modalities. All Ss were tested for recognition of information presented either while shadowing or while hearing but not shadowing a passage of prose presented to one ear. One group heard a list of concrete nouns in their other ear. Three other groups received (1) printed words, (2) pictures of objects easily labeled, or (3) pictures of objects difficult to label. The shadowing task produced a decrement in recognition scores for the first three groups but not for the group receiving pictures of objects difficult to label. Further, the shadowing task interfered more with information received auditorily than with any form of visual information. These results suggest that information received visually is stored in a long-term modality-specific memory that may operate independently of the auditory modality.

6.
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures, and written words, respectively. After memorisation, participants were asked to distinguish old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented at encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3, and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

7.
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

8.
9.
The second of two targets is often missed when presented shortly after the first target, a phenomenon referred to as the attentional blink (AB). Whereas the AB is a robust phenomenon within sensory modalities, the evidence for cross-modal ABs is rather mixed. Here, we test the possibility that the absence of an auditory-visual AB for visual letter recognition when streams of tones are used is due to the efficient use of echoic memory, allowing for the postponement of auditory processing. However, forcing participants to immediately process the auditory target, either by presenting interfering sounds during retrieval or by making the first target directly relevant for a speeded response to the second target, did not result in a return of a cross-modal AB. The findings argue against echoic memory as an explanation for efficient cross-modal processing. Instead, we hypothesized that a cross-modal AB may be observed when the different modalities use common representations, such as semantic representations. In support of this, a deficit for visual letter recognition returned when the auditory task required a distinction between spoken digits and letters.

10.
Rhesus monkeys were tested in serial probe recognition tasks with either travel slide pictures or natural sounds. Tests with four-item lists produced serial position functions that were essentially opposite in shape for the two modalities and changed in opposite ways with retention interval. For visual memory, the primacy effect grew and the recency effect dissipated with retention interval. Capuchin monkeys, humans, and pigeons showed similar results. For auditory memory with rhesus monkeys, the recency effect grew and the primacy effect dissipated with retention interval. Taken together, these results, along with results from rehearsal tests of monkeys and humans, implicate two passive memory processes with different time courses. Interference among items within auditory lists was manipulated by varying the time between items and categories of items. Interference across lists was manipulated by varying the item pool size and, hence, item repetitions. Changes in the auditory serial position functions indicated that proactive and retroactive interference may have been instrumental in these dynamically changing serial position functions. Implications for theories and models of memory are discussed.
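For readers unfamiliar with the measure: a serial position function is simply accuracy plotted against an item's position in the study list, computed separately for each retention interval. A minimal sketch on hypothetical trial records (all values here are invented for illustration, not data from the study):

    from collections import defaultdict

    def serial_position_functions(trials):
        """trials: iterable of (delay_sec, position, correct) tuples,
        e.g. position 1-4 for four-item lists, correct in {0, 1}.
        Returns {delay: {position: accuracy}}."""
        tally = defaultdict(lambda: defaultdict(list))
        for delay, pos, correct in trials:
            tally[delay][pos].append(correct)
        return {delay: {pos: sum(v) / len(v) for pos, v in by_pos.items()}
                for delay, by_pos in tally.items()}

    # Hypothetical records: a primacy effect at a long delay shows up as
    # higher accuracy at position 1 than at position 4 (the visual pattern
    # described in the abstract); the auditory pattern would be reversed.
    trials = [(0, 1, 1), (0, 4, 1), (30, 1, 1), (30, 4, 0), (30, 1, 1), (30, 4, 1)]
    print(serial_position_functions(trials))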

11.
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1–3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

12.
13.
Two tests of auditory recognition memory were given to four patients with bilateral hippocampal damage (H+) and three patients with large medial temporal lobe lesions and additional variable damage to lateral temporal cortex (MTL+). When single stimuli were presented, performance was normal across delays as long as 30 sec, presumably because information could be maintained in working memory through rehearsal. When lists of 10 stimuli were presented, performance was impaired after a 5-min delay. Patients with MTL+ lesions performed marginally worse than patients with H+ lesions, consistent with findings for recognition memory in other modalities. The findings show that auditory recognition, like recognition memory in other sensory modalities, is dependent on the medial temporal lobe.

14.
Auditory and visual spatial working memory (cited 4 times: 0 self-citations, 4 by others)
A series of experiments compared short-term memory for object locations in the auditory and visual modalities. The stimulus materials consisted of sounds and pictures presented at different locations in space. Items were presented in pure- or mixed-modality lists of increasing length. At test, participants responded to renewed presentation of the objects by indicating their original position. If two independent modality-specific and resource-limited short-term memories support the remembering of locations, memory performance should be higher in the mixed-modality than in the pure-modality condition. Yet, memory performance was the same for items in both types of list. In addition, responses to the memory load manipulation in both modalities showed very similar declines in performance. The results are interpreted in terms of object files binding object and location information in episodic working memory, independently of the input modality.

15.
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones—both containing two varying features—were presented simultaneously. In Experiment 2, two gratings and two tones—each containing only one varying feature—were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
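A shared-resource account like this is often summarized as a power-law rise of the discrimination threshold with memory load, with the same exponent regardless of which modality supplies the extra features. The sketch below fits that model to hypothetical thresholds; the numbers are illustrative, not values from the paper.

    import numpy as np

    # Hypothetical delayed-discrimination thresholds (arbitrary units)
    # at memory loads of 1-4 features; illustrative values only.
    loads = np.array([1, 2, 3, 4])
    thresholds = np.array([1.0, 1.7, 2.3, 2.9])

    # Fit threshold = t1 * load**k by least squares in log-log space.
    # Under a modality-independent resource pool, k should be the same
    # whether the added features are visual, auditory, or mixed.
    k, log_t1 = np.polyfit(np.log(loads), np.log(thresholds), 1)
    print(f"exponent k = {k:.2f}, single-feature threshold t1 = {np.exp(log_t1):.2f}")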

16.
Two short-term memory experiments examined the nature of the stimulus suffix effect on auditory linguistic and nonlinguistic stimulus lists. In Experiment 1, where subjects recalled eight-item digit lists, it was found that a silently articulated digit suffix had the same effect on recall for the last list item as a spoken digit suffix. In Experiment 2, subjects recalled lists of sounds made by inanimate objects either by listing the names of the objects or by ordering a set of drawings of the objects. Auditory suffixes, either another object sound or the spoken name of an object, produced a suffix effect under both recall conditions, but a visually presented picture also produced a suffix effect when subjects recalled using pictures. The results were most adequately explained by a levels-of-processing memory coding hypothesis.

17.
Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes (tonal, but not atonal, musical passages). In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests), revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed.

18.
The recognition of nonverbal emotional signals and the integration of multimodal emotional information are essential for successful social communication among humans of any age. Whereas prior studies of age dependency in the recognition of emotion often focused on either the prosodic or the facial aspect of nonverbal signals, our purpose was to create a more naturalistic setting by presenting dynamic stimuli under three experimental conditions: auditory, visual, and audiovisual. Eighty-four healthy participants (women = 44, men = 40; age range 20-70 years) were tested for their abilities to recognize emotions either mono- or bimodally on the basis of emotional (happy, alluring, angry, disgusted) and neutral nonverbal stimuli from voice and face. Additionally, we assessed visual and auditory acuity, working memory, verbal intelligence, and emotional intelligence to explore potential explanatory effects of these population parameters on the relationship between age and emotion recognition. Applying unbiased hit rates as the performance measure, we analyzed data with linear regression analyses, t tests, and mediation analyses. We found a linear, age-related decrease in emotion recognition independent of stimulus modality and emotional category. In contrast, the improvement in recognition rates associated with audiovisual integration of bimodal stimuli seems to be maintained over the life span. The reduction in emotion recognition ability at an older age could not be sufficiently explained by age-related decreases in hearing, vision, working memory, and verbal intelligence. These findings suggest alterations in social perception at a level of complexity beyond basic perceptual and cognitive abilities.
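The unbiased hit rate (Wagner, 1993) mentioned here corrects raw hit rates for response bias by multiplying the probability of a correct response given the stimulus by the probability that the response, when given, was correct. A minimal sketch computed on a hypothetical confusion matrix (the counts are invented for illustration):

    import numpy as np

    def unbiased_hit_rates(confusion):
        """Wagner's (1993) unbiased hit rate per stimulus category:
        Hu = hits**2 / (row_total * column_total),
        i.e. P(response | stimulus) * P(stimulus | response).
        confusion[i, j] = number of category-i stimuli answered as category j."""
        confusion = np.asarray(confusion, dtype=float)
        hits = np.diag(confusion)
        return hits ** 2 / (confusion.sum(axis=1) * confusion.sum(axis=0))

    # Hypothetical 4-category confusion matrix (e.g. happy, alluring,
    # angry, disgusted); rows = stimulus category, columns = response.
    m = [[18, 1, 0, 1],
         [3, 14, 2, 1],
         [0, 2, 16, 2],
         [1, 1, 3, 15]]
    print(unbiased_hit_rates(m).round(3))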

19.
Previous research suggests that there are significant differences in the operation of reference memory for stimuli of different modalities, with visual temporal entries appearing to be more durable than auditory entries (Ogden, Wearden, & Jones, 2008, 2010). Ogden et al. (2008, 2010) demonstrated that when participants were required to store multiple auditory temporal standards over a period of delay, there was significant systematic interference with the representation of the standard, characterized by shifts in the location of peak responding. No such performance deterioration was observed when multiple visually presented durations were encoded and maintained. The current article explored whether this apparent modality-based difference in reference memory operation is unique to temporal stimuli or whether similar characteristics are also apparent when nontemporal stimuli are encoded and maintained. The modified temporal generalization method developed in Ogden et al. (2008) was employed; however, standards and comparisons varied by pitch (auditory) and physical line length (visual) rather than duration. Pitch and line-length generalization results indicated that increasing memory load led to more variable responding and reduced recognition of the standard; however, there was no systematic shift in the location of peak responding. Comparison of the results of this study with those of Ogden et al. (2008, 2010) suggests that although performance deterioration as a consequence of increases in memory load is common to auditory temporal and nontemporal stimuli and visual nontemporal stimuli, systematic interference is unique to auditory temporal processing.

20.
The functional characteristics of auditory temporal-spatial short-term memory were explored in 8 experiments in which the to-be-remembered stimuli were sequences of bursts of white noise presented at spatial locations separated in azimuth. Primacy and recency effects were observed in all experiments. A 10-s delay impaired recall for primacy and middle list items but not recency. This effect was shown not to depend on the response modality or on the incidence of omissions or repetitions. Verbal and nonverbal secondary tasks did not affect memory for auditory spatial sounds. Temporal errors rather than spatial errors predominated, suggesting that participants were engaged in a process of maintaining order. This pattern of results may reflect characteristics that serial recall has in common with verbal and spatial recall, but some are unique to the representation of memory for temporal-spatial auditory events.
