Similar Articles
20 similar articles were retrieved.
1.
Short-term memory for the timing of irregular sequences of signals has been said to be more accurate when the signals are auditory than when they are visual. No support for this contention was obtained either when the signals were beeps versus flashes (Experiments 1 and 3) or when they were sets of spoken versus typewritten digits (Experiments 4 and 5). On the other hand, support was obtained both for beeps versus flashes (Experiments 2 and 5) and for repetitions of a single spoken digit versus repetitions of a single typewritten digit (Experiment 6) when the subjects silently mouthed a nominally irrelevant item during sequence presentation. Also, the timing of sequences of auditory signals, whether verbal (Experiment 7) or nonverbal (Experiments 8 and 9), was more accurately remembered when the signals within each sequence were identical. The findings are considered from a functional perspective.

2.
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code (visual for unfamiliar objects but visual and verbal for familiar objects); or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but only haptic recognition regardless of familiarity. The results raise further research questions about all three theoretical approaches.

3.
4.
A series of divided-attention experiments was carried out in which matching to the visual or auditory component of a tone-light compound was compared with matching to visual or auditory elements presented as sample stimuli. In 0-s delayed and simultaneous matching procedures, pigeons were able to match visual signals equally well whether presented alone or with a tone; tones were matched at a substantially lower level of accuracy when presented with light signals than when presented as elements. Further experiments demonstrated that the interfering effect of a signal light on tone matching was not related to the signaling value of the light, and that the prior presentation of light proactively interfered with auditory delayed matching. These findings indicate a divided-attention process in which auditory processing is strongly inhibited in the presence of visual signals.

5.
6.
7.
8.
The results of two experiments involving the matching of unfamiliar, nameless shapes (Gibson forms) indicated that a visual representation of a brief (30-100 ms) stimulus survives in short-term visual memory (STVM) for 6 s or more after the onset of a pattern mask. On the basis of these results, a possible alternative to Sperling's (1967) model of short-term memory for visual stimuli is presented. In this model it is assumed that recognition processes occupy several hundred milliseconds and continue after the arrival of the mask, using the information available in STVM.

9.
10.
The present study examined and compared order memory for a list of sequentially presented odours, unfamiliar faces, and pure tones. With single-probe serial position recall and a correction for response bias, qualitatively different serial position functions were observed across stimuli. Participants demonstrated an ability to perform absolute order memory judgments for odours. Furthermore, odours produced an absence of serial position effects, unfamiliar faces produced both primacy and recency, and pure tones produced recency but not primacy. Such a finding is contrary to the proposal by Ward, Avons, and Melling (2005) that the serial position function is task, rather than modality, dependent. In contrast, the observed functions support a modular conceptualization of short-term memory (e.g., Andrade & Donaldson, 2007; Baddeley & Hitch, 1974), whereby separate modality-specific memorial systems operate. An alternative amodal interpretation is also discussed, wherein serial position function disparities are accommodated via differences in the psychological distinctiveness of stimuli (Hay, Smyth, Hitch, & Horton, 2007).

11.
Previous dual-task research pairing complex visual tasks that involve non-spatial cognitive processes with dichotic listening has shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early component (Nde), suggesting that the Ndl, but not the Nde, is affected by non-spatial processing in a dual task. To further explore the nature of this dissociation and to examine whether the Nd waveform is affected by spatial processing, fourteen adult participants performed auditory dichotic listening in conjunction with a visuo-spatial memory task in a cross-modal dual-task paradigm. The results showed that the visuo-spatial memory task decreased both the Nde and Ndl waveforms, and also attenuated the P300 and increased its latency. This pattern of results suggests that: (1) the Nde reflects a memory trace that is shared with vision when the information is spatial in nature, and (2) P300 latency appears to be influenced by the discriminability of stimuli underlying the Nde and Ndl memory trace.

12.
Cognition is shaped by the way that past experiences are represented in memory. To examine the representation of recent visual experiences, we devised a novel procedure that measures episodic recognition memory for synthetic textures. On each trial, two brief study stimuli were followed by a probe, which either replicated one of the study stimuli or differed in spatial frequency from both. The probe's spatial frequency roved from trial to trial, testing recognition with a range of differences between probe and study items. Repeated testing of recognition generated mnemometric functions, snapshots of memory strength's distribution. The distributional characteristics of the mnemometric functions rule out several hypotheses about memory representation, including the hypothesis that representations are prototypes constructed from previously seen stimuli; instead, stimuli are represented in memory as noisy exemplars.

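To make the contrast tested by this mnemometric procedure concrete, the following minimal Python sketch (an illustration only, not the authors' analysis; the similarity function, noise level, and study frequencies are assumptions) compares the predicted recognition-strength function under a prototype representation, in which the two study frequencies are merged into one average trace, with a noisy-exemplar representation, in which each study frequency leaves its own noise-perturbed trace.

import numpy as np

rng = np.random.default_rng(0)

def similarity(probe, stored, tau=0.5):
    # Exponential similarity in log spatial-frequency space (assumed form).
    return np.exp(-np.abs(np.log2(probe) - np.log2(stored)) / tau)

def prototype_strength(probe, study, noise=0.1):
    # Prototype account: the two study items are merged into one average trace.
    proto = 2 ** (np.mean(np.log2(study)) + rng.normal(0, noise))
    return similarity(probe, proto)

def exemplar_strength(probe, study, noise=0.1):
    # Noisy-exemplar account: each study item leaves its own noisy trace.
    traces = study * 2 ** rng.normal(0, noise, size=len(study))
    return similarity(probe, traces).sum()

study = np.array([2.0, 4.0])          # assumed study spatial frequencies (cycles/deg)
probes = np.linspace(1.0, 8.0, 15)    # roving probe frequencies
n = 2000                              # simulated trials per probe value

for p in probes:
    proto = np.mean([prototype_strength(p, study) for _ in range(n)])
    exemp = np.mean([exemplar_strength(p, study) for _ in range(n)])
    print(f"probe {p:4.1f} c/deg   prototype {proto:.2f}   exemplar {exemp:.2f}")

Under these assumptions the prototype account predicts a single peak near the mean of the study items, whereas the exemplar account concentrates strength at the studied frequencies themselves, which is the kind of distributional signature the mnemometric functions are designed to reveal.
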
13.
In five divided-attention (DA) experiments, students (24 in each experiment) performed visual distracting tasks (e.g., recognition of words, word and digit monitoring) while either simultaneously encoding an auditory word list or engaging in oral free recall of the target word list. DA during retrieval, using either of the word-based distracting tasks, produced larger interference effects than did the digit-monitoring task. DA during encoding produced uniformly large interference effects, regardless of the type of distracting task. The results suggest that when attention is divided at retrieval, interference is created only when the memory and concurrent tasks compete for access to word-specific representational systems; no such specificity is necessary to create interference at encoding. During encoding, memory and concurrent tasks compete primarily for general resources, whereas during retrieval they compete primarily for representational systems.

14.
College-age subjects recalled melodies or spoken sentences immediately after hearing them, either by pointing on a chart designed to interfere with a visual memory code or by saying the equivalent responses, which should interfere with an auditory code. Pointing took significantly longer than saying for melodies, but not for sentences. A second experiment showed that this task × materials interaction could be due to interference with contour per se rather than with visual coding of melody, because a similar effect was found for spoken contours consisting simply of the words “up” or “down.” In a third experiment, the pointing and saying tasks were modified both to control and to manipulate the amount of contour interference. Visual control stimuli for melodies were introduced, employing a marker that moved up or down. This time the task × materials interaction could be explained in terms of an auditory memory code for melody, but there were some problems with this interpretation. Interference related to contour, rather than to a specific modality, appeared to account best for the results of all three experiments.

15.
Psychonomic Bulletin & Review - Recent studies show that recognition memory for pictures is consistently better than recognition memory for sounds. The purpose of this experiment was to compare...

16.
Short-term implicit memory was examined for mixed auditory (A) and visual (V) stimuli. In lexical decision, words and nonwords were repeated at lags of 0, 1, 3, and 6 intervening trials, in four prime-target combinations (VV, VA, AV, AA). Same-modality repetition priming showed a lag × lexicality interaction for visual stimuli (nonwords decayed faster), but not for auditory stimuli (longer-lasting smooth decay for both words and nonwords). These modality differences suggest that short-term priming has a perceptual locus, with the phonological lexicon maintaining stimuli active longer than the orthographic lexicon and treating pseudowords as potential words. We interpret these differences in terms of the different memory needs of speech recognition and text reading. Weak cross-modality short-term priming was present for words and nonwords, indicating recoding between perceptual forms.

17.
The effects of signal modality on duration classification in college students were studied with the duration bisection task. When auditory and visual signals were presented in the same test session and shared common anchor durations, visual signals were classified as shorter than auditory signals of equivalent duration. This occurred both when auditory and visual signals were presented sequentially in the same test session and when they were presented simultaneously but asynchronously. Presenting a single-modality signal within a test session, or both modalities but with different anchor durations, did not result in classification differences. The authors posit a model in which auditory and visual signals drive an internal clock at different rates. The clock-rate difference is due to an attentional effect on the mode switch and is revealed only when the memories for the short and long anchor durations consist of a mix of contributions from accumulations generated by both the fast auditory and the slower visual clock rates. When this occurs, auditory signals seem longer than visual signals relative to the composite memory representation.

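The clock-rate account sketched in this abstract can be illustrated with a small simulation. The Python sketch below is not the authors' model; it assumes a simple pacemaker-accumulator formulation in which pulses accumulate faster for auditory than for visual signals, the remembered "short" and "long" anchors are a composite of accumulations from both modalities, and a probe is classified as "long" when its pulse count is closer to the long anchor. All rates, durations, and noise values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

RATE = {"auditory": 12.0, "visual": 9.0}   # assumed pacemaker rates (pulses/s)
SHORT_S, LONG_S = 2.0, 8.0                 # assumed anchor durations shared by both modalities

def accumulate(duration_s, modality, cv=0.15):
    # Pulse count for one presentation, with scalar (proportional) noise.
    mean = RATE[modality] * duration_s
    return rng.normal(mean, cv * mean)

# Composite anchor memories: a mix of auditory and visual accumulations.
short_ref = np.mean([accumulate(SHORT_S, m) for m in ("auditory", "visual") for _ in range(200)])
long_ref = np.mean([accumulate(LONG_S, m) for m in ("auditory", "visual") for _ in range(200)])

def p_long(duration_s, modality, n=5000):
    # Probability of classifying a probe as "long" (closer to the long anchor memory).
    counts = np.array([accumulate(duration_s, modality) for _ in range(n)])
    return np.mean(np.abs(counts - long_ref) < np.abs(counts - short_ref))

for d in (2.0, 3.0, 4.0, 5.0, 6.0, 8.0):
    print(f"{d:3.1f} s   p(long) auditory {p_long(d, 'auditory'):.2f}   visual {p_long(d, 'visual'):.2f}")

# Because the visual clock runs slower, equal-duration visual probes accumulate fewer
# pulses relative to the composite anchors and are classified "long" less often,
# i.e., visual signals are judged shorter than auditory signals of the same duration.
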
18.
Abrupt visual onsets and selective attention: evidence from visual search
The effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them. We hypothesized that an abrupt onset in a visual display would capture visual attention, giving this item a processing advantage over items lacking an abrupt leading edge. This prediction was confirmed in Experiment 1. We designed a second experiment to ensure that this finding was due to attentional factors rather than to sensory or perceptual ones. Experiment 3 replicated Experiment 1 and demonstrated that the procedure used to avoid abrupt onset (camouflage removal) did not require a gradual waveform. Implications of these findings for theories of attention are discussed.

19.
20.
Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance for randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in the auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10-s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

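As a purely illustrative aside (not the authors' materials; the pattern length, element labels, and foil rule are assumptions), the short Python sketch below shows one way a binary sequential pattern and a one-element foil could be constructed for a forced-choice recognition trial of the kind described above: each element takes one of two values (e.g., two tone frequencies or two colours), and the foil differs from the target at a single randomly chosen serial position.

import random

random.seed(2)

def make_trial(length=8, values=("high", "low")):
    # Generate a binary target pattern and a foil differing at exactly one position.
    target = [random.choice(values) for _ in range(length)]
    foil = list(target)
    i = random.randrange(length)
    foil[i] = values[1] if foil[i] == values[0] else values[0]
    alternatives = [target, foil]
    random.shuffle(alternatives)          # order of the two choices is randomized (2AFC)
    return target, alternatives

target, alternatives = make_trial()
print("study pattern:", target)
print("choice A:     ", alternatives[0])
print("choice B:     ", alternatives[1])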