Similar Articles (20 results)
1.
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In four separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented at encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

2.
Subjects were asked to indicate which item of a word/nonword pair was a word. On critical trials the nonword was a pseudohomophone of the word. RTs of dyslexics were shorter in blocks of trials in which a congruent auditory prime was simultaneously presented with the visual stimuli. RTs of normal readers were longer for high frequency words when there was auditory priming. This provides evidence that phonology can activate orthographic representations; the size and direction of the effect of auditory priming on visual lexical decision appear to be a function of the relative speeds with which sight and hearing activate orthography.

3.
The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the ‘target’ item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.

4.
Inhibitory mechanisms in visual-auditory cross-modal processing of Chinese lexical information
A selective recognition method was used to examine the inhibitory mechanisms involved in processing visual-auditory cross-modal information versus visual unimodal information during Chinese lexical processing. The results showed that for overall "no" recognition responses to visual words, performance was better under unimodal interference than under cross-modal interference. During visual word processing, the efficiency of inhibiting external distractor material was not affected by the input modality of the distracting stimuli. Inhibition efficiency was, however, affected by the semantic relatedness of the distractor material: distractors from the same semantic category as the target material were harder to inhibit than those from a different category.

5.
Two experiments involving memory retrieval of auditorily and visually presented materials were performed. In Experiment I, subjects were presented with memory sets of 1, 2, or 4 stimuli and then with a test item to be classified as belonging or not belonging to the memory set. In Condition 1, each memory stimulus was a single, auditorily presented letter. In Condition 2, each memory stimulus was a visually presented letter. In Conditions 3 and 4, each memory stimulus was a pair of letters, one presented visually and the other auditorily. Mean reaction time (RT) for the classification task increased as a function of number of memory stimuli at equal rates for all four conditions. This was interpreted as evidence for a parallel scanning process in Conditions 3 and 4, where the auditory item and visual item of each memory stimulus pair can be scanned simultaneously. Experiment II compared memory retrieval for a simultaneous condition in which auditory and visual memory items were presented as pairs with a sequential condition in which mixed auditory-visual memory sets were presented one item at a time. RTs were shorter for the simultaneous condition. This was interpreted as evidence that parallel scanning may depend upon memory input parameters.
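The logic of the abstract above is the classic memory-scanning analysis: mean RT grows linearly with memory-set size, and equal slopes across conditions indicate comparable scan rates. A minimal sketch of that slope comparison, using made-up mean RTs (the values are illustrative assumptions, not data from the study):

```python
import numpy as np

set_sizes = np.array([1, 2, 4])          # number of memory stimuli
# Hypothetical mean RTs in ms for two conditions (illustrative only).
rt_condition1 = np.array([420, 455, 525])
rt_condition2 = np.array([430, 465, 535])

def fit_slope(sizes, rts):
    """Least-squares slope of RT vs. set size, in ms per memory item."""
    slope, _intercept = np.polyfit(sizes, rts, 1)
    return slope

s1 = fit_slope(set_sizes, rt_condition1)
s2 = fit_slope(set_sizes, rt_condition2)
# Equal slopes across conditions are the pattern the abstract interprets
# as parallel scanning of the auditory-visual stimulus pairs.
```

Here both hypothetical conditions yield the same slope (a constant scan rate per item), which is the signature of "equal rates" that the authors report.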

6.
Two experiments are reported in which “same”-“different” reaction times (RTs) were collected to pairs of stimuli. In Experiment 1 stimuli were matrix patterns, and in Experiment 2 stimuli were digits. In both experiments, the pairs were presented simultaneously (discrimination task) and successively (memory task) for a set of nine simple and a set of nine complex stimuli. The following results were obtained: discrimination RTs were longer than memory RTs; RTs to complex stimuli were longer than RTs to simple stimuli; “same” RTs were faster than “different” RTs across all conditions except simple pattern discrimination, for which “different” RTs were faster than “same” RTs; and discrimination RTs for complex patterns were longer than would be predicted from the other conditions. Some evidence was obtained that the form of encoding for both patterns and digits in the memory task was visual. These results are discussed in terms of encoding and comparison strategies.

7.
The experiment utilized a serial choice reaction time (RT) paradigm in which only one alphanumeric stimulus was presented per trial, and the target set consisted of a single identified item. The categorical relationship between the target and nontarget items was varied as a property of blocks of trials. Target and nontarget RTs were smaller when the specified target item (e.g., the number 6) was categorically distinct from the nontargets (e.g., letters) than when it was from the same category (e.g., digits). The processing of catch-trial stimuli (items from the alternate category to the nontargets) and homographic category-ambiguous items was inhibited only in the former, between-category, condition. The results are contrasted with those obtained in visual search tasks. They suggest that a “locational-cue” explanation of alphanumeric category effects is inadequate.

8.
The role of phonology-to-spelling consistency (i.e., feedback consistency) was investigated in 3 lexical decision experiments in both the visual and auditory modalities in French and English. No evidence for a feedback consistency effect was found in the visual modality, either in English or in French, despite the fact that consistency was manipulated for different kinds of units (onsets and rimes). In contrast, robust feedback consistency effects were obtained in the auditory lexical decision task in both English and French when exactly the same items that produced a null effect in the visual modality were used. Neural network simulations are presented to show that previous demonstrations of feedback consistency effects in the visual modality can be simulated with a model that is not sensitive to feedback consistency, suggesting that these effects might have come from various confounds. These simulations, together with the authors' results, suggest that there are no feedback consistency effects in the visual modality. In contrast, such effects are clearly present in the auditory modality. Given that orthographic information is absent from current models of spoken word recognition, the present findings present a major challenge to these models.

9.
Typically, recall of the last of a list of auditory items greatly exceeds recall of the last of a list of visual items. This modality effect has been found in serial recall, free recall, and recall using the distractor paradigm in which each to-be-remembered item is preceded and followed by distractor activity. One source of the auditory advantage may be visual interference that reduces recall of visual stimuli. In three experiments, sources of visual interference were minimized. Although this manipulation reduced the modality effect, it did not eliminate the effect.

10.
Modality effects and the structure of short-term verbal memory
The effects of auditory and visual presentation upon short-term retention of verbal stimuli are reviewed, and a model of the structure of short-term memory is presented. The main assumption of the model is that verbal information presented to the auditory and visual modalities is processed in separate streams that have different properties and capabilities. Auditory items are automatically encoded in both the A (acoustic) code, which, in the absence of subsequent input, can be maintained for some time without deliberate allocation of attention, and a P (phonological) code. Visual items are retained in both the P code and a visual code. Within the auditory stream, successive items are strongly associated; in contrast, in the visual modality, it is simultaneously presented items that are strongly associated. These assumptions about the structure of short-term verbal memory are shown to account for many of the observed effects of presentation modality.

11.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.

12.
Three experiments are reported involving the presentation of lists of either letters or digits for immediate serial recall. The main variable was the presence or absence of a suffix-prefix, an item (tick or cross) occurring at the end of the list which had to be copied before recall of the stimulus list. With auditory stimuli and an auditory suffix-prefix there was a large and selective increase in the number of errors on the last few serial positions—the typical “suffix effect”. The suffix effect was not found with auditory stimuli and a visual suffix-prefix nor with a visual stimulus and an auditory suffix-prefix. These results are interpreted as supporting a model for short-term memory proposed by Crowder and Morton (1969) in which it is suggested that with serial recall information concerning the final items following auditory presentation has a different, precategorical, origin from that concerning other items.

13.
The long-term modality effect is the advantage in recall of the last of a list of auditory to-be-remembered (TBR) items compared with the last of a list of visual TBR items when the list is followed by a filled retention interval. If the auditory advantage is due to echoic sensory memory mechanisms, then recall of the last auditory TBR item should be substantially reduced when it is followed by a redundant, not-to-be-recalled auditory suffix. Contrary to this prediction, Experiment 1 demonstrated that a redundant auditory suffix does not significantly reduce recall of the last auditory TBR item. In Experiment 2 a nonredundant auditory suffix produced a large reduction in recall of the last auditory item. Redundancy is not the only factor controlling the effectiveness of a suffix, however. Experiment 3 demonstrated that a nonredundant visual suffix does not reduce recall of the last auditory TBR item. These results are discussed in reference to a retrieval account of the long-term modality effect.

14.
The relationship between the phonological properties of speech sounds and the corresponding semantic entries was studied in two experiments using response time measures. Monosyllabic words and nonsense words were used in both experiments. In Experiment I, Ss were each presented with individual items and were required, in three different conditions, to respond positively if (1) the item contained a particular final consonant, (2) the item was a real word, (3) the item contained either a particular consonant or was a real word. Latencies indicated that separate retrieval of phonological and lexical information took about the same time, but that their combined retrieval was longer, indicating a serial or overlapping process. In Experiment II, Ss were presented with pairs of items, and they responded positively if (1) the two items were physically identical, (2) the two items were lexically identical (both real words or both nonsense words). Response latencies were longer for lexical than for physical matches. Lexical matches were significantly slower than physical matches even on the same pair of items. The results imply differential accessibility to separate loci of phonological and semantic features.

15.
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.
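The "race model predictions" referred to above are conventionally tested with Miller's (1982) race model inequality: the bimodal RT distribution may not exceed the sum of the two unimodal distributions if the modalities merely race rather than pool. A minimal sketch of that test, using simulated RT samples (all distributions and values here are illustrative assumptions, not data from the study):

```python
import numpy as np

# Simulated unimodal and bimodal RT samples in ms (illustrative only).
rng = np.random.default_rng(0)
rt_auditory = rng.normal(350, 40, 1000)   # unimodal auditory RTs
rt_visual   = rng.normal(450, 40, 1000)   # unimodal visual RTs
rt_bimodal  = rng.normal(330, 40, 1000)   # redundant bimodal RTs

def ecdf(sample, t):
    """Empirical cumulative distribution value P(RT <= t)."""
    return np.mean(sample <= t)

def race_model_violated(t):
    # Race model inequality (Miller, 1982):
    #   P(RT_bimodal <= t) <= P(RT_auditory <= t) + P(RT_visual <= t).
    # If the bimodal CDF rises above this bound at some t, the two inputs
    # must be pooled ("coactivated") rather than merely racing.
    bound = ecdf(rt_auditory, t) + ecdf(rt_visual, t)
    return ecdf(rt_bimodal, t) > bound
```

With these simulated samples the inequality is violated at fast cutoffs (where the bimodal distribution outruns both unimodal ones) but not at slow cutoffs, which is the qualitative pattern of coactivation the abstract reports for short SOAs.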

16.
Selective attention to the chemosensory modality
Previous studies have shown that behavioral responses to auditory, visual, and tactile stimuli are modulated by expectancies regarding the likely modality of an upcoming stimulus (see Spence & Driver, 1997). In the present study, we investigated whether people can also selectively attend to the chemosensory modality (involving responses to olfactory, chemical, and painful stimuli). Participants made speeded spatial discrimination responses (left vs. right) to an unpredictable sequence of odor and tactile targets. Odor stimuli were presented to either the left or the right nostril, embedded in a birhinally applied constant airstream. Tactile stimuli were presented to the left or the right hand. On each trial, a symbolic visual cue predicted the likely modality for the upcoming target (the cue was a valid predictor of the target modality on the majority of trials). Response latencies were faster when targets were presented in the expected modality than when they were presented in the unexpected modality, showing for the first time that behavioral responses to chemosensory stimuli can be modulated by selective attention.

17.
When easy and difficult items are mixed together, their reading aloud latencies become more homogeneous relative to their presentation in unmixed ("pure") conditions (Lupker, Brown, & Colombo, 1997). We report two experiments designed to investigate the nature of the mechanism that underlies this list composition, or blocking, effect. In Experiment 1, we replicated Lupker et al.'s (1997) blocking effect in the reading aloud task and extended these findings to the visual lexical decision task. In Experiment 2, we found that blocking effects generalized across tasks: The characteristics of stimuli in a visual lexical decision task influenced reading aloud latencies, and vice versa, when visual lexical decision and reading aloud trials were presented alternately in the same experiment. We discuss implications of these results within time-criterion (Lupker et al., 1997) and strength-of-processing (Kello & Plaut, 2000, 2003) theories of strategic processing in reading.

18.
Four experiments were conducted in order to compare the effects of stimulus redundancy on temporal order judgments (TOJs) and reaction times (RTs). In Experiments 1 and 2, participants were presented in each trial with a tone and either a single visual stimulus or two redundant visual stimuli. They were asked to judge whether the tone or the visual display was presented first. Judgments of the relative onset times of the visual and the auditory stimuli were virtually unaffected by the presentation of redundant, rather than single, visual stimuli. Experiments 3 and 4 used simple RT tasks with the same stimuli, and responses were much faster to redundant than to single visual stimuli. It appears that the traditional speedup of RT associated with redundant visual stimuli arises after the stimulus detection processes to which TOJs are sensitive.

19.
This study used a spatial task-switching paradigm, controlling the salience of visual and auditory stimuli, to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly affected the visual dominance effect. In Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was markedly weakened. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus had low salience, the visual dominance effect was further weakened but still present. The results support biased competition theory: in cross-modal audiovisual interaction, visual stimuli are more salient and therefore hold a processing advantage during multisensory integration.

20.
Lists of digits 5 and 7 items in length were presented to second graders, sixth graders, and low-IQ sixth graders in either the visual or auditory modality. Half the auditory lists were followed by the redundant nonrecalled, auditorily presented word “recall” which served as a list suffix. The second graders had the most errors in the ordered recall task, followed by the low-IQ sixth- and normal sixth-graders in that order. The size of the modality and suffix effects for the various groups seemed to indicate that, for the younger subjects, a larger proportion of the recall after auditory presentation comes from the Prelinguistic Auditory Store than for the older subjects.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号