Similar Articles
20 similar articles found (search time: 31 ms)
1.
People often fail to consciously perceive visual events that are outside the focus of attention, a phenomenon referred to as inattentional blindness or IB (Mack & Rock, 1998). Here, we investigated IB for words within and across sensory modalities (visually and auditorily) in order to assess whether dividing attention across different senses has the same consequences as dividing attention within an individual sensory modality. Participants were asked to monitor a rapid stream of pictures or sounds presented concurrently with task-irrelevant words (spoken or written). A word recognition test was used to measure the processing of unattended words compared to word recognition levels after explicitly monitoring the word stream. We were able to produce high levels of IB for visually and auditorily presented words under unimodal conditions (Experiment 1) as well as under crossmodal conditions (Experiment 2). A further manipulation revealed, however, that IB is less prevalent when attention is divided across modalities than within the same modality (Experiment 3). These findings are explained in terms of the attentional load hypothesis and suggest that, contrary to some claims, attentional resources are to a certain extent shared across sensory modalities.

2.
The simultaneous processing of auditorily and visually presented messages was examined in three experiments. Subjects searched lists of words for a target word while processing auditorily presented information. Across conditions, subjects searched for (a) target words in a list of words presented auditorily, (b) the same target words in lists presented visually, (c) a member of a taxonomic category in a visually presented list, and (d) a rhyme in a list of words presented visually. The level of processing of a simultaneous auditory message varied across experiments. In Experiment 1, subjects shadowed lists of digits. In Experiment 2, subjects reported the antonym of each word in a list. In Experiment 3, subjects named the taxonomic category of each word in a list. In all three experiments, subjects had high detection rates for target words presented visually and for category targets but low detection rates for target words presented auditorily and for rhyme targets. These results suggest that processing the semantic properties, but not the acoustic properties, of words presented to the visual modality is independent of simultaneous processing in the auditory modality. Implications for models of selective attention are discussed.

3.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

4.
The testing effect refers to improved memory after retrieval practice and has been researched primarily with visual stimuli. In two experiments, we investigated whether the testing effect can be replicated when the to-be-learned information is presented auditorily or visually + auditorily. Participants learned Swahili-English word pairs in one of three presentation modalities: visual, auditory, or visual + auditory. This was manipulated between participants in Experiment 1 and within participants in Experiment 2. All participants studied the word pairs during three study trials. Half of the participants practiced recalling the English translations in response to the Swahili cue word twice before the final test, whereas the other half simply studied the word pairs twice more. Results indicated an improvement in final test performance in the repeated test condition, but only in the visual presentation modality (Experiments 1 and 2) and in the visual + auditory presentation modality (Experiment 2). This suggests that the benefits of practiced retrieval may be limited to information presented in a visual modality.

5.
Subjects studied a long list of individual words that were presented either visually or auditorily. Recall was tested immediately or after a filled delay by using either word endings or taxonomic categories as extralist retrieval cues. Two interactions were of particular interest. First, word ending cues were just as effective as taxonomic cues on the immediate test. On the delayed test, however, ending cues were less effective. This result suggests that sensory information encoded about a word decays at a faster rate than semantic information. Second, although modality had no observable influence on the taxonomic cues, word ending cues were more effective when all items were shown visually than when they were presented auditorily. Taken together, these findings indicate that the visual features of words are encoded at study and that this information can be accessed during test if it is recapitulated by the retrieval cue shortly after acquisition.

6.
Lists of thematically related words were presented to participants with or without a concurrent task. In Experiments 1 and 2, respectively, English or Spanish word lists were either low or high in concreteness (concrete vs abstract words) and were presented, respectively, auditorily or visually for study. The addition of a concurrent visual or auditory task, respectively, substantially reduced correct recall and doubled the frequency of false memory reports (nonstudied critical or theme words). Divided attention was interpreted as having reduced the opportunity for participants to monitor successfully their elicitations of critical associates. Comparisons of concrete and abstract lists revealed significantly more recalls of false memories for abstract than concrete word lists. Comparisons between two levels of attention, two levels of word concreteness, and two presentation modalities failed to support the "more is less" effect by which enhanced correct recall is accompanied by increased frequencies of false memories.

7.
This paper compared categorical organization and modality organization in short-term memory under different cue conditions. The experiment used three cue conditions (a modality-cue group, a category-cue group, and a mixed-cue group) with five word-list proportions. In the modality-cue group, each list contained words from a single semantic category, but half were presented visually and half auditorily. In the category-cue group, the two sets of words belonged to two different semantic categories but were presented in the same modality. In the mixed-cue group, half of the words from each of two categories were presented visually and half auditorily. The results showed that ARC scores differed significantly across the three cue conditions, manifested as

8.
The effect of semantic priming upon lexical decisions made for words in isolation (Experiment 1) and during sentence comprehension (Experiment 2) was investigated using a cross-modal lexical decision task. In Experiment 1, subjects made lexical decisions to both auditory and visual stimuli. Processing auditorily presented words facilitated subsequent lexical decisions on semantically related visual words. In Experiment 2, subjects comprehended auditorily presented sentences while simultaneously making lexical decisions for visually presented stimuli. Lexical decisions were facilitated when a visual word appeared immediately following a related word in the sentential material. Lexical decisions were also facilitated when the visual word appeared three syllables following closure of the clause containing the related material. Arguments are made for autonomy of semantic priming during sentence comprehension.

9.
Lexical decision latencies to word targets presented either visually or auditorily were faster when directly preceded by a briefly presented (53-ms) pattern-masked visual prime that was the same word as the target (repetition primes), compared with different word primes. Primes that were pseudohomophones of target words did not significantly influence target processing compared with unrelated primes (Experiments 1-2) but did produce robust priming effects with slightly longer prime exposures (67 ms) in Experiment 3. Like repetition priming, these pseudohomophone priming effects did not interact with target modality. Experiments 4 and 5 replicated this general pattern of effects while introducing a different measure of prime visibility and an orthographic priming condition. Results are interpreted within the framework of a bimodal interactive activation model.

10.
This study described the interference of implicitly processed information with memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitated implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated that participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality for the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information and in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."

11.
Three experiments contrasted the effects of articulatory suppression on recognition memory for musical and verbal sequences. In Experiment 1, a standard/comparison task was employed, with digit or note sequences presented visually or auditorily while participants remained silent or produced intermittent verbal suppression (saying "the") or musical suppression (singing "la"). Both suppression types decreased performance by equivalent amounts, as compared with no suppression. Recognition accuracy during suppression was lower for visually presented digits than for auditorily presented digits (consistent with phonological loop predictions), whereas accuracy was equivalent for visually presented notes and auditory tones. When visual interference filled the retention interval in Experiment 2, performance with visually presented notes but not digits was impaired. Experiment 3 forced participants to translate visually presented music sequences by presenting comparison sequences auditorily. Suppression effects for visually presented music resembled those for digits only when the recognition task required sensory translation of cues.

12.
This paper investigates the mechanisms underlying the standard modality effect (i.e., better recall performance for auditorily presented than for visually presented materials), and the modality congruency effect (i.e., better memory performance if the mode of recall and presentation are congruent rather than incongruent). We tested the assumption that the standard modality effect is restricted to the most recent word(s) of the sentences but occurs in both verbatim and gist recall (Experiments 1 and 2), whereas the modality congruency effect should be evident for the rest of the sentence when using verbatim recall (Experiment 3) but not when using gist recall (Experiment 4). All experiments used the Potter-Lombardi intrusion paradigm. When the target word was the most recent word of the sentence, a standard modality effect was found with both verbatim recall and gist recall. When the target word was included in the middle of the sentences, a modality congruency effect was found with verbatim recall but not with gist recall.

13.
Specific and nonspecific transfer of pattern class concept information between vision and audition was examined. In Experiment 1, subjects learned visually or auditorily to distinguish between two pattern classes that were either the same as or different from the test classes. All subjects were then tested on the auditory classification of 50 patterns. Specific intramodal and cross-modal transfer was noted; subjects trained visually and auditorily on the test classes were equivalent in performance and more accurate than untrained controls. In Experiment 2, the training of Experiment 1 was repeated, but subjects were tested visually. There was no evidence of auditory-to-visual transfer but some suggestion of nonspecific transfer within the visual modality. The asymmetry of transfer is discussed in terms of the modality into which patterns are most likely translated for the cross-modal tasks and in terms of the quality of prototype formation with visual versus auditory patterns.

14.
Differences in recall ability between immediate serial recall of auditorily and visually presented verbal material have traditionally been considered restricted to the end of to-be-recalled lists, the recency section of the serial position curve (e.g., Crowder & Morton, 1969). Later studies showed that, under certain circumstances, differences in recall between the two modalities can be observed across the whole of the list (Frankish, 1985). However, in all these studies the advantage observed is for recall of material presented in the auditory modality. Six separate conditions across four experiments demonstrate that a visual advantage can be obtained with serial recall if participants are required to recall the list in two distinct sections using serial recall. Judged on a list-wide basis, the visual advantage is of equivalent size to the auditory advantage of the classical modality effect. The results demonstrate that differences in representation of auditory and visual verbal material in short-term memory persist beyond lexical and phonological categorization and are problematic for current theories of the modality effect.

15.
Twenty-four kindergarten and fourth grade children were asked to locate a display card which had been visually or verbally presented. A probe, which identified the card to be located, was presented verbally and visually equally often. The children's ability to recall the location of an item did not differ as a function of the modality in which the material was presented. Nor was recall significantly affected when the presentation modality differed from the probe modality, suggesting that children as young as 5 can cross these sensory modalities to retrieve material with no loss in accuracy. Serial position curves suggest that the verbal and visual material is not stored in a common intersensory store. The primacy effect is found to be stronger with visually presented material and the recency effect strongest with auditorily presented material. Probe modality did not influence the serial position curves.

16.
17.
Two experiments tested a neo-Piagetian model of verbal short-term memory and compared it with the articulatory loop model. Experiment 1 (n = 113, age range 9-11) tested word span for 2-, 3-, and 4-syllable words, with both visual and auditory presentation. Experiment 2 (with the same participants) tested recall of visually presented supraspan lists. Measures of M capacity (as conceived in Pascual-Leone's neo-Piagetian theory) and articulation rate were also used. The proposed model can account for the effects of M capacity, word length, and presentation modality. The fit of this model to the data was acceptable, and parameter estimates were consistent across experiments. Furthermore, a correlation was found between M capacity and word span which resisted partialling out of age and articulation rate.

18.
This research addressed the relationship between the speed of presentation of stimuli through the auditory and visual modalities and the number of syntagmatic and paradigmatic word-association responses of 49 chronic undifferentiated schizophrenic adults. In the word-association tests administered to subjects, stimuli were balanced for frequency of occurrence in written English language (frequent, infrequent), word length (long, short), abstraction level (low, medium, high), and part of speech (noun, verb, adjective). The words were presented auditorily at normal speech rate (equivalent to 10 phonemes per second) and at half rate (equivalent to 5 phonemes per second). Words were also presented visually, using a tachistoscope, at extended fixation speed (equivalent to 1,000 msec.) and at sweep speed (equivalent to 10 msec.). More paradigmatic responses occurred for auditorily presented word stimuli that were nouns, long, and frequently occurring, and for slowly and visually presented words that were concrete nouns. Results were compared to previously reported data for aphasic and normal adults, and differentiating features and clinical implications were discussed.

19.
We explored whether the generalization of rules based on simple structures depends on attention. Participants were exposed to a stream of artificial words that followed a simple syllabic structure (ABA or AAB), overlaid on a sequence of familiar noises. After passively listening, participants successfully recognized the individual words present in the stream among foils, and they were able to generalize the underlying word structure to new exemplars. Yet, when attention was diverted from the speech stream (by requiring participants to monitor the sequence of noises), recognition of the individual words fell dramatically irrespective of word structure, whereas generalization depended on stimulus structure. For structures based on vowel repetitions across nonadjacent syllables (ABA; Experiment 1), generalization was affected by attention. In contrast, for structures based on adjacent repetitions (AAB; Experiment 2), generalization capacity was unaffected by attention. This pattern of results was replicated under favorable conditions for generalization, such as increased token variability and the implementation of the rule over whole syllables (Experiments 3 and 4). These results suggest a differential effect of attention on rule learning and generalization depending on stimulus structure.

20.
Switching from one functional or cognitive operation to another is thought to rely on executive/control processes. The efficacy of these processes may depend on the extent of overlap between neural circuitry mediating the different tasks; more effective task preparation (and by extension smaller switch costs) is achieved when this overlap is small. We investigated the performance costs associated with switching tasks and/or switching sensory modalities. Participants discriminated either the identity or spatial location of objects that were presented either visually or acoustically. Switch costs between tasks were significantly smaller when the sensory modality of the task switched versus when it repeated. This was the case irrespective of whether the pre-trial cue informed participants only of the upcoming task, but not the sensory modality (Experiment 1), or of both the upcoming task and the sensory modality (Experiment 2). In addition, in both experiments switch costs between the senses were positively correlated when the sensory modality of the task repeated across trials and not when it switched. The collective evidence supports the independence of control processes mediating task switching and modality switching and also the hypothesis that switch costs reflect competitive interference between neural circuits.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号