Similar Literature
1.
The effects of rehearsing actions by source (slideshow vs. story) and of test modality (picture vs. verbal) on source monitoring were examined. Seven- to 8-year-old children (N = 30) saw a slideshow event and heard a story about a similar event. One to 2 days later, they recalled the events by source (source recall), recalled the events without reference to source (no-source-cue recall), or engaged in no recall. Seven to 8 days later, all children received verbal and picture source-monitoring tests. Children in the source recall group were less likely than children in the other groups to claim they saw actions merely heard in the story. No-source-cue recall impaired source identification of story actions. The picture test enhanced recognition, but not source monitoring, of slide actions. Increasing the distinctiveness of the target events (Experiment 2) allowed the picture test to facilitate slideshow action discrimination by children in the no-recall group.

2.
Immediate and delayed recall of pictures and words was examined as a function of semantic or nonsemantic orienting tasks and the type of test (written or oral). As expected, semantic tasks generally led to greater final recall than nonsemantic tasks, with semantic tasks even producing positive recency on the delayed test. The evidence for a picture-word difference was largely restricted to the final recall of items involved in negative decisions; for such items the advantage of semantic tasks was apparent only for pictures. This suggests that "congruity" may be an important factor in picture-word differences, with such differences more apparent for the weaker items from negative judgments. Type of test did not seem to be a major factor in determining level of recall, suggesting that reinspecting the recall protocol during a written immediate test does not contribute substantially to final recall performance.

3.
The notion of a link between time and memory is intuitively appealing and forms the core assumption of temporal distinctiveness models. Distinctiveness models predict that items that are temporally isolated from their neighbors at presentation should be recalled better than items that are temporally crowded. By contrast, event-based theories consider time to be incidental to the processes that govern memory, and such theories would not imply a temporal isolation advantage unless participants engaged in a consolidation process (e.g., rehearsal or selective encoding) that exploited the temporal structure of the list. In this report, we examine two studies that assessed the effect of temporal distinctiveness on memory, using auditory (Experiment 1) and auditory and visual (Experiment 2) presentation with unpredictably varying interitem intervals. The results show that with unpredictable intervals temporal isolation does not benefit memory, regardless of presentation modality.
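To make the temporal-isolation prediction concrete, here is a minimal Python sketch (not taken from the report; the onset times and the use of the mean neighbour gap are illustrative assumptions): each item's isolation is the mean gap separating it from its temporal neighbours, and distinctiveness models expect recall to improve as that value grows.

```python
# Illustrative sketch (not the authors' analysis): scoring temporal isolation.
# An item's isolation is the mean gap to its temporal neighbours; distinctiveness
# models predict that recall should rise with this score.

def isolation_scores(onsets):
    """onsets: presentation times (in seconds) of the list items, in order."""
    scores = []
    for i, t in enumerate(onsets):
        gaps = []
        if i > 0:
            gaps.append(t - onsets[i - 1])
        if i < len(onsets) - 1:
            gaps.append(onsets[i + 1] - t)
        scores.append(sum(gaps) / len(gaps))  # end items have a single neighbour
    return scores

# Hypothetical list with unpredictably varying inter-item intervals:
onsets = [0.0, 0.4, 2.1, 2.6, 4.9, 5.2, 7.8]
print(isolation_scores(onsets))
```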

4.
Common processing systems involved during reading and listening were investigated. Semantic, phonological, and physical systems were examined using an experimental procedure that involved simultaneous presentation of two words: one visual and one auditory. Subjects were instructed to attend to only one modality and to make responses on the basis of words presented in that modality. Influence of unattended words on semantic and phonological decisions indicated that these processing systems are common to the two modalities. Decisions in the physical task were based on modality-specific codes operating prior to the convergence of information from the two modalities.

5.
One hundred children, in four groups, were presented with 72 word pairs (spoken and written). Each pair consisted of one short and one long word. The subjects were requested to indicate a target word on a card and explain their choice. The basis for a correct nonreading solution was attention to the surface aspects of words and recognition of the relation between sound duration and number of graphemes. A main experimental variable was the relationship between number of graphemes and the size of the denoted object. In the youngest age group (4-year-olds) irrelevant and nonlinguistic solutions predominated. Older children were guided by semantic content. Proper understanding of the relationship between spoken and written words was observed among some of the oldest children (7-year-olds).

6.
Weaker inter- than intramodality long-term priming of words has promoted two hypotheses: (1) separate visual and auditory lexicons and (2) modality dependence of implicit memory. In five experiments, we employed manipulations aimed to minimize study-test asymmetries between the two priming conditions. Activities at visual and auditory study were matched, words were phonologically consistent, and study modality was manipulated between subjects. Equal magnitudes of inter- and intramodality priming were found in experiments with visual and auditory stem completion at test, with visual fragment completion at test, and with visual and auditory perceptual identification at test. A within-subjects experiment yielded the conventional intramodality advantage. The results point to a single amodal lexicon and to modality-independent phonological processing as the basis of implicit word memory.

7.
Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., “Klik op het woord buffel”: Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
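As a rough illustration of how such looking data are usually summarised (a sketch under an assumed sample format, not the authors' analysis code), gaze samples can be pooled into time bins and expressed as the proportion of looks to the target, the competitor, and the distractors:

```python
from collections import Counter

# Sketch: proportion of looks to each word type in successive time bins.
# `samples` uses an assumed format: (time_ms, region) gaze samples pooled over
# trials, where region is "target", "competitor", or "distractor".

def fixation_proportions(samples, bin_ms=50):
    bins = {}
    for time_ms, region in samples:
        bins.setdefault(int(time_ms // bin_ms), Counter())[region] += 1
    proportions = {}
    for b, counts in sorted(bins.items()):
        total = sum(counts.values())
        proportions[b * bin_ms] = {
            r: counts[r] / total for r in ("target", "competitor", "distractor")
        }
    return proportions

samples = [(10, "distractor"), (60, "competitor"), (70, "competitor"),
           (120, "target"), (130, "target"), (180, "target")]
for bin_onset, props in fixation_proportions(samples).items():
    print(bin_onset, props)
```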

9.
Lexical effects in auditory rhyme-decision performance were examined in three experiments. Experiment 1 showed reliable lexical involvement: rhyme-monitoring responses to words were faster than rhyme-monitoring responses to nonwords; and decisions were faster in response to high-frequency as opposed to low-frequency words. Experiments 2 and 3 tested for lexical influences in the rejection of three types of nonrhyming item: words, nonwords with rhyming lexical neighbors (e.g., jop after the cue rob), and nonwords with no rhyming lexical neighbor (e.g., vop after rob). Words were rejected more rapidly than nonwords, and there were reliable differences in the speed and accuracy of rejection of the two types of nonword. The advantage for words over nonwords was replicated for positive rhyme decisions. However, there were no differences in the speed of acceptance, as rhymes, of the two types of nonword. The implications of these results for interactive and autonomous models of spoken word recognition are discussed. It is concluded that the differences in rejection of nonrhyming nonwords are due to the operation of a guessing strategy.

10.
Women perform better than men on tests of verbal memory, but the nature of this advantage has not been precisely established. To examine whether phonemic memory is a factor in the female advantage, we presented, along with other verbal memory tasks, one containing nonsense words. Overall, there was the expected female advantage. However, an examination of the individual tests showed female superiority in recall of the real words but not the nonsense words.

11.
The aim of the study was to explore whether recall of words and recognition of orally presented sentences were affected by background noise. A further aim was to investigate the role of working memory capacity in performance under these conditions. Thirty-two subjects performed a word recall and a sentence recognition test. They repeated each word to ensure that they had heard it. A reading span test measured their working memory capacity. Performance on the word recall task was impaired by the background noise. A high reading span score was associated with a smaller noise effect, especially on recall of the last part of the word list. Copyright © 2007 John Wiley & Sons, Ltd.

12.
Recent studies in humans have reported that recall of previously learned material is especially sensitive to the disruptive effects of pharmacologically induced cortisol elevations. Whether similar effects occur after exposure to psychosocial stress remains to be shown. Moreover, it is unknown whether stress before or after the initial learning interacts with the later effects of repeated stress on delayed recall (e.g., state-dependent learning). Forty subjects participated in the present experiment. They learned a word list either one hour before or 10 min after exposure to a psychosocial laboratory stressor. Delayed recall was tested 4 weeks later, again either before or after stress. Salivary cortisol levels increased significantly in response to both stress exposures. Stress had no effects on the initial learning and also did not impair delayed recall. Moreover, there was no evidence for state-dependent learning. The current data seem to be in conflict with previous studies demonstrating that delayed recall is especially sensitive to elevated cortisol levels. Several reasons for these discrepancies are discussed. Among them are the small sample size, the moderate cortisol increase in response to the second stress exposure, and the long recall delay, which might lead to memory traces less susceptible to stress.

15.
Many theories of spoken word recognition assume that lexical items are stored in memory as abstract representations. However, recent research (e.g., Goldinger, 1996) has suggested that representations of spoken words in memory are veridical exemplars that encode specific information, such as characteristics of the talker’s voice. If representations are exemplar based, effects of stimulus variation such as that arising from changes in the identity of the talker may have an effect on identification of and memory for spoken words. This prediction was examined for an implicit and an explicit task (lexical decision and recognition, respectively). Comparable amounts of repetition priming in lexical decision were found for repeated words, regardless of whether the repetitions were in the same or in different voices. However, reaction times in the recognition task were faster if the repetition was in the same voice. These results suggest a role for both abstract and specific representations in models of spoken word recognition.

16.
Two tests were conducted to examine listeners' detection of missing words in spoken paragraph contexts. Detection was assessed by presenting listeners with normal paragraphs and with paragraphs each containing a single occurrence of a missed word, an inappropriate pause, or a mispronounced word. In the first test, listeners were simply asked whether they detected any abnormalities and to describe them. The results indicated that listeners reported missed words in only 34 to 49% of the paragraphs containing such words. In the second test, a separate group of listeners was given more specific instructions beforehand, indicating the three possible types of abnormality. In this task, the correct detection of missed words rose to 96%. Taken together, the results indicate that listeners do not readily detect occasional missing words under ordinary circumstances but are capable of such detection in a task specifically focused on message abnormalities. This work was supported by NIH Grant NS 20071. We thank Stephen Eady for serving as the speaker for the test recordings and Pamela Mueller for writing paragraphs.

17.
The authors aurally presented words varying in emotional content and frequency of exposure to 56 participants during (a) a study phase in which 288 words (72 separate words with repetitions) were presented and (b) a test phase in which participants were presented with the 72 words from the study phase along with 24 new words. In the test phase, participants responded to these 96 words with either a recognition response or a likability response. The recognition results indicated that increased exposure produced increased recognition; however, high arousal and negative valence words produced higher false positive scores. The likability scores revealed an overall mere exposure effect (MEE). However, words of low arousal and of positive valence did not show the MEE.
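The recognition analysis reported here reduces to hit and false-positive rates per word class; the following sketch shows that bookkeeping on invented trial records (the word classes and responses below are placeholders, not the study's data):

```python
# Sketch: hit and false-positive rates by word class in an old/new recognition
# test. Each record is (word_class, was_studied, said_old); the data are invented.

trials = [
    ("high_arousal", True, True), ("high_arousal", False, True),
    ("negative", True, True), ("negative", False, True),
    ("positive", True, True), ("positive", False, False),
]

def recognition_rates(trials):
    tallies = {}
    for word_class, studied, said_old in trials:
        t = tallies.setdefault(word_class, {"hits": 0, "old": 0, "fas": 0, "new": 0})
        if studied:
            t["old"] += 1
            t["hits"] += said_old          # True counts as 1
        else:
            t["new"] += 1
            t["fas"] += said_old
    return {c: {"hit_rate": t["hits"] / t["old"],
                "false_positive_rate": t["fas"] / t["new"]}
            for c, t in tallies.items()}

print(recognition_rates(trials))
```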

18.
This study is concerned with the effects of recall mode on memory for visual narratives. Subjects were shown a series of slides depicting an auto-pedestrian accident. They were then asked to free-recall all information they could remember. Half were asked to speak their free recalls. The other half were asked to write down their free recalls. The results showed that spoken recall was more accurate. These data were compared with previous studies using visual and textual narratives.

19.
In 1963 a method was published for the analysis of word frequencies in spontaneous (story-telling) spoken language. The immediate purpose was to provide a means for computer analysis of part-of-speech usage in a study of linguistically impaired aphasic patients. A dictionary of the most frequently used words in response to a 20-card administration of the Thematic Apperception Test, based on the verbal output of 12 subjects, was compiled. The present paper is a revision of the dictionary based on 54 adults. An alphabetical listing of the words in the new dictionary, with part-of-speech identification of each word, is included. The research or work reported herein was performed pursuant to a contract with the Office of Education, U.S. Department of Health, Education and Welfare through the Chicago Early Education Research Center, a component of the National Laboratory on Early Childhood Education. Contractors undertaking such work under Government sponsorship are encouraged to express freely their professional judgment in the conduct of the work. Points of view or opinions stated do not, therefore, necessarily represent official Office of Education position or policy.
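Rebuilding this kind of frequency dictionary from transcripts is mechanically simple; the sketch below counts word frequencies and prints an alphabetical listing with part-of-speech labels (the transcript lines and the POS lookup table are invented placeholders, not the published dictionary):

```python
from collections import Counter

# Sketch: build an alphabetical word-frequency listing with part-of-speech tags
# from spoken-language transcripts. The transcript lines and the POS lookup
# table below are invented placeholders.

transcripts = [
    "the boy is looking at the old violin",
    "he is thinking about the music",
]

pos = {"the": "det", "boy": "noun", "is": "verb", "looking": "verb",
       "at": "prep", "old": "adj", "violin": "noun", "he": "pron",
       "thinking": "verb", "about": "prep", "music": "noun"}

counts = Counter(word for line in transcripts for word in line.split())

for word in sorted(counts):                      # alphabetical listing
    print(f"{word:10s} {pos.get(word, '?'):5s} {counts[word]}")
```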
