Similar Documents
20 similar documents retrieved.
1.
This paper examines the effect of (1) delay between learning and test and (2) associative interference on memory retrieval speed. The speed-accuracy tradeoff methodology, which interrupts the retrieval process at various times (0.3, 0.7, 1.0, 1.5, 2.0, and 3.0 sec) after presentation of the test item, provides a means of separating retrieval speed effects from effects on overall memory strength. Performance at short processing times is an index of retrieval speed. Performance given ample processing time is a measure of asymptotic accuracy, or memory strength. Increasing the delay between learning and test or introducing interference relations lowered memory trace strength, as reflected in asymptotic accuracy. Items tested shortly (about 3 sec) after learning showed a significant speedup in retrieval relative to items tested at a longer (several-minute) delay. Further analysis suggested that the delay effect on retrieval was primarily the result of immediate repetition, or testing of the last-learned item. The interference manipulation showed a slight and nonsignificant tendency toward slowing of memory retrieval. The implications of these results for various models of retrieval are explored via simulations. The results of all the simulations suggested a direct-access retrieval process in which associations are processed largely in parallel. Contradiction or mismatch information in recognizing new items was important because it provided an explanation for a slight slowing in retrieval due to interference even under a parallel-processing assumption. Faster retrieval for the last-learned item may be the result of residual activation following active processing.
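In the SAT literature, this separation of speed from strength is typically formalized by fitting an exponential approach to a limit, with the asymptote indexing memory strength and the rate and intercept indexing retrieval speed. The following Python sketch illustrates that standard analysis; it is not code from the paper, and the functional form, parameter names, and data values are assumptions.

```python
# Minimal sketch of the shifted-exponential function commonly used to
# summarize speed-accuracy tradeoff (SAT) data:
#   d'(t) = lam * (1 - exp(-beta * (t - delta)))  for t > delta, else 0
# lam   = asymptotic accuracy (memory strength)
# beta  = rate of rise toward asymptote (retrieval speed)
# delta = intercept, the earliest time accuracy exceeds chance
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, lam, beta, delta):
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# The six interruption lags from the paper; the d' values are hypothetical.
lags = np.array([0.3, 0.7, 1.0, 1.5, 2.0, 3.0])     # sec after test onset
dprime = np.array([0.2, 1.1, 1.6, 2.0, 2.2, 2.3])   # illustrative accuracy

(lam, beta, delta), _ = curve_fit(sat_curve, lags, dprime, p0=[2.0, 2.0, 0.25])
print(f"asymptote {lam:.2f}, rate {beta:.2f}/sec, intercept {delta:.2f} sec")
```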

2.
3.
Two experiments tested the hypothesis that the time course of retrieval from memory is different for familiarity and recall. The response-signal method was used to compare memory retrieval dynamics in yes-no recognition memory, as a measure of familiarity, with those of list discrimination, as a measure of contextual recall. Responses were always made with regard to membership in two previous study lists. Experiment 1 used an exclusion task requiring positive responses to words from one list and negative responses to new words and to words from the nontarget list. In Experiment 2, recognition and list discrimination were separate tasks. Retrieval curves from both experiments were consistent, showing that the minimal retrieval time for recognition was about 100 msec faster than that for list discrimination. Repetition affected asymptotic performance but had no reliable effects on retrieval dynamics in either the recognition or the list-discrimination task.

4.
Reaction-time and accuracy data obtained from studies of sentence verification have not been rich enough to answer certain important theoretical questions about structures and processes in human semantic memory. However, a new technique called speed-accuracy decomposition (Meyer, Irwin, Osman, & Kounios, 1986) may help solve this problem. The technique allows intermediate products of sentence verification to be analyzed more precisely. Three experiments with speed-accuracy decomposition indicate that verification processes produce useful partial information before they are completed. Such information appears to accumulate continuously at a rate whose magnitude depends on the degree of relatedness between semantic categories. This outcome is consistent with continuous computational (e.g., semantic-feature comparison) models of semantic memory. An analysis of reaction-time minima suggests that a discrete all-or-none search process may also contribute at least occasionally to sentence verification. Further details regarding the nature of these processes and the memory structures on which they operate can be inferred from additional results obtained through speed-accuracy decomposition.
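Speed-accuracy decomposition is commonly described as treating accuracy on response-signal trials as a mixture of responses from the completed verification process and guesses based on partial information, with the guessing accuracy solved for algebraically. The sketch below illustrates that bookkeeping under stated assumptions; it is not the authors' code, and all numerical values are hypothetical.

```python
# Hedged illustration of the core identity assumed by speed-accuracy
# decomposition: A_S = F * A_R + (1 - F) * A_G, solved here for A_G.
#   A_S = accuracy on response-signal trials at a given signal lag
#   F   = probability the normal process has finished by that lag
#         (estimated from the RT distribution on regular trials)
#   A_R = accuracy of regular, completed responses
#   A_G = accuracy of guesses based on partial information
def partial_info_accuracy(signal_acc, finish_prob, regular_acc):
    return (signal_acc - finish_prob * regular_acc) / (1.0 - finish_prob)

# Hypothetical values for a signal 600 ms after stimulus onset:
A_G = partial_info_accuracy(signal_acc=0.68, finish_prob=0.30, regular_acc=0.95)
print(f"estimated guessing accuracy = {A_G:.2f}")  # above 0.5 -> partial information
```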

5.
This research investigates the process by which people discriminate preexperimental (semantic) from experimental (episodic) associations. Subjects were instructed to recognize (reply "old") only to experimentally studied materials. The questions are how context information is used to select relevant memories, and how successful the exclusion of irrelevant information is. Recognition accuracy and retrieval speed (rate of approach to asymptotic accuracy) were jointly measured using a speed-accuracy trade-off (SAT) paradigm, with collateral reaction time (RT) experiments. Experiment 1 presented both semantically related and unrelated pairs for study. In Experiment 3, semantically related pairs were never presented for study, and preexperimentally related lures could be rejected by rule. Semantically related lures in both of these SAT experiments showed evidence of elevated false-alarm rates early in retrieval, followed by late suppression of false alarms (at about 1 s). When related pairs were studied in the experiment, suppression was incomplete; when related pairs were never studied, rule-based supersuppression was obtained. Results from the collateral reaction time studies (Experiments 2 and 4) showed points that corresponded to the pattern of results in Experiments 1 and 3 near asymptote, although the RT data by themselves would have been interpreted quite differently. These results are compatible with a single-store, two-phase retrieval model in which context information, or recall-like information about correct pair mates, is used to correct spurious false alarms resulting from incomplete filtering of semantic information.
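A toy sketch of the two-phase pattern described above: a fast familiarity signal drives early false alarms to related lures, and a slower, recall-like context check suppresses them around 1 s. This is an illustrative assumption about the shape of the curves, not the authors' model; every parameter value is invented.

```python
import numpy as np

def component(t, rate, onset):
    # exponential rise toward 1.0 beginning at `onset` seconds
    return np.where(t > onset, 1.0 - np.exp(-rate * (t - onset)), 0.0)

t = np.linspace(0.2, 3.0, 15)                      # processing time (sec)
familiarity  = component(t, rate=3.0, onset=0.25)  # fast; produces false alarms
recollection = component(t, rate=1.5, onset=0.70)  # slower; suppresses them

# Suppression is left incomplete (factor 0.8), as when related pairs were
# actually studied; a factor of 1.0 would mimic rule-based supersuppression.
false_alarm_rate = 0.10 + 0.50 * familiarity * (1.0 - 0.8 * recollection)
```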

6.
Time-course studies of semantic verification are reviewed, discussed, and reinterpreted with the aim of drawing general theoretical conclusions about semantic memory structure. These reaction time, speed-accuracy tradeoff, speed-accuracy decomposition, and event-related (brain) potential (ERP) studies suggest that semantic memory is structured on at least three levels. In particular, specific models of the intermediate (macrostructural) level are discussed and compared. ERP investigations of this level suggest that context-independent and context-dependent types of semantic information are potentially isolable and analyzable.

7.
A critical discussion of the model of sentence memory that enjoyed the greatest popularity in the psycholinguistic research of the sixties, namely the model based on the deep-structure-plus-tag hypothesis of sentence memory, is presented together with the results of an experiment on prompted recall for sentences at various intervals after presentation and with two types of instructions. This experiment helped to show that immediate memory for sentences can be affected by appropriate instructions, and that shortly after presentation only the main semantic information of the sentences is recalled. An alternative model is presented, based on the notion that the meaningful elements of sentences are stored in a rather abstract form and that recall is a reconstructive process that produces new sentences. The results of two new experiments on sentence memory, the first a free and prompted recall experiment with children and the second a recognition memory study with adolescents, are then presented and discussed in relation to the model.

8.
For a long time, it has been known that one can trade off accuracy for speed in (presumably) any task. The range over which one can obtain substantial speed-accuracy tradeoff varies from 150 msec in some very simple perceptual tasks to 1,000 msec in some recognition memory tasks, and presumably even longer in more complex cognitive tasks. Obtaining an entire speed-accuracy tradeoff function provides much greater knowledge concerning information processing dynamics than is obtained from a reaction-time experiment, which yields the equivalent of a single point on this function. For this and other reasons, speed-accuracy tradeoff studies are often preferable to reaction-time studies of the dynamics of perceptual, memory, and cognitive processes. Methods of obtaining speed-accuracy tradeoff functions include instructions, payoffs, deadlines, bands, response signals (with blocked and mixed designs), and partitioning of reaction time. A combination of the mixed-design signal method supplemented by partitioning of reaction times appears to be the optimal method.
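One of the listed methods, partitioning of reaction time, amounts to binning trials by RT and computing accuracy within each bin, yielding an empirical micro speed-accuracy function. The snippet below is a generic illustration of that analysis, not code from the article; the simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
rt = rng.gamma(shape=4.0, scale=150.0, size=2000)     # RTs in ms
p_correct = np.clip(0.4 + rt / 1500.0, 0.0, 0.98)     # slower -> more accurate
correct = rng.random(2000) < p_correct

edges = np.quantile(rt, np.linspace(0.0, 1.0, 6))     # five RT bins
bins = np.digitize(rt, edges[1:-1])
for b in range(5):
    mask = bins == b
    print(f"bin {b}: mean RT {rt[mask].mean():6.0f} ms, "
          f"accuracy {correct[mask].mean():.2f}")
```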

9.
In a continuous recognition memory design, Ss judged whether each sentence was identical in form and meaning to some previously presented sentence, then judged whether the sentence was identical in meaning irrespective of form, and, finally, rated the likelihood of recognizing the sentence if it was presented an hour later (memorability). The Ss were given sentences that were new, identical to, or paraphrased from some previously presented sentence, at delays ranging from 0 sec to 2 h. Long-term memory for both semantic information and syntactic-lexical information decayed according to the same exponential-power retention function previously found to be characteristic of the decay of simpler verbal materials (nonsense items, letters, digits, words, and word pairs). Semantic memory primarily differed from syntactic-lexical memory in that the semantic information had a far higher degree of learning, but the decay rate for syntactic-lexical information was also approximately 50% greater than the decay rate for semantic information.
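The abstract does not give the functional form. One common reading of an "exponential-power" retention function is a stretched exponential in the retention interval, sketched below under that assumption; the parameter values are purely illustrative and only echo the qualitative claims above (higher degree of learning for semantic information, a decay rate roughly 50% greater for syntactic-lexical information).

```python
import numpy as np

def retention(t, lam, psi, beta):
    # assumed form: m(t) = lam * exp(-psi * t**beta)
    # lam = degree of learning, psi = decay rate, 0 < beta < 1
    return lam * np.exp(-psi * np.power(t, beta))

t = np.array([0.0, 60.0, 600.0, 7200.0])          # retention interval, sec
semantic          = retention(t, lam=3.0, psi=0.10, beta=0.3)
syntactic_lexical = retention(t, lam=1.5, psi=0.15, beta=0.3)
```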

10.
11.
In two experiments, subjects classified as being either high or low in field articulation (FA) performed a semantic integration task with a high information load. In Experiment 1, differences in performance between high- and low-FA subjects on an inference and recognition test were obtained when sentences were presented for 5 sec apiece but not when they were presented for 10 sec apiece. In Experiment 2, performance differences between high- and low-FA subjects were eliminated by presenting only a specific subset of the sentences for 10 sec apiece. The implications of these results for explanations of FA effects in semantic integration are discussed.

12.
We examined the hypothesis that feeling-of-knowing (FOK) judgments rely on recollection as well as on familiarity prompted by the cue presentation. A remember/know/no-memory procedure was combined with the episodic FOK procedure, employing a cue–target pair memory task. The magnitude of FOK judgments and FOK accuracy were examined as a function of recollection, familiarity, or the "no memory" option. Results showed that the proportion of R and K responses was similar. FOK accuracy and the magnitude of FOK judgments were higher for R and K responses than for N responses. FOK accuracy for R and K responses was above chance level, but FOK was not accurate in the "no memory" condition. Finally, both FOK magnitude and FOK accuracy were related more to recollection than to familiarity. These results support the hypothesis that both recollection and familiarity are determinants of the FOK process, although they suggest that recollection has a stronger influence.

13.
To investigate the developmental reversal of sentence false memory in a text context and the effect of elaborative inference, 95 valid participants studied three text passages and then took a recognition test composed of studied sentences, connotative-inference sentences, denotative-inference sentences, and unrelated sentences. The results showed that (1) second-year senior high school students' correct recognition rate for studied sentences and corrected false recognition rate for critical lure sentences were both significantly higher than those of fifth graders and second-year junior high school students, with no significant difference between the latter two groups; (2) the false recognition rate for connotative-inference sentences was higher than that for denotative-inference sentences, especially for second-year senior high school students. Conclusions: (1) true memory for sentences increases with age; sentence false memory shows a developmental reversal, with the period from the second year of junior high school to the second year of senior high school being a relatively rapid stage in its development; (2) in a text context, different elaborative inferences induced different degrees of sentence false memory, and this elaborative-inference effect is related to the degree of top-down activation of general world knowledge.

14.
Three experiments revealed that memory for verbs is more dependent on semantic context than is memory for nouns. The participants in Experiment 1 were asked to remember either nouns or verbs from intransitive sentences. A recognition test included verbatim sentences, sentences with an old noun and a new verb, sentences with an old verb and a new noun, and entirely new sentences. Memory for verbs was significantly better when the verb was presented with the same noun at encoding and at retrieval. This contextual effect was much smaller for nouns. Experiments 2 and 3 replicated this effect and provided evidence that context effects reflect facilitation from bringing to mind the same meaning of a verb at encoding and at retrieval. Memory for verbs may be more dependent on semantic context because the meanings of verbs are more variable across semantic contexts than are the meanings of nouns.

15.
How is semantic information from different modalities integrated and stored? If related ideas are encountered in French and English, or in pictures and sentences, is the result a single representation in memory, or two modality-dependent ones? Subjects were presented with items in different modalities, then were asked whether or not subsequently presented items were identical with the former ones. Subjects frequently accepted translations and items semantically consistent with those presented earlier as identical, although not as often as they accepted items actually seen previously. The same pattern of results was found when the items were French and English sentences, and when they were pictures and sentences. The results can be explained by the hypothesis that subjects integrate information across modalities into a single underlying semantic representation. A computer model, embodying this hypothesis, made predictions in close agreement with the data.
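A toy illustration of the single-representation hypothesis (not the computer model reported in the article): items from different languages or modalities map onto one semantic code, and recognition consults only that code, so translations and semantically consistent items can be accepted as "identical". All item names and codes below are invented.

```python
# Map surface items (sentences in either language, or pictures) onto a
# single hypothetical semantic code.
semantic_code = {
    "le chien dort": "DOG-SLEEP",          # French sentence
    "the dog sleeps": "DOG-SLEEP",         # English translation
    "picture: dog sleeping": "DOG-SLEEP",  # pictorial version
    "the cat eats": "CAT-EAT",
}

studied = {"le chien dort"}
studied_codes = {semantic_code[item] for item in studied}

def judged_identical(test_item):
    # Recognition consults the shared semantic code, not the surface form.
    return semantic_code[test_item] in studied_codes

print(judged_identical("the dog sleeps"))   # True: translation accepted
print(judged_identical("the cat eats"))     # False: new meaning rejected
```

The model reported in the article presumably makes graded predictions (translations accepted somewhat less often than verbatim repetitions); this sketch only shows the shared-code idea.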

16.
Two experiments examined the roles played by semantic and surface information in reading and recognizing sentences. Subjects read sentences in normal and inverted typography. Their recognition of meaning and other sentence features was tested using sentences whose typography, wording, and/or meaning were either the same as or different from that in the first set of sentences. In Experiment 1, subjects either read aloud or performed a sentence continuation task. For originally inverted sentences, recognition of meaning was high, irrespective of task demands. For originally normal sentences, recognition was low for Read Aloud subjects and high for Sentence Continuation subjects. Sentence recognition was affected by repetition of wording and typography. Experiment 2 replicated the results with the read aloud task and showed the second reading of originally inverted sentences to be equally swift for paraphrase and verbatim test forms. It was concluded that reading and recognition are interactive processes, involving conceptually driven and data driven operations. The interaction of operations may be either automatic or controlled. While processing of normal typography is automatic, inverted typography induces controlled processing, resulting in better retention. Furthermore, semantic and surface information are conceptualized as interacting components of comprehension and memory processes.

17.
The present experiment tested the hypothesis that unconscious reconstructive memory processing can lead to the breakdown of the relationship between memory confidence and memory accuracy. Participants heard deceptive schema-inference sentences and nondeceptive sentences and were tested with either simple or forced-choice recognition. The nondeceptive items showed a positive relation between confidence and accuracy in both simple and forced-choice recognition. However, the deceptive items showed a strong negative confidence/accuracy relationship in simple recognition and a low positive relationship in forced choice. The mean levels of confidence for erroneous responses to deceptive items were inappropriately high in simple recognition but lower in forced choice. These results suggest that unconscious reconstructive memory processes involved in memory for the deceptive schema-inference items led to inaccurate confidence judgments and that, when participants were made aware of the deceptive nature of the schema-inference items through the use of a forced-choice procedure, they adjusted their confidence accordingly.

18.
Memory for sentences as a function of the syntactic complexity of the sentences was examined. Sentence complexity was varied through a manipulation that involved presenting sentences in either self-embedded forms or more standard forms. Subjects performed an incidental semantic orienting task on a set of sentences varying in complexity and were subsequently tested for their recognition memory of the sentences. In Experiment 1, subjects were tested for their memory of both surface characteristics and meaning of the sentences. There were no differences caused by sentence complexity for memory for meaning. Memory for surface structure, however, was a function of sentence complexity such that there was better memory for the more complex sentences. Experiment 2 replicated the finding that the more complex sentences produced better recognition memory for surface structure. The results are interpreted within a framework that suggests that increased syntactic complexity produces more elaboration, which in turn produces better memory.

19.
After reading or listening to short passages, Ss attempted to recognize semantically changed sentences and paraphrases (syntactically and lexically changed sentences). The intervals between the original presentation and the test ranged from 1 to 23 sec. In general, paraphrases were poorly detected after a brief time, supporting earlier findings that the exact wording of sentences is not stored in long-term memory. An exception was the high recognition of active-passive changes with visual presentation. Recognition at the first test interval was significantly better after listening than after reading, although the eventual level of recognition memory was not different in the two modes. This result, consistent with other studies of modality effects in short-term memory, suggests that acoustic-phonetic memory played a role in the storage of the auditorily presented material.

20.
English texts were constructed from propositional bases. One set of 16-word sentences was obtained from semantic bases containing from 4 to 9 propositions. For another set of sentences and paragraphs, the number of words and the number of propositions covaried. Subjects read the texts at their own rate and recalled them immediately. For the 16-word sentences, subjects needed 1.5 sec of additional reading time to process each proposition. For longer texts, this value increased. In another experimental condition, reading time was controlled by the experimenter. The analysis of both the texts and the recall protocols in terms of number of propositions lent support to the notion that propositions are a basic unit of memory for text. However, evidence was also obtained that, while the total number of propositions upon which a text was based proved to be an effective psychological variable, not all propositions were equally difficult to remember: superordinate propositions were recalled better than propositions that were structurally subordinate.
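A worked example of the reading-time relation reported above, roughly 1.5 sec of extra reading time per proposition in the 16-word sentences. The base (intercept) time is a hypothetical free parameter, not a value from the article.

```python
BASE_TIME = 6.0              # sec; illustrative intercept for a 16-word sentence
SEC_PER_PROPOSITION = 1.5    # value reported in the abstract

def predicted_reading_time(n_propositions):
    return BASE_TIME + SEC_PER_PROPOSITION * n_propositions

for n in (4, 6, 9):
    print(f"{n} propositions -> {predicted_reading_time(n):.1f} sec")
```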

