Similar Articles
A total of 20 similar articles were found.
1.
Recent studies of the mathematical relationship between time and forgetting suggest that it is a power function rather than an exponential function, a finding that has important theoretical consequences. Through computational analysis and reanalyses of published data, we demonstrate that arithmetic averaging of exponential curves can produce an artifactual power curve, particularly when there are large and systematic differences among the slopes of the component curves. A series of simulations showed that the amount of power artifact is small when the slopes of the component curves are normally or rectangularly distributed and when the performance measure is noise free. However, the simulations also showed that the artifact can be quite large, depending on the shape of the noise distribution and restrictions in the performance range. We conclude that claims concerning the form of memory functions should consider whether the data are likely to contain artifact caused by averaging or by the presence of range-restricted noise.
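The artifact described here is easy to reproduce. Below is a minimal sketch, with illustrative parameter choices rather than the authors' actual simulation settings: every simulated subject forgets along a pure exponential, the decay rates vary widely across subjects, and a power function nonetheless fits the arithmetic average better than an exponential does.

    # Sketch: arithmetic averaging of exponential forgetting curves can mimic a power law.
    # All parameter values are illustrative assumptions, not taken from the article.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    t = np.arange(1, 61)                                  # retention intervals
    slopes = rng.gamma(shape=1.0, scale=0.15, size=500)   # widely varying decay rates
    curves = np.exp(-np.outer(slopes, t))                 # each row: a pure exponential curve
    avg = curves.mean(axis=0)                             # arithmetic average over "subjects"

    def power(t, a, b):
        return a * t ** (-b)

    def exponential(t, a, b):
        return a * np.exp(-b * t)

    for name, f in [("power", power), ("exponential", exponential)]:
        params, _ = curve_fit(f, t, avg, p0=(1.0, 0.1), maxfev=10000)
        rmse = np.sqrt(np.mean((f(t, *params) - avg) ** 2))
        print(f"{name:12s} RMSE = {rmse:.4f}")
    # The power fit wins even though no component curve is a power function.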

2.
It has frequently been claimed that learning performance improves with practice according to the so-called “Power Law of Learning.” Similarly, forgetting may follow a power law. It has been shown on the basis of extensive simulations that such power laws may emerge through averaging functions with other, nonpower function shapes. In the present article, we supplement these simulations with a mathematical proof that power functions will indeed emerge as a result of averaging over exponential functions, if the distribution of learning rates follows a gamma distribution, a uniform distribution, or a half-normal distribution. Through a number of simulations, we further investigate to what extent these findings may affect empirical results in practice.
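The gamma case has a particularly clean closed form: if the decay rates λ of individual exponential curves exp(−λt) follow a gamma distribution with shape k and scale θ, the mixture average is exactly (1 + θt)^(−k), a power-type function. The snippet below only checks that identity numerically; the parameter values are arbitrary.

    # Numeric check: a gamma mixture of exponentials equals (1 + theta*t)**(-k) exactly.
    import numpy as np

    rng = np.random.default_rng(1)
    k, theta = 2.0, 0.5                        # illustrative gamma shape and scale
    rates = rng.gamma(k, theta, size=200_000)  # lambda ~ Gamma(k, theta)
    t = np.array([1.0, 5.0, 10.0, 30.0])

    monte_carlo = np.exp(-np.outer(rates, t)).mean(axis=0)   # E[exp(-lambda * t)]
    closed_form = (1.0 + theta * t) ** (-k)

    print(np.round(monte_carlo, 4))
    print(np.round(closed_form, 4))            # the two rows agree to sampling error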

3.
4.
Forgetting curves: implications for connectionist models (cited by 4: 0 self-citations, 4 by others)
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a connectionist model to account for power-function forgetting curves by using bounded weights and by generating the learning rates from a monotonically decreasing function. The bounded weights introduce exponential forgetting in each weight, and a power-function forgetting curve results when weights with different learning rates are averaged. It is argued that these assumptions are biologically reasonable. Therefore, power-function forgetting curves are a property that may be expected from biological networks. The model has an analytic solution, which is a good approximation of a power function displaced one lag in time. This function fits better than any of the 105 suggested two-parameter forgetting-curve functions when tested on the most precise recognition memory data set collected by. Unlike the power function normally used, the suggested function is defined at lag zero. Several functions for generating learning rates with a finite integral yield power-function forgetting curves; however, the type of function influences the rate of forgetting. It is shown that power-function forgetting curves cannot be accounted for by variability in performance between subjects, because this would require a distribution of performance that is not found in empirical data. An extension of the model accounts for intersecting forgetting curves found in massed and spaced repetitions. The model can also be extended to account for a faster forgetting rate in item recognition (IR) compared to associative recognition at short but not long retention intervals.

5.
Empirical forgetting curve data have been shown to follow a power function. In contrast, many connectionist models predict either an exponential decay or flat forgetting curves. This paper simulates power function forgetting curves in a Hopfield network modified to incorporate the more biologically realistic assumptions of bounded weights and a distribution of learning rates. The modified model produces power function forgetting curves. The bounded weights introduce exponential decay for individual weights, and summing exponential decays with different learning rates yields a power function forgetting curve. Because these assumptions are biologically reasonable, power function forgetting curves may be an emergent property of biological networks. The results fit empirical data and indicate that forgetting curves restrict possible implementations of models of memory.
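The summation mechanism can be seen without reimplementing the full modified Hopfield network: let many weights decay exponentially, each at its own learning rate, and average the traces. With rates spread over a broad range, the aggregate trace is well approximated by a power function displaced one lag in time, roughly a(1 + lag)^(−b). The sketch below uses arbitrary illustrative parameters and is not the authors' model code.

    # Sketch: averaging exponentially decaying weight traces with heterogeneous
    # learning rates yields an approximately displaced-power forgetting curve.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    lags = np.arange(0, 50)                        # defined at lag zero, unlike a*t**(-b)
    rates = rng.uniform(0.01, 1.0, size=2000)      # broad spread of learning/decay rates
    trace = np.exp(-np.outer(rates, lags)).mean(axis=0)

    def displaced_power(lag, a, b):
        return a * (1.0 + lag) ** (-b)

    (a, b), _ = curve_fit(displaced_power, lags, trace, p0=(1.0, 1.0))
    rmse = np.sqrt(np.mean((displaced_power(lags, a, b) - trace) ** 2))
    print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.5f}")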

6.
Wixted and Ebbesen (1991) showed that forgetting functions produced by a variety of procedures are often well described by the power function, at^(-b), where a and b are free parameters. However, all of their analyses were based on data arithmetically averaged over subjects. R. B. Anderson and Tweney (1997) argue that the power law of forgetting may be an artifact of arithmetically averaging individual subject forgetting functions that are truly exponential in form, and that geometric averaging would avoid this potential problem. We agree that researchers should always be cognizant of the possibility of averaging artifacts, but we also show that our conclusions about the form of forgetting remain unchanged (and goodness-of-fit statistics are scarcely affected) whether arithmetic or geometric averaging is used. In addition, an analysis of individual subject forgetting functions shows that they, too, are described much better by a power function than by an exponential.
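The contrast between the two averaging schemes is easy to state: the geometric mean of exponential curves exp(−b_i·t) is itself exponential, with slope equal to the mean of the individual slopes, whereas the arithmetic mean is not. The following sketch, using an arbitrary illustrative slope distribution, makes the point numerically.

    # Geometric vs. arithmetic averaging of individual exponential forgetting curves.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(1, 41)
    slopes = rng.gamma(2.0, 0.05, size=300)        # illustrative individual slopes
    curves = np.exp(-np.outer(slopes, t))          # individual curves: exp(-b_i * t)

    arith = curves.mean(axis=0)
    geom = np.exp(np.log(curves).mean(axis=0))     # geometric mean

    # The geometric mean is exactly exp(-mean(b_i) * t), i.e., still exponential:
    print(np.allclose(geom, np.exp(-slopes.mean() * t)))       # True
    # The arithmetic mean is not: its log-slope changes with the retention interval.
    print(np.round(np.diff(np.log(arith))[[0, -1]], 4))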

7.
An integrative account of short-term memory is based on data from pigeons trained to report the majority color in a sequence of lights. Performance showed strong recency effects, was invariant over changes in the interstimulus interval, and improved with increases in the intertrial interval. A compound model of binomial variance around geometrically decreasing memory described the data; a logit transformation rendered it isomorphic with other memory models. The model was generalized for variance in the parameters, where it was shown that averaging exponential and power functions from individuals or items with different decay rates generates new functions that are hyperbolic in time and in log time, respectively. The compound model provides a unified treatment of both the accrual and the dissipation of memory and is consistent with data from various experiments, including the choose-short bias in delayed recall, multielement stimuli, and Rubin and Wenzel’s (1996) meta-analyses of forgetting.

8.
Recent memory theory has emphasized the concept of need probability—that is, the probability that a given piece of learned information will be tested at some point in the future. It has been proposed that, in real-world situations, need probability declines over time and that the memory-loss rate is calibrated to match the progressive reduction in need probability (J. R. Anderson & Schooler, 1991). The present experiments were designed to examine the influence of the slope of the need-probability curve on the slope of the retention curve. On each of several trials, subjects memorized a list of digits, then retained the digits in memory for 1, 2, 4, 8, or 16 sec. Some trials ended with a recall test; other trials ended with the message, “no test.” In Experiment 1, the likelihood of encountering a memory test (i.e., the need probability) was made to either increase or decrease as the retention interval increased; in Experiment 2, need probability either was flat (invariant across retention intervals) or decreased as the retention interval increased. The results indicated that the shape of the need-probability curve influenced the slope of the retention curve (Experiment 1) and that the effect became larger as the experimental session progressed (Experiment 2). The findings support the notion that memory adapts to need probabilities and that the rate of forgetting is influenced by the slope of the need-probability curve. In addition, all of the forgetting curves approximated a power function, suggesting that need probability influences the slope but not the form of forgetting.

9.
Forgetting functions generated by delayed matching-to-sample procedures allow delay-dependent effects to be distinguished from delay-independent effects on working memory. Parameters of negative exponential functions estimate initial discriminability (intercept) and rate of forgetting (slope). Forgetting functions for patients with Alzheimer's disease indicate that they differ from normal controls in terms of reduced initial discriminability—that is, in the encoding component of memory performance—but not convincingly in rate of forgetting. Reanalyses of previous studies with different species suggest that pro- and anticholinergic drugs influence initial discriminability in delayed matching-to-sample performance, but not rate of forgetting. The results of our reanalyses are consistent with the conclusion that the cholinergic system plays a role in the encoding component of working memory and that this is the main characteristic of the memory deficit shown by patients with Alzheimer's disease.

10.
The present experiment investigated performance in perceptual averaging of line ensembles during maintenance of minimal and near-span memory loads of digits. Observers memorized a four- to seven-digit number (high load) or a zero (low load) prior to a brief exposure (500 ms) of an ensemble of nine horizontal lines of various lengths. A subsequent probe line was then classified by observers as greater than or less than the ensemble average length, followed by serial recall of the memory load. Slope analysis of the psychometric functions relating p(“greater than”) to the probe-to-ensemble-mean-size ratio showed an advantage (steeper slope and therefore smaller threshold) for averaging under high-load compared with low-load conditions. Reaction time analysis indicated that faster probe responses were more accurate than slower responses.
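The slope-and-threshold comparison rests on fitting a psychometric function to the proportion of "greater than" responses at each probe-to-mean ratio. A generic logistic fit of that kind is sketched below; the ratios and proportions are invented placeholders, not the study's data.

    # Generic psychometric-function fit: a steeper slope implies a smaller threshold.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, mu, s):
        return 1.0 / (1.0 + np.exp(-(x - mu) / s))

    ratios = np.array([0.80, 0.90, 0.95, 1.00, 1.05, 1.10, 1.20])     # probe / ensemble mean
    p_greater = np.array([0.05, 0.18, 0.35, 0.52, 0.68, 0.84, 0.96])  # invented proportions

    (mu, s), _ = curve_fit(logistic, ratios, p_greater, p0=(1.0, 0.05))
    print(f"point of subjective equality = {mu:.3f}, slope parameter s = {s:.3f}")
    # A smaller s means a steeper psychometric function and hence a lower
    # discrimination threshold, the sense in which high load showed an advantage.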

11.
Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual’s semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals have a small-world structure with short distances between words and high clustering. The distribution of links follows a power law truncated by an exponential cutoff, meaning that most words are poorly connected and a minority of words has a high, although bounded, number of connections. Existing aggregate networks mirror the individual link distributions, and so they are not scale-free, as has been previously assumed; still, there are properties of individual structure that the aggregate networks do not reflect. A simulation of the new sampling process suggests that it can uncover the true structure of an individual’s semantic memory.

12.
Opfer JE, Siegler RS, Young CJ. Developmental Science, 2011, 14(5): 1194-1204; discussion 1205-1206.
Barth and Paladino (2011) argue that changes in numerical representations are better modeled by a power function whose exponent gradually rises to 1 than as a shift from a logarithmic to a linear representation of numerical magnitude. However, the fit of the power function to number line estimation data may simply stem from fitting noise generated by averaging over changing proportions of logarithmic and linear estimation patterns. To evaluate this possibility, we used conventional model fitting techniques with individual as well as group average data; simulations that varied the proportion of data generated by different functions; comparisons of alternative models' predictions of new data; and microgenetic analyses of rates of change in experiments on children's learning. Both new data and individual participants' data were predicted less accurately by power functions than by logarithmic and linear functions. In microgenetic studies, changes in the best fitting power function's exponent occurred abruptly, a finding inconsistent with Barth and Paladino's interpretation that development of numerical representations reflects a gradual shift in the shape of the power function. Overall, the data support the view that change in this area entails transitions from logarithmic to linear representations of numerical magnitude.
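The model-fitting step at issue compares three candidate forms for estimates y of a presented number x: logarithmic (y = a + b·ln x), linear (y = a + b·x), and power (y = a·x^b). A minimal version of that comparison, run on invented estimates rather than the authors' data, is sketched below.

    # Minimal comparison of logarithmic, linear, and power models of number-line
    # estimation. The x-y pairs are invented for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    x = np.array([2, 5, 18, 34, 56, 78, 100, 150, 366, 629, 755, 938], dtype=float)
    y = np.array([90, 130, 240, 300, 360, 390, 420, 460, 610, 750, 810, 900], dtype=float)

    models = {
        "logarithmic": lambda x, a, b: a + b * np.log(x),
        "linear":      lambda x, a, b: a + b * x,
        "power":       lambda x, a, b: a * x ** b,
    }

    for name, f in models.items():
        params, _ = curve_fit(f, x, y, p0=(1.0, 1.0), maxfev=10000)
        rmse = np.sqrt(np.mean((f(x, *params) - y) ** 2))
        print(f"{name:12s} RMSE = {rmse:.1f}")
    # The article's argument is that such fits must be evaluated on individual and
    # held-out data, not only on group averages.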

13.
Woodworth (1938) reported that naming latency increased linearly with the number of digits per number (number length). In the present study, the Sternberg memory scanning paradigm was utilized to investigate this effect. It was found that the slope of the memory scanning function increased linearly with number length: memory scanning time was 40 msec for one-digit numbers, 70 msec for two-digit numbers, and 101 msec for three-digit numbers. The intercepts of the memory scanning functions did not differ for the three types of numbers. Thus the increase in latency may be due to the memory comparison stage of processing. The data suggest that a memory comparison operation occurs for each digit position of complex memory items composed of more than one digit.
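The reported slopes (40, 70, and 101 msec per item for one-, two-, and three-digit numbers) rise by roughly 30 msec per additional digit position, which is what a digit-by-digit comparison stage predicts. The lines below simply make that arithmetic explicit using the values quoted in the abstract.

    # Scanning slope as a function of number length (values from the abstract).
    import numpy as np

    digits = np.array([1, 2, 3])
    slope_ms = np.array([40, 70, 101])        # msec per memory-set item

    rate_per_digit, intercept = np.polyfit(digits, slope_ms, 1)
    print(f"about {rate_per_digit:.1f} msec of scanning time per digit position "
          f"(intercept {intercept:.1f} msec)")   # roughly 30.5 msec per digit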

14.
In the study of nonspecific preparation, the response time (RT) to an imperative stimulus is analyzed as a function of the foreperiod (FP), the interval between a warning stimulus and the imperative stimulus. When FP is varied within blocks of trials, a downward sloping FP-RT function is usually observed. The slope of this function depends on the distribution of FPs (the more negative the skewness, the steeper the slope) and on intertrial sequences of FP (the longer the FP on the preceding trial, the steeper the slope). Because these determinants are confounded, we examined whether FP-RT functions, observed under three different FP distributions (i.e., uniform, exponential, and peaked), can be predicted, one from the other, by reweighting sequential effects. It turned out that reweighting explained very little variance of the difference between the FP-RT functions, suggesting a dominant role of temporal orienting strategies.

15.
Successful prospective remembering (PR) comprises at least two components: one retrospective component that refers to the encoding and retrieval of the content of the intention, and a second prospective component that involves the retrieval of the intended action at the appropriate moment. Whereas the retrospective component is very similar to memory skills like learning and retention of new information, the prospective component is thought to rely mainly on executive function. Head injuries can disturb PR during both of these stages. To disentangle the relative impact of executive functions and retrospective memory processes on PR, we embedded a PR task in a 2-back verbal working memory paradigm. Fifty-six neurological patients with brain damage of various aetiologies were divided pre-experimentally into four groups on the basis of their delayed recall index in the Wechsler Memory Scale-Revised (WMS-R, indicating absence or presence of deficits in retrospective memory functions) and their age-corrected score in the Behavioural Assessment of the Dysexecutive Syndrome (BADS, indicating absence or presence of deficits in executive functions). Additionally, 19 controls matched for age and education were examined. We found that patients with deficits in executive functions detected fewer cues than any other group, irrespective of their retrospective memory performance. However, eight patients with severe anterograde memory deficits could retain neither the intention nor its content. Thus, intactness of the retrospective component seems to be a necessary but not sufficient prerequisite for successful prospective remembering. For the execution of the intended action itself, executive functions play a critical role.

16.
Psychologists have debated the form of the forgetting curve for over a century. We focus on resolving three problems that have blocked a clear answer on this issue. First, we analyzed data from a longitudinal experiment measuring cued recall and stem completion from 1 min to 28 days after study, with more observations per interval per participant than in previous studies. Second, we analyzed the data using hierarchical models, avoiding distortions due to averaging over participants. Third, we implemented the models in a Bayesian framework, enabling our analysis to account for the ability of candidate forgetting functions to imitate each other. An exponential function provided the best fit to individual participant data collected under both explicit and implicit retrieval instructions, but Bayesian model selection favored a power function. All analyses supported above-chance asymptotic retention, suggesting that, despite quite brief study, storage of some memories was effectively permanent.
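The core of the model comparison is fitting forgetting functions that allow a nonzero asymptote and penalizing their flexibility. A deliberately simplified, non-hierarchical version for a single participant is sketched below with invented retention values; the article's actual analysis was hierarchical and Bayesian, which additionally accounts for the functions' ability to mimic one another.

    # Simplified single-participant comparison of power vs. exponential forgetting,
    # each with an asymptote. Retention values are invented placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1/1440, 1/24, 1, 2, 7, 14, 28], dtype=float)   # days since study
    p = np.array([0.92, 0.71, 0.55, 0.50, 0.42, 0.39, 0.37])     # proportion recalled

    def pow_asym(t, a, b, c):
        return c + (a - c) * (1.0 + t) ** (-b)

    def exp_asym(t, a, b, c):
        return c + (a - c) * np.exp(-b * t)

    n = len(t)
    for name, f in [("power + asymptote", pow_asym), ("exponential + asymptote", exp_asym)]:
        params, _ = curve_fit(f, t, p, p0=(0.95, 0.5, 0.3), maxfev=20000)
        rss = np.sum((f(t, *params) - p) ** 2)
        bic = n * np.log(rss / n) + 3 * np.log(n)   # Gaussian-error BIC, 3 free parameters
        print(f"{name:24s} BIC = {bic:.2f}")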

17.
The power function is treated as the law relating response time to practice trials. However, the evidence for a power law is flawed, because it is based on averaged data. We report a survey that assessed the form of the practice function for individual learners and learning conditions in paradigms that have shaped theories of skill acquisition. We fit power and exponential functions to 40 sets of data representing 7,910 learning series from 475 subjects in 24 experiments. The exponential function fit better than the power function in all the unaveraged data sets. Averaging produced a bias in favor of the power function. A new practice function based on the exponential, the APEX function, fit better than a power function with an extra, preexperimental practice parameter. Clearly, the best candidate for the law of practice is the exponential or APEX function, not the generally accepted power function. The theoretical implications are discussed.
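For reference, the competing practice functions are usually written with an asymptote A, a learnable amount B, and a rate parameter; the APEX function multiplies the exponential and power components together. The exact parameterization shown below is a common convention and should be read as an assumption, not as a quotation of the article.

    % candidate practice functions for expected response time after N practice trials
    \begin{aligned}
    \text{power:}       \quad & \mathrm{RT}_N = A + B\,N^{-\beta} \\
    \text{exponential:} \quad & \mathrm{RT}_N = A + B\,e^{-\alpha N} \\
    \text{APEX:}        \quad & \mathrm{RT}_N = A + B\,e^{-\alpha N}\,N^{-\beta}
    \end{aligned}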

18.
A classic law of cognition is that forgetting curves are closely approximated by power functions. This law describes relations between different empirical dependent variables and the retention interval, and the precise form of the functional relation depends on the scale used to measure each variable. In the research reported here, we conducted a recognition task involving both short- and long-term probes. We discovered that formal memory-strength parameters from an exemplar-recognition model closely followed a power function of the lag between studied items and a test probe. The model accounted for rich sets of response time (RT) data at both individual-subject and individual-lag levels. Because memory strengths were derived from model fits to choices and RTs from individual trials, the psychological power law was independent of the scale used to summarize the forgetting functions. Alternative models that assumed different functional relations or posited a separate fixed-strength working memory store fared considerably worse than the power-law model did in predicting the data.

19.
Ratings of the degree of association between words are linearly related to normed associative strengths, but the intercept is high, and the slope is shallow (the judgments of associative memory [JAM] function). Two experiments included manipulations intended to decrease the intercept and increase the slope. Discrimination training on many pairs of words and constraining ratings to sum to a constant both reduced the intercept but failed to change the slope. The intercept of the JAM function appears to contain a bias component that can be manipulated independently of the slope, which reflects sensitivity to associative strengths.

20.
Visual search data were collected from six Ss on three target set sizes on each of 30 days. Error level was low, and items assigned to memory sets were nonnested and changed from session to session. For each S, the same item sometimes required a positive and sometimes a negative response (response inconsistency). Combining data over Ss and over successive 6-day blocks, visual search rates as a function of target set size were found to be linear for each of the five 6-day blocks. The slopes of the above functions (memory search time) did not differ significantly over the final four 6-day blocks, and averaged approximately .500 sec per six-character item. These results are qualitatively very similar to results obtained from item recognition studies when error level, memory set structure, degree of response consistency, and practice are handled in the same way in that task. The significantly lower slope obtained on the first 6-day block is shown to be consistent with a speed-accuracy trade-off interpretation when error rate is expressed per unit of processing time (percent errors/set size). Over the final three 6-day blocks, where all important parameters of the data were highly stable, the intercepts of the memory search functions were found to closely approximate zero, averaging .0068 sec. From this finding, along with the finding that the memory search functions are linear, it is inferred that visual search time is determined entirely by memory search time, or by memory search time and other processes which increase linearly with set size, under the conditions of this experiment. The estimate of memory search time (approximately 83 msec/character) obtained using this visual search procedure is much slower than that obtained using the item recognition procedure (approximately 35-40 msec/character). An explanation for this difference is proposed.
