Similar documents
1.
Clinical evidence based on real-world data (RWD) is accumulating exponentially, providing ever larger sample sizes that demand novel methods able to deal with the increased heterogeneity of the data. Here, we used RWD to assess the prediction of cognitive decline in a large, heterogeneous sample of participants enrolled in cognitive stimulation, a problem that is of great interest to clinicians but that is riddled with difficulties and limitations. More precisely, from a multitude of neuropsychological Training Materials (TMs), we asked whether it was possible to accurately predict an individual's cognitive decline one year after being tested. In particular, we performed longitudinal modelling of the scores obtained from 215 different tests, grouped into 29 cognitive domains, for a total of 124,610 instances from 7902 participants (40% male, 46% female, 14% not indicated), each performing an average of 16 tests. Employing a machine learning approach based on ROC analysis and cross-validation techniques to guard against overfitting, we show that TMs belonging to several cognitive domains can accurately predict cognitive decline, while others perform poorly, suggesting that the ability to predict decline one year later is not specific to any particular domain but is rather widely distributed across domains. Moreover, when addressing the same problem within individuals sharing a diagnosed condition, we found that some domains yielded more accurate classification for conditions such as Parkinson's disease and Down syndrome, whereas they were less accurate for Alzheimer's disease or multiple sclerosis. Future research should combine approaches similar to ours with standard neuropsychological measurements to enhance interpretability and the possibility of generalizing across different cohorts.
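A minimal sketch of the kind of cross-validated ROC analysis described above, written in Python with scikit-learn on synthetic data. The feature layout (one summary score per cognitive domain), the binary decline label, and the logistic-regression classifier are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: cross-validated ROC analysis for a binary "decline vs. no decline"
# label, loosely analogous to the approach in the abstract. The synthetic features
# and the logistic-regression classifier are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, n_scores = 500, 29          # e.g., one summary score per cognitive domain
X = rng.normal(size=(n_participants, n_scores))
# Synthetic ground truth: decline probability depends on a few domains.
logit = X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```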

2.
3.
This paper reports an experiment testing two hypotheses. The first is that the value or utility associated with a payment to one's self and a payment to a co-worker can be represented as an additive function of a utility for one's own payment (nonsocial utility) and a utility for the difference between one's own and the other's payment (social utility). The second hypothesis is that changes in the amount of work accomplished by one's self and/or the other should influence the social, but not the nonsocial, utilities. Support for both hypotheses is reported.
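A minimal sketch of the additive representation in the first hypothesis: total utility as the sum of a nonsocial utility of one's own payment and a social utility of the payment difference. The specific functional forms below are illustrative assumptions, not the forms estimated in the experiment.

```python
# Hedged sketch of the additive utility hypothesis. The linear and quadratic
# component forms are assumptions for illustration only.
def nonsocial_utility(own_payment: float) -> float:
    return own_payment                      # assumed: utility linear in own payment

def social_utility(own_payment: float, other_payment: float) -> float:
    diff = own_payment - other_payment
    return -0.5 * diff ** 2                 # assumed: disutility grows with the payment gap

def total_utility(own_payment: float, other_payment: float) -> float:
    return nonsocial_utility(own_payment) + social_utility(own_payment, other_payment)

print(total_utility(10.0, 10.0), total_utility(10.0, 4.0))
```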

4.
Five-year changes in episodic and semantic memory were examined in a sample of 829 participants (35-80 years). A cohort-matched sample (N=967) was assessed to control for practice effects. For episodic memory, cross-sectional analyses indicated gradual age-related decrements, whereas the longitudinal data revealed no decrements before age 60, even when practice effects were adjusted for. Longitudinally, semantic memory showed minor increments until age 55, with smaller decrements in old age as compared with episodic memory. Cohort differences in educational attainment appear to account for the discrepancies between cross-sectional and longitudinal data. Collectively, the results show that age trajectories for episodic and semantic memory differ and underscore the need to control for cohort and retest effects in cross-sectional and longitudinal studies, respectively.

5.
Several investigators have fit psychometric functions to data from adaptive procedures for threshold estimation. Although the threshold estimates are in general quite accurate, one encounters a slope bias that has not been explained until now. The present paper demonstrates slope bias for parametric and nonparametric maximum-likelihood fits and for Spearman-Kärber analysis of adaptive data. The examples include staircase and stochastic approximation procedures. The paper then presents an explanation of slope bias based on serial data dependency in adaptive procedures. Data dependency is first illustrated with simple two-trial examples and then extended to realistic adaptive procedures. Finally, the paper presents an adaptive staircase procedure designed to measure threshold and slope directly. In contrast to classical adaptive threshold-only procedures, this procedure varies both a threshold and a spread parameter in response to double trials.
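A minimal sketch of the setting discussed above: simulate an adaptive staircase on an observer with a logistic psychometric function, then fit threshold and slope by maximum likelihood to the adaptively collected trials. All parameter values, the 1-up/1-down rule, and the logistic form are illustrative assumptions, not the procedures analyzed in the paper.

```python
# Hedged sketch: adaptive staircase data fitted with a logistic psychometric
# function by maximum likelihood. Parameter values are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
true_threshold, true_slope = 0.0, 1.5

def p_yes(x, threshold, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

# Run a 1-up/1-down staircase on the simulated observer.
level, step, levels, responses = 2.0, 0.25, [], []
for _ in range(200):
    r = rng.random() < p_yes(level, true_threshold, true_slope)
    levels.append(level)
    responses.append(int(r))
    level += -step if r else step          # 1-up/1-down rule

levels, responses = np.array(levels), np.array(responses)

def neg_log_lik(params):
    thr, slp = params
    p = np.clip(p_yes(levels, thr, slp), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.5, 1.0], method="Nelder-Mead")
print("estimated threshold, slope:", fit.x, "true:", (true_threshold, true_slope))
```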

7.
Previous research has demonstrated adults' difficulties with explicitly forecasting exponential processes: exponential growth is usually grossly underestimated, whereas exponential decline is forecast more accurately. By contrast, the present study examined implicit knowledge about exponential processes and how it is affected by function type (growth versus decline) in samples of 7-, 10-, and 14-year-olds and adults (N=80). Different indicators of forecast quality were investigated. Contrary to previous findings, participants of all age groups estimated exponential decline less adequately than exponential growth. This effect could be attributed mainly to the fact that, relative to fitted exponential functions, the starting value (intercept) of the function was approximated well for exponential growth but poorly for exponential decline. The accuracy of the non-linear component of the forecast functions barely differed between function types within the same age group. Furthermore, even 7-year-olds appeared to have a preliminary understanding of exponential processes, and both the intercepts and the exponents of the forecasts became more accurate with age. Theoretical and practical implications are discussed.
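A minimal sketch of the curve-fitting step implied above: recovering an intercept and an exponent from a participant's forecasts by fitting an exponential function. The forecast values and starting guesses below are synthetic assumptions for illustration.

```python
# Hedged sketch: fit an exponential function to forecast data, recovering an
# intercept and a rate (exponent). The data below are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, intercept, rate):
    return intercept * np.exp(rate * t)

t = np.arange(8, dtype=float)
forecasts = np.array([100, 85, 70, 62, 50, 45, 38, 33], dtype=float)  # assumed decline forecasts

params, _ = curve_fit(exponential, t, forecasts, p0=(100.0, -0.1))
print("fitted intercept and rate:", params)
```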

8.
This research investigated the effect on power spectra of applying data-smoothing functions to EEG data before submitting them to an FFT. We compared two options: applying no smoothing function and applying the Parzen smoothing function. We developed a program to evaluate each option with both real and standard data. When a data set is smoothed before being submitted to an FFT, the resulting power spectra differ in statistically significant ways from those obtained without smoothing. This finding holds for standard waveforms as well as for real EEG data.
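A minimal sketch of the comparison described above: compute the power spectrum of a signal with and without a Parzen window applied before the FFT. The synthetic "EEG-like" signal, sampling rate, and record length are assumptions for illustration; this is not the authors' evaluation program.

```python
# Hedged sketch: power spectrum with and without a Parzen smoothing window applied
# before the FFT. The synthetic signal is an assumption for illustration.
import numpy as np
from scipy.signal.windows import parzen

fs, n = 256, 1024                                  # assumed sampling rate (Hz) and length
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).normal(size=n)

spectrum_raw = np.abs(np.fft.rfft(signal)) ** 2
spectrum_parzen = np.abs(np.fft.rfft(signal * parzen(n))) ** 2

freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = np.argmax(spectrum_raw)
print(f"peak at {freqs[peak]:.1f} Hz; power raw vs Parzen-windowed:",
      spectrum_raw[peak], spectrum_parzen[peak])
```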

9.
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary (SJ2) or ternary (SJ3) variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for the SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common cases in which two or even all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for the estimated parameters, and a further routine obtains performance measures from the fitted functions. An R package for Windows and the source code of the MATLAB and R routines are available as Supplementary Files.
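A minimal Monte Carlo sketch of the independent-channels idea described above: exponentially distributed arrival latencies for the two stimuli and a ternary decision rule based on the arrival-time difference. The rate parameters and the fixed decision window delta are illustrative assumptions, not the fitted model or the MATLAB/R routines from the article.

```python
# Hedged sketch: independent-channels account of synchrony/temporal-order judgments
# with exponential arrival latencies and a ternary decision rule. Parameter values
# and the decision window are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)

def judge(soa_ms, rate1=1 / 60, rate2=1 / 60, delta=40.0, n_trials=10000):
    """Return proportions of 'first-first', 'simultaneous', and 'second-first' responses."""
    arrival1 = rng.exponential(1 / rate1, n_trials)              # latency of stimulus 1
    arrival2 = soa_ms + rng.exponential(1 / rate2, n_trials)     # stimulus 2 onset + latency
    d = arrival2 - arrival1
    return ((d > delta).mean(), (np.abs(d) <= delta).mean(), (d < -delta).mean())

for soa in (-100, -50, 0, 50, 100):
    print(soa, judge(soa))
```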

10.
High-performance computing becomes essential to statistical analysis when the database is massive or the number of computations per data element is large. Albert F. Anderson (1997) discusses the application of high-performance computing to massive databases; J. O. Ramsay's (Ramsay, Heckman, & Silverman, 1997) estimation problems potentially require large numbers of computations per data element.

11.
APL functions designed to provide labeled plots and histograms are described. Support functions that augment a data file with the information necessary to label output and to maintain a common plotting scale are also described. APL code and illustrative output are presented.

12.
The authors present a program of service and research with preschool children that has been shown to be effective in producing positive preventive outcomes.

13.
We present an application, built in Excel, that can solve for the best-fitting parameters of multinomial models. Multinomial modeling has become increasingly popular and can be used in a variety of domains, such as memory and perception, in which processes are assumed to be dissociable. The application works with a variety of psychological models and runs on both PC and Macintosh platforms. We illustrate the use of the program by analyzing data from a source memory experiment.
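A minimal sketch of the same idea in Python rather than Excel: maximum-likelihood fitting of a simple multinomial processing-tree model (here a two-high-threshold recognition model with one detection and one guessing parameter). The model choice and the response counts are assumptions for illustration; this is not the authors' application.

```python
# Hedged sketch: maximum-likelihood fit of a two-high-threshold multinomial model.
# The response counts below are synthetic assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

old_counts = np.array([80, 20])     # hits, misses (assumed data)
new_counts = np.array([30, 70])     # false alarms, correct rejections (assumed data)

def neg_log_lik(params):
    d, g = params                                     # detection and guessing parameters
    p_old = np.array([d + (1 - d) * g, (1 - d) * (1 - g)])
    p_new = np.array([(1 - d) * g, d + (1 - d) * (1 - g)])
    p = np.clip(np.concatenate([p_old, p_new]), 1e-12, 1.0)
    return -np.sum(np.concatenate([old_counts, new_counts]) * np.log(p))

fit = minimize(neg_log_lik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
print("estimated detection and guessing parameters:", fit.x)
```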

14.
A method is presented to provide estimates of the parameters of specified nonlinear equations from ordinal data generated by a crossed design. The analytic method, NOPE, is an iterative procedure in which monotone regression and the Gauss-Newton method of least squares are applied alternately until a measure of stress is minimized. Examples of solutions from artificial data are presented, together with examples of applications of the method to experimental results. This work was begun while the author was on sabbatical leave during 1970-71 at the Department of Mathematical Psychology, University of Nijmegen, the Netherlands, where discussions with E. E. Roskam on the problem were very helpful. Support was provided by Grant A0151 from the Natural Sciences and Engineering Council, Canada.
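A rough sketch of a NOPE-like iteration: ordinal responses from a crossed design are fitted with a specified nonlinear equation by alternating a nonlinear least-squares step with a monotone (isotonic) regression step until a stress measure stabilizes. The multiplicative power model, the synthetic ranks, and the stress definition are assumptions for illustration; this is not the original NOPE program.

```python
# Hedged sketch of alternating least squares and monotone regression on ordinal
# data from a crossed design. Model form, data, and stress are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.isotonic import IsotonicRegression

def model(factors, a, b):
    u, v = factors
    return (u ** a) * (v ** b)                         # assumed nonlinear equation

rng = np.random.default_rng(4)
u, v = np.meshgrid(np.arange(1.0, 4.0), np.arange(1.0, 5.0), indexing="ij")
u, v = u.ravel(), v.ravel()                            # 3 x 4 crossed design
true_values = model((u, v), 1.3, 0.7) + 0.2 * rng.normal(size=u.size)
ranks = np.argsort(np.argsort(true_values)) + 1        # ordinal data: ranks 1..12

targets = ranks.astype(float)                          # start from the ranks themselves
params = (1.0, 1.0)
for _ in range(25):
    params, _ = curve_fit(model, (u, v), targets, p0=params)   # least-squares step
    pred = model((u, v), *params)
    targets = IsotonicRegression().fit_transform(ranks, pred)  # monotone regression step
    stress = np.sqrt(np.sum((pred - targets) ** 2) / np.sum(pred ** 2))

print("estimated exponents:", params, "stress:", round(stress, 4))
```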

15.
16.
An instance of fruitful cross-disciplinary contact is examined in detail. The ideas involved include (1) the double-bind hypothesis for schizophrenia, (2) the critique of game theory from the viewpoint of anthropology and psychiatry, and (3) the application of concepts from communication theory and the theory of logical types to an interpretation of psychoanalytic practice. The protagonists of the interchange are Gregory Bateson and the two mathematicians Norbert Wiener and John von Neumann; the date, March 1946. This interchange and its sequels are described. While the interchanges between Bateson and Wiener were fruitful, those between Bateson and von Neumann were much less so: the latter two held conflicting premises concerning what is significant in science, whereas Bateson's and Wiener's were compatible. In 1946, Wiener suggested that information and communication might be appropriate central concepts for psychoanalytic theory--a vague general idea which Bateson (with Ruesch) related to contemporary clinical practice. For Bateson, Wiener, and von Neumann, the cross-disciplinary interactions foreshadowed a shift in activities and new roles in society, to which the post-World War II period was conducive. Von Neumann became a high-level government advisor; Wiener, an interpreter of science and technology for the general public; and Bateson, a counter-culture figure.

17.
Schwarz (2001, 2002) proposed the ex-Wald distribution, obtained from the convolution of Wald and exponential random variables, as a model of simple and go/no-go response time. This article provides functions for the S-PLUS package that produce maximum likelihood estimates of the parameters of the ex-Wald, as well as of the shifted Wald and ex-Gaussian, distributions. In a Monte Carlo study, the efficiency and bias of the parameter estimates were examined. Results indicated that samples of at least 400 are necessary to obtain adequate estimates of the ex-Wald and that, for some parameter ranges, much larger samples may be required. For shifted Wald estimation, smaller samples of around 100 were adequate, at least when fits identified by the software as having ill-conditioned maxima were excluded. The use of all functions is illustrated using data from Schwarz (2001). The S-PLUS functions and Schwarz's data may be downloaded from the Psychonomic Society's Web archive, www.psychonomic.org/archive/.
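Not the S-PLUS functions from the article, but a rough scipy-based sketch of the same kind of maximum-likelihood fitting for two of the distributions mentioned above (ex-Gaussian and shifted Wald) on synthetic response-time data; scipy has no built-in ex-Wald density, and the generating parameters below are assumptions for illustration.

```python
# Hedged sketch: ML fits of an ex-Gaussian (scipy's exponnorm) and a shifted Wald
# (scipy's invgauss with a free shift) to synthetic RT data, in seconds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rt = rng.normal(0.4, 0.05, 1000) + rng.exponential(0.1, 1000)   # assumed RT data

# Ex-Gaussian: exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma.
K, mu, sigma = stats.exponnorm.fit(rt)
print("ex-Gaussian: mu=%.3f sigma=%.3f tau=%.3f" % (mu, sigma, K * sigma))

# Shifted Wald: inverse Gaussian with a free location (shift) parameter.
shape, loc, scale = stats.invgauss.fit(rt)
print("shifted Wald: shape=%.3f shift=%.3f scale=%.3f" % (shape, loc, scale))
```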

18.
Many models offer different explanations of learning processes, and some of them predict equal learning rates between conditions. The simplest way to assess this equality is to estimate the curvature parameter for each condition and then apply a statistical test. However, this approach is highly dependent on the fitting procedure, which may come with built-in biases that are difficult to identify. Averaging the data per block of training helps reduce the noise present in the trial data, but averaging introduces a severe distortion of the curve, which can no longer be fitted by the original function. In this article, we first demonstrate the distortion that results from block averaging. The block-average learning function, once known, can be used to extract parameters when performance is averaged over blocks or sessions. The use of averages eliminates an important part of the noise present in the data and allows good recovery of the learning curve parameters. Equality of curvatures can be tested with a linear hypothesis test. This method can be performed on trial data or on block-averaged data, but it is more powerful with block-averaged data.
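A minimal sketch of the averaging step discussed above: fit an exponential learning curve to trial-level data and to block-averaged data from the same synthetic learner. The model form, noise level, and block size are assumptions for illustration; the corrected block-average learning function derived in the article is not reproduced here.

```python
# Hedged sketch: exponential learning-curve fits to trial data and block averages.
# All generating values are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, asymptote, gain, rate):
    return asymptote - gain * np.exp(-rate * t)

rng = np.random.default_rng(6)
trials = np.arange(1, 201, dtype=float)
performance = learning_curve(trials, 0.9, 0.5, 0.03) + 0.15 * rng.normal(size=trials.size)

# Trial-level fit.
p_trial, _ = curve_fit(learning_curve, trials, performance, p0=(0.8, 0.5, 0.05))

# Block-averaged fit (blocks of 20 trials), indexed by block midpoints.
block = 20
perf_block = performance.reshape(-1, block).mean(axis=1)
trials_block = trials.reshape(-1, block).mean(axis=1)
p_block, _ = curve_fit(learning_curve, trials_block, perf_block, p0=(0.8, 0.5, 0.05))

print("trial-level parameters:", p_trial)
print("block-average parameters:", p_block)
```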

19.
The Individuals with Disabilities Education Act requires that an Individualized Education Program (IEP) be developed for each child who receives special education services. To develop the most effective IEP, information is gathered from everyone who has worked with the child. In many schools the child receives early intervention services prior to referral to special education. One early intervention program used with first-grade children who are falling behind in reading and writing is Reading Recovery®. The detailed information gathered as part of this program is invaluable and may facilitate the development of appropriate literacy goals. This article discusses the information collected in the Reading Recovery program and provides an example of how it can be used to support the development of IEP literacy goals.

20.
