Similar Literature

20 similar documents found.
1.
We demonstrate some procedures in the statistical computing environment R for obtaining maximum likelihood estimates of the parameters of a psychometric function by fitting a generalized nonlinear regression model to the data. A feature for fitting a linear model to the threshold (or other) parameters of several psychometric functions simultaneously provides a powerful tool for testing hypotheses about the data and, potentially, for reducing the number of parameters necessary to describe them. Finally, we illustrate procedures for treating one parameter as a random effect that would permit a simplified approach to modeling stimulus-independent variability due to factors such as lapses or interobserver differences. These tools will facilitate a more comprehensive and explicit approach to the modeling of psychometric data.
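As an illustration of the kind of fit the article automates in R, here is a minimal standalone sketch, not the authors' code: the stimulus levels, response counts, and the symmetric-lapse parameterization are invented. It fits a cumulative-Gaussian psychometric function by binomial maximum likelihood with optim().

    x <- c(0.5, 1, 2, 4, 8, 16)                 # stimulus intensities (hypothetical)
    k <- c(3, 5, 12, 18, 24, 25)                # "yes" responses out of 25 trials each
    n <- rep(25, length(x))

    negloglik <- function(p) {
      alpha  <- exp(p[1])                       # threshold (> 0)
      beta   <- exp(p[2])                       # slope (> 0)
      lambda <- plogis(p[3])                    # lapse rate, squeezed into (0, 1)
      prob <- lambda / 2 + (1 - lambda) * pnorm(beta * (log(x) - log(alpha)))
      -sum(dbinom(k, n, pmin(pmax(prob, 1e-9), 1 - 1e-9), log = TRUE))
    }

    fit <- optim(c(log(2), 0, -4), negloglik)   # ML estimates via Nelder-Mead
    exp(fit$par[1:2])                           # fitted threshold and slope

The log-parameterization keeps threshold and slope positive without constrained optimization, a common trick when handing likelihoods to a generic optimizer.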

2.
Conventionally, fitting a mathematical model to empirically derived data is achieved by varying model parameters to minimize the deviations between expected and observed values in the dependent dimension. However, when functions to be fit are multivalued (e.g., an ellipse), conventional model fitting procedures fail. A novel (n+1)-dimensional [(n+1)-D] model fitting procedure is presented which can solve such problems by transforming the n-D model and data into (n+1)-D space and then minimizing deviations in the constructed dimension. While the (n+1)-D procedure provides model fits identical to those obtained with conventional methods for single-valued functions, it also extends parameter estimation to multivalued functions.
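A minimal sketch of the (n+1)-D idea for one multivalued model, an axis-aligned ellipse (simulated data; not the authors' implementation): lift the 2-D model into 3-D via a constructed dimension z = F(x, y; theta), place the data at z = 0, and minimize deviations along z.

    set.seed(1)
    t  <- runif(200, 0, 2 * pi)                 # points around a "true" ellipse
    xy <- cbind(3 + 2 * cos(t), -1 + sin(t)) + matrix(rnorm(400, 0, 0.05), ncol = 2)

    zdev <- function(p) {                       # p = (x0, y0, log a, log b)
      a <- exp(p[3]); b <- exp(p[4])
      z <- ((xy[, 1] - p[1]) / a)^2 + ((xy[, 2] - p[2]) / b)^2 - 1
      sum(z^2)                                  # deviations along the constructed z
    }
    optim(c(colMeans(xy), 0, 0), zdev)$par      # approx (3, -1, log 2, 0)

No y = f(x) form is ever needed; the implicit equation of the ellipse supplies the constructed dimension directly.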

3.
Three methods for fitting the diffusion model (Ratcliff, 1978) to experimental data are examined. Sets of simulated data were generated with known parameter values, and from fits of the model, we found that the maximum likelihood method outperformed the chi-square and weighted least squares methods on two criteria: bias in the parameter estimates relative to the values used to generate the data, and the standard deviations of the parameter estimates. These standard deviations can serve as measures of the variability in parameter estimates from fits to experimental data. When we introduced contaminant reaction times and variability into the components of processing other than the decision process, the maximum likelihood and chi-square methods failed, sometimes dramatically, whereas the weighted least squares method was robust to both factors. We then present results from modifications of the maximum likelihood and chi-square methods in which these factors are explicitly modeled, and show that the parameter values of the diffusion model are recovered well. We argue that explicit modeling is an important method for addressing contaminants and variability in nondecision processes and that it can be applied in any theoretical approach to modeling reaction time.
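The two-boundary diffusion density is an infinite series and is not reproduced here. As a simplified stand-in, the sketch below fits a single-boundary diffusion model by maximum likelihood, using the fact that the first-passage time of a unit-diffusion process with drift v to a boundary a is inverse-Gaussian, shifted here by a nondecision time ter. All data and parameter values are invented.

    dwald <- function(t, v, a)                  # single-boundary first-passage density
      a / sqrt(2 * pi * t^3) * exp(-(a - v * t)^2 / (2 * t))

    rwald <- function(n, v, a) {                # Michael-Schucany-Haas IG sampler
      mu <- a / v; lam <- a^2
      y <- rnorm(n)^2
      x <- mu + mu^2 * y / (2 * lam) -
           mu / (2 * lam) * sqrt(4 * mu * lam * y + mu^2 * y^2)
      ifelse(runif(n) <= mu / (mu + x), x, mu^2 / x)
    }

    set.seed(2)
    rt <- 0.2 + rwald(500, v = 2, a = 1)        # true ter = 0.2, v = 2, a = 1

    nll <- function(p) {                        # p = log(v, a, ter), all kept > 0
      v <- exp(p[1]); a <- exp(p[2]); ter <- exp(p[3])
      t <- rt - ter
      if (any(t <= 0)) return(1e10)             # decision times must be positive
      -sum(log(dwald(t, v, a)))
    }
    exp(optim(log(c(1, 0.5, 0.1)), nll)$par)    # estimates near (2, 1, 0.2)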

4.
Among the most valuable tools in behavioral science is statistically fitting mathematical models of cognition to data—response time distributions, in particular. However, techniques for fitting distributions vary widely, and little is known about the efficacy of different techniques. In this article, we assess several fitting techniques by simulating six widely cited models of response time and using the fitting procedures to recover model parameters. The techniques include the maximization of likelihood and least squares fits of the theoretical distributions to different empirical estimates of the simulated distributions. A running example is used to illustrate the different estimation and fitting procedures. The simulation studies reveal that empirical density estimates are biased even for very large sample sizes. Some fitting techniques yield more accurate and less variable parameter estimates than do others. Methods that involve least squares fits to density estimates generally yield very poor parameter estimates.
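A toy demonstration of the article's central contrast, under invented conditions (a simple exponential rather than the six RT models simulated in the article): least squares against a kernel density estimate inherits the estimate's bias, while direct maximum likelihood does not.

    set.seed(3)
    x <- rexp(1000, rate = 2)                   # "RTs" from a known density

    mle <- 1 / mean(x)                          # direct maximum likelihood estimate

    kde <- density(x, from = 0)                 # empirical density estimate
    ls  <- optimize(function(r) sum((dexp(kde$x, r) - kde$y)^2),
                    c(0.1, 10))$minimum         # least squares against the KDE

    c(mle = mle, kde_ls = ls)                   # the KDE-based fit is typically off

The kernel smooths the sharp peak at zero, so the density-based least squares fit chases a distorted target even with 1,000 observations.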

5.
In knowledge space theory, existing adaptive assessment procedures can only be applied when suitable estimates of their parameters are available. In this paper, an iterative procedure is proposed, which upgrades its parameters with the increasing number of assessments. The first assessments are run using parameter values that favor accuracy over efficiency. Subsequent assessments are run using new parameter values estimated on the incomplete response patterns from previous assessments. Parameter estimation is carried out through a new probabilistic model for missing-at-random data. Two simulation studies show that, with the increasing number of assessments, the performance of the proposed procedure approaches that of gold standards.
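A minimal sketch of the probabilistic machinery such assessments rest on; the knowledge structure, error rates, and update rule below are invented for illustration and are not the paper's iterative procedure. After each response, the likelihood of every knowledge state is reweighted using careless-error and lucky-guess probabilities.

    states <- rbind(c(0,0,0), c(1,0,0), c(1,1,0), c(1,1,1))  # a small chain of states
    lik  <- rep(1 / nrow(states), nrow(states))   # uniform starting likelihood
    beta <- 0.1; eta <- 0.1                       # careless error, lucky guess rates

    update <- function(lik, item, resp) {
      p_correct <- ifelse(states[, item] == 1, 1 - beta, eta)
      p <- if (resp == 1) p_correct else 1 - p_correct
      lik * p / sum(lik * p)                      # Bayes update over states
    }
    lik <- update(lik, item = 2, resp = 1)        # a correct answer to item 2
    round(lik, 3)                                 # mass shifts to states containing item 2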

6.
A new method for the analysis of linear models that have autoregressive errors is proposed. The approach is not only relevant in the behavioral sciences for analyzing small-sample time-series intervention models, but it is also appropriate for a wide class of small-sample linear model problems in which there is interest in inferential statements regarding all regression parameters and autoregressive parameters in the model. The methodology includes a double application of bootstrap procedures. The first application is used to obtain bias-adjusted estimates of the autoregressive parameters. The second application is used to estimate the standard errors of the parameter estimates. Theoretical and Monte Carlo results are presented to demonstrate asymptotic and small-sample properties of the method; examples that illustrate advantages of the new approach over established time-series methods are described.
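A minimal sketch of the first bootstrap application, bias-adjusting the AR(1) parameter of a small-sample linear model (invented data; the second application, bootstrapping the standard errors, is omitted):

    set.seed(4)
    n <- 30; tt <- 1:n
    y <- 2 + 0.1 * tt + as.numeric(arima.sim(list(ar = 0.5), n))

    est_rho <- function(y) {                    # lag-1 autocorrelation of OLS residuals
      r <- residuals(lm(y ~ tt))
      sum(r[-1] * r[-n]) / sum(r^2)
    }
    rho_hat <- est_rho(y)                       # biased toward zero in small samples

    boot_rho <- replicate(2000,
      est_rho(2 + 0.1 * tt + as.numeric(arima.sim(list(ar = rho_hat), n))))
    c(raw = rho_hat, bias_adjusted = 2 * rho_hat - mean(boot_rho))

The adjustment 2 * rho_hat - mean(boot_rho) subtracts the bootstrap estimate of the bias from the raw estimate, the standard bootstrap bias correction.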

7.
For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient, which summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model, with p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example.
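A minimal sketch of the scalability coefficient itself, not the paper's MCMC-based tests: Loevinger's H computed as 1 minus the ratio of observed to expected (under independence) Guttman errors, summed over item pairs, illustrated on simulated Rasch data.

    loevinger_H <- function(X) {                  # X: persons x items, coded 0/1
      p <- colMeans(X)
      ord <- order(p, decreasing = TRUE)          # easiest item first
      X <- X[, ord]; p <- p[ord]
      n <- nrow(X); k <- ncol(X)
      obs <- exp_ <- 0
      for (i in 1:(k - 1)) for (j in (i + 1):k) {
        obs  <- obs  + sum(X[, i] == 0 & X[, j] == 1)  # fail easy, pass hard
        exp_ <- exp_ + n * (1 - p[i]) * p[j]           # expected under independence
      }
      1 - obs / exp_
    }

    set.seed(5)
    theta <- rnorm(300); beta <- seq(-1.5, 1.5, length.out = 5)
    X <- sapply(beta, function(b) rbinom(300, 1, plogis(theta - b)))
    loevinger_H(X)                                # clearly positive, as Rasch implies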

8.
Many intensive longitudinal measurements are collected at irregularly spaced time intervals, and involve complex, possibly nonlinear and heterogeneous patterns of change. Effective modelling of such change processes requires continuous-time differential equation models that may be nonlinear and include mixed effects in the parameters. One approach to fitting such models is to define random effect variables as additional latent variables in a stochastic differential equation (SDE) model of choice, and use estimation algorithms designed for fitting SDE models, such as the continuous-discrete extended Kalman filter (CDEKF) approach implemented in the dynr R package, to estimate the random effect variables as latent variables. However, this approach's efficacy and identification constraints in handling mixed-effects SDE models have not been investigated. In the current study, we analytically inspect the identification constraints of using the CDEKF approach to fit nonlinear mixed-effects SDE models; extend a published model of emotions to a nonlinear mixed-effects SDE model as an example, and fit it to a set of irregularly spaced ecological momentary assessment data; and evaluate the feasibility of the proposed approach to fit the model through a Monte Carlo simulation study. Results show that the proposed approach produces reasonable parameter and standard error estimates when certain identification constraints are met. We address the effects of sample size, process noise variance, and data spacing conditions on estimation results.

9.
The present study provides evidence for the role of ferrite-grain-size distributions on the occurrence of void initiation in a low-carbon steel. Various thermomechanical treatments were undertaken to create ultrafine, bimodal and coarse distributions of ferrite grain sizes. A two-parameter characterisation of probable void initiation sites is proposed, namely an elastic modulus difference and a difference in Schmid factor of the grains surrounding the void. All microstructures were categorised based on the ability to facilitate or resist void nucleation. For coarse grains, the elastic modulus and the Schmid factor differences are the highest, while they are intermediate for ultrafine grains and the lowest for the bimodal microstructure.

10.
The well-known problem of fitting the exploratory factor analysis model is reconsidered where the usual least squares goodness-of-fit function is replaced by a more resistant discrepancy measure, based on a smooth approximation of the ℓ1 norm. Fitting the factor analysis model to the sample correlation matrix is a complex matrix optimization problem which requires the structure preservation of the unknown parameters (e.g. positive definiteness). The projected gradient approach is a natural way of solving such data matching problems as especially designed to follow the geometry of the model parameters. Two reparameterizations of the factor analysis model are considered. The approach leads to globally convergent procedures for simultaneous estimation of the factor analysis matrix parameters. Numerical examples illustrate the algorithms and factor analysis solutions.
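A minimal sketch of the resistant-discrepancy idea, not the paper's projected gradient algorithm: a one-factor model is fitted to a correlation matrix by minimizing the smooth ℓ1 approximation sqrt(x^2 + eps) of the residuals with a generic optimizer. The example matrix and starting values are invented; positivity of the unique variances stands in for the structure preservation the paper handles geometrically.

    smooth_l1_fa <- function(R, eps = 1e-4) {
      p <- ncol(R)
      f <- function(par) {
        L   <- par[1:p]                          # factor loadings
        psi <- exp(par[(p + 1):(2 * p)])         # unique variances, kept positive
        res <- R - tcrossprod(L) - diag(psi)
        sum(sqrt(res[lower.tri(res, diag = TRUE)]^2 + eps))  # smooth l1 discrepancy
      }
      fit <- optim(c(rep(0.5, p), rep(log(0.5), p)), f, method = "BFGS")
      list(loadings = fit$par[1:p], psi = exp(fit$par[-(1:p)]))
    }

    L0 <- c(0.8, 0.7, 0.6, 0.5)                  # hypothetical population loadings
    R  <- tcrossprod(L0) + diag(1 - L0^2)        # implied correlation matrix
    smooth_l1_fa(R)$loadings                     # close to L0 (up to sign)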

11.
The MIRID CML program is a program for the estimation of the parameter values of two different componential IRT models: the Rasch-MIRID and the OPLM-MIRID (Butter, 1994; Butter, De Boeck, & Verhelst, 1998). To estimate the parameters of both models, the program uses a CML approach. The model parameters can also be estimated with an MML approach that can be implemented in PROC NLMIXED of SAS Version 8. Both the MIRID CML program and the MML SAS approach are explained and compared in a simulation study. The results showed that they did about equally well in estimating the values of the item parameters but that there were some differences in the estimation of the person parameters, as could be expected from the differential assumptions regarding the distribution of the persons. The SAS MML approach is much slower than the MIRID CML program, but it is more flexible.

12.
With the goal of drawing inferences about underlying processes from fits of theoretical models to cognitive data, we examined the trade-off between the risks of depending on model fits to individual performance and the risks of depending on fits to averaged data, with respect to the estimation of a model's parameter values. Comparisons based on several models applied to experiments on recognition and categorization, and to artificial, computer-generated data, showed that results of the two types of model fitting are strongly determined by two factors: model complexity and number of subjects. Reasonably accurate information about true parameter values was found only for model fits to individual performance, and then only for some of the parameters of a complex model. Suggested guidelines are given for circumventing a variety of obstacles to successful recovery of useful estimates of a model's parameters from applications to cognitive data.
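A compact demonstration of the core risk, with invented numbers: averaging exponential learning curves that differ in rate yields a curve that is no longer exponential, so a fit to the average misestimates the typical individual rate.

    set.seed(6)
    tt  <- 0:20
    b_i <- runif(50, 0.05, 0.6)                  # individual learning rates
    avg <- rowMeans(sapply(b_i, function(b) exp(-b * tt)))  # group-averaged curve

    fit <- nls(avg ~ exp(-b * tt), start = list(b = 0.2))   # fit one exponential
    c(mean_true_rate = mean(b_i), rate_from_average = coef(fit))

The mixture of exponentials has a heavier tail than any single exponential, which is exactly the kind of averaging artifact that makes fits to group data misleading about individuals.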

13.
This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional parameter fitting methods. Weak methodology may lead to premature rejection of valid models or to acceptance of models that might otherwise be falsified. Mathematically robust fitting methods are, therefore, essential to the progress of computational modeling in cognitive science. In this article, we investigate the capability and role of modern fitting methods, including Bayesian optimization and approximate Bayesian computation, and contrast them with some more commonly used methods: grid search and Nelder-Mead optimization. Our investigation consists of a reanalysis of the fitting of two previous computational models: an Adaptive Control of Thought-Rational (ACT-R) model of skill acquisition and a computational rationality model of visual search. The results contrast the efficiency and informativeness of the methods. A key advantage of the Bayesian methods is the ability to estimate the uncertainty of fitted parameter values. We conclude that approximate Bayesian computation is (a) efficient, (b) informative, and (c) offers a path to reproducible results.
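A minimal sketch of approximate Bayesian computation by rejection, on an invented toy model rather than the article's ACT-R or visual-search models: keep the prior draws whose simulated summary statistic lands within a tolerance of the observed one. The spread of the accepted draws is the uncertainty estimate the article highlights.

    set.seed(7)
    obs <- rbinom(1, 100, 0.3)                      # "observed" data: 100 trials, p = .3

    prior_draws <- runif(1e5)                       # uniform prior on p
    sims <- rbinom(1e5, 100, prior_draws)           # one simulated data set per draw
    posterior <- prior_draws[abs(sims - obs) <= 2]  # accept draws near the data

    c(mean = mean(posterior), sd = sd(posterior))   # point estimate and uncertainty

Nothing here requires a likelihood function, which is why ABC scales to simulation-only cognitive models where MLE and chi-square methods cannot be applied.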

14.
Lai, K., & Kelley, K. (2011). Psychological Methods, 16(2), 127-148.
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about the magnitude of the population targeted effects. With the goal of obtaining sufficiently narrow confidence intervals for the model parameters of interest, sample size planning methods for SEM are developed from the accuracy in parameter estimation approach. One method plans for the sample size so that the expected confidence interval width is sufficiently narrow. An extended procedure ensures that the obtained confidence interval will be no wider than desired, with some specified degree of assurance. A Monte Carlo simulation study was conducted that verified the effectiveness of the procedures in realistic situations. The methods developed have been implemented in the MBESS package in R so that they can be easily applied by researchers.
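The article's procedures live in the MBESS package; the standalone sketch below only illustrates the underlying accuracy-in-parameter-estimation logic for a single correlation rather than an SEM path (target width and inputs are invented): find the smallest n whose expected 95% confidence interval is no wider than desired.

    aipe_n_cor <- function(rho, width, level = 0.95) {
      z <- atanh(rho); crit <- qnorm(1 - (1 - level) / 2)
      for (n in 4:1e6) {                         # smallest n meeting the target width
        half <- crit / sqrt(n - 3)               # CI half-width on Fisher's z scale
        if (tanh(z + half) - tanh(z - half) <= width) return(n)
      }
    }
    aipe_n_cor(rho = 0.3, width = 0.10)          # n for a CI no wider than .10

Note the contrast with power analysis: the target is interval width (accuracy), not the probability of rejecting a null hypothesis.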

15.
Trust dynamics can be modeled in relation to experiences. In this paper two models to represent human trust dynamics are introduced, namely a model on a cognitive level and a neural model. These models include a number of parameters, which make it possible to express certain relations between trustees. The behavior of each of the models is further analyzed by means of simulation experiments and formal verification techniques. Thereafter, the two models were compared to see whether they can produce comparable patterns. As each of the models has its own specific set of parameters, with values that depend on the type of person modeled, such a comparison is non-trivial. To address this, a special comparison approach is introduced, based on mutual mirroring of the models in each other. More specifically, given a set of parameter values for one model, an automated parameter estimation procedure determines the optimal parameter values of the other model in order to show the same behavior. Roughly speaking, the models can mirror each other with an accuracy of around 90%.
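A minimal sketch of the mirroring step with two invented toy models (the paper's cognitive and neural models are far richer): generate a trust trajectory from model A, then estimate model B's parameter so that B reproduces A's behavior as closely as possible.

    experiences <- rep(c(1, 0), each = 25)       # a fixed stream of outcomes

    trajA <- function(gamma, bias)               # model A: two parameters
      Reduce(function(tr, e) tr + gamma * (bias * e - tr),
             experiences, accumulate = TRUE, init = 0.5)
    target <- trajA(0.15, 0.9)

    trajB <- function(g)                         # model B: one parameter
      Reduce(function(tr, e) tr + g * (e - tr),
             experiences, accumulate = TRUE, init = 0.5)

    gB <- optimize(function(g) sum((trajB(g) - target)^2), c(0, 1))$minimum
    gB                                           # B's closest mirror of A's behavior

The residual sum of squares at gB quantifies how well one model can mirror the other, the quantity behind the paper's roughly 90% accuracy figure.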

16.
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the population distribution. A simulation study shows that the new procedure is feasible in practice, and that when the latent distribution is not well approximated as normal, two-parameter logistic (2PL) item parameter estimates and expected a posteriori scores (EAPs) can be improved over what they would be with the normal model. An example with real data compares the new method and the extant empirical histogram approach.

17.
The supplemented EM (SEM) algorithm is applied to address two goodness-of-fit testing problems in psychometrics. The first problem involves computing the information matrix for item parameters in item response theory models. This matrix is important for limited-information goodness-of-fit testing and it is also used to compute standard errors for the item parameter estimates. For the second problem, it is shown that the SEM algorithm provides a convenient computational procedure that leads to an asymptotically chi-squared goodness-of-fit statistic for the 'two-stage EM' procedure of fitting covariance structure models in the presence of missing data. Both simulated and real data are used to illustrate the proposed procedures.

18.
Prior to a three-way component analysis of a three-way data set, it is customary to preprocess the data by centering and/or rescaling them. Harshman and Lundy (1984) considered that three-way data actually consist of a three-way model part, which in fact pertains to ratio scale measurements, as well as additive "offset" terms that turn the ratio scale measurements into interval scale measurements. They mentioned that such offset terms might be estimated by incorporating additional components in the model, but discarded this idea in favor of an approach to remove such terms from the model by means of centering; estimates for the three-way component model parameters are then obtained by analyzing the centered data. In the present paper, the possibility of actually estimating the offset terms is taken up again. First, the cases in which such offset terms can be estimated uniquely are identified. Next, procedures are offered for estimating model parameters and offset parameters simultaneously, as well as successively (i.e., providing offset term estimates after the three-way model parameters have been estimated in the traditional way on the basis of the centered data). These procedures are provided for both the CANDECOMP/PARAFAC model and the Tucker3 model extended with offset terms. The successive and the simultaneous approaches for estimating model and offset parameters have been compared on the basis of simulated data. It was found that both procedures perform well when the fitted model captures at least all offset terms actually underlying the data. The simultaneous procedures performed slightly better than the successive procedures. If fewer offset terms are fitted than actually underlie the data, the results are considerably poorer, but in these cases the successive procedures performed better than the simultaneous ones. All in all, it can be concluded that the traditional approach for estimating model parameters can hardly be improved upon, and that offset terms can be estimated sufficiently well by the proposed successive approach, which is a simple extension of the traditional approach.

19.
The diffusion model (Ratcliff, 1978) for fast two-choice decisions has been successful in a number of domains. Wagenmakers, van der Maas, and Grasman (2007) proposed a new method for fitting the model to data ("EZ") that is simpler than the standard chi-square method (Ratcliff & Tuerlinckx, 2002). For an experimental condition, EZ can estimate parameter values for the main components of processing using only correct response times (RTs), their variance, and accuracy; it does not use error RTs or the shapes of RT distributions. Wagenmakers et al. suggested that EZ produces accurate parameter estimates in cases in which the chi-square method would fail: specifically, experimental conditions with small numbers of observations or with accuracy near ceiling. In this article, I counter these claims and discuss EZ's limitations. Unlike the chi-square method, EZ is extremely sensitive to outlier RTs, it is usually less efficient in recovering parameter values, and it can lead to errors in interpretation when the data do not meet its assumptions, when the number of observations in an experimental condition is small, or when accuracy in an experimental condition is high. The conclusion is that EZ can be useful in the exploration of parameter spaces, but it should not be used for meaningful estimates of parameter values or for assessing whether or not a model fits data.
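The EZ equations of Wagenmakers, van der Maas, and Grasman (2007), transcribed here as a sketch with illustrative input values; s = 0.1 is the scaling convention EZ assumes. Closed-form drift, boundary separation, and nondecision time are recovered from accuracy, RT variance, and mean RT.

    ez <- function(pc, vrt, mrt, s = 0.1) {      # pc: proportion correct
      stopifnot(pc > 0, pc < 1, pc != 0.5)       # pc = .5 needs an edge correction
      L <- qlogis(pc)                            # logit of proportion correct
      x <- L * (L * pc^2 - L * pc + pc - 0.5) / vrt
      v <- sign(pc - 0.5) * s * x^(1/4)          # drift rate
      a <- s^2 * L / v                           # boundary separation
      y <- -v * a / s^2
      mdt <- (a / (2 * v)) * (1 - exp(y)) / (1 + exp(y))  # mean decision time
      c(v = v, a = a, Ter = mrt - mdt)           # Ter: nondecision time
    }
    ez(pc = 0.9, vrt = 0.05, mrt = 0.6)          # illustrative input values

That the whole method fits in a dozen lines is exactly the simplicity at issue; the article's point is that this economy comes at the cost of sensitivity to outliers and to violated assumptions.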

20.
Factor mixture models are latent variable models with categorical and continuous latent variables that can be used as a model-based approach to clustering. A previous article covered the results of a simulation study showing that in the absence of model violations, it is usually possible to choose the correct model when fitting a series of models with different numbers of classes and factors within class. The response format in the first study was limited to normally distributed outcomes. This article has two main goals: first, to replicate parts of the first study with 5-point Likert scale and binary outcomes, and second, to address the issue of testing class invariance of thresholds and loadings. Testing for class invariance of parameters is important in the context of measurement invariance and when using mixture models to approximate nonnormal distributions. Results show that it is possible to discriminate between latent class models and factor models even if responses are categorical. Comparing models with and without class-specific parameters can lead to incorrectly accepting parameter invariance if the compared models differ substantially with respect to the number of estimated parameters. The simulation study is complemented with an illustration of a factor mixture analysis of 10 binary depression items obtained from a female subsample of the Virginia Twin Registry.
