Similar Documents
20 similar documents found (search time: 296 ms)
1.
Group-level variance estimates of zero often arise when fitting multilevel or hierarchical linear models, especially when the number of groups is small. For situations where zero variances are implausible a priori, we propose a maximum penalized likelihood approach to avoid such boundary estimates. This approach is equivalent to estimating variance parameters by their posterior mode, given a weakly informative prior distribution. By choosing the penalty from the log-gamma family with shape parameter greater than 1, we ensure that the estimated variance will be positive. We suggest a default log-gamma(2,λ) penalty with λ→0, which ensures that the maximum penalized likelihood estimate is approximately one standard error from zero when the maximum likelihood estimate is zero, thus remaining consistent with the data while being nondegenerate. We also show that the maximum penalized likelihood estimator with this default penalty is a good approximation to the posterior median obtained under a noninformative prior. Our default method provides better estimates of model parameters and standard errors than the maximum likelihood or the restricted maximum likelihood estimators. The log-gamma family can also be used to convey substantive prior information. In either case—pure penalization or prior information—our recommended procedure gives nondegenerate estimates and in the limit coincides with maximum likelihood as the number of groups increases.
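
A sketch of the penalized objective described above, assuming (my reading, not stated verbatim in the abstract) that the gamma(α, λ) penalty is placed on the group-level standard deviation σ:

\ell_P(\beta,\sigma) \;=\; \ell(\beta,\sigma) + (\alpha - 1)\log\sigma - \lambda\sigma,
\qquad \alpha = 2,\ \lambda \to 0 \;\;\Rightarrow\;\; \ell_P \approx \ell(\beta,\sigma) + \log\sigma .

Because \log\sigma \to -\infty as \sigma \to 0, the penalized maximum cannot sit on the boundary, which is the nondegeneracy property claimed in the abstract.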

2.
Mathematical models of cognition often contain unknown parameters whose values are estimated from the data. A question that generally receives little attention is how informative such estimates are. In a maximum likelihood framework, standard errors provide a measure of informativeness. Here, a standard error is interpreted as the standard deviation of the distribution of parameter estimates over multiple samples. A drawback to this interpretation is that the assumptions that are required for the maximum likelihood framework are very difficult to test and are not always met. However, at least in the cognitive science community, it appears to be not well known that standard error calculation also yields interpretable intervals outside the typical maximum likelihood framework. We describe and motivate this procedure and, in combination with graphical methods, apply it to two recent models of categorization: ALCOVE (Kruschke, 1992) and the exemplar-based random walk model (Nosofsky & Palmeri, 1997). The applications reveal aspects of these models that were not hitherto known and bring a mix of bad and good news concerning estimation of these models.
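
A minimal sketch of Hessian-based standard errors, the quantity the abstract treats as a measure of informativeness. The toy normal likelihood, the BFGS inverse-Hessian shortcut, and all names below are my own illustration, not the authors' procedure or the categorization models they analyse:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data and model: a normal log-likelihood with unknown mean and log-SD.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.5, scale=1.2, size=200)

def negloglik(params):
    mu, log_sigma = params
    return -np.sum(norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

fit = minimize(negloglik, x0=np.array([0.0, 0.0]), method="BFGS")

# Standard errors: square roots of the diagonal of the inverse Hessian of the
# negative log-likelihood. BFGS only approximates this matrix; a numerical
# Hessian evaluated at the optimum is a common refinement.
standard_errors = np.sqrt(np.diag(fit.hess_inv))
print(fit.x, standard_errors)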

3.
Mathematical models of cognition often contain unknown parameters whose values are estimated from the data. A question that generally receives little attention is how informative such estimates are. In a maximum likelihood framework, standard errors provide a measure of informativeness. Here, a standard error is interpreted as the standard deviation of the distribution of parameter estimates over multiple samples. A drawback to this interpretation is that the assumptions that are required for the maximum likelihood framework are very difficult to test and are not always met. However, at least in the cognitive science community, it appears to be not well known that standard error calculation also yields interpretable intervals outside the typical maximum likelihood framework. We describe and motivate this procedure and, in combination with graphical methods, apply it to two recent models of categorization: ALCOVE (Kruschke, 1992) and the exemplar-based random walk model (Nosofsky & Palmeri, 1997). The applications reveal aspects of these models that were not hitherto known and bring a mix of bad and good news concerning estimation of these models.

4.
We introduce and evaluate via a Monte Carlo study a robust new estimation technique that fits distribution functions to grouped response time (RT) data, where the grouping is determined by sample quantiles. The new estimator, quantile maximum likelihood (QML), is more efficient and less biased than the best alternative estimation technique when fitting the commonly used ex-Gaussian distribution. Limitations of the Monte Carlo results are discussed and guidance provided for the practical application of the new technique. Because QML estimation can be computationally costly, we make fast open source code for fitting available that can be easily modified.
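
A rough sketch of the quantile-grouping idea behind QML, assuming an ex-Gaussian candidate distribution: observations are counted between sample quantiles, and those counts are scored against the bin probabilities implied by the candidate CDF. The simulated parameters, decile grouping, and use of scipy's exponnorm are my own choices, not the authors' implementation or their released code:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import exponnorm

# Simulated RTs from an ex-Gaussian (mu = 0.4 s, sigma = 0.05 s, tau = 0.2 s).
rng = np.random.default_rng(7)
rts = rng.normal(0.4, 0.05, 500) + rng.exponential(0.2, 500)

# Group the data by sample quantiles (here deciles) and count observations per bin.
probs = np.linspace(0.1, 0.9, 9)
q = np.quantile(rts, probs)
edges = np.concatenate(([rts.min() - 1.0], q, [rts.max() + 1.0]))
counts = np.histogram(rts, bins=edges)[0]

def neg_qml(params):
    m, s, t = params
    if s <= 0 or t <= 0:
        return np.inf
    # scipy's exponnorm uses K = tau / sigma, loc = mu, scale = sigma.
    cdf = exponnorm.cdf(q, K=t / s, loc=m, scale=s)
    bin_p = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    if np.any(bin_p <= 0):
        return np.inf
    # Multinomial log-likelihood of the quantile-bin counts (up to a constant).
    return -np.sum(counts * np.log(bin_p))

fit = minimize(neg_qml, x0=[np.median(rts), 0.1, 0.1], method="Nelder-Mead")
print(fit.x)  # estimated (mu, sigma, tau)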

5.
To assess the effect of a manipulation on a response time distribution, psychologists often use Vincentizing or quantile averaging to construct group or “average” distributions. We provide a theorem characterizing the large sample properties of the averaged quantiles when the individual RT distributions all belong to the same location-scale family. We then apply the theorem to estimating parameters for the quantile-averaged distributions. From the theorem, it is shown that parameters of the group distribution can be estimated by generalized least squares. This method provides accurate estimates of standard errors of parameters and can therefore be used in formal inference. The method is benchmarked in a small simulation study against both a maximum likelihood method and an ordinary least-squares method. Generalized least squares essentially is the only method based on the averaged quantiles that is both unbiased and provides accurate estimates of parameter standard errors. It is also proved that for location-scale families, performing generalized least squares on quantile averages is formally equivalent to averaging parameter estimates from generalized least squares performed on individuals. A limitation on the method is that individual RT distributions must be members of the same location-scale family.
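
For concreteness, the quantile-averaging (Vincentizing) step itself is shown below on simulated data; the generalized least squares machinery the article builds on top of these averages is not reproduced here, and all names and values are my own toy choices:

import numpy as np

# Hypothetical RTs for 12 subjects; rows need not have equal length in practice.
rng = np.random.default_rng(3)
subjects = [rng.normal(0.5, 0.1, 80) + rng.exponential(0.15, 80) for _ in range(12)]

# Vincentizing / quantile averaging: estimate the same quantiles for every
# subject, then average each quantile across subjects.
probs = np.linspace(0.1, 0.9, 9)
group_quantiles = np.mean([np.quantile(s, probs) for s in subjects], axis=0)
print(group_quantiles)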

6.
Three methods for fitting the diffusion model (Ratcliff, 1978) to experimental data are examined. Sets of simulated data were generated with known parameter values, and from fits of the model, we found that the maximum likelihood method was better than the chi-square and weighted least squares methods by criteria of bias in the parameters relative to the parameter values used to generate the data and standard deviations in the parameter estimates. The standard deviations in the parameter values can be used as measures of the variability in parameter estimates from fits to experimental data. We introduced contaminant reaction times and variability into the other components of processing besides the decision process and found that the maximum likelihood and chi-square methods failed, sometimes dramatically. But the weighted least squares method was robust to these two factors. We then present results from modifications of the maximum likelihood and chi-square methods, in which these factors are explicitly modeled, and show that the parameter values of the diffusion model are recovered well. We argue that explicit modeling is an important method for addressing contaminants and variability in nondecision processes and that it can be applied in any theoretical approach to modeling reaction time.

7.
Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization (EM) algorithm by the maximum-likelihood (ML) method. In so doing, it is beneficial to estimate the common prior distribution of the latent ability from data. Full non-parametric ML (FNPML) estimation allows estimation of the latent distribution with maximum flexibility, as the distribution is modelled non-parametrically on a number of (freely moving) support points. It is generally assumed that EM estimation of the two-parameter logistic model is not influenced by initial values, but studies on this topic are unavailable. Therefore, the present study investigates the sensitivity to initial values in FNPML estimation. In contrast to the common assumption, initial values are found to have notable influence: for a standard convergence criterion, item discrimination and difficulty parameter estimates as well as item characteristic curve (ICC) recovery were influenced by initial values. For more stringent criteria, item parameter estimates were mainly influenced by the initial latent distribution, whilst ICC recovery was unaffected. The reason for this might be a flat surface of the log-likelihood function, which would necessitate setting a sufficiently tight convergence criterion for accurate recovery of item parameters.

8.
Bayesian estimation and testing of structural equation models
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, for example, output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model. We thank David Spiegelhalter for suggesting applying the Gibbs sampler to structural equation models to the first author at a 1994 workshop in Wiesbaden. We thank Ulf Böckenholt, Chris Meek, Marijtje van Duijn, Clark Glymour, Ivo Molenaar, Steve Klepper, Thomas Richardson, Teddy Seidenfeld, and Tom Snijders for helpful discussions, mathematical advice, and critiques of earlier drafts of this paper.

9.
Schwarz (2001, 2002) proposed the ex-Wald distribution, obtained from the convolution of Wald and exponential random variables, as a model of simple and go/no-go response time. This article provides functions for the S-PLUS package that produce maximum likelihood estimates of the parameters for the ex-Wald, as well as for the shifted Wald and ex-Gaussian, distributions. In a Monte Carlo study, the efficiency and bias of parameter estimates were examined. Results indicated that samples of at least 400 are necessary to obtain adequate estimates of the ex-Wald and that, for some parameter ranges, much larger samples may be required. For shifted Wald estimation, smaller samples of around 100 were adequate, at least when fits identified by the software as having ill-conditioned maximums were excluded. The use of all functions is illustrated using data from Schwarz (2001). The S-PLUS functions and Schwarz’s data may be downloaded from the Psychonomic Society’s Web archive, www.psychonomic.org/archive/.
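
The functions described above are for S-PLUS; as a loose illustration of the shifted-Wald fitting task, here is a sketch using scipy's inverse-Gaussian (Wald) distribution. The simulated values and the reliance on invgauss.fit are my assumptions, and scipy's (mu, loc, scale) parameterization does not map one-to-one onto the drift/boundary/shift parameters common in the RT literature:

import numpy as np
from scipy.stats import invgauss

# Simulated shifted-Wald RTs: an inverse-Gaussian (Wald) component plus a 0.2 s shift.
rng = np.random.default_rng(11)
rts = 0.2 + invgauss.rvs(mu=0.5, scale=0.6, size=400, random_state=rng)

# Maximum likelihood fit of the three-parameter (shifted) Wald; with a free
# location the fit can be ill-conditioned for some samples, echoing the caveat
# in the abstract.
mu_hat, loc_hat, scale_hat = invgauss.fit(rts)
print(mu_hat, loc_hat, scale_hat)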

10.
Schwarz (2001, 2002) proposed the ex-Wald distribution, obtained from the convolution of Wald and exponential random variables, as a model of simple and go/no-go response time. This article provides functions for the S-PLUS package that produce maximum likelihood estimates of the parameters for the ex-Wald, as well as for the shifted Wald and ex-Gaussian, distributions. In a Monte Carlo study, the efficiency and bias of parameter estimates were examined. Results indicated that samples of at least 400 are necessary to obtain adequate estimates of the ex-Wald and that, for some parameter ranges, much larger samples may be required. For shifted Wald estimation, smaller samples of around 100 were adequate, at least when fits identified by the software as having ill-conditioned maximums were excluded. The use of all functions is illustrated using data from Schwarz (2001). The S-PLUS functions and Schwarz's data may be downloaded from the Psychonomic Society's Web archive, www.psychonomic.org/archive/.

11.
Maximum likelihood estimation in confirmatory factor analysis requires large sample sizes, normally distributed item responses, and reliable indicators of each latent construct, but these ideals are rarely met. We examine alternative strategies for dealing with non-normal data, particularly when the sample size is small. In two simulation studies, we systematically varied: the degree of non-normality; the sample size from 50 to 1000; the way of indicator formation, comparing items versus parcels; the parcelling strategy, evaluating parcels with uniformly positive skew and kurtosis versus parcels with counterbalancing skew and kurtosis; and the estimation procedure, contrasting maximum likelihood and asymptotically distribution-free methods. We evaluated the convergence behaviour of solutions, as well as the systematic bias and variability of parameter estimates, and goodness of fit.

12.
Cheng, Y., & Yuan, K.-H. (2010). Psychometrika, 75(2), 280–291.
In this paper we propose an upward correction to the standard error (SE) estimation of \hat{\theta}_{\mathrm{ML}}, the maximum likelihood (ML) estimate of the latent trait in item response theory (IRT). More specifically, the upward correction is provided for the SE of \hat{\theta}_{\mathrm{ML}} when item parameter estimates obtained from an independent pretest sample are used in IRT scoring. When item parameter estimates are employed, the resulting latent trait estimate is called the pseudo maximum likelihood (PML) estimate. Traditionally, the SE of \hat{\theta}_{\mathrm{ML}} is obtained on the basis of test information only, as if the item parameters were known. The upward correction takes into account the error that is carried over from the estimation of item parameters, in addition to the error in latent trait recovery itself. Our simulation study shows that both types of SE estimates are very good when θ is in the middle range of the latent trait distribution, but the upward-corrected SEs are more accurate than the traditional ones when θ takes more extreme values.
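
For context, the "traditional" SE that the correction starts from is the usual test-information formula. For a two-parameter logistic test with known item parameters (my notation, with any scaling constant absorbed into a_j):

I(\theta) = \sum_{j=1}^{n} a_j^{2}\, P_j(\theta)\,\bigl(1 - P_j(\theta)\bigr),
\qquad
\mathrm{SE}\bigl(\hat{\theta}_{\mathrm{ML}}\bigr) \approx I\bigl(\hat{\theta}_{\mathrm{ML}}\bigr)^{-1/2}.

The paper's upward correction adds a further term for the sampling error of the pretest item parameter estimates; its exact form is not reproduced here.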

13.
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.

14.
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.

15.
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
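
The inversion step behind a profile-likelihood CI, sketched on a deliberately simple one-parameter model (an exponential rate) rather than the IRT models studied in the article; with a single parameter there is nothing to profile out, so only the likelihood-ratio inversion is shown, and all names and values are my own illustration:

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, expon

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=50)

def loglik(rate):
    return np.sum(expon.logpdf(x, scale=1.0 / rate))

rate_mle = 1.0 / np.mean(x)  # closed-form MLE of the exponential rate
# A 95% profile-likelihood CI collects all rates whose log-likelihood lies
# within 0.5 * chi2(1) critical value of the maximum.
cutoff = loglik(rate_mle) - 0.5 * chi2.ppf(0.95, df=1)

lower = brentq(lambda r: loglik(r) - cutoff, 1e-6, rate_mle)
upper = brentq(lambda r: loglik(r) - cutoff, rate_mle, 50.0)
print(rate_mle, (lower, upper))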

16.
Structural equation models with interaction and quadratic effects have become a standard tool for testing nonlinear hypotheses in the social sciences. Most of the current approaches assume normally distributed latent predictor variables. In this article, we present a Bayesian model for the estimation of latent nonlinear effects when the latent predictor variables are nonnormally distributed. The nonnormal predictor distribution is approximated by a finite mixture distribution. We conduct a simulation study that demonstrates the advantages of the proposed Bayesian model over contemporary approaches (Latent Moderated Structural Equations [LMS], Quasi-Maximum-Likelihood [QML], and the extended unconstrained approach) when the latent predictor variables follow a nonnormal distribution. The conventional approaches show biased estimates of the nonlinear effects; the proposed Bayesian model provides unbiased estimates. We present an empirical example from work and stress research and provide syntax for substantive researchers. Advantages and limitations of the new model are discussed.

17.
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
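
The article supplies an R program and a two-level formulation; the sketch below is only a loose single-equation analogue of the Huber-weight route (the Student's t likelihood route is not shown), written in Python with statsmodels, with simulated data and names of my own choosing:

import numpy as np
import statsmodels.api as sm

# Hypothetical moderation data: y depends on x, the moderator m, and their product.
rng = np.random.default_rng(9)
n = 300
x, m = rng.normal(size=n), rng.normal(size=n)
# Heavy-tailed errors (Student's t with 3 df) to mimic violated normality assumptions.
y = 0.3 * x + 0.2 * m + 0.4 * x * m + rng.standard_t(df=3, size=n)

X = sm.add_constant(np.column_stack([x, m, x * m]))
# M-estimation with Huber-type weights.
robust_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(robust_fit.params)  # intercept, x, m, x*m (the moderation effect)
print(robust_fit.bse)     # standard errors of the robust estimates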

18.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm which in standard form produces weighted least squares estimates can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. Da01070 from the U. S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
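
A generic Gauss-Newton update of the kind the abstract refers to, written for an arbitrary residual vector with an optional weight matrix; covariance structure analysis applies the same update to the discrepancy between sample and model-implied covariances, and iterative reweighting yields maximum likelihood estimates. The toy exponential-curve example and all names are mine, not the authors' algorithm or software:

import numpy as np

def gauss_newton(r, J, theta0, W=None, n_iter=50, tol=1e-10):
    # Minimize r(theta)' W r(theta) by repeated linearization of r.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        res, jac = r(theta), J(theta)
        Wm = np.eye(len(res)) if W is None else W
        step = np.linalg.solve(jac.T @ Wm @ jac, jac.T @ Wm @ res)
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

# Toy use: fit y = exp(a * t) by unweighted Gauss-Newton.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(1.3 * t) + np.random.default_rng(0).normal(0.0, 0.01, 20)
r = lambda th: np.exp(th[0] * t) - y
J = lambda th: (t * np.exp(th[0] * t)).reshape(-1, 1)
print(gauss_newton(r, J, [0.5]))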

19.
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found. Conditions under which monotonic relationships do not exist are also identified. Such functional relationships allow researchers to better understand the problem when significant factor loading estimates are expected but not obtained, and vice versa. What will affect the likelihood for Heywood cases (negative unique variance estimates) is also explicit through these relationships. Empirical findings in the literature are discussed using the obtained results.

20.
Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived, and proved to be less biased than maximum likelihood estimation (MLE) with the same asymptotic variance and normal distribution. WLE removes the first order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.
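
A small sketch contrasting MLE and WLE ability estimation for one examinee under the Rasch model with known item difficulties. It relies on the standard result that, for the Rasch model, the weighted likelihood amounts to adding half the log test information to the log-likelihood (a Jeffreys-type penalty); the item difficulties, responses, and search bounds are my own toy values:

import numpy as np
from scipy.optimize import minimize_scalar

b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])  # hypothetical item difficulties
x = np.array([1, 1, 1, 0, 0])              # hypothetical 0/1 responses

def p(theta):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def loglik(theta):
    pr = p(theta)
    return np.sum(x * np.log(pr) + (1 - x) * np.log(1 - pr))

def info(theta):
    pr = p(theta)
    return np.sum(pr * (1 - pr))  # Rasch test information

# MLE maximizes the log-likelihood; WLE adds 0.5 * log I(theta), which removes
# the first-order bias term.
mle = minimize_scalar(lambda th: -loglik(th), bounds=(-6, 6), method="bounded").x
wle = minimize_scalar(lambda th: -(loglik(th) + 0.5 * np.log(info(th))),
                      bounds=(-6, 6), method="bounded").x
print(mle, wle)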
