Similar Articles
20 similar articles found.
1.
It is shown that the common and unique variance estimates produced by Martin & McDonald's Bayesian estimation procedure for the unrestricted common factor model have a predictable sum, which is always greater than the maximum likelihood estimate of the total variance. This fact is used to motivate a simple alternative method of specifying the Bayesian parameters required by the procedure.

2.
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
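For reference, the multilevel latent covariate model discussed here is usually written roughly as follows (a sketch of the standard formulation; the article's exact parameterization may differ):

$$x_{ij} = U_j + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma_x^2), \qquad U_j \sim N(\mu_x, \tau_x^2),$$
$$y_{ij} = \beta_0 + \beta_b U_j + \beta_w (x_{ij} - U_j) + u_j + e_{ij},$$

where the observed group mean $\bar{x}_{\cdot j}$ measures the latent group mean $U_j$ with reliability $\lambda_j = \tau_x^2 / (\tau_x^2 + \sigma_x^2 / n_j)$. When the ICC $\tau_x^2 / (\tau_x^2 + \sigma_x^2)$ or the group size $n_j$ is small, $\lambda_j$ drops and an uncorrected estimate of the group-level effect $\beta_b$ is attenuated, which is the problem the correction addresses.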

3.
Researchers recommend reporting of bias-corrected variance-accounted-for effect size estimates such as omega squared instead of uncorrected estimates, because the latter are known for their tendency toward overestimation, whereas the former mostly correct this bias. However, this argument may miss an important fact: A bias-corrected estimate can take a negative value, and of course, a negative variance ratio does not make sense. Therefore, it has been a common practice to report an obtained negative estimate as zero. This article presents an argument against this practice, based on a simulation study investigating how often negative estimates are obtained and what the consequences of treating them as zero are. The results indicate that negative estimates are obtained more often than researchers might have thought. In fact, they occur more than half the time under some reasonable conditions. Moreover, treating the obtained negative estimates as zero causes substantial overestimation even for bias-corrected estimators when the sample size and population effect are not large, which is often the case in psychology. Therefore, the recommendation is that researchers report obtained negative estimates as is, instead of reporting them as zero, to avoid the inflation of effect sizes in research syntheses, even though zero can be considered the most plausible value when interpreting such a result. R code to reproduce all of the described results is included as supplemental material.
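The tendency described here is easy to reproduce. A minimal Monte Carlo sketch in Python (illustrative, not the article's supplemental R code): under a null effect, the bias-corrected omega squared is negative whenever F < 1, which happens well over half the time.

```python
import numpy as np

rng = np.random.default_rng(1)

def omega_squared(groups):
    """Bias-corrected variance-accounted-for estimate for a one-way design."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_w = ss_w / (n_total - k)
    return (ss_b - (k - 1) * ms_w) / (ss_b + ss_w + ms_w)

# Null population effect, small samples: count how often the estimate is negative.
reps, neg = 5000, 0
for _ in range(reps):
    groups = [rng.normal(0.0, 1.0, 20) for _ in range(3)]
    if omega_squared(groups) < 0:
        neg += 1
print(f"negative estimates: {neg / reps:.2%}")  # roughly 60% under a null effect
```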

4.
Bayesian estimation and testing of structural equation models
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, for example, output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model. We thank David Spiegelhalter for suggesting, to the first author at a 1994 workshop in Wiesbaden, the application of the Gibbs sampler to structural equation models. We thank Ulf Böckenholt, Chris Meek, Marijtje van Duijn, Clark Glymour, Ivo Molenaar, Steve Klepper, Thomas Richardson, Teddy Seidenfeld, and Tom Snijders for helpful discussions, mathematical advice, and critiques of earlier drafts of this paper.
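The closing example can be sketched end to end. Below is a minimal Gibbs sampler for a simple errors-in-variables model with conjugate priors (illustrative hyperparameters; not the authors' code). The informative priors do real work here, because the model is underidentified from the covariance data alone.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate: x_i = xi_i + delta_i,  y_i = beta * xi_i + eps_i,  xi_i ~ N(0, tau2).
n, beta_true = 200, 0.8
xi = rng.normal(0, 1.0, n)
x = xi + rng.normal(0, 0.5, n)
y = beta_true * xi + rng.normal(0, 0.5, n)

beta, s2x, s2y, tau2 = 0.0, 1.0, 1.0, 1.0
a0, b0 = 2.0, 1.0          # inverse-gamma prior for each variance
B0 = 10.0                  # prior variance of beta (prior mean zero)
draws = []
for it in range(4000):
    # xi_i | rest: normal, precision = 1/tau2 + 1/s2x + beta^2/s2y
    prec = 1 / tau2 + 1 / s2x + beta**2 / s2y
    mean = (x / s2x + beta * y / s2y) / prec
    xi = mean + rng.normal(0, 1 / np.sqrt(prec), n)
    # beta | rest: conjugate normal regression update
    prec_b = 1 / B0 + np.sum(xi**2) / s2y
    beta = rng.normal(np.sum(xi * y) / s2y / prec_b, 1 / np.sqrt(prec_b))
    # variances | rest: inverse-gamma updates (sampled as 1/Gamma)
    s2x = 1 / rng.gamma(a0 + n / 2, 1 / (b0 + np.sum((x - xi) ** 2) / 2))
    s2y = 1 / rng.gamma(a0 + n / 2, 1 / (b0 + np.sum((y - beta * xi) ** 2) / 2))
    tau2 = 1 / rng.gamma(a0 + n / 2, 1 / (b0 + np.sum(xi**2) / 2))
    if it >= 1000:
        draws.append(beta)
print(np.mean(draws), np.std(draws))   # posterior mean and sd of beta
```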

5.
A Bayes estimation procedure is introduced that allows the nature and strength of prior beliefs to be easily specified and modal posterior estimates to be obtained as easily as maximum likelihood estimates. The procedure is based on constructing posterior distributions that are formally identical to likelihoods, but are based on sampled data as well as artificial data reflecting prior information. Improvements in performance of modal Bayes procedures relative to maximum likelihood estimation are illustrated for Rasch-type models. Improvements range from modest to dramatic, depending on the model and the number of items being considered. This research was supported by ONR Contract #00014-86-K0087. We wish to thank Sheng-Hui Chu and Dzung-Ji Lii for providing intelligent and energetic programming support for this article. We also thank one of the reviewers for pointing out several interesting and useful perspectives.
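A minimal sketch of modal Bayes estimation for a Rasch-type ability parameter, assuming known item difficulties and a normal prior (maximizing the log-posterior directly yields the same modal estimate that the pseudo-data construction is designed to produce; the values below are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_log_posterior(theta, responses, difficulties, prior_sd=1.0):
    """Rasch log-likelihood plus a normal log-prior on theta (MAP objective)."""
    p = logistic(theta - difficulties)
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    logprior = -0.5 * (theta / prior_sd) ** 2
    return -(loglik + logprior)

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # known item difficulties
u = np.array([1, 1, 1, 1, 1])               # a perfect response pattern

# ML fails here (theta -> +inf); the modal Bayes estimate stays finite (~1.2).
fit = minimize_scalar(neg_log_posterior, args=(u, b), bounds=(-6, 6), method="bounded")
print(f"MAP theta for a perfect score: {fit.x:.2f}")
```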

6.
The posterior distribution of the bivariate correlation is analytically derived given a data set where x is completely observed but y is missing at random for a portion of the sample. Interval estimates of the correlation are then constructed from the posterior distribution in terms of highest density regions (HDRs). Various choices for the form of the prior distribution are explored. For each of these priors, the resulting Bayesian HDRs are compared with each other and with intervals derived from maximum likelihood theory.
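The HDR construction itself can be sketched for the complete-data case (the article's analytic treatment of the missing-y case is not reproduced here). Assuming a standardized bivariate normal with only ρ unknown and a uniform prior:

```python
import numpy as np

def rho_posterior_hdr(x, y, mass=0.95, grid_size=1999):
    """Grid posterior for the correlation of a standardized bivariate
    normal (means 0, variances 1) under a uniform prior, and its HDR."""
    rho = np.linspace(-0.99, 0.99, grid_size)[:, None]
    # Bivariate normal log-likelihood as a function of rho alone.
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    loglik = np.sum(-0.5 * q, axis=1) - len(x) / 2 * np.log(1 - rho[:, 0] ** 2)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    # Highest density region: keep grid points in decreasing density order.
    order = np.argsort(post)[::-1]
    keep = order[np.cumsum(post[order]) <= mass]
    return rho[keep.min(), 0], rho[keep.max(), 0]

rng = np.random.default_rng(6)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=100)
print(rho_posterior_hdr(z[:, 0], z[:, 1]))   # 95% HDR around rho = 0.5
```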

7.
Over the last decade or two, multilevel structural equation modeling (ML-SEM) has become a prominent modeling approach in the social sciences because it allows researchers to correct for sampling and measurement errors and thus to estimate the effects of Level 2 (L2) constructs without bias. Because the latent variable modeling software Mplus uses maximum likelihood (ML) by default, many researchers in the social sciences have applied ML to obtain estimates of L2 regression coefficients. However, one drawback of ML is that covariance matrices of the predictor variables at L2 tend to be degenerate, and thus, estimates of L2 regression coefficients tend to be rather inaccurate when sample sizes are small. In this article, I show how an approach for stabilizing covariance matrices at L2 can be used to obtain more accurate estimates of L2 regression coefficients. A simulation study is conducted to compare the proposed approach with ML, and I illustrate its application with an example from organizational research.
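The abstract does not spell out the stabilization approach, so the sketch below uses a generic ridge-type shrinkage of an L2 covariance matrix toward its diagonal simply to illustrate what "stabilizing" a degenerate covariance matrix can look like:

```python
import numpy as np

def shrink_covariance(S, lam=0.2):
    """Shrink a (possibly degenerate) covariance matrix toward its diagonal.

    lam = 0 returns S unchanged; lam = 1 returns a diagonal matrix.
    A small lam is usually enough to make S well conditioned.
    """
    target = np.diag(np.diag(S))
    return (1.0 - lam) * S + lam * target

rng = np.random.default_rng(0)
# Few groups -> a nearly singular sample covariance at Level 2.
X = rng.multivariate_normal([0, 0, 0], np.eye(3) * 0.1 + 0.9, size=8)
S = np.cov(X, rowvar=False)
print(np.linalg.cond(S), np.linalg.cond(shrink_covariance(S)))
```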

8.
We describe multilevel modeling of cognitive function in subjects with schizophrenia, their healthy first-degree relatives and controls. The purpose of the study was to compare mean cognitive performance between the three groups after adjusting for various covariates, as well as to investigate differences in the variances. Multilevel models were required because subjects were nested within families and some of the measures were repeated several times on the same subject. The following four methodological issues that arose during the analysis of the data are discussed. First, when the random effects distribution was not normal, non-parametric maximum likelihood (NPML) was employed, leading to a different conclusion than the conventional multilevel model regarding one of the main study hypotheses. Second, the between-subject (within-family) variance was allowed to differ between the three groups. This corresponded to the variance at level 1 or level 2 depending on whether repeated measures were analyzed. Third, a positively skewed response was analyzed using a number of different generalized linear mixed models. Finally, penalized quasi-likelihood (PQL) estimates for a binomial response were compared with estimates obtained using Gaussian quadrature. A small simulation study was carried out to assess the accuracy of the latter.
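The Gaussian quadrature side of the final comparison can be sketched directly. A minimal marginal log-likelihood for one cluster of a random-intercept logistic model, integrated by Gauss-Hermite quadrature (illustrative, not the authors' code):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(y, x, beta, sigma_b, n_quad=15):
    """Marginal log-likelihood of one cluster for a random-intercept
    logistic model, with the random effect integrated out by
    Gauss-Hermite quadrature."""
    z, w = hermgauss(n_quad)              # nodes/weights for weight exp(-z^2)
    b = np.sqrt(2.0) * sigma_b * z        # change of variables for N(0, sigma_b^2)
    eta = x @ beta + b[:, None]           # shape (n_quad, n_obs)
    p = 1.0 / (1.0 + np.exp(-eta))
    lik_given_b = np.prod(p**y * (1 - p) ** (1 - y), axis=1)
    return np.log(np.sum(w * lik_given_b) / np.sqrt(np.pi))

rng = np.random.default_rng(3)
x = np.column_stack([np.ones(10), rng.normal(size=10)])
beta = np.array([-0.5, 1.0])
y = rng.binomial(1, 0.5, size=10)
print(cluster_loglik(y, x, beta, sigma_b=0.8))
```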

9.
A general one-way analysis of variance components with unequal replication numbers is used to provide unbiased estimates of the true and error score variance of classical test theory. The inadequacy of the ANOVA theory is noted and the foundations for a Bayesian approach are detailed. The choice of prior distribution is discussed and a justification for the Tiao-Tan prior is found in the particular context of the “n-split” technique. The posterior distributions of reliability, error score variance, observed score variance and true score variance are presented with some extensions of the original work of Tiao and Tan. Special attention is given to simple approximations that are available in important cases and also to the problems that arise when the ANOVA estimate of true score variance is negative. Bayesian methods derived by Box and Tiao and by Lindley are studied numerically in relation to the problem of estimating true score. Each is found to be useful and the advantages and disadvantages of each are discussed and related to the classical test-theoretic methods. Finally, some general relationships between Bayesian inference and classical test theory are discussed. Supported in part by the National Institute of Child Health and Human Development under Research Grant 1 PO1 HDO1762.
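A minimal sketch of the ANOVA estimates for the equal-replication case (the article treats unequal replication numbers): error score variance is estimated by the within-person mean square, and true score variance by the usual between-minus-within contrast, which is exactly the quantity that can come out negative.

```python
import numpy as np

def anova_variance_components(scores):
    """Unbiased ANOVA estimates of true and error score variance.

    scores: (n_persons, k_splits) array of parallel "n-split" measurements.
    Equal replication is assumed here for simplicity.
    """
    n, k = scores.shape
    person_means = scores.mean(axis=1)
    grand = scores.mean()
    ms_between = k * np.sum((person_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((scores - person_means[:, None]) ** 2) / (n * (k - 1))
    error_var = ms_within
    true_var = (ms_between - ms_within) / k   # can come out negative
    return true_var, error_var

rng = np.random.default_rng(7)
true_scores = rng.normal(50, 10, size=200)
data = true_scores[:, None] + rng.normal(0, 5, size=(200, 4))
tau2, sigma2 = anova_variance_components(data)
print(tau2, sigma2, tau2 / (tau2 + sigma2))   # last value: single-split reliability
```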

10.
A common form of missing data is caused by selection on an observed variable (e.g., Z). If the selection variable was measured and is available, the data are regarded as missing at random (MAR). Selection biases correlation, reliability, and effect size estimates when these estimates are computed on listwise deleted (LD) data sets. On the other hand, maximum likelihood (ML) estimates are generally unbiased and outperform LD in most situations, at least when the data are MAR. The exception is when we estimate the partial correlation. In this situation, LD estimates are unbiased when the cause of missingness is partialled out. In other words, there is no advantage of ML estimates over LD estimates in this situation. We demonstrate that under a MAR condition, even ML estimates may become biased, depending on how partial correlations are computed. Finally, we conclude with recommendations about how future researchers might estimate partial correlations even when the cause of missingness is unknown and, perhaps, unknowable.
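The key claim about listwise deletion is easy to check by simulation. In the sketch below (variable names illustrative), y is deleted whenever the selection variable z falls below a cutoff, and the listwise-deleted partial correlation r_xy·z still matches the complete-data value because the cause of missingness is partialled out:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y with z partialled out."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(11)
n = 100_000
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
y = 0.5 * z + 0.3 * x + rng.normal(size=n)

keep = z > -0.5            # y observed only when z exceeds a cutoff (MAR given z)
full = partial_corr(x, y, z)
ld = partial_corr(x[keep], y[keep], z[keep])
print(f"complete data: {full:.3f}   listwise deleted: {ld:.3f}")  # both ~0.287
```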

11.
Cheng, Y., & Yuan, K.-H. (2010). Psychometrika, 75(2), 280–291.
In this paper we propose an upward correction to the standard error (SE) estimation of $\hat{\theta}_{\mathrm{ML}}$, the maximum likelihood (ML) estimate of the latent trait in item response theory (IRT). More specifically, the upward correction is provided for the SE of $\hat{\theta}_{\mathrm{ML}}$ when item parameter estimates obtained from an independent pretest sample are used in IRT scoring. When item parameter estimates are employed, the resulting latent trait estimate is called the pseudo maximum likelihood (PML) estimate. Traditionally, the SE of $\hat{\theta}_{\mathrm{ML}}$ is obtained on the basis of test information only, as if the item parameters were known. The upward correction takes into account the error that is carried over from the estimation of item parameters, in addition to the error in latent trait recovery itself. Our simulation study shows that both types of SE estimates are very good when θ is in the middle range of the latent trait distribution, but the upward-corrected SEs are more accurate than the traditional ones when θ takes more extreme values.
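The traditional SE that the paper corrects can be sketched for a logistic 2PL model: it is based on test information alone, as if the item parameters were known (the upward correction itself needs the item-parameter covariance matrix from the pretest sample and is not reproduced here).

```python
import numpy as np

def traditional_se(theta, a, b):
    """SE of the ML trait estimate based on test information only,
    treating the item parameters as known (logistic 2PL model)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a**2 * p * (1 - p))   # test information at theta
    return 1.0 / np.sqrt(info)

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])   # illustrative discriminations
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.4]) # illustrative difficulties
for theta in (-3.0, 0.0, 3.0):
    print(theta, traditional_se(theta, a, b))
# SEs grow at extreme theta, exactly where the correction matters most.
```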

12.
Dorfman and Biderman evaluated an additive-operator learning model and some special cases of this model on data from a signal-detection experiment. They found that Kac's pure error-correction model gave the poorest fit of the special models when the predictions were generated from the maximum likelihood estimates and the initial cutoffs were set at an a priori value rather than estimated. First, this paper presents tests of an asymptotic theorem by Norman, which provide strong support for Kac's model. On the final 100 trials, every subject but one gave probability matching, and the response proportions, appropriately normed, were approximately normally distributed with variance π(1 − π). Further analyses of the Dorfman-Biderman data based upon maximum likelihood and likelihood-ratio tests suggest that Kac's model gives a relatively good, but imperfect fit to the data. Some possible explanations for the apparent contradiction between the results of these new analyses and the original findings of Dorfman and Biderman were explored. The investigations led to the proposal that there may be nonsystematic, random drifts in the decision criterion after correct responses as well as after errors. The hypothesis gives a minor modification of the conclusions from Norman's theorem for Kac's model. It gives asymptotic probability matching for every subject, but a larger asymptotic variance than π(1 − π), which agrees with the data. The paper also presents good Monte Carlo justification for the use of maximum likelihood and likelihood-ratio tests with these additive learning models. Results from Thomas' nonparametric test of error correction are presented, which are inconclusive. Computation of Thomas' p statistic on the Monte Carlo simulations showed that it is quite variable and insensitive to small deviations from error correction.
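The asymptotic variance quoted here is easy to sanity-check in isolation. A toy Monte Carlo for the binomial part of the argument (it does not simulate the learning model itself): the normed final-block proportions √N(p̂ − π) should have variance close to π(1 − π).

```python
import numpy as np

rng = np.random.default_rng(5)
pi, n_trials, reps = 0.7, 100, 20_000

# Normed proportions sqrt(N) * (p_hat - pi) from final-block responses.
p_hat = rng.binomial(n_trials, pi, size=reps) / n_trials
normed = np.sqrt(n_trials) * (p_hat - pi)
print(normed.var(), pi * (1 - pi))   # both close to 0.21
```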

13.
The problem of penalized maximum likelihood (PML) for an exploratory factor analysis (EFA) model is studied in this paper. An EFA model is typically estimated using maximum likelihood and then the estimated loading matrix is rotated to obtain a sparse representation. Penalized maximum likelihood simultaneously fits the EFA model and produces a sparse loading matrix. To overcome some of the computational drawbacks of PML, an approximation to PML is proposed in this paper. The approximation is further applied to an empirical dataset for illustration. A simulation study shows that the approximation naturally produces a sparse loading matrix and more accurately estimates the factor loadings and the covariance matrix, in the sense of having a lower mean squared error than factor rotations, under various conditions.
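The PML objective itself is compact: the ML discrepancy function plus an L1 penalty on the loadings. A sketch in Python (the objective only, not the approximation algorithm the paper proposes; Λ is the p × m loading matrix and ψ the uniquenesses):

```python
import numpy as np

def pml_objective(Lambda, psi, S, n, lam):
    """Penalized ML discrepancy for an EFA model: the usual ML fit
    function plus an L1 penalty that pushes loadings toward zero."""
    Sigma = Lambda @ Lambda.T + np.diag(psi)
    sign, logdet = np.linalg.slogdet(Sigma)
    ml_part = logdet + np.trace(S @ np.linalg.inv(Sigma))
    return n / 2.0 * ml_part + lam * np.abs(Lambda).sum()

rng = np.random.default_rng(2)
L = rng.normal(size=(6, 2))                              # 6 items, 2 factors
S = np.cov(rng.normal(size=(100, 6)), rowvar=False)      # sample covariance
print(pml_objective(L, np.ones(6), S, n=100, lam=5.0))
```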

14.
Using an approach nearly identical to one adopted by Guttman, it is established that within the framework of classical test theory the squared multiple correlation for predicting an element of a composite measure from the n − 1 remaining elements is a lower bound to the reliability of the element. The relationship existing between the reliabilities of the elements of a composite measure and their squared multiple correlations with the remaining elements is used to derive Guttman's sixth lower bound (λ6) to the reliability of a composite measure. It is shown that Harris factors of a correlation matrix R are associated with a set of (observable) uncorrelated latent variables having maximum coefficients λ6.
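Both quantities are direct functions of the inverse correlation (or covariance) matrix, since the variance of element j not predictable from the remaining elements is 1/(S⁻¹)_jj. A minimal sketch:

```python
import numpy as np

def guttman_lambda6(S):
    """Guttman's lambda_6 from a covariance (or correlation) matrix.

    The unpredictable variance of each item given the others is
    1 / (S^-1)_jj, i.e., item variance times (1 - SMC_j)."""
    Sinv = np.linalg.inv(S)
    residual_vars = 1.0 / np.diag(Sinv)
    return 1.0 - residual_vars.sum() / S.sum()

# Five items with a common correlation of 0.4 (illustrative).
R = np.full((5, 5), 0.4)
np.fill_diagonal(R, 1.0)
print(guttman_lambda6(R))   # ~0.73, a lower bound to the composite reliability
```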

15.
In studies of detection and discrimination, data are often obtained in the form of a 2 × 2 matrix and then converted to an estimate of d′, based on the assumptions that the underlying decision distributions are Gaussian and equal in variance. The statistical properties of the estimate of d′, $\hat{d}'$, are well understood for data obtained using the yes-no procedure, but less effort has been devoted to the more commonly used two-interval forced choice (2IFC) procedure. The variance associated with $\hat{d}'$ is a function of true d′ in both procedures, but for small values of true d′, the variance of $\hat{d}'$ obtained using the 2IFC procedure is predicted to be less than the variance of $\hat{d}'$ obtained using yes-no; for large values of true d′, the variance of $\hat{d}'$ obtained using the 2IFC procedure is predicted to be greater than the variance of $\hat{d}'$ from yes-no. These results follow from standard assumptions about the relationship between the two procedures. The present paper reviews the statistical properties of $\hat{d}'$ obtained using the two standard procedures and compares estimates of the variance of $\hat{d}'$ as a function of true d′ with the variance observed in values of $\hat{d}'$ obtained with a 2IFC procedure.
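Under the stated assumptions, the two conversions from observed proportions to $\hat{d}'$ are as follows (a minimal sketch):

```python
from scipy.stats import norm

def dprime_yes_no(hits, misses, false_alarms, correct_rejections):
    """d' from a 2x2 yes-no table, assuming equal-variance Gaussian
    decision distributions: d' = z(H) - z(F)."""
    h = hits / (hits + misses)
    f = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(h) - norm.ppf(f)

def dprime_2ifc(p_correct):
    """d' from 2IFC proportion correct under the standard assumption
    P(correct) = Phi(d' / sqrt(2))."""
    return 2**0.5 * norm.ppf(p_correct)

print(dprime_yes_no(80, 20, 30, 70))   # H = .80, F = .30  ->  d' ~ 1.37
print(dprime_2ifc(0.84))               # p(c) = .84        ->  d' ~ 1.41
```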

16.
Psychometric models for item-level data are broadly useful in psychology. A recurring issue for estimating item factor analysis (IFA) models is low item endorsement (item sparseness), due to limited sample sizes or extreme items such as rare symptoms or behaviors. In this paper, I demonstrate that under conditions characterized by sparseness, currently available estimation methods, including maximum likelihood (ML), are likely to fail to converge or lead to extreme estimates and low empirical power. Bayesian estimation incorporating prior information is a promising alternative to ML estimation for IFA models with item sparseness. In this article, I use a simulation study to demonstrate that Bayesian estimation incorporating general prior information improves parameter estimate stability, overall variability in estimates, and power for IFA models with sparse, categorical indicators. Importantly, the priors proposed here can be generally applied to many research contexts in psychology, and they do not impact results compared to ML when indicators are not sparse. I then apply this method to examine the relationship between suicide ideation and insomnia in a sample of first-year college students. This provides an important alternative for researchers who may need to model items with sparse endorsement.
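The core failure mode can be shown in miniature with a single intercept-only logit rather than a full IFA model (illustrative values): with zero endorsements the ML estimate runs off to the boundary, while a weakly informative normal prior keeps the MAP estimate finite.

```python
import numpy as np
from scipy.optimize import minimize_scalar

y = np.zeros(50)   # a sparse item: nobody endorses it

def neg_log_posterior(logit_p, prior_sd=None):
    p = 1.0 / (1.0 + np.exp(-logit_p))
    nll = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    if prior_sd is not None:              # add a N(0, prior_sd^2) prior
        nll += 0.5 * (logit_p / prior_sd) ** 2
    return nll

ml = minimize_scalar(neg_log_posterior, bounds=(-30, 30), method="bounded")
map_ = minimize_scalar(neg_log_posterior, args=(1.5,), bounds=(-30, 30), method="bounded")
print(ml.x, map_.x)   # ML hits the search bound; MAP settles near -3.5
```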

17.
The equivalence of two multivariate classification schemes is shown when the sizes of the samples drawn from the populations to which assignment is required are identical. One scheme is based on posterior probabilities determined from a Bayesian density function; the second scheme is based on likelihood ratio discriminant scores. Both of these procedures involve prior probabilities; if estimates of these priors are obtained from the identical sample sizes, the equivalence follows.
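The equivalence is that maximizing the posterior probability π_k f_k(x) and thresholding the likelihood ratio at the prior odds give the same assignment. A two-population sketch with Gaussian densities (illustrative parameters):

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_posterior(x, dists, priors):
    """Assign to the population with the largest posterior probability."""
    post = np.array([pi * d.pdf(x) for d, pi in zip(dists, priors)])
    return int(np.argmax(post))

def classify_likelihood_ratio(x, dists, priors):
    """Assign to population 0 iff the likelihood ratio exceeds the
    prior-odds cutoff pi_1 / pi_0: the discriminant-score rule."""
    lr = dists[0].pdf(x) / dists[1].pdf(x)
    return 0 if lr > priors[1] / priors[0] else 1

dists = [multivariate_normal([0, 0], np.eye(2)),
         multivariate_normal([1, 1], np.eye(2))]
priors = [0.6, 0.4]
x = np.array([0.7, 0.2])
print(classify_posterior(x, dists, priors),
      classify_likelihood_ratio(x, dists, priors))   # same assignment
```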

18.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
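QMPE itself is a Fortran program, but the CML baseline it is compared against can be reproduced in a few lines with SciPy's ex-Gaussian (`exponnorm`, with shape K = τ/σ); the QMP method is not shown here.

```python
import numpy as np
from scipy import stats

# Simulate ex-Gaussian response times: Normal(mu, sigma) + Exponential(tau).
rng = np.random.default_rng(9)
mu, sigma, tau = 0.40, 0.05, 0.15
rt = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

# Continuous ML via scipy's exponnorm (shape K = tau / sigma).
K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu={loc:.3f}  sigma={scale:.3f}  tau={K * scale:.3f}")
```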

19.
Given a drift diffusion model with unknown drift and boundary parameters, we analyse the behaviour of maximum likelihood estimates with respect to changes of responses and response times. It is shown analytically that a single fast response time can dominate the estimation in that no matter how many correct answers a test taker provides, the estimate of the drift (ability) parameter decreases to zero. In addition, it is shown that although higher drift rates imply shorter response times, the reverse implication does not hold for the estimates: shorter response times can decrease the drift rate estimate. In the light of these analytical results, we illustrate the actual impact of the findings in a small simulation for a mental rotation test. The method of analysis outlined is applicable to a broader range of models, and we emphasize the need to further check currently used reaction time models within this framework.

20.
Multilevel autoregressive models are especially suited for modeling between-person differences in within-person processes. Fitting these models with Bayesian techniques requires the specification of prior distributions for all parameters. Often it is desirable to specify prior distributions that have negligible effects on the resulting parameter estimates. However, the conjugate prior distribution for covariance matrices—the Inverse-Wishart distribution—tends to be informative when variances are close to zero. This is problematic for multilevel autoregressive models, because autoregressive parameters are usually small for each individual, so that the variance of these parameters will be small. We performed a simulation study to compare the performance of three Inverse-Wishart prior specifications suggested in the literature, when one or more variances for the random effects in the multilevel autoregressive model are small. Our results show that the prior specification that uses plug-in ML estimates of the variances performs best. We advise always including a sensitivity analysis for the prior specification for covariance matrices of random parameters, especially in autoregressive models, and including a data-based prior specification in this analysis. We illustrate such an analysis by means of an empirical application on repeated measures data on worrying and positive affect.
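The contrast between a default Inverse-Wishart prior and the plug-in specification recommended here can be sketched as follows (illustrative values): with small random-effect variances, an identity scale matrix puts its prior mass far from the truth, while a scale matrix built from plug-in ML variance estimates centers the prior where it belongs.

```python
import numpy as np
from scipy.stats import invwishart

dim = 2
nu = dim + 2                          # smallest df for which the IW mean exists
default_scale = np.eye(dim)           # a common "uninformative" choice
ml_vars = np.array([0.02, 0.05])      # plug-in ML estimates of small variances
plugin_scale = np.diag(ml_vars) * (nu - dim - 1)   # prior mean = diag(ml_vars)

for name, S in [("identity scale", default_scale), ("plug-in scale", plugin_scale)]:
    draws = invwishart.rvs(df=nu, scale=S, size=2000)
    print(name, np.median(draws[:, 0, 0]))
# The identity prior concentrates far above a variance of 0.02; the
# plug-in prior does not, which is why it distorts small variances less.
```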
