Similar literature
Found 20 similar documents (search time: 31 ms)
1.
This note is concerned with differences and similarities between structural models for analyzing change that are conceptualized within two different modelling traditions: one based on classical test theory, and one within the factor-analytic approach. It is shown that these two traditions lead to models for studying change that are indistinguishable for data-analytic purposes when fitted with structural modeling programs such as LISREL, EQS, COSAN, LISCOMP, RAMONA, EzPATH, or SAS PROC CALIS. The reason for this data-analytic equivalence of the two conceptually different types of models is the confounding of their differences in the corresponding implied covariance matrix structures.

2.
Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimating the parameters in differential equation models. Instead of first estimating derivatives from the observed data and then fitting a differential equation to those derivatives, the new approach directly fits the analytic solution of a differential equation to the observed data, which simplifies the procedure and avoids bias from derivative estimation. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with approaches that estimate derivatives first, ASDE has smaller standard errors, greater statistical power, and accurate Type I error rates. Although ASDE yields biased estimates when the system has a sudden phase change, the bias is not serious, and a remedy for the phase-change problem is also provided. The ASDE method is illustrated and applied to a two-week study of consumers' shopping behaviour after a sales promotion, and to a set of public data tracking participants' grammatical facial expressions in sign language. R code for ASDE and recommendations for sample size and starting values are provided. Limitations and several possible extensions of ASDE are also discussed.
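The core of the ASDE idea, fitting a differential equation's analytic solution directly to observed data instead of estimating derivatives first, can be sketched in Python. This is a minimal illustration with simulated data, not the authors' R code; the exponential-approach model and all numeric values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# dy/dt = r * (K - y) has the analytic solution y(t) = K - (K - y0) * exp(-r * t).
def solution(t, y0, K, r):
    return K - (K - y0) * np.exp(-r * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 14, 15)                 # e.g. a daily series over two weeks
y_obs = solution(t, 1.0, 10.0, 0.4) + rng.normal(0, 0.3, size=t.size)

# Fit the analytic solution directly to the observed data:
# no derivatives are estimated at any point.
params, pcov = curve_fit(solution, t, y_obs, p0=[0.5, 8.0, 0.2])
se = np.sqrt(np.diag(pcov))                # standard errors of (y0, K, r)
```

Because the closed-form solution is fitted directly, parameter standard errors come straight from the covariance matrix of the nonlinear least-squares fit.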

3.
Group-level variance estimates of zero often arise when fitting multilevel or hierarchical linear models, especially when the number of groups is small. For situations where zero variances are implausible a priori, we propose a maximum penalized likelihood approach to avoid such boundary estimates. This approach is equivalent to estimating variance parameters by their posterior mode, given a weakly informative prior distribution. By choosing the penalty from the log-gamma family with shape parameter greater than 1, we ensure that the estimated variance will be positive. We suggest a default log-gamma(2,λ) penalty with λ→0, which ensures that the maximum penalized likelihood estimate is approximately one standard error from zero when the maximum likelihood estimate is zero, thus remaining consistent with the data while being nondegenerate. We also show that the maximum penalized likelihood estimator with this default penalty is a good approximation to the posterior median obtained under a noninformative prior. Our default method provides better estimates of model parameters and standard errors than the maximum likelihood or the restricted maximum likelihood estimators. The log-gamma family can also be used to convey substantive prior information. In either case—pure penalization or prior information—our recommended procedure gives nondegenerate estimates and in the limit coincides with maximum likelihood as the number of groups increases.
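The boundary-avoiding effect of the log-gamma penalty can be sketched for a toy random-effects problem. This is a minimal Python illustration under simplifying assumptions (known within-group sampling variance, and λ→0 so the log-gamma(2, λ) penalty reduces to log(τ)); it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy one-way random-effects setting: J observed group means, each distributed
# N(mu, tau^2 + sampling_var), with the within-group sampling variance known.
rng = np.random.default_rng(1)
J, sampling_var = 8, 1.0
ybar = rng.normal(0.0, np.sqrt(0.2 + sampling_var), size=J)  # true tau^2 = 0.2

def neg_penalized_loglik(tau):
    v = tau ** 2 + sampling_var
    mu = ybar.mean()                       # MLE of mu given tau (equal variances)
    loglik = -0.5 * np.sum(np.log(v) + (ybar - mu) ** 2 / v)
    # log-gamma(2, lam) penalty on tau with lam -> 0: (2 - 1) * log(tau)
    return -(loglik + np.log(tau))

res = minimize_scalar(neg_penalized_loglik, bounds=(1e-6, 10.0), method="bounded")
tau_hat = res.x          # strictly positive: the log(tau) penalty -> -inf at 0
```

Because the penalty diverges to negative infinity as τ approaches zero, the penalized estimate can never sit on the boundary, which is exactly the nondegeneracy property described above.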

4.
Principal component regression (PCR) is a popular technique in data analysis and machine learning. However, the technique has two limitations. First, the principal components (PCs) with the largest variances may not be relevant to the outcome variables. Second, the lack of standard error estimates for the unstandardized regression coefficients makes it hard to interpret the results. To address these two limitations, we propose a model-based approach that includes two mean and covariance structure models defined for multivariate PCR. By estimating the defined models, we can obtain inferential information that will allow us to test the explanatory power of individual PCs and compute the standard error estimates for the unstandardized regression coefficients. A real example is used to illustrate our approach, and simulation studies under normality and nonnormality conditions are presented to validate the standard error estimates for the unstandardized regression coefficients. Finally, future research topics are discussed.
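The basic PCR computation, including the back-transformation that produces the unstandardized coefficients whose missing standard errors motivate the article, can be sketched as follows. The simulated data and the choice of two components are assumptions for illustration; this is the ordinary PCR procedure, not the proposed model-based approach.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 200, 5, 2                        # n cases, p predictors, k components
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.0, 0.0, 0.0]) + rng.normal(size=n)

# Principal components of the centered predictors
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                          # scores on the first k components

# Regress the outcome on the component scores, then map the coefficients
# back to the original predictor scale (the unstandardized PCR coefficients)
gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
b_pcr = Vt[:k].T @ gamma
```

The mapping `b_pcr = V_k @ gamma` is the step for which ordinary PCR provides no standard errors, since `V_k` is itself estimated from the data.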

5.
Observational studies of multilevel data to estimate treatment effects must consider both the nonrandom treatment assignment mechanism and the clustered structure of the data. We present an approach for implementation of four propensity score (PS) methods with multilevel data involving creation of weights and three types of weight scaling (normalized, cluster-normalized and effective), followed by estimation of multilevel models with the multilevel pseudo-maximum likelihood estimation method. Using a Monte Carlo simulation study, we found that the multilevel model provided unbiased estimates of the Average Treatment Effect on the Treated (ATT) and its standard error across manipulated conditions and combinations of PS model, PS method, and type of weight scaling. Estimates of between-cluster variances of the ATT were biased, but improved as cluster sizes increased. We provide a step-by-step demonstration of how to combine PS methods and multilevel modeling to estimate treatment effects using multilevel data from the Early Childhood Longitudinal Study–Kindergarten Cohort (ECLS-K).
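The construction of ATT weights from estimated propensity scores, together with one of the weight scalings discussed (normalization), can be sketched in Python. The single-level data-generating model below is a simplifying assumption for illustration, not the article's multilevel simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=(n, 2))
X = np.column_stack([np.ones(n), x])       # design matrix with intercept
t = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x[:, 0])))

# Fit the propensity score model (logistic regression) by Newton-Raphson
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (t - p))
ps = 1 / (1 + np.exp(-X @ beta))

# ATT weights: treated units get weight 1, controls get ps / (1 - ps)
w = np.where(t == 1, 1.0, ps / (1 - ps))

# "Normalized" scaling: control weights rescaled to sum to the number of controls
w_norm = w.copy()
w_norm[t == 0] *= (t == 0).sum() / w[t == 0].sum()
```

Cluster-normalized scaling would apply the same rescaling within each cluster rather than over the whole control sample.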

6.
The crux in psychometrics is how to estimate the probability that a respondent answers an item correctly on one occasion out of many. Under the current testing paradigm this probability is estimated using various statistical techniques and mathematical modeling. Multiple evaluation is a new testing paradigm that uses the person's own estimates of these probabilities as data. It is compared to multiple choice, which turns out to be a degenerate form of multiple evaluation. Multiple evaluation has much less measurement error than multiple choice, and this measurement error does not work in the examinee's favor. When the test is used for selection purposes, as multiple choice typically is, the probability of a Type II error (an unjustified pass) is almost negligible. Procedures for statistical item-and-test analyses under the multiple evaluation paradigm are presented. These procedures provide more accurate information than is possible under the multiple choice paradigm. A computer program that implements multiple evaluation is also discussed.

7.
Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.

8.
The infinitesimal jackknife provides a simple, general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality, what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the population sampled has the covariance structure assumed. Commonly used covariance structure analysis software uses parametric methods for estimating parameters and standard errors. When the population sampled has the covariance structure assumed but fails to have the distributional form assumed, the parameter estimates usually remain consistent, but the standard error estimates do not. This has motivated the introduction of a variety of nonparametric standard error estimates that are consistent when the population sampled fails to have the distributional form assumed. The only distributional assumption these require is that the covariance structure be correctly specified. As noted, even this assumption is not required for the infinitesimal jackknife. The relation between the infinitesimal jackknife and other nonparametric standard error estimators is discussed. An advantage of the infinitesimal jackknife over the jackknife and the bootstrap is that it requires only one analysis to produce standard error estimates, rather than one for every jackknife or bootstrap sample.
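For a statistic as simple as the sample mean, the infinitesimal-jackknife recipe (summing squared empirical influence values) reproduces the familiar plug-in standard error exactly, which makes a compact sanity check. This toy case is an assumption for illustration; the article itself concerns covariance structure parameters, whose influence values are more involved.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100)

# Empirical influence values of the sample mean: IF_i = x_i - xbar
influence = x - x.mean()
se_ij = np.sqrt(np.sum(influence ** 2)) / x.size   # infinitesimal jackknife SE

# For the mean this coincides exactly with the plug-in standard error
se_plugin = x.std(ddof=0) / np.sqrt(x.size)
```

Note that only one pass over the data is needed, in contrast to the n refits of the ordinary jackknife or the B refits of the bootstrap.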

9.
Latent variable models with many categorical items and multiple latent constructs result in many dimensions of numerical integration, and the traditional frequentist estimation approach, such as maximum likelihood (ML), tends to fail due to model complexity. In such cases, Bayesian estimation with diffuse priors can be used as a viable alternative to ML estimation. This study compares the performance of Bayesian estimation with ML estimation in estimating single or multiple ability factors across 2 types of measurement models in the structural equation modeling framework: a multidimensional item response theory (MIRT) model and a multiple-indicator multiple-cause (MIMIC) model. A Monte Carlo simulation study demonstrates that Bayesian estimation with diffuse priors, under various conditions, produces results quite comparable with ML estimation in the single- and multilevel MIRT and MIMIC models. Additionally, an empirical example utilizing the Multistate Bar Examination is provided to compare the practical utility of the MIRT and MIMIC models. Structural relationships among the ability factors, covariates, and a binary outcome variable are investigated through the single- and multilevel measurement models. The article concludes with a summary of the relative advantages of Bayesian estimation over ML estimation in MIRT and MIMIC models and suggests strategies for implementing these methods.

10.
Cross-classified random effects modeling (CCREM) is used to model multilevel data from nonhierarchical contexts. These models are widely discussed but infrequently used in social science research. Because little research exists assessing when it is necessary to use CCREM, 2 studies were conducted. A real data set with a cross-classified structure was analyzed by comparing parameter estimates when ignoring versus modeling the cross-classified data structure. A follow-up simulation study investigated potential factors affecting the need to use CCREM. Results indicated that when the structure is ignored, fixed-effect estimates were unaffected, but standard error estimates associated with the variables modeled incorrectly were biased. Estimates of the variance components also displayed bias, which was related to several study factors.

11.
A new method for the analysis of linear models that have autoregressive errors is proposed. The approach is not only relevant in the behavioral sciences for analyzing small-sample time-series intervention models, but is also appropriate for a wide class of small-sample linear model problems in which there is interest in inferential statements regarding all regression parameters and autoregressive parameters in the model. The methodology includes a double application of bootstrap procedures. The first application is used to obtain bias-adjusted estimates of the autoregressive parameters. The second application is used to estimate the standard errors of the parameter estimates. Theoretical and Monte Carlo results are presented to demonstrate asymptotic and small-sample properties of the method; examples that illustrate advantages of the new approach over established time-series methods are described.
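The double bootstrap can be sketched for a single AR(1) coefficient: a first parametric bootstrap estimates and removes the small-sample bias, and a second bootstrap around the adjusted value estimates its standard error. This is a simplified sketch under assumed values, not the authors' procedure for full regression models with autoregressive errors.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_ar1(phi, n, rng):
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = phi * y[i - 1] + rng.normal()
    return y

def fit_phi(y):                            # least-squares AR(1) estimate
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

y = simulate_ar1(0.5, 40, rng)             # short series: phi_hat is biased
phi_hat = fit_phi(y)

# First bootstrap: estimate and remove the small-sample bias of phi_hat
B = 200
boot1 = np.array([fit_phi(simulate_ar1(phi_hat, y.size, rng)) for _ in range(B)])
phi_adj = phi_hat - (boot1.mean() - phi_hat)

# Second bootstrap: standard error of the bias-adjusted estimate
boot2 = np.array([fit_phi(simulate_ar1(phi_adj, y.size, rng)) for _ in range(B)])
se_phi = boot2.std(ddof=1)
```

The bias correction matters here because least-squares AR(1) estimates are systematically pulled toward zero in short series.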

12.
This article uses Monte Carlo techniques to examine the effect of heterogeneity of variance in multilevel analyses in terms of relative bias, coverage probability, and root mean square error (RMSE). For all simulated data sets, the parameters were estimated using the restricted maximum-likelihood (REML) method, both assuming homogeneity and incorporating heterogeneity into the multilevel models. We find that (a) the estimates of the fixed parameters are unbiased, but the associated standard errors are frequently biased when heterogeneity is ignored; by contrast, the standard errors of the fixed effects are almost always accurate when heterogeneity is taken into account; (b) the estimates of the random parameters are slightly overestimated; (c) both the homogeneous and heterogeneous models underestimate the standard errors of the variance component estimates; however, when heterogeneity is taken into account, REML estimation gives accurate standard errors at the lowest level and less underestimated standard errors at the highest level; and (d) in terms of RMSE, REML accounting for heterogeneity outperforms REML assuming homogeneity, with a considerable improvement detected in particular for the fixed parameters. On this basis, we conclude that the solution presented can be uniformly adopted. We illustrate the process using a real dataset.

13.
The present study examined the measurement equivalence of the Satisfaction with Life Scale between American and Chinese samples using multigroup structural equation modeling (SEM), the multiple-indicator multiple-cause (MIMIC) model, and item response theory (IRT). Whereas SEM and MIMIC identified only one biased item across cultures, the IRT analysis revealed that four of the five items had differential item functioning. According to IRT, Chinese respondents whose latent life satisfaction scores were quite high did not endorse items such as “So far I have gotten the important things I want in life” and “If I could live my life over, I would change almost nothing.” The IRT analysis also showed that even when the unbiased items were weighted more heavily than the biased items, the latent mean life satisfaction score of the Chinese sample was substantially lower than that of the Americans. The differences among SEM, MIMIC, and IRT are discussed.

14.
A Monte Carlo study was used to compare four approaches to growth curve analysis of subjects assessed repeatedly with the same set of dichotomous items: a two‐step procedure first estimating latent trait measures using MULTILOG and then using a hierarchical linear model to examine the changing trajectories with the estimated abilities as the outcome variable; a structural equation model using modified weighted least squares (WLSMV) estimation; and two approaches in the framework of multilevel item response models, namely a hierarchical generalized linear model using Laplace estimation, and Bayesian analysis using Markov chain Monte Carlo (MCMC). These four methods have similar power in detecting the average linear slope across time. MCMC and Laplace estimates perform relatively better in terms of the bias of the average linear slope and its standard error, as well as the item location parameters. For the variance of the random intercept, and the covariance between the random intercept and slope, all estimates are biased in most conditions. For the random slope variance, only the Laplace estimates are unbiased when there are eight time points.

15.
The relations among alternative parameterizations of the binary factor analysis (FA) model and the two-parameter logistic (2PL) item response theory (IRT) model have been thoroughly discussed in the literature. However, the widely available conversion formulas mainly transform parameter estimates from one parameterization to another. There is a lack of discussion of standard error (SE) conversion among the different parameterizations, even though the SEs of IRT model parameters are often of immediate interest to practitioners. This article provides general formulas for computing the SEs of transformed parameter values when these parameters are transformed from FA to IRT models. The formulas are suitable for unidimensional 2PL, multidimensional 2PL, and bi-factor 2PL models. A simulation study is conducted to verify the formulas with empirical evidence, and a real data example is given at the end as an illustration.
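For the unidimensional case, SE conversion of this kind is an application of the delta method to the FA-to-IRT reparameterization, here written in the normal-ogive form a = λ/√(1−λ²), b = τ/λ. The sketch below is illustrative: the loading, threshold, and FA sampling covariance matrix are all hypothetical values, and the exact formulas in the article may differ in scaling constants.

```python
import numpy as np

# Normal-ogive FA -> IRT conversion: a = lam / sqrt(1 - lam^2), b = tau / lam.
lam, tau = 0.7, 0.3
cov_fa = np.array([[0.0040, 0.0005],       # hypothetical sampling covariance
                   [0.0005, 0.0030]])      # of the FA estimates (lam, tau)

# Jacobian of (a, b) with respect to (lam, tau)
J = np.array([[(1 - lam ** 2) ** -1.5, 0.0],
              [-tau / lam ** 2,        1 / lam]])

cov_irt = J @ cov_fa @ J.T                 # delta-method covariance of (a, b)
se_a, se_b = np.sqrt(np.diag(cov_irt))
```

The same pattern extends to multidimensional and bi-factor models by enlarging the Jacobian.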

16.
This paper studies how the standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models change as model parameters vary. Using logical analysis, simplified formulas, and numerical verification, monotonic relationships between SEs and factor loadings, as well as unique variances, are found. Conditions under which the monotonic relationships do not hold are also identified. These functional relationships allow researchers to better understand the problem when significant factor loading estimates are expected but not obtained, and vice versa. They also make explicit what affects the likelihood of Heywood cases (negative unique variance estimates). Empirical findings in the literature are discussed in light of the obtained results.

17.
Fei Gu &amp; Hao Wu, Psychometrika, 2016, 81(3), 751–773
The specifications of the state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under normality and nonnormality conditions. To cope with the nonnormality conditions, robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed.

18.
Cross-validation is a useful way of comparing the predictive generalizability of theoretically plausible a priori models in structural equation modeling (SEM). A number of overall or local cross-validation indices have been proposed for existing factor-based and component-based approaches to SEM, including covariance structure analysis and partial least squares path modeling. However, no such cross-validation index is available for generalized structured component analysis (GSCA), another component-based approach. We thus propose a cross-validation index for GSCA, called Out-of-bag Prediction Error (OPE), which estimates the expected prediction error of a model over replications of so-called in-bag and out-of-bag samples constructed through the bootstrap method. The calculation of this index is well-suited to the estimation procedure of GSCA, which uses the bootstrap method to obtain the standard errors or confidence intervals of parameter estimates. We empirically evaluate the performance of the proposed index through analyses of both simulated and real data.
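The in-bag/out-of-bag construction behind an index like OPE can be sketched for an ordinary linear regression: fit on each bootstrap (in-bag) sample, score on the cases that sample left out, and average. This is a generic bootstrap illustration with assumed data, not the GSCA estimation procedure itself.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, size=n)

B, errs = 200, []
for _ in range(B):
    inbag = rng.integers(0, n, size=n)             # bootstrap (in-bag) sample
    oob = np.setdiff1d(np.arange(n), inbag)        # cases left out of the sample
    beta, *_ = np.linalg.lstsq(X[inbag], y[inbag], rcond=None)
    errs.append(np.mean((y[oob] - X[oob] @ beta) ** 2))

ope = np.mean(errs)        # out-of-bag estimate of expected prediction error
```

Because roughly a third of the cases fall out-of-bag in each replication, the same bootstrap draws used for standard errors also yield honest prediction-error estimates at no extra cost.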

19.
In single-case research, the multiple-baseline (MB) design provides the opportunity to estimate the treatment effect based not only on within-series comparisons of treatment phase to baseline phase observations, but also on time-specific between-series comparisons of observations from cases that have started treatment to those still in baseline. For analyzing MB studies, two types of linear mixed modeling methods have been proposed: the within- and between-series models. In principle, these models were developed under normality assumptions; however, normality may not always hold in practical settings. Therefore, this study investigated the robustness of the within- and between-series models when data were non-normal. A Monte Carlo study was conducted with four statistical approaches, defined by crossing two analytic decisions: (a) whether to use a within- or between-series estimate of effect, and (b) whether to use restricted maximum likelihood or Markov chain Monte Carlo estimation. The results showed that the treatment effect estimates of the four approaches had minimal bias, that within-series estimates were more precise than between-series estimates, and that confidence interval coverage was frequently acceptable but varied across conditions and methods of estimation. Applications and implications are discussed based on the findings.

20.
This paper shows how LISREL may be used to estimate simplex models which impose constraints on the variances of endogenous variables. This technique allows us to estimate both the parameters and the standard errors of the correlated measurement error model proposed by Wiley and Wiley (1974).

We would like to thank Jim Wiley for his many helpful comments and suggestions on an earlier draft. We are grateful also to an anonymous reviewer for supplying the EQS program presented in Figure 4.

