Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
In applications of SEM, investigators obtain and interpret parameter estimates that are computed so as to produce optimal model fit, in the sense that the obtained model fit would deteriorate to some degree if any of those estimates were changed. This property raises two questions: to what extent would model fit deteriorate if parameter estimates were changed, and which parameters have the greatest influence on model fit? This is the idea of parameter influence. The present paper covers two approaches to quantifying parameter influence. Both are based on the principle of likelihood displacement (LD), which quantifies influence as the discrepancy between the likelihood under the original model and the likelihood under a model in which a minor perturbation is imposed (Cook, 1986). One existing approach is a vector approach (Lee & Wang, 1996) that determines a direction in the parameter space such that altering parameter values simultaneously along it causes the maximum change in LD. We propose a new approach, called influence mapping for single parameters, that determines the change in model fit under perturbation of a single parameter while holding all other parameter estimates constant. An influential parameter is defined as one that produces a large change in model fit under minor perturbation. Figure 1 illustrates results from this procedure for three different parameters in an empirical application; flatter curves represent less influential parameters. Practical implications of the results are discussed, as is the relationship with statistical power in structural equation models.
FIGURE 1 Influence mapping for single parameters.
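The single-parameter influence mapping described above can be sketched generically: perturb one estimate over a grid while holding the others at their ML values, and record the likelihood displacement at each step. The sketch below is illustrative only (a toy normal model rather than an SEM), and the function names are hypothetical stand-ins, not the authors' implementation.

```python
# Generic sketch: trace how model fit deteriorates when a single parameter
# estimate is perturbed while all other estimates are held fixed.
# `loglik` is any function returning the model log-likelihood at a full
# parameter vector.
import numpy as np

def influence_map(loglik, theta_hat, index, deltas):
    """Likelihood displacement LD = 2*[loglik(theta_hat) - loglik(perturbed)]
    over a grid of perturbations of one parameter."""
    base = loglik(theta_hat)
    ld = []
    for d in deltas:
        theta = np.array(theta_hat, dtype=float)
        theta[index] += d                      # perturb one parameter only
        ld.append(2.0 * (base - loglik(theta)))
    return np.array(ld)

# Toy normal-mean model; flatter LD curves indicate less influential parameters.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.normal(loc=1.0, scale=2.0, size=200)

    def loglik(theta):                         # theta = (mu, log_sigma)
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        return np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                      - 0.5 * ((y - mu) / sigma) ** 2)

    theta_hat = np.array([y.mean(), np.log(y.std())])
    deltas = np.linspace(-0.5, 0.5, 11)
    print(influence_map(loglik, theta_hat, index=0, deltas=deltas))
```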

3.
4.
5.
When designing a study that uses structural equation modeling (SEM), an important task is to decide on an appropriate sample size. Historically, this task has been approached from the power-analytic perspective, where the goal is to obtain sufficient power to reject a false null hypothesis. However, hypothesis testing only indicates whether a population effect is zero and does not address the size of the population effect. Moreover, significance tests in the SEM context often reject the null hypothesis too easily, so the problem in practice is often having too much power rather than not enough.

An alternative means to infer the population effect is forming confidence intervals (CIs). A CI is more informative than hypothesis testing because a CI provides a range of plausible values for the population effect size of interest. Given the close relationship between CI and sample size, the sample size for an SEM study can be planned with the goal to obtain sufficiently narrow CIs for the population model parameters of interest.

Latent curve models (LCMs) are an application of SEM with a mean structure to the study of change over time. The sample size planning method for LCMs from the CI perspective is based on maximum likelihood and the expected information matrix. Given a sample, forming a CI for a model parameter of interest in an LCM requires the sample covariance matrix S, the sample mean vector x̄, and the sample size N. The width (w) of the resulting CI can therefore be considered a function of S, x̄, and N. Inverting the CI formation process gives the sample size planning process: the inverted process takes a proxy for the population covariance matrix Σ, the population mean vector μ, and the desired width ω as input, and returns N as output. The specification of the input information for sample size planning should be based on a systematic literature review. In the context of covariance structure analysis, Lai and Kelley (2011) discussed several practical methods to facilitate specifying Σ and ω for the sample size planning procedure.
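As a rough illustration of the inversion just described, suppose the standard error of the target parameter behaves as sqrt(v/N), where v is the asymptotic variance implied by the proxies for Σ and μ (for example, read off the expected information matrix of the hypothesized LCM). Inverting the Wald CI width w = 2·z·sqrt(v/N) then gives the required N directly. The function below is a hedged sketch under that assumption, not the authors' software.

```python
# Sketch of sample size planning by inverting the expected CI width.
# Assumes SE(parameter) = sqrt(v_asymptotic / N); v_asymptotic is supplied by
# the user from the proxy Sigma and mu (e.g., via the expected information
# matrix of the hypothesized latent curve model).
import math
from scipy.stats import norm

def plan_sample_size(v_asymptotic, omega, alpha=0.05):
    """Smallest N such that the expected width of the (1 - alpha) Wald CI,
    2 * z * sqrt(v_asymptotic / N), does not exceed the desired width omega."""
    z = norm.ppf(1.0 - alpha / 2.0)
    return math.ceil((2.0 * z / omega) ** 2 * v_asymptotic)

# e.g., asymptotic variance 4.0 for the mean slope, desired width 0.3 -> 683
print(plan_sample_size(v_asymptotic=4.0, omega=0.3))
```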

6.
7.
Mediation analysis investigates how certain variables mediate the effect of predictors on outcome variables. Existing studies of mediation models have been limited to normal-theory maximum likelihood (ML) or least squares with normally distributed data. Because real data in the social and behavioral sciences are seldom normally distributed and often contain outliers, classical methods can result in biased and inefficient estimates, which lead to inaccurate or unreliable tests of the mediated effect. The authors propose two approaches for better mediation analysis. One is to identify cases that strongly affect test results of mediation using local influence methods and robust methods. The other is to use robust methods for parameter estimation and then test the mediated effect based on the robust estimates. Analytic details of both local influence and robust methods specific to mediation models are provided, together with a real-data example. We first used local influence and robust methods to identify influential cases. Then, for the original data and for the data with the identified influential cases removed, the mediated effect was tested using two estimation methods, normal-theory ML and the robust method, crossed with two tests of mediation: the Sobel (1982) test using an information-based standard error (z_I) and a sandwich-type standard error (z_SW). Results show that local influence and robust methods rank the influence of cases similarly, while the robust method is more objective. The widely used z_I statistic is inflated when the distribution is heavy-tailed. Compared to normal-theory ML, the robust method provides estimates with smaller standard errors and more reliable tests.
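For reference, the Sobel (1982) statistic mentioned above has a simple closed form: z = ab / sqrt(b²·se_a² + a²·se_b²). The sketch below computes it from user-supplied estimates and standard errors; how those standard errors are obtained (information-based for z_I, sandwich-type for z_SW) depends on the fitted mediation model and is not shown here.

```python
# Minimal sketch of the Sobel (1982) z test for a mediated effect a*b.
# The standard errors se_a and se_b are supplied by the user and may come
# from information-based or sandwich-type estimators (z_I or z_SW).
import math
from scipy.stats import norm

def sobel_z(a, b, se_a, se_b):
    """z statistic for the mediated (indirect) effect a*b."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return a * b / se_ab

z = sobel_z(a=0.40, b=0.35, se_a=0.10, se_b=0.12)
p = 2 * (1 - norm.cdf(abs(z)))        # two-sided p value
print(round(z, 3), round(p, 4))
```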

8.
9.
10.
A productive way to think about imagistic mental models of physical systems is as though they were sources of quasi-empirical evidence. People depict or imagine events at those points in time when they would experiment with the world if possible. Moreover, just as they would do when observing the world, people induce patterns of behavior from the results depicted in their imaginations. These resulting patterns of behavior can then be cast into symbolic rules to simplify thinking about future problems and to reveal higher order relationships. Using simple gear problems, three experiments explored the occasions of use for, and the inductive transitions between, depictive models and number-based rules. The first two experiments used the convergent evidence of problem-solving latencies, hand motions, referential language and error data to document the initial use of a model, the induction of rules from the modeling results, and the fallback to a model when a rule fails. The third experiment explored the intermediate representations that facilitate the induction of rules from depictive models. The strengths and weaknesses of depictive modeling and more analytic systems of reasoning are delineated to motivate the reasons for these transitions.

11.
12.
13.
14.
15.
Abstract

Conventional growth models assume that the random effects describing individual trajectories are conditionally normal. In practice, this assumption may often be unrealistic. As an alternative, Nagin (2005) suggested a semiparametric group-based approach (SPGA), which approximates an unknown, continuous distribution of individual trajectories with a mixture of group trajectories.

Prior simulations (Brame, Nagin, & Wasserman, 2006; Nagin, 2005) indicated that SPGA could generate nearly unbiased estimates of the means and variances of a nonnormal distribution of individual trajectories, as functions of the group-trajectory estimates. However, these studies used few random effects, usually only a random intercept. Based on the analytical relationship between SPGA and adaptive quadrature, we hypothesized that SPGA's ability to approximate (a) random effect variances/covariances and (b) effects of time-invariant predictors of growth should deteriorate as the dimensionality of the random effects distribution increases. We expected this problem to be mitigated by correlations among the random effects (highly correlated random effects functioning as fewer dimensions) and by sample size (larger N supporting more groups).

We tested these hypotheses via simulation, varying the number of random effects (1, 2, or 3), the correlation among the random effects (0 or .6), and N (250 or 500). Results indicated that, as the number of random effects increased, SPGA approximations remained acceptable for fixed effects but became increasingly negatively biased for random effect variances. Whereas correlated random effects and larger N reduced this underestimation, correlated random effects sometimes distorted recovery of predictor effects. To illustrate this underestimation, Figure 1 depicts SPGA's approximation of the intercept variance from a generating model with three correlated random effects (N = 500). These results suggest that SPGA approximations are inadequate for the nonnormal, high-dimensional distributions of individual trajectories often seen in practice.
FIGURE 1 SPGA-approximated intercept variance from a generating model with three correlated random effects. Notes. The dashed horizontal lines denote ±10% bias. The solid horizontal line denotes the population-generating parameter value; * denotes the best-BIC-selected number of groups. The vertical bars denote 90% confidence intervals.
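Because SPGA recovers the moments of the trajectory distribution as weighted functions of the estimated group trajectories, the approximated mean and variance of a growth factor reduce to a probability-weighted mean and variance over the K groups. The sketch below illustrates that calculation with made-up numbers; it is not the simulation code used in the study.

```python
# Sketch: recover the SPGA-implied mean and variance of a growth factor
# (e.g., the intercept) from K group trajectories and their estimated class
# probabilities.  Values are illustrative only.
import numpy as np

def spga_moments(class_probs, group_coefs):
    """Weighted mean and variance of a random effect across K groups."""
    p = np.asarray(class_probs, dtype=float)
    m = np.asarray(group_coefs, dtype=float)
    mean = np.sum(p * m)
    var = np.sum(p * (m - mean) ** 2)
    return mean, var

# e.g., a hypothetical 4-group solution for the intercept factor
print(spga_moments([0.15, 0.35, 0.35, 0.15], [-1.2, -0.2, 0.4, 1.5]))
```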

16.
This paper discusses an analysis of how scientists select relevant publications, and an application that can assist scientists in this information selection task. The application, called the Personal Publication Assistant, is based on the assumption that successful information selection is driven by recognizing familiar terms. To adapt itself to a researcher’s interests, the system takes into account what words have been used in a particular researcher’s abstracts, and when these words have been used. The user model underlying the Personal Publication Assistant is based on a rational analysis of memory, and takes the form of a model of declarative memory as developed for the cognitive architecture ACT-R. We discuss an experiment testing the assumptions of this model and present a user study that validates the implementation of the Personal Publication Assistant. The user study shows that the Personal Publication Assistant can successfully make an initial selection of relevant papers from a large collection of scientific literature.
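The declarative-memory model referred to above is standardly formalized in ACT-R by the base-level learning equation, B = ln(Σ_j t_j^(−d)), where t_j is the time since the j-th occurrence of an item and d is a decay parameter (conventionally 0.5). The sketch below applies that equation to score a term's familiarity from its occurrence history; it illustrates the modeling idea only and is not the Personal Publication Assistant's code.

```python
# Sketch of the ACT-R base-level learning equation applied to a term's
# occurrence history: B = ln( sum_j t_j^(-d) ), with decay d = 0.5 by
# convention.  Ages are times since each occurrence (here, in days).
import math

def base_level_activation(ages_in_days, d=0.5):
    """ACT-R base-level activation for a term with past occurrences at the
    given ages (time elapsed since each occurrence)."""
    return math.log(sum(t ** (-d) for t in ages_in_days))

# A term used often and recently outscores one used long ago.
print(base_level_activation([1, 3, 10, 30]))     # recent, frequent term
print(base_level_activation([300, 400]))         # old, rare term
```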

17.
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods.

This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, the bias-corrected (BC) bootstrap, the bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx.

Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
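As a reference point for the percentile bootstrap the study favors, the sketch below forms a PC bootstrap interval for a simple indirect effect. It deliberately uses observed variables and ordinary least squares for the two regressions, whereas the study fit latent-variable models in OpenMx; it is a minimal illustration of the resampling logic, not the simulation code.

```python
# Minimal sketch of the percentile (PC) bootstrap CI for an indirect effect
# a*b, using observed variables and OLS for the regressions x -> m and
# y ~ m + x.  Synthetic data only.
import numpy as np

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # slope of m on x
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]      # slope of y on m, given x
    return a * b

def pc_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.39 * x + rng.normal(size=200)
y = 0.39 * m + rng.normal(size=200)
print(pc_bootstrap_ci(x, m, y))
```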

18.
19.
Prior to a three-way component analysis of a three-way data set, it is customary to preprocess the data by centering and/or rescaling them. Harshman and Lundy (1984) considered that three-way data actually consist of a three-way model part, which pertains to ratio scale measurements, as well as additive “offset” terms that turn the ratio scale measurements into interval scale measurements. They mentioned that such offset terms might be estimated by incorporating additional components in the model, but discarded this idea in favor of an approach that removes such terms by means of centering; estimates for the three-way component model parameters are then obtained by analyzing the centered data. In the present paper, the possibility of actually estimating the offset terms is taken up again. First, it is indicated in which cases such offset terms can be estimated uniquely. Next, procedures are offered for estimating model parameters and offset parameters simultaneously, as well as successively (i.e., providing offset term estimates after the three-way model parameters have been estimated in the traditional way on the basis of the centered data). These procedures are provided for both the CANDECOMP/PARAFAC model and the Tucker3 model extended with offset terms. The successive and simultaneous approaches were compared on the basis of simulated data. Both procedures perform well when the fitted model captures at least all offset terms actually underlying the data, with the simultaneous procedures performing slightly better than the successive ones. If fewer offset terms are fitted than actually underlie the data, the results are considerably poorer, but in these cases the successive procedures performed better than the simultaneous ones. All in all, it can be concluded that the traditional approach for estimating model parameters can hardly be improved upon, and that offset terms can be estimated sufficiently well by the proposed successive approach, which is a simple extension of the traditional approach.
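The centering idea discussed above can be illustrated with a short sketch: offset terms that are constant across one mode of the three-way array are removed by centering across that mode, and the removed means provide simple successive-style offset estimates alongside the component model fitted to the centered data. The sketch below uses synthetic data and plain NumPy; the CANDECOMP/PARAFAC or Tucker3 fit itself is not reproduced.

```python
# Sketch: remove offset terms that are constant across the first mode of a
# three-way array by centering across that mode, keeping the removed means
# as offset estimates.  The component model would then be fit to the
# centered array in the traditional way (not shown).
import numpy as np

def center_first_mode(X):
    """Center an I x J x K array across its first mode.

    Returns the centered array and the J x K matrix of removed means,
    which estimates offsets that are constant over the first mode."""
    offsets = X.mean(axis=0)                       # J x K means
    return X - offsets[None, :, :], offsets

# Toy data: a rank-1 "ratio scale" model part plus a constant offset per cell
rng = np.random.default_rng(0)
a = rng.normal(size=10)
a -= a.mean()                                      # model part mean-zero over mode 1
b, c = rng.normal(size=4), rng.normal(size=3)
model_part = np.einsum("i,j,k->ijk", a, b, c)
true_offsets = rng.normal(size=(4, 3))
X = model_part + true_offsets[None, :, :] + 0.01 * rng.normal(size=(10, 4, 3))

Xc, offset_hat = center_first_mode(X)
print(np.round(offset_hat - true_offsets, 2))      # near zero: offsets recovered
```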

20.