Similar Articles
20 similar articles found.
1.
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling. In practice, researchers may often be interested in examining the interaction effects of latent variables. However, GSCA has been geared only for the specification and testing of the main effects of variables. Thus, an extension of GSCA is proposed to effectively deal with various types of interactions among latent variables. In the proposed method, a latent interaction is defined as a product of interacting latent variables. As a result, this method does not require the construction of additional indicators for latent interactions. Moreover, it can easily accommodate both exogenous and endogenous latent interactions. An alternating least-squares algorithm is developed to minimize a single optimization criterion for parameter estimation. A Monte Carlo simulation study is conducted to investigate the parameter recovery capability of the proposed method. An application is also presented to demonstrate the empirical usefulness of the proposed method.

2.
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates this nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigation of subject-wise variability in the loadings and path coefficients by examining the variance estimates of the corresponding random effects, as well as estimation of fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to multi-subject functional neuroimaging data for brain connectivity analysis, where time series measurements are nested within subjects.

3.
4.
Cross validation is a useful way of comparing the predictive generalizability of theoretically plausible a priori models in structural equation modeling (SEM). A number of overall or local cross validation indices have been proposed for existing factor-based and component-based approaches to SEM, including covariance structure analysis and partial least squares path modeling. However, no such cross validation index is available for generalized structured component analysis (GSCA), another component-based approach. We thus propose a cross validation index for GSCA, called the Out-of-bag Prediction Error (OPE), which estimates the expected prediction error of a model over replications of so-called in-bag and out-of-bag samples constructed through the bootstrap method. The calculation of this index is well-suited to the estimation procedure of GSCA, which uses the bootstrap method to obtain the standard errors or confidence intervals of parameter estimates. We empirically evaluate the performance of the proposed index through analyses of both simulated and real data.
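
The in-bag/out-of-bag resampling logic behind an OPE-style index can be sketched in a few lines. This is an illustrative sketch only: ordinary least squares stands in for the GSCA estimator, and the function name and mean-squared-error choice are assumptions, not the paper's implementation.

```python
import numpy as np

def oob_prediction_error(X, y, fit, predict, n_boot=200, seed=0):
    """Average prediction error on out-of-bag cases over bootstrap
    replications: fit on the in-bag sample, score on cases never drawn."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = []
    for _ in range(n_boot):
        in_bag = rng.integers(0, n, size=n)           # bootstrap (in-bag) indices
        oob = np.setdiff1d(np.arange(n), in_bag)      # cases never drawn
        if oob.size == 0:
            continue
        params = fit(X[in_bag], y[in_bag])
        resid = y[oob] - predict(X[oob], params)
        errors.append(np.mean(resid ** 2))
    return float(np.mean(errors))

# Ordinary least squares as a placeholder for the model estimator
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda X, b: X @ b

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=100)
ope = oob_prediction_error(X, y, fit, predict)
```

Because each bootstrap draw leaves roughly a third of the cases out of bag, every replication yields a genuine holdout set at no extra data cost, which is why the index dovetails with GSCA's existing bootstrap step.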

5.
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables, for which it has so far offered no remedy. Thus, a regularized extension of GSCA is proposed that integrates a ridge type of regularization into GSCA in a unified framework, thereby enabling it to handle multicollinearity problems effectively. An alternating regularized least squares algorithm is developed for parameter estimation. A Monte Carlo simulation study is conducted to investigate the performance of the proposed method as compared to its non-regularized counterpart. An application is also presented to demonstrate the empirical usefulness of the proposed method.
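
The ridge penalty at the heart of such a regularized least squares step can be illustrated on its own; the sketch below (a plain ridge regression, with made-up collinear data and an arbitrary penalty of 1.0) shows the stabilizing effect the abstract describes, not the actual alternating algorithm.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y -- the kind of penalty a
    regularized least squares step folds into each update."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)   # nearly collinear predictor
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.5, size=200)

b_ols = ridge(X, y, 0.0)     # unregularized: unstable under collinearity
b_ridge = ridge(X, y, 1.0)   # penalized: coefficients shrink toward stability
```

With lam = 0 the nearly singular cross-product matrix lets the two coefficients trade off wildly against each other; a small positive lam pins down the ill-determined direction while leaving the well-determined combined effect (about 1 here) essentially untouched.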

6.
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modelling, which adopts components of observed variables as proxies for latent variables and examines directional relationships among latent and observed variables. GSCA has been extended to deal with a wider range of data types, including discrete, multilevel or intensive longitudinal data, as well as to accommodate a greater variety of complex analyses such as latent moderation analysis, the capturing of cluster-level heterogeneity, and regularized analysis. To date, however, there has been no attempt to generalize the scope of GSCA into the Bayesian framework. In this paper, a novel extension of GSCA, called BGSCA, is proposed that estimates parameters within the Bayesian framework. BGSCA can be more attractive than the original GSCA for various reasons. For example, it can infer the probability distributions of random parameters, account for error variances in the measurement model, provide additional fit measures for model assessment and comparison from a Bayesian perspective, and incorporate external information on parameters, which may be obtainable from past research, expert opinions, subjective beliefs or knowledge of the parameters. We utilize a Markov chain Monte Carlo method, the Gibbs sampler, to sample from the posterior distributions of the BGSCA parameters. We conduct a simulation study to evaluate the performance of BGSCA. We also apply BGSCA to real data to demonstrate its empirical usefulness.
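
The Gibbs sampler the abstract mentions works by drawing each parameter from its full conditional given the others. A minimal sketch on a toy target (a bivariate standard normal with correlation rho, chosen because its full conditionals are known in closed form; this is not the BGSCA sampler itself):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, burn=500, seed=0):
    """Gibbs sampling from a bivariate standard normal with correlation
    rho, alternating the full conditionals x|y ~ N(rho*y, 1-rho^2)
    and y|x ~ N(rho*x, 1-rho^2)."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # update x given current y
        y = rng.normal(rho * x, sd)   # update y given new x
        draws[t] = x, y
    return draws[burn:]              # discard burn-in draws

samples = gibbs_bivariate_normal(rho=0.7)
```

After burn-in, sample moments approximate the target's moments; in a model like BGSCA the same alternation runs over loadings, path coefficients and error variances instead of two scalars.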

7.
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data, which often involve measurement occasions that differ across participants and outnumber the participants themselves. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that project infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, Functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of Functional GSCA is illustrated with gait data.
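
The basic move of a spline basis expansion, representing a curve by a small number of basis coefficients, can be sketched with a truncated power basis (an assumed, simple basis choice for illustration; the paper's actual basis and penalty may differ):

```python
import numpy as np

def spline_basis(t, knots, degree=3):
    """Truncated power basis for a spline of the given degree: global
    polynomial terms plus one truncated power term per interior knot."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

# Project noisy functional observations onto a finite spline basis
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
curve = np.sin(2 * np.pi * t)                      # the "true" smooth curve
y = curve + rng.normal(scale=0.1, size=t.size)     # noisy discrete observations

B = spline_basis(t, knots=[0.25, 0.5, 0.75])       # 200 observations -> 7 coefficients
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
smooth = B @ coef                                  # finite-dimensional reconstruction
```

Two hundred irregular measurements collapse to seven coefficients, which is exactly what makes component analysis feasible when occasions outnumber participants.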

8.
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the interdependence of successive observations. Bootstrap methods can fill this need, however. The standard bootstrap of individual timepoints is not appropriate because it destroys their order in time and consequently gives incorrect standard error estimates. Two bootstrap procedures that are appropriate for dynamic factor analysis are described. The moving block bootstrap breaks down the original time series into blocks and draws samples of blocks instead of individual timepoints. A parametric bootstrap is essentially a Monte Carlo study in which the population parameters are taken to be estimates obtained from the available sample. These bootstrap procedures are demonstrated using 103 days of affective mood self-ratings from a pregnant woman, 90 days of personality self-ratings from a psychology freshman, and a simulation study.
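
The moving block idea can be sketched directly; this is a generic illustration on a simulated AR(1) series with an assumed block length of 25, not the article's analysis:

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """Resample a time series by concatenating randomly chosen
    overlapping blocks, preserving short-range time dependence
    that resampling individual timepoints would destroy."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]   # trim to the original length

rng = np.random.default_rng(0)
# Simulated AR(1) series with autoregressive coefficient 0.8
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + e[t]

resampled = moving_block_bootstrap(y, block_len=25, rng=rng)
```

Within each block the original ordering survives, so the resampled series retains most of the lag-1 dependence; only the few block boundaries break it. Repeating the resample and refitting the model in each replication yields the bootstrap standard errors.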

9.
This paper focuses on the two‐parameter latent trait model for binary data. Although the prior distribution of the latent variable is usually assumed to be a standard normal distribution, that prior distribution can be estimated from the data as a discrete distribution using a combination of EM algorithms and other optimization methods. We assess with what precision we can estimate the prior from the data, using simulations and bootstrapping. A novel calibration method is given to check that near optimality is achieved for the bootstrap estimates. We find that there is sufficient information on the prior distribution to be informative, and that the bootstrap method is reliable. We illustrate the bootstrap method for two sets of real data.

10.
Process factor analysis (PFA) is a latent variable model for intensive longitudinal data. It combines P-technique factor analysis and time series analysis. No goodness-of-fit test is currently available for PFA. In this paper, we propose a parametric bootstrap method for assessing model fit in PFA. We illustrate the test with an empirical data set in which 22 participants rated their affects every day over a period of 90 days. We also explore the Type I error and power of the parametric bootstrap test with simulated data.
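
The general shape of a parametric bootstrap fit test is model-agnostic: fit the model, simulate many data sets from the fitted model, and locate the observed fit statistic in that simulated null distribution. The sketch below uses a deliberately toy model (a Normal with unit variance, checked with the sample variance); the PFA version would plug in the PFA estimator and a PFA fit statistic instead.

```python
import numpy as np

def parametric_bootstrap_pvalue(data, fit, simulate, statistic,
                                n_boot=500, seed=0):
    """Generic parametric bootstrap test of model fit: simulate from the
    fitted model and locate the observed statistic in the null distribution."""
    rng = np.random.default_rng(seed)
    params = fit(data)
    t_obs = statistic(data, params)
    t_null = np.array([statistic(sim, fit(sim))
                       for sim in (simulate(params, rng)
                                   for _ in range(n_boot))])
    return float(np.mean(t_null >= t_obs))

# Toy illustration: test a Normal(mu, 1) model using the sample variance
fit = lambda d: d.mean()
simulate = lambda mu, rng: rng.normal(mu, 1.0, size=100)
statistic = lambda d, mu: d.var()

rng = np.random.default_rng(1)
good = rng.normal(0.0, 1.0, size=100)   # data consistent with the model
bad = rng.normal(0.0, 3.0, size=100)    # overdispersed data
p_good = parametric_bootstrap_pvalue(good, fit, simulate, statistic)
p_bad = parametric_bootstrap_pvalue(bad, fit, simulate, statistic)
```

Data generated from the assumed model yield an unremarkable p value, while the overdispersed data land far outside the simulated null distribution and are flagged as misfit.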

11.
The latent Markov (LM) model is a popular method for identifying distinct unobserved states and transitions between these states over time in longitudinally observed responses. The bootstrap likelihood-ratio (BLR) test yields the most rigorous test for determining the number of latent states, yet little is known about power analysis for this test. Power could be computed as the proportion of the bootstrap p values (PBP) for which the null hypothesis is rejected. This requires performing the full bootstrap procedure for a large number of samples generated from the model under the alternative hypothesis, which is computationally infeasible in most situations. This article presents a computationally feasible shortcut method for power computation for the BLR test. The shortcut method involves the following simple steps: (1) obtaining the parameters of the model under the null hypothesis, (2) constructing the empirical distributions of the likelihood ratio under the null and alternative hypotheses via Monte Carlo simulations, and (3) using these empirical distributions to compute the power.  We evaluate the performance of the shortcut method by comparing it to the PBP method and, moreover, show how the shortcut method can be used for sample-size determination.
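
Steps (2) and (3) of the shortcut reduce to simple quantile arithmetic once the two empirical likelihood-ratio distributions are in hand. In the sketch below the Monte Carlo LR draws are stand-ins (chi-square-like under the null, shifted under the alternative, with assumed shapes); in practice they would come from simulating and refitting the LM models.

```python
import numpy as np

def shortcut_power(lr_null, lr_alt, alpha=0.05):
    """Take the (1-alpha) quantile of the null LR distribution as the
    critical value, then return the share of alternative-hypothesis
    LR values that exceed it."""
    crit = np.quantile(lr_null, 1.0 - alpha)
    return float(np.mean(lr_alt > crit))

rng = np.random.default_rng(0)
# Stand-in Monte Carlo LR draws for illustration only
lr_null = rng.chisquare(df=2, size=5000)
lr_alt = rng.chisquare(df=2, size=5000) + 6.0
power = shortcut_power(lr_null, lr_alt)
```

For sample-size determination the same function is evaluated on LR distributions simulated at increasing sample sizes until the returned power reaches the target level.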

12.
A split-sample replication criterion originally proposed by J. E. Overall and K. N. Magee (1992) as a stopping rule for hierarchical cluster analysis is applied to multiple data sets generated by sampling with replacement from an original simulated primary data set. An investigation of the validity of this bootstrap procedure was undertaken using different combinations of the true number of latent populations, degrees of overlap, and sample sizes. The bootstrap procedure enhanced the accuracy of identifying the true number of latent populations under virtually all conditions. Increasing the size of the resampled data sets relative to the size of the primary data set further increased accuracy. A computer program to implement the bootstrap stopping rule is made available via a referenced Web site.

13.
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables or provided precise estimates of the relative effectiveness of those methods.

This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1,680 3.06 GHz Intel Xeon processors running R and OpenMx.

Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
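
The percentile bootstrap that performed well here is straightforward to sketch for an indirect effect a*b. The sketch below uses simple regressions of observed scores as a stand-in for the latent-variable SEM (an assumption for illustration; variable names and the n = 300 data set are made up), with the 2,000 resamples the study used:

```python
import numpy as np

def percentile_ci_indirect(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b: resample
    cases, re-estimate a (m on x) and b (y on m controlling for x),
    and take quantiles of the bootstrap products."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                  # slope of m on x
        b = np.linalg.lstsq(np.column_stack([ms, xs, np.ones(n)]),
                            ys, rcond=None)[0][0]     # slope of y on m given x
        ab[i] = a * b
    return np.quantile(ab, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(2)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)    # true a = 0.5
y = 0.5 * m + rng.normal(size=300)    # true b = 0.5, indirect effect 0.25
lo, hi = percentile_ci_indirect(x, m, y)
```

The interval brackets the true indirect effect of 0.25 and excludes zero, i.e., the percentile method detects the mediated path without the bias-correction step that inflated Type I error for BC and BCa.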

14.
This paper proposes a full information maximum likelihood estimation method for modelling multivariate longitudinal ordinal variables. Two latent variable models are proposed that account for dependencies among items within and between time points. One model fits item‐specific random effects, which account for the correlations between time points, and the second model uses a common factor. The relationships between the time‐dependent latent variables are modelled with a non‐stationary autoregressive model. The proposed models are fitted to a real data set.

15.
The generalized graded unfolding model (GGUM) is capable of analyzing polytomously scored unfolding data, such as agree‐disagree responses to attitude statements. In the present study, we proposed a GGUM with a structural equation for subject parameters, which enabled us to evaluate the relations between subject parameters and covariates and/or latent variables simultaneously, in order to avoid the influence of attenuation. Additionally, an algorithm for parameter estimation is newly implemented via a Markov chain Monte Carlo (MCMC) method based on Bayesian statistics. In a simulation, we compared the accuracy of regression coefficient estimates between the proposed model and a conventional method using a GGUM (where regression coefficients are estimated using estimates of θ). As a result, the proposed model performed much better than the conventional method in terms of bias and root mean squared errors of the estimated regression coefficients. The study concluded by verifying the efficacy of the proposed model, using an actual data example of attitude measurement.

16.
Many intensive longitudinal measurements are collected at irregularly spaced time intervals, and involve complex, possibly nonlinear and heterogeneous patterns of change. Effective modelling of such change processes requires continuous-time differential equation models that may be nonlinear and include mixed effects in the parameters. One approach to fitting such models is to define random effect variables as additional latent variables in a stochastic differential equation (SDE) model of choice, and to use estimation algorithms designed for fitting SDE models, such as the continuous-discrete extended Kalman filter (CDEKF) approach implemented in the dynr R package, to estimate the random effect variables as latent variables. However, this approach's efficacy and identification constraints in handling mixed-effects SDE models have not been investigated. In the current study, we analytically inspect the identification constraints of using the CDEKF approach to fit nonlinear mixed-effects SDE models; extend a published model of emotions to a nonlinear mixed-effects SDE model as an example, and fit it to a set of irregularly spaced ecological momentary assessment data; and evaluate the feasibility of the proposed approach through a Monte Carlo simulation study. Results show that the proposed approach produces reasonable parameter and standard error estimates when certain identification constraints are met. We address the effects of sample size, process noise variance, and data spacing conditions on estimation results.

17.
Growth mixture models (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate GMMs with latent class dependent missing data. An extended GMM is first presented in which class probabilities depend on some observed explanatory variables and data missingness depends on both the explanatory variables and a latent class variable. A full Bayesian method is then proposed to estimate the model. Through the data augmentation method, conditional posterior distributions for all model parameters and missing data are obtained. A Gibbs sampling procedure is then used to generate Markov chains of model parameters for statistical inference. The application of the model and the method is first demonstrated through the analysis of mathematical ability growth data from the National Longitudinal Survey of Youth 1997 (Bureau of Labor Statistics, U.S. Department of Labor, 1997). A simulation study considering three main factors (the sample size, the class probability, and the missing data mechanism) is then conducted, and the results show that the proposed Bayesian estimation approach performs very well under the studied conditions. Finally, some implications of this study, including the misspecified missingness mechanism, the sample size, the sensitivity of the model, the number of latent classes, the model comparison, and future directions of the approach, are discussed.

18.
A taxonomy of latent structure assumptions (LSAs) for probability matrix decomposition (PMD) models is proposed which includes the original PMD model (Maris, De Boeck, & Van Mechelen, 1996) as well as a three-way extension of the multiple classification latent class model (Maris, 1999). It is shown that PMD models involving different LSAs are actually restricted latent class models with latent variables that depend on some external variables. For parameter estimation a combined approach is proposed that uses both a mode-finding algorithm (EM) and a sampling-based approach (Gibbs sampling). A simulation study is conducted to investigate the extent to which information criteria, specific model checks, and checks for global goodness of fit may help to specify the basic assumptions of the different PMD models. Finally, an application is described with models involving different latent structure assumptions for data on hostile behavior in frustrating situations. Note: The research reported in this paper was partially supported by the Fund for Scientific Research-Flanders (Belgium) (project G.0207.97 awarded to Paul De Boeck and Iven Van Mechelen), and the Research Fund of K.U. Leuven (F/96/6 fellowship to Andrew Gelman, OT/96/10 project awarded to Iven Van Mechelen and GOA/2000/02 awarded to Paul De Boeck and Iven Van Mechelen). We thank Marcel Croon and Kristof Vansteelandt for commenting on an earlier draft of this paper.

19.
Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model with vector autoregressive relations and time-varying cross-regression parameters at the factor level. Using techniques drawn from the state-space literature, the model was fitted to a set of daily affect data (over 71 days) from 10 participants who had been diagnosed with Parkinson's disease. Our empirical results lend partial support and some potential refinement to the Dynamic Model of Activation with regard to how the time dependencies between positive and negative affects change over time. A simulation study is conducted to examine the performance of the proposed techniques when (a) changes in the time-varying parameters are represented using the true model of change, (b) supposedly time-invariant parameters are represented as time-varying, and (c) the time-varying parameters show discrete shifts that are approximated using an autoregressive model of differences.

20.
Until recently, item response models such as the factor analysis model for metric responses, the two‐parameter logistic model for binary responses and the multinomial model for nominal responses considered only the main effects of latent variables without allowing for interaction or polynomial latent variable effects. However, non‐linear relationships among the latent variables might be necessary in real applications. Methods for fitting models with non‐linear latent terms have been developed mainly under the structural equation modelling approach. In this paper, we consider a latent variable model framework for mixed responses (metric and categorical) that allows inclusion of both non‐linear latent and covariate effects. The model parameters are estimated using full maximum likelihood based on a hybrid integration–maximization algorithm. Finally, a method for obtaining factor scores based on multiple imputation is proposed here for the non‐linear model.
