Similar Articles
20 similar articles retrieved (search time: 62 ms)
1.
In this paper, we implement a Markov chain Monte Carlo algorithm based on the stochastic search variable selection method of George and McCulloch (1993) for identifying promising subsets of manifest variables (items) for factor analysis models. The suggested algorithm is constructed by embedding in the usual factor analysis model a normal mixture prior for the model loadings, with latent indicators used to identify not only which manifest variables should be included in the model but also how each manifest variable is associated with each factor. We further extend the suggested algorithm to allow for factor selection. We also develop a detailed procedure for the specification of the prior parameter values based on the practical significance of factor loadings, using ideas from the original work of George and McCulloch (1993). A straightforward Gibbs sampler is used to simulate from the joint posterior distribution of all unknown parameters, and the subset of variables with the highest posterior probability is selected. The proposed method is illustrated using real and simulated data sets.
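A minimal sketch, assuming a two-component normal mixture ("spike-and-slab") prior of the kind used in stochastic search variable selection: the snippet computes the conditional posterior probability that a loading's latent indicator equals 1. The prior scales tau0, tau1 and the prior inclusion probability p are illustrative values, not the article's specification.

```python
import numpy as np
from scipy.stats import norm

def inclusion_probability(loading, tau0=0.01, tau1=1.0, p=0.5):
    """Posterior probability that the latent indicator is 1, i.e. that the
    loading comes from the 'slab' N(0, tau1^2) rather than the 'spike'
    N(0, tau0^2), under prior inclusion probability p."""
    slab = p * norm.pdf(loading, scale=tau1)
    spike = (1 - p) * norm.pdf(loading, scale=tau0)
    return slab / (slab + spike)

# Within a Gibbs sweep, each indicator would be resampled from a Bernoulli
# distribution with this probability, conditional on the current loading draw.
print(inclusion_probability(0.45))   # near 1: treated as a 'real' loading
print(inclusion_probability(0.002))  # near 0: effectively excluded
```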

2.
A new oblique factor rotation method is proposed, the aim of which is to identify a simple and well‐clustered structure in a factor loading matrix. A criterion consisting of the complexity of a factor loading matrix and a between‐cluster dissimilarity is optimized using the gradient projection algorithm and the k‐means algorithm. It is shown that if there is an oblique rotation of an initial loading matrix that has a perfect simple structure, then the proposed method with Kaiser's normalization will produce the perfect simple structure. Although many rotation methods can also recover a perfect simple structure, they perform poorly when a perfect simple structure is not possible. In this case, the new method tends to perform better because it clusters the loadings without requiring the clusters to be perfect. Artificial and real data analyses demonstrate that the proposed method can give a simple structure that the other methods cannot produce, and provides a more interpretable result than those of widely known rotation techniques.
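The clustering component can be illustrated with a short sketch using a hypothetical rotated loading matrix and scikit-learn's KMeans; the actual method alternates this kind of clustering with gradient-projection rotation of the loadings, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 6 x 2 rotated loading matrix (rows = variables, columns = factors).
loadings = np.array([
    [0.82, 0.05], [0.75, 0.10], [0.68, 0.02],
    [0.04, 0.79], [0.08, 0.71], [0.01, 0.66],
])

# Cluster variables by their loading profiles; a simple structure should place
# each variable in a cluster dominated by a single factor.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(loadings)
print(km.labels_)           # e.g. [0 0 0 1 1 1]
print(km.cluster_centers_)  # cluster centroids in loading space
```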

3.
The polychoric instrumental variable (PIV) approach is a recently proposed method to fit a confirmatory factor analysis model with ordinal data. In this paper, we first examine the small-sample properties of the specification tests for testing the validity of instrumental variables (IVs). Second, we investigate the effects of using different numbers of IVs. Our results show that specification tests derived for continuous data are extremely oversized at all sample sizes when applied to ordinal variables. Possible modifications for ordinal data are proposed in the present study. Simulation results show that the modified specification tests with all available IVs are able to detect model misspecification. In terms of estimation accuracy, the PIV approach in which the IVs outnumber the endogenous variables by one produces lower bias but higher variation than the PIV approach with more IVs for correctly specified factor loadings in small samples.

4.
Influence analysis is an important component of data analysis, and the local influence approach has been widely applied to many statistical models to identify influential observations and assess minor model perturbations since the pioneering work of Cook (1986). The approach is often adopted to develop influence analysis procedures for factor analysis models with ranking data. However, because this well‐known approach is based on the observed-data likelihood, which involves multidimensional integrals, directly applying it to develop influence analysis procedures for factor analysis models with ranking data is difficult. To address this difficulty, a Monte Carlo expectation-maximization (MCEM) algorithm is used to obtain the maximum‐likelihood estimate of the model parameters, and measures for influence analysis based on the conditional expectation of the complete-data log-likelihood at the E‐step of the MCEM algorithm are then obtained. Very little additional computation is needed to compute the influence measures, because it is possible to make use of the by‐products of the estimation procedure. Influence measures based on several typical perturbation schemes are discussed in detail, and the proposed method is illustrated with two real examples and an artificial example.

5.
Statistical aspects of a three-mode factor analysis model
A special case of Bloxom's version of Tucker's three-mode model is developed statistically. A distinction is made between modes in terms of whether they are fixed or random. Parameter matrices are associated with the fixed modes, while no parameters are associated with the mode representing random observation vectors. The identification problem is discussed, and unknown parameters of the model are estimated by a weighted least squares method based upon a Gauss-Newton algorithm. A goodness-of-fit statistic is presented. An example based upon self-report and peer-report measures of personality shows that the model is applicable to real data. The model represents a generalization of Thurstonian factor analysis; weighted least squares estimators and maximum likelihood estimators of the factor model can be obtained using the proposed theory.

6.
Dynamic factor analysis of nonstationary multivariate time series
A dynamic factor model is proposed for the analysis of multivariate nonstationary time series in the time domain. The nonstationarity in the series is represented by a linear time-dependent mean function. This mild form of nonstationarity is often relevant in analyzing socio-economic time series encountered in practice. Through the use of an extended version of Molenaar's stationary dynamic factor analysis method, the effect of nonstationarity on the latent factor series is incorporated in the dynamic nonstationary factor model (DNFM). It is shown that the estimation of the unknown parameters in this model can easily be carried out by reformulating the DNFM as a covariance structure model and adopting the ML algorithm proposed by Jöreskog. Furthermore, an empirical example is given to demonstrate the usefulness of the proposed DNFM and the analysis.

7.
This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.
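A minimal Gaussian-copula sketch of this style of simulation, assuming illustrative marginals (exponential and Student's t) and a hypothetical target correlation; the article's R implementation and its covariance-matching details are not reproduced, so the target correlation is only approximately recovered here.

```python
import numpy as np
from scipy.stats import norm, expon, t

rng = np.random.default_rng(0)
target_corr = np.array([[1.0, 0.5], [0.5, 1.0]])  # illustrative target

# 1. Draw from a multivariate normal with the target correlation.
z = rng.multivariate_normal(mean=[0, 0], cov=target_corr, size=100_000)

# 2. Map to uniforms via the normal CDF (the Gaussian copula).
u = norm.cdf(z)

# 3. Apply nonnormal inverse CDFs as marginals (exponential and t here).
x = np.column_stack([expon.ppf(u[:, 0]), t.ppf(u[:, 1], df=5)])

# The copula preserves the dependence structure; the linear correlation of x is
# close to, but not exactly, the target without a further matching adjustment.
print(np.corrcoef(x, rowvar=False))
```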

8.
Markland D, Oliver EJ. Body Image, 2008, 5(1), 116-121.
The Sociocultural Attitudes Towards Appearance Questionnaire-3 (SATAQ-3) measures awareness and endorsement of societal appearance standards. The instrument has been subjected to exploratory factor analyses, but to date no studies have reported a priori tests of its hypothesized factor structure using confirmatory factor analysis (CFA). The aim of the present study was to subject the SATAQ-3 to a CFA. Results from a non-clinical convenience sample of 369 women revealed an adequate fit of the model according to conventional criteria. However, detailed residual analysis indicated a significant lack of fit, which was explainable by one mis-specified item and shared method variance due to similarities in item content. It was concluded that, with the removal of the mis-specified item, the degree of misfit was tolerable and the intended four-factor solution provides a satisfactory and parsimonious representation of the data.

9.
A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects.

To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), which allows us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA, we present a small-sample simulation study. Moreover, we present an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, using a structural model consistent with well-established economic theory.

10.
Formulas are derived for the asymptotic variances and covariances of the maximum likelihood estimators for oblique simple structure models which are identified by prior specification of zero elements in the factor loading matrix. The formulas are expressed in terms of the various submatrices of the inverse of the required variance-covariance matrix. A numerical example using artificial data is given, and problems in the application of the formulas are discussed.
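For orientation only (the article's model-specific submatrix expressions are not reproduced here), such formulas rest on the standard asymptotic normality result for maximum likelihood estimators, with the asymptotic covariance given by the inverse information matrix:

```latex
\sqrt{n}\,\bigl(\hat{\theta} - \theta_0\bigr) \;\xrightarrow{d}\; N\!\bigl(0,\ \mathcal{I}(\theta_0)^{-1}\bigr),
\qquad
\widehat{\operatorname{acov}}\bigl(\hat{\theta}\bigr) = \frac{1}{n}\,\mathcal{I}\bigl(\hat{\theta}\bigr)^{-1}.
```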

11.
This article proposes an intuitive approach for predictive discriminant analysis with mixed continuous, dichotomous, and ordered categorical variables that are defined via an underlying multivariate normal distribution with a threshold specification. The classification rule is based on the comparison of the observed-data log probability density functions. To reduce the computational burden, the analysis is conducted in the context of a confirmatory factor analysis model with independent error measurements. Identification of the dichotomous and ordered categorical variables is discussed. Results are obtained by implementations of a Monte Carlo expectation-maximization (MCEM) algorithm and a path sampling procedure. Probabilities of misclassification are estimated using the jackknife method. A real example is given to illustrate the proposed method.
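A stripped-down sketch of the classification rule's core idea (assigning an observation to the group with the larger observed-data log density), here with plain multivariate normals and hypothetical group parameters rather than the article's threshold-based factor model.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical group parameters (two groups, two continuous variables).
groups = {
    "A": dict(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 1.0]]),
    "B": dict(mean=[1.5, 1.0], cov=[[1.2, 0.4], [0.4, 0.9]]),
}

def classify(x):
    """Assign x to the group with the largest log probability density."""
    scores = {g: multivariate_normal(p["mean"], p["cov"]).logpdf(x)
              for g, p in groups.items()}
    return max(scores, key=scores.get)

print(classify([0.2, -0.1]))  # likely "A"
print(classify([1.4, 1.2]))   # likely "B"
```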

12.
The growth curve model has been a useful tool for the analysis of repeated measures data. However, it is designed for an aggregate-sample analysis based on the assumption that the entire sample of respondents is drawn from a single homogeneous population. Thus, this method may not be suitable when heterogeneous subgroups exist in the population with qualitatively distinct patterns of trajectories. In this paper, the growth curve model is generalized to a fuzzy clustering framework, which explicitly accounts for such group-level heterogeneity in trajectories of change over time. Moreover, the proposed method estimates parameters based on generalized estimating equations, thereby relaxing the assumption of correct specification of the population covariance structure among repeated responses. The performance of the proposed method in recovering parameters and the number of clusters is investigated in two Monte Carlo analyses involving synthetic data. In addition, the empirical usefulness of the proposed method is illustrated by an application concerning the antisocial behavior of a sample of children.

13.
The Beck Depression Inventory-II (BDI-II) is a frequently used scale for measuring depressive severity. BDI-II data (404 clinical; 695 nonclinical adults) were analyzed by means of confirmatory factor analysis to test whether the factor structure model with a somatic-affective and cognitive component of depression, formulated by Beck and colleagues, has a good fit. We also evaluated 10 alternative models. The fit of Beck's model was not good for all criteria. Three of the alternative models had a better fit in both samples, but none of these met all criteria for good fit. Of the alternatives with a better fit, we selected the only model with unidimensional subscales, which assesses a somatic, affective, and cognitive dimension. For this model, which we recommend, as well as for Beck's original model, good-fitting structures containing 15 and 16 items, respectively, were developed with an item-deletion algorithm.

14.
Though most social psychologists are aware of the use of factor analysis as an exploratory method to uncover latent dimensions, factor analysis can also be used in a confirmatory mode to test specific hypotheses. A confirmatory solution is unique and cannot be rotated. One possible use of confirmatory factor analysis is with the multitrait-multimethod matrix. Jaccard, Weber, and Lundmark's (1975) multitrait-multimethod matrix of two traits and four methods is analyzed using confirmatory factor analysis. Given the very small sample size, the analysis is primarily illustrative. A simple two-factor model with errors of measurement correlated across measures using the same method satisfactorily fits the data. Both discriminant and convergent validity are high, and none of the methods has higher reliability or less correlated measurement error. The reliability of measures estimated by maximum likelihood factor analysis is lower than the test-retest reliability because method variance is subtracted out. Confirmatory factor analysis is recommended over the Campbell-Fiske criteria.

15.
Chronic hyponatremia (CHN) has traditionally been considered asymptomatic. If symptoms are observed, they are often mistakenly attributed to the underlying disorder. However, in recent studies neuropsychological deficits have been associated with CHN. The authors sought to determine the association between CHN and motor deficits. They used previously collected data, and 41 subjects with hyponatremia were included. An exploratory factor analysis with principal component analysis (PCA) was performed (eigenvalues > 1.0). Factor scores were generated for each subject based on the resultant PCA factor structure. Finally, partial correlations were computed to measure the degree of association between baseline serum sodium concentration [Na+] and the individual neuropsychological factor scores, with the effect of age removed. All significance tests were performed using two-tailed comparisons with an alpha level of p ≤ .05. A three-factor model emerged, accounting for 70.17% of the total variance, including one factor that loaded primarily on motor speed and reaction time. A significant correlation was observed between this motor factor and serum [Na+] (r = -.477, p = .002). These findings add to previous observations suggesting that CHN is associated with subtle yet harmful motor deficits.
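The two analysis steps described here (eigenvalue-greater-than-1 component retention and a partial correlation with age removed) can be sketched as follows, using entirely hypothetical data in place of the study's neuropsychological battery.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 41
age = rng.normal(60, 10, n)
sodium = rng.normal(130, 4, n)                          # hypothetical serum [Na+]
tests = rng.normal(0, 1, (n, 5)) + 0.05 * age[:, None]  # hypothetical test battery

# Eigenvalue-greater-than-1 rule on the correlation matrix of the test battery.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(tests, rowvar=False))
order = np.argsort(eigvals)[::-1]
print("retained components:", (eigvals[order] > 1.0).sum())

# Score on the first retained component (standardized variables x eigenvector).
z = (tests - tests.mean(0)) / tests.std(0, ddof=1)
score = z @ eigvecs[:, order[0]]

def partial_corr(x, y, control):
    """Correlation of x and y after regressing 'control' out of both."""
    X = np.column_stack([np.ones_like(control), control])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

print(partial_corr(score, sodium, age))
```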

16.
A job requirements approach to biodata item specification, similar to the content-valid job analysis approach developed by Pannone (1984), is used to predict customer service. Applicants rate the extent to which their current and previous jobs involve tasks and behaviours that have been identified through an analysis of the target job. In a sample of 245 employees in an international hotel, the criterion-related validity of job requirements biodata compares favourably with traditional construct-oriented biodata measures of customer service, cognitive ability and personality (Conscientiousness, Agreeableness and Extroversion). The job requirements approach provides a simple, direct and content-valid method of biodata item specification. As the approach can also be tailored to particular jobs or organizations, validity is potentially optimized.

17.
This article presents the results of two Monte Carlo simulation studies of the recovery of weak factor loadings, in the context of confirmatory factor analysis, for models that do not exactly hold in the population. This issue has not been examined in previous research. Model error was introduced using a procedure that allows for specifying a covariance structure with a specified discrepancy in the population. The effects of sample size, estimation method (maximum likelihood vs. unweighted least squares), and factor correlation were also considered. The first simulation study examined recovery for models correctly specified with the known number of factors, and the second investigated recovery for models incorrectly specified by underfactoring. The results showed that recovery was not affected by model discrepancy for the correctly specified models but was affected for the incorrectly specified models. Recovery improved in both studies when factors were correlated, and unweighted least squares performed better than maximum likelihood in recovering the weak factor loadings.
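A small simulation sketch in the same spirit (not the authors' design): data are generated from a hypothetical two-factor model containing one weak loading, and maximum-likelihood factor analysis (scikit-learn's unrotated FactorAnalysis) is fitted to see how well the common variance of the weakly loading item is recovered.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300

# Hypothetical population loading matrix (6 variables, 2 orthogonal factors);
# variable 6 has a deliberately weak loading of 0.3 on factor 2.
L = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
              [0.0, 0.8], [0.0, 0.7], [0.0, 0.3]])
psi = 1.0 - (L ** 2).sum(axis=1)          # unique variances for unit-variance items

factors = rng.standard_normal((n, 2))
errors = rng.standard_normal((n, 6)) * np.sqrt(psi)
X = factors @ L.T + errors

fa = FactorAnalysis(n_components=2).fit(X)
# The ML solution is identified only up to rotation, so compare the implied
# common variance (communality) per item rather than the raw loadings.
print((fa.components_ ** 2).sum(axis=0))  # estimated communalities
print((L ** 2).sum(axis=1))               # population communalities
```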

18.
A simple multiple imputation-based method is proposed to deal with missing data in exploratory factor analysis. Confidence intervals are obtained for the proportion of explained variance. Simulations and real data analysis are used to investigate and illustrate the use and performance of our proposal.
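A generic multiple-imputation sketch of this idea, assuming scikit-learn's IterativeImputer with posterior sampling as the imputation engine (not the authors' procedure): each imputed dataset yields a proportion of explained variance, and the per-imputation results are then pooled.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(4), 0.5 * np.eye(4) + 0.5, size=200)
X[rng.random(X.shape) < 0.1] = np.nan     # hypothetical 10% missingness

def explained_proportion(data, k=1):
    """Proportion of variance carried by the first k eigenvalues of the
    correlation matrix (a common EFA summary)."""
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    return eig[:k].sum() / eig.sum()

# m imputations drawn with posterior sampling, then pooled by averaging the
# per-imputation proportions (interval construction would also use their spread).
m = 20
props = [explained_proportion(
            IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X))
         for s in range(m)]
print(np.mean(props), np.std(props, ddof=1))
```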

19.
This note is concerned with the interpretation of results of a confirmatory factor analysis of data from the Tridimensional Personality Questionnaire, as reported by Bagby, Parker and Joffe (1992; Personality and Individual Differences, 13, 1245–1246). Contrary to their claim of having found a model providing ‘a remarkably good fit’ between the obtained factor structure and the hypothesized dimensions corresponding to the three-dimensional biosocial model of personality proposed by Cloninger, it is argued that the cited fit evidence does not convincingly point to a tenable model. Issues in structural equation model evaluation, as well as possible reasons for the lack of an acceptable fit of the model, are discussed.

20.
The Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) and the Wechsler Memory Scale-Fourth Edition (WMS-IV) were co-developed to be used individually or as a combined battery of tests. The independent factor structure of each of the tests has been identified; however, the combined factor structure has yet to be determined. Confirmatory factor analysis was applied to the WAIS-IV/WMS-IV Adult battery (ages 16-69 years) co-norming sample (n = 900) to test 13 measurement models. The results indicated that two models fit the data equally well. One model is a seven-factor solution without a hierarchical general ability factor: Verbal Comprehension, Perceptual Reasoning, Processing Speed, Auditory Working Memory, Visual Working Memory, Auditory Memory, and Visual Memory. The second model is a five-factor model composed of Verbal Comprehension, Perceptual Reasoning, Processing Speed, Working Memory, and Memory, with a hierarchical general ability factor. Interpretative implications for each model are discussed.
