Similar Articles
20 similar articles found.
1.
Manolov, R., Arnau, J., Solanas, A., & Bono, R. Psicothema, 2010, 22(4), 1026-1032.
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and a procedure that eliminates autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approach the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
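The core finding, that serial dependence inflates the Type I error of OLS-based tests in short series, is easy to reproduce. Below is a minimal Monte Carlo sketch (not the authors' code); the AR(1) coefficient, series length, and replication count are illustrative choices, not the study's conditions.

```python
# Minimal Monte Carlo sketch: Type I error of an OLS level-change test
# when single-case residuals follow an AR(1) process and there is NO
# true treatment effect. phi, n, and n_reps are arbitrary choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, phi, n_reps, alpha = 20, 0.5, 2000, 0.05
phase = np.r_[np.zeros(10), np.ones(10)]        # baseline vs. treatment phase
X = sm.add_constant(phase)

rejections = 0
for _ in range(n_reps):
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):                        # AR(1) errors, no true effect
        e[t] = phi * e[t - 1] + rng.normal()
    fit = sm.OLS(e, X).fit()
    rejections += fit.pvalues[1] < alpha

print(f"empirical Type I error: {rejections / n_reps:.3f} (nominal {alpha})")
```

With positive autocorrelation the empirical rejection rate typically lands well above the nominal level, matching the pattern the abstract reports.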

2.
Miller suggested ordinary least squares estimation of a constant transition matrix; Madansky proposed a relatively more efficient weighted least squares estimator which corrects for heteroscedasticity. In this paper an efficient generalized least squares estimator is derived which utilizes the entire covariance matrix of the disturbances. This estimator satisfies the condition that each row of the transition matrix must sum to unity. Madansky noted that estimates of the variances could be negative; a method for obtaining consistent non-negative estimates of the variances is suggested in this paper. The technique is applied to the hypothetical sample data used by Miller and Madansky. I am indebted to a referee for his thoughtful suggestions on content and style.
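A sketch of the basic setup follows: estimating a transition matrix from aggregate proportions by least squares with the row-sum constraint imposed. This shows the Miller-style unweighted objective; the paper's GLS estimator would additionally weight by the disturbance covariance matrix. The data are hypothetical.

```python
# Sketch: estimating a constant transition matrix P from aggregate
# proportions y_t, with each row of P constrained to sum to one and
# bounded in [0, 1]. The GLS refinement in the paper would replace the
# unweighted sum of squares with a covariance-weighted quadratic form.
import numpy as np
from scipy.optimize import minimize

Y = np.array([[0.60, 0.30, 0.10],     # hypothetical aggregate shares over time
              [0.50, 0.35, 0.15],
              [0.45, 0.37, 0.18],
              [0.42, 0.38, 0.20]])
Y_prev, Y_next = Y[:-1], Y[1:]
k = Y.shape[1]

def sse(p_flat):
    P = p_flat.reshape(k, k)
    return np.sum((Y_next - Y_prev @ P) ** 2)

cons = [{"type": "eq", "fun": lambda p, i=i: p.reshape(k, k)[i].sum() - 1.0}
        for i in range(k)]
res = minimize(sse, np.full(k * k, 1.0 / k), bounds=[(0, 1)] * (k * k),
               constraints=cons, method="SLSQP")
print(res.x.reshape(k, k).round(3))
```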

3.
Exploratory methods using second-order components and second-order common factors were proposed. The second-order components were obtained from the resolution of the correlation matrix of obliquely rotated first-order principal components. The standard errors of the estimates of the second-order component loadings were derived from an augmented information matrix with restrictions for the loadings and associated parameters. The second-order factor analysis proposed was similar to the classical method in that the factor correlations among the first-order factors were further resolved by the exploratory method of factor analysis. However, in this paper the second-order factor loadings were estimated by generalized least squares using the asymptotic variance-covariance matrix for the first-order factor correlations. The asymptotic standard errors for the estimates of the second-order factor loadings were also derived. A numerical example was presented with simulated results.

4.
A two-stage procedure is developed for analyzing structural equation models with continuous and polytomous variables. At the first stage, the maximum likelihood estimates of the thresholds, polychoric covariances and variances, and polyserial covariances are simultaneously obtained with the help of an appropriate transformation that significantly simplifies the computation. An asymptotic covariance matrix of the estimates is also computed. At the second stage, the parameters in the structural covariance model are obtained via the generalized least squares approach. Basic statistical properties of the estimates are derived and some illustrative examples and a small simulation study are reported. This research was supported in part by a research grant DA01070 from the U. S. Public Health Service. We are indebted to several referees and the editor for very valuable comments and suggestions for improvement of this paper. The computing assistance of King-Hong Leung and Man-Lai Tang is also gratefully acknowledged.
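To make the first-stage quantities concrete, here is a simplified stand-alone sketch: thresholds from inverse-normal transforms of cumulative marginal proportions, then a polychoric correlation by maximizing the bivariate-normal cell likelihood. The paper estimates all of these simultaneously; this two-step version and its cross-tabulation are only illustrative.

```python
# Simplified sketch of first-stage ideas: thresholds from cumulative
# marginal proportions, then a polychoric correlation maximizing the
# bivariate-normal cell likelihood. Data are hypothetical.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

table = np.array([[20, 10, 5],       # hypothetical cross-tabulation of two
                  [10, 25, 10],      # 3-category ordinal items
                  [5, 10, 25]])

def thresholds(margin):
    cum = np.cumsum(margin) / margin.sum()
    return np.r_[-np.inf, norm.ppf(cum[:-1]), np.inf]

a = thresholds(table.sum(axis=1))    # row-variable thresholds
b = thresholds(table.sum(axis=0))    # column-variable thresholds

def cell_prob(rho, lo1, hi1, lo2, hi2):
    # Rectangle probability by inclusion-exclusion; +inf clamped to 8.
    F = lambda x, y: multivariate_normal.cdf(
        [min(x, 8), min(y, 8)], mean=[0, 0], cov=[[1, rho], [rho, 1]])
    if lo1 == -np.inf and lo2 == -np.inf:
        return F(hi1, hi2)
    if lo1 == -np.inf:
        return F(hi1, hi2) - F(hi1, lo2)
    if lo2 == -np.inf:
        return F(hi1, hi2) - F(lo1, hi2)
    return F(hi1, hi2) - F(lo1, hi2) - F(hi1, lo2) + F(lo1, lo2)

def negloglik(rho):
    ll = 0.0
    for i in range(3):
        for j in range(3):
            p = max(cell_prob(rho, a[i], a[i+1], b[j], b[j+1]), 1e-12)
            ll += table[i, j] * np.log(p)
    return -ll

rho_hat = minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x
print(f"polychoric correlation estimate: {rho_hat:.3f}")
```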

5.
Existing test statistics for assessing whether incomplete data represent a missing completely at random sample from a single population are based on a normal likelihood rationale and effectively test for homogeneity of means and covariances across missing data patterns. The likelihood approach cannot be implemented adequately if a pattern of missing data contains very few subjects. A generalized least squares rationale is used to develop parallel tests that are expected to be more stable in small samples. Three factors were varied for a simulation: number of variables, percent missing completely at random, and sample size. One thousand data sets were simulated for each condition. The generalized least squares test of homogeneity of means performed close to an ideal Type I error rate for most of the conditions. The generalized least squares test of homogeneity of covariance matrices and a combined test performed quite well also. Preliminary results on this research were presented at the 1999 Western Psychological Association convention, Irvine, CA, and in the UCLA Statistics Preprint No. 265 (http://www.stat.ucla.edu). The assistance of Ke-Hai Yuan and several anonymous reviewers is gratefully acknowledged.
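A heavily simplified sketch of the underlying idea follows: compare each missing-data pattern's observed-variable means to reference estimates via a quadratic form, accumulating a chi-square statistic. The paper's GLS tests are more refined; here the reference mean and covariance are naively taken from available data rather than from a proper ML/EM fit, and small patterns are skipped.

```python
# Simplified homogeneity-of-means statistic across missing-data
# patterns (Little-test flavor). Reference estimates are naive
# available-data estimates; the pairwise covariance may not be
# positive definite in pathological cases. Illustrative only.
import numpy as np
from scipy.stats import chi2

def mean_homogeneity_stat(X):
    """X: (n, p) array with np.nan marking missing entries."""
    mu = np.nanmean(X, axis=0)                  # naive reference means
    S = np.ma.cov(np.ma.masked_invalid(X), rowvar=False).data
    patterns = {}
    for row in X:
        patterns.setdefault(tuple(np.isnan(row)), []).append(row)
    stat, df = 0.0, 0
    for miss, rows in patterns.items():
        obs = ~np.array(miss)
        if obs.sum() == 0 or len(rows) < 2:     # skip tiny/empty patterns
            continue
        R = np.array(rows)[:, obs]
        diff = R.mean(axis=0) - mu[obs]
        stat += len(rows) * diff @ np.linalg.solve(S[np.ix_(obs, obs)], diff)
        df += obs.sum()
    df -= X.shape[1]
    return stat, df, chi2.sf(stat, df)
```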

6.
The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure is correctly specified, the four procedures are asymptotically equivalent. However, this equivalence is often invoked even when models are not correctly specified. This short paper clarifies conditions under which these procedures are not asymptotically equivalent. Analytical results indicate that, when a model is not correct, two factors contribute to the nonequivalence of the different procedures. One is that the estimated covariance matrices by different procedures are different; the other is that they use different scales to measure the distance between the sample covariance matrix and the estimated covariance matrix. The results are illustrated using real as well as simulated data. Implications of the results for model fit indices are also discussed using the comparative fit index as an example. The work described in this paper was supported by a grant from the Research Grants Council of Hong Kong Special Administrative Region (Project No. CUHK 4170/99M) and by NSF grant DMS04-37167.

7.
Data are ipsative if they are subject to a constant-sum constraint for each individual. In the present study, ordinal ipsative data (OID) are defined as the ordinal rankings across a vector of variables. It is assumed that OID are the manifestations of their underlying nonipsative vector y, which are difficult to observe directly. A two-stage estimation procedure is suggested for the analysis of structural equation models with OID. In the first stage, the partition maximum likelihood (PML) method and the generalized least squares (GLS) method are proposed for estimating the means and the covariance matrix of A_c y, where A_c is a known contrast matrix. Based on the joint asymptotic distribution of the first stage estimator and an appropriate weight matrix, the generalized least squares method is used to estimate the structural parameters in the second stage. A goodness-of-fit statistic is given for testing the hypothesized covariance structure. Simulation results show that the proposed method works properly when a sufficiently large sample is available. This research was supported by National Institute on Drug Abuse Grants DA01070 and DA10017. The authors are indebted to Dr. Lee Cooper, Dr. Eric Holman, and Dr. Thomas Wickens for their valuable suggestions on this study, and Dr. Fanny Cheung for allowing us to use her CPAI data set in this article. The authors would also like to acknowledge the helpful comments from the editor and the two anonymous reviewers.

8.
A direct method in handling incomplete data in general covariance structural models is investigated. Asymptotic statistical properties of the generalized least squares method are developed. It is shown that this approach has very close relationships with the maximum likelihood approach. Iterative procedures for obtaining the generalized least squares estimates, the maximum likelihood estimates, as well as their standard error estimates are derived. Computer programs for the confirmatory factor analysis model are implemented. A longitudinal type data set is used as an example to illustrate the results. This research was supported in part by Research Grant DA01070 from the U.S. Public Health Service. The author is indebted to anonymous reviewers for some very valuable suggestions. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

9.
Jöreskog, K. G. Psychometrika, 1962, 27(4), 335-354.
A method for estimation in factor analysis is presented. The method is based on the assumption that the residual (specific and error) variances are proportional to the reciprocal values of the diagonal elements of the inverted covariance (correlation) matrix. The estimation is performed by a modification of Whittle's least squares technique. The method is independent of the unit of scoring in the tests. Applications are given in the form of nine reanalyses of data of various kinds found in earlier literature. The writer wishes to thank Prof. H. Wold, Dr. E. Lyttkens, and Dr. P. Whittle for valuable comments and suggestions.
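The key assumption translates directly into a computation, sketched below: set the residual variances proportional to the reciprocal diagonal of the inverted correlation matrix, then factor the reduced matrix. The proportionality constant and correlation matrix are illustrative; Jöreskog's paper develops a proper least squares estimator for the constant.

```python
# Sketch of the paper's key assumption: unique (residual) variances
# proportional to the reciprocal diagonal of R^{-1}, after which the
# reduced matrix is factored by eigendecomposition. theta is an
# illustrative choice, not an estimate.
import numpy as np

R = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.3],
              [0.4, 0.3, 1.0]])           # hypothetical correlation matrix

theta = 0.9                               # illustrative proportionality constant
psi = theta / np.diag(np.linalg.inv(R))   # residual variances
reduced = R - np.diag(psi)

vals, vecs = np.linalg.eigh(reduced)
k = 1                                     # number of factors to retain
idx = np.argsort(vals)[::-1][:k]
loadings = vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
print(psi.round(3), loadings.round(3), sep="\n")
```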

10.
Robust schemes in regression are adapted to mean and covariance structure analysis, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is properly weighted according to its distance, based on first and second order moments, from the structural model. A simple weighting function is adopted because of its flexibility with changing dimensions. The weight matrix is obtained from an adaptive way of using residuals. Test statistic and standard error estimators are given, based on iteratively reweighted least squares. The method reduces to a standard distribution-free methodology if all cases are equally weighted. Examples demonstrate the value of the robust procedure. The authors acknowledge the constructive comments of three referees and the Editor that led to an improved version of the paper. This work was supported by National Institute on Drug Abuse Grants DA01070 and DA00017 and by the University of North Texas Faculty Research Grant Program.
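A minimal sketch of the reweighting idea follows, applied to mean and covariance estimation: cases far from the current estimates (in Mahalanobis distance) get downweighted, and the estimates are recomputed until they stabilize. The paper embeds this in full structural equation models; the Huber-type weight function and tuning constant here are illustrative stand-ins.

```python
# Minimal IRLS sketch: robust mean vector and covariance matrix with
# case weights that decay with Mahalanobis distance. c and n_iter are
# illustrative choices.
import numpy as np

def irls_mean_cov(X, c=2.5, n_iter=20):
    mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu,
                              np.linalg.inv(S), X - mu))
        w = np.where(d <= c, 1.0, c / d)          # downweight distant cases
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu
        S = (w[:, None] * Xc).T @ Xc / w.sum()
    return mu, S

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X[:5] += 8                                        # a few gross outliers
mu, S = irls_mean_cov(X)
print(mu.round(2))                                # barely moved by the outliers
```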

11.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm which in standard form produces weighted least squares estimates can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. DA01070 from the U. S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
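For readers unfamiliar with the workhorse here, a generic Gauss-Newton iteration is sketched below with a numerical Jacobian; in the paper this machinery is applied to covariance structure models, and (J'J)^{-1} at the solution underlies the standard error estimates discussed. The residual function in the demo is a hypothetical stand-in, not a covariance structure model.

```python
# Generic Gauss-Newton for least squares: repeatedly solve the
# linearized normal equations (J'J) step = J'r. Numerical Jacobian
# for simplicity; illustrative example fits y = a * exp(b * x).
import numpy as np

def gauss_newton(residual, theta, n_iter=50, tol=1e-10, eps=1e-6):
    for _ in range(n_iter):
        r = residual(theta)
        J = np.column_stack([
            (residual(theta + eps * e) - r) / eps
            for e in np.eye(len(theta))])
        step = np.linalg.solve(J.T @ J, J.T @ r)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta

x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(1.5 * x) + np.random.default_rng(2).normal(0, 0.05, 30)
theta = gauss_newton(lambda th: th[0] * np.exp(th[1] * x) - y,
                     np.array([1.0, 1.0]))
print(theta.round(3))   # close to [2.0, 1.5]
```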

12.
Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
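A sketch of the per-participant side of the idea: fit each participant's series with AR(1) errors separately and inspect how much the estimated autocorrelation and residual variance vary across participants. The paper does this within one multilevel model; statsmodels' GLSAR is a simple stand-in for a single series, and the two simulated participants are hypothetical.

```python
# Per-participant AR(1) fits as a stand-in for allowing the Level-1
# error covariance to vary across participants. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
for participant, (phi, sigma) in enumerate([(0.2, 1.0), (0.6, 2.0)]):
    n = 30
    phase = (np.arange(n) >= 15).astype(float)     # baseline -> treatment
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0, sigma)
    y = 1.0 + 2.0 * phase + e                      # true effect = 2
    fit = sm.GLSAR(y, sm.add_constant(phase), rho=1).iterative_fit(maxiter=10)
    print(f"participant {participant}: effect={fit.params[1]:.2f}, "
          f"rho={fit.model.rho[0]:.2f}")
```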

13.
Spiess, M., Jordan, P., & Wendt, M. Psychometrika, 2019, 84(1), 212-235.

In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
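The bias-corrected and accelerated (BCa) bootstrap mentioned above is available generically in SciPy; below is a minimal sketch of a BCa confidence interval for a mean. The estimator itself and the within-design structure of the paper are not reproduced, and the skewed sample is hypothetical.

```python
# BCa bootstrap confidence interval via SciPy's generic implementation.
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(4)
sample = rng.exponential(scale=2.0, size=33)       # hypothetical, skewed data

res = bootstrap((sample,), np.mean, method="BCa",
                confidence_level=0.95, n_resamples=9999, random_state=rng)
print(res.confidence_interval)
```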


14.
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well.
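The ridge idea itself is simple to sketch: when the estimated asymptotic covariance matrix Gamma of the polychoric correlations is unstable in small samples, regularize it before inverting. The model-fitting step is omitted below; Gamma is a hypothetical estimate and the tuning constant a is an illustrative choice (the paper studies how to pick it).

```python
# Sketch of the ridge GLS ingredients: a ridge-regularized weight
# matrix and the GLS discrepancy it defines. A full implementation
# would minimize the discrepancy over the factor-model parameters.
import numpy as np

def ridge_gls_weight(Gamma, a):
    """Inverse of the ridge-regularized weight matrix Gamma + a*I."""
    return np.linalg.inv(Gamma + a * np.eye(Gamma.shape[0]))

def gls_discrepancy(r, rho_theta, W_inv):
    """Quadratic-form distance between sample and model-implied
    correlations; minimized over theta in a full implementation."""
    d = r - rho_theta
    return d @ W_inv @ d

Gamma = np.array([[0.020, 0.004, 0.002],   # hypothetical asymptotic covariance
                  [0.004, 0.025, 0.005],
                  [0.002, 0.005, 0.030]])
W_inv = ridge_gls_weight(Gamma, a=0.01)
print(gls_discrepancy(np.array([0.50, 0.40, 0.30]),
                      np.array([0.48, 0.42, 0.31]), W_inv))
```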

15.
In this article, we present a Bayesian spatial factor analysis model. We extend previous work on confirmatory factor analysis by including geographically distributed latent variables and accounting for heterogeneity and spatial autocorrelation. The simulation study shows excellent recovery of the model parameters and demonstrates the consequences of ignoring spatial dependence. Specifically, we find inefficiency in the estimates of the factor score means and bias and inefficiency in the estimates of the corresponding covariance matrix. We apply the model to Schwartz value priority data obtained from 5 European countries. We show that the Schwartz motivational types of values, such as Conformity, Tradition, Benevolence, and Hedonism, possess high spatial autocorrelation. We identify several spatial patterns—specifically, Conformity and Hedonism have a country-specific structure, Tradition has a North–South gradient that cuts across national borders, and Benevolence has a South–North cross-national gradient. Finally, we show that conventional factor analysis may lead to a loss of valuable insights compared with the proposed approach.

16.
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice.
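A sketch of the flavor of such an effect size: fit the phase contrast with AR(1) errors and divide the phase coefficient by the residual standard deviation. The data are hypothetical and the exact standardization in the paper may differ.

```python
# GLS (AR(1)) fit of a phase effect, standardized by the residual SD.
import numpy as np
import statsmodels.api as sm

y = np.array([3., 4., 3., 5., 4., 4., 7., 8., 9., 8., 9., 10.])  # hypothetical
phase = np.r_[np.zeros(6), np.ones(6)]           # baseline vs. treatment phase
fit = sm.GLSAR(y, sm.add_constant(phase), rho=1).iterative_fit(maxiter=10)
effect_size = fit.params[1] / np.sqrt(fit.scale)  # change in SD units
print(f"effect size estimate: {effect_size:.2f}")
```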

17.
Cross validation is a useful way of comparing predictive generalizability of theoretically plausible a priori models in structural equation modeling (SEM). A number of overall or local cross validation indices have been proposed for existing factor-based and component-based approaches to SEM, including covariance structure analysis and partial least squares path modeling. However, there is no such cross validation index available for generalized structured component analysis (GSCA), which is another component-based approach. We thus propose a cross validation index for GSCA, called Out-of-bag Prediction Error (OPE), which estimates the expected prediction error of a model over replications of so-called in-bag and out-of-bag samples constructed through the implementation of the bootstrap method. The calculation of this index is well-suited to the estimation procedure of GSCA, which uses the bootstrap method to obtain the standard errors or confidence intervals of parameter estimates. We empirically evaluate the performance of the proposed index through the analyses of both simulated and real data.
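The in-bag/out-of-bag mechanics are easy to sketch: for each bootstrap replication, fit on the resampled (in-bag) cases and score prediction error on the cases never drawn. A plain linear regression stands in for GSCA below, which this sketch does not implement.

```python
# Out-of-bag prediction error sketch: fit on bootstrap in-bag samples,
# score squared error on out-of-bag cases, average over replications.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.5, 100)

def ope(X, y, n_boot=200):
    n, errors = len(y), []
    for _ in range(n_boot):
        in_bag = rng.integers(0, n, n)                 # bootstrap indices
        out_bag = np.setdiff1d(np.arange(n), in_bag)   # never-drawn cases
        if out_bag.size == 0:
            continue
        beta, *_ = np.linalg.lstsq(X[in_bag], y[in_bag], rcond=None)
        errors.append(np.mean((y[out_bag] - X[out_bag] @ beta) ** 2))
    return np.mean(errors)

print(f"OPE estimate: {ope(X, y):.3f}")
```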

18.
This paper is concerned with the analysis of structural equation models with polytomous variables. A computationally efficient three-stage estimator of the thresholds and the covariance structure parameters, based on partition maximum likelihood and generalized least squares estimation, is proposed. An example is presented to illustrate the method. This research was supported in part by a research grant DA01070 from the U.S. Public Health Service. The production assistance of Julie Speckart is gratefully acknowledged.

19.
20.
To assess the effect of a manipulation on a response time distribution, psychologists often use Vincentizing or quantile averaging to construct group or “average” distributions. We provide a theorem characterizing the large sample properties of the averaged quantiles when the individual RT distributions all belong to the same location-scale family. We then apply the theorem to estimating parameters for the quantile-averaged distributions. From the theorem, it is shown that parameters of the group distribution can be estimated by generalized least squares. This method provides accurate estimates of standard errors of parameters and can therefore be used in formal inference. The method is benchmarked in a small simulation study against both a maximum likelihood method and an ordinary least-squares method. Generalized least squares essentially is the only method based on the averaged quantiles that is both unbiased and provides accurate estimates of parameter standard errors. It is also proved that for location-scale families, performing generalized least squares on quantile averages is formally equivalent to averaging parameter estimates from generalized least squares performed on individuals. A limitation on the method is that individual RT distributions must be members of the same location-scale family.
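A minimal sketch of quantile averaging for a location-scale family: average each quantile across subjects, then recover location and scale by regressing the averaged quantiles on the base distribution's quantiles. Plain least squares is shown; the paper's generalized least squares version additionally weights by the asymptotic covariance of the quantiles. Subjects and their parameters are hypothetical.

```python
# Vincentizing sketch: average subject quantiles, then fit
# q_p = mu + sigma * z_p by least squares (normal base distribution).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
probs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
# Hypothetical RTs: same (normal) shape per subject, differing mu/sigma.
subject_q = np.array([
    np.quantile(rng.normal(mu, sig, 200), probs)
    for mu, sig in [(500, 50), (550, 60), (480, 40)]])
q_bar = subject_q.mean(axis=0)              # Vincentized group quantiles

Z = np.column_stack([np.ones_like(probs), norm.ppf(probs)])
(mu_hat, sigma_hat), *_ = np.linalg.lstsq(Z, q_bar, rcond=None)
print(f"group mu: {mu_hat:.1f}, group sigma: {sigma_hat:.1f}")
```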

