Similar Documents
Query returned 20 similar documents (search time: 31 ms)
1.
This study explores the performance of several two‐stage procedures for testing ordinary least‐squares (OLS) coefficients under heteroscedasticity. A test of the usual homoscedasticity assumption is carried out in the first stage of the procedure. Subsequently, a test of the regression coefficients is chosen and performed in the second stage. Three recently developed methods for detecting heteroscedasticity are examined. In addition, three heteroscedasticity-robust tests of OLS coefficients are considered. A major finding is that performing a test of heteroscedasticity prior to applying a heteroscedasticity-robust test can lead to poor control over Type I errors.
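The pitfall can be illustrated with a minimal two-stage sketch (not the study's exact procedure): a Breusch-Pagan pretest stands in for the detection methods examined, and HC3 for the robust coefficient tests.

```python
# A minimal two-stage sketch, assuming statsmodels is available.
# Breusch-Pagan and HC3 are illustrative stand-ins for the methods examined.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(scale=np.exp(0.5 * x))  # error variance depends on x

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Stage 1: pretest the homoscedasticity assumption.
_, bp_pvalue, _, _ = het_breuschpagan(ols.resid, X)

# Stage 2: choose the coefficient test conditional on the pretest outcome.
# The study's warning: this conditioning itself can distort Type I error rates.
chosen = ols if bp_pvalue > 0.05 else sm.OLS(y, X).fit(cov_type="HC3")
print(f"BP p = {bp_pvalue:.3f}; slope test p = {chosen.pvalues[1]:.3f}")
```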

2.
One of the most problematic issues in contemporary meta-analysis is the estimation and interpretation of moderating effects. Monte Carlo analyses are developed in this article that compare bivariate correlations, ordinary least squares and weighted least squares (WLS) multiple regression, and hierarchical subgroup (HS) analysis for assessing the influence of continuous moderators under conditions of multicollinearity and skewed distribution of study sample sizes (heteroscedasticity). The results show that only WLS is largely unaffected by multicollinearity and heteroscedasticity, whereas the other techniques are substantially weakened. Of note, HS, one of the most popular methods, typically provides the most inaccurate results, whereas WLS, one of the least popular methods, typically provides the most accurate results.
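A hedged sketch of the weighted approach that performed best: a meta-regression of study effect sizes on a continuous moderator, weighted by study sample size (inverse-variance weights are a common alternative). This is an illustration, not the article's simulation code.

```python
# Hedged WLS meta-regression sketch; variable names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
k = 40                                  # number of studies
n = rng.integers(20, 500, size=k)       # skewed study sample sizes
mod = rng.normal(size=k)                # continuous moderator
r = 0.3 + 0.1 * mod + rng.normal(scale=1 / np.sqrt(n))  # study effect sizes

X = sm.add_constant(mod)
wls = sm.WLS(r, X, weights=n).fit()     # larger studies receive more weight
ols = sm.OLS(r, X).fit()                # unweighted comparison
print(wls.params[1], ols.params[1])     # moderator slope under each method
```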

3.
The common maximum likelihood (ML) estimator for structural equation models (SEMs) has optimal asymptotic properties under ideal conditions (e.g., correct structure, no excess kurtosis, etc.) that are rarely met in practice. This paper proposes model-implied instrumental variable – generalized method of moments (MIIV-GMM) estimators for latent variable SEMs that are more robust than ML to violations of both the model structure and distributional assumptions. Under less demanding assumptions, the MIIV-GMM estimators are consistent, asymptotically unbiased, asymptotically normal, and have an asymptotic covariance matrix. They are “distribution-free,” robust to heteroscedasticity, and have overidentification goodness-of-fit J-tests with asymptotic chi-square distributions. In addition, MIIV-GMM estimators are “scalable” in that they can estimate and test the full model or any subset of equations, and hence allow better pinpointing of those parts of the model that fit and do not fit the data. An empirical example illustrates MIIV-GMM estimators. Two simulation studies explore their finite sample properties and find that they perform well across a range of sample sizes.
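The instrumental-variable core of the approach can be sketched for a single equation; the 2SLS step below (equivalent to GMM in this exactly identified toy case) uses an instrument that, in a real application, would be implied by the rest of the model. Variable names are illustrative, and the paper's full MIIV-GMM machinery is not reproduced.

```python
# A single-equation IV sketch of the idea behind MIIV-GMM (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=n)                  # stand-in for a model-implied instrument
u = rng.normal(size=n)                  # disturbance correlated with y1
y1 = 0.8 * z + u + rng.normal(size=n)   # endogenous regressor
y2 = 0.5 * y1 + u                       # structural equation of interest

Z = np.column_stack([np.ones(n), z])    # instruments
X = np.column_stack([np.ones(n), y1])   # regressors

# 2SLS: project the regressors onto the instrument space, then regress.
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
beta = np.linalg.solve(Xhat.T @ X, Xhat.T @ y2)
print(beta)                             # intercept and slope (true slope 0.5)

# Naive OLS is biased because y1 is correlated with the disturbance u.
print(np.linalg.lstsq(X, y2, rcond=None)[0])
```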

4.
Uncorrectable skew and heteroscedasticity are among the "lemons" of psychological data, yet many important variables naturally exhibit these properties. For scales with a lower and upper bound, a suitable candidate for models is the beta distribution, which is very flexible and models skew quite well. The authors present maximum-likelihood regression models assuming that the dependent variable is conditionally beta distributed rather than Gaussian. The approach models both means (location) and variances (dispersion) with their own distinct sets of predictors (continuous and/or categorical), thereby modeling heteroscedasticity. The location sub-model link function is the logit, making it analogous to logistic regression, whereas the dispersion sub-model is log linear. Real examples show that these models handle the independent observations case readily. The article discusses comparisons between beta regression and alternative techniques, model selection and interpretation, practical estimation, and software.
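A simplified maximum-likelihood sketch of the approach follows, with a logit link for the location sub-model and a log link for the second sub-model (parameterized here through the beta precision, the reciprocal of dispersion); it is an illustration, not the authors' software.

```python
# Hedged beta-regression MLE sketch: logit link for the mean, log link for precision.
import numpy as np
from scipy import optimize, special, stats

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
mu = special.expit(-0.5 + 0.8 * x)      # location sub-model (logit link)
phi = np.exp(2.0 + 0.5 * x)             # precision sub-model (log link)
y = rng.beta(mu * phi, (1 - mu) * phi)  # conditionally beta-distributed outcome

def negloglik(theta):
    b0, b1, g0, g1 = theta
    m = special.expit(b0 + b1 * x)
    p = np.exp(g0 + g1 * x)
    ll = stats.beta.logpdf(y, m * p, (1 - m) * p)
    return -np.sum(ll) if np.all(np.isfinite(ll)) else np.inf

fit = optimize.minimize(negloglik, x0=[0.0, 0.0, 1.0, 0.0], method="Nelder-Mead")
print(fit.x)                            # estimates of (b0, b1, g0, g1)
```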

5.
This article considers the problem of comparing two independent groups in terms of some measure of location. It is well known that with Student's two-independent-sample t test, the actual level of significance can be well above or below the nominal level, confidence intervals can have inaccurate probability coverage, and power can be low relative to other methods. Welch's (1938) test deals with heteroscedasticity but can have poor power under arbitrarily small departures from normality. Yuen (1974) generalized Welch's test to trimmed means; her method provides improved control over the probability of a Type I error, but problems remain. Transformations for skewness improve matters, but the probability of a Type I error remains unsatisfactory in some situations. We find that a transformation for skewness combined with a bootstrap method improves Type I error control and probability coverage even if sample sizes are small.
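Yuen's test is available in SciPy through the trim argument of ttest_ind; the sketch below pairs it with a simple bootstrap-t step as a stand-in for the article's combined transformation-plus-bootstrap method.

```python
# Yuen's trimmed-means test plus a hedged bootstrap-t refinement (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
g1 = rng.lognormal(size=30)             # skewed group
g2 = rng.lognormal(size=40) * 1.5       # skewed, heteroscedastic group

# Yuen (1974): a Welch-type test on 20% trimmed means.
res = stats.ttest_ind(g1, g2, equal_var=False, trim=0.2)

# Bootstrap-t: resample from data centered at their trimmed means (so the
# null holds) and compare the observed statistic with that distribution.
c1 = g1 - stats.trim_mean(g1, 0.2)
c2 = g2 - stats.trim_mean(g2, 0.2)
boot = np.array([
    stats.ttest_ind(rng.choice(c1, c1.size, replace=True),
                    rng.choice(c2, c2.size, replace=True),
                    equal_var=False, trim=0.2).statistic
    for _ in range(999)
])
print(res.pvalue, np.mean(np.abs(boot) >= np.abs(res.statistic)))
```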

6.
In this study, we explore the effects of non-normality and heteroscedasticity when testing the hypothesis that the regression lines associated with multiple independent groups have the same slopes. The conventional approach involving the F-test and the t-test (F/t approach) is examined. In addition, we introduce two robust methods which allow simultaneous testing of regression slopes. Our results suggest that the F/t approach is extremely sensitive to violations of assumptions and tends to yield misleading conclusions. The new robust alternatives are recommended for general use.
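The conventional F/t approach can be sketched as an interaction test; the HC3 variant below is one possible robust covariance, shown only to illustrate the contrast (the study's two new methods are not reproduced).

```python
# Hedged sketch of testing equality of slopes via interactions (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 90
df = pd.DataFrame({"g": np.repeat(["a", "b", "c"], n // 3),
                   "x": rng.normal(size=n)})
df["y"] = 0.5 * df["x"] + rng.normal(scale=np.where(df["g"] == "c", 3.0, 1.0))

model = smf.ols("y ~ g * x", data=df)
hypo = "g[T.b]:x = 0, g[T.c]:x = 0"            # equal slopes across groups
print(model.fit().f_test(hypo))                # conventional F test
print(model.fit(cov_type="HC3").f_test(hypo))  # heteroscedasticity-robust variant
```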

7.
Distribution-free tests of stochastic dominance for small samples
One variable is said to “stochastically dominate” another if the probability of observations smaller than x is greater for one variable than the other, for all x. Inferring stochastic dominance from data samples is important for many applications of econometrics and experimental psychology, but little is known about the performance of existing inferential methods. Through simulation, we show that three of the most widely used inferential methods are inadequate for use in small samples of the size commonly encountered in many applications (up to 400 observations from each distribution). We develop two new inferential methods that perform very well in a limited, but practically important, case where the two variables are guaranteed not to be equal in distribution. We also show that extensions of these new methods, and an improved version of an existing method, perform quite well in the original, unlimited case.
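One of the widely used tools the paper evaluates can be sketched with a one-sided two-sample Kolmogorov-Smirnov test; the paper's new small-sample methods are not implemented here.

```python
# Hedged sketch: a one-sided KS test as a conventional dominance check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(loc=0.0, size=200)
y = rng.normal(loc=0.3, size=200)   # y tends to take larger values than x

# Dominance of y over x means F_y(t) <= F_x(t) for all t. With
# alternative="less", the alternative hypothesis is that the CDF of the
# first sample lies below that of the second.
res = stats.ks_2samp(y, x, alternative="less")
print(res.statistic, res.pvalue)
```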

8.
During the last half century, hundreds of papers published in statistical journals have documented general conditions where reliance on least squares regression and Pearson's correlation can result in missing even strong associations between variables. Moreover, highly misleading conclusions can be made, even when the sample size is large. There are, in fact, several fundamental concerns related to non‐normality, outliers, heteroscedasticity, and curvature that can result in missing a strong association. Simultaneously, a vast array of new methods has been derived for effectively dealing with these concerns. The paper (i) reviews why least squares regression and classic inferential methods can fail, (ii) provides an overview of the many modern strategies for dealing with known problems, including some recent advances, and (iii) illustrates that modern robust methods can make a practical difference in our understanding of data. Included are some general recommendations regarding how modern methods might be used.
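As a small taste of the modern alternatives the review surveys, the sketch below contrasts ordinary least squares with two robust estimators on data containing a single gross outlier; the choice of Huber M-regression and the Theil-Sen slope is illustrative.

```python
# Hedged comparison of OLS with two robust regression estimators.
import numpy as np
from scipy import stats
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(scale=0.5, size=100)
y[0] += 20.0                            # one gross outlier distorts OLS

X = x.reshape(-1, 1)
print("OLS slope:      ", LinearRegression().fit(X, y).coef_[0])
print("Huber slope:    ", HuberRegressor().fit(X, y).coef_[0])
print("Theil-Sen slope:", stats.theilslopes(y, x)[0])
```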

9.
Heteroscedasticity is a well-known issue in linear regression modeling. When heteroscedasticity is observed, researchers are advised to remedy possible model misspecification of the explanatory part of the model (e.g., considering alternative functional forms and/or omitted variables). The present contribution discusses another source of heteroscedasticity in observational data: directional model misspecification in the case of nonnormal variables. Directional misspecification refers to situations where alternative models are equally likely to explain the data-generating process (e.g., x → y versus y → x). It is shown that the homoscedasticity assumption is likely to be violated in models that erroneously treat true nonnormal predictors as response variables. Recently, Direction Dependence Analysis (DDA) has been proposed as a framework to empirically evaluate the direction of effects in linear models. The present study links the phenomenon of heteroscedasticity with DDA and describes visual diagnostics and nine homoscedasticity tests that can be used to make decisions concerning the direction of effects in linear models. Results of a Monte Carlo simulation that demonstrate the adequacy of the approach are presented. An empirical example is provided, and applicability of the methodology in cases of violated assumptions is discussed.
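The diagnostic logic can be sketched by fitting the model in both directions and testing each set of residuals for heteroscedasticity; the Breusch-Pagan test below is one stand-in for the nine homoscedasticity tests the paper describes.

```python
# Hedged direction-of-effect sketch: heteroscedasticity tends to appear in
# the mis-specified direction when the true predictor is nonnormal.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(8)
x = rng.exponential(size=500)           # nonnormal true predictor
y = 0.6 * x + rng.normal(size=500)      # true model: x -> y

for label, (resp, pred) in {"x -> y": (y, x), "y -> x": (x, y)}.items():
    X = sm.add_constant(pred)
    resid = sm.OLS(resp, X).fit().resid
    _, p, _, _ = het_breuschpagan(resid, X)
    print(f"{label}: Breusch-Pagan p = {p:.4f}")
```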

10.
For one‐way fixed effects ANOVA, it is well known that the conventional F test of the equality of means is not robust to unequal variances, and numerous methods have been proposed for dealing with heteroscedasticity. On the basis of extensive empirical evidence of Type I error control and power performance, Welch's procedure is frequently recommended as the major alternative to the ANOVA F test under variance heterogeneity. To enhance its practical usefulness, this paper considers an important aspect of Welch's method in determining the sample size necessary to achieve a given power. Simulation studies are conducted to compare two approximate power functions of Welch's test for their accuracy in sample size calculations over a wide variety of model configurations with heteroscedastic structures. The numerical investigations show that Levy's (1978a) approach is clearly more accurate than the formula of Luh and Guo (2011) for the range of model specifications considered here. Accordingly, computer programs are provided to implement the technique recommended by Levy for power calculation and sample size determination within the context of the one‐way heteroscedastic ANOVA model.
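Welch's statistic itself is easy to state; the sketch below implements the test (the power and sample-size routines of Levy and of Luh and Guo are not reproduced here).

```python
# A minimal implementation of Welch's heteroscedastic one-way ANOVA.
import numpy as np
from scipy import stats

def welch_anova(*groups):
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                       # precision weights
    grand = np.sum(w * m) / np.sum(w)
    a = np.sum(w * (m - grand) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    stat = a / (1 + 2 * (k - 2) * lam / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * lam)
    return stat, stats.f.sf(stat, k - 1, df2)

rng = np.random.default_rng(9)
print(welch_anova(rng.normal(0.0, 1, 20), rng.normal(0.5, 3, 35),
                  rng.normal(0.2, 2, 25)))
```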

11.
Data in psychology are often collected using Likert‐type scales, and it has been shown that factor analysis of Likert‐type data is better performed on the polychoric correlation matrix than on the product‐moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real‐data example indicates that estimates by ridge GLS are 9–20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich‐type standard errors following the ridge GLS methods also perform reasonably well.
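The ridge idea itself is compact: add a constant to the (often ill-conditioned) weight matrix before GLS fitting. The toy one-factor sketch below uses a simulated stand-in for the asymptotic covariance of polychoric correlations, so it only illustrates the mechanics, not the paper's estimator.

```python
# Conceptual ridge GLS sketch for a one-factor model (toy illustration only).
import numpy as np
from scipy import optimize

rng = np.random.default_rng(10)
lam_true = np.array([0.8, 0.7, 0.6, 0.5])
p = lam_true.size
idx = np.triu_indices(p, k=1)
r = (np.outer(lam_true, lam_true) + rng.normal(scale=0.03, size=(p, p)))[idx]

# Stand-in for the estimated asymptotic covariance of the correlations.
gamma = np.cov(rng.normal(size=(r.size, 200)))
a = 0.1                                            # ridge tuning constant
w_inv = np.linalg.inv(gamma + a * np.eye(r.size))  # ridge-adjusted weight matrix

def discrepancy(lam):
    resid = r - np.outer(lam, lam)[idx]
    return resid @ w_inv @ resid

fit = optimize.minimize(discrepancy, x0=np.full(p, 0.5), method="BFGS")
print(fit.x)                                       # factor loading estimates
```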

12.
Bonett DG. Psychological Methods, 2008, 13(3): 173-181
The currently available meta-analytic methods for correlations have restrictive assumptions. The fixed-effects methods assume equal population correlations and exhibit poor performance under correlation heterogeneity. The random-effects methods do not assume correlation homogeneity, but they rest on the equally unrealistic assumption that the selected studies are a random sample from a well-defined superpopulation of study populations; consequently, although they can accommodate correlation heterogeneity, they do not perform properly in typical applications where the studies are nonrandomly selected. A new fixed-effects meta-analytic confidence interval for bivariate correlations is proposed that is easy to compute and performs well under correlation heterogeneity and nonrandomly selected studies.
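A generic fixed-effects interval for an average correlation via Fisher's z conveys the goal; this simple construction is an assumption-laden stand-in and not necessarily Bonett's exact formula.

```python
# Hedged sketch: Fisher-z interval for an average of study correlations.
import numpy as np
from scipy import stats

r = np.array([0.32, 0.41, 0.18, 0.55])   # study correlations (illustrative)
n = np.array([50, 120, 80, 40])          # study sample sizes

z = np.arctanh(r)                        # Fisher z transform per study
z_bar = z.mean()
se = np.sqrt(np.sum(1.0 / (n - 3))) / len(r)   # SE of the average z
crit = stats.norm.ppf(0.975)
print(np.tanh([z_bar - crit * se, z_bar, z_bar + crit * se]))
```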

13.
This study provides a review of two methods for analyzing multilevel data with group-level outcome variables and compares them in a simulation study. The analytical methods included an unadjusted ordinary least squares (OLS) analysis of group means and a two-step adjustment of the group means suggested by Croon and van Veldhoven (2007). The Type I error control, power, bias, standard errors, and RMSE in parameter estimates were compared across design conditions that included manipulations of number of predictor variables, level of correlation between predictors, level of intraclass correlation, predictor reliability, effect size, and sample size. The results suggested that an OLS analysis of the group means, with White’s heteroscedasticity adjustment, provided more power for tests of group-level predictors, but less power for tests of individual-level predictors. Furthermore, this simple analysis avoided the extreme bias in parameter estimates and inadmissible solutions that were encountered with other strategies. These results were interpreted in terms of recommended analytical methods for applied researchers.
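The simple strategy the results favor is easy to sketch: regress the group means on group-level predictors using OLS with White's heteroscedasticity-consistent standard errors. Variable names below are illustrative.

```python
# Hedged sketch: OLS on group means with White's (HC0) standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
groups = 50
df = pd.DataFrame({"w": rng.normal(size=groups),            # group-level predictor
                   "xbar": rng.normal(size=groups),         # group mean of a level-1 x
                   "n": rng.integers(5, 40, size=groups)})  # unequal group sizes
df["ybar"] = (0.4 * df["w"] + 0.3 * df["xbar"]
              + rng.normal(size=groups) / np.sqrt(df["n"]))  # heteroscedastic means

fit = smf.ols("ybar ~ w + xbar", data=df).fit(cov_type="HC0")
print(fit.params, fit.bse)
```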

14.
Small-sample inference with clustered data has received increased attention recently in the methodological literature, with several simulation studies being presented on the small-sample behavior of many methods. However, nearly all previous studies focus on a single class of methods (e.g., only multilevel models, only corrections to sandwich estimators), and the differential performance of various methods that can be implemented to accommodate clustered data with very few clusters is largely unknown, potentially due to rigid disciplinary preferences. Furthermore, a majority of these studies focus on scenarios with 15 or more clusters and feature unrealistically simple data-generation models with very few predictors. This article, motivated by an applied educational psychology cluster randomized trial, presents a simulation study that simultaneously addresses the extreme small sample and differential performance (estimation bias, Type I error rates, and relative power) of 12 methods to account for clustered data with a model that features a more realistic number of predictors. The motivating data are then modeled with each method, and results are compared. Results show that generalized estimating equations perform poorly; the choice of Bayesian prior distributions affects performance; and fixed effect models perform quite well. Limitations and implications for applications are also discussed.
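One of the well-performing strategies, a fixed effects model with cluster dummies, can be sketched as follows; the data generation is deliberately simpler than the study's design, and names are illustrative.

```python
# Hedged sketch: cluster fixed effects with very few clusters.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
clusters, per = 6, 25                      # an extreme small-sample setting
df = pd.DataFrame({"cl": np.repeat(np.arange(clusters), per),
                   "x": rng.normal(size=clusters * per)})
df["y"] = (0.3 * df["x"]
           + np.repeat(rng.normal(size=clusters), per)   # cluster effects
           + rng.normal(size=len(df)))

fit = smf.ols("y ~ x + C(cl)", data=df).fit()  # dummies absorb cluster effects
print(fit.params["x"], fit.bse["x"])
```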

15.
Missing data are a common issue in statistical analyses. Multiple imputation is a technique that has been applied in countless research studies and has a strong theoretical basis. Most of the statistical literature on multiple imputation has focused on unbounded continuous variables, with mostly ad hoc remedies for variables with bounded support. These approaches can be unsatisfactory when applied to bounded variables as they can produce misleading inferences. In this paper, we propose a flexible quantile-based imputation model suitable for distributions defined over singly or doubly bounded intervals. Proper support of the imputed values is ensured by applying a family of transformations with singly or doubly bounded range. Simulation studies demonstrate that our method is able to deal with skewness, bimodality, and heteroscedasticity and has superior properties as compared to competing approaches, such as log-normal imputation and predictive mean matching. We demonstrate the application of the proposed imputation procedure by analysing data on mathematical development scores in children from the Millennium Cohort Study, UK. We also show a specific advantage of our methods using a small psychiatric dataset. Our methods are relevant in a number of fields, including education and psychology.
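The bounded-support idea can be sketched with a single transformation-based imputation: model on an unbounded scale, draw imputations there, and back-transform so every imputed value respects the bounds. The logit used below is one member of the family of transformations the paper considers, and the regression imputer is a simplification of the quantile-based model.

```python
# Hedged sketch: transform-impute-backtransform for a doubly bounded variable.
import numpy as np
from scipy import special
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 400
x = rng.normal(size=n)
score = special.expit(0.8 * x + rng.normal(size=n))   # outcome in (0, 1)
miss = rng.random(n) < 0.25                           # 25% missing at random

z = special.logit(np.clip(score, 1e-6, 1 - 1e-6))     # map to unbounded scale
fit = sm.OLS(z[~miss], sm.add_constant(x[~miss])).fit()

pred = fit.predict(sm.add_constant(x[miss]))
draws = pred + rng.normal(scale=np.sqrt(fit.scale), size=miss.sum())
imputed = special.expit(draws)                        # back inside (0, 1)
print(imputed.min(), imputed.max())
```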

16.
Shieh (2013) discussed in detail δ*, a proposed standardized effect size measure for the two-independent-groups design with heteroscedasticity. Shieh focused on inference, notably the considerable challenge of calculating confidence intervals for δ*. I contend, however, that the standardizer chosen for δ*, meaning the units in which it is expressed, is appropriate for inference but causes δ* to be inconsistent with conventional Cohen’s d. In addition, δ* depends on the relative sample sizes in the particular experiment and thus lacks the generality that is highly desirable if a standardized effect size is to be readily interpretable and also usable in meta-analysis. In the case of heteroscedasticity, I suggest that researchers should choose as standardizer for Cohen’s δ the best available estimate of the SD of an appropriate population, usually the control population, in preference to δ* as discussed by Shieh.
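The recommendation amounts to standardizing by the control group's SD (Glass's delta) rather than by a sample-size-dependent quantity; a minimal sketch contrasting it with a pooled-SD Cohen's d follows.

```python
# Hedged sketch: control-SD standardizer versus pooled-SD Cohen's d.
import numpy as np

rng = np.random.default_rng(14)
control = rng.normal(0.0, 1.0, size=40)
treated = rng.normal(0.5, 2.0, size=60)      # unequal variances

diff = treated.mean() - control.mean()
d_control = diff / control.std(ddof=1)       # standardize by control SD
n1, n2 = control.size, treated.size
s_pooled = np.sqrt(((n1 - 1) * control.var(ddof=1) +
                    (n2 - 1) * treated.var(ddof=1)) / (n1 + n2 - 2))
print(d_control, diff / s_pooled)            # the two standardizers disagree
```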

17.
Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. In this article, we consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. We compare single-level and hierarchical methods for estimation of the parameters of ex-Gaussian distributions. In addition, for each approach, we compare maximum likelihood estimation with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian, by reducing variability in the recovered parameters. At each level, little overall difference was observed between the maximum likelihood and Bayesian methods.
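A single-level maximum-likelihood fit is straightforward with SciPy's exponnorm distribution (parameterized by K = tau/sigma); the article's hierarchical and Bayesian estimators are not reproduced here.

```python
# Hedged single-level ex-Gaussian fit via SciPy's exponnorm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)
mu, sigma, tau = 0.4, 0.05, 0.2                        # seconds
rt = rng.normal(mu, sigma, size=500) + rng.exponential(tau, size=500)

K, loc, scale = stats.exponnorm.fit(rt)
print({"mu": loc, "sigma": scale, "tau": K * scale})   # recovered parameters
```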

18.
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
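The standard correction whose bias the paper quantifies is the classical direct range restriction (Thorndike Case II) formula, sketched below; the paper's new econometric correction is not reproduced.

```python
# The classical direct range restriction correction (illustrative check).
import numpy as np

def correct_restricted_r(r, sd_unrestricted, sd_restricted):
    u = sd_unrestricted / sd_restricted
    return r * u / np.sqrt(1 + r**2 * (u**2 - 1))

rng = np.random.default_rng(16)
n = 100_000
test = rng.normal(size=n)
crit = 0.5 * test + rng.normal(scale=np.sqrt(0.75), size=n)   # true r = 0.5

sel = test > 0.5                                 # direct selection on the test
r_restricted = np.corrcoef(test[sel], crit[sel])[0, 1]
print(r_restricted,
      correct_restricted_r(r_restricted, test.std(), test[sel].std()))
```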

19.
This article compares several methods for performing robust principal component analysis, two of which have not been considered in previous articles. The criterion here, unlike that of extant articles aimed at comparing methods, is how well a method maximizes a robust version of the generalized variance of the projected data. This is in contrast to maximizing some measure of scatter associated with the marginal distributions of the projected scores, which does not take into account the overall structure of the projected data. Included are comparisons in which distributions are not elliptically symmetric. One of the new methods simply removes outliers using a projection-type multivariate outlier detection method that has been found to perform well relative to other outlier detection methods that have been proposed. The other new method belongs to the class of projection pursuit techniques and differs from other projection pursuit methods in terms of the function it tries to maximize. The comparisons include the method derived by Maronna (2005), the spherical method derived by Locantore et al. (1999), as well as a method proposed by Hubert, Rousseeuw, and Vanden Branden (2005). From the perspective used, the method by Hubert et al. (2005), the spherical method, and one of the new methods dominate the method derived by Maronna.
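The simpler of the two new methods, removing outliers and then running ordinary PCA, can be sketched as below; a minimum covariance determinant detector stands in for the projection-type detector the article actually uses.

```python
# Hedged sketch: outlier removal followed by ordinary PCA.
import numpy as np
from scipy import stats
from sklearn.covariance import MinCovDet
from sklearn.decomposition import PCA

rng = np.random.default_rng(17)
X = rng.multivariate_normal([0, 0, 0], np.eye(3) + 0.5, size=300)
X[:15] += 8.0                                    # a cluster of outliers

d2 = MinCovDet(random_state=0).fit(X).mahalanobis(X)   # robust squared distances
keep = d2 < stats.chi2.ppf(0.975, df=X.shape[1])       # drop flagged points

pca = PCA(n_components=2).fit(X[keep])
print(pca.explained_variance_ratio_)
```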

20.
The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects. Simulations show that the estimates are unbiased under most conditions. Confidence intervals based on a normal approximation or a simulated sampling distribution perform well when the random effects are normally distributed but less so when they are nonnormally distributed. These methods are further developed to address hypotheses of moderated mediation in the multilevel context. An example demonstrates the feasibility and usefulness of the proposed methods.
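The central formula is easy to state: with random paths a_j and b_j, the average indirect effect is E(a_j b_j) = mu_a mu_b + cov(a_j, b_j), not the product of the means alone. The sketch below verifies this by simulation; the slope distribution stands in for estimates from a fitted multilevel model.

```python
# Verifying the random-effects indirect-effect formula by simulation.
import numpy as np

rng = np.random.default_rng(18)
mu_a, mu_b, cov_ab = 0.5, 0.4, 0.1
cov = np.array([[0.2, cov_ab],
                [cov_ab, 0.3]])
ab = rng.multivariate_normal([mu_a, mu_b], cov, size=100_000)

naive = mu_a * mu_b                        # ignores the slope covariance
corrected = mu_a * mu_b + cov_ab           # average indirect effect
print(naive, corrected, np.mean(ab[:, 0] * ab[:, 1]))  # simulation matches
```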
