Similar Literature
20 similar documents retrieved.
1.
Moderated multiple regression (MMR) arguably is the most popular statistical technique for investigating regression slope differences (interactions) across groups (e.g., aptitude-treatment interactions in training and differential test score-job performance prediction in selection testing). However, heterogeneous error variances can greatly bias the typical MMR analysis, and the conditions that cause heterogeneity are not uncommon. Statistical corrections that have been developed require special calculations and are not conducive to follow-up analyses that describe an interaction effect in depth. A weighted least squares (WLS) approach is recommended for 2-group studies: it is statistically accurate, is readily executed through popular software packages (e.g., SAS Institute, 1999; SPSS, 1999), and allows follow-up tests.
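A minimal sketch of the 2-group WLS approach just described, assuming the group residual variances are first estimated from separate OLS fits and then used as inverse-variance weights; the simulated data, variable names, and statsmodels usage are illustrative, not the authors' code.

```python
# Sketch: 2-group WLS test of an interaction under heterogeneous error
# variances (illustrative simulation; not the paper's implementation).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                  # dichotomous moderator
x = rng.normal(size=n)                         # continuous predictor
# Heterogeneous errors: group 1 is twice as noisy as group 0.
y = 1.0 + 0.5 * x + 0.3 * x * group + rng.normal(scale=np.where(group == 1, 2.0, 1.0))

# Step 1: estimate each group's residual variance from a separate OLS fit.
sigma2 = np.empty(n)
for g in (0, 1):
    m = group == g
    sigma2[m] = sm.OLS(y[m], sm.add_constant(x[m])).fit().mse_resid

# Step 2: WLS with weights inversely proportional to the group variances;
# the t-test on the x*group term is the interaction (slope-difference) test.
X = sm.add_constant(np.column_stack([x, group, x * group]))
print(sm.WLS(y, X, weights=1.0 / sigma2).fit().summary())
```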

2.
Weighted least squares fitting using ordinary least squares algorithms   Total citations: 2 (self-citations: 0, citations by others: 2)
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the author.
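A minimal sketch of the majorization idea for the special case of a low-rank model, whose OLS fit is a truncated SVD, with binary weights; as the abstract notes, the algorithm then reduces to iterative missing-data imputation. The function name and data are illustrative.

```python
# Sketch: WLS fitting via iterated OLS steps (binary weights = imputation).
import numpy as np

def wls_lowrank(X, W, rank, n_iter=200):
    """Minimize sum of W * (X - M)**2 over rank-`rank` matrices M."""
    M = np.full_like(X, X[W == 1].mean())       # start from the grand mean
    for _ in range(n_iter):
        Z = W * X + (1 - W) * M                 # impute unweighted cells
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # OLS step: truncated SVD
    return M

rng = np.random.default_rng(1)
full = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 8))   # true rank 3
W = (rng.uniform(size=full.shape) > 0.2).astype(float)      # ~20% weight 0
M = wls_lowrank(full, W, rank=3)
print(np.abs((M - full)[W == 0]).mean())        # error on zero-weight cells
```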

3.
This study explores the performance of several two‐stage procedures for testing ordinary least‐squares (OLS) coefficients under heteroscedasticity. A test of the usual homoscedasticity assumption is carried out in the first stage of the procedure. Subsequently, a test of the regression coefficients is chosen and performed in the second stage. Three recently developed methods for detecting heteroscedasticity are examined. In addition, three heteroscedastic robust tests of OLS coefficients are considered. A major finding is that performing a test of heteroscedasticity prior to applying a heteroscedastic robust test can lead to poor control over Type I errors.
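As a concrete illustration of the two-stage logic under study (not the specific detection or robust tests examined in the paper), the sketch below screens with a Breusch-Pagan test and then switches to an HC3 heteroscedasticity-robust coefficient test; data and names are illustrative.

```python
# Sketch: two-stage procedure - test homoscedasticity, then pick the test.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(2)
x = rng.uniform(0, 3, 300)
y = 1 + 0.4 * x + rng.normal(scale=0.5 + 0.5 * x)   # variance grows with x
X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Stage 1: Breusch-Pagan test of the homoscedasticity assumption.
_, bp_pvalue, _, _ = het_breuschpagan(ols.resid, X)

# Stage 2: usual OLS t-test if homoscedastic, HC3-robust test otherwise.
fit = ols if bp_pvalue > 0.05 else sm.OLS(y, X).fit(cov_type="HC3")
print(fit.t_test("x1 = 0"))
```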

4.
During the last half century, hundreds of papers published in statistical journals have documented general conditions where reliance on least squares regression and Pearson's correlation can result in missing even strong associations between variables. Moreover, highly misleading conclusions can be made, even when the sample size is large. There are, in fact, several fundamental concerns related to non‐normality, outliers, heteroscedasticity, and curvature that can result in missing a strong association. Simultaneously, a vast array of new methods has been derived for effectively dealing with these concerns. The paper (i) reviews why least squares regression and classic inferential methods can fail, (ii) provides an overview of the many modern strategies for dealing with known problems, including some recent advances, and (iii) illustrates that modern robust methods can make a practical difference in our understanding of data. Included are some general recommendations regarding how modern methods might be used. Copyright © 2011 John Wiley & Sons, Ltd.

5.
This paper develops synthetic validity estimates based on a meta-analytic-weighted least squares (WLS) approach to job component validity (JCV), using position analysis questionnaire (PAQ) estimates of job characteristics, and the Data, People, & Things ratings from the Dictionary of Occupational Titles as indices of job complexity. For the general aptitude test battery database of 40,487 employees, nine validity coefficients were estimated for 192 positions. The predicted validities from the WLS approach had lower estimated variability than would be obtained from either the classic JCV approach or local criterion-related validity studies. Data, People, & Things summary ratings did not consistently moderate validity coefficients, whereas the PAQ data did moderate validity coefficients. In sum, these results suggest that synthetic validity procedures should incorporate a WLS regression approach. Moreover, researchers should consider a comprehensive set of job characteristics when considering job complexity rather than a single aggregated index.

6.
A central assumption that is implicit in estimating item parameters in item response theory (IRT) models is the normality of the latent trait distribution, whereas a similar assumption made in categorical confirmatory factor analysis (CCFA) models is the multivariate normality of the latent response variables. Violation of the normality assumption can lead to biased parameter estimates. Although previous studies have focused primarily on unidimensional IRT models, this study extended the literature by considering a multidimensional IRT model for polytomous responses, namely the multidimensional graded response model. Moreover, this study is one of few studies that specifically compared the performance of full-information maximum likelihood (FIML) estimation versus robust weighted least squares (WLS) estimation when the normality assumption is violated. The research also manipulated the number of nonnormal latent trait dimensions. Results showed that FIML consistently outperformed WLS when there were one or multiple skewed latent trait distributions. More interestingly, the bias of the discrimination parameters was non-ignorable only when the corresponding factor was skewed. Having other skewed factors did not further exacerbate the bias, whereas biases of boundary parameters increased as more nonnormal factors were added. The item parameter standard errors recovered well with both estimation algorithms regardless of the number of nonnormal dimensions.

7.
Moderated multiple regression (MMR) has been widely used to investigate the interaction or moderating effects of a categorical moderator across a variety of subdisciplines in the behavioral and social sciences. In view of the frequent violation of the homogeneity of error variance assumption in MMR applications, the weighted least squares (WLS) approach has been proposed as one of the alternatives to the ordinary least squares method for the detection of the interaction effect between a dichotomous moderator and a continuous predictor. Although the existing result is informative in assuring the statistical accuracy and computational ease of the WLS-based method, no explicit algebraic formulation and underlying distributional details are available. This article aims to delineate the fundamental properties of the WLS test in connection with the well-known Welch procedure for regression slope homogeneity under error variance heterogeneity. With elaborately systematic derivation and analytic assessment, it is shown that the notion of WLS is implicitly embedded in the Welch approach. More importantly, an extensive simulation study is conducted to demonstrate the conditions in which the Welch test will substantially outperform the WLS method, as the two can yield different conclusions. Welch's solution to the Behrens-Fisher problem is so entrenched that the use of its direct extension within the linear regression framework can arguably be recommended. In order to facilitate the application of Welch's procedure, the SAS and R computing algorithms are presented. The study contributes to the understanding of methodological variants for detecting the effect of a dichotomous moderator in the context of moderated multiple regression. Supplemental materials for this article may be downloaded from brm.psychonomic-journals.org/content/supplemental.
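A minimal sketch of a Welch-type slope-homogeneity test for two groups (separate OLS fits combined with a Satterthwaite df approximation), consistent with the procedure's logic; this is not the authors' SAS/R code, and the data are illustrative.

```python
# Sketch: Welch-type test of slope equality across two independent groups.
import numpy as np
import statsmodels.api as sm
from scipy import stats

def welch_slope_test(x1, y1, x2, y2):
    f1 = sm.OLS(y1, sm.add_constant(x1)).fit()
    f2 = sm.OLS(y2, sm.add_constant(x2)).fit()
    b1, b2 = f1.params[1], f2.params[1]
    v1, v2 = f1.bse[1] ** 2, f2.bse[1] ** 2          # squared slope SEs
    t = (b1 - b2) / np.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = (v1 + v2) ** 2 / (v1 ** 2 / f1.df_resid + v2 ** 2 / f2.df_resid)
    return t, df, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(3)
x1, x2 = rng.normal(size=80), rng.normal(size=120)
y1 = 0.5 * x1 + rng.normal(size=80)                  # group 1: error SD 1
y2 = 0.9 * x2 + rng.normal(scale=2.5, size=120)      # group 2: error SD 2.5
print(welch_slope_test(x1, y1, x2, y2))
```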

8.
Confirmatory factor analysis (CFA) is widely used for examining hypothesized relations among ordinal variables (e.g., Likert-type items). A theoretically appropriate method fits the CFA model to polychoric correlations using either weighted least squares (WLS) or robust WLS. Importantly, this approach assumes that a continuous, normal latent process determines each observed variable. The extent to which violations of this assumption undermine CFA estimation is not well-known. In this article, the authors empirically study this issue using a computer simulation study. The results suggest that estimation of polychoric correlations is robust to modest violations of underlying normality. Further, WLS performed adequately only at the largest sample size but led to substantial estimation difficulties with smaller samples. Finally, robust WLS performed well across all conditions.
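The latent-process assumption above can be made concrete with a two-step polychoric correlation estimate: thresholds from the marginal proportions, then a one-parameter likelihood maximization over the latent correlation. The sketch below assumes underlying bivariate normality and is illustrative only, not the authors' implementation.

```python
# Sketch: two-step polychoric correlation for two ordinal items.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def polychoric(x, y):
    kx, ky = x.max() + 1, y.max() + 1            # categories coded 0..k-1
    table = np.zeros((kx, ky))
    np.add.at(table, (x, y), 1)                  # contingency table
    # Step 1: thresholds from cumulative marginal proportions.
    tx = stats.norm.ppf(np.cumsum(table.sum(1))[:-1] / len(x))
    ty = stats.norm.ppf(np.cumsum(table.sum(0))[:-1] / len(x))
    ax = np.concatenate([[-8.0], tx, [8.0]])     # +/-8 ~ +/-infinity here
    ay = np.concatenate([[-8.0], ty, [8.0]])

    def negloglik(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        F = lambda a, b: stats.multivariate_normal.cdf([a, b], cov=cov)
        ll = 0.0
        for i in range(kx):
            for j in range(ky):                  # rectangle prob. of cell (i, j)
                p = (F(ax[i + 1], ay[j + 1]) - F(ax[i], ay[j + 1])
                     - F(ax[i + 1], ay[j]) + F(ax[i], ay[j]))
                ll += table[i, j] * np.log(max(p, 1e-12))
        return -ll

    # Step 2: maximize the likelihood over the latent correlation.
    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x

rng = np.random.default_rng(4)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=1000)
x = np.digitize(z[:, 0], [-0.5, 0.5])            # 3 ordinal categories
y = np.digitize(z[:, 1], [-0.8, 0.2, 1.0])       # 4 ordinal categories
print(polychoric(x, y))                          # should be near 0.5
```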

9.
Moderated multiple regression (MMR) is frequently employed to analyse interaction effects between continuous predictor variables. The procedure of mean centring is commonly recommended to mitigate the potential threat of multicollinearity between predictor variables and the constructed cross-product term. Also, centring typically provides a more straightforward interpretation of the lower-order terms. This paper attempts to clarify two methodological issues of potential confusion. First, the positive and negative effects of mean centring on multicollinearity diagnostics are explored. It is illustrated that the mean centring method is, depending on the characteristics of the data, capable of either increasing or decreasing various measures of multicollinearity. Second, the exact reason why mean centring does not affect the detection of interaction effects is given. The explication shows the symmetrical influence of mean centring on the corrected sum of squares and variance inflation factor of the product variable while maintaining the equivalence between the two residual sums of squares for the regression of the product term on the two predictor variables. Thus the resulting test statistic remains unchanged regardless of the obvious modification of multicollinearity with mean centring. These findings provide a clear understanding and demonstration of the diverse impact of mean centring in MMR applications.
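The invariance result is easy to verify numerically: the sketch below (illustrative simulated data) fits the MMR model with raw and with mean-centred predictors and shows that the interaction t statistic is identical.

```python
# Sketch: mean centring leaves the interaction test statistic unchanged.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.normal(5, 1, n), rng.normal(3, 1, n)
y = 1 + 0.2 * x1 + 0.3 * x2 + 0.15 * x1 * x2 + rng.normal(size=n)

def interaction_t(a, b):
    X = sm.add_constant(np.column_stack([a, b, a * b]))
    return sm.OLS(y, X).fit().tvalues[-1]        # t for the product term

print(interaction_t(x1, x2))                     # raw predictors
print(interaction_t(x1 - x1.mean(), x2 - x2.mean()))  # centred: same t
```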

10.
Li CH. Psychological Assessment, 2012, 24(3), 770-776.
Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least squares (robust WLS) estimation within the Taiwanese cultural context. Results of exploratory and confirmatory factor analyses using 3 different samples (total N = 1,119) show that the factor structure of the Chinese version of the LOT is better conceptualized as a correlated 2-factor model than a single-factor model. The composite reliability was 0.70 for the "disagreement on optimism" factor and 0.74 for the "agreement on optimism" factor. In addition, comparison results of the 2 estimators using empirical data and simulation data suggest that robust WLS is less biased than maximum likelihood (ML) for estimating factor loadings and interfactor correlations in the factor analytic model of the Chinese version of the LOT.

11.
This study provides a review of two methods for analyzing multilevel data with group-level outcome variables and compares them in a simulation study. The analytical methods included an unadjusted ordinary least squares (OLS) analysis of group means and a two-step adjustment of the group means suggested by Croon and van Veldhoven (2007). The Type I error control, power, bias, standard errors, and RMSE in parameter estimates were compared across design conditions that included manipulations of number of predictor variables, level of correlation between predictors, level of intraclass correlation, predictor reliability, effect size, and sample size. The results suggested that an OLS analysis of the group means, with White's heteroscedasticity adjustment, provided more power for tests of group-level predictors, but less power for tests of individual-level predictors. Furthermore, this simple analysis avoided the extreme bias in parameter estimates and inadmissible solutions that were encountered with other strategies. These results were interpreted in terms of recommended analytical methods for applied researchers.
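A minimal sketch of the recommended analysis, assuming simulated two-level data: regress the group-level outcome on group means with a White-type (HC0) heteroscedasticity-consistent covariance in statsmodels. Names and data are illustrative.

```python
# Sketch: OLS on group means with White's heteroscedasticity adjustment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_groups, n_per = 50, 10
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=g.size)                      # individual-level predictor
xbar = np.bincount(g, weights=x) / n_per         # group means of x
z = rng.normal(size=n_groups)                    # group-level predictor
y = 0.4 * xbar + 0.3 * z + rng.normal(size=n_groups)  # group-level outcome

X = sm.add_constant(np.column_stack([xbar, z]))
fit = sm.OLS(y, X).fit(cov_type="HC0")           # White's adjustment
print(fit.summary())
```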

12.
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.

13.
Miller suggested ordinary least squares estimation of a constant transition matrix; Madansky proposed a relatively more efficient weighted least squares estimator which corrects for heteroscedasticity. In this paper an efficient generalized least squares estimator is derived which utilizes the entire covariance matrix of the disturbances. This estimator satisfies the condition that each row of the transition matrix must sum to unity. Madansky noted that estimates of the variances could be negative; a method for obtaining consistent non-negative estimates of the variances is suggested in this paper. The technique is applied to the hypothetical sample data used by Miller and Madansky. I am indebted to a referee for his thoughtful suggestions on content and style.
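For concreteness, the sketch below estimates a transition matrix from aggregate share data by constrained least squares with the row-sum-to-unity (and non-negativity) restriction; it is a simplified stand-in for the paper's GLS estimator, with illustrative data rather than the Miller/Madansky sample.

```python
# Sketch: constrained LS estimate of a transition matrix (rows sum to 1).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
k, T = 3, 40
P_true = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.80, 0.10],
                   [0.05, 0.15, 0.80]])
shares = [np.full(k, 1.0 / k)]
for _ in range(T):                               # noisy aggregate shares
    shares.append(shares[-1] @ P_true + rng.normal(scale=0.01, size=k))
Y_prev, Y_next = np.array(shares[:-1]), np.array(shares[1:])

def loss(p):
    return np.sum((Y_next - Y_prev @ p.reshape(k, k)) ** 2)

cons = [{"type": "eq", "fun": lambda p, i=i: p.reshape(k, k)[i].sum() - 1.0}
        for i in range(k)]                       # each row sums to unity
res = minimize(loss, np.full(k * k, 1.0 / k), bounds=[(0, 1)] * (k * k),
               constraints=cons, method="SLSQP")
print(res.x.reshape(k, k).round(3))
```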

14.
In many psychological questionnaires the need to analyze empirical data raises the fundamental problem of possible fake or fraudulent observations in the data. This aspect is particularly relevant for researchers working on sensitive topics such as, for example, risky sexual behaviors and drug addictions. Our contribution presents a new probabilistic approach, called Sample Generation by Replacement (SGR), to address the problem of evaluating the sensitivity of 8 commonly used SEM-based fit indices (Goodness of Fit Index, GFI; Adjusted Goodness of Fit Index, AGFI; Expected Cross Validation Index, ECVI; Standardized Root-Mean-Square Residual Index, SRMR; Root-Mean-Square Error of Approximation, RMSEA; Comparative Fit Index, CFI; Nonnormed Fit Index, NNFI; and Normed Fit Index, NFI) to fake-good ordinal data. We used SGR to perform a simulation study involving 3 different SEM models, 2 sample size conditions, and 2 estimation methods: maximum likelihood (ML) and weighted least squares (WLS). Our results show that the incremental fit indices (CFI, NNFI, and NFI) are clearly more sensitive to fake perturbation than the absolute fit indices (GFI, AGFI, and ECVI). Overall, NFI turned out to be the best and most reliable fit index. We also applied SGR to real behavioral data on (non)compliance in liver transplant patients.

15.
Traditional least squares regression focuses on accurately fitting the data set at hand, which readily leads to overfitting and undermines the replicability of a model's conclusions. As the methodological field develops, emerging statistical tools can remedy the limitations of traditional methods, and shifting from an excessive focus on interpreting regression coefficient values toward improving the predictive power of research findings has become an increasingly important trend in psychology. By introducing a penalty term into model estimation, the Lasso method achieves higher predictive accuracy and greater model generalizability, effectively handles overfitting and multicollinearity, and thereby supports the construction and refinement of psychological theory.
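A minimal sketch of the penalized-estimation idea: Lasso adds an L1 penalty to the least squares criterion, trading a little in-sample fit for better out-of-sample prediction under collinearity. The scikit-learn example below uses illustrative simulated data, not data from the paper.

```python
# Sketch: OLS vs. Lasso out-of-sample prediction with correlated predictors.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n, p = 100, 30                                   # many predictors, few signals
X = rng.normal(size=(n, p)) + rng.normal(size=(n, 1))  # induce collinearity
beta = np.zeros(p)
beta[:3] = [1.0, -0.5, 0.5]                      # only 3 true effects
y = X @ beta + rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
lasso = LassoCV(cv=5).fit(X_tr, y_tr)            # penalty via cross-validation
print("OLS test R^2:  ", ols.score(X_te, y_te))
print("Lasso test R^2:", lasso.score(X_te, y_te))
```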

16.
The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e. population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e. the analysis of variance F test) typically provides invalid results (e.g. too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e. trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal.
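For the two-group case, the trimmed-means Welch-type (Yuen) comparison recommended above is available directly in SciPy; the sketch below uses illustrative heavy-tailed, heteroscedastic data (the paper studies one-way designs that may involve more than two groups).

```python
# Sketch: Yuen's trimmed-means Welch test for two non-normal groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a = rng.standard_t(df=3, size=40)                # heavy-tailed group
b = 0.5 + 2.0 * rng.standard_t(df=3, size=60)    # shifted, more variable

# equal_var=False with trim > 0 gives the trimmed Welch (Yuen) procedure.
res = stats.ttest_ind(a, b, equal_var=False, trim=0.2)
print(res.statistic, res.pvalue)
```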

17.
This paper presents a comparative study of three popular methods for multicriteria decision analysis based on a particular model of human preferential judgement. Since decisions are invariably made within a given context, we model relative preferences as ratios of increments or decrements in an interval on an axis of desirability. Next we sort the ratio magnitudes into a small number of categories, represented by numerical values on a geometric scale. We explain why the analytic hierarchy process (AHP) and the French collection of ELECTRE methods, typically based on pairwise comparison methods, are concerned with categories of ratio magnitudes, whereas the simple multiattribute rating technique (SMART) essentially uses orders of magnitude of these ratios. This phenomenon provides a common basis for the analysis of the methods in question and for a cross-validation of their results. We illustrate the approach via a well-known case study, the choice of a location for a nuclear power plant. We conclude by discussing the scope of the comparative study. © 1997 John Wiley & Sons, Ltd.
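To make the pairwise-comparison machinery concrete, the sketch below computes AHP priorities as the principal eigenvector of a reciprocal comparison matrix, plus Saaty's consistency index; the matrix values are illustrative, not from the case study.

```python
# Sketch: AHP priority weights from a pairwise comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],                   # a_ij: preference of i over j
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
lead = np.argmax(np.real(vals))                  # principal eigenvalue
w = np.real(vecs[:, lead])
w = w / w.sum()                                  # normalized priority weights
ci = (np.real(vals[lead]) - len(A)) / (len(A) - 1)   # consistency index
print(w.round(3), round(ci, 4))
```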

18.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites as well as a partial least squares approach to SEM facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions even when the population distribution is multivariate normal. Results also show that the LS regression coefficients become more efficient when considering the sampling errors in the weights of composites than those that are conditional on weights.

19.
Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 − α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
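As one example of the summary-data tests described, the sketch below compares two independent Pearson correlations via Fisher's z transformation and reports a confidence interval on the z scale; it is a Python rendering of standard formulas, not the authors' SPSS/SAS code.

```python
# Sketch: test of two independent correlations from summary data.
import numpy as np
from scipy import stats

def independent_r_test(r1, n1, r2, n2, alpha=0.05):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)      # Fisher z transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    crit = stats.norm.ppf(1 - alpha / 2)         # CI on the Fisher z scale
    ci = (z1 - z2 - crit * se, z1 - z2 + crit * se)
    return z, p, ci

print(independent_r_test(0.55, 100, 0.30, 120))
```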

20.
Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
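The standard direct range restriction correction whose bias the paper studies is commonly given as Thorndike's Case II formula; a minimal sketch, where u denotes the restricted-to-unrestricted SD ratio of the predictor (the example values are illustrative).

```python
# Sketch: Thorndike Case II correction for direct range restriction.
import numpy as np

def correct_range_restriction(r, u):
    """Estimate the unrestricted correlation from restricted r and
    u = SD(x, restricted) / SD(x, unrestricted)."""
    return (r / u) / np.sqrt(1.0 + r ** 2 * (1.0 / u ** 2 - 1.0))

# Example: observed r = .25 in a selected sample with u = 0.6.
print(correct_range_restriction(0.25, 0.6))      # roughly .40
```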
