Similar Literature
A total of 20 similar documents were found.
1.
Moderated multiple regression (MMR) arguably is the most popular statistical technique for investigating regression slope differences (interactions) across groups (e.g., aptitude-treatment interactions in training and differential test score-job performance prediction in selection testing). However, heterogeneous error variances can greatly bias the typical MMR analysis, and the conditions that cause heterogeneity are not uncommon. Statistical corrections that have been developed require special calculations and are not conducive to follow-up analyses that describe an interaction effect in depth. For 2-group studies, a weighted least squares (WLS) approach is recommended: it is statistically accurate, is readily executed through popular software packages (e.g., SAS Institute, 1999; SPSS, 1999), and allows follow-up tests.
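A minimal sketch of the two-step idea, assuming simulated data and the NumPy/statsmodels APIs (variable names are illustrative, not taken from the article):

```python
# Hedged sketch: WLS for a 2-group moderated regression with
# heteroscedastic errors. Group-specific residual variances from
# separate OLS fits supply the weights; the x*group column carries
# the interaction test. Illustrative, not the article's exact code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)            # 0/1 grouping variable
x = rng.normal(size=n)                   # continuous predictor
# True slopes differ by group; error SD also differs (heteroscedasticity)
y = 1 + (0.5 + 0.4 * group) * x + rng.normal(scale=np.where(group == 1, 2.0, 1.0))

X = sm.add_constant(np.column_stack([x, group, x * group]))

# Step 1: OLS within each group to estimate the error variances
s2 = np.array([sm.OLS(y[group == g], sm.add_constant(x[group == g])).fit().mse_resid
               for g in (0, 1)])

# Step 2: WLS with weights 1 / sigma_g^2
wls = sm.WLS(y, X, weights=1.0 / s2[group]).fit()
print(wls.summary())                     # the x*group term tests the interaction
```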

2.
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR), in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
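In schematic form, the kind of two-level specification the abstract describes looks like this (generic notation, not the authors' exact symbols):

```latex
\begin{aligned}
\text{Level 1:}\quad & y_i = b_{0i} + b_{1i} x_i + e_i, \qquad e_i \sim N(0, \sigma_e^2),\\
\text{Level 2:}\quad & b_{0i} = \gamma_{00} + \gamma_{01} z_i + u_{0i},\\
                     & b_{1i} = \gamma_{10} + \gamma_{11} z_i + u_{1i}.
\end{aligned}
```

Substituting level 2 into level 1 recovers the familiar MMR product term, gamma_11 * x_i * z_i, but with a composite residual u_0i + u_1i * x_i + e_i whose variance depends on x_i; this built-in heteroscedasticity is what NML exploits and what LS ignores.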

3.
Welch’s (Biometrika 29: 350–362, 1938) procedure has emerged as a robust alternative to the Student’s t test for comparing the means of two normal populations with unknown and possibly unequal variances. To facilitate the advocated statistical practice of confidence intervals and further improve the potential applicability of Welch’s procedure, in the present article, we consider exact approaches to optimize sample size determinations for precise interval estimation of the difference between two means under various allocation and cost considerations. The desired precision of a confidence interval is assessed with respect to the control of expected half-width, and to the assurance probability of interval half-width within a designated value. Furthermore, the design schemes in terms of participant allocation and cost constraints include (a) giving the ratio of group sizes, (b) specifying one sample size, (c) attaining maximum precision performance for a fixed cost, and (d) meeting a specified precision level for the least cost. The proposed methods provide useful alternatives to the conventional sample size procedures. Also, the developed programs expand the degree of generality for the existing statistical software packages and can be accessed at brm.psychonomic-journals.org/content/supplemental.
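A rough sketch of the half-width criterion under planning values for the two standard deviations (a plug-in approximation, not the paper's exact computation; the function name is hypothetical):

```python
# Hedged sketch: smallest balanced per-group n whose Welch CI for
# mu1 - mu2 has approximate half-width <= omega, given planning SDs.
import numpy as np
from scipy import stats

def welch_n_per_group(sd1, sd2, omega, alpha=0.05, n_max=10_000):
    for n in range(2, n_max):
        se = np.sqrt(sd1**2 / n + sd2**2 / n)
        # Welch-Satterthwaite degrees of freedom for equal group sizes
        df = (sd1**2 / n + sd2**2 / n) ** 2 / (
            (sd1**2 / n) ** 2 / (n - 1) + (sd2**2 / n) ** 2 / (n - 1))
        if stats.t.ppf(1 - alpha / 2, df) * se <= omega:
            return n
    raise ValueError("no n <= n_max meets the target half-width")

print(welch_n_per_group(sd1=4.0, sd2=2.0, omega=1.0))
```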

4.
Moderated multiple regression (MMR) has been widely employed to analyze interaction or moderating effects in the behavioral and social sciences. Much of the methodological literature in the context of MMR concerns statistical power and sample size calculations of hypothesis tests for detecting moderator variables. Notably, interval estimation is a distinct and more informative alternative to significance testing for inference purposes. To facilitate the practice of reporting confidence intervals in MMR analyses, the present article presents two approaches to sample size determinations for precise interval estimation of interaction effects between continuous moderator and predictor variables. One approach provides the necessary sample size so that the designated interval for the least squares estimator of moderating effects attains the specified coverage probability. The other gives the sample size required to ensure, with a given tolerance probability, that a confidence interval of moderating effects with a desired confidence coefficient will be within a specified range. Numerical examples and simulation results are presented to illustrate the usefulness and advantages of the proposed methods that account for the embedded randomness and distributional characteristic of the moderator and predictor variables.
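A hedged sketch of how such precision targets can be checked by simulation at a candidate sample size (settings and names are illustrative, not the paper's design):

```python
# Hedged sketch: Monte Carlo estimate of the coverage and half-width
# of the CI for a continuous-by-continuous interaction at sample size n.
import numpy as np
import statsmodels.api as sm

def interaction_ci_check(n, b3=0.3, reps=2000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    cover, halfw = 0, []
    for _ in range(reps):
        x = rng.normal(size=n)
        z = rng.normal(size=n)
        y = 1 + 0.5 * x + 0.5 * z + b3 * x * z + rng.normal(size=n)
        fit = sm.OLS(y, sm.add_constant(np.column_stack([x, z, x * z]))).fit()
        lo, hi = fit.conf_int(alpha)[3]          # row 3 = interaction term
        cover += lo <= b3 <= hi
        halfw.append((hi - lo) / 2)
    return cover / reps, float(np.mean(halfw))

print(interaction_ci_check(n=120))               # (coverage, mean half-width)
```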

5.
The data obtained from one‐way independent groups designs are typically non‐normal in form and rarely equally variable across treatment populations (i.e. population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e. the analysis of variance F test) typically provides invalid results (e.g. too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non‐normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied to the usual least squares estimators of central tendency and variability, and the Welch test applied to robust estimators (i.e. trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non‐normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non‐normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non‐normal.
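A minimal sketch of Yuen's trimmed-means analogue of the Welch test, the "robust estimators" variant referred to above (20% trimming is a common default, not the authors' prescription):

```python
# Hedged sketch of Yuen's two-sample test: trimmed means in the
# numerator, winsorized variances in the standard error, Welch-style df.
import numpy as np
from scipy import stats

def yuen(x, y, trim=0.2):
    def winsorized_ss(a):
        g = int(np.floor(trim * len(a)))
        a = np.sort(a)
        w = np.clip(a, a[g], a[-g - 1])          # winsorize the tails
        return np.sum((w - w.mean()) ** 2)
    gx, gy = int(np.floor(trim * len(x))), int(np.floor(trim * len(y)))
    hx, hy = len(x) - 2 * gx, len(y) - 2 * gy    # effective sizes after trimming
    dx = winsorized_ss(x) / (hx * (hx - 1))      # squared SEs of trimmed means
    dy = winsorized_ss(y) / (hy * (hy - 1))
    t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) / np.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx**2 / (hx - 1) + dy**2 / (hy - 1))
    return t, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(0)
print(yuen(rng.exponential(1, 40), rng.exponential(2, 40)))
```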

6.
方杰, 温忠麟. 《心理科学进展》 (Advances in Psychological Science), 2022, 30(5): 1183–1190
Moderation analysis with multiple regression is widely used in the social sciences. This paper briefly reviews the shortcomings of the current multiple-regression approach to moderation analysis, including arbitrary re-specification of the model being tested, insufficient distinction between the predictor and the moderator, the difficulty of satisfying the assumption of homoscedastic error variances, and the fact that the effect size index ΔR² does not directly measure how strongly the moderator moderates the predictor-outcome relationship. A better approach is to conduct moderation analysis with a two-level regression model and a corresponding effect size index. After introducing the new method and the new effect size, the paper summarizes a step-by-step procedure for moderation analysis and uses an example to demonstrate how to analyze the moderation effect, and its effect size, in a two-level regression model with the Mplus software. Finally, developments in moderation analysis with two-level regression models are discussed, including robust moderation analysis, moderation analysis with latent variables, moderated mediation, and mediated moderation.

7.
The goal of this study was to investigate the performance of Hall’s transformation of the Brunner-Dette-Munk (BDM) and Welch-James (WJ) test statistics and Box-Cox’s data transformation in factorial designs when normality and variance homogeneity assumptions were violated separately and jointly. On the basis of unweighted marginal means, we performed a simulation study to explore the operating characteristics of the methods proposed for a variety of distributions with small sample sizes. Monte Carlo simulation results showed that when data were sampled from symmetric distributions, the error rates of the original BDM and WJ tests were scarcely affected by the lack of normality and homogeneity of variance. In contrast, when data were sampled from skewed distributions, the original BDM and WJ rates were not well controlled. Under such circumstances, the results clearly revealed that Hall’s transformation of the BDM and WJ tests provided generally better control of Type I error rates than did the same tests based on Box-Cox’s data transformation. Among all the methods considered in this study, we also found that Hall’s transformation of the BDM test yielded the best control of Type I errors, although it was often less powerful than either of the WJ tests when both approaches reasonably controlled the error rates.
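For orientation, a small sketch of the Box-Cox side of the comparison using SciPy (Hall's transformation is not reproduced here; data are illustrative):

```python
# Hedged sketch: Box-Cox transformation of positive, skewed data before
# a factorial test. SciPy estimates the lambda parameter by maximum
# likelihood; the data must be strictly positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
skewed = rng.lognormal(size=200)                 # positive, right-skewed
transformed, lam = stats.boxcox(skewed)          # MLE of lambda
print(lam, stats.skew(skewed), stats.skew(transformed))
```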

8.
One of the most problematic issues in contemporary meta-analysis is the estimation and interpretation of moderating effects. Monte Carlo analyses are developed in this article that compare bivariate correlations, ordinary least squares and weighted least squares (WLS) multiple regression, and hierarchical subgroup (HS) analysis for assessing the influence of continuous moderators under conditions of multicollinearity and skewed distribution of study sample sizes (heteroscedasticity). The results show that only WLS is largely unaffected by multicollinearity and heteroscedasticity, whereas the other techniques are substantially weakened. Of note, HS, one of the most popular methods, typically provides the most inaccurate results, whereas WLS, one of the least popular methods, typically provides the most accurate results.
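A hedged sketch of a WLS meta-regression with study sample sizes as weights, a common stand-in for inverse sampling variances (data and names are illustrative):

```python
# Hedged sketch: regress observed study effects on a continuous
# moderator, weighting each study by its sample size.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 50                                            # number of studies
n_i = rng.integers(20, 500, k)                    # skewed study sizes
mod = rng.normal(size=k)                          # continuous moderator
r_i = 0.2 + 0.1 * mod + rng.normal(scale=1 / np.sqrt(n_i))  # observed effects

wls = sm.WLS(r_i, sm.add_constant(mod), weights=n_i).fit()
print(wls.params, wls.bse)                        # moderator slope and its SE
```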

9.
Moderated multiple regression (MMR) is frequently employed to analyse interaction effects between continuous predictor variables. The procedure of mean centring is commonly recommended to mitigate the potential threat of multicollinearity between predictor variables and the constructed cross-product term. Also, centring typically provides a more straightforward interpretation of the lower-order terms. This paper attempts to clarify two methodological issues of potential confusion. First, the positive and negative effects of mean centring on multicollinearity diagnostics are explored. It is illustrated that the mean centring method is, depending on the characteristics of the data, capable of either increasing or decreasing various measures of multicollinearity. Second, the exact reason why mean centring does not affect the detection of interaction effects is given. The explication shows the symmetrical influence of mean centring on the corrected sum of squares and variance inflation factor of the product variable while maintaining the equivalence between the two residual sums of squares for the regression of the product term on the two predictor variables. Thus the resulting test statistic remains unchanged regardless of the obvious modification of multicollinearity with mean centring. These findings provide a clear understanding and demonstration of the diverse impact of mean centring in MMR applications.
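A short sketch illustrating both points at once, assuming statsmodels: centring can move the product term's VIF dramatically, yet the t statistic for the interaction is identical (data are illustrative, not the paper's example):

```python
# Hedged sketch: compare VIF of the product term and the interaction
# t statistic with raw versus mean-centred predictors.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
x = rng.normal(5, 1, 300)                         # nonzero means inflate the
z = rng.normal(5, 1, 300)                         # collinearity with x*z
y = 1 + x + z + 0.3 * x * z + rng.normal(size=300)

for label, (a, b) in [("raw", (x, z)), ("centred", (x - x.mean(), z - z.mean()))]:
    X = sm.add_constant(np.column_stack([a, b, a * b]))
    fit = sm.OLS(y, X).fit()
    print(label, "VIF(product) =", round(variance_inflation_factor(X, 3), 1),
          " t(interaction) =", round(fit.tvalues[3], 4))
```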

10.
A Monte Carlo study was used to compare four approaches to growth curve analysis of subjects assessed repeatedly with the same set of dichotomous items: a two‐step procedure first estimating latent trait measures using MULTILOG and then using a hierarchical linear model to examine the changing trajectories with the estimated abilities as the outcome variable; a structural equation model using modified weighted least squares (WLSMV) estimation; and two approaches in the framework of multilevel item response models, including a hierarchical generalized linear model using Laplace estimation, and Bayesian analysis using Markov chain Monte Carlo (MCMC). These four methods have similar power in detecting the average linear slope across time. MCMC and Laplace estimates perform relatively better on the bias of the average linear slope and corresponding standard error, as well as the item location parameters. For the variance of the random intercept, and the covariance between the random intercept and slope, all estimates are biased in most conditions. For the random slope variance, only Laplace estimates are unbiased when there are eight time points.

11.
Personality moderating variables act to qualify the relationship between a personality trait measure and a relevant behavioral criterion. Two data analytic techniques that can be used to test for significant moderating effects are the "median split" (MS) approach and the "moderated multiple regression" (MMR) approach. The goals of the present research were (a) to apply the MS approach to computer-simulated data in which the moderator and trait extremity are confounded, to determine the extent of artifact, and (b) to compare the performance (Type I and Type II error rates) of the two approaches when applied to confounded and nonconfounded data. It was found that when the MS approach was applied to confounded data in which no real moderating effect existed, this approach produced an alarming rate of apparent, but spurious, moderating effects. When the MMR approach was applied to the same data, the rate of spurious effects was reduced to that expected by chance. When both approaches were applied to simulated data which contained genuine moderating effects, the MMR approach consistently resulted in more correct detections of these effects than the MS approach. We conclude that researchers should always employ the MMR rather than the MS approach when testing for personality moderator variable effects.
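A hedged sketch of the confound: when the "moderator" is correlated with trait extremity, the median-split comparison of within-group correlations suggests moderation that the MMR product-term test correctly fails to find (simulation settings are illustrative):

```python
# Hedged sketch: no true moderation exists, but the high-moderator half
# contains more extreme trait scores, so its trait-criterion correlation
# is inflated relative to the low half (a range effect).
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
trait = rng.normal(size=n)
mod = 0.6 * np.abs(trait) + rng.normal(size=n)    # moderator confounded with
y = 0.5 * trait + rng.normal(size=n)              # trait extremity; no moderation

# Median-split approach: compare trait-criterion correlations across halves
hi = mod > np.median(mod)
r_hi = stats.pearsonr(trait[hi], y[hi])[0]
r_lo = stats.pearsonr(trait[~hi], y[~hi])[0]
print("MS correlations:", round(r_lo, 2), round(r_hi, 2))

# MMR approach: test the product term directly
X = sm.add_constant(np.column_stack([trait, mod, trait * mod]))
print("MMR interaction p =", round(sm.OLS(y, X).fit().pvalues[3], 3))
```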

12.
Weighted least squares fitting using ordinary least squares algorithms
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation. (This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the author.)
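A minimal sketch of the idea for one concrete model, weighted low-rank matrix approximation, where the OLS step is a truncated SVD (assumes binary or [0, 1]-scaled weights; a sketch of the majorization scheme, not the paper's full generality):

```python
# Hedged sketch: each iteration imputes the data under the current fit
# (the majorization step) and then runs the ordinary, unweighted least
# squares step for this model, a rank-r truncated SVD. With 0/1 weights
# this is exactly missing-data imputation.
import numpy as np

def wls_lowrank(X, W, r, iters=200):
    Xhat = np.zeros_like(X)
    for _ in range(iters):
        Z = W * X + (1 - W) * Xhat          # impute unweighted cells
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Xhat = (U[:, :r] * s[:r]) @ Vt[:r]  # OLS (rank-r) step
    return Xhat

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))
W = (rng.random((10, 6)) > 0.3).astype(float)   # binary weights: "missing" cells
print(np.sum(W * (X - wls_lowrank(X, W, r=2)) ** 2))  # weighted loss
```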

13.
Stephen Carr. Sophia, 2001, 40(2): 31–45
This article critically examines some of the theological and Neo-Orthodox readings of Foucault. An exploration of some key texts reveals limitations in, e.g., Milbank’s account, and is developed further through an examination of Sharon Welch’s discussion of feminist liberation theology. A deeper engagement with Foucault’s work emerges, clarifying issues of power, disclosure, truth and ‘agonism’. The paper proposes that Foucault’s work is not an expression of ‘nihilism’ but rather is important for the self-critique and integrity of theology.

14.
The purpose of this study was to evaluate a modified test of equivalence for conducting normative comparisons when distribution shapes are non‐normal and variances are unequal. A Monte Carlo study was used to compare the empirical Type I error rates and power of the proposed Schuirmann–Yuen test of equivalence, which utilizes trimmed means, with that of the previously recommended Schuirmann and Schuirmann–Welch tests of equivalence when the assumptions of normality and variance homogeneity are satisfied, as well as when they are not satisfied. The empirical Type I error rates of the Schuirmann–Yuen were much closer to the nominal α level than those of the Schuirmann or Schuirmann–Welch tests, and the power of the Schuirmann–Yuen was substantially greater than that of the Schuirmann or Schuirmann–Welch tests when distributions were skewed or outliers were present. The Schuirmann–Yuen test is recommended for assessing clinical significance with normative comparisons.
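A hedged sketch of Schuirmann's two one-sided tests (TOST) built on Welch's statistic; swapping in trimmed means and winsorized variances, as in the Yuen sketch earlier on this page, yields the Schuirmann–Yuen variant (function name and settings are illustrative):

```python
# Hedged sketch: TOST equivalence test with Welch's statistic against
# equivalence bounds -delta and +delta; both one-sided tests must reject.
import numpy as np
from scipy import stats

def tost_welch(x, y, delta, alpha=0.05):
    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    se = np.sqrt(vx + vy)
    df = se**4 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))
    diff = x.mean() - y.mean()
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha            # True => declare equivalence

rng = np.random.default_rng(0)
print(tost_welch(rng.normal(0, 1, 80), rng.normal(0.05, 2, 80), delta=0.4))
```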

15.
For small and balanced analysis of variance problems, standard computer programs are convenient and efficient. For large problems, regression programs are at least competitive with analysis of variance programs; and, when a problem is unbalanced, they usually provide the only reasonable solution. This paper discusses procedures for using regression programs for the computing of analyses of variance. A procedure for coding matrices is described for experimental designs having nested and crossed factors. Several illustrations are given, and the limitation of the procedure with large repeated measures designs is discussed. A second algorithm is offered for obtaining the sums of squares for nested factors and their interactions in such designs.
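A modern sketch of the same workflow, assuming statsmodels: sum-to-zero (effect) coding lets a regression routine reproduce a factorial ANOVA, including the unbalanced case (data are illustrative, not the paper's coding scheme):

```python
# Hedged sketch: two-factor ANOVA computed through a regression program
# using effect (sum-to-zero) coding, then Type III sums of squares.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": rng.choice(["a1", "a2"], 90),            # unbalanced cell counts
    "B": rng.choice(["b1", "b2", "b3"], 90),      # arise by chance
})
df["y"] = rng.normal(size=90) + (df["A"] == "a2") * 0.5

fit = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(anova_lm(fit, typ=3))                       # Type III sums of squares
```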

16.
A program is described for fitting a regression model in which the relationship between the dependent and the independent variables is described by two regression equations, one for each of two mutually exclusive ranges of the independent variable. The point at which the change from one equation to the other occurs is often unknown, and thus must be estimated. In cognitive psychology, such models are relevant for studying the phenomenon of strategy shifts. The program uses a (weighted) least squares algorithm to estimate the regression parameters and the change point. The algorithm always finds the global minimum of the error sum of squares. The model is applied to data from a mental-rotation experiment. The program’s estimates of the point at which the strategy shift occurs are compared with estimates obtained from a nonlinear least squares minimization procedure in SPSSX.
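A minimal sketch of the grid-search idea: fit two OLS segments at every candidate change point and keep the split with the smallest total error sum of squares, which guarantees the global minimum over the candidate set (illustrative, not the described program):

```python
# Hedged sketch: exhaustive change-point search for a two-phase
# regression, with at least two observations per segment.
import numpy as np

def two_phase_fit(x, y):
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (np.inf, None)
    for k in range(2, len(x) - 2):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)          # OLS line for this segment
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if sse < best[0]:
            best = (sse, x[k])
    return best                                    # (SSE, estimated change point)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 80)
y = np.where(x < 6, 2 + 3 * x, 20 + 0.2 * (x - 6)) + rng.normal(0, 1, 80)
print(two_phase_fit(x, y))
```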

17.
Many books on statistical methods advocate a ‘conditional decision rule’ when comparing two independent group means. This rule states that the decision as to whether to use a ‘pooled variance’ test that assumes equality of variance or a ‘separate variance’ Welch t test that does not should be based on the outcome of a variance equality test. In this paper, we empirically examine the Type I error rate of the conditional decision rule using four variance equality tests and compare this error rate to the unconditional use of either of the t tests (i.e. irrespective of the outcome of a variance homogeneity test) as well as several resampling‐based alternatives when sampling from 49 distributions varying in skewness and kurtosis. Several unconditional tests including the separate variance test performed as well as or better than the conditional decision rule across situations. These results extend and generalize the findings of previous researchers who have argued that the conditional decision rule should be abandoned.
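A hedged sketch of how such Type I error comparisons are run, assuming SciPy; pairing the smaller group with the larger variance is the classic worst case for the pooled test (settings are illustrative):

```python
# Hedged sketch: empirical Type I error of the conditional rule
# (Levene's test decides between pooled and Welch t) versus always
# using the Welch test, under unequal variances and unequal n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps, alpha = 5000, 0.05
rej_cond = rej_welch = 0
for _ in range(reps):
    x = rng.normal(0, 3, 10)                      # same means (H0 true);
    y = rng.normal(0, 1, 40)                      # small n with large variance
    pooled_p = stats.ttest_ind(x, y, equal_var=True).pvalue
    welch_p = stats.ttest_ind(x, y, equal_var=False).pvalue
    cond_p = welch_p if stats.levene(x, y).pvalue < alpha else pooled_p
    rej_cond += cond_p < alpha
    rej_welch += welch_p < alpha
print("conditional:", rej_cond / reps, " always-Welch:", rej_welch / reps)
```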

18.
In this study, we investigated whether computer-animated graphics are more effective than static graphics in teaching statistics. Four statistical concepts were presented and explained to students in class. The presentations included graphics either in static or in animated form. The concepts explained were the multiplication of two matrices, the covariance of two random variables, the method of least squares in linear regression, and α error, β error, and strength of effect. A comprehension test was immediately administered following the presentation. Test results showed a significant advantage for the animated graphics on retention and understanding of the concepts presented.

19.
A common question of interest to researchers in psychology is the equivalence of two or more groups. Failure to reject the null hypothesis of traditional hypothesis tests such as the ANOVA F‐test (i.e., H0: μ1 = … = μk) does not imply the equivalence of the population means. Researchers interested in determining the equivalence of k independent groups should apply a one‐way test of equivalence (e.g., Wellek, 2003). The goals of this study were to investigate the robustness of the one‐way Wellek test of equivalence to violations of the homogeneity of variance assumption, and compare the Type I error rates and power of the Wellek test with a heteroscedastic version which was based on the logic of the one‐way Welch (1951) F‐test. The results indicate that the proposed Wellek–Welch test was insensitive to violations of the homogeneity of variance assumption, whereas the original Wellek test was not appropriate when the population variances were not equal.

20.
Cronbach’s α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach’s α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach’s α for a scale with dichotomous items can be improved by using the upper bound of coefficient ϕ. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach’s α via this method. The simulation analysis showed that Cronbach’s α from upper-bound ϕ might be appropriate for estimating the real reliability when standardized Cronbach’s α is problematic.
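A small sketch of raw and standardized Cronbach's α computed from an item score matrix (the upper-bound ϕ correction itself is not reproduced here; data are illustrative):

```python
# Hedged sketch: raw alpha from the item covariance matrix,
# standardized alpha from the mean inter-item correlation.
import numpy as np

def cronbach_alpha(items):                        # rows = respondents, cols = items
    k = items.shape[1]
    cov = np.cov(items, rowvar=False)
    raw = k / (k - 1) * (1 - np.trace(cov) / cov.sum())
    r = np.corrcoef(items, rowvar=False)
    rbar = (r.sum() - k) / (k * (k - 1))          # mean inter-item correlation
    std = k * rbar / (1 + (k - 1) * rbar)         # standardized alpha
    return raw, std

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
items = (latent + rng.normal(size=(500, 6)) > 0).astype(float)  # dichotomous items
print(cronbach_alpha(items))
```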
