Similar Literature
A total of 20 similar documents were retrieved.
1.
A policy capturing method combining human judgment with ridge regression is offered that results in superior judgment policy models. The new method (termed smart ridge regression) was tested against four others in seven judgment policy capturing applications. Performance criteria were two cross-validation indices: cross-validated multiple correlation and mean squared error of prediction of new judgments. Smart ridge regression was found to outperform ordinary least squares regression and conventional ridge regression, as well as subjective weighting and equal weighting of cues.
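
The smart ridge algorithm itself is not spelled out in this abstract, so the following is only a minimal numpy sketch of the comparison it describes: fitting ordinary least squares and a conventional ridge model to judgment data and scoring both by cross-validated mean squared error of prediction. The cue matrix, judgments, and penalty grid are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 120, 6
X = rng.normal(size=(n_cases, n_cues))                 # hypothetical cue profiles
true_w = np.array([1.0, 0.8, 0.5, 0.3, 0.0, 0.0])      # hypothetical judgment policy
y = X @ true_w + rng.normal(scale=2.0, size=n_cases)   # noisy judgments

def fit(X, y, lam=0.0):
    """Ridge solution; lam = 0 gives ordinary least squares."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    """k-fold cross-validated mean squared error of prediction."""
    folds = np.array_split(rng.permutation(len(y)), k)
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        b = fit(X[train], y[train], lam)
        errs.append(np.mean((y[test] - X[test] @ b) ** 2))
    return float(np.mean(errs))

print("OLS cross-validated MSE:", round(cv_mse(X, y, lam=0.0), 3))
for lam in (1.0, 5.0, 20.0):
    print(f"ridge cross-validated MSE (lambda={lam}):", round(cv_mse(X, y, lam), 3))
```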

2.
Robust schemes in regression are adapted to mean and covariance structure analysis, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is properly weighted according to its distance, based on first and second order moments, from the structural model. A simple weighting function is adopted because of its flexibility with changing dimensions. The weight matrix is obtained from an adaptive way of using residuals. Test statistic and standard error estimators are given, based on iteratively reweighted least squares. The method reduces to a standard distribution-free methodology if all cases are equally weighted. Examples demonstrate the value of the robust procedure. The authors acknowledge the constructive comments of three referees and the Editor that led to an improved version of the paper. This work was supported by National Institute on Drug Abuse Grants DA01070 and DA00017 and by the University of North Texas Faculty Research Grant Program.
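
The case weighting here is applied to structural-model residuals; as a rough illustration of the underlying iteratively reweighted least squares idea only, the sketch below down-weights cases in an ordinary regression by a Huber-type function of their residuals. The tuning constant, data, and weighting function are assumptions for illustration, not the weighting scheme of the paper.

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=25):
    """Iteratively reweighted least squares with Huber-type case weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # start from OLS
    w = np.ones(len(y))
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust (MAD-based) scale
        u = np.abs(r) / scale
        w = np.where(u <= c, 1.0, c / u)               # weights in (0, 1]
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta, w

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([2.0, 1.0]) + rng.normal(scale=0.5, size=50)
y[:3] += 10.0                                          # a few gross outliers
beta, w = irls_huber(X, y)
print("robust estimate:", beta.round(3), " outlier weights:", w[:3].round(2))
```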

3.
“Improper linear models” (see Dawes, Am. Psychol. 34:571–582, 1979), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of “improper” linear models as “proper” statistical models with a single predictor. We derive the upper bound on the mean squared error of this estimator and demonstrate that it has less variance than ordinary least squares estimates. We examine common choices of the weighting vector used in the literature, e.g., single variable heuristics and equal weighting, and illustrate their performance in various test cases.
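
As a hedged illustration of the kind of comparison analyzed here (not the authors' derivation), the sketch below treats an equally weighted composite of standardized predictors as a single predictor and compares its out-of-sample mean squared error against full ordinary least squares on a small, made-up data set.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 5
X = rng.normal(size=(n, p))
beta = np.array([0.6, 0.5, 0.4, 0.3, 0.2])             # hypothetical true weights
y = X @ beta + rng.normal(scale=1.5, size=n)

train, test = np.arange(40), np.arange(40, n)

def standardize(A, ref):
    return (A - ref.mean(axis=0)) / ref.std(axis=0)

# "Proper" model: full OLS on all predictors
b_ols = np.linalg.lstsq(X[train], y[train], rcond=None)[0]
mse_ols = np.mean((y[test] - X[test] @ b_ols) ** 2)

# "Improper" model: equal-weight composite used as a single predictor
z_train = standardize(X[train], X[train]).sum(axis=1)
z_test = standardize(X[test], X[train]).sum(axis=1)
coef = np.polyfit(z_train, y[train], 1)                # slope and intercept on the composite
mse_eq = np.mean((y[test] - np.polyval(coef, z_test)) ** 2)

print("OLS test MSE:", round(mse_ols, 3), " equal-weight test MSE:", round(mse_eq, 3))
```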

4.
This study explores the performance of several two‐stage procedures for testing ordinary least‐squares (OLS) coefficients under heteroscedasticity. A test of the usual homoscedasticity assumption is carried out in the first stage of the procedure. Subsequently, a test of the regression coefficients is chosen and performed in the second stage. Three recently developed methods for detecting heteroscedasticity are examined. In addition, three heteroscedastic robust tests of OLS coefficients are considered. A major finding is that performing a test of heteroscedasticity prior to applying a heteroscedastic robust test can lead to poor control over Type I errors.
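
The specific two-stage procedures and heteroscedasticity tests examined in the study are not reproduced here; the snippet below is just a minimal sketch of one ingredient, an HC0-type heteroscedasticity-robust (sandwich) covariance estimate for OLS coefficients, computed on made-up heteroscedastic data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + rng.normal(scale=0.5 + np.abs(x))  # error variance depends on x

b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b
XtX_inv = np.linalg.inv(X.T @ X)

# Classical covariance: assumes a constant error variance
cov_classical = XtX_inv * (e @ e) / (n - X.shape[1])

# HC0 sandwich covariance: robust to heteroscedasticity
meat = X.T @ (X * (e ** 2)[:, None])
cov_hc0 = XtX_inv @ meat @ XtX_inv

print("slope SE, classical:", round(float(np.sqrt(cov_classical[1, 1])), 4))
print("slope SE, HC0      :", round(float(np.sqrt(cov_hc0[1, 1])), 4))
```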

5.
A program is described for fitting a regression model in which the relationship between the dependent and the independent variables is described by two regression equations, one for each of two mutually exclusive ranges of the independent variable. The point at which the change from one equation to the other occurs is often unknown, and thus must be estimated. In cognitive psychology, such models are relevant for studying the phenomenon of strategy shifts. The program uses a (weighted) least squares algorithm to estimate the regression parameters and the change point. The algorithm always finds the global minimum of the error sum of squares. The model is applied to data from a mental-rotation experiment. The program’s estimates of the point at which the strategy shift occurs are compared with estimates obtained from a nonlinear least squares minimization procedure in SPSSX.
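
The program's own algorithm (and its guarantee of finding the global minimum) is not reproduced here; the sketch below merely illustrates the idea with a brute-force grid search over candidate change points, fitting a separate least squares line on each side and keeping the split with the smallest total error sum of squares. The data are simulated.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 80)
true = np.where(x < 6, 2 + 1.5 * x, 11 - 0.5 * (x - 6))   # slope changes at x = 6
y = true + rng.normal(scale=0.8, size=x.size)

def sse_line(x_seg, y_seg):
    """Error sum of squares of a simple least squares line on one segment."""
    coef = np.polyfit(x_seg, y_seg, 1)
    return np.sum((y_seg - np.polyval(coef, x_seg)) ** 2)

best = None
for i in range(3, len(x) - 3):                            # keep a few points per segment
    total = sse_line(x[:i], y[:i]) + sse_line(x[i:], y[i:])
    if best is None or total < best[0]:
        best = (total, x[i])

print("estimated change point near x =", round(best[1], 2),
      " total SSE =", round(best[0], 2))
```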

6.
Manolov R, Arnau J, Solanas A, Bono R. Psicothema, 2010, 22(4): 1026-1032
The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and a procedure that eliminates autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approach the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates, but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
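
The four estimation methods compared in the study are not reproduced here; as a small hedged sketch of the general setup only, the code below fits an ordinary least squares model to a short simulated AB series, with predictors for time (general trend), phase (change in level), and phase-by-time (change in slope), ignoring any serial dependence.

```python
import numpy as np

rng = np.random.default_rng(9)
n_a, n_b = 10, 10                                   # baseline and intervention phase lengths
time = np.arange(n_a + n_b)
phase = np.repeat([0, 1], [n_a, n_b])               # 0 = baseline, 1 = intervention
slope_change = phase * (time - n_a)                 # time recentred at the phase change

# Simulated series: general trend of 0.2 per point plus a level change of 3
y = 5 + 0.2 * time + 3 * phase + rng.normal(scale=1.0, size=n_a + n_b)

X = np.column_stack([np.ones_like(time), time, phase, slope_change])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print("intercept, trend, level change, slope change:", b.round(2))
```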

7.
Bruce Bloxom. Psychometrika, 1978, 43(3): 397-408
A gradient method is used to obtain least squares estimates of parameters of the m-dimensional Euclidean model simultaneously in N spaces, given the observation of all pairwise distances of n stimuli for each space. The procedure can estimate an additive constant as well as stimulus projections and the metric of the reference axes of the configuration in each space. Each parameter in the model can be fixed to equal some a priori value, constrained to be equal to any other parameter, or free to take on any value in the parameter space. Two applications of the procedure are described.

8.
Procedures for assessing the invariance of factors found in data sets using different subjects and the same variables often use the least squares criterion, which appears to be too restrictive for comparing factors. Tucker's coefficient of congruence, on the other hand, is more closely related to the human interpretation of factorial invariance than the least squares criterion. A method that simultaneously maximizes the sum of coefficients of congruence between two matrices of factor loadings, using orthogonal rotation of one matrix, is presented. As shown in examples, the sum of coefficients of congruence obtained using the presented rotation procedure is slightly higher than the sum of coefficients of congruence using Orthogonal Procrustes Rotation based on the least squares criterion. The author is obliged to Lewis R. Goldberg for critically reviewing the first draft of this paper.
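
As a small aid to reading this abstract, here is how Tucker's coefficient of congruence between two columns of factor loadings is commonly computed; the rotation procedure that maximizes the summed congruence is not reproduced, and the loading matrices below are made up.

```python
import numpy as np

def congruence(a, b):
    """Tucker's coefficient of congruence between two loading vectors."""
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Hypothetical loadings of five variables on two factors, from two samples
A = np.array([[0.80, 0.10], [0.70, 0.20], [0.60, 0.00], [0.10, 0.90], [0.00, 0.80]])
B = np.array([[0.75, 0.15], [0.72, 0.10], [0.55, 0.05], [0.20, 0.85], [0.05, 0.82]])

coefs = [congruence(A[:, j], B[:, j]) for j in range(A.shape[1])]
for j, c in enumerate(coefs, start=1):
    print(f"factor {j}: congruence = {c:.3f}")
print("sum of congruence coefficients:", round(sum(coefs), 3))
```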

9.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites as well as a partial least squares approach to SEM facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions even when the population distribution is multivariate normal. Results also show that the LS regression coefficients become more efficient when considering the sampling errors in the weights of composites than those that are conditional on weights.

10.
In the vast majority of psychological research utilizing multiple regression analysis, asymptotic probability values are reported. This paper demonstrates that asymptotic estimates of standard errors provided by multiple regression are not always accurate. A resampling permutation procedure is used to estimate the standard errors. In some cases the results differ substantially from the traditional least squares regression estimates.
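
The paper's permutation scheme is not detailed in this abstract, so the sketch below shows a related resampling alternative to asymptotic standard errors, a case-resampling bootstrap of the slope, rather than the authors' procedure; the skewed, heavy-tailed data are simulated to make the contrast visible.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
x = rng.exponential(size=n)                            # skewed predictor
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.8 * x + rng.standard_t(df=3, size=n)       # heavy-tailed errors

b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b
cov = np.linalg.inv(X.T @ X) * (e @ e) / (n - 2)       # asymptotic OLS covariance
se_asymptotic = float(np.sqrt(cov[1, 1]))

boot = []
for _ in range(2000):                                  # resample cases with replacement
    idx = rng.integers(0, n, size=n)
    boot.append(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0][1])

print("asymptotic slope SE:", round(se_asymptotic, 3),
      " bootstrap slope SE:", round(float(np.std(boot, ddof=1)), 3))
```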

11.
We report five experiments in which the role of background beliefs in social judgments of posterior probability was investigated. From a Bayesian perspective, people should combine prior probabilities (or base rates) and diagnostic evidence with equal weighting, although previous research shows that base rates are often underweighted. These experiments were designed so that either piece of information was supplied either by personal beliefs or by presented statistics, and regression analyses were performed on individual participants to assess the relative influence of information. We found that both prior probabilities and diagnostic information significantly influenced judgments, whether supplied by beliefs or by statistical information, but that belief-based information tended to dominate the judgments made.
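
To make the normative benchmark in this abstract concrete, here is a minimal worked sketch of Bayes' rule combining a prior probability (base rate) with diagnostic evidence; all numbers are purely illustrative.

```python
# Bayes' rule with illustrative numbers: posterior = P(H | evidence)
prior = 0.10          # base rate of the hypothesis
hit_rate = 0.80       # P(evidence | hypothesis true)
false_alarm = 0.20    # P(evidence | hypothesis false)

posterior = (hit_rate * prior) / (hit_rate * prior + false_alarm * (1 - prior))
print("normative posterior:", round(posterior, 3))             # 0.308

# Under-weighting the base rate (treating it as 0.5) inflates the judgment
neglect = (hit_rate * 0.5) / (hit_rate * 0.5 + false_alarm * 0.5)
print("posterior with base-rate neglect:", round(neglect, 3))  # 0.8
```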

12.
In this article, the calculation of effect size measures in single-case research and the use of hierarchical linear models for combining these measures are discussed. Special attention is given to meta-analyses that take into account a possible linear trend in the data. We show that effect size measures that have been proposed for this situation appear to be systematically affected by the duration of the experiment and fail to distinguish between effects on level and slope. To avoid these flaws, we propose to perform a multivariate meta-analysis on the standardized ordinary least squares regression coefficients from the study-specific regression equations describing the response variable.

13.
The generalized graded unfolding model (GGUM) is capable of analyzing polytomously scored, unfolding data such as agree-disagree responses to attitude statements. In the present study, we proposed a GGUM with a structural equation for subject parameters, which enabled us to evaluate the relation between subject parameters and covariates and/or latent variables simultaneously, in order to avoid the influence of attenuation. Additionally, a new parameter estimation algorithm is implemented via the Markov chain Monte Carlo (MCMC) method, based on Bayesian statistics. In the simulation, we compared the accuracy of estimates of regression coefficients between the proposed model and a conventional method using a GGUM (where regression coefficients are estimated using estimates of θ). As a result, the proposed model performed much better than the conventional method in terms of bias and root mean squared errors of estimates of regression coefficients. The study concluded by verifying the efficacy of the proposed model, using an actual data example of attitude measurement.

14.
For small and balanced analysis of variance problems, standard computer programs are convenient and efficient. For large problems, regression programs are at least competitive with analysis of variance programs; and, when a problem is unbalanced, they usually provide the only reasonable solution. This paper discusses procedures for using regression programs for the computing of analyses of variance. A procedure for coding matrices is described for experimental designs having nested and crossed factors. Several illustrations are given, and the limitation of the procedure with large repeated measures designs is discussed. A second algorithm is offered for obtaining the sums of squares for nested factors and their interactions in such designs.
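
The paper's coding procedure for nested and crossed designs is not reproduced; as a hedged sketch of the general idea only, the code below effect-codes a single three-level factor and recovers its sum of squares by comparing the full regression model with an intercept-only model, using made-up data.

```python
import numpy as np

rng = np.random.default_rng(8)
groups = np.repeat([0, 1, 2], 10)                      # balanced one-way design, n = 30
y = np.array([1.0, 2.0, 3.5])[groups] + rng.normal(scale=1.0, size=30)

# Effect coding for a three-level factor: two columns, last level coded -1, -1
C = np.zeros((30, 2))
C[groups == 0, 0] = 1
C[groups == 1, 1] = 1
C[groups == 2, :] = -1
X_full = np.column_stack([np.ones(30), C])

def sse(X, y):
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((y - X @ b) ** 2)

ss_factor = sse(np.ones((30, 1)), y) - sse(X_full, y)  # reduction in SSE due to the factor
ms_error = sse(X_full, y) / (30 - 3)
F = (ss_factor / 2) / ms_error
print("SS(factor) =", round(float(ss_factor), 3), " F(2, 27) =", round(float(F), 3))
```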

15.
In most research, linear regression analyses are performed without taking into account published results (i.e., reported summary statistics) of similar previous studies. Although the prior density in Bayesian linear regression could accommodate such prior knowledge, formal models for doing so are absent from the literature. The goal of this article is therefore to develop a Bayesian model in which a linear regression analysis on current data is augmented with the reported regression coefficients (and standard errors) of previous studies. Two versions of this model are presented. The first version incorporates previous studies through the prior density and is applicable when the current and all previous studies are exchangeable. The second version models all studies in a hierarchical structure and is applicable when studies are not exchangeable. Both versions of the model are assessed using simulation studies. Performance for each in estimating the regression coefficients is consistently superior to using current data alone and is close to that of an equivalent model that uses the data from previous studies rather than reported regression coefficients. Overall the results show that augmenting data with results from previous studies is viable and yields significant improvements in the parameter estimation.
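
The article's two models are not reproduced here; as a hedged sketch of the simplest version of the idea, the code below combines a current-data OLS estimate of a single slope with a normal prior built from a previously reported coefficient and its standard error, via precision weighting. All numbers and data are hypothetical.

```python
import numpy as np

# Reported result from a previous study (hypothetical)
prior_mean, prior_se = 0.45, 0.10

# Current data (hypothetical)
rng = np.random.default_rng(6)
n = 50
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b
se = float(np.sqrt(np.linalg.inv(X.T @ X)[1, 1] * (e @ e) / (n - 2)))

# Normal prior times normal likelihood: precision-weighted posterior for the slope
w_prior, w_data = 1 / prior_se ** 2, 1 / se ** 2
post_mean = (w_prior * prior_mean + w_data * b[1]) / (w_prior + w_data)
post_se = float(np.sqrt(1 / (w_prior + w_data)))

print("current-data slope:", round(float(b[1]), 3), "+/-", round(se, 3))
print("posterior slope   :", round(float(post_mean), 3), "+/-", round(post_se, 3))
```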

16.
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method avoids the principal limitation of partial least squares (i.e., the lack of a global optimization procedure) while fully retaining all the advantages of partial least squares (e.g., less restricted distributional assumptions and no improper solutions). The method is also versatile enough to capture complex relationships among variables, including higher-order components and multi-group comparisons. A straightforward estimation algorithm is developed to minimize the criterion. The work reported in this paper was supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the second author. We wish to thank Richard Bagozzi for permitting us to use his organizational identification data and Wynne Chin for providing PLS-Graph 3.0.

17.
Methods of incorporating a ridge type of regularization into partial redundancy analysis (PRA), constrained redundancy analysis (CRA), and partial and constrained redundancy analysis (PCRA) were discussed. The usefulness of ridge estimation in reducing mean square error (MSE) has been recognized in multiple regression analysis for some time, especially when predictor variables are nearly collinear and the ordinary least squares estimator is poorly determined. The ridge estimation method was extended to PRA, CRA, and PCRA, where the reduced rank ridge estimates of regression coefficients were obtained by minimizing the ridge least squares criterion. It was shown that in all cases they could be obtained in closed form for a fixed value of the ridge parameter. An optimal value of the ridge parameter is found by G-fold cross validation. Illustrative examples were given to demonstrate the usefulness of the method in practical data analysis situations. We thank Jim Ramsay for his insightful comments on an earlier draft of this paper. The work reported in this paper is supported by Grant 10630 from the Natural Sciences and Engineering Research Council of Canada to the first author.

18.
A Monte Carlo study was used to compare four approaches to growth curve analysis of subjects assessed repeatedly with the same set of dichotomous items: a two-step procedure first estimating latent trait measures using MULTILOG and then using a hierarchical linear model to examine the changing trajectories with the estimated abilities as the outcome variable; a structural equation model using modified weighted least squares (WLSMV) estimation; and two approaches in the framework of multilevel item response models, including a hierarchical generalized linear model using Laplace estimation, and Bayesian analysis using Markov chain Monte Carlo (MCMC). These four methods have similar power in detecting the average linear slope across time. MCMC and Laplace estimates perform relatively better on the bias of the average linear slope and corresponding standard error, as well as the item location parameters. For the variance of the random intercept, and the covariance between the random intercept and slope, all estimates are biased in most conditions. For the random slope variance, only Laplace estimates are unbiased when there are eight time points.

19.
An alternating least squares method for iteratively fitting the longitudinal reduced-rank regression model is proposed. The method uses ordinary least squares and majorization substeps to estimate the unknown parameters in the system and measurement equations of the model. In an example with cross-sectional data, it is shown how the results conform closely to results from eigenanalysis. Optimal scaling of nominal and ordinal variables is added in a third substep, and illustrated with two examples involving cross-sectional and longitudinal data. Financial support by the Institute for Traffic Safety Research (SWOV) in Leidschendam, The Netherlands is gratefully acknowledged.

20.
Moderated multiple regression (MMR) has been widely used to investigate the interaction or moderating effects of a categorical moderator across a variety of subdisciplines in the behavioral and social sciences. In view of the frequent violation of the homogeneity of error variance assumption in MMR applications, the weighted least squares (WLS) approach has been proposed as one of the alternatives to the ordinary least squares method for the detection of the interaction effect between a dichotomous moderator and a continuous predictor. Although the existing result is informative in assuring the statistical accuracy and computational ease of the WLS-based method, no explicit algebraic formulation or underlying distributional details are available. This article aims to delineate the fundamental properties of the WLS test in connection with the well-known Welch procedure for regression slope homogeneity under error variance heterogeneity. Through systematic derivation and analytic assessment, it is shown that the notion of WLS is implicitly embedded in the Welch approach. More importantly, an extensive simulation study is conducted to demonstrate conditions in which the Welch test substantially outperforms the WLS method and the two may yield different conclusions. Welch's solution to the Behrens-Fisher problem is so entrenched that the use of its direct extension within the linear regression framework can arguably be recommended. In order to facilitate the application of Welch's procedure, the SAS and R computing algorithms are presented. The study contributes to the understanding of methodological variants for detecting the effect of a dichotomous moderator in the context of moderated multiple regression. Supplemental materials for this article may be downloaded from brm.psychonomic-journals.org/content/supplemental.
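
The article's own SAS and R code is not reproduced here; the sketch below is a hedged implementation of a Welch-type test for equality of regression slopes across the two moderator groups, using separate error variances and Satterthwaite degrees of freedom, on simulated data with unequal error variances.

```python
import numpy as np
from scipy import stats

def slope_and_var(x, y):
    """OLS slope and its estimated sampling variance for one group."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    v = np.linalg.inv(X.T @ X)[1, 1] * (e @ e) / (len(y) - 2)
    return b[1], v

rng = np.random.default_rng(7)
# Two moderator groups with different slopes and unequal error variances (hypothetical)
x1, x2 = rng.normal(size=30), rng.normal(size=70)
y1 = 0.3 * x1 + rng.normal(scale=2.0, size=30)
y2 = 0.8 * x2 + rng.normal(scale=0.5, size=70)

b1, v1 = slope_and_var(x1, y1)
b2, v2 = slope_and_var(x2, y2)

# Welch-type statistic with Satterthwaite degrees of freedom
t = (b1 - b2) / np.sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(x1) - 2) + v2 ** 2 / (len(x2) - 2))
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.3f}, df = {df:.1f}, p = {p:.4f}")
```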
