Similar documents
20 similar documents retrieved.
1.
Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 − α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
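The published programs are SPSS and SAS syntax; purely as an illustration of one of these summary-data tests, here is a minimal Python sketch (function name and example values are hypothetical) of the Fisher r-to-z test comparing two independent correlations, with the matching 100 × (1 − α)% confidence interval for the difference on the z scale:

import math
from scipy.stats import norm

def compare_independent_correlations(r1, n1, r2, n2, alpha=0.05):
    """Two-sided test of H0: rho1 == rho2 for independent samples,
    using Fisher's r-to-z transformation (a standard summary-data test)."""
    z1 = math.atanh(r1)                          # Fisher z for sample 1
    z2 = math.atanh(r2)                          # Fisher z for sample 2
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of z1 - z2
    z_stat = (z1 - z2) / se
    p = 2 * norm.sf(abs(z_stat))                 # two-sided p value
    crit = norm.ppf(1 - alpha / 2)               # 100*(1 - alpha)% CI, z scale
    ci = ((z1 - z2) - crit * se, (z1 - z2) + crit * se)
    return z_stat, p, ci

print(compare_independent_correlations(0.50, 103, 0.30, 103))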

2.
3.
SPSS and SAS programs for generalizability theory analyses
The identification and reduction of measurement errors is a major challenge in psychological testing. Most investigators rely solely on classical test theory for assessing reliability, whereas most experts have long recommended using generalizability theory instead. One reason for the common neglect of generalizability theory is the absence of analytic facilities for this purpose in popular statistical software packages. This article provides a brief introduction to generalizability theory, describes easy-to-use SPSS, SAS, and MATLAB programs for conducting the recommended analyses, and provides an illustrative example, using data (N = 329) for the Rosenberg Self-Esteem Scale. Program output includes variance components, relative and absolute errors and generalizability coefficients, coefficients for D studies, and graphs of D study results.
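For readers who want to see the core computation, here is a minimal Python sketch (not the authors' SPSS/SAS/MATLAB code; names and data are hypothetical) of a one-facet persons × items G study: variance components are estimated from ANOVA mean squares and combined into the relative generalizability coefficient.

import numpy as np

def one_facet_g_study(scores):
    """Persons-by-items G study: estimate variance components for
    persons, items, and the residual (p x i interaction plus error),
    then form the relative generalizability coefficient."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)
    # ANOVA mean squares for the crossed p x i design
    ms_p = n_i * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_i = n_p * np.sum((item_means - grand) ** 2) / (n_i - 1)
    resid = scores - person_means[:, None] - item_means[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    # Solve the expected-mean-square equations for the components
    var_p = (ms_p - ms_res) / n_i
    var_i = (ms_i - ms_res) / n_p
    var_res = ms_res
    # Relative G coefficient for a D study with n_i items
    g_rel = var_p / (var_p + var_res / n_i)
    return var_p, var_i, var_res, g_rel

rng = np.random.default_rng(1)
data = rng.normal(3, 1, size=(50, 10))  # hypothetical 50 persons x 10 items
print(one_facet_g_study(data))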

4.
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule’s Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
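As a rough illustration of the lag-1 core of such an analysis (the published programs additionally provide the stationarity and homogeneity tests, Yule's Q, kappas, and randomization tests), here is a minimal Python sketch; the function name and event stream are hypothetical:

import numpy as np

def lag1_sequential(codes, categories):
    """Lag-1 sequential analysis: observed transition counts,
    row-conditional transitional probabilities, and adjusted
    residuals (z values) under the independence model."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    freq = np.zeros((k, k))
    for a, b in zip(codes[:-1], codes[1:]):    # count lag-1 transitions
        freq[index[a], index[b]] += 1
    n = freq.sum()
    row = freq.sum(axis=1, keepdims=True)
    col = freq.sum(axis=0, keepdims=True)
    expected = row @ col / n                   # expected counts under independence
    trans_prob = freq / row                    # transitional probabilities
    # Adjusted residuals: approximately standard normal under H0
    adj = (freq - expected) / np.sqrt(
        expected * (1 - row / n) * (1 - col / n))
    return freq, trans_prob, adj

codes = list("ABABBACABBABAC")                 # hypothetical coded event stream
print(lag1_sequential(codes, ["A", "B", "C"]))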

5.
When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.
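The abstract does not name the specific indices the programs implement; as one simple, widely used relative-importance measure, here is a minimal Python sketch of Pratt's index (the product of the standardized coefficient and the zero-order correlation, which sums to R² across predictors). All names and data are hypothetical:

import numpy as np

def pratt_importance(X, y):
    """Pratt's importance index: beta_j * r_j computed from
    standardized data; the indices sum to R^2."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize predictors
    ys = (y - y.mean()) / y.std(ddof=1)                # standardize outcome
    r = Xs.T @ ys / (len(y) - 1)                       # zero-order correlations
    R = np.corrcoef(Xs, rowvar=False)                  # predictor intercorrelations
    beta = np.linalg.solve(R, r)                       # standardized coefficients
    return beta * r                                    # Pratt indices

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 1] += 0.5 * X[:, 0]                               # correlated predictors
y = X @ np.array([0.4, 0.3, 0.1]) + rng.normal(size=200)
d = pratt_importance(X, y)
print(d, d.sum())   # individual importances and their sum (= R^2)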

6.
Equivalence tests are an alternative to traditional difference‐based tests for demonstrating a lack of association between two variables. While there are several recent studies investigating equivalence tests for comparing means, little research has been conducted on equivalence methods for evaluating the equivalence or similarity of two correlation coefficients or two regression coefficients. The current project proposes novel tests for evaluating the equivalence of two regression or correlation coefficients, derived from the two one‐sided tests (TOST) method (Schuirmann, 1987, J. Pharmacokinet. Biopharm., 15, 657) and an equivalence test by Anderson and Hauck (1983, Commun. Stat., 12, 2663). A simulation study was used to evaluate the performance of these tests and compare them with the common, yet inappropriate, method of assessing equivalence using non‐rejection of the null hypothesis in difference‐based tests. Results demonstrate that equivalence tests have more accurate probabilities of declaring equivalence than difference‐based tests. However, equivalence tests require large sample sizes to ensure adequate power. We recommend the Anderson–Hauck equivalence test over the TOST method for comparing correlation or regression coefficients.
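A minimal Python sketch of the TOST logic for two independent correlations on the Fisher z scale (the article's simulations compare this approach against the Anderson–Hauck test, which it ultimately recommends); the function name, equivalence bound, and inputs are hypothetical:

import math
from scipy.stats import norm

def tost_two_correlations(r1, n1, r2, n2, delta, alpha=0.05):
    """TOST equivalence test for two independent correlations on the
    Fisher z scale; `delta` is the equivalence bound in z units.
    Equivalence is declared if both one-sided tests reject."""
    diff = math.atanh(r1) - math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z_lower = (diff + delta) / se        # tests H0: diff <= -delta
    z_upper = (diff - delta) / se        # tests H0: diff >= +delta
    p_lower = norm.sf(z_lower)
    p_upper = norm.cdf(z_upper)
    p_tost = max(p_lower, p_upper)       # TOST p value
    return p_tost, p_tost < alpha

print(tost_two_correlations(0.42, 150, 0.45, 150, delta=0.15))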

7.
Levels-of-analysis issues arise whenever individual-level data are collected from more than one person from the same dyad, family, classroom, work group, or other interaction unit. Interdependence in data from individuals in the same interaction units also violates the independence-of-observations assumption that underlies commonly used statistical tests. This article describes the data analysis challenges that are presented by these issues and presents SPSS and SAS programs for conducting appropriate analyses. The programs conduct the within-and-between analyses described by Dansereau, Alutto, and Yammarino (1984) and the dyad-level analyses described by Gonzalez and Griffin (1999) and Griffin and Gonzalez (1995). Contrasts with general multilevel modeling procedures are then discussed.
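As a bare-bones illustration of the within-and-between idea (not Dansereau et al.'s full WABA procedure, which adds E ratios and associated significance tests), here is a Python sketch that splits each score into its group mean and within-group deviation and correlates at each level; names and data are hypothetical:

import numpy as np
import pandas as pd

def within_between_correlations(df, group, x, y):
    """Decompose x and y into between-group (group-mean) and
    within-group (deviation) parts, then correlate at each level."""
    gx = df.groupby(group)[x].transform("mean")   # between part of x
    gy = df.groupby(group)[y].transform("mean")   # between part of y
    wx, wy = df[x] - gx, df[y] - gy               # within parts
    r_between = np.corrcoef(gx, gy)[0, 1]
    r_within = np.corrcoef(wx, wy)[0, 1]
    r_total = np.corrcoef(df[x], df[y])[0, 1]
    return r_between, r_within, r_total

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "team": np.repeat(np.arange(20), 5),          # hypothetical 20 teams of 5
    "x": rng.normal(size=100),
})
df["y"] = 0.5 * df["x"] + rng.normal(size=100)
print(within_between_correlations(df, "team", "x", "y"))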

9.
Factor analysis programs in SAS, BMDP, and SPSS are discussed and compared in terms of documentation, methods and options available, internal logic, computational accuracy, and results provided. Some problems with respect to logic and output are described. Based on these comparisons, recommendations are offered which include a clear overall preference for SAS, and advice against general use of SPSS for factor analysis.

10.
This article describes the functions of a SAS macro and an SPSS syntax file that produce common statistics for conventional item analysis, including Cronbach’s alpha, the item difficulty index (p-value or item mean), and item discrimination indices (D-index, point biserial and biserial correlations for dichotomous items, and item-total correlation for polytomous items). These programs represent an improvement over the existing SAS and SPSS item analysis routines in terms of completeness and user-friendliness. To promote routine evaluation of item quality in instrument development of any scale, the programs are available at no charge for interested users. The program codes, along with a brief user’s manual that contains instructions and examples, are downloadable from suen.ed.psu.edu/~pwlei/plei.htm.
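A minimal Python sketch of the core statistics for dichotomous items (Cronbach's alpha, item difficulty, and corrected item-total correlations); the published macro and syntax file cover more indices, and all names and data here are hypothetical:

import numpy as np

def item_analysis(scores):
    """Conventional item statistics: Cronbach's alpha, item difficulty
    (item means), and corrected item-total correlations."""
    n_items = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1)
    total = scores.sum(axis=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_var.sum() / total.var(ddof=1))
    difficulty = scores.mean(axis=0)       # p-values for 0/1 items
    # Corrected item-total r: correlate each item with the total minus itself
    citc = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(n_items)
    ])
    return alpha, difficulty, citc

rng = np.random.default_rng(3)
ability = rng.normal(size=300)
items = (ability[:, None] + rng.normal(size=(300, 8)) > 0).astype(float)  # 0/1 items
print(item_analysis(items))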

11.
Programs suitable for pocket calculators using reverse Polish notation are described. Program 1 computes regression coefficients, the correlation coefficient, and the standard error of estimate for paired data. Program 2 performs a t test to compare the slopes of two regression lines. Program 3 computes F ratios to test the departure of a regression slope from zero and to test the linearity of the regression. Programs 4 and 5 test the significance of differences between independent and correlated correlation coefficients, respectively.
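Program 2's slope comparison is the standard pooled-variance t test; here is a minimal Python sketch of the same computation (hypothetical names and data) that can serve as a check on calculator output:

import numpy as np
from scipy import stats

def compare_slopes(x1, y1, x2, y2):
    """t test of H0: beta1 == beta2 for two independent simple
    regressions, using a pooled residual variance."""
    def fit(x, y):
        b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # slope
        resid = y - (y.mean() + b * (x - x.mean()))
        return b, np.sum(resid ** 2), np.sum((x - x.mean()) ** 2), len(x)
    b1, ss1, sx1, n1 = fit(x1, y1)
    b2, ss2, sx2, n2 = fit(x2, y2)
    df = n1 + n2 - 4
    s2 = (ss1 + ss2) / df                    # pooled residual variance
    se = np.sqrt(s2 * (1 / sx1 + 1 / sx2))   # SE of the slope difference
    t = (b1 - b2) / se
    return t, 2 * stats.t.sf(abs(t), df)     # t statistic, two-sided p

rng = np.random.default_rng(4)
x1, x2 = rng.normal(size=30), rng.normal(size=30)
print(compare_slopes(x1, 1.0 * x1 + rng.normal(size=30),
                     x2, 1.4 * x2 + rng.normal(size=30)))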

12.
13.
14.
15.
To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
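The transformation itself is a short matrix computation; here is a minimal Python sketch for the case of a single second-order factor (hypothetical loading matrices; the article's SPSS and SAS syntax performs the equivalent steps):

import numpy as np

def schmid_leiman(L1, L2):
    """Schmid-Leiman orthogonalization for one second-order factor.
    L1: variables x first-order-factors pattern matrix.
    L2: first-order factor loadings on the second-order factor.
    Returns loadings on the general factor and on the residualized
    group factors, side by side."""
    L2 = np.asarray(L2).reshape(-1, 1)
    general = L1 @ L2                      # direct loadings on the general factor
    u = np.sqrt(1 - (L2 ** 2).ravel())     # sqrt of first-order uniquenesses
    group = L1 * u                         # residualized group-factor loadings
    return np.hstack([general, group])

L1 = np.array([[.7, .0, .0],               # hypothetical first-order pattern
               [.6, .0, .0],
               [.0, .8, .0],
               [.0, .7, .0],
               [.0, .0, .6],
               [.0, .0, .5]])
L2 = [.8, .7, .6]                          # hypothetical second-order loadings
print(np.round(schmid_leiman(L1, L2), 3))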

16.
17.
18.
Principal component regression (PCR) is a popular technique in data analysis and machine learning. However, the technique has two limitations. First, the principal components (PCs) with the largest variances may not be relevant to the outcome variables. Second, the lack of standard error estimates for the unstandardized regression coefficients makes it hard to interpret the results. To address these two limitations, we propose a model-based approach that includes two mean and covariance structure models defined for multivariate PCR. By estimating the defined models, we can obtain inferential information that will allow us to test the explanatory power of individual PCs and compute the standard error estimates for the unstandardized regression coefficients. A real example is used to illustrate our approach, and simulation studies under normality and nonnormality conditions are presented to validate the standard error estimates for the unstandardized regression coefficients. Finally, future research topics are discussed.
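A minimal numpy sketch of ordinary PCR (hypothetical names and data), with components chosen by variance and coefficients mapped back to the original predictors; it illustrates only the basic technique, not the model-based standard errors that are the article's contribution:

import numpy as np

def pcr(X, y, k):
    """Principal component regression: regress y on the first k PCs
    of centered X, then map coefficients back to the X scale."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCs via SVD
    scores = Xc @ Vt[:k].T                             # component scores
    gamma = np.linalg.lstsq(scores, yc, rcond=None)[0] # regression on PCs
    beta = Vt[:k].T @ gamma                            # back to original predictors
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(size=100)
print(pcr(X, y, k=2))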

19.
20.
An approach to sample size planning for multiple regression is presented that emphasizes accuracy in parameter estimation (AIPE). The AIPE approach yields precise estimates of population parameters by providing necessary sample sizes in order for the likely widths of confidence intervals to be sufficiently narrow. One AIPE method yields a sample size such that the expected width of the confidence interval around the standardized population regression coefficient is equal to the width specified. An enhanced formulation ensures, with some stipulated probability, that the width of the confidence interval will be no larger than the width specified. Issues involving standardized regression coefficients and random predictors are discussed, as are the philosophical differences between AIPE and the power analytic approaches to sample size planning.
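As a rough illustration of the AIPE logic only, here is a Python sketch that searches for the smallest n whose expected confidence interval width meets a target, using a common large-sample approximation to the standard error of a standardized regression coefficient; the article's procedure is more exact, and all names and inputs here are hypothetical:

from scipy.stats import t

def aipe_n_for_beta(r2_full, r2_xj, p, width, alpha=0.05):
    """Smallest n whose expected CI for a standardized regression
    coefficient is no wider than `width` (large-sample approximation).
    r2_full: model R^2; r2_xj: R^2 of predictor j on the other predictors;
    p: number of predictors."""
    n = p + 2
    while True:
        df = n - p - 1
        se = ((1 - r2_full) / ((1 - r2_xj) * df)) ** 0.5  # approx SE of beta_j
        if 2 * t.ppf(1 - alpha / 2, df) * se <= width:
            return n
        n += 1

# e.g. 5 predictors, R^2 = .25, collinearity R^2_j = .20, and a
# desired full CI width of 0.20 for the standardized coefficient
print(aipe_n_for_beta(0.25, 0.20, 5, 0.20))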
