Found 20 similar documents (search time: 0 ms)
1.
SPSS and SAS programs for generalizability theory analyses
The identification and reduction of measurement errors is a major challenge in psychological testing. Most investigators rely solely on classical test theory for assessing reliability, whereas most experts have long recommended using generalizability theory instead. One reason for the common neglect of generalizability theory is the absence of analytic facilities for this purpose in popular statistical software packages. This article provides a brief introduction to generalizability theory, describes easy-to-use SPSS, SAS, and MATLAB programs for conducting the recommended analyses, and provides an illustrative example, using data (N = 329) for the Rosenberg Self-Esteem Scale. Program output includes variance components, relative and absolute errors, generalizability coefficients, coefficients for D studies, and graphs of D study results.
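
As a rough illustration of the quantities such programs report, the following Python sketch estimates variance components for a one-facet (persons x items) G study from ANOVA mean squares and combines them into relative- and absolute-error variances and G coefficients. The data are synthetic, and the code is our sketch, not the article's SPSS/SAS/MATLAB implementation.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(3, 1, size=(50, 10))   # synthetic persons x items scores

n_p, n_i = X.shape
grand = X.mean()
ss_p = n_i * np.sum((X.mean(axis=1) - grand) ** 2)   # persons sum of squares
ss_i = n_p * np.sum((X.mean(axis=0) - grand) ** 2)   # items sum of squares
ss_res = np.sum((X - grand) ** 2) - ss_p - ss_i      # residual (p x i) SS
ms_p = ss_p / (n_p - 1)
ms_i = ss_i / (n_i - 1)
ms_pi = ss_res / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_pi) / n_i    # person variance component
var_i = (ms_i - ms_pi) / n_p    # item variance component
var_pi = ms_pi                  # interaction/error component

rel_err = var_pi / n_i                    # relative error variance
abs_err = var_i / n_i + var_pi / n_i      # absolute error variance
g_rel = var_p / (var_p + rel_err)         # generalizability coefficient
phi = var_p / (var_p + abs_err)           # dependability coefficient
print(f"G (relative) = {g_rel:.3f}, Phi (absolute) = {phi:.3f}")
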
2.
Brian P O'Connor 《Behavior research methods, instruments & computers》2004,36(1):17-28
Levels-of-analysis issues arise whenever individual-level data are collected from more than one person from the same dyad, family, classroom, work group, or other interaction unit. Interdependence in data from individuals in the same interaction units also violates the independence-of-observations assumption that underlies commonly used statistical tests. This article describes the data analysis challenges presented by these issues and presents SPSS and SAS programs for conducting appropriate analyses. The programs conduct the within-and-between analyses described by Dansereau, Alutto, and Yammarino (1984) and the dyad-level analyses described by Gonzalez and Griffin (1999) and Griffin and Gonzalez (1995). Contrasts with general multilevel modeling procedures are then discussed.
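
For readers unfamiliar with the approach, here is a minimal Python sketch of the core within-and-between decomposition (illustrative only, not the article's SPSS/SAS programs): each variable is split into a group-mean (between) part and a deviation-from-group-mean (within) part, and correlations are computed separately at each level, using synthetic data.

import numpy as np

rng = np.random.default_rng(1)
groups = np.repeat(np.arange(20), 4)      # 20 hypothetical groups of 4 members
x = rng.normal(size=groups.size)
y = 0.5 * x + rng.normal(size=groups.size)

def split(v, g):
    # assumes group labels are 0..K-1; returns between and within parts
    means = np.array([v[g == k].mean() for k in np.unique(g)])
    between = means[g]          # each member's group mean
    within = v - between        # deviation from own group mean
    return between, within

xb, xw = split(x, groups)
yb, yw = split(y, groups)
print(f"between-group r = {np.corrcoef(xb, yb)[0, 1]:.3f}")
print(f"within-group r  = {np.corrcoef(xw, yw)[0, 1]:.3f}")
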
4.
Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 − α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
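
As one concrete example of such a summary-data test, the following Python sketch (ours, not the authors' SPSS/SAS code) compares two independent Pearson correlations via Fisher's r-to-z transformation and reports a 100 × (1 − α)% confidence interval for the difference on the z scale; the input correlations and sample sizes are hypothetical.

import numpy as np
from scipy import stats

def compare_independent_r(r1, n1, r2, n2, alpha=0.05):
    # Fisher r-to-z transform; SE of the difference of two independent z's
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    crit = stats.norm.ppf(1 - alpha / 2)
    ci = ((z1 - z2) - crit * se, (z1 - z2) + crit * se)   # CI on the z scale
    return z, p, ci

z, p, ci = compare_independent_r(r1=0.55, n1=120, r2=0.30, n2=150)
print(f"z = {z:.3f}, p = {p:.4f}, 95% CI (z units) = ({ci[0]:.3f}, {ci[1]:.3f})")
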
5.
Nonparametric and distribution-free tests of categorical data provide an evaluation of statistical significance between groups of subjects differing in their assignment to a set of categories. This paper describes an implementation in the SAS programming language of three tests to evaluate categorical data. One of these tests, the Contingency Table Test for Ordered Categories, evaluates data assessed on at least an ordinal scale where the categories are in ascending or descending rank order. The remaining two tests, Fisher's Fourfold-Table Test for Variables with Two Categories and Fisher's Contingency Table Test for Variables with More than Two Categories, evaluate data assessed on either a nominal or an ordinal scale. The program described completes analysis of a 2 × C categorical contingency table as would be obtained from the application of a multiple-level rating scale to the behavior of a treatment and a control group.
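
The fourfold-table case is easy to reproduce outside SAS: the following Python sketch runs Fisher's exact test on a hypothetical 2 × 2 treatment/control table using SciPy (illustrative only, not the article's program).

from scipy.stats import fisher_exact

table = [[12, 3],    # treatment: improved, not improved (hypothetical counts)
         [5, 10]]    # control:   improved, not improved
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, exact p = {p:.4f}")
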
7.
Robert MacCallum 《Psychometrika》1983,48(2):223-231
Factor analysis programs in SAS, BMDP, and SPSS are discussed and compared in terms of documentation, methods and options available, internal logic, computational accuracy, and results provided. Some problems with respect to logic and output are described. Based on these comparisons, recommendations are offered which include a clear overall preference for SAS, and advice against general use of SPSS for factor analysis.
9.
This article describes the functions of a SAS macro and an SPSS syntax program that produce common statistics for conventional item analysis, including Cronbach's alpha, the item difficulty index (p-value or item mean), and item discrimination indices (the D-index, point-biserial and biserial correlations for dichotomous items, and the item-total correlation for polytomous items). These programs improve on the existing SAS and SPSS item analysis routines in completeness and user-friendliness. To promote routine evaluation of item quality in instrument development, the programs are available at no charge to interested users. The program code, along with a brief user's manual containing instructions and examples, is downloadable from suen.ed.psu.edu/~pwlei/plei.htm.
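
The statistics such a macro reports are straightforward to compute directly; the following Python sketch (ours, not the authors' SAS/SPSS code) derives Cronbach's alpha, item difficulty, the upper-lower 27% D-index, and corrected item-total correlations for synthetic dichotomous (0/1) item data.

import numpy as np

rng = np.random.default_rng(2)
ability = rng.normal(size=200)
items = (ability[:, None] + rng.normal(size=(200, 8)) > 0).astype(float)

k = items.shape[1]
total = items.sum(axis=1)
alpha = (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                         / total.var(ddof=1))    # Cronbach's alpha
p_values = items.mean(axis=0)                    # item difficulty (p-values)

# D-index: proportion correct in the top 27% minus the bottom 27% on total score
order = np.argsort(total)
n27 = int(round(0.27 * len(total)))
d_index = items[order[-n27:]].mean(axis=0) - items[order[:n27]].mean(axis=0)

# corrected item-total (point-biserial) correlations
r_it = np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                 for j in range(k)])
print(f"alpha = {alpha:.3f}")
print("p:", np.round(p_values, 2))
print("D:", np.round(d_index, 2))
print("r_it:", np.round(r_it, 2))
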
11.
When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.
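
As a bare-bones illustration of the bootstrap part of this approach (our sketch, not MIMR-Raw.sps itself), the following Python code fits a standardized regression on synthetic data with substantially correlated predictors and reports percentile bootstrap confidence intervals for the standardized coefficients.

import numpy as np

rng = np.random.default_rng(3)
n = 300
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)   # substantially correlated predictors
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

def std_betas(X, y):
    # regression on z-scored variables; no intercept needed after centering
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    return np.linalg.lstsq(Xz, yz, rcond=None)[0]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot.append(std_betas(X[idx], y[idx]))
ci = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
print("standardized betas:", np.round(std_betas(X, y), 3))
print("95% bootstrap CIs:", np.round(ci.T, 3))
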
14.
Arthur Snapper, Dennis Lee, Leonard Burczyk, Jose C. Simoes-Fontes 《Behavior research methods》1974,6(2):176-180
Several programs have been written in the FOCAL, FORTRAN, and BASIC languages for reformatting and analyzing SKED data. These programs include selection and explicit labeling of sets of recording counters representing distributions and/or total counts of events, several general manipulations of distributional data, and standard statistical treatment of distributions.
15.
Dyadic research is becoming more common in the social and behavioral sciences. The most common dyadic design is one in which
two persons are measured on the same set of variables. Very often, the first analysis of dyadic data is to determine the extent
to which the responses of the two persons are correlated—that is, whether there is nonindependence in the data. We describe
two user-friendly SPSS programs for measuring nonindependence of dyadic data. Both programs can be used for distinguishable
and indistinguishable dyad members. Inter1.sps is appropriate for interval measures. Inter2.sps applies to categorical variables.
The SPSS syntax and data files related to this article may be downloaded as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
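
The usual indices of nonindependence are simple to compute directly; the following Python sketch (illustrative only, not Inter1.sps) computes the cross-partner Pearson correlation for distinguishable dyads and the double-entry ("pairwise") intraclass correlation for indistinguishable dyads, using synthetic dyad data.

import numpy as np

rng = np.random.default_rng(4)
dyad = rng.normal(size=60)                       # shared dyad-level influence
a = dyad + rng.normal(scale=0.8, size=60)        # partner 1 scores (synthetic)
b = dyad + rng.normal(scale=0.8, size=60)        # partner 2 scores

r_distinguishable = np.corrcoef(a, b)[0, 1]      # cross-partner Pearson r

# double-entry ICC: each dyad contributes both (a, b) and (b, a)
x = np.concatenate([a, b])
y = np.concatenate([b, a])
icc_pairwise = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r_distinguishable:.3f}, pairwise ICC = {icc_pairwise:.3f}")
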
16.
To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors
may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor
analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact
of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development.
The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not
part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
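
The transformation itself is a small matrix computation; the following Python sketch applies it to hypothetical loadings under the usual orthogonal higher-order model (our illustration, not the article's SPSS/SAS syntax).

import numpy as np

F = np.array([[0.7, 0.0],     # hypothetical first-order pattern:
              [0.6, 0.0],     # 6 variables loading on 2 factors
              [0.8, 0.0],
              [0.0, 0.7],
              [0.0, 0.6],
              [0.0, 0.8]])
g = np.array([[0.6],          # loadings of the 2 first-order factors
              [0.7]])         # on a single second-order factor

general = F @ g                          # direct loadings on the general factor
u = np.sqrt(1 - (g ** 2).sum(axis=1))    # residual sd of each first-order factor
residual = F * u                         # orthogonalized group-factor loadings
print(np.round(np.column_stack([general, residual]), 3))
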
17.
Dual scaling is a set of related techniques for the analysis of a wide assortment of categorical data types, including contingency tables and multiple-choice, rank order, and paired comparison data. When applied to a contingency table, dual scaling also goes by the name "correspondence analysis," and when applied to multiple-choice data in which there are more than 2 items, "optimal scaling" and "multiple correspondence analysis." Our aim in this article is to explain in nontechnical terms what dual scaling offers to an analysis of contingency table and multiple-choice data.
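
For the contingency-table case, the following Python sketch (ours, with a hypothetical table) performs the equivalent correspondence analysis via a singular value decomposition of the matrix of standardized residuals.

import numpy as np

N = np.array([[20, 10, 5],    # hypothetical 3 x 3 contingency table
              [8, 15, 12],
              [4, 9, 22]], dtype=float)
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

rows = (U / np.sqrt(r)[:, None]) * sv     # principal row coordinates
cols = (Vt.T / np.sqrt(c)[:, None]) * sv  # principal column coordinates
print("inertia shares:", np.round(sv ** 2 / (sv ** 2).sum(), 3))
print("row coordinates (dim 1):", np.round(rows[:, 0], 3))
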
19.
A system is described that automatically analyzes the time constraints in a real-time experiment control program and automatically makes corrections to that program to provide any degree of temporal accuracy desired by the experimenter within the capabilities of the hardware. A generalized procedure is presented to allow similar systems to be developed for most common languages and hardware platforms.
20.
Geert J. M. Van Boxtel 《Behavior research methods》1998,30(1):87-102
Some computational and statistical techniques that can be used in the analysis of event-related potential (ERP) data are demonstrated. The techniques are fairly elementary but go one step further than simple area measurement or peak picking, which are most often used in ERP analysis. Both amplitude and latency measurement techniques are considered. Principal components analysis (PCA) and methods for electromyographic onset determination are presented in detail, and Woody filtering is discussed briefly. The techniques are introduced in a nontechnical, tutorial review style. A single existing data set is presented, to which the techniques are applied, and practical guidelines for their use are given. The methods are demonstrated against a background of theoretical notions related to the definition of ERP components.
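
As a minimal illustration of the PCA step, the following Python sketch runs a temporal PCA on synthetic ERP-like epochs (epochs x time points) via an eigendecomposition of the covariance matrix; it is not the article's code or data.

import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 256)                        # 1-s epoch, 256 samples
wave = np.exp(-((t - 0.3) ** 2) / 0.005)          # synthetic component waveform
epochs = (rng.normal(1.0, 0.3, size=(40, 1)) * wave
          + rng.normal(scale=0.2, size=(40, 256)))  # 40 noisy single trials

X = epochs - epochs.mean(axis=0)                  # center each time point
cov = X.T @ X / (len(X) - 1)                      # time x time covariance
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
loadings = evecs[:, order[:3]]                    # first 3 temporal components
scores = X @ loadings                             # per-trial component scores
print("variance explained:", np.round(evals[order[:3]] / evals.sum(), 3))
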