Similar Articles
20 similar articles found (search time: 7 ms)
2.
A simple procedure for testing heterogeneity of variance is developed which generalizes readily to complex, multi-factor experimental designs. Monte Carlo studies indicate that the Z-variance test statistic presented here yields results equivalent to other familiar tests for heterogeneity of variance in simple one-way designs where comparisons are feasible. The primary advantage of the Z-variance test is in the analysis of factorial effects on sample variances in more complex designs. An example involving a three-way factorial design is presented.
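The abstract does not reproduce the test statistic itself, but the general recipe can be sketched: refer each cell variance to the pooled variance through a normalizing transformation, then treat the resulting z scores as cell-level data. The cube-root (Wilson-Hilferty) normalization used below is an assumption for illustration, and the function name `z_variance` is ours, not the paper's:

```python
import math
import random
import statistics

def z_variance(cells):
    """Approximate z score for each cell's variance against the pooled variance.

    Uses the Wilson-Hilferty cube-root normalization of the chi-square
    distribution, so each z is roughly standard normal when variances are
    homogeneous; the z scores can then be analyzed factorially like any
    other cell-level observation.
    """
    dfs = [len(c) - 1 for c in cells]
    variances = [statistics.variance(c) for c in cells]
    pooled = sum(d * v for d, v in zip(dfs, variances)) / sum(dfs)
    zs = []
    for d, v in zip(dfs, variances):
        ratio = (v / pooled) ** (1.0 / 3.0)   # cube-root transform
        centre = 1.0 - 2.0 / (9.0 * d)        # approximate mean under homogeneity
        spread = math.sqrt(2.0 / (9.0 * d))   # approximate standard deviation
        zs.append((ratio - centre) / spread)
    # Sum of squared z scores: an overall heterogeneity statistic,
    # roughly chi-square distributed when variances are equal.
    return zs, sum(z * z for z in zs)

rng = random.Random(1)
homog = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(4)]
heter = [[rng.gauss(0, s) for _ in range(50)] for s in (1, 1, 5, 5)]
zs_h, stat_h = z_variance(homog)
zs_a, stat_a = z_variance(heter)
```

With homogeneous cells the overall statistic stays near its chi-square reference; with strongly unequal spreads it grows sharply, which is what the factorial analysis of the z scores exploits.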

3.
Several ways of using the traditional analysis of variance to test heterogeneity of spread in factorial designs with equal or unequal n are compared using both theoretical and Monte Carlo results. Two types of spread variables, (1) the jackknife pseudovalues of s² and (2) the absolute deviations from the cell median, are shown to be robust and relatively powerful. These variables seem to be generally superior to the Z-variance and Box-Scheffé procedures. This research was sponsored by Public Health Service Training Grant MH-08258 from the National Institute of Mental Health. The author thanks Mark I. Appelbaum, Elliot M. Cramer, and Scott E. Maxwell for their helpful criticisms of this paper. An earlier version of this work was presented at the Annual Meeting of the Psychometric Society, Murray Hill, New Jersey, April 1976.
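Both spread variables are easy to state concretely: the jackknife pseudovalue for observation i is n·s² minus (n-1) times the leave-one-out variance, and the second variable is simply |x - cell median| (the Brown-Forsythe variant of Levene's approach). Either set of scores is then fed into an ordinary ANOVA on the factorial design. A minimal sketch (function names are ours):

```python
import statistics

def pseudovalues(cell):
    """Jackknife pseudovalues of the cell variance s^2: one spread score
    per observation, obtained by leaving that observation out."""
    n = len(cell)
    s2 = statistics.variance(cell)
    return [n * s2 - (n - 1) * statistics.variance(cell[:i] + cell[i + 1:])
            for i in range(n)]

def abs_dev_from_median(cell):
    """Absolute deviations from the cell median."""
    m = statistics.median(cell)
    return [abs(x - m) for x in cell]

def oneway_F(groups):
    """Ordinary one-way ANOVA F statistic, applied here to spread scores."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

A useful property of the pseudovalues: their mean reproduces the unbiased cell variance exactly, so the cell means of the spread scores estimate the quantities whose equality is being tested.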

4.
Up to the present only empirical methods have been available for determining the number of factors to be extracted from a matrix of correlations. The problem has been confused by the implicit attitude that a matrix of intercorrelations between psychological variables has a rank which is determinable. A table of residuals always contains error variance and common factor variance. The extraction of successive factors increases the proportion of error variance remaining to common factor variance remaining, and a point is reached where the extraction of more dimensions would contain so much error variance that the common factor variance would be overshadowed. The critical value for this point is determined by probability theory and does not take into account the size of the residuals. Interpretation of the criterion is discussed.

5.
Procedures are described which enable researchers to implement balanced covariance designs with one to four independent variables. Use is made of three subroutines from IBM's Scientific Subroutine Package which implement a general decomposition algorithm for balanced designs. FORTRAN instructions, illustrating the main calling program, are given.

7.
Researchers are often concerned with common method variance (CMV) in cases where it is believed to bias relationships of predictors with criteria. However, CMV may also bias relationships within sets of predictors; this is cause for concern, given the rising popularity of higher order multidimensional constructs. The authors examined the extent to which CMV inflates interrelationships among indicators of higher order constructs and the relationships of those constructs with criteria. To do so, they examined core self-evaluation, a higher order construct comprising self-esteem, generalized self-efficacy, emotional stability, and locus of control. Across 2 studies, the authors systematically applied statistical (Study 1) and procedural (Study 2) CMV remedies to core self-evaluation data collected from multiple samples. Results revealed that the nature of the higher order construct and its relationship with job satisfaction were altered when the CMV remedies were applied. Implications of these findings for higher order constructs are discussed.

8.
The editorial policies of several prominent educational and psychological journals require that researchers report some measure of effect size along with tests for statistical significance. In analysis of variance contexts, this requirement might be met by using eta squared or omega squared statistics. Current procedures for computing these measures of effect often do not consider the effect that design features of the study have on the size of these statistics. Because research-design features can have a large effect on the estimated proportion of explained variance, the use of partial eta or omega squared can be misleading. The present article provides formulas for computing generalized eta and omega squared statistics, which provide estimates of effect size that are comparable across a variety of research designs.
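For the simplest case, a one-way between-subjects design with a single manipulated factor, the classical formulas are eta² = SS_between / SS_total and omega² = (SS_between - (k-1)·MS_within) / (SS_total + MS_within); in this design generalized eta² coincides with ordinary eta², and the article's contribution is how the denominator is partitioned in more complex designs. A sketch of the one-way case:

```python
import statistics

def _oneway_ss(groups):
    """Between- and within-groups sums of squares for a one-way design."""
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_b = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_w = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in groups)
    return ss_b, ss_w, len(groups), n_total

def eta_squared(groups):
    """eta^2 = SS_between / SS_total; equals generalized eta^2 in a
    one-way between-subjects design with a manipulated factor."""
    ss_b, ss_w, _, _ = _oneway_ss(groups)
    return ss_b / (ss_b + ss_w)

def omega_squared(groups):
    """omega^2: a less biased estimate of the population proportion of
    variance explained by the factor."""
    ss_b, ss_w, k, n_total = _oneway_ss(groups)
    ms_w = ss_w / (n_total - k)
    return (ss_b - (k - 1) * ms_w) / (ss_b + ss_w + ms_w)
```

Because omega² subtracts the error expected under the null from the numerator, it is always smaller than eta² for the same data.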

9.
Neil Gourlay. Psychometrika, 1955, 20(3): 227-248
Reference is made to Neyman's study of F-test bias for the randomized blocks and Latin square designs employed in agriculture, and some account is given of later statistical developments which sprang from his work, in particular the classification of model types and the technique of variance component analysis. It is claimed that there is a need to carry out an examination of F-test bias for experimental designs in education and psychology which will utilize the method and, where appropriate, the known results of this new branch of variance analysis. In the present paper, such an investigation is carried out for designs which may be regarded as derivatives of the agricultural randomized blocks design. In a paper to follow, a similar investigation will be carried out for experimental designs of the Latin square type.

10.
The factor structures of the International Personality Item Pool (IPIP) and NEO-FFI Big Five questionnaires were examined via confirmatory factor analyses. Analyses of IPIP data for five samples and NEO data for one sample showed that a CFA model with three method bias factors (one influencing all items, one influencing negatively worded items, and one influencing positively worded items) fit the data significantly better than models without method factors or models with only one method factor. With the method factors estimated, our results indicated that the Big Five dimensions may be more nearly orthogonal than previously demonstrated. Implications of the presence of method variance in Big Five scales are discussed.

14.
Manolov R, Solanas A. Psicothema, 2008, 20(2): 297-303
Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of change in phase are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions. Following one method or another affects the results obtained after the randomization test has been applied. Therefore, the goal of the study was to examine these effects in more detail. The discrepancies in these approaches are obvious when data with zero treatment effect are considered and such approaches have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
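The randomization-test logic for an ABAB design can be sketched as follows: the observed phase-change points are one of many admissible placements, so the p-value is the proportion of all admissible placements whose test statistic is at least as extreme as the observed one. The test statistic (absolute B-minus-A mean difference) and the minimum phase length below are illustrative choices, not the simulation settings of the study:

```python
import statistics

def abab_stat(data, cuts):
    """Absolute difference between B-phase and A-phase means for an
    ABAB series split at the three change points in `cuts`."""
    i, j, k = cuts
    a = data[:i] + data[j:k]   # the two A phases
    b = data[i:j] + data[k:]   # the two B phases
    return abs(statistics.fmean(b) - statistics.fmean(a))

def randomization_test(data, observed_cuts, min_len=3):
    """Exact randomization test over all change-point triples that leave
    every phase at least `min_len` observations long."""
    n = len(data)
    cuts = [(i, j, k)
            for i in range(min_len, n)
            for j in range(i + min_len, n)
            for k in range(j + min_len, n - min_len + 1)]
    obs = abab_stat(data, observed_cuts)
    count = sum(abab_stat(data, c) >= obs for c in cuts)
    return obs, count / len(cuts)
```

With a series showing a clean treatment effect, only placements that reproduce the true phase boundaries match the observed statistic, so the p-value is small; with zero treatment effect the statistic's null distribution depends on how the data were generated, which is the issue the study examines.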

15.
The authors describe 2 efficiency (planned missing data) designs for measurement: the 3-form design and the 2-method measurement design. The 3-form design, a kind of matrix sampling, allows researchers to leverage limited resources to collect data for 33% more survey questions than can be answered by any 1 respondent. Power tables for estimating correlation effects illustrate the benefit of this design. The 2-method measurement design involves a relatively cheap, less valid measure of a construct and an expensive, more valid measure of the same construct. The cost effectiveness of this design stems from the fact that few cases have both measures, and many cases have just the cheap measure. With 3 brief simulations involving structural equation models, the authors show that compared with the same-cost complete cases design, a 2-method measurement design yields lower standard errors and a higher effective sample size for testing important study parameters. With a large cost differential between cheap and expensive measures and small effect sizes, the benefits of the design can be enormous. Strategies for using these 2 designs are suggested.
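The 3-form design can be made concrete: items are split into a common block X plus three rotating blocks A, B, and C, and each form carries X plus two of the three rotating blocks. Each respondent therefore answers three of the four blocks (hence the study covers 33% more items than any one respondent sees), while every pair of items still appears together on at least one form, so every covariance remains estimable. A sketch with invented block names:

```python
def three_form(blocks):
    """Assemble the three forms of a 3-form planned-missing design.

    `blocks` maps the labels 'X', 'A', 'B', 'C' to item lists; every form
    gets the common block X plus two of the three rotating blocks, so each
    rotating block is missing from exactly one form.
    """
    return {
        1: blocks['X'] + blocks['A'] + blocks['B'],
        2: blocks['X'] + blocks['A'] + blocks['C'],
        3: blocks['X'] + blocks['B'] + blocks['C'],
    }
```

The design choice to rotate pairs of blocks, rather than single blocks, is what guarantees pairwise coverage: A-B items co-occur on form 1, A-C on form 2, and B-C on form 3.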

16.
Operating instructions for a series of analysis of variance programs for one-, two-, and three-treatment experimental designs are described. The emphasis is on versatility, speed, accuracy, and sufficiency of output. The on-line aspect of FOCAL allows extensive transformations of raw data. Procedures and terminology conform to Kirk (1968) to provide information for pooling error terms for various models, mean comparisons, and trend analysis. Patches are given for data tape input.

17.
Experience-sampling research involves trade-offs between the number of questions asked per signal, the number of signals per day, and the number of days. By combining planned missing-data designs and multilevel latent variable modeling, we show how to reduce the items per signal without reducing the number of items. After illustrating different designs using real data, we present two Monte Carlo studies that explored the performance of planned missing-data designs across different within-person and between-person sample sizes and across different patterns of response rates. The missing-data designs yielded unbiased parameter estimates but slightly higher standard errors. With realistic sample sizes, even designs with extensive missingness performed well, so these methods are promising additions to an experience-sampler's toolbox.

18.
Organizational research and practice involving ratings are rife with what the authors term ill-structured measurement designs (ISMDs): designs in which raters and ratees are neither fully crossed nor nested. This article explores the implications of ISMDs for estimating interrater reliability. The authors first provide a mock example that illustrates potential problems that ISMDs create for common reliability estimators (e.g., Pearson correlations, intraclass correlations). Next, the authors propose an alternative reliability estimator, G(q,k), that resolves problems with traditional estimators and is equally appropriate for crossed, nested, and ill-structured designs. By using Monte Carlo simulation, the authors evaluate the accuracy of traditional reliability estimators compared with that of G(q,k) for ratings arising from ISMDs. Regardless of condition, G(q,k) yielded estimates as precise or more precise than those of traditional estimators. The advantage of G(q,k) over the traditional estimators became more pronounced with increases in the (a) overlap between the sets of raters that rated each ratee and (b) ratio of rater main effect variance to true score variance. Discussion focuses on implications of this work for organizational research and practice.

19.
This article is about analysis of data obtained in repeated measures designs in psycholinguistics and related disciplines with items (words) nested within treatment (= type of words). Statistics tested in a series of computer simulations are: F1, F2, F1 & F2, F', min F', plus two decision procedures, the one suggested by Forster and Dickinson (1976) and one suggested by the authors of this article. The most common test statistic, F1 & F2, turns out to be wrong, but all alternative statistics suggested in the literature have problems too. The two decision procedures perform much better, especially the new one, because it systematically takes into account the subject by treatment interaction and the degree of word variability.
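Of the statistics listed, min F' has a simple closed form commonly attributed to Clark (1973): it combines the by-subjects F1 and by-items F2 into a conservative lower bound on the quasi-F ratio. A sketch, with the usual formulas for the statistic and its denominator degrees of freedom:

```python
def min_f_prime(f1, df1_err, f2, df2_err):
    """min F': conservative lower bound on the quasi-F statistic.

    f1, f2 are the by-subjects and by-items F ratios; df1_err and df2_err
    are their respective error degrees of freedom. Returns the min F'
    value and its approximate denominator degrees of freedom.
    """
    mf = (f1 * f2) / (f1 + f2)
    df = (f1 + f2) ** 2 / (f1 ** 2 / df2_err + f2 ** 2 / df1_err)
    return mf, df
```

Because mf = F1·F2/(F1+F2) is the harmonic-mean-like combination of the two ratios, min F' can never exceed the smaller of F1 and F2, which is what makes it conservative relative to reporting F1 and F2 separately.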
