Similar Literature
20 similar records found.
1.
Variable Error     
The degree to which blocked variable error (VE) data satisfy the compound symmetry assumption required for a repeated measures ANOVA was studied. Monte Carlo procedures were used to study the effect of violating this assumption, under varying block sizes, on the Type I error rate. Populations of 10,000 subjects for each of two groups, with underlying variance-covariance matrices reflecting a specific violation of the homogeneity-of-covariance assumptions, were generated from each of three actual experimental data sets. The data were blocked in various ways, VE was calculated, and the results were analyzed by a repeated measures ANOVA. The complete process was replicated for four covariance homogeneity conditions for each of the three data sets, for a total of 22,000 simulated experiments. Results indicated that the Type I error rate increases as the degree of heterogeneity within the variance-covariance matrices increases when raw (unblocked) data are analyzed. With VE, the effects of within-matrix heterogeneity on the Type I error rate are inconclusive. Block size does appear to affect the probability of obtaining a significant interaction, although no consistent relationship between block size and that probability emerged. For both raw and VE data, there was no inflation of the Type I error rate when the covariances within a given matrix were homogeneous, regardless of the differences between the group variance-covariance matrices.
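Below is a minimal sketch (not the authors' code) of the Monte Carlo logic described above, simplified to a single group and the conventional within-subjects F test; the covariance matrix, sample size, and replication count are illustrative assumptions.

```python
# A minimal sketch, assuming one group and the conventional within-subjects
# F test; the covariance matrix, n, and replication count are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, k, alpha, reps = 20, 4, 0.05, 5000

# Heterogeneous (non-spherical) covariances; equal means, so H0 is true.
sigma = np.array([[1.0, 0.8, 0.6, 0.4],
                  [0.8, 1.5, 0.9, 0.5],
                  [0.6, 0.9, 2.0, 0.7],
                  [0.4, 0.5, 0.7, 2.5]])

rejections = 0
for _ in range(reps):
    y = rng.multivariate_normal(np.zeros(k), sigma, size=n)
    grand = y.mean()
    ss_trials = n * ((y.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_error = ((y - grand) ** 2).sum() - ss_trials - ss_subj
    f = (ss_trials / (k - 1)) / (ss_error / ((n - 1) * (k - 1)))
    rejections += f > stats.f.ppf(1 - alpha, k - 1, (n - 1) * (k - 1))

print(f"Empirical Type I error: {rejections / reps:.3f}")  # inflation expected
```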

2.
3.
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of the data rather than presume a certain structure, as conventional univariate and multivariate test statistics do. This mixed-model approach was evaluated for testing all possible pairwise differences among repeated measures marginal means in a Between-Subjects × Within-Subjects design. Specifically, the authors investigated Type I error and power rates for a number of simultaneous and stepwise multiple comparison procedures using SAS (1999) PROC MIXED in unbalanced designs when normality and covariance homogeneity assumptions did not hold. J. P. Shaffer's (1986) sequentially rejective step-down and Y. Hochberg's (1988) sequentially acceptive step-up Bonferroni procedures, based on an unstructured covariance matrix, had superior Type I error control and power to detect true pairwise differences across the investigated conditions.
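For concreteness, here is a hedged sketch of Hochberg's (1988) step-up Bonferroni procedure applied to a vector of pairwise p-values; the mixed-model fitting itself, which the article carries out in SAS PROC MIXED, is not reproduced.

```python
# A sketch of Hochberg's (1988) step-up procedure on a set of pairwise
# p-values; input values below are illustrative, not from the article.
import numpy as np

def hochberg_reject(pvals, alpha=0.05):
    """Return a boolean array: which hypotheses the step-up procedure rejects."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)[::-1]           # largest p first
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):    # rank 0 compares to alpha/1, then alpha/2, ...
        if p[idx] <= alpha / (rank + 1):
            reject[order[rank:]] = True   # reject this hypothesis and all smaller p's
            break
    return reject

print(hochberg_reject([0.001, 0.012, 0.020, 0.300]))  # [ True  True  True False]
```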

4.
Existing test statistics for assessing whether incomplete data represent a missing-completely-at-random sample from a single population are based on a normal likelihood rationale and effectively test for homogeneity of means and covariances across missing data patterns. The likelihood approach cannot be implemented adequately if a pattern of missing data contains very few subjects. A generalized least squares rationale is used to develop parallel tests that are expected to be more stable in small samples. Three factors were varied in a simulation: number of variables, percent missing completely at random, and sample size. One thousand data sets were simulated for each condition. The generalized least squares test of homogeneity of means performed close to the ideal Type I error rate for most conditions. The generalized least squares test of homogeneity of covariance matrices and a combined test also performed quite well. Preliminary results of this research were presented at the 1999 Western Psychological Association convention, Irvine, CA, and in UCLA Statistics Preprint No. 265 (http://www.stat.ucla.edu). The assistance of Ke-Hai Yuan and several anonymous reviewers is gratefully acknowledged.
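A highly simplified, hypothetical illustration of the underlying idea follows: compare pattern-wise means against complete-case estimates with a Little-style chi-square statistic. The article's actual generalized least squares tests are more refined; the plug-in estimates and function below are assumptions for illustration only.

```python
# A simplified, hypothetical sketch of testing homogeneity of means across
# missing-data patterns, plugging complete-case mean/covariance estimates
# into a Little-style chi-square statistic.
import numpy as np
from scipy import stats

def pattern_mean_test(data):
    """data: 2-D array with np.nan marking missing entries."""
    mu = np.nanmean(data, axis=0)
    complete = data[~np.isnan(data).any(axis=1)]   # assumes >= 2 complete rows
    cov = np.cov(complete, rowvar=False)
    observed = ~np.isnan(data)
    d2, df = 0.0, 0
    for pat in np.unique(observed, axis=0):
        rows = data[(observed == pat).all(axis=1)][:, pat]
        if pat.sum() == 0 or len(rows) < 2:
            continue   # sparse patterns are the paper's motivation for GLS
        diff = rows.mean(axis=0) - mu[pat]
        sub = cov[np.ix_(pat, pat)]
        d2 += len(rows) * diff @ np.linalg.solve(sub, diff)
        df += pat.sum()
    df -= data.shape[1]                            # df as in Little (1988)
    return d2, df, stats.chi2.sf(d2, df)

rng = np.random.default_rng(0)
x = rng.normal(size=(60, 3))
x[rng.random((60, 3)) < 0.2] = np.nan              # MCAR missingness
print(pattern_mean_test(x))
```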

5.
Researchers often want to demonstrate a lack of interaction between two categorical predictors on an outcome. To justify a lack of interaction, researchers typically accept the null hypothesis of no interaction from a conventional analysis of variance (ANOVA). This method is inappropriate, as failure to reject the null hypothesis does not provide statistical evidence of a lack of interaction. This study proposes a bootstrap-based intersection–union test for negligible interaction that provides coherent decisions between the omnibus test and post hoc interaction contrast tests and is robust to violations of the normality and variance homogeneity assumptions. Further, a multiple comparison strategy for testing interaction contrasts following a non-significant omnibus test is proposed. Our simulation study compared the Type I error control, omnibus power, and per-contrast power of the proposed approach with the non-centrality-based negligible interaction test of Cheng and Shao (2007, Statistica Sinica, 17, 1441). For 2 × 2 designs, the empirical Type I error rates of the Cheng and Shao test were very close to the nominal α level when the normality and variance homogeneity assumptions were satisfied; however, only the proposed bootstrapping approach was satisfactory under non-normality and/or variance heterogeneity. In general a × b designs, the omnibus Cheng and Shao test is, as expected, the most powerful, but it is not robust to assumption violations and yields incoherent omnibus and interaction contrast decisions that are not possible with the intersection–union approach.
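A minimal sketch, under stated assumptions, of the bootstrap logic for declaring a 2 × 2 interaction negligible: resample within cells and require the TOST-style percentile interval for the interaction contrast to fall inside a user-chosen bound ±δ. The published procedure is an intersection-union test over all interaction contrasts in a × b designs; this sketch covers only the single 2 × 2 contrast.

```python
# A minimal sketch, assuming a 2 x 2 between-subjects design: percentile
# bootstrap of the interaction contrast, with negligibility declared if the
# TOST-style interval sits inside (-delta, delta). delta is user-chosen.
import numpy as np

rng = np.random.default_rng(1)

def negligible_interaction(cells, delta, b=2000, alpha=0.05):
    """cells: dict mapping (i, j) in {0,1} x {0,1} to 1-D arrays of scores."""
    boots = np.empty(b)
    for r in range(b):
        m = {ij: rng.choice(y, size=len(y), replace=True).mean()
             for ij, y in cells.items()}
        boots[r] = m[0, 0] - m[0, 1] - m[1, 0] + m[1, 1]
    lo, hi = np.percentile(boots, [100 * alpha, 100 * (1 - alpha)])
    return -delta < lo and hi < delta

cells = {(i, j): rng.normal(0.0, 1.0, 30) for i in range(2) for j in range(2)}
print(negligible_interaction(cells, delta=0.5))
```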

6.
Repeated measures analyses of variance are the method of choice in many studies in experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward–Roger's adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance–covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). For normally distributed data, as expected, the univariate approach with Huynh–Feldt correction controlled the Type I error rate with only very few exceptions, even when sample sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires a sample size at least as large as the number of factor levels (N ≥ K). PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. The results indicate that both the variance heterogeneity and the covariance heterogeneity of the population covariance matrices affect the error rates.
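A hedged sketch of the df-correction route evaluated here: estimate Box's (Greenhouse-Geisser) epsilon from the sample covariance matrix, with the Huynh-Feldt small-sample adjustment on top; the corrected F test then shrinks both degrees of freedom by the epsilon estimate.

```python
# A hedged sketch, assuming a single-group design: Greenhouse-Geisser (Box)
# epsilon from the sample covariance matrix, plus the Huynh-Feldt adjustment.
import numpy as np

def gg_epsilon(y):
    """y: n x k matrix of repeated measures; returns Box's epsilon estimate."""
    k = y.shape[1]
    s = np.cov(y, rowvar=False)
    c = np.eye(k) - np.ones((k, k)) / k          # double-centering projector
    s_star = c @ s @ c
    return np.trace(s_star) ** 2 / ((k - 1) * np.trace(s_star @ s_star))

def hf_epsilon(y):
    """Huynh-Feldt small-sample adjustment of the GG estimate (capped at 1)."""
    n, k = y.shape
    e = gg_epsilon(y)
    return min(1.0, (n * (k - 1) * e - 2) / ((k - 1) * (n - 1 - (k - 1) * e)))

# The corrected F test uses df1 = eps * (k - 1) and df2 = eps * (n - 1) * (k - 1).
rng = np.random.default_rng(0)
y = rng.multivariate_normal(np.zeros(4), np.diag([1.0, 2.0, 3.0, 4.0]), size=12)
print(gg_epsilon(y), hf_epsilon(y))
```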

7.
A Monte Carlo computer study was conducted in which the statistical power of the univariate repeated measures ANOVA design proposed by Arvey and Mossholder (1977) to detect job differences was investigated, along with the relative value and usefulness of omega-squared estimates as indicators of job similarities and differences. Job profile means and covariance structures were generated using data from six relatively similar jobs and six dissimilar jobs based on Position Analysis Questionnaire (PAQ) data bank information. Different combinations of job differences (4 conditions), number of job raters (2 conditions), and violations of statistical assumptions (3 conditions) were generated (1,000 data sets for each of the 24 combinations), and each data set was analyzed using the ANOVA design. Results indicate that testing for statistical significance is not as useful in determining job differences as examining the omega-squared estimates. Specifically, the omega-squared estimate for the Jobs × Dimensions interaction is a relatively sensitive and stable indicator of job differences, regardless of the number of raters and violations of the statistical assumptions.
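For reference, a minimal sketch of the standard (Hays) omega-squared estimator for a fixed ANOVA effect; the article does not print a formula, so this common definition is an assumption, and the input values are illustrative.

```python
# A minimal sketch of the standard (Hays) omega-squared estimator for a fixed
# ANOVA effect; inputs would come from the ANOVA table of each simulated data set.
def omega_squared(ss_effect: float, df_effect: float,
                  ms_error: float, ss_total: float) -> float:
    """Estimated proportion of total variance accounted for by the effect."""
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# Illustrative values, not from the article:
print(omega_squared(ss_effect=120.0, df_effect=4, ms_error=2.5, ss_total=600.0))
```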

8.
In a multiple (or multivariate) regression model where the predictors are subject to errors of measurement with a known variance-covariance structure, two-sample hypotheses are formulated for (i) equality of regressions on true scores and (ii) equality of residual variances (or covariance matrices) after regression on true scores. The hypotheses are tested using a large-sample procedure based on maximum likelihood estimators. Formulas for the test statistic are presented; these may be avoided in practice by using a general purpose computer program. The procedure has been applied to a comparison of learning in high schools using achievement test data.

9.
Estimates of test size (probability of Type I error) were obtained for several specific repeated measures designs. Estimates were presented for configurations where the underlying covariance matrices exhibited varying degrees of heterogeneity. Conventional variance ratios were employed as basic statistics in order to produce estimates of size for a conventional test, an ε̂-adjusted test, an ε̃-adjusted test, and a conservative test. Indices for the empirical distributions of two estimators of ε, a measure of covariance heterogeneity, were also provided.
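For reference, the heterogeneity measure ε referred to here is Box's epsilon; a standard definition (supplied for context, not quoted from the article) is

$$\varepsilon = \frac{\left(\operatorname{tr}\boldsymbol{\Sigma}^{*}\right)^{2}}{(K-1)\,\operatorname{tr}\!\left(\boldsymbol{\Sigma}^{*2}\right)},$$

where Σ* is the double-centered population covariance matrix and K is the number of repeated measures. ε equals 1 under sphericity and attains its minimum, 1/(K−1), under maximal heterogeneity; ε-adjusted tests multiply both F-test degrees of freedom by an estimate of ε.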

10.
Vallejo G, Lozano LM. Psicothema, 2006, 18(2), 293-299.
In social, behavioral, and health research, it is common to collect data over time on more than one group of participants and on multiple dependent variables. Analyzing such data is complicated by the correlations among measurements taken at different points in time and among the response variables. The multivariate mixed model and the doubly multivariate model are the most frequent approaches to these data. Both require joint multivariate normality, equal covariance matrices, independence between the observations of different participants, complete measurements on all subjects, and time-independent covariates. When one or more of these assumptions fail, these approaches do not properly control the Type I error rate, which affects the validity and accuracy of the inferences. This paper presents solutions to these Type I error problems, along with several programs for carrying out the analyses correctly with the SAS PROC MIXED procedure.

11.
Recently, a nonparametric technique called bootstrapping has been recommended over the better-known analysis of variance (ANOVA) for analyzing repeated measures data. Advocates cite as the bootstrap's advantages over ANOVA that it uses the data's own distributional information and is free of normal-theory assumptions. The present study used a computer simulation to compare the two techniques on data sampled from normal and nonnormal distributions. The parametric test had adequate control of Type I error rates; the nonparametric test had overly liberal Type I error rates and therefore is not recommended.
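A hedged sketch of one common way to bootstrap the repeated measures F test; the article does not specify its exact resampling scheme, so the center-then-resample-subjects approach below is an assumption.

```python
# A hedged sketch, assuming a subject-level resampling scheme: center the
# condition means to enforce H0, resample subjects with replacement, and
# compare the observed within-subjects F with its bootstrap distribution.
import numpy as np

def bootstrap_f_pvalue(y, b=2000, rng=None):
    """y: n x k repeated measures matrix; bootstrap p-value for the condition effect."""
    rng = rng or np.random.default_rng()
    n, k = y.shape

    def f_stat(m):
        grand = m.mean()
        ss_cond = n * ((m.mean(axis=0) - grand) ** 2).sum()
        ss_subj = k * ((m.mean(axis=1) - grand) ** 2).sum()
        ss_err = ((m - grand) ** 2).sum() - ss_cond - ss_subj
        return (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))

    f_obs = f_stat(y)
    y0 = y - y.mean(axis=0)                  # equal condition means under H0
    f_boot = np.array([f_stat(y0[rng.integers(0, n, n)]) for _ in range(b)])
    return float((f_boot >= f_obs).mean())

rng = np.random.default_rng(5)
print(bootstrap_f_pvalue(rng.normal(size=(15, 3)), rng=rng))
```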

12.
The statistical simulation program DATASIM is designed to conduct large-scale sampling experiments on microcomputers. Monte Carlo procedures are used to investigate the Type I and Type II error rates of statistical tests when one or more assumptions are systematically violated: assumptions, for example, regarding normality, homogeneity of variance or covariance, minimum expected cell frequencies, and the like. In the present paper, we report several initial tests of the data-generating algorithms employed by DATASIM. The results indicate that the uniform and standard normal deviate generators perform satisfactorily. Furthermore, Kolmogorov-Smirnov tests show that the sampling distributions of z, t, F, χ², and r generated by DATASIM simulations follow the appropriate theoretical distributions. Finally, estimates of Type I error rates obtained by DATASIM under various patterns of assumption violations are in close agreement with the results of previous analytical and empirical studies. These converging lines of evidence suggest that DATASIM may well prove to be a reliable and productive tool for conducting statistical simulation research.
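A small sketch reproducing this kind of check, with scipy standing in for DATASIM (a separate microcomputer program): simulate t statistics under the null and test them against the theoretical t distribution with a Kolmogorov-Smirnov test.

```python
# Simulate one-sample t statistics under H0 and verify with a KS test that
# they follow the theoretical t distribution, as the DATASIM checks did.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps = 10, 5000
t_stats = [stats.ttest_1samp(rng.standard_normal(n), 0.0).statistic
           for _ in range(reps)]
d, p = stats.kstest(t_stats, stats.t(df=n - 1).cdf)
print(f"KS D = {d:.4f}, p = {p:.3f}")   # a large p is consistent with t(9)
```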

13.
The relationship between the latent growth curve and repeated measures ANOVA models is often misunderstood. Although a number of investigators have looked into the similarities and differences between these models, a cursory reading of the literature can give the impression that they are very different models. Here we show that each model represents a set of contrasts on the occasion means. We demonstrate that the fixed effects parameters of the estimated-basis-vector latent growth curve model are merely a transformation of the repeated measures ANOVA fixed effects parameters. We further show that differences in fit between models that estimate the same means structure can be due to the different error covariance structures each model implies. We show these relationships both algebraically and with simulated data.
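A hedged numerical illustration of the central point: saturated RM ANOVA and growth-style parameterizations are linear transformations of the same K occasion means. The contrast matrices below are standard choices, not necessarily those used in the article.

```python
# Two saturated parameterizations of the same occasion means, and the linear
# transformation linking their fixed effects. All values are illustrative.
import numpy as np

means = np.array([10.0, 12.0, 13.5, 14.2])     # illustrative occasion means
k = len(means)

# Saturated RM ANOVA parameterization: grand mean plus k-1 occasion effects.
x_anova = np.column_stack([np.ones(k), np.eye(k)[:, 1:]])
# Saturated growth-style parameterization: intercept plus polynomial trends.
x_growth = np.vander(np.arange(k), k, increasing=True).astype(float)

b_anova = np.linalg.solve(x_anova, means)      # both reproduce the means exactly
b_growth = np.linalg.solve(x_growth, means)

# One parameter vector is a linear transformation of the other:
t = np.linalg.solve(x_growth, x_anova)
print(np.allclose(t @ b_anova, b_growth))      # True
```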

14.
We study several aspects of bootstrap inference for covariance structure models based on three test statistics, including Type I error, power, and sample-size determination. Specifically, we discuss conditions for a test statistic to achieve a more accurate level of Type I error, both in theory and in practice. Details on power analysis and sample-size determination are given. For data sets with heavy tails, we propose applying a bootstrap methodology to a transformed sample obtained by a downweighting procedure. One of the key conditions for safe bootstrap inference is generally satisfied by the transformed sample but may not be satisfied by the original sample with heavy tails. Several data sets illustrate that, by combining downweighting and bootstrapping, a researcher may find a nearly optimal procedure for evaluating various aspects of covariance structure models. A rule for handling non-convergence problems in bootstrap replications is proposed.
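A hedged sketch of the downweighting idea: assign each observation a Huber-type weight from its Mahalanobis distance and rescale before bootstrapping. The article's exact weighting scheme and tuning are not reproduced, so the cutoff used here is an assumption.

```python
# A hedged sketch, assuming Huber-type weights based on Mahalanobis distance;
# the transformed sample would then be bootstrapped in place of the raw one.
import numpy as np

def downweight(x, quantile=0.9):
    """Shrink observations beyond the chosen distance quantile toward the mean."""
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    centered = x - mu
    d = np.sqrt(np.einsum('ij,ji->i', centered @ np.linalg.inv(cov), centered.T))
    r = np.quantile(d, quantile)              # cutoff choice: an assumption here
    w = np.minimum(1.0, r / d)
    return mu + centered * w[:, None]

rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=(200, 4))       # heavy-tailed sample
x_dw = downweight(x)
print(x.std(axis=0).round(2), x_dw.std(axis=0).round(2))
```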

15.
This work compares the sensitivity of five modern analytical techniques for detecting effects in a partially repeated measures design when the assumptions of the traditional ANOVA approach are not met: the mixed-model approach fitted with the SAS PROC MIXED module, the Bootstrap-F approach, the Brown-Forsythe multivariate approach, the Welch-James multivariate approach, and the Welch-James multivariate approach with robust estimators. Livacic-Rojas, Vallejo, and Fernández previously found that these methods are comparable in their Type I error rates. The results obtained suggest that the mixed-model approach, as well as the Brown-Forsythe and Welch-James approaches, satisfactorily controlled the Type II error rates for the main effects of the measurement occasions under most of the conditions assessed.

16.
Conventional covariance structure analysis, such as factor analysis, is often applied to data that are obtained in a hierarchical fashion, such as siblings observed within families. A more appropriate specification is demonstrated that explicitly models the within-level and between-level covariance matrices of sibling substance use and intrafamily conflict. Participants were 267 target adolescents (mean age = 13.11 years) and 318 siblings (mean age = 15.03 years). The level of homogeneity within sibling clusters, and heterogeneity among families, was sufficient to conduct a multilevel covariance structure analysis (MCA). Parental and family-level variables consisting of marital status, socioeconomic status, marital discord, and parental use and modeling of substances were used to explain heterogeneity among families. Marital discord predicted intrafamily conflict, and parent marital status and modeling of substances predicted sibling substance use. Advantages and uses of hierarchical designs and conventional covariance structure software for multilevel data are discussed.

17.
In the past, researchers have debated the problem of selecting the most appropriate error measure (e.g., constant error [CE], variable error [VE], absolute error [AE], or total error [E]) for use as the dependent variable when analyzing the results of experiments in short-term motor memory research (Gessaroli & Schutz, 1982; Henry, 1974; Safrit, Spray, & Diewert, 1980; Schutz, 1977; Schutz & Roy, 1973). This paper suggests that the subjects' error scores, recorded over a series of trials, be analyzed individually using repeated measures (RM) ANOVA. This analysis divides the total error sum of squares into recognizable components that, when identified, adequately explain the subjects' performance. The between-subjects sources of variation indicate any differences in CE bias between the levels of each factor in the experiment. Similarly, any VE differences between the levels of each factor are identified by significant trial-by-factor interactions. However, not all significant trial-by-factor interactions necessarily indicate differences in VE performance. Nevertheless, by plotting the mean trial profiles for any significant trial-by-factor interactions, valuable insight can be gained into the different performance responses in trial adaptation for each level of the factors in the experimental design.
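For readers outside this literature, a minimal sketch of the standard error-score definitions being debated, computed from one subject's signed trial errors; the identity E² = CE² + VE² underlies much of the discussion.

```python
# Standard motor-memory error scores from one subject's signed trial errors
# (Henry, 1974; Schutz & Roy, 1973); the trial values are illustrative.
import numpy as np

errors = np.array([2.0, -1.0, 0.5, 3.0, -0.5])   # signed errors over trials

ce = errors.mean()                      # constant error: signed bias
ve = errors.std(ddof=0)                 # variable error: within-subject spread
e = np.sqrt(np.mean(errors ** 2))       # total error; note E^2 = CE^2 + VE^2
ae = np.abs(errors).mean()              # absolute error

print(f"CE={ce:.2f} VE={ve:.2f} E={e:.2f} AE={ae:.2f}")
assert np.isclose(e ** 2, ce ** 2 + ve ** 2)
```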

18.
The conventional approach for testing the equality of two normal mean vectors is to test first the equality of covariance matrices, and if the equality assumption is tenable, then use the two-sample Hotelling T² test; otherwise one can use one of the approximate tests for the multivariate Behrens–Fisher problem. In this article, we study the properties of the Hotelling T² test, the conventional approach, and one of the best approximate invariant tests (Krishnamoorthy & Yu, 2004) for the Behrens–Fisher problem. Our simulation studies indicated that the conventional approach often leads to inflated Type I error rates. The approximate test not only controls Type I error rates very satisfactorily when covariance matrices are arbitrary but is also comparable with the T² test when covariance matrices are equal.
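A hedged sketch of the two-sample Hotelling T² test with a pooled covariance matrix (the equal-covariance branch of the conventional approach); the Krishnamoorthy-Yu approximate test, which uses the separate sample covariances with an approximate df, is not reproduced here.

```python
# Two-sample Hotelling T^2 with pooled covariance and its exact F reference
# distribution; the simulated data below are illustrative.
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    n1, p = x.shape
    n2 = y.shape[0]
    diff = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s_pooled, diff)
    f = t2 * (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p)   # F transform of T^2
    return t2, stats.f.sf(f, p, n1 + n2 - p - 1)

rng = np.random.default_rng(3)
x = rng.multivariate_normal([0, 0], np.eye(2), 25)
y = rng.multivariate_normal([0, 0], np.eye(2), 30)
print(hotelling_t2(x, y))
```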

19.
Empirical Type I error and power rates were estimated for (a) the doubly multivariate model, (b) the Welch-James multivariate solution developed by Keselman, Carriere, and Lix (1993) using Johansen's (1980) results, and (c) the multivariate version of the modified Brown-Forsythe (1974) procedure. The performance of these procedures was investigated by testing within-blocks sources of variation in a multivariate split-plot design containing unequal covariance matrices. The results indicate that the doubly multivariate model did not provide effective Type I error control, while the Welch-James procedure provided robust and powerful tests of the within-subjects main effect; however, this approach provided liberal tests of the interaction effect. The results also indicate that the modified Brown-Forsythe procedure provided robust tests of within-subjects main and interaction effects, especially when the design was balanced or when group sizes and covariance matrices were positively paired.

20.
Using a Monte Carlo simulation and the Kenward–Roger (KR) correction for degrees of freedom, in this article we analyze the application of the linear mixed model (LMM) to a mixed repeated measures design. The LMM was first used to select the covariance structure with three types of data distribution: normal, exponential, and log-normal. This showed that, with homogeneous between-groups covariance and a normal distribution, the covariance structure with the best fit was the unstructured population matrix. However, with heterogeneous between-groups covariance and null pairing between covariance matrices and group sizes, the best fit was shown by the between-subjects heterogeneous unstructured population matrix, for all of the distributions analyzed. By contrast, with positive or negative pairings, the within-subjects and between-subjects heterogeneous first-order autoregressive structure produced the best fit. In the second stage of the study, the robustness of the LMM was tested. The KR method provided adequate control of Type I error rates for the time effect with normally distributed data. However, as skewness increased (as occurs, for example, in the log-normal distribution), the robustness of KR vanished, especially when the assumption of sphericity was violated. As regards the influence of kurtosis, the degree of robustness increased with the amount of kurtosis.
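A brief sketch of the simulation vocabulary used above: build first-order autoregressive covariance matrices and pair them with group sizes positively (larger groups with larger covariances) or negatively. All numeric values are illustrative assumptions.

```python
# Construct heterogeneous AR(1) covariance matrices and pair them with group
# sizes positively or negatively, as in pairing manipulations of this kind.
import numpy as np

def ar1_cov(k, rho, sigma2=1.0):
    """AR(1) covariance: sigma2 * rho**|i - j|."""
    idx = np.arange(k)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

k, rho = 4, 0.6
group_sizes = [10, 20, 30]
scales = [1.0, 2.0, 3.0]                  # between-groups heterogeneity

positive = list(zip(group_sizes, [ar1_cov(k, rho, s) for s in scales]))
negative = list(zip(group_sizes, [ar1_cov(k, rho, s) for s in scales[::-1]]))
print(positive[0][1])
```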
