Similar Articles
20 similar articles retrieved.
1.
2.
3.
This article concerns the analysis of data from repeated measures designs in psycholinguistics and related disciplines, with items (words) nested within treatments (types of words). The statistics tested in a series of computer simulations are F1, F2, F1 & F2, F', and min F', plus two decision procedures: one suggested by Forster and Dickinson (1976) and one suggested by the authors of this article. The most common test statistic, F1 & F2, turns out to be wrong, but all alternative statistics suggested in the literature have problems too. The two decision procedures perform much better, especially the new one, because it systematically takes into account the subject-by-treatment interaction and the degree of word variability.
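The min F' statistic mentioned above is computed directly from the by-subjects (F1) and by-items (F2) ratios; a minimal sketch following Clark's (1973) formulas (the function and argument names are our own):

```python
def min_f_prime(f1, f2, df_subj_error, df_item_error):
    """min F' from the by-subjects F1 and by-items F2 ratios.

    df_subj_error / df_item_error are the denominator degrees of
    freedom of F1 and F2, respectively.
    """
    mf = (f1 * f2) / (f1 + f2)
    # Approximate denominator degrees of freedom for min F'
    df = (f1 + f2) ** 2 / (f1 ** 2 / df_item_error + f2 ** 2 / df_subj_error)
    return mf, df
```

For example, with F1 = 8 on (3, 30) df and F2 = 4 on (3, 20) df, min F' = 32/12 ≈ 2.67 on roughly (3, 38.6) df.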

4.
5.
Vallejo G  Lozano LM 《Psicothema》2006,18(2):293-299
In social, behavioral, and health research it is a common strategy to collect data over time on more than one group of participants and on multiple dependent variables. Analysing this kind of data is complicated by the correlations among measures taken at different points in time and among the responses. The multivariate mixed model and the doubly multivariate model are the most frequent approaches to such data. Both require joint multivariate normality, equal covariance matrices, independence between the observations of different participants, complete measurements on all subjects, and time-independent covariates. When one or more of these assumptions is violated, these approaches do not properly control the Type I error rate, which affects the validity and accuracy of the inferences. This paper presents some solutions to these Type I error problems, together with several programs for carrying out the analyses correctly with the SAS PROC MIXED procedure.
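The Type I error rate discussed above is, empirically, just the rejection rate over many simulated datasets for which the null hypothesis is true. The abstract's own simulations involve mixed and doubly multivariate models, but the mechanics can be sketched with a generic two-sample t-test as a stand-in (all names and settings here are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
reps, n, alpha = 2000, 20, 0.05

rejections = 0
for _ in range(reps):
    x = rng.normal(size=n)
    y = rng.normal(size=n)      # null is true: both samples share one distribution
    if stats.ttest_ind(x, y).pvalue < alpha:
        rejections += 1

rate = rejections / reps        # should hover near alpha when assumptions hold
```

When an assumption is violated (e.g., unequal variances are fed to a test that assumes homogeneity), the same loop reveals the inflation the abstract warns about.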

6.
Estimates of test size (probability of Type I error) were obtained for several specific repeated measures designs. Estimates were presented for configurations in which the underlying covariance matrices exhibited varying degrees of heterogeneity. Conventional variance ratios were employed as basic statistics in order to produce estimates of size for a conventional test, an ε̂-adjusted test, an ε̃-adjusted test, and a conservative test. Indices for the empirical distributions of two estimators of ε, a measure of covariance heterogeneity, were also provided.
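The ε in the abstract above measures how far a covariance matrix departs from sphericity; the familiar sample version (Greenhouse-Geisser ε̂) can be computed from the eigenvalues of the double-centered covariance matrix. A sketch (function name is our own):

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon-hat from a k x k covariance matrix S.

    Equals 1 under sphericity and is bounded below by 1/(k-1).
    """
    k = S.shape[0]
    C = np.eye(k) - np.ones((k, k)) / k     # centering projector
    Sc = C @ S @ C
    lam = np.linalg.eigvalsh(Sc)            # eigenvalues (one is ~0)
    return lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum())
```

A spherical matrix such as the identity gives ε = 1; a strongly heterogeneous diagonal pushes ε toward its lower bound 1/(k − 1).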

7.
With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of the Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
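The AIC/BIC comparison described above reduces to penalized log-likelihoods. A minimal sketch, with entirely made-up log-likelihoods for three hypothetical covariance structures (the numbers are illustrative, not from the article):

```python
import math

def aic(loglik, n_params):
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n):
    return -2.0 * loglik + n_params * math.log(n)

# Hypothetical fits: (structure, log-likelihood, covariance parameters)
fits = [("compound symmetry", -512.3, 2),
        ("AR(1)",             -498.7, 2),
        ("unstructured",      -495.1, 10)]

best = min(fits, key=lambda f: aic(f[1], f[2]))   # lowest AIC wins
```

Here the unstructured model fits best in raw likelihood but pays a 10-parameter penalty, so AIC prefers AR(1) — the kind of trade-off the simulation in the abstract examines.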

8.
9.
10.
When measuring the same variables on different occasions, two procedures for canonical analysis with stationary compositing weights are developed. The first, SUMCOV, maximizes the sum of the covariances of the canonical variates subject to norming constraints. The second, COLLIN, maximizes the largest root of the covariances of the canonical variates subject to norming constraints. A characterization theorem establishes a model building approach. Both methods are extended to allow for cohort sequential designs. Finally, a numerical illustration utilizing Nesselroade and Baltes data is presented. The authors wish to thank John Nesselroade for permitting us to use the data whose analysis we present.
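The "stationary weights" idea above means the same compositing vector is applied at both occasions. A loose one-component sketch of that idea (not the article's actual SUMCOV algorithm, and all names are ours): maximizing cov(Xw, Yw) over a unit vector w leads to an eigenproblem on the symmetrized cross-covariance matrix.

```python
import numpy as np

def stationary_weights(X, Y):
    """Single weight vector w, shared by both occasions, maximizing
    cov(Xw, Yw) subject to ||w|| = 1 (a SUMCOV-flavored sketch)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc / (len(X) - 1)
    M = (Sxy + Sxy.T) / 2.0            # symmetrize the cross-covariance
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]                 # eigenvector of the largest eigenvalue
```

Variables that covary strongly across occasions dominate the resulting composite.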

11.
12.
Solving theoretical or empirical issues sometimes involves establishing the equality of two variables with repeated measures. This defies the logic of null hypothesis significance testing, which aims at assessing evidence against the null hypothesis of equality, not for it. In some contexts, equivalence is assessed through regression analysis by testing for zero intercept and unit slope (or simply for unit slope in case regression is forced through the origin). This paper shows that this approach renders highly inflated Type I error rates under the most common sampling models implied in studies of equivalence. We propose an alternative approach based on omnibus tests of equality of means and variances and on subject-by-subject analyses (where applicable), and we show that these tests have adequate Type I error rates and power. The approach is illustrated with a re-analysis of published data from a signal detection theory experiment in which several hypotheses of equivalence had been tested using only regression analysis. Some further errors and inadequacies of the original analyses are described, and further scrutiny of the data contradicts the conclusions reached through inadequate application of regression analyses.
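Omnibus tests of equal means and variances for paired measures, as proposed above, can be assembled from standard pieces; a sketch using the paired t-test for means and the Pitman-Morgan device for variances (whether these are the authors' exact choices is an assumption; the function name is ours):

```python
import numpy as np
from scipy import stats

def paired_equality_tests(x, y):
    """Omnibus checks for two repeated measures on the same subjects:
    paired t-test for mean equality, Pitman-Morgan test for variance
    equality (variances are equal iff corr(x + y, x - y) = 0)."""
    t_mean, p_mean = stats.ttest_rel(x, y)
    r, p_var = stats.pearsonr(x + y, x - y)
    return p_mean, p_var
```

Low p-values flag inequality of means or variances; equivalence claims additionally need the tests to be adequately powered, as the paper emphasizes.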

13.
The validity conditions for univariate repeated measures designs are described. Attention is focused on the sphericity requirement. For a v degree-of-freedom family of comparisons among the repeated measures, sphericity exists when all contrasts contained in the v-dimensional space have equal variances. Under nonsphericity, upper and lower bounds on test size and power of a priori repeated measures F tests are derived. The effects of nonsphericity are illustrated by means of a set of charts. The charts reveal that small departures from sphericity (ε between .97 and 1.00) can seriously affect test size and power. It is recommended that separate rather than pooled error term procedures be routinely used to test a priori hypotheses. Appreciation is extended to Milton Parnes for his insightful assistance.
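Under nonsphericity, a standard remedy is to shrink the F test's degrees of freedom by ε; using the lower bound ε = 1/(k − 1) gives the maximally conservative test. A sketch (function name is ours):

```python
from scipy import stats

def adjusted_f_pvalue(F, k, n, eps=None):
    """p-value for a one-way repeated measures F statistic with
    epsilon-adjusted degrees of freedom (k conditions, n subjects).

    eps=None uses the conservative lower bound 1/(k-1); eps=1.0
    reproduces the conventional unadjusted test.
    """
    if eps is None:
        eps = 1.0 / (k - 1)
    df1 = eps * (k - 1)
    df2 = eps * (k - 1) * (n - 1)
    return stats.f.sf(F, df1, df2)
```

For a given F the conservative p-value is always at least as large as the unadjusted one, which is exactly why small departures from sphericity shift test size between those bounds.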

14.
15.
16.
This study examined the performance of selection criteria available in the major statistical packages for both the mean model and the covariance structure. Unbalanced designs due to missing data, involving both a moderate and a large number of repeated measurements and varying total sample sizes, were investigated. The study also investigated the impact of using different estimation strategies for information criteria, the impact of different adjustments for calculating the criteria, and the impact of different distribution shapes. Overall, we found that the ability of the consistent criteria, in any of their examined forms, to select the correct model was superior under simple covariance patterns than under complex covariance patterns, and vice versa for the efficient criteria. The simulation studies covered in this paper also revealed that, regardless of the method of estimation used, the consistent criteria based on number of subjects were more effective than the consistent criteria based on total number of observations, and vice versa for the efficient criteria. Furthermore, results indicated that, given a dataset with missing values, the efficient criteria were more affected than the consistent criteria by the lack of normality.
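The "number of subjects versus total number of observations" adjustment above enters only through the sample-size term of a consistent criterion such as BIC. A minimal sketch (names and numbers are illustrative):

```python
import math

def bic_pair(loglik, n_params, n_subjects, n_obs):
    """BIC computed two ways: penalizing by number of subjects
    versus by total number of (repeated) observations."""
    by_subjects = -2.0 * loglik + n_params * math.log(n_subjects)
    by_observations = -2.0 * loglik + n_params * math.log(n_obs)
    return by_subjects, by_observations
```

With, say, 50 subjects measured 5 times each (250 observations), the observation-based penalty is harsher, so the two variants can rank candidate covariance structures differently.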

17.
Loftus and Masson (1994) proposed a method for computing confidence intervals (CIs) in repeated measures (RM) designs and later proposed that RM CIs for factorial designs should be based on number of observations rather than number of participants (Masson & Loftus, 2003). However, determining the correct number of observations for a particular effect can be complicated, given that its value depends on the relation between the effect and the overall design. To address this, we recently defined a general number-of-observations principle, explained why it obtains, and provided step-by-step instructions for constructing CIs for various effect types (Jarmasz & Hollands, 2009). In this note, we provide a brief summary of our approach.
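For the one-way within-subject case, the Loftus and Masson (1994) CI half-width is based on the subject-by-condition interaction mean square rather than between-subject variability. A sketch (function name is ours; the factorial extensions discussed in the note are not covered):

```python
import numpy as np
from scipy import stats

def loftus_masson_halfwidth(data, alpha=0.05):
    """Half-width of the Loftus & Masson (1994) within-subject CI.

    data: n_subjects x k_conditions array of scores.
    """
    n, k = data.shape
    grand = data.mean()
    subj = data.mean(axis=1, keepdims=True)
    cond = data.mean(axis=0, keepdims=True)
    resid = data - subj - cond + grand            # interaction residuals
    ms_sxc = (resid ** 2).sum() / ((n - 1) * (k - 1))
    crit = stats.t.ppf(1.0 - alpha / 2.0, (n - 1) * (k - 1))
    return crit * np.sqrt(ms_sxc / n)
```

Purely additive data (every subject shows the same condition effect) yield a zero half-width, which is the point of removing between-subject variance from the interval.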

18.
Behavior that develops in phases may exhibit distinctively different rates of change in one time period than in others. In this article, a mixed-effects model for a response that displays identifiable regimes is reviewed. An interesting component of the model is the change point. In substantive terms, the change point is the time when development switches from one phase to another. In a mixed-effects model, the change point can be a random coefficient. This possibility allows individuals to make the transition from one phase to another at different ages or after different lengths of time in treatment. Two examples are reviewed in detail, both of which can be estimated with software that is widely available.
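The change-point idea can be sketched in its simplest fixed-effects form (the mixed-effects version with a random change point, as in the article, needs specialized software): a two-phase linear spline y = b0 + b1·t + b2·max(t − tc, 0), with the change point tc found by grid search. All names here are ours.

```python
import numpy as np

def fit_change_point(t, y, candidates):
    """Least-squares fit of a two-phase linear model with an unknown
    change point tc, chosen by grid search over `candidates`."""
    best = None
    for tc in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tc, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = ((y - X @ beta) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, tc, beta)
    return best[1], best[2]          # change point, coefficients
```

The second slope of the spline is b1 + b2, so b2 measures how sharply the rate of change shifts at the transition.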

19.
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of the data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach was evaluated for testing all possible pairwise differences among repeated measures marginal means in a Between-Subjects × Within-Subjects design. Specifically, the authors investigated Type I error and power rates for a number of simultaneous and stepwise multiple comparison procedures using SAS (1999) PROC MIXED in unbalanced designs when normality and covariance homogeneity assumptions did not hold. J. P. Shaffer's (1986) sequentially rejective step-down and Y. Hochberg's (1988) sequentially acceptive step-up Bonferroni procedures, based on an unstructured covariance structure, had superior Type I error control and power to detect true pairwise differences across the investigated conditions.
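Hochberg's (1988) step-up procedure, one of the two winners above, operates purely on the set of pairwise p-values: find the largest i with p(i) ≤ α/(m − i + 1) and reject that hypothesis along with all smaller-p ones. A sketch (function name is ours):

```python
import numpy as np

def hochberg(pvals, alpha=0.05):
    """Hochberg's (1988) sequentially acceptive step-up Bonferroni
    procedure. Returns a boolean rejection decision per hypothesis."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                       # indices, ascending p
    reject = np.zeros(m, dtype=bool)
    for i in range(m, 0, -1):                   # step up from the largest p
        if p[order[i - 1]] <= alpha / (m - i + 1):
            reject[order[:i]] = True            # reject H_(1), ..., H_(i)
            break
    return reject
```

Because it only attacks the p-values, the same code applies whether they come from PROC MIXED contrasts or any other pairwise tests.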

20.