Similar Documents
20 similar documents found
1.
The goal of this study was to investigate the performance of Hall’s transformation of the Brunner-Dette-Munk (BDM) and Welch-James (WJ) test statistics and Box-Cox’s data transformation in factorial designs when normality and variance homogeneity assumptions were violated separately and jointly. On the basis of unweighted marginal means, we performed a simulation study to explore the operating characteristics of the methods proposed for a variety of distributions with small sample sizes. Monte Carlo simulation results showed that when data were sampled from symmetric distributions, the error rates of the original BDM and WJ tests were scarcely affected by the lack of normality and homogeneity of variance. In contrast, when data were sampled from skewed distributions, the original BDM and WJ rates were not well controlled. Under such circumstances, the results clearly revealed that Hall’s transformation of the BDM and WJ tests provided generally better control of Type I error rates than did the same tests based on Box-Cox’s data transformation. Among all the methods considered in this study, we also found that Hall’s transformation of the BDM test yielded the best control of Type I errors, although it was often less powerful than either of the WJ tests when both approaches reasonably controlled the error rates.
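The Box-Cox step discussed above can be sketched in a few lines. This is not the authors' code, only a minimal illustration (assuming Python with NumPy/SciPy) of how a power transformation pulls a skewed sample toward symmetry before testing:

```python
import numpy as np
from scipy import stats

# Draw a skewed (lognormal) sample, similar in spirit to the skewed
# simulation conditions described above; seed and n are arbitrary.
rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200)

skew_before = stats.skew(x)

# Box-Cox requires strictly positive data; scipy estimates the power
# parameter lambda by maximum likelihood and returns it alongside
# the transformed values.
x_bc, lam = stats.boxcox(x)
skew_after = stats.skew(x_bc)
```

For lognormal data the fitted lambda is near zero (a log transform), and the transformed sample's skewness is far closer to zero than the original's.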

2.
Empirical Type I error and power rates were estimated for (a) the doubly multivariate model, (b) the Welch-James multivariate solution developed by Keselman, Carriere and Lix (1993) using Johansen's results (1980), and (c) the multivariate version of the modified Brown-Forsythe (1974) procedure. The performance of these procedures was investigated by testing within-blocks sources of variation in a multivariate split-plot design containing unequal covariance matrices. The results indicate that the doubly multivariate model did not provide effective Type I error control. The Welch-James procedure provided robust and powerful tests of the within-subjects main effect but liberal tests of the interaction effect. The results also indicate that the modified Brown-Forsythe procedure provided robust tests of within-subjects main and interaction effects, especially when the design was balanced or when group sizes and covariance matrices were positively paired.

3.
This paper explores the space of possibilities for public justification in morally diverse communities. Moral diversity is far more consequential than is typically appreciated, and as a result, we need to think more carefully about how our standard tools function in such environments. I argue that because of this diversity, public justification can (and should) be divorced from any claim of determinateness. Instead, we should focus our attention on procedures—in particular, what Rawls called cases of pure procedural justice. I use a modified form of the procedure “I cut, you choose” to demonstrate how perspectival diversity can make what looks like a simple procedure quite complex in practice. I use this to reframe disputes between classical liberal and contemporary liberal approaches to questions of public morality, arguing that classically liberal procedures, such as a reliance on the harm principle, can generate rather illiberal-looking outcomes when used in a morally diverse community. A seemingly less-principled approach, which simply balances burdens, appears to generate outcomes that look closer to what we would expect from classical liberalism. However, since both approaches are based on pure procedures that we can justify without reference to outcomes, it remains indeterminate which we ought to choose.

4.
The hypothesis that behavioral asymmetries with the dual task paradigm represent manual dominance was investigated with right- and left-handed males performing verbal and spatial tasks ordered by complexity. Lateralization was assessed for nonideational (perfunctory) and ideational (purposeful) components of tasks with multivariate and ANCOVA procedures. The outcomes of prerequisite tests showed the assumptions for conducting ANCOVA procedures were not satisfied with different handedness groups in the same design. However, results of the multivariate analyses suggest lateralized effects are more likely to represent the cognitive task when interference is high and may represent manual dominance when interference is low.

5.
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the higher power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, results also identified that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.
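The Monte Carlo logic behind abstracts like this one is straightforward to reproduce in miniature. The sketch below (an illustration, not the study's design) estimates empirical Type I error rates for a heterogeneity-robust test (Welch) against its pooled-variance counterpart when group sizes and variances are positively paired:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, alpha = 2000, 0.05
n1, n2 = 10, 40        # unequal group sizes
sd1, sd2 = 1.0, 3.0    # larger group has the larger variance

reject_pooled = reject_welch = 0
for _ in range(n_reps):
    a = rng.normal(0.0, sd1, n1)
    b = rng.normal(0.0, sd2, n2)  # H0 is true: equal means
    _, p_pooled = stats.ttest_ind(a, b, equal_var=True)
    _, p_welch = stats.ttest_ind(a, b, equal_var=False)
    reject_pooled += p_pooled < alpha
    reject_welch += p_welch < alpha

rate_pooled = reject_pooled / n_reps
rate_welch = reject_welch / n_reps
```

With positive pairing the pooled test is markedly conservative (empirical rate well below the nominal .05), while the Welch-type test stays close to nominal, mirroring the robustness pattern the abstracts describe.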

6.
Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed.
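One of the quantities the abstract mentions, the intraclass correlation of a random-intercept MLM, has a simple closed form: the between-cluster variance as a share of total variance. A minimal sketch (not the article's R function):

```python
def intraclass_correlation(tau00: float, sigma2: float) -> float:
    """ICC of a random-intercept multilevel model:
    tau00   - between-cluster (random intercept) variance
    sigma2  - within-cluster (level-1 residual) variance
    Returns the proportion of total variance lying between clusters."""
    return tau00 / (tau00 + sigma2)

# Example: between-cluster variance 2.0, within-cluster variance 6.0
icc = intraclass_correlation(2.0, 6.0)  # -> 0.25
```

An ICC of 0.25 says a quarter of the outcome variance is attributable to cluster membership, which is the scale on which the NPMM approximation is compared to the MLM.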

7.
In studies of detection and discrimination, data are often obtained in the form of a 2 × 2 matrix and then converted to an estimate of d' based on the assumptions that the underlying decision distributions are Gaussian and equal in variance. The statistical properties of the estimate of d' are well understood for data obtained using the yes-no procedure, but less effort has been devoted to the more commonly used two-interval forced choice (2IFC) procedure. The variance associated with the estimate is a function of true d' in both procedures, but for small values of true d' the variance of the estimate obtained using the 2IFC procedure is predicted to be less than the variance of the estimate obtained using yes-no; for large values of true d', the variance of the estimate obtained using the 2IFC procedure is predicted to be greater than that from yes-no. These results follow from standard assumptions about the relationship between the two procedures. The present paper reviews the statistical properties of the estimate of d' obtained using the two standard procedures and compares predicted variances as a function of true d' with the variance observed in values of d' obtained with a 2IFC procedure.
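The two estimators being compared can be written down directly. A sketch under the standard equal-variance Gaussian assumptions the abstract invokes (including the conventional √2 relationship between yes-no and 2IFC):

```python
from scipy.stats import norm

def dprime_yes_no(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance Gaussian d' from yes-no data:
    d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def dprime_2ifc(p_correct: float) -> float:
    """d' from 2IFC proportion correct, under the standard assumption
    p_correct = Phi(d' / sqrt(2)), i.e. d' = sqrt(2) * z(p_correct)."""
    return 2 ** 0.5 * norm.ppf(p_correct)
```

Because both are nonlinear functions of binomial proportions, their sampling variances differ as functions of true d', which is exactly the comparison the paper develops.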

8.
A composite step-down procedure, in which a set of step-down tests are summarized collectively with Fisher's combination statistic, was considered to test for multivariate mean equality in two-group designs. An approximate degrees of freedom (ADF) composite procedure based on trimmed/Winsorized estimators and a non-pooled estimate of error variance is proposed, and compared to a composite procedure based on trimmed/Winsorized estimators and a pooled estimate of error variance. The step-down procedures were also compared to Hotelling's T² and Johansen's ADF global procedure based on trimmed estimators in a simulation study. Type I error rates of the pooled step-down procedure were sensitive to covariance heterogeneity in unbalanced designs; error rates were similar to those of Hotelling's T² across all of the investigated conditions. Type I error rates of the ADF composite step-down procedure were insensitive to covariance heterogeneity and less sensitive to the number of dependent variables when sample size was small than error rates of Johansen's test. The ADF composite step-down procedure is recommended for testing hypotheses of mean equality in two-group designs except when the data are sampled from populations with different degrees of multivariate skewness.
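Fisher's combination statistic, the summarizing device named above, is a short computation: for k independent p-values, −2·Σ log pᵢ is chi-square with 2k degrees of freedom under the joint null. A minimal sketch (not the proposed composite procedure itself):

```python
import math
from scipy.stats import chi2

def fisher_combination(p_values):
    """Fisher's method: combine independent p-values via
    -2 * sum(log p_i), which is chi-square with 2k df under
    the joint null. Returns (statistic, combined p-value)."""
    stat = -2.0 * sum(math.log(p) for p in p_values)
    df = 2 * len(p_values)
    return stat, chi2.sf(stat, df)

# Three component step-down tests, none individually decisive:
stat, p_comb = fisher_combination([0.04, 0.10, 0.30])
```

Note how evidence accumulates: none of the three component tests is significant at a Bonferroni-corrected level, yet the combined p-value falls below .05.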

9.
To date, there is a lack of satisfactory inferential techniques for the analysis of multivariate data in factorial designs, when only minimal assumptions on the data can be made. Presently available methods are limited to very particular study designs or assume either multivariate normality or equal covariance matrices across groups, or they do not allow for an assessment of the interaction effects across within-subjects and between-subjects variables. We propose and methodologically validate a parametric bootstrap approach that does not suffer from any of the above limitations, and thus provides a rather general and comprehensive methodological route to inference for multivariate and repeated measures data. As an example application, we consider data from two different Alzheimer’s disease (AD) examination modalities that may be used for precise and early diagnosis, namely, single-photon emission computed tomography (SPECT) and electroencephalogram (EEG). These data violate the assumptions of classical multivariate methods, and indeed classical methods would not have yielded the same conclusions with regard to some of the factors involved.
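The parametric-bootstrap idea can be illustrated in a deliberately small setting. The sketch below is a toy univariate stand-in for the multivariate procedure described above (not the authors' method): fit the model under the null, resample from the fitted null model, and compare the observed statistic to the resampled reference distribution:

```python
import numpy as np

def parametric_bootstrap_p(x, y, n_boot=4000, seed=1):
    """Parametric-bootstrap p-value for a difference in means.
    Fits a common mean but group-specific normal variances under H0,
    so no pooled-variance / equal-covariance assumption is imposed."""
    rng = np.random.default_rng(seed)
    t_obs = abs(x.mean() - y.mean())
    mu0 = np.concatenate([x, y]).mean()      # H0 fit: common mean
    sx, sy = x.std(ddof=1), y.std(ddof=1)    # group-specific spread
    count = 0
    for _ in range(n_boot):
        xb = rng.normal(mu0, sx, x.size)
        yb = rng.normal(mu0, sy, y.size)
        count += abs(xb.mean() - yb.mean()) >= t_obs
    return count / n_boot

rng = np.random.default_rng(7)
p_null = parametric_bootstrap_p(rng.normal(0, 1, 25), rng.normal(0, 2, 25))
p_sig = parametric_bootstrap_p(rng.normal(0, 1, 25), rng.normal(3, 1, 25))
```

The same recipe generalizes to the multivariate factorial case by fitting group-specific mean vectors and covariance matrices under the null and bootstrapping a Wald-type statistic.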

10.
Repeated measures analyses of variance are the method of choice in many studies from experimental psychology and the neurosciences. Data from these fields are often characterized by small sample sizes, high numbers of factor levels of the within-subjects factor(s), and nonnormally distributed response variables such as response times. For a design with a single within-subjects factor, we investigated Type I error control in univariate tests with corrected degrees of freedom, the multivariate approach, and a mixed-model (multilevel) approach (SAS PROC MIXED) with Kenward–Roger’s adjusted degrees of freedom. We simulated multivariate normal and nonnormal distributions with varied population variance–covariance structures (spherical and nonspherical), sample sizes (N), and numbers of factor levels (K). For normally distributed data, as expected, the univariate approach with Huynh–Feldt correction controlled the Type I error rate with only very few exceptions, even if sample sizes as low as three were combined with high numbers of factor levels. The multivariate approach also controlled the Type I error rate, but it requires N ≥ K. PROC MIXED often showed acceptable control of the Type I error rate for normal data, but it also produced several liberal or conservative results. For nonnormal data, all of the procedures showed clear deviations from the nominal Type I error rate in many conditions, even for sample sizes greater than 50. Thus, none of these approaches can be considered robust if the response variable is nonnormally distributed. The results indicate that both the variance heterogeneity and covariance heterogeneity of the population covariance matrices affect the error rates.
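The degrees-of-freedom corrections mentioned above are driven by a sphericity estimate computed from the repeated-measures covariance matrix. A sketch of the Greenhouse-Geisser epsilon (the quantity Huynh-Feldt then adjusts for small samples); this is an illustration from the standard formula, not code from the study:

```python
import numpy as np
from scipy.linalg import null_space

def gg_epsilon(S: np.ndarray) -> float:
    """Greenhouse-Geisser sphericity estimate for a K x K covariance
    matrix S of K repeated measures. epsilon = 1 under sphericity;
    its lower bound is 1/(K-1). Corrected df = epsilon * (K-1)."""
    k = S.shape[0]
    # Orthonormal contrasts spanning the space orthogonal to the
    # constant vector (any such basis gives the same epsilon).
    C = null_space(np.ones((1, k))).T            # shape (k-1, k)
    M = C @ S @ C.T
    return float(np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M)))

# A compound-symmetric matrix satisfies sphericity -> epsilon = 1.
k, rho = 4, 0.5
S_cs = (1 - rho) * np.eye(k) + rho * np.ones((k, k))
eps_cs = gg_epsilon(S_cs)

# Unequal variances violate sphericity -> epsilon < 1.
eps_ns = gg_epsilon(np.diag([1.0, 2.0, 3.0, 4.0]))
```

Multiplying both numerator and denominator df of the univariate F test by epsilon yields the corrected test evaluated in the simulation.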

11.
A procedure for generating multivariate nonnormal distributions is proposed. Our procedure generates average values of intercorrelations much closer to population parameters than competing procedures for skewed and/or heavy tailed distributions and for small sample sizes. Also, it eliminates the necessity of conducting a factorization procedure on the population correlation matrix that underlies the random deviates, and it is simpler to code in a programming language (e.g., FORTRAN). Numerical examples demonstrating the procedures are given. Monte Carlo results indicate our procedure yields excellent agreement between population parameters and average values of intercorrelation, skew, and kurtosis.
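For context, the "factorization procedure" the abstract says it eliminates is the conventional step in competing generators: factor the target correlation matrix (e.g., by Cholesky decomposition) and mix independent deviates through that factor. A sketch of that conventional step, for contrast with the proposed method:

```python
import numpy as np

rng = np.random.default_rng(3)
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])  # target population correlation matrix

# Conventional factorization step: find lower-triangular L with
# L @ L.T == R, then impose the correlation on independent deviates.
L = np.linalg.cholesky(R)
Z = rng.standard_normal((20000, 2))   # independent N(0,1) columns
X = Z @ L.T                           # correlated deviates

r_sample = np.corrcoef(X, rowvar=False)[0, 1]  # close to 0.5
```

With normal deviates this reproduces the target correlation well; the paper's point is that when the marginals are then made skewed or heavy-tailed, factorization-based generators drift from the target intercorrelations, which its procedure avoids.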

12.
Co-occurrence of an object and affective stimuli does not always mean that the object and the stimuli are the same valence (e.g., false accusations that Richard is a crook). Contemporary theory posits that information about the (in)validity of co-occurrence has stronger influence on deliberate than automatic evaluation. However, available evidence supports that hypothesis only when the (in)validity information is delayed. Further, the existing evidence is open to alternative methodological accounts. In six high-powered experiments (total N = 1750), we modified previous procedures to minimize alternative explanations and examine whether delayed (in)validity information has a discrepant effect on automatic versus deliberate evaluation. Casting doubt on the generality of the hypothesis, we found more sensitivity of deliberate than automatic evaluation to delayed validity information only when automatic evaluation was measured with the Implicit Association Test and not with the evaluative priming task or the affective misattribution procedure.

13.
Multilevel modeling (MLM) is rapidly becoming the standard method of analyzing nested data, for example, data from students within multiple schools, data on multiple clients seen by a smaller number of therapists, and even longitudinal data. Although MLM analyses are likely to increase in frequency in counseling psychology research, many readers of counseling psychology journals have had only limited exposure to MLM concepts. This paper provides an overview of MLM that blends mathematical concepts with examples drawn from counseling psychology. This tutorial is intended to be a first step in learning about MLM; readers are referred to other sources for more advanced explorations of MLM. In addition to being a tutorial for understanding and perhaps even conducting MLM analyses, this paper reviews recent research in counseling psychology that has adopted a multilevel framework, and it provides ideas for MLM approaches to future research in counseling psychology.

14.
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of the data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach was evaluated for testing all possible pairwise differences among repeated measures marginal means in a Between-Subjects × Within-Subjects design. Specifically, the authors investigated Type I error and power rates for a number of simultaneous and stepwise multiple comparison procedures using SAS (1999) PROC MIXED in unbalanced designs when normality and covariance homogeneity assumptions did not hold. J. P. Shaffer's (1986) sequentially rejective step-down and Y. Hochberg's (1988) sequentially acceptive step-up Bonferroni procedures, based on an unstructured covariance structure, had superior Type I error control and power to detect true pairwise differences across the investigated conditions.
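Hochberg's (1988) step-up Bonferroni procedure, one of the two winners above, is simple to state: sort the m p-values in ascending order and reject hypotheses 1..k for the largest k with p(k) ≤ α/(m − k + 1). A minimal sketch of that decision rule (applied here to arbitrary p-values, not the study's data):

```python
import numpy as np

def hochberg_reject(p_values, alpha=0.05):
    """Hochberg's (1988) step-up procedure. With p-values sorted
    ascending, reject H(1)..H(k) for the largest k such that
    p(k) <= alpha / (m - k + 1). Returns a boolean rejection
    mask in the original order of the input."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for k in range(m, 0, -1):          # step up from the largest p
        if p[order[k - 1]] <= alpha / (m - k + 1):
            reject[order[:k]] = True
            break
    return reject

mask = hochberg_reject([0.30, 0.01, 0.04, 0.02])
```

Because the procedure steps up from the least significant p-value, it is uniformly more powerful than Holm's step-down rule while controlling the familywise error rate under the same conditions.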

15.
The remember-know procedure can be conducted in one or two steps. The one-step procedure does not include a recognition response (old-new) prior to the remember-know response. It is observed consistently that the one-step procedure leads to a more liberal placement of the response criterion, but it is unclear whether recognition accuracy is affected by the number of procedural steps. However, previous studies used bias-dependent measures of accuracy (A′ and d′). We manipulated the number of steps and confirmed the finding that the response criterion is more liberal with the one-step procedure. More importantly, we employed a signal detection theory bias-free accuracy measure (da) to show that varying the number of steps does not affect recognition accuracy, and we demonstrated that this pattern of results does not change when the dual process signal detection model (Yonelinas, 1997) is applied.
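The bias-free measure da used above comes from unequal-variance signal detection theory. A sketch of the standard formula (illustrative only, not the study's analysis code), where s is the slope of the zROC:

```python
from scipy.stats import norm

def d_a(hit_rate: float, fa_rate: float, s: float) -> float:
    """Unequal-variance SDT accuracy measure:
    d_a = sqrt(2 / (1 + s^2)) * (z(H) - s * z(F)),
    with s the zROC slope. When s = 1 it reduces to d'."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return (2.0 / (1.0 + s * s)) ** 0.5 * (zh - s * zf)
```

Unlike A′ or d′, da does not force the equal-variance assumption, so criterion shifts between the one-step and two-step procedures do not masquerade as accuracy differences.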

16.
In intervention studies having multiple outcomes, researchers often use a series of univariate tests (e.g., ANOVAs) to assess group mean differences. Previous research found that this approach properly controls Type I error and generally provides greater power compared to MANOVA, especially under realistic effect size and correlation combinations. However, when group differences are assessed for a specific outcome, these procedures are strictly univariate and do not consider the outcome correlations, which may be problematic with missing outcome data. Linear mixed or multivariate multilevel models (MVMMs), implemented with maximum likelihood estimation, present an alternative analysis option where outcome correlations are taken into account when specific group mean differences are estimated. In this study, we use simulation methods to compare the performance of separate independent samples t tests estimated with ordinary least squares and analogous t tests from MVMMs to assess two-group mean differences with multiple outcomes under small sample and missingness conditions. Study results indicated that a MVMM implemented with restricted maximum likelihood estimation combined with the Kenward–Roger correction had the best performance. Therefore, for intervention studies with small N and normally distributed multivariate outcomes, the Kenward–Roger procedure is recommended over traditional methods and conventional MVMM analyses, particularly with incomplete data.

17.
McIntyre and Farr (1979), Hanser, Mendel, and Wolins (1979), and Lissitz, Mendoza, Huberty, and Markos (1979) comment on the repeated measures Analysis of Variance design suggested by Arvey and Mossholder (1977) to detect job differences and similarities. These authors propose alternative procedures to determine job differences. We, in turn, point out here that the problems specified by these critics may not be as severe as they indicate, and that some problems are even nonexistent. Moreover, the alternative solutions they suggest have limitations of their own. Finally, we propose an additional procedure, a multivariate approach to repeated measures data, which might be useful in the context of detecting job differences. There appear to be assumptions and limitations to both the univariate and multivariate approaches to the problem; these assumptions and limitations are delineated more precisely in the present paper.

18.
Using a latent-variable modeling approach, relationships between social ties and depression were studied in a sample of 201 older adults. Both positive and negative ties were related to concurrent depression, whereas only negative ties predicted future depression. Nonnormally distributed scores were observed for several variables, and results based on maximum likelihood (ML), which assumes multivariate normality, were compared with those obtained using Browne's (1982, 1984) arbitrary distribution function (ADF) estimator for nonnormal variables. Inappropriate use of ML with nonnormal data yielded model chi-square values that were too large and standard errors that were too small. ML also failed to detect the over-time effect of negative ties on depression. The results suggest that the negative functions of social networks may causally influence depression and illustrate the need to test distributional assumptions when estimating latent-variable models.

19.
In this introduction to the special issue on applications of multilevel modeling (MLM) to communication research, we provide a conceptual overview of the benefits of MLM: the ability to simultaneously analyze data collected at multiple levels, the ease with which it can be used to assess trends and change over time, and its incorporation of the nested structure of data in the estimation process. We highlight ways in which MLM can be used to further theory and research in communication. In addition, we comment on the applications of MLM highlighted in this special issue and echo past calls for more multilevel theorizing and analysis in the field of communication.

20.
In recent years, multilevel linear models have been widely used for multilevel mediation analysis in the social sciences. Although the multilevel linear model separates the between-group and within-group components of a multilevel mediation effect, sampling error and measurement error remain. A better approach is to integrate the multilevel linear model into the structural equation modeling framework: by specifying latent variables and multiple indicators within a multilevel structural equation model, sampling error and measurement error can be corrected effectively, yielding more accurate estimates of the mediation effect; the framework also accommodates more types of multilevel mediation analysis and provides model fit indices. After introducing the new approach, we summarize a workflow for multilevel mediation analysis and use an example to demonstrate how to conduct such an analysis with MPLUS. Finally, we discuss future directions for multilevel structural equation modeling and multilevel mediation research.
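The between/within separation that both the multilevel linear model and the multilevel SEM rely on starts from group-mean centring of the level-1 predictor. A minimal sketch of that decomposition step (illustrative; the actual latent decomposition in MPLUS corrects it for sampling error):

```python
import numpy as np

def within_between_split(x, cluster):
    """Group-mean centring: split a level-1 variable into a between
    component (cluster means) and a within component (deviations from
    the cluster mean). Their sum reconstructs the original variable."""
    x = np.asarray(x, dtype=float)
    cluster = np.asarray(cluster)
    between = np.empty_like(x)
    for c in np.unique(cluster):
        idx = cluster == c
        between[idx] = x[idx].mean()
    within = x - between
    return between, within

x = [1.0, 3.0, 2.0, 6.0]
g = [0, 0, 1, 1]
between, within = within_between_split(x, g)
```

Regressing the mediator and outcome on these two components separately is what yields distinct between-group and within-group mediation paths; the multilevel SEM replaces the observed cluster means with latent means to remove the sampling error the abstract mentions.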
