Similar Documents
20 similar documents found.
1.
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the highest power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, results also identified that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.

2.
A general model is developed for the analysis of multivariate multilevel data structures. Special cases of the model include repeated measures designs, multiple matrix samples, multilevel latent variable models, multiple time series, and variance and covariance component models. We would like to acknowledge the helpful comments of Ruth Silver. We also wish to thank the referees for helping to clarify the paper. This work was partly carried out with research funds provided by the Economic and Social Research Council (U.K.).

3.
Procedures are described which enable researchers to implement balanced covariance designs with one to four independent variables. Use is made of three subroutines from IBM's Scientific Subroutine Package, which implement a general decomposition algorithm for balanced designs. FORTRAN instructions, illustrating the main calling program, are given.

4.
Event-related potentials (ERPs) are now widely collected in psychological research to determine the time courses of mental events. When event-related potentials from treatment conditions are compared, often there is no a priori information on when or how long the differences should occur. Testing simultaneously for differences over the entire set of time points creates a serious multiple comparison problem in which the probability of false positive errors must be controlled, while maintaining reasonable power for correct detection. In this work, we extend the factor-adjusted multiple testing procedure developed by Friguet, Kloareg, and Causeur (Journal of the American Statistical Association, 104, 1406-1415, 2009) to manage the multiplicity problem in ERP data analysis and compare its performance with that of the Benjamini and Hochberg (Journal of the Royal Statistical Society B, 57, 289-300, 1995) false discovery rate procedure, using simulations. The proposed procedure outperformed the latter in detecting more truly significant time points, in addition to reducing the variability of the false discovery rate, suggesting that corrections for mass multiple testing of ERPs can be much improved by modeling the strong local temporal dependencies.
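
For orientation, the Benjamini-Hochberg step-up rule that serves as the baseline in this comparison is easy to state in code. The sketch below is a generic implementation, not the authors' simulation code, and the p-values in the example are made up for illustration.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean
    rejection mask controlling the false discovery rate at level q."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * q; reject hypotheses 1..k.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Example: p-values from tests at 10 ERP time points (illustrative only).
print(benjamini_hochberg([0.001, 0.008, 0.03, 0.04, 0.2, 0.5, 0.6, 0.7, 0.9, 0.04]))
```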

5.
Configural frequency analysis (CFA) is a widely used method of explorative data analysis. It tries to detect patterns in the data that occur significantly more or significantly less often than expected by chance. Patterns which occur more often than expected by chance are called CFA types, while those which occur less often than expected by chance are called CFA antitypes. The patterns detected are used to generate knowledge about the mechanisms underlying the data. We investigate the ability of CFA to detect adequate types and antitypes in a number of simulation studies. The basic idea of these studies is to predefine sets of types and antitypes and a mechanism which uses them to create a simulated data set. This simulated data set is then analysed with CFA and the detected types and antitypes are compared to the predefined ones. The predefined types and antitypes together with the method to generate the data are called a data generation model. The results of the simulation studies show that CFA can be used in quite different research contexts to detect structural dependencies in observed data. In addition, we can learn from these simulation studies how much data is necessary to enable CFA to reconstruct the predefined types and antitypes with sufficient accuracy. For one of the data generation models investigated, which is implicitly based on knowledge space theory, it was shown that zero-order CFA can be used to reconstruct the predefined types (which can be interpreted in this context as knowledge states) with sufficient accuracy. Theoretical considerations show that first-order CFA cannot be used for this data generation model. Thus, it is wrong to consider first-order CFA, as is done in many publications, as the standard or even the only method of CFA.
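
To make the type/antitype logic concrete, here is a minimal first-order CFA sketch: expected cell probabilities come from the product of the marginal relative frequencies (independence), and each observed cell count is compared with its expectation by a binomial test. This is a generic illustration, not the paper's simulation setup, and a real analysis would also protect the significance level against multiple testing (e.g., by a Bonferroni adjustment).

```python
from collections import Counter
from math import prod

from scipy.stats import binomtest

def first_order_cfa(rows, alpha=0.05):
    """First-order CFA sketch: expected cell probabilities assume
    independent variables (product of marginal relative frequencies).
    Note: alpha is used unadjusted here; a real CFA would correct it."""
    n = len(rows)
    k = len(rows[0])
    marginals = [Counter(r[j] for r in rows) for j in range(k)]
    observed = Counter(map(tuple, rows))
    results = {}
    for cell, obs in observed.items():
        p = prod(marginals[j][cell[j]] / n for j in range(k))
        pval = binomtest(obs, n, p).pvalue
        label = "type" if obs > n * p else "antitype"
        results[cell] = (obs, round(n * p, 2), round(pval, 4),
                         label if pval < alpha else "-")
    return results

# Illustrative two-variable data: configurations (1,1) and (0,0) dominate.
data = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0), (0, 0), (1, 0), (0, 1)]
for cell, res in first_order_cfa(data).items():
    print(cell, res)
```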

6.
7.
8.
This paper discusses least squares methods for fitting a reformulation of the general Euclidean model for the external analysis of preference data. The reformulated subject weights refer to a common set of reference vectors for all subjects and hence are comparable across subjects. If the rotation of the stimulus space is fixed, the subject weight estimates in the model are uniquely determined. Weight estimates can be guaranteed nonnegative. While the reformulation is a metric model for single stimulus data, the paper briefly discusses extensions to nonmetric, pairwise, and logistic models. The reformulated model is less general than Carroll's earlier formulation. The author is grateful to Christopher J. Nachtsheim for his helpful suggestions.

9.
10.
11.
A simple and very general algorithm for oblique rotation is identified. While motivated by the rotation problem in factor analysis, it may be used to minimize almost any function of a not necessarily square matrix whose columns are restricted to have unit length. The algorithm has two steps. The first is to compute the gradient of the rotation criterion and the second is to project this onto a manifold of matrices with unit length columns. For this reason it is called a gradient projection algorithm. Because the projection step is very simple, implementation of the algorithm involves little more than computing the gradient of the rotation criterion, which for many applications is very simple. It is proven that the algorithm is strictly monotone; that is, as long as it is not already at a stationary point, each step will decrease the value of the criterion. Examples from a variety of areas are used to demonstrate the algorithm, including oblimin rotation, target rotation, simplimax rotation, and rotation to similarity and simplicity simultaneously. Although it may be so used, the algorithm is intended not as a standard algorithm for well-established problems, but rather as a tool for investigating new methods where its generality and simplicity may save an investigator substantial effort. The author would like to thank the review team for their insights and recommendations.
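
The two-step iteration is short enough to sketch directly: compute the gradient of the criterion, take a step, and project back onto the manifold by renormalizing the columns, halving the step as needed to keep the descent strictly monotone. The toy target-matching criterion below is an assumption chosen for illustration, not one of the paper's worked examples.

```python
import numpy as np

def gradient_projection(f, grad, T0, alpha=1.0, tol=1e-9, max_iter=500):
    """Minimize f(T) over matrices whose columns have unit length,
    via gradient steps followed by projection (column renormalization)."""
    T = T0 / np.linalg.norm(T0, axis=0)
    ft = f(T)
    for _ in range(max_iter):
        G = grad(T)
        step, improved = alpha, False
        while step > 1e-12:
            X = T - step * G
            X = X / np.linalg.norm(X, axis=0)  # projection: renormalize columns
            fx = f(X)
            if fx < ft:                        # enforce strictly monotone descent
                improved = True
                break
            step /= 2.0
        if not improved:
            return T                           # numerically stationary
        if ft - fx < tol:
            return X
        T, ft = X, fx
    return T

# Toy criterion (an assumption for illustration): pull T toward a target M.
M = np.eye(3)
f = lambda T: float(np.sum((T - M) ** 2))
g = lambda T: 2.0 * (T - M)
T_hat = gradient_projection(f, g, np.random.default_rng(0).normal(size=(3, 3)))
print(np.round(T_hat, 3))
```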

12.
13.
The Asymptotic Classification Theory of Cognitive Diagnosis (Chiu et al., 2009, Psychometrika, 74, 633–665) determined the conditions that cognitive diagnosis models must satisfy so that the correct assignment of examinees to proficiency classes is guaranteed when non‐parametric classification methods are used. These conditions have only been proven for the Deterministic Input Noisy Output AND gate model. For other cognitive diagnosis models, no theoretical legitimization exists for using non‐parametric classification techniques for assigning examinees to proficiency classes. The specific statistical properties of different cognitive diagnosis models require tailored proofs of the conditions of the Asymptotic Classification Theory of Cognitive Diagnosis for each individual model – a tedious undertaking in light of the numerous models presented in the literature. In this paper a different way is presented to address this task. The unified mathematical framework of general cognitive diagnosis models is used as a theoretical basis for a general proof that under mild regularity conditions any cognitive diagnosis model is covered by the Asymptotic Classification Theory of Cognitive Diagnosis.

14.
15.
A simple multiple imputation-based method is proposed to deal with missing data in exploratory factor analysis. Confidence intervals are obtained for the proportion of explained variance. Simulations and real data analysis are used to investigate and illustrate the use and performance of our proposal.
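
One way to picture the proposal: impute the missing entries several times, compute the proportion of explained variance for each completed data set, and summarize the spread across imputations. The sketch below uses a crude per-column normal draw as the imputation model and a simple percentile interval; both are stand-ins for illustration, not the paper's method.

```python
import numpy as np

def mi_explained_variance(X, n_factors, m=20, seed=0):
    """Hedged sketch: multiple imputation for the proportion of variance
    captured by the first n_factors principal axes of the correlation
    matrix. The per-column normal imputation is a crude stand-in."""
    rng = np.random.default_rng(seed)
    props = []
    for _ in range(m):
        Xi = X.copy()
        for j in range(X.shape[1]):
            miss = np.isnan(Xi[:, j])
            obs = Xi[~miss, j]
            Xi[miss, j] = rng.normal(obs.mean(), obs.std(ddof=1), miss.sum())
        eig = np.linalg.eigvalsh(np.corrcoef(Xi, rowvar=False))[::-1]
        props.append(eig[:n_factors].sum() / eig.sum())
    lo, hi = np.percentile(props, [2.5, 97.5])  # simple percentile interval
    return float(np.mean(props)), (float(lo), float(hi))

# Illustrative data: 200 cases, 6 variables, ~10% entries missing at random.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
X[rng.random(X.shape) < 0.1] = np.nan
print(mi_explained_variance(X, n_factors=2))
```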

16.
17.
Differences in performance with various stimulus-response mappings are among the most prevalent findings for binary choice reaction tasks. The authors show that perceptual or conceptual similarity is not necessary to obtain mapping effects; a type of structural similarity is sufficient. Specifically, stimulus and response alternatives are coded as positive and negative polarity along several dimensions, and polarity correspondence is sufficient to produce mapping effects. The authors make the case for this polarity correspondence principle using the literature on word-picture verification and then provide evidence that polarity correspondence is a determinant of mapping effects in orthogonal stimulus-response compatibility, numerical judgment, and implicit association tasks. The authors conclude by discussing implications of this principle for interpretation of results from binary choice tasks and future model development.

18.
From a theoretical point of view, paired comparisons and the law of comparative judgment provide an excellent approach to the problem of psychological measurement. However, if a reasonably large number of stimuli are to be investigated, paired comparisons become extremely time-consuming and fatiguing to the subjects. A balanced incomplete block design, requiring multiple rank order judgments for each subject, provides an efficient experimental method for obtaining paired-comparison judgments. Features of the analysis proposed for this design are discussed in detail. A program for the analysis is available for the IBM 650 electronic computer. Prepared in connection with research done under Office of Naval Research Contract Nonr 1858-(15), Project Designation NR 150-088, and National Science Foundation Grant G-3407. Reproduction of any part of this material is permitted for any purpose of the United States Government.
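
The key bookkeeping step, recovering paired-comparison tallies from within-block rank orders, is simple to sketch. The blocks below form a (7, 7, 3, 3, 1) balanced incomplete block design (the Fano plane), so every pair of stimuli appears together exactly once; each block is assumed to be listed from most to least preferred, and the rankings are illustrative only.

```python
from collections import defaultdict
from itertools import combinations

def paired_comparisons_from_rankings(rankings):
    """Tally the pairwise preferences implied by rank orders.
    Each ranking lists one block's stimuli from most to least
    preferred, so every ordered pair within a block is one 'win'."""
    wins = defaultdict(int)
    for block in rankings:
        for better, worse in combinations(block, 2):
            wins[(better, worse)] += 1
    return dict(wins)

# A BIBD on 7 stimuli: 7 blocks of size 3, every pair together exactly once.
blocks = [("A", "B", "D"), ("B", "C", "E"), ("C", "D", "F"), ("D", "E", "G"),
          ("E", "F", "A"), ("F", "G", "B"), ("G", "A", "C")]
print(paired_comparisons_from_rankings(blocks))
```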

19.
In this paper, two methods of sequential analysis are applied to hypothetical observational data. The first method employs the conventional “conditional probability” approach, illustrated using the GSEQ program (Bakeman & Quera, 1995). In order to overcome some of the difficulties associated with the conditional probability approach, the second method employs a new “normalized and pooled” approach. Essentially, by normalizing periods of time preceding, during, and following each occurrence of a nominated “given” behavior, the proportion of time units devoted to a “target” behavior can be estimated and then pooled across all occurrences of the given behavior. A summary diagram representing the likelihood that the target behavior precedes, occurs concurrently with, and follows the given behavior can then be constructed. Elements of this summary diagram can also be quantified. Given the graphical nature of the output, and its ease of use, the normalized and pooled approach may help to promote the use of sequential analysis in applied settings.
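
A minimal sketch of the normalized-and-pooled idea follows: stretch the period before, during, and after each occurrence of the given behavior onto a fixed number of bins, score the proportion of time units in which the target behavior occurs, and average across occurrences. The choice of window boundaries (here, extending to the neighboring occurrence) is an assumption, not taken from the paper.

```python
import numpy as np

def normalized_pooled_profile(target_on, given_episodes, n_bins=10):
    """For each (start, end) episode of the given behavior, normalize the
    preceding, concurrent, and following periods onto n_bins each, score
    the proportion of target-behavior time units per bin, and pool
    (average) the resulting profiles across episodes."""
    target_on = np.asarray(target_on, dtype=float)
    T = len(target_on)
    profiles = []
    for i, (s, e) in enumerate(given_episodes):
        prev_end = 0 if i == 0 else given_episodes[i - 1][1]
        next_start = T if i == len(given_episodes) - 1 else given_episodes[i + 1][0]
        segments = [target_on[prev_end:s], target_on[s:e], target_on[e:next_start]]
        row = []
        for seg in segments:
            # Map the segment onto n_bins of equal normalized width.
            idx = np.linspace(0, len(seg), n_bins + 1).astype(int)
            row.extend(seg[a:b].mean() if b > a else 0.0
                       for a, b in zip(idx, idx[1:]))
        profiles.append(row)
    return np.mean(profiles, axis=0)  # pooled profile: 3 * n_bins proportions

# Illustrative data: 0/1 target occupancy over 100 time units, two episodes.
target = np.zeros(100); target[18:25] = 1; target[60:70] = 1
print(normalized_pooled_profile(target, [(20, 30), (62, 72)], n_bins=5).round(2))
```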

20.
A method of exhaustion has been described for calculating regression coefficients. This method dispenses with the solution of simultaneous equations but utilizes a process of successive extraction in obtaining the regression weights, where each successive multiple correlation R is maximized. This procedure permits the worker to discard, as he goes along, those weights which are deemed unsatisfactory for purposes of prediction. The coefficients and the R in a problem involving a criterion and six independent variables were calculated in sixty minutes. The R's obtained by this method are smaller than those yielded by the Doolittle technique, but in the problems which have been considered this discrepancy has not exceeded .05.
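
Read in modern terms, the procedure resembles forward selection that maximizes the multiple correlation R at each step, which is what lets unsatisfactory weights be discarded along the way. The sketch below implements that reading with ordinary least squares; it is an interpretation for illustration, not the original hand-computation scheme.

```python
import numpy as np

def multiple_r(Xs, y):
    """Multiple correlation R of an ordinary least squares fit."""
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return np.corrcoef(Xd @ beta, y)[0, 1]

def successive_extraction(X, y):
    """Add one predictor at a time, each time choosing the variable that
    maximizes R; the returned path shows R after each addition, so weak
    predictors can be discarded as one goes along."""
    n, k = X.shape
    chosen, path = [], []
    for _ in range(k):
        best = max((j for j in range(k) if j not in chosen),
                   key=lambda j: multiple_r(X[:, chosen + [j]], y))
        chosen.append(best)
        path.append((best, round(multiple_r(X[:, chosen], y), 3)))
    return path

# Illustrative data: only predictors 0 and 3 actually drive the criterion.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=80)
print(successive_extraction(X, y))
```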
