Similar Documents
20 similar documents found.
1.
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule’s Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
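The two most basic quantities these programs produce, transitional frequencies and transitional probabilities, can be illustrated with a short sketch (this is not the SAS/SPSS code the paper describes; the function name and integer coding of events are assumptions for illustration):

```python
import numpy as np

def lag_sequential(codes, n_states, lag=1):
    """Count lag-k transitions in a stream of integer event codes and
    return transitional frequencies and transitional probabilities."""
    codes = np.asarray(codes)
    freq = np.zeros((n_states, n_states))
    for a, b in zip(codes[:-lag], codes[lag:]):
        freq[a, b] += 1
    row_totals = freq.sum(axis=1, keepdims=True)
    # Transitional probability P(target | given), guarding empty rows
    prob = np.divide(freq, row_totals, out=np.zeros_like(freq),
                     where=row_totals > 0)
    return freq, prob

freq, prob = lag_sequential([0, 1, 0, 1, 1, 0, 1], n_states=2)
```

Expected transitional frequencies, adjusted residuals, and the other statistics listed above are all derived from these two tables.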

2.
Dual scaling is a set of related techniques for the analysis of a wide assortment of categorical data types, including contingency tables and multiple-choice, rank order, and paired comparison data. When applied to a contingency table, dual scaling also goes by the name "correspondence analysis," and when applied to multiple-choice data with more than two items, "optimal scaling" or "multiple correspondence analysis." The aim of this article is to explain in nontechnical terms what dual scaling offers to an analysis of contingency table and multiple-choice data.
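For the contingency-table case, the correspondence-analysis scores can be sketched as the singular value decomposition of the table's standardized residuals (a minimal numpy illustration of the standard computation, not the article's software; the function name is an assumption):

```python
import numpy as np

def correspondence_analysis(table):
    """Row and column principal coordinates from the SVD of the
    standardized residuals of a two-way contingency table."""
    table = np.asarray(table, dtype=float)
    P = table / table.sum()                       # correspondence matrix
    r = P.sum(axis=1)                             # row masses
    c = P.sum(axis=0)                             # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U / np.sqrt(r[:, None])) * sv         # row principal coordinates
    cols = (Vt.T / np.sqrt(c[:, None])) * sv      # column principal coordinates
    return rows, cols, sv

table = np.array([[20., 5.], [5., 20.]])
rows, cols, sv = correspondence_analysis(table)
```

The squared singular values sum to the table's total inertia (Pearson chi-square divided by the grand total), which is the quantity dual scaling decomposes.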

4.
In cognitive modeling, data are often categorical observations taken over participants and items. Usually subsets of these observations are pooled and analyzed by a cognitive model assuming the category counts come from a multinomial distribution with the same model parameters underlying all observations. It is well known that if there are individual differences in participants and/or items, a model analysis of the pooled data may be quite misleading, and in such cases it may be appropriate to augment the cognitive model with parametric random effects assumptions. On the other hand, if random effects are incorporated into a cognitive model when they are not needed, the resulting model is more flexible than the multinomial model that assumes no heterogeneity, and this may lead to overfitting. This article presents Monte Carlo statistical tests for directly detecting individual participant and/or item heterogeneity that depend only on the data structure itself. These tests are based on the fact that heterogeneity in participants and/or items results in overdispersion of certain category count statistics. It is argued that the methods developed in the article should be applied to any set of participant × item categorical data prior to cognitive model-based analyses.
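The overdispersion idea behind these tests can be sketched as follows: under homogeneity, every participant's category counts follow one shared multinomial, so the across-participant variance of the counts has a known Monte Carlo reference distribution. This is an illustrative statistic and test, not the article's exact procedures:

```python
import numpy as np

rng = np.random.default_rng(0)

def overdispersion_pvalue(counts, n_sims=500):
    """Monte Carlo heterogeneity test: compare the across-participant
    variance of category counts to its distribution under a common
    multinomial model fitted to the pooled data.
    `counts` is a (participants x categories) array."""
    counts = np.asarray(counts, dtype=float)
    n_trials = counts.sum(axis=1).astype(int)
    p_pooled = counts.sum(axis=0) / counts.sum()
    stat = counts.var(axis=0).sum()   # total across-participant dispersion
    exceed = 0
    for _ in range(n_sims):
        sim = np.array([rng.multinomial(n, p_pooled) for n in n_trials])
        if sim.var(axis=0).sum() >= stat:
            exceed += 1
    return (exceed + 1) / (n_sims + 1)

# Homogeneous participants: one shared multinomial
homog = rng.multinomial(20, [0.5, 0.3, 0.2], size=30)
p_hom = overdispersion_pvalue(homog)

# Two very different participant groups: strong overdispersion
hetero = np.vstack([rng.multinomial(20, [0.9, 0.05, 0.05], size=15),
                    rng.multinomial(20, [0.05, 0.9, 0.05], size=15)])
p_het = overdispersion_pvalue(hetero)
```

With heterogeneous participants the observed dispersion far exceeds every simulated value, giving a small p-value.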

5.
A general approach to the analysis of subjective categorical data is considered, in which agreement matrices of two or more raters are directly expressed in terms of error and agreement parameters. The method provides focused analyses of ratings from several raters whose ratings have measurement error distributions that may induce bias in the evaluation of substantive questions of interest. Each rater's judgment process is modeled as a mixture of two components: an error variable that is unique to the rater in question and an agreement variable that operationalizes the true values of the units of observation. The statistical problems of identification, estimation, and testing of such measurement models are discussed.

The general model is applied in several special cases. The simplest situation is that underlying Cohen's Kappa, where two raters place units into unordered categories. The model provides a generalization and systematization of the kappa idea of correcting for chance agreement. In applications with typical research designs, including a between-subjects design and a mixed within-subjects, between-subjects design, the model is shown to disentangle structural and measurement components of the observations, thereby controlling for possible confounding effects of systematic rater bias. Situations considered include the case of more than two raters as well as the case of ordered categories. The different analyses are illustrated by means of real data sets.

The authors wish to thank Lawrence Hubert and Ivo Molenaar for helpful and detailed comments on a previous draft of this paper. Thanks are also due to Jens Möller and Bernd Strauß for the data from the 1992 Olympic Games. We thank the editor and three anonymous reviewers for valuable comments on an earlier draft.
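The chance-correction idea that this model generalizes is plain Cohen's kappa, which can be computed directly from a two-rater agreement matrix (a minimal sketch of the classical statistic, not the paper's latent-variable model):

```python
import numpy as np

def cohens_kappa(agreement):
    """Cohen's kappa from a (categories x categories) cross-tabulation
    of two raters' classifications of the same units."""
    table = np.asarray(agreement, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                              # observed agreement
    p_chance = (table.sum(axis=1) @ table.sum(axis=0)) / n**2  # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)

kappa = cohens_kappa([[45, 5], [5, 45]])
```

Kappa is 1 under perfect agreement and 0 when observed agreement equals what the marginal frequencies alone would produce by chance.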

6.
Markov chains are probabilistic models for sequences of categorical events, with applications throughout scientific psychology. This paper provides a method for analyzing data consisting of event sequences and covariate observations. It is assumed that each sequence is a Markov process characterized by a distinct transition probability matrix. The objective is to use the covariate data to explain differences between individuals in the transition probability matrices characterizing their sequential data. The elements of the transition probability matrices are written as functions of a vector of latent variables, with variation in the latent variables explained through a multivariate regression on the covariates. The regression is estimated using the EM algorithm and requires the numerical calculation of a multivariate integral. An example using simulated cognitive developmental data is presented, which shows that the estimation of individual variation in the parameters of a probability model may have substantial theoretical importance, even when individual differences are not the focus of the investigator's concerns.

Research contributing to this article was supported by B.R.S. Subgrant 5-35345 from the University of Virginia. I thank the DADA Group, Bill Fabricius, Don Hartmann, William Griffin, Jack McArdle, Ivo Molenaar, Ronald Schoenberg, Simon Tavaré, and several anonymous reviewers for their discussion of these points.
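The key construction, writing the elements of a transition probability matrix as functions of underlying variables that can depend on covariates, can be sketched with a row-wise softmax link (an illustrative parameterization in the spirit of the model; the coefficient matrices here are hypothetical, and this is not the paper's EM estimator):

```python
import numpy as np

def transition_matrix(beta0, beta1, x):
    """Two-state transition matrix whose row-wise logits are linear in a
    scalar covariate x; a softmax over each row guarantees that every
    row is a proper probability distribution."""
    logits = beta0 + beta1 * x
    expl = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return expl / expl.sum(axis=1, keepdims=True)

# Hypothetical example: the covariate pushes both states toward switching
beta0 = np.zeros((2, 2))
beta1 = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
P = transition_matrix(beta0, beta1, x=2.0)
```

Because rows are normalized by construction, the regression on covariates can operate on the unconstrained logits.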

7.
Parallel analysis has been well documented to be an effective and accurate method for determining the number of factors to retain in exploratory factor analysis. The O'Connor (2000) procedure for parallel analysis has many benefits and is widely applied, yet it has a few shortcomings in dealing with missing data and ordinal variables. To address these technical issues, we adapted and modified the O'Connor procedure to provide an alternative method that better approximates the ordinal data by factoring in the frequency distributions of the variables (e.g., the number of response categories and the frequency of each response category per variable). The theoretical and practical differences between the modified procedure and the O'Connor procedure are discussed. The SAS syntax for implementing this modified procedure is also provided.
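The logic underlying all variants of parallel analysis is Horn's criterion: retain factors whose observed eigenvalues exceed the mean eigenvalues of random data of the same dimensions. A minimal continuous-data sketch (not the O'Connor SAS procedure or the ordinal modification described above):

```python
import numpy as np

rng = np.random.default_rng(1)

def parallel_analysis(data, n_sims=200):
    """Horn's parallel analysis: count the leading eigenvalues of the
    observed correlation matrix that exceed the mean eigenvalues of
    same-sized random normal data."""
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_sims
    keep = 0
    for o, r in zip(obs, rand):
        if o > r:
            keep += 1
        else:
            break
    return keep

# One common factor drives six observed variables
factor = rng.standard_normal((200, 1))
data = 0.8 * factor + 0.6 * rng.standard_normal((200, 6))
n_factors = parallel_analysis(data)
```

The ordinal modification replaces the normal reference data with random data matching each variable's response categories and category frequencies.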

8.
We develop a method for the analysis of multivariate ordinal categorical data with misclassification based on the latent normal variable approach. Misclassification arises if a subject has been classified into a category that does not truly reflect its actual state, and can occur with one or more variables. A basic framework is developed to enable the analysis of two types of data. The first corresponds to a single sample that is obtained from a fallible design that may lead to misclassified data. The other corresponds to data that is obtained by double sampling. Double sampling data consists of two parts: a sample that is obtained by classifying subjects using the fallible design only and a sample that is obtained by classifying subjects using both fallible and true designs, which is assumed to have no misclassification. A unified expectation–maximization approach is developed to find the maximum likelihood estimate of model parameters. Simulation studies and examples that are based on real data are used to demonstrate the applicability and practicability of the proposed methods.

9.
In quantifying categorical data, constraints play an important role in characterizing the outcome. In the Guttman-type quantification of contingency tables and multiple-choice data (incidence data), the trivial solution due to the marginal constraints is typically removed before quantification; this removal, however, has the effect of distorting the shape of the total space. Awareness of this is important for the interpretation of the quantified outcome. The present study provides some relevant formulas for those cases that are affected by the trivial solution and those cases that are not. The characterization of the total space used by the Guttman-type quantification and pertinent discussion are presented.

This study was supported by a grant from The Natural Sciences and Engineering Research Council of Canada to S. Nishisato.

10.
Tutorial on modeling ordered categorical response data

14.
This paper reports on a simulation study that evaluated the performance of five structural equation model test statistics appropriate for categorical data. Both Type I error rate and power were investigated. Different model sizes, sample sizes, numbers of categories, and threshold distributions were considered. Statistics associated with both the diagonally weighted least squares (cat-DWLS) estimator and the unweighted least squares (cat-ULS) estimator were studied. Recent research suggests that cat-ULS parameter estimates and robust standard errors slightly outperform cat-DWLS estimates and robust standard errors (Forero, Maydeu-Olivares, & Gallardo-Pujol, 2009). The findings of the present research suggest that the mean- and variance-adjusted test statistic associated with the cat-ULS estimator performs best overall. A new version of this statistic now exists that does not require a degrees-of-freedom adjustment (Asparouhov & Muthén, 2010), and this statistic is recommended. Overall, the cat-ULS estimator is recommended over cat-DWLS, particularly in small to medium sample sizes.

15.
This paper suggests a method to fill in missing categorical data with reasonable replacements. The replacements maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical; at that point, the technique gives reasonable results with up to 10–15% missing data.

We thank Anneke Bloemhoff of NIPG-TNO for compiling the Dutch Life Style Survey data and making them available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors, and several anonymous reviewers for constructive suggestions.

16.
Individualized contingency contracts can be a powerful intervention for helping at-risk students succeed in regular education. Several issues should be considered in developing and implementing a contingency contract, including developing precise definitions of the problem behaviors and prioritizing problem behaviors for contract intervention. In addition, activities should be undertaken that facilitate collaboration among special services providers, regular education teachers, the student in question, and possibly his or her parents. Next, the appropriate contract contingencies need to be selected and the criterion should be set at the appropriate level of difficulty. The actual writing of the contract raises several issues, including the use of language and concepts that are both attractive and developmentally appropriate for the student. Guidelines for incorporating cognitive interventions into contracts are presented. Finally, implementation and generalization issues are discussed.

19.
This paper studies the problem of scaling ordinal categorical data observed over two or more sets of categories measuring a single characteristic. Scaling is obtained by solving a constrained entropy model which finds the most probable values of the scales given the data. A Kullback-Leibler statistic is generated which operationalizes a measure of the strength of consistency among the sets of categories. A variety of data with two and three sets of categories are analyzed using the entropy approach.

This research was partially supported by the Air Force Office of Scientific Research under grant AFOSR-83-0234. The comments of the editor and referees have been most helpful in improving the paper and in bringing several additional references to our attention.

20.
Levels-of-analysis issues arise whenever individual-level data are collected from more than one person from the same dyad, family, classroom, work group, or other interaction unit. Interdependence in data from individuals in the same interaction units also violates the independence-of-observations assumption that underlies commonly used statistical tests. This article describes the data analysis challenges that are presented by these issues and presents SPSS and SAS programs for conducting appropriate analyses. The programs conduct the within-and-between analyses described by Dansereau, Alutto, and Yammarino (1984) and the dyad-level analyses described by Gonzalez and Griffin (1999) and Griffin and Gonzalez (1995). Contrasts with general multilevel modeling procedures are then discussed.
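The basic building block of within-and-between analysis is the decomposition of a variable's total sum of squares into between-unit and within-unit parts, summarized by eta coefficients. A minimal sketch (illustrative only, not the SPSS/SAS programs described above):

```python
import numpy as np

def within_between_etas(values, groups):
    """Split a variable's total sum of squares into between-group and
    within-group parts and return the corresponding eta coefficients."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    grand = values.mean()
    ss_total = ((values - grand) ** 2).sum()
    ss_between = sum(
        (groups == g).sum() * (values[groups == g].mean() - grand) ** 2
        for g in np.unique(groups)
    )
    eta_between = np.sqrt(ss_between / ss_total)
    eta_within = np.sqrt(1.0 - ss_between / ss_total)
    return eta_between, eta_within

# Toy dyad data: all variation lies between dyads
values = np.array([1.0, 1.0, 3.0, 3.0])
groups = np.array(["d1", "d1", "d2", "d2"])
eb, ew = within_between_etas(values, groups)
```

Comparing the relative sizes of the between and within etas is what drives the inferences about the appropriate level of analysis.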

