Similar Documents
20 similar documents found; search time: 15 ms.
1.
2.
Parallel analysis has been well documented to be an effective and accurate method for determining the number of factors to retain in exploratory factor analysis. The O'Connor (2000) procedure for parallel analysis has many benefits and is widely applied, yet it has a few shortcomings in dealing with missing data and ordinal variables. To address these technical issues, we adapted and modified the O'Connor procedure to provide an alternative method that better approximates the ordinal data by factoring in the frequency distributions of the variables (e.g., the number of response categories and the frequency of each response category per variable). The theoretical and practical differences between the modified procedure and the O'Connor procedure are discussed. The SAS syntax for implementing this modified procedure is also provided.
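For readers unfamiliar with the technique, the core of Horn's parallel analysis (which the O'Connor procedure implements) can be sketched in a few lines: retain a factor only if its observed eigenvalue exceeds the corresponding eigenvalue distribution from random data of the same shape. This is a minimal illustration under assumed defaults, not the O'Connor SAS macro or the authors' modified ordinal version; the function name and parameters are hypothetical.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Horn's parallel analysis (illustrative sketch): retain factors whose
    observed correlation-matrix eigenvalues exceed the chosen percentile of
    eigenvalues computed from random normal data of the same n x p shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigvalsh returns ascending order; reverse to descending
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eig = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eig[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    # per-position threshold across simulations
    threshold = np.percentile(sim_eig, percentile, axis=0)
    return int(np.sum(obs_eig > threshold))
```

With one strong common factor underlying six variables, the sketch retains exactly one factor; the modified procedure discussed above replaces the normal random data with random ordinal data matching the observed frequency distributions.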

3.
Structural equation models are very popular for studying relationships among observed and latent variables. However, the existing theory and computer packages are developed mainly under the assumption of normality, and hence cannot be satisfactorily applied to non‐normal and ordered categorical data that are common in behavioural, social and psychological research. In this paper, we develop a Bayesian approach to the analysis of structural equation models in which the manifest variables are ordered categorical and/or from an exponential family. In this framework, models with a mixture of binomial, ordered categorical and normal variables can be analysed. Bayesian estimates of the unknown parameters are obtained by a computational procedure that combines the Gibbs sampler and the Metropolis–Hastings algorithm. Some goodness‐of‐fit statistics are proposed to evaluate the fit of the posited model. The methodology is illustrated by results obtained from a simulation study and analysis of a real data set about non‐adherence of hypertension patients in a medical treatment scheme.

4.
Although few would dispute the usefulness of looking at behavioral change from a stage-sequential perspective, until recently the lack of appropriate modeling techniques has hampered rigorous empirical tests of stage theories. In particular, for behavioral measurements that are ordinal, there is a need for methods that represent the underlying change processes in the form of qualitative and discontinuous shifts. This article introduces a stage-sequential ordinal model by postulating that at any point in time there are a finite number of latent stages. Panel members may shift among these stages over time. The authors show that the stage-sequential model provides a general approach for both the analysis of ordinal time-dependent data and tests of various competing theories and hypotheses about psychological change processes. An analysis of a 5-year study concerning attitudes toward alcohol consumption by teenagers is presented to illustrate the modeling approach.

5.
In cognitive modeling, data are often categorical observations taken over participants and items. Usually subsets of these observations are pooled and analyzed by a cognitive model assuming the category counts come from a multinomial distribution with the same model parameters underlying all observations. It is well known that if there are individual differences in participants and/or items, a model analysis of the pooled data may be quite misleading, and in such cases it may be appropriate to augment the cognitive model with parametric random effects assumptions. On the other hand, if unneeded random effects are incorporated into a cognitive model, the resulting model may be more flexible than the multinomial model that assumes no heterogeneity, and this may lead to overfitting. This article presents Monte Carlo statistical tests for directly detecting individual participant and/or item heterogeneity that depend only on the data structure itself. These tests are based on the fact that heterogeneity in participants and/or items results in overdispersion of certain category count statistics. It is argued that the methods developed in the article should be applied to any set of participant × item categorical data prior to cognitive model-based analyses.
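The logic of an overdispersion test can be illustrated with a simple parametric Monte Carlo sketch for binary data. This is a deliberately simplified analogue of the article's idea, not its actual statistics: if all participants share one success probability, the variance of their success counts should look binomial; heterogeneity inflates it.

```python
import numpy as np

def overdispersion_test(counts, n_trials, n_sims=2000, seed=0):
    """Monte Carlo p-value for participant heterogeneity (illustrative
    sketch). counts[i] = number of successes for participant i out of
    n_trials. Under homogeneity, counts are iid Binomial(n_trials, p)."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    p_hat = counts.mean() / n_trials          # pooled success probability
    obs = counts.var(ddof=1)                  # observed dispersion
    sims = rng.binomial(n_trials, p_hat, size=(n_sims, counts.size))
    sim_var = sims.var(axis=1, ddof=1)
    # one-sided: heterogeneity inflates the variance of the counts
    return (np.sum(sim_var >= obs) + 1) / (n_sims + 1)
```

A mixture of low- and high-accuracy participants produces a tiny p-value, while homogeneous data does not; the article's tests apply the same overdispersion principle to general participant × item category counts.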

6.
A general approach to the analysis of subjective categorical data is considered, in which agreement matrices of two or more raters are directly expressed in terms of error and agreement parameters. The method provides focused analyses of ratings from several raters for whom ratings have measurement error distributions that may induce bias in the evaluation of substantive questions of interest. Each rater's judgment process is modeled as a mixture of two components: an error variable that is unique for the rater in question as well as an agreement variable that operationalizes the true values of the units of observation. The statistical problems of identification, estimation, and testing of such measurement models are discussed. The general model is applied in several special cases. The simplest situation is that underlying Cohen's Kappa, where two raters place units into unordered categories. The model provides a generalization and systematization of the Kappa idea to correct for agreement by chance. In applications with typical research designs, including a between-subjects design and a mixed within-subjects, between-subjects design, the model is shown to disentangle structural and measurement components of the observations, thereby controlling for possible confounding effects of systematic rater bias. Situations considered include the case of more than two raters as well as the case of ordered categories. The different analyses are illustrated by means of real data sets. The authors wish to thank Lawrence Hubert and Ivo Molenaar for helpful and detailed comments on a previous draft of this paper. Thanks are also due to Jens Möller and Bernd Strauß for the data from the 1992 Olympic Games. We thank the editor and three anonymous reviewers for valuable comments on an earlier draft.
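Cohen's Kappa, which the model above generalizes, corrects raw two-rater agreement for the agreement expected by chance from the raters' marginal distributions. A minimal self-contained implementation from a square agreement table (illustrative only; the paper's latent mixture model goes well beyond this):

```python
def cohens_kappa(table):
    """Cohen's kappa for a k x k agreement table (rows = rater A's
    categories, columns = rater B's): chance-corrected agreement
    (p_obs - p_chance) / (1 - p_chance)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n      # diagonal agreement
    row = [sum(table[i]) / n for i in range(k)]          # rater A marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_chance = sum(row[i] * col[i] for i in range(k))    # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)
```

For the table [[20, 5], [10, 15]], observed agreement is 0.70 and chance agreement 0.50, giving kappa = 0.40.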

7.
We develop a method for the analysis of multivariate ordinal categorical data with misclassification based on the latent normal variable approach. Misclassification arises if a subject has been classified into a category that does not truly reflect its actual state, and can occur with one or more variables. A basic framework is developed to enable the analysis of two types of data. The first corresponds to a single sample that is obtained from a fallible design that may lead to misclassified data. The other corresponds to data that is obtained by double sampling. Double sampling data consists of two parts: a sample that is obtained by classifying subjects using the fallible design only and a sample that is obtained by classifying subjects using both fallible and true designs, which is assumed to have no misclassification. A unified expectation–maximization approach is developed to find the maximum likelihood estimate of model parameters. Simulation studies and examples that are based on real data are used to demonstrate the applicability and practicability of the proposed methods.

8.
In quantifying categorical data, constraints play an important role in characterizing the outcome. In the Guttman-type quantification of contingency tables and multiple-choice data (incidence data), the trivial solution due to the marginal constraints is typically removed before quantification; this removal, however, has the effect of distorting the shape of the total space. Awareness of this is important for the interpretation of the quantified outcome. The present study provides some relevant formulas for those cases that are affected by the trivial solution and those cases that are not. The characterization of the total space used by the Guttman-type quantification and pertinent discussion are presented. This study was supported by a grant from The Natural Sciences and Engineering Research Council of Canada to S. Nishisato.

9.
In assessments of attitudes, personality, and psychopathology, unidimensional scale scores are commonly obtained from Likert scale items to make inferences about individuals' trait levels. This study approached the issue of how best to combine Likert scale items to estimate test scores from the practitioner's perspective: Does it really matter which method is used to estimate a trait? Analyses of 3 data sets indicated that commonly used methods could be classified into 2 groups: methods that explicitly take account of the ordered categorical item distributions (i.e., partial credit and graded response models of item response theory, factor analysis using an asymptotically distribution-free estimator) and methods that do not distinguish Likert-type items from continuously distributed items (i.e., total score, principal component analysis, maximum-likelihood factor analysis). Differences in trait estimates were found to be trivial within each group. Yet the results suggested that inferences about individuals' trait levels differ considerably between the 2 groups. One should therefore choose a method that explicitly takes account of item distributions in estimating unidimensional traits from ordered categorical response formats. Consequences of violating distributional assumptions were discussed.

10.
11.
Dual scaling is a set of related techniques for the analysis of a wide assortment of categorical data types including contingency tables and multiple-choice, rank order, and paired comparison data. When applied to a contingency table, dual scaling also goes by the name "correspondence analysis," and when applied to multiple-choice data in which there are more than 2 items, "optimal scaling" and "multiple correspondence analysis." The aim of this article is to explain in nontechnical terms what dual scaling offers to an analysis of contingency table and multiple-choice data.

12.
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low‐dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented.

13.
Nonparametric and distribution-free tests of categorical data provide an evaluation of statistical significance between groups of subjects differing in their assignment to a set of categories. This paper describes an implementation in the SAS programming language of three tests to evaluate categorical data. One of these tests, the Contingency Table Test for Ordered Categories, evaluates data assessed on at least an ordinal scale where the categories are in ascending or descending rank order. The remaining two tests, Fisher’s Fourfold-Table Test for Variables with Two Categories and Fisher’s Contingency Table Test for Variables with More than Two Categories, evaluate data assessed on either a nominal or an ordinal scale. The program described completes analysis of a 2 × C categorical contingency table as would be obtained from the application of a multiple-level rating scale to the behavior of a treatment and a control group.
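Fisher's fourfold-table test conditions on the table margins and sums the hypergeometric probabilities of all 2 × 2 tables no more probable than the observed one. A small pure-Python sketch of the two-sided test (illustrative only, not the SAS program described here):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    with margins fixed, the cell a follows a hypergeometric distribution;
    the p-value sums probabilities <= that of the observed table."""
    n = a + b + c + d
    r1, c1 = a + b, a + c                      # first row / column totals
    def prob(x):                               # P(cell (1,1) == x | margins)
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)  # feasible range for the cell
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

For the table [[3, 1], [1, 3]] the feasible tables have probabilities 1/70, 16/70, 36/70, 16/70, 1/70, so the two-sided p-value is 34/70 ≈ 0.486.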

14.
This paper reports on a simulation study that evaluated the performance of five structural equation model test statistics appropriate for categorical data. Both Type I error rate and power were investigated. Different model sizes, sample sizes, numbers of categories, and threshold distributions were considered. Statistics associated with both the diagonally weighted least squares (cat‐DWLS) estimator and with the unweighted least squares (cat‐ULS) estimator were studied. Recent research suggests that cat‐ULS parameter estimates and robust standard errors slightly outperform cat‐DWLS estimates and robust standard errors (Forero, Maydeu‐Olivares, & Gallardo‐Pujol, 2009). The findings of the present research suggest that the mean‐ and variance‐adjusted test statistic associated with the cat‐ULS estimator performs best overall. A new version of this statistic now exists that does not require a degrees‐of‐freedom adjustment (Asparouhov & Muthén, 2010), and this statistic is recommended. Overall, the cat‐ULS estimator is recommended over cat‐DWLS, particularly in small to medium sample sizes.

15.
This paper suggests a method to supplant missing categorical data by reasonable replacements. These replacements will maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical. At that point, the technique gives reasonable results up to 10–15% missing data. We thank Anneke Bloemhoff of NIPG-TNO for compiling and making the Dutch Life Style Survey data available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors and several anonymous reviewers for constructive suggestions.

16.
This paper studies the problem of scaling ordinal categorical data observed over two or more sets of categories measuring a single characteristic. Scaling is obtained by solving a constrained entropy model which finds the most probable values of the scales given the data. A Kullback-Leibler statistic is generated which operationalizes a measure for the strength of consistency among the sets of categories. A variety of data of two and three sets of categories are analyzed using the entropy approach. This research was partially supported by the Air Force Office of Scientific Research under grant AFOSR-83-0234. The comments of the editor and referees have been most helpful in improving the paper, and in bringing several additional references to our attention.

17.
Markov chains are probabilistic models for sequences of categorical events, with applications throughout scientific psychology. This paper provides a method for analyzing data consisting of event sequences and covariate observations. It is assumed that each sequence is a Markov process characterized by a distinct transition probability matrix. The objective is to use the covariate data to explain differences between individuals in the transition probability matrices characterizing their sequential data. The elements of the transition probability matrices are written as functions of a vector of latent variables, with variation in the latent variables explained through a multivariate regression on the covariates. The regression is estimated using the EM algorithm, and requires the numerical calculation of a multivariate integral. An example using simulated cognitive developmental data is presented, which shows that the estimation of individual variation in the parameters of a probability model may have substantial theoretical importance, even when individual differences are not the focus of the investigator's concerns. Research contributing to this article was supported by B.R.S. Subgrant 5-35345 from the University of Virginia. I thank the DADA Group, Bill Fabricius, Don Hartmann, William Griffin, Jack McArdle, Ivo Molenaar, Ronald Schoenberg, Simon Tavaré, and several anonymous reviewers for their discussion of these points.
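As background for the covariate regression described above: for a single observed sequence, the maximum-likelihood estimate of the transition probability matrix is simply the row-normalized table of bigram counts. A minimal sketch (hypothetical helper, not the paper's EM-based estimator):

```python
def transition_matrix(seq, n_states):
    """ML estimate of a Markov transition matrix from one event sequence:
    count transitions a -> b, then normalize each row to sum to 1.
    Rows with no observed transitions fall back to a uniform distribution."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):             # successive event pairs
        counts[a][b] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 1.0 / n_states for c in row])
    return P
```

For the sequence 0, 1, 0, 1, 1, 0 this gives P[0] = [0, 1] and P[1] = [2/3, 1/3]; the paper's contribution is to let such matrices vary across individuals as a function of covariates rather than estimating each in isolation.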

18.
A structural equation model is proposed with a generalized measurement part, allowing for dichotomous and ordered categorical variables (indicators) in addition to continuous ones. A computationally feasible three-stage estimator is proposed for any combination of observed variable types. This approach provides large-sample chi-square tests of fit and standard errors of estimates for situations not previously covered. Two multiple-indicator modeling examples are given. One is a simultaneous analysis of two groups with a structural equation model underlying skewed Likert variables. The second is a longitudinal model with a structural model for multivariate probit regressions. This research was supported by Grant No. 81-IJ-CX-0015 from the National Institute of Justice, by Grant No. DA 01070 from the U.S. Public Health Service, and by Grant No. SES-8312583 from the National Science Foundation. I thank Julie Honig for drawing the figures. Requests for reprints should be sent to Bengt Muthén, Graduate School of Education, University of California, Los Angeles, California 90024.

19.
A rating formulation for ordered response categories
A rating response mechanism for ordered categories, which is related to the traditional threshold formulation but distinctively different from it, is formulated. In addition to the subject and item parameters two other sets of parameters, which can be interpreted in terms of thresholds on a latent continuum and discriminations at the thresholds, are obtained. These parameters are identified with the category coefficients and the scoring function of the Rasch model for polychotomous responses in which the latent trait is assumed uni-dimensional. In the case where the threshold discriminations are equal, the scoring of successive categories by the familiar assignment of successive integers is justified. In the case where distances between thresholds are also equal, a simple pattern of category coefficients is shown to follow. This work was conducted in part in the first half of 1977 while the author was on study leave at the Danish Institute for Educational Research. The Institute provided required research facilities while The University of Western Australia provided financial support.

20.
A monotone relationship between a true score (τ) and a latent trait level (θ) has been a key assumption for many psychometric applications. The monotonicity property in dichotomous response models is evident as a result of a transformation via a test characteristic curve. Monotonicity in polytomous models, in contrast, is not immediately obvious because item response functions are determined by a set of response category curves, which are conceivably non-monotonic in θ. The purpose of the present note is to demonstrate strict monotonicity in ordered polytomous item response models. Five models that are widely used in operational assessments are considered for proof: the generalized partial credit model (Muraki, 1992, Applied Psychological Measurement, 16, 159), the nominal model (Bock, 1972, Psychometrika, 37, 29), the partial credit model (Masters, 1982, Psychometrika, 47, 147), the rating scale model (Andrich, 1978, Psychometrika, 43, 561), and the graded response model (Samejima, 1972, A general model for free-response data (Psychometric Monograph no. 18). Psychometric Society, Richmond). The study asserts that the item response functions in these models strictly increase in θ and thus there exists strict monotonicity between τ and θ under certain specified conditions. This conclusion validates the practice of customarily using τ in place of θ in applied settings and provides theoretical grounds for one-to-one transformations between the two scales.
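The monotonicity claim can be checked numerically for one of the five models, e.g. the graded response model, whose item response function is the sum of its logistic boundary curves: E[X | θ] = Σ_k P(X ≥ k | θ). Each boundary curve strictly increases in θ, so their sum does too. A small sketch with an illustrative (not source-given) parameterization:

```python
import math

def grm_expected_score(theta, a, bs):
    """Item response function of the graded response model: the expected
    item score E[X | theta] equals the sum over categories k >= 1 of the
    logistic boundary probabilities P(X >= k | theta) = sigma(a*(theta-b_k)).
    a = discrimination; bs = ordered category threshold parameters."""
    return sum(1.0 / (1.0 + math.exp(-a * (theta - b))) for b in bs)
```

Evaluating at increasing θ values yields strictly increasing expected scores bounded between 0 and the number of thresholds, which is the strict monotonicity between τ and θ that the note proves analytically.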


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号