Similar Articles (20 results)
1.
Generalized latent trait models
In this paper we discuss a general model framework within which manifest variables with different distributions in the exponential family can be analyzed with a latent trait model. A unified maximum likelihood method for estimating the parameters of the generalized latent trait model is presented. In addition, we discuss the scoring of individuals on the latent dimensions. The framework allows not only the analysis of manifest variables that are all of one type, but also the simultaneous analysis of a collection of variables with different distributions. The approach analyzes the data as they are, by making assumptions about the distribution of the manifest variables directly.

2.
A lexicographic rule orders multi-attribute alternatives in the same way as a dictionary orders words. Although no utility function can represent lexicographic preference over continuous, real-valued attributes, a constrained linear model suffices for representing such preferences over discrete attributes. We present an algorithm for inferring lexicographic structures from choice data. The primary difficulty in using such data is that it is seldom possible to obtain sufficient information to estimate individual-level preference functions. Instead, one needs to pool the data across latent clusters of individuals. We propose a method that identifies latent clusters of subjects and estimates a lexicographic rule for each cluster. We describe an application of the method using data collected by a manufacturer of television sets, and compare the predictions of the model with those obtained from a finite-mixture, multinomial-logit model.
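To make the dictionary-style ordering concrete, the following minimal Python sketch ranks discrete multi-attribute alternatives lexicographically. The attribute order, levels, and preference ranks are hypothetical illustrations, not taken from the study or its television data.

```python
# Minimal sketch of a lexicographic choice rule over discrete attributes.
# The attribute importance order and within-attribute level rankings are hypothetical.

def lexicographic_key(alternative, attribute_order, level_rank):
    """Sort key: compare on the most important attribute first,
    breaking ties with the next attribute, exactly like dictionary order."""
    return tuple(level_rank[attr][alternative[attr]] for attr in attribute_order)

# Hypothetical television alternatives described by discrete attributes.
alternatives = [
    {"brand": "A", "screen": "small", "price": "high"},
    {"brand": "B", "screen": "large", "price": "high"},
    {"brand": "A", "screen": "large", "price": "low"},
]

# Importance order of attributes and preference rank of levels (0 = best).
attribute_order = ["screen", "price", "brand"]
level_rank = {
    "screen": {"large": 0, "small": 1},
    "price": {"low": 0, "high": 1},
    "brand": {"A": 0, "B": 1},
}

ranked = sorted(alternatives, key=lambda a: lexicographic_key(a, attribute_order, level_rank))
for alt in ranked:
    print(alt)
```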

3.
Standardized estimates play an important role in interpreting a model and comparing effect sizes. Although the appropriate formula for the standardized estimate of a latent interaction effect has been available for more than ten years and is used and cited both in China and abroad, no systematic comparison of the appropriate standardized estimates produced by different estimation methods has been reported. Using simulation experiments, we compared the standardized estimates of latent interaction effects obtained with the product indicator method, latent moderated structural equations (LMS), and Bayesian methods with and without prior information under various conditions. The results show that under normality, LMS and the informative Bayesian method perform well; under non-normality, the product indicator method is more robust but requires a large sample (no smaller than 500), and the non-informative Bayesian method can be used when the sample is small and the correlations between exogenous latent variables are very low.

4.
Present optimization techniques in latent class analysis apply the expectation-maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified structure of the model, which is defined by the number of classes and, possibly, fixation and equality constraints. The model structure is usually chosen on theoretical grounds. A large variety of structurally different latent class models can be compared using goodness-of-fit indices of the chi-square family, Akaike's information criterion, the Bayesian information criterion, and various other statistics. However, finding the optimal structure for a given goodness-of-fit index often requires a lengthy search in which all kinds of model structures are tested. Moreover, solutions may depend on the choice of initial values for the parameters. This article presents a new method by which one can simultaneously infer the model structure from the data and optimize the parameter values. The method consists of a genetic algorithm in which any goodness-of-fit index can be used as a fitness criterion. In a number of test cases using data sets from the literature, this method is shown to provide models that fit as well as or better than the models suggested in the original articles.
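A minimal sketch of the general idea of searching over model structures with a genetic algorithm and a goodness-of-fit fitness criterion is shown below. A scikit-learn Gaussian mixture stands in for the latent class model, and the chromosome encoding, mutation scheme, and population sizes are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of a genetic search over model structures using BIC as the
# fitness criterion. A Gaussian mixture stands in for the latent class model
# (the paper works with categorical data); the chromosome encodes the number
# of classes and a covariance constraint, i.e. a crude "model structure".
import random
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (150, 3))])

N_CLASSES = [1, 2, 3, 4, 5]
COV_TYPES = ["full", "tied", "diag", "spherical"]

def fitness(chromosome):
    """Lower BIC = better fit; return negative BIC so higher is fitter."""
    n_classes, cov_type = chromosome
    model = GaussianMixture(n_components=n_classes, covariance_type=cov_type,
                            random_state=0).fit(data)
    return -model.bic(data)

def mutate(chromosome):
    """Randomly perturb one gene of the structure."""
    n_classes, cov_type = chromosome
    if random.random() < 0.5:
        n_classes = random.choice(N_CLASSES)
    else:
        cov_type = random.choice(COV_TYPES)
    return (n_classes, cov_type)

random.seed(0)
population = [(random.choice(N_CLASSES), random.choice(COV_TYPES)) for _ in range(8)]
for generation in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:4]                       # selection: keep the fittest half
    offspring = [mutate(s) for s in survivors]   # reproduction with mutation
    population = survivors + offspring

best = max(population, key=fitness)
print("best structure:", best, "BIC:", -fitness(best))
```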

5.
We develop a general approach to factor analysis that involves observed and latent variables that are assumed to be distributed in the exponential family. This gives rise to a number of factor models not considered previously and enables the study of latent variables in an integrated methodological framework, rather than as a collection of seemingly unrelated special cases. The framework accommodates a great variety of measurement scales, as well as cases where different latent variables have different distributions. The models are estimated with the method of simulated likelihood, which allows higher-dimensional factor solutions to be estimated than was previously possible. The models are illustrated on synthetic data. We investigate their performance when the distribution of the latent variables is mis-specified and when some of the observations are missing. We study the properties of the simulation estimators relative to maximum likelihood estimation with numerical integration. We provide an empirical application to the analysis of attitudes.

6.
A general solution for the latent class model of latent structure analysis
Green, B. F. (1951). Psychometrika, 16(2), 151–166.

7.
Multivariate ordinal and quantitative longitudinal data measuring the same latent construct are frequently collected in psychology. We propose an approach to describe change over time of the latent process underlying multiple longitudinal outcomes of different types (binary, ordinal, quantitative). By relying on random-effect models, this approach handles individually varying and outcome-specific measurement times. A linear mixed model describes the latent process trajectory, while equations of observation combine outcome-specific threshold models for binary or ordinal outcomes and models based on flexible parameterized non-linear families of transformations for Gaussian and non-Gaussian quantitative outcomes. As models assuming continuous distributions may also be used with discrete outcomes, we propose likelihood and information criteria for discrete data to compare the goodness of fit of models assuming either a continuous or a discrete distribution for discrete data. Two analyses of the repeated measures of the Mini-Mental State Examination, a 20-item psychometric test, illustrate the method. First, we highlight the usefulness of parameterized non-linear transformations by comparing different flexible families of transformations for modelling the test as a sum score. Then, change over time of the latent construct underlying the 20 items directly is described using two-parameter longitudinal item response models that are specific cases of the approach.

8.
A latent variable modelling approach is discussed which can be used to evaluate indices of linear relationship between latent constructs in incomplete data sets. The method is based on an application of maximum likelihood estimation and the inclusion of covariates predictive of missing values. The approach can be employed for point and interval estimation of latent correlations in the presence of missing data, and capitalizes on the enhanced plausibility of the missing-at-random assumption achieved through the introduction of informative covariates. The method is illustrated on empirical data.

9.
We propose a new method of structural equation modeling (SEM) for longitudinal and time series data, named Dynamic GSCA (Generalized Structured Component Analysis). The proposed method extends the original GSCA by incorporating a multivariate autoregressive model to account for the dynamic nature of data taken over time. Dynamic GSCA also incorporates direct and modulating effects of input variables on specific latent variables and on connections between latent variables, respectively. An alternating least squares (ALS) algorithm is developed for parameter estimation. An improved bootstrap method, a modified moving block bootstrap, is used to assess the reliability of parameter estimates; it deals effectively with the time dependence between consecutive observations. We analyze synthetic and real data to illustrate the feasibility of the proposed method.
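The moving block bootstrap idea can be sketched as follows. The block length, the statistic (a lag-1 autocorrelation), and the simulated series are illustrative and do not reproduce the authors' modified procedure for Dynamic GSCA.

```python
# Minimal sketch of a moving block bootstrap for a time-dependent statistic.
import numpy as np

rng = np.random.default_rng(1)
# Simulate an AR(1)-like series so consecutive observations are dependent.
series = np.zeros(200)
for t in range(1, 200):
    series[t] = 0.6 * series[t - 1] + rng.normal()

def lag1_autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def moving_block_bootstrap(x, block_len, n_boot, stat, rng):
    """Resample overlapping blocks so local time dependence is preserved."""
    n = len(x)
    starts = np.arange(n - block_len + 1)        # every admissible block start
    n_blocks = int(np.ceil(n / block_len))
    estimates = []
    for _ in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        resample = np.concatenate([x[s:s + block_len] for s in chosen])[:n]
        estimates.append(stat(resample))
    return np.array(estimates)

boot = moving_block_bootstrap(series, block_len=20, n_boot=500,
                              stat=lag1_autocorr, rng=rng)
print("estimate:", lag1_autocorr(series))
print("bootstrap SE:", boot.std(ddof=1))
```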

10.
The latent Markov (LM) model is a popular method for identifying distinct unobserved states and transitions between these states over time in longitudinally observed responses. The bootstrap likelihood-ratio (BLR) test yields the most rigorous test for determining the number of latent states, yet little is known about power analysis for this test. Power could be computed as the proportion of the bootstrap p values (PBP) for which the null hypothesis is rejected. This requires performing the full bootstrap procedure for a large number of samples generated from the model under the alternative hypothesis, which is computationally infeasible in most situations. This article presents a computationally feasible shortcut method for power computation for the BLR test. The shortcut method involves the following simple steps: (1) obtaining the parameters of the model under the null hypothesis, (2) constructing the empirical distributions of the likelihood ratio under the null and alternative hypotheses via Monte Carlo simulations, and (3) using these empirical distributions to compute the power. We evaluate the performance of the shortcut method by comparing it to the PBP method and, moreover, show how the shortcut method can be used for sample-size determination.
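A minimal sketch of the shortcut logic: simulate the likelihood-ratio statistic under the null and alternative hypotheses and read the power off the two empirical distributions. A one-parameter normal mean test stands in for the latent Markov model comparison, which would require a full LM estimation routine.

```python
# Minimal sketch of the empirical-distribution shortcut for power: take the
# (1 - alpha) quantile of the H0 statistics as the critical value and compute
# power as the share of H1 statistics exceeding it.
import numpy as np

rng = np.random.default_rng(2)

def lr_statistic(sample):
    """-2 log LR for H0: mean = 0 vs. unrestricted mean (known variance 1)."""
    n = len(sample)
    return n * sample.mean() ** 2

def simulate_lr(mean, n_obs, n_sims):
    return np.array([lr_statistic(rng.normal(mean, 1, n_obs)) for _ in range(n_sims)])

n_obs, alpha = 50, 0.05
lr_h0 = simulate_lr(0.0, n_obs, 2000)   # empirical distribution under H0
lr_h1 = simulate_lr(0.4, n_obs, 2000)   # empirical distribution under H1
critical = np.quantile(lr_h0, 1 - alpha)
power = np.mean(lr_h1 > critical)       # power from the two distributions
print(f"critical value: {critical:.2f}, estimated power: {power:.2f}")
```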

11.
Under consideration is a test battery of binary items. The responses of n individuals are assumed to follow a Rasch model. It is further assumed that the latent individual parameters are distributed within a given population in accordance with a normal distribution. Methods are then considered for estimating the mean and variance of this latent population distribution. Also considered are methods for checking whether a normal population distribution fits the data. The developed methods are applied to data from an achievement test and from an attitude test.

12.
Statistical analyses investigating latent structure can be divided into those that estimate structural model parameters and those that detect the structural model type. The most basic distinction among structure types is between categorical (discrete) and dimensional (continuous) models. It is a common, and potentially misleading, practice to apply some method for estimating a latent structural model such as factor analysis without first verifying that the latent structure type assumed by that method applies to the data. The taxometric method was developed specifically to distinguish between dimensional and 2-class models. This study evaluated the taxometric method as a means of identifying categorical structures in general. We assessed the ability of the taxometric method to distinguish between dimensional (1-class) and categorical (2-5 classes) latent structures and to estimate the number of classes in categorical datasets. Based on 50,000 Monte Carlo datasets (10,000 per structure type), and using the comparison curve fit index averaged across 3 taxometric procedures (Mean Above Minus Below A Cut, Maximum Covariance, and Latent Mode Factor Analysis) as the criterion for latent structure, the taxometric method was found superior to finite mixture modeling for distinguishing between dimensional and categorical models. A multistep iterative process of applying taxometric procedures to the data often failed to identify the number of classes in the categorical datasets accurately, however. It is concluded that the taxometric method may be an effective approach to distinguishing between dimensional and categorical structure but that other latent modeling procedures may be more effective for specifying the model.

13.
黎光明, 张敏强 (2013). 《心理科学》 (Journal of Psychological Science), 36(1), 203–209.
Variance component estimation is an indispensable technique in generalizability theory, but because it is subject to sampling, the variability of the estimates needs to be examined. Using Monte Carlo data simulation, this study investigated the influence of non-normal data distributions on four methods for estimating the variability of variance components in generalizability theory. The results show that: (1) under different non-normal data distributions, the estimation methods differ in performance; (2) the data distribution affects the estimation of the variability of variance components, and a method suited to non-normally distributed data is not necessarily suited to normally distributed data.

14.
There has been a recent increase in interest in Bayesian analysis. However, little effort has been made thus far to incorporate background knowledge directly into analyses via the prior distribution. This process might be especially useful in the context of latent growth mixture modeling when one or more of the latent groups are expected to be relatively small, a situation we refer to as limited data. We argue that the use of Bayesian statistics has great advantages in limited data situations, but only if background knowledge can be incorporated into the analysis via prior distributions. We highlight these advantages through a data set including patients with burn injuries, analyzing trajectories of posttraumatic stress symptoms within the Bayesian framework following the steps of the WAMBS checklist. In the included example, we illustrate how to obtain background information from previous literature, through a systematic literature search, and from expert knowledge. Finally, we show how to translate this knowledge into prior distributions and illustrate the importance of conducting a prior sensitivity analysis. Although our example is from the trauma field, the techniques we illustrate can be applied to any field.
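A minimal sketch of a prior sensitivity check in a conjugate normal model illustrates why informative priors matter when data are limited. The data, priors, and known standard deviation below are hypothetical and unrelated to the burn-injury study.

```python
# Minimal sketch: with few observations (a small latent group), the posterior
# mean depends heavily on the prior. Priors and data are hypothetical.
import numpy as np

data = np.array([21.0, 24.0, 19.0, 26.0])   # few observations, known sd = 5
sigma = 5.0
n = len(data)

def posterior(prior_mean, prior_sd):
    """Conjugate normal-normal update for the group mean."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = n / sigma ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data.mean())
    return post_mean, np.sqrt(post_var)

# Compare an informative prior (hypothetical numbers standing in for
# literature- or expert-based knowledge) against a diffuse prior.
for label, (m0, s0) in {"informative": (30.0, 2.0), "diffuse": (0.0, 100.0)}.items():
    pm, ps = posterior(m0, s0)
    print(f"{label:12s} prior N({m0}, {s0}^2) -> posterior mean {pm:.1f}, sd {ps:.1f}")
```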

15.
A model for longitudinal latent structure analysis is proposed. We assume that test scores for a given mental or attitudinal test are observed for the same individuals at two different points in time. The purpose of the analysis is to fit a model that combines the values of the latent variable at the two time points in a two-dimensional latent density. The correlation coefficient between the two values of the latent variable can then be estimated. The theory and methods are illustrated by a Danish dataset concerning psychic vulnerability.

16.
In a latent class IRT model in which the latent classes are ordered on one dimension, the class-specific response probabilities are subject to inequality constraints. The number of these inequality constraints increases dramatically with the number of response categories per item if assumptions like monotonicity or double monotonicity of the cumulative category response functions are postulated. A Markov chain Monte Carlo method, the Gibbs sampler, can sample from the multivariate posterior distribution of the parameters under the constraints. Bayesian model selection can be done by posterior predictive checks and Bayes factors. A simulation study is done to evaluate the results of applying these methods to ordered latent class models in three realistic situations. An example of the presented methods is also given for existing data with polytomous items. It can be concluded that the Bayesian estimation procedure handles the inequality constraints on the parameters very well. However, the application of Bayesian model selection methods requires more research.
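A minimal sketch of a Gibbs sampler for a two-class model with a monotonicity (order) constraint on item probabilities, enforced through truncated Beta draws, is given below. The priors, data, and run length are illustrative; the paper's models involve more classes, polytomous items, and additional constraints.

```python
# Minimal sketch: Gibbs sampling for a two-class latent class model where each
# item's positive-response probability in class 0 must not exceed that in
# class 1. Truncated Beta draws (inverse-CDF trick) enforce the constraint.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)

# Simulate data: 200 respondents, 4 binary items, true ordered classes.
true_p = np.array([[0.2, 0.3, 0.25, 0.2],    # "low" class
                   [0.7, 0.8, 0.75, 0.7]])   # "high" class
z_true = rng.integers(0, 2, 200)
X = (rng.random((200, 4)) < true_p[z_true]).astype(int)

def truncated_beta(a, b, lo, hi, rng):
    """Draw from Beta(a, b) restricted to [lo, hi] via the inverse CDF."""
    u = rng.uniform(beta.cdf(lo, a, b), beta.cdf(hi, a, b))
    return beta.ppf(u, a, b)

n, J = X.shape
p = np.array([[0.3] * J, [0.7] * J])   # ordered starting values
w = 0.5                                # weight of class 1
for it in range(500):
    # 1) Sample class memberships given item probabilities and the weight.
    log_lik = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T
    post = np.exp(log_lik) * np.array([1 - w, w])
    post /= post.sum(axis=1, keepdims=True)
    z = (rng.random(n) < post[:, 1]).astype(int)
    # 2) Sample the class weight from its Beta full conditional.
    w = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3) Sample item probabilities from truncated Beta full conditionals,
    #    updating class 0 first so the order p[0, j] <= p[1, j] is preserved.
    for j in range(J):
        s0, m0 = X[z == 0, j].sum(), (z == 0).sum()
        s1, m1 = X[z == 1, j].sum(), (z == 1).sum()
        p[0, j] = truncated_beta(1 + s0, 1 + m0 - s0, 0.0, p[1, j], rng)
        p[1, j] = truncated_beta(1 + s1, 1 + m1 - s1, p[0, j], 1.0, rng)

print("posterior draw of ordered item probabilities:\n", p.round(2))
```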

17.
Computer simulations have become a popular tool for assessing complex skills such as problem-solving. Log files of computer-based items record the human–computer interactive processes for each respondent in full. The response processes are very diverse, noisy, and of non-standard formats. Few generic methods have been developed to exploit the information contained in process data. In this paper we propose a method to extract latent variables from process data. The method utilizes a sequence-to-sequence autoencoder to compress response processes into standard numerical vectors. It does not require prior knowledge of the specific items and human–computer interaction patterns. The proposed method is applied to both simulated and real process data to demonstrate that the resulting latent variables extract useful information from the response processes.
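A minimal sketch of a sequence-to-sequence autoencoder that compresses action sequences into fixed-length latent vectors follows. It assumes PyTorch is available, and the action vocabulary, architecture, and training loop are illustrative rather than the authors' specification.

```python
# Minimal sketch: a GRU encoder-decoder that reconstructs each respondent's
# action sequence; the encoder's final hidden state is the latent vector.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 10, 8, 16   # hypothetical action vocabulary and layer sizes

class Seq2SeqAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, seq):
        _, latent = self.encoder(self.embed(seq))   # fixed-length code per sequence
        # Decoder input: the sequence shifted right, with 0 reserved as a start token.
        dec_in = torch.cat([torch.zeros_like(seq[:, :1]), seq[:, :-1]], dim=1)
        logits = self.out(self.decoder(self.embed(dec_in), latent)[0])
        return logits, latent.squeeze(0)

# Toy "process data": 32 respondents, each a padded sequence of 12 action codes (1..9).
sequences = torch.randint(1, VOCAB, (32, 12))

model = Seq2SeqAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for step in range(50):
    logits, _ = model(sequences)
    # Self-supervised target: reconstruct the original action sequence.
    loss = loss_fn(logits.reshape(-1, VOCAB), sequences.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    _, latent_vectors = model(sequences)
print("latent feature matrix shape:", tuple(latent_vectors.shape))  # (32, HID)
```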

18.
The latent structure model considered here postulates that a population of individuals can be divided into m classes such that each class is homogeneous in the sense that, for the individuals in the class, the responses to N dichotomous items or questions are statistically independent. A method is given for deducing the proportions of the population in each latent class and the probabilities of positive responses to each item for individuals in each class from knowledge of the probabilities of positive responses for individuals from the population as a whole. For estimation of the latent parameters on the basis of a sample, it is proposed that the same method of analysis be applied to the observed data. The method has the advantages of avoiding implicitly defined and unobservable quantities, and of using relatively simple computational procedures of conventional matrix algebra, but it has the disadvantages of using only a part of the available information and of using that part asymmetrically. Work supported by the RAND Corporation.

19.
In a meta-analysis, the unknown parameters are often estimated using maximum likelihood, and inferences are based on asymptotic theory. It is assumed that, conditional on study characteristics included in the model, the between-study distribution and the sampling distributions of the effect sizes are normal. In practice, however, samples are finite, and the normality assumption may be violated, possibly resulting in biased estimates and inappropriate standard errors. In this article, we propose two parametric and two nonparametric bootstrap methods that can be used to adjust the results of maximum likelihood estimation in meta-analysis and illustrate them with empirical data. A simulation study, with raw data drawn from normal distributions, reveals that the parametric bootstrap methods and one of the nonparametric methods are generally superior to the ordinary maximum likelihood approach but suffer from a bias/precision tradeoff. We recommend using one of these bootstrap methods, but without applying the bias correction.
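A minimal sketch of a parametric bootstrap around maximum likelihood estimation in a normal-normal random-effects meta-analysis is shown below. The effect sizes and within-study variances are hypothetical, and the paper's nonparametric variants and bias correction are not shown.

```python
# Minimal sketch: ML estimation of (mu, tau^2), then a parametric bootstrap
# that redraws effect sizes from the fitted model and re-estimates mu.
import numpy as np
from scipy.optimize import minimize

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05, 0.60])   # hypothetical effect sizes
v = np.array([0.04, 0.03, 0.05, 0.02, 0.04, 0.06])   # within-study variances

def neg_log_lik(params, y, v):
    """Marginal normal likelihood: y_i ~ N(mu, tau2 + v_i)."""
    mu, log_tau2 = params
    total_var = np.exp(log_tau2) + v
    return 0.5 * np.sum(np.log(total_var) + (y - mu) ** 2 / total_var)

def ml_fit(y, v):
    res = minimize(neg_log_lik, x0=[y.mean(), np.log(0.01)], args=(y, v))
    mu, log_tau2 = res.x
    return mu, np.exp(log_tau2)

mu_hat, tau2_hat = ml_fit(y, v)

rng = np.random.default_rng(4)
boot_mu = []
for _ in range(1000):
    # Parametric bootstrap: simulate a new meta-analysis from the fitted model.
    y_star = rng.normal(mu_hat, np.sqrt(tau2_hat + v))
    boot_mu.append(ml_fit(y_star, v)[0])
boot_mu = np.array(boot_mu)

print(f"ML estimate of mu: {mu_hat:.3f}")
print(f"bootstrap SE: {boot_mu.std(ddof=1):.3f}")
print(f"percentile 95% CI: [{np.quantile(boot_mu, 0.025):.3f}, {np.quantile(boot_mu, 0.975):.3f}]")
```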

20.
Many probabilistic models for psychological and educational measurements contain latent variables. Well‐known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the ‘explaining‐away’ phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well‐known latent variable models by using both theoretical and real data examples.
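The explaining-away phenomenon can be illustrated with a tiny Bayesian network of two independent binary latent causes and one observed effect; the conditional probability table below is hypothetical. Conditioning on the observed variable makes the latent causes dependent: learning that one cause is present lowers the probability of the other.

```python
# Minimal sketch of explaining away: two independent latent causes (A, B) of
# one observed variable D. P(A=1 | D=1) drops once B=1 is also observed.
import itertools

p_a, p_b = 0.3, 0.3                       # prior probabilities of each latent cause
def p_d_given(a, b):                      # noisy-OR style link to the observation
    return 0.9 if (a or b) else 0.05

def joint(a, b, d):
    pa = p_a if a else 1 - p_a
    pb = p_b if b else 1 - p_b
    pd = p_d_given(a, b) if d else 1 - p_d_given(a, b)
    return pa * pb * pd

def prob(query, given):
    """P(query | given) by brute-force summation over the joint distribution."""
    num = den = 0.0
    for a, b, d in itertools.product([0, 1], repeat=3):
        world = {"a": a, "b": b, "d": d}
        if all(world[k] == v for k, v in given.items()):
            p = joint(a, b, d)
            den += p
            if all(world[k] == v for k, v in query.items()):
                num += p
    return num / den

print("P(A=1)            =", round(prob({"a": 1}, {}), 3))              # 0.3
print("P(A=1 | D=1)      =", round(prob({"a": 1}, {"d": 1}), 3))        # ~0.56
print("P(A=1 | D=1, B=1) =", round(prob({"a": 1}, {"d": 1, "b": 1}), 3))  # 0.3
```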
