Similar literature (20 matching records)
1.
Integrative data analysis (IDA) is a methodological framework that allows for the fitting of models to data that have been pooled across 2 or more independent sources. IDA offers many potential advantages, including increased statistical power, greater subject heterogeneity, higher observed frequencies of low base-rate behaviors, and longer developmental periods of study. However, a core challenge is the estimation of valid and reliable psychometric scores that are based on potentially different items with different response options drawn from different studies. In Bauer and Hussong (2009) we proposed moderated nonlinear factor analysis (MNLFA), a method for obtaining scores within an IDA. Here we move significantly beyond this work by developing a general framework for estimating MNLFA models and obtaining scale scores across a variety of settings. We propose a 5-step procedure and demonstrate this approach using data drawn from n = 1,972 individuals ranging in age from 11 to 34 years pooled across 3 independent studies to examine the factor structure of 17 binary items assessing depressive symptomatology. We offer substantive conclusions about the factor structure of depression, use this structure to compute individual-specific scale scores, and make recommendations for the use of these methods in practice.
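The core of MNLFA is letting item parameters vary as functions of observed covariates such as study membership or age. The R sketch below illustrates that idea for a single binary item whose intercept and loading are moderated linearly by one covariate; to keep it short the latent factor is treated as known rather than integrated out, and all names and values are illustrative rather than taken from the article.

```r
# Minimal sketch of one moderated binary (2PL-type) item: the intercept and
# loading are linear functions of a covariate x (e.g., study membership).
# The latent factor theta is treated as known here, which real MNLFA does not do.
negll_item <- function(par, y, theta, x) {
  nu  <- par[1] + par[2] * x        # moderated intercept
  lam <- par[3] + par[4] * x        # moderated loading
  -sum(dbinom(y, 1, plogis(nu + lam * theta), log = TRUE))
}

set.seed(1)
n     <- 500
x     <- rbinom(n, 1, 0.5)                                   # covariate (e.g., study)
theta <- rnorm(n)
y     <- rbinom(n, 1, plogis(-0.5 + 0.3 * x + (1.2 + 0.4 * x) * theta))

fit <- optim(c(0, 0, 1, 0), negll_item, y = y, theta = theta, x = x)
fit$par    # estimates of (nu0, nu1, lambda0, lambda1)
```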

2.
Generalized fiducial inference (GFI) has been proposed as an alternative to likelihood-based and Bayesian inference in mainstream statistics. Confidence intervals (CIs) can be constructed from a fiducial distribution on the parameter space in a fashion similar to those used with a Bayesian posterior distribution. However, no prior distribution needs to be specified, which renders GFI more suitable when no a priori information about model parameters is available. In the current paper, we apply GFI to a family of binary logistic item response theory models, which includes the two-parameter logistic (2PL), bifactor, and exploratory item factor models as special cases. Asymptotic properties of the resulting fiducial distribution are discussed. Random draws from the fiducial distribution can be obtained by the proposed Markov chain Monte Carlo sampling algorithm. We investigate the finite-sample performance of our fiducial percentile CI and two commonly used Wald-type CIs associated with maximum likelihood (ML) estimation via Monte Carlo simulation. The use of GFI in high-dimensional exploratory item factor analysis is illustrated with an analysis of Eysenck Personality Questionnaire data.
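As a point of reference for the comparison described above, the R sketch below computes the Wald-type confidence intervals associated with ML estimation for a single 2PL item. Abilities are treated as known so the example stays self-contained, which is a simplification, and the fiducial sampling machinery itself is not shown.

```r
# Wald-type CIs from ML for one 2PL item; abilities are treated as known to keep
# the sketch self-contained (a simplification, not the paper's setup).
negll_2pl <- function(par, y, theta) {
  a <- par[1]; b <- par[2]
  -sum(dbinom(y, 1, plogis(a * (theta - b)), log = TRUE))
}

set.seed(2)
theta <- rnorm(1000)
y     <- rbinom(1000, 1, plogis(1.5 * (theta - 0.3)))

fit <- optim(c(1, 0), negll_2pl, y = y, theta = theta, hessian = TRUE)
se  <- sqrt(diag(solve(fit$hessian)))          # inverse observed information
cbind(estimate   = fit$par,
      wald_lower = fit$par - 1.96 * se,
      wald_upper = fit$par + 1.96 * se)
```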

3.
In the first 20 years of the new century, 11 professional psychology journals in China published a total of 213 research papers on statistical methods. The research falls mainly into the following 10 categories (ordered by number of papers): structural equation modeling, test reliability, mediation effects, effect size and statistical power, longitudinal research, moderation effects, exploratory factor analysis, latent class models, common method bias, and multilevel linear models. Each category is briefly reviewed and summarized. The results show that the breadth and depth of research on psychological statistical methods in China have steadily increased, and research hotspots have developed together through mutual integration; however, review articles account for a relatively large share, the proportion of original research papers needs to be raised, and research capacity also needs to be strengthened.

4.
Finite mixture models are widely used in the analysis of growth trajectory data to discover subgroups of individuals exhibiting similar patterns of behavior over time. In practice, trajectories are usually modeled as polynomials, which may fail to capture important features of the longitudinal pattern. Focusing on dichotomous response measures, we propose a likelihood penalization approach for parameter estimation that is able to capture a variety of nonlinear class mean trajectory shapes with higher precision than maximum likelihood estimates. We show how parameter estimation and inference for whether trajectories are time-invariant, linear time-varying, or nonlinear time-varying can be carried out for such models. To illustrate the method, we use simulation studies and data from a long-term longitudinal study of children at high risk for substance abuse. This work was supported in part by NIAAA grants R37 AA07065 and R01 AA12217 to RAZ.
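A minimal illustration of the penalization idea, not the authors' exact estimator: for one class, a binary outcome gets one free logit per time point, and a second-difference roughness penalty shrinks the trajectory toward a smooth shape. The mixture structure is omitted and the penalty weight is simply fixed here, whereas in practice it would be tuned.

```r
# Penalized negative log-likelihood for one class's mean trajectory:
# a binary outcome with one free logit per time point, smoothed by a
# second-difference roughness penalty. The mixture machinery is omitted.
pen_negll <- function(beta, y, time, lambda) {
  -sum(dbinom(y, 1, plogis(beta[time]), log = TRUE)) +
    lambda * sum(diff(beta, differences = 2)^2)
}

set.seed(3)
T_pts <- 8; n <- 200
time  <- rep(1:T_pts, each = n)                 # integer time index per observation
truep <- plogis(sin(1:T_pts))                   # a nonlinear true trajectory
y     <- rbinom(n * T_pts, 1, truep[time])

fit <- optim(rep(0, T_pts), pen_negll, y = y, time = time, lambda = 5,
             method = "BFGS")
round(cbind(true = truep, estimated = plogis(fit$par)), 2)
```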

5.
Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
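Two of the compared strength indices are simple functions of bifactor loadings. The R sketch below computes omega hierarchical and explained common variance from a standardized orthogonal bifactor solution with made-up loadings for six items on two specific factors; the numbers are illustrative, not estimates from the study.

```r
# Omega hierarchical and explained common variance (ECV) from a standardized,
# orthogonal bifactor solution. Loadings are made-up numbers, not study estimates.
gen  <- c(.6, .5, .7, .6, .5, .6)          # general (p) factor loadings
s1   <- c(.4, .3, .5,  0,  0,  0)          # specific factor 1 (items 1-3)
s2   <- c( 0,  0,  0, .4, .5, .3)          # specific factor 2 (items 4-6)
uniq <- 1 - gen^2 - s1^2 - s2^2            # unique variances

total_var <- sum(gen)^2 + sum(s1)^2 + sum(s2)^2 + sum(uniq)
omega_h   <- sum(gen)^2 / total_var                     # omega hierarchical
ecv       <- sum(gen^2) / sum(gen^2 + s1^2 + s2^2)      # explained common variance
round(c(omega_h = omega_h, ecv = ecv), 3)
```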

6.
Computational models of lexical semantics, such as latent semantic analysis, can automatically generate semantic similarity measures between words from statistical redundancies in text. These measures are useful for experimental stimulus selection and for evaluating a model’s cognitive plausibility as a mechanism that people might use to organize meaning in memory. Although humans are exposed to enormous quantities of speech, practical constraints limit the amount of data that many current computational models can learn from. We follow up on previous work evaluating a simple metric of pointwise mutual information. Controlling for confounds in previous work, we demonstrate that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models. We also present a tool for quickly and efficiently building simple, scalable models from large corpora.
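A hedged sketch of the metric itself: pointwise mutual information compares a word pair's co-occurrence probability with what independence would predict. The counts below are invented; in an actual model they would come from sliding context windows over a large corpus.

```r
# Pointwise mutual information for a word pair from co-occurrence counts.
# All counts are invented for illustration.
pmi <- function(n_xy, n_x, n_y, n_total) {
  p_xy <- n_xy / n_total
  p_x  <- n_x  / n_total
  p_y  <- n_y  / n_total
  log2(p_xy / (p_x * p_y))
}

pmi(n_xy = 150, n_x = 2000, n_y = 3000, n_total = 1e6)   # e.g., "doctor" and "nurse"
max(0, pmi(150, 2000, 3000, 1e6))                        # positive PMI (PPMI) variant
```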

7.
Item factor analysis: current approaches and future directions
The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA) estimation literature for ordered-categorical data (e.g., Likert-type response scales) with specific attention paid to the problems of estimating models with many items and many factors. Popular IFA models and estimation methods found in the structural equation modeling and item response theory literatures are presented. Following this presentation, recent developments in the estimation of IFA parameters (e.g., Markov chain Monte Carlo) are discussed. The authors conclude with considerations for future research on IFA, simulated examples, and advice for applied researchers.
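One of the limited-information, SEM-based estimation routes of the kind such reviews describe can be run in R with the lavaan package by declaring the items ordered, roughly as sketched below. The data frame and item names are placeholders, and this is only one of several estimation approaches the literature covers.

```r
# Limited-information item factor analysis of Likert-type items in lavaan,
# obtained by declaring the items ordered. "mydata" and the item names are
# placeholders for an actual data set.
library(lavaan)

model <- '
  distress =~ item1 + item2 + item3 + item4 + item5
'
fit <- cfa(model, data = mydata,
           ordered   = c("item1", "item2", "item3", "item4", "item5"),
           estimator = "WLSMV")
summary(fit, fit.measures = TRUE, standardized = TRUE)
```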

8.
This paper develops a unified approach, based on ranks, to the statistical analysis of data arising from complex experimental designs, thereby answering a major objection to the use of rank procedures as a primary methodology in data analysis. We show that the rank procedures, including testing, estimation, and multiple comparisons, are generated in a natural way from a robust measure of scale. The rank methods closely parallel the familiar methods of least squares, so that estimates and tests have natural interpretations. This research was supported in part by grant MCS76-07292 from the National Science Foundation.
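A minimal sketch of one member of this family of rank procedures: the regression slopes estimated by minimizing Jaeckel's dispersion of the residuals under Wilcoxon scores, with the intercept recovered from the median residual. This is an illustration of the rank-based analogue to least squares, not the paper's full development.

```r
# Rank-based regression: minimize Jaeckel's dispersion of the residuals with
# Wilcoxon scores; the intercept is recovered from the median residual.
wilcoxon_dispersion <- function(beta, y, X) {
  e <- as.vector(y - X %*% beta)
  a <- sqrt(12) * (rank(e) / (length(e) + 1) - 0.5)   # Wilcoxon scores
  sum(a * e)
}

set.seed(4)
n <- 100
X <- cbind(x1 = rnorm(n), x2 = rnorm(n))
y <- 1 + 2 * X[, 1] - 1 * X[, 2] + rt(n, df = 3)      # heavy-tailed errors

fit       <- optim(c(0, 0), wilcoxon_dispersion, y = y, X = X)
intercept <- median(y - X %*% fit$par)
c(intercept = intercept, fit$par)
```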

9.
Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the linear ballistic accumulator model, which has a known likelihood, and the leaky competing accumulator model, whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics, a feat that was not previously possible.
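The sketch below illustrates the general likelihood-free strategy on a toy Gaussian model: the intractable likelihood is replaced by a kernel density estimate built from data simulated at each proposed parameter value, and that synthetic likelihood is plugged into a Metropolis-Hastings sampler. The model, prior, and tuning constants are illustrative, and this is not the authors' algorithm verbatim.

```r
# Likelihood-free Metropolis-Hastings: the likelihood is replaced by a Gaussian-kernel
# density estimate built from data simulated at each proposed parameter value.
# Toy model: observations ~ Normal(mu, 1); mu is the only parameter.
simulate_model <- function(mu, n_sim = 2000) rnorm(n_sim, mu, 1)

synthetic_loglik <- function(mu, obs, h = 0.2) {
  sims <- simulate_model(mu)
  dens <- sapply(obs, function(o) mean(dnorm(o - sims, sd = h)))
  sum(log(dens + 1e-300))
}

set.seed(5)
obs    <- rnorm(50, mean = 1.5, sd = 1)
chain  <- numeric(2000)
cur_ll <- synthetic_loglik(chain[1], obs)
for (t in 2:length(chain)) {
  prop    <- chain[t - 1] + rnorm(1, 0, 0.3)
  prop_ll <- synthetic_loglik(prop, obs)
  log_acc <- (prop_ll + dnorm(prop, 0, 5, log = TRUE)) -
             (cur_ll  + dnorm(chain[t - 1], 0, 5, log = TRUE))
  if (log(runif(1)) < log_acc) {
    chain[t] <- prop; cur_ll <- prop_ll
  } else {
    chain[t] <- chain[t - 1]
  }
}
quantile(chain[-(1:500)], c(.025, .5, .975))   # approximate posterior for mu
```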

10.
Multilevel factor analysis models are widely used in the social sciences to account for heterogeneity in mean structures. In this paper we extend previous work on multilevel models to account for general forms of heterogeneity in confirmatory factor analysis models. We specify various models of mean and covariance heterogeneity in confirmatory factor analysis and develop Markov chain Monte Carlo (MCMC) procedures to perform Bayesian inference, model checking, and model comparison. We test our methodology using synthetic data and data from a consumption emotion study. The results from the synthetic data show that our Bayesian models perform well in recovering the true parameters and selecting the appropriate model. More importantly, the results clearly illustrate the consequences of ignoring heterogeneity. Specifically, we find that ignoring heterogeneity can lead to sign reversals of the factor covariances, inflation of factor variances, and underappreciation of uncertainty in parameter estimates. The results from the emotion study show that subjects vary both in means and covariances. Thus traditional psychometric methods cannot fully capture the heterogeneity in our data.
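A small simulation, not taken from the paper, that illustrates one of the reported consequences of ignoring heterogeneity: when two groups each have a negative covariance between two factors but differ in their means, the covariance computed from the pooled sample can come out positive.

```r
# Two groups with a negative within-group covariance between two factors;
# because the group means differ, the covariance in the pooled sample is positive.
library(MASS)
set.seed(6)
Sigma <- matrix(c(1, -0.4, -0.4, 1), nrow = 2)
g1 <- mvrnorm(500, mu = c(0, 0), Sigma = Sigma)
g2 <- mvrnorm(500, mu = c(3, 3), Sigma = Sigma)

c(within_g1 = cov(g1)[1, 2],
  within_g2 = cov(g2)[1, 2],
  pooled    = cov(rbind(g1, g2))[1, 2])   # sign reversal when heterogeneity is ignored
```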

11.
Traditional statistical analyses can be compromised when data are collected from groups or multiple observations are collected from individuals. We present an introduction to multilevel models designed to address dependency in data. We review current use of multilevel modeling in 3 personality journals, showing use concentrated in the 2 areas of experience sampling and longitudinal growth. Using an empirical example, we illustrate specification and interpretation of the results of a series of models as predictor variables are introduced at Levels 1 and 2. Attention is given to possible trends and cycles in longitudinal data and to different forms of centering. We consider issues that may arise in estimation, model comparison, model evaluation, and data evaluation (outliers), highlighting similarities to and differences from standard regression approaches. Finally, we consider newer developments, including 3-level models, cross-classified models, nonstandard (limited) dependent variables, multilevel structural equation modeling, and nonlinear growth. Multilevel approaches both address traditional problems of dependency in data and provide personality researchers with the opportunity to ask new questions of their data.
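A sketch of the kind of specification discussed above, using lme4 with diary-style variable names that are purely illustrative: a Level-1 predictor is centered within person, a Level-2 predictor is grand-mean centered, and nested models are compared with a likelihood-ratio test.

```r
# Growth model for observations nested in persons ("dat" and the diary-style
# variable names are placeholders). A Level-1 predictor is person-centered and
# a Level-2 predictor is grand-mean centered before entering the model.
library(lme4)

dat$stress_pc  <- dat$stress - ave(dat$stress, dat$id)    # within-person centering
dat$neurot_gmc <- dat$neurot - mean(dat$neurot)           # grand-mean centering

m0 <- lmer(mood ~ time + (1 + time | id), data = dat, REML = FALSE)
m1 <- lmer(mood ~ time + stress_pc + neurot_gmc + (1 + time | id),
           data = dat, REML = FALSE)
anova(m0, m1)      # likelihood-ratio comparison as predictors are added
summary(m1)
```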

12.
Generalized full-information item bifactor analysis
Cai L, Yang JS, Hansen M. Psychological Methods, 2011, 16(3), 221-248
Full-information item bifactor analysis is an important statistical method in psychological and educational measurement. Current methods are limited to single-group analysis and inflexible in the types of item response models supported. We propose a flexible multiple-group item bifactor analysis framework that supports a variety of multidimensional item response theory models for an arbitrary mixing of dichotomous, ordinal, and nominal items. The extended item bifactor model also enables the estimation of latent variable means and variances when data from more than 1 group are present. Generalized user-defined parameter restrictions are permitted within or across groups. We derive an efficient full-information maximum marginal likelihood estimator. Our estimation method achieves substantial computational savings by extending Gibbons and Hedeker's (1992) bifactor dimension reduction method so that the optimization of the marginal log-likelihood requires only 2-dimensional integration regardless of the dimensionality of the latent variables. We use simulation studies to demonstrate the flexibility and accuracy of the proposed methods. We apply the model to study cross-country differences, including differential item functioning, using data from a large international education survey on mathematics literacy.
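The computational payoff comes from the dimension-reduction identity: under a bifactor model the marginal probability of a response pattern needs only 2-dimensional integration, which can be organized as a one-dimensional sum over the general factor with a nested one-dimensional sum per specific factor. The R sketch below evaluates that identity for a single dichotomous response pattern with crude grid quadrature and illustrative item parameters; it is not the authors' implementation.

```r
# Dimension reduction for one dichotomous response pattern under a bifactor model:
# the marginal probability is a nested set of 1-D quadratures (general factor
# outside, one specific factor at a time inside), never a full high-dimensional one.
# Quadrature is a crude normal grid; item parameters are illustrative.
q_pts <- seq(-4, 4, length.out = 31)
q_wts <- dnorm(q_pts); q_wts <- q_wts / sum(q_wts)

a_gen  <- rep(1.2, 6)                     # general-factor slopes
a_spec <- rep(0.8, 6)                     # specific-factor slopes
d      <- rep(0, 6)                       # intercepts
groups <- c(1, 1, 1, 2, 2, 2)             # items 1-3 on specific factor 1, 4-6 on 2
y      <- c(1, 0, 1, 1, 1, 0)             # one observed response pattern

pattern_prob <- 0
for (gq in seq_along(q_pts)) {                       # node for the general factor
  prod_specifics <- 1
  for (s in unique(groups)) {                        # one 1-D sum per specific factor
    items <- which(groups == s)
    inner <- 0
    for (sq in seq_along(q_pts)) {                   # node for this specific factor
      p     <- plogis(d[items] + a_gen[items] * q_pts[gq] + a_spec[items] * q_pts[sq])
      inner <- inner + q_wts[sq] * prod(p^y[items] * (1 - p)^(1 - y[items]))
    }
    prod_specifics <- prod_specifics * inner
  }
  pattern_prob <- pattern_prob + q_wts[gq] * prod_specifics
}
pattern_prob     # marginal probability of the response pattern
```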

13.
Statistical inference (including interval estimation and model selection) is increasingly used in the analysis of behavioral data. As with many other fields, statistical approaches for these analyses traditionally use classical (i.e., frequentist) methods. Interpreting classical intervals and p-values correctly can be burdensome and counterintuitive. By contrast, Bayesian methods treat data, parameters, and hypotheses as random quantities and use rules of conditional probability to produce direct probabilistic statements about models and parameters given observed study data. In this work, we reanalyze two data sets using Bayesian procedures. We precede the analyses with an overview of the Bayesian paradigm. The first study reanalyzes data from a recent study of controls, heavy smokers, and individuals with alcohol and/or cocaine substance use disorder, and focuses on Bayesian hypothesis testing for covariates and interval estimation for discounting rates among various substance use disorder profiles. The second example analyzes hypothetical environmental delay-discounting data. This example focuses on using historical data to establish prior distributions for parameters while allowing subjective expert opinion to govern the prior distribution on model preference. We review the subjective nature of specifying Bayesian prior distributions but also review established methods to standardize the generation of priors and remove subjective influence while still taking advantage of the interpretive advantages of Bayesian analyses. We present the Bayesian approach as an alternative paradigm for statistical inference and discuss its strengths and weaknesses.
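A bare-bones illustration of the Bayesian ingredients described above, applied to a hyperbolic discounting curve fitted to hypothetical indifference points: a prior and a likelihood are combined on a grid of discount rates and summarized by a posterior mean and a 95% credible interval. The data, the error model, and the prior are all invented for the sketch and are not the article's.

```r
# Grid-approximation posterior for a hyperbolic discounting rate k, where the
# indifference point for amount A at delay D is modeled as A / (1 + k * D) plus
# normal error. Data, error SD, and prior are all hypothetical.
delays <- c(1, 7, 30, 90, 180, 365)                    # days
indiff <- c(95, 80, 60, 40, 30, 20)                    # indifference points (A = 100)
A <- 100; sigma <- 8

k_grid   <- seq(0.001, 0.2, length.out = 2000)
loglik   <- sapply(k_grid, function(k)
  sum(dnorm(indiff, mean = A / (1 + k * delays), sd = sigma, log = TRUE)))
logprior <- dgamma(k_grid, shape = 2, rate = 50, log = TRUE)   # weakly informative prior
post     <- exp(loglik + logprior - max(loglik + logprior))
post     <- post / sum(post)

c(post_mean = sum(k_grid * post),
  ci_lower  = k_grid[which(cumsum(post) >= 0.025)[1]],
  ci_upper  = k_grid[which(cumsum(post) >= 0.975)[1]])
```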

14.
We develop a general approach to factor analysis that involves observed and latent variables that are assumed to be distributed in the exponential family. This gives rise to a number of factor models not considered previously and enables the study of latent variables in an integrated methodological framework, rather than as a collection of seemingly unrelated special cases. The framework accommodates a great variety of measurement scales as well as cases where different latent variables have different distributions. The models are estimated with the method of simulated likelihood, which allows higher-dimensional factor solutions to be estimated than heretofore. The models are illustrated on synthetic data. We investigate their performance when the distribution of the latent variables is misspecified and when some of the observations are missing. We study the properties of the simulation estimators relative to maximum likelihood estimation with numerical integration. We provide an empirical application to the analysis of attitudes.
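The sketch below shows the method of simulated likelihood in its simplest form for one exponential-family case, a one-factor model with Poisson indicators: the latent factor is integrated out by averaging the conditional likelihood over a fixed set of random draws (common random numbers), and the resulting objective is maximized with optim. Sizes and parameter values are illustrative.

```r
# Simulated (Monte Carlo) marginal likelihood for a one-factor model with Poisson
# indicators: the latent factor is integrated out by averaging over a fixed set of
# random draws reused across calls. Sizes and parameters are illustrative.
sim_negll <- function(par, Y, z) {
  J <- ncol(Y)
  alpha  <- par[1:J]
  lambda <- par[(J + 1):(2 * J)]
  ll <- 0
  for (i in seq_len(nrow(Y))) {
    li <- sapply(z, function(zd) prod(dpois(Y[i, ], exp(alpha + lambda * zd))))
    ll <- ll + log(mean(li) + 1e-300)
  }
  -ll
}

set.seed(7)
n <- 150; J <- 4
f <- rnorm(n)                                              # true latent factor
Y <- sapply(1:J, function(j) rpois(n, exp(0.5 + 0.6 * f)))

z_draws <- rnorm(200)                                      # common random draws
fit <- optim(rep(c(0.5, 0.5), each = J), sim_negll, Y = Y, z = z_draws,
             method = "BFGS")
matrix(fit$par, ncol = 2, dimnames = list(paste0("item", 1:J), c("alpha", "lambda")))
```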

15.
Using a single-task procedure in which prompt cues were introduced, and with produced duration as the response measure, this study systematically examined the interruption-expectancy effect and the cue effect (an attention effect) in duration-estimation tasks containing an interruption, and further explored the effect of interruption duration and the relationship between produced duration and waiting duration. The results show that the position of the interruption (the waiting duration) was the main cue for participants' temporal judgments: produced durations lengthened as the waiting duration increased. A highly significant cue effect appeared in the interruption trials; this effect both increased the variability of duration estimates and lengthened participants' estimates. Under the no-interruption condition, participants showed a significant interruption-expectancy effect, with the expectation of an interruption impairing their time estimation.

16.
Although Thurstonian models provide an attractive representation of choice behavior, they have not been used extensively in ranking applications because efficient estimation methods for these models have been developed only recently. These methods, however, require special-purpose estimation programs, which limits their applicability. Here we introduce a formulation of Thurstonian ranking models that turns an idiosyncratic estimation problem into an estimation problem involving mean and covariance structures with dichotomous indicators. Well-known standard solutions for the latter can be readily applied to this specific problem, and as a result any Thurstonian model for ranking data can be fitted using existing general-purpose software for mean and covariance structure analysis. Although the most popular programs for covariance structure analysis (e.g., LISREL and EQS) cannot presently be used to estimate Thurstonian ranking models, other programs, such as MECOSA, already exist that can be used straightforwardly to estimate these models. This paper is based on the author's doctoral dissertation. Ulf Böckenholt was my advisor. The author is indebted to Ulf Böckenholt for his comments on a previous version of this paper and to Gerhard Arminger for his extensive support on the use of MECOSA. The final stages of this research took place while the author was at the Department of Statistics and Econometrics, Universidad Carlos III de Madrid. Conversations with my colleague there, Adolfo Hernández, helped to greatly improve this paper.
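The step that makes standard mean and covariance structure software applicable is a data recoding: each ranking of k objects becomes k(k-1)/2 dichotomous indicators, one per object pair, recording which object was ranked higher. The R sketch below performs that recoding for hypothetical rankings; the subsequent Thurstonian structural model is not shown.

```r
# Recode a ranking of k objects into k*(k-1)/2 binary paired-comparison indicators:
# indicator (a, b) equals 1 when object a is ranked above (preferred to) object b.
rank_to_binary <- function(ranking) {
  k     <- length(ranking)
  pairs <- t(combn(k, 2))
  ind   <- as.integer(ranking[pairs[, 1]] < ranking[pairs[, 2]])
  names(ind) <- apply(pairs, 1, function(p) paste0("y", p[1], "_", p[2]))
  ind
}

# hypothetical rankings of 4 objects by 3 respondents (1 = most preferred)
R <- rbind(c(1, 3, 2, 4),
           c(2, 1, 4, 3),
           c(1, 2, 3, 4))
t(apply(R, 1, rank_to_binary))   # dichotomous data for mean/covariance analysis
```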

17.
In this article, we show that the underlying dimensions obtained when factor analyzing cross-sectional data actually form a mix of within-person state dimensions and between-person trait dimensions. We propose a factor analytical model that distinguishes between four independent sources of variance: common trait, unique trait, common state, and unique state. We show that by testing whether there is weak factorial invariance across the trait and state factor structures, we can tackle the fundamental question first raised by Cattell; that is, are within-person state dimensions qualitatively the same as between-person trait dimensions? Furthermore, we discuss how this model is related to other trait-state factor models, and we illustrate its use with two empirical data sets. We end by discussing the implications for cross-sectional factor analysis and suggest potential future developments.
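One common way to set up such a four-source decomposition in lavaan syntax is sketched below for three indicators measured on two occasions: a common trait factor spans occasions, each occasion has a common state factor, and indicator-specific factors capture unique trait variance, with residuals carrying the unique state component. The specification and variable names are illustrative and not necessarily the authors' exact model.

```r
# One way to specify the four-source decomposition in lavaan: three indicators
# measured at two occasions (wide format, names illustrative). Factors are kept
# orthogonal; residual variances carry the unique state component.
library(lavaan)

model <- '
  trait  =~ y1t1 + y2t1 + y3t1 + y1t2 + y2t2 + y3t2   # common trait
  state1 =~ y1t1 + y2t1 + y3t1                        # common state, occasion 1
  state2 =~ y1t2 + y2t2 + y3t2                        # common state, occasion 2
  u1 =~ 1*y1t1 + 1*y1t2                               # unique (indicator-specific) traits
  u2 =~ 1*y2t1 + 1*y2t2
  u3 =~ 1*y3t1 + 1*y3t2
'
fit <- cfa(model, data = widedata, orthogonal = TRUE)
summary(fit, standardized = TRUE)
```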

18.
Measurement models, such as factor analysis and item response theory models, are commonly implemented within educational, psychological, and behavioral science research to mitigate the negative effects of measurement error. These models can be formulated as an extension of generalized linear mixed models within a unifying framework that encompasses various kinds of multilevel models and longitudinal models, such as partially nonlinear latent growth models. We introduce the R package PLmixed, which implements profile maximum likelihood estimation to estimate complex measurement and growth models that can be formulated within the general modeling framework using the existing R package lme4 and function optim. We demonstrate the use of PLmixed through two examples before concluding with a brief overview of other possible models.
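The profile-likelihood idea in its simplest form: once a factor loading is fixed, the measurement model reduces to an ordinary lme4 model in which the loading acts as a known weight on a person-level random effect, so the loading can be estimated by optimizing the profiled deviance. The two-indicator setup, equal residual variances, and variable names below are illustrative simplifications, and the PLmixed interface itself is not shown.

```r
# Profile ML for a single factor loading: with the loading fixed, the model is an
# ordinary lmer model in which the loading is a known weight on the person-level
# random effect. Indicator 1's loading is fixed at 1; residual variances are
# assumed equal across indicators. All names and values are illustrative.
library(lme4)

profile_dev <- function(lambda, d) {
  d$w <- ifelse(d$indicator == 2, lambda, 1)
  fit <- lmer(y ~ 0 + factor(indicator) + (0 + w | person), data = d, REML = FALSE)
  deviance(fit)
}

set.seed(8)
n   <- 300
eta <- rnorm(n)                                   # common factor
d   <- data.frame(person    = rep(1:n, each = 2),
                  indicator = rep(1:2, times = n),
                  y = c(rbind(1 + 1.0 * eta + rnorm(n, 0, 0.5),
                              2 + 0.6 * eta + rnorm(n, 0, 0.5))))

opt <- optimize(profile_dev, interval = c(0.05, 3), d = d)
opt$minimum      # profiled estimate of the loading (true value 0.6)
```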

19.
Multiple item response profile (MIRP) models are models with crossed fixed and random effects. At least one between-person factor is crossed with at least one within-person factor, and the persons nested within the levels of the between-person factor are crossed with the items within levels of the within-person factor. Maximum likelihood estimation (MLE) of models for binary data with crossed random effects is challenging. This is because the marginal likelihood does not have a closed form, so that MLE requires numerical or Monte Carlo integration. In addition, the multidimensional structure of MIRPs makes the estimation complex. In this paper, three different estimation methods to meet these challenges are described: the Laplace approximation to the integrand; hierarchical Bayesian analysis, a simulation-based method; and an alternating imputation posterior with adaptive quadrature as the approximation to the integral. In addition, this paper discusses the advantages and disadvantages of these three estimation methods for MIRPs. The three algorithms are compared in a real data application and a simulation study was also done to compare their behaviour.
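The simplest member of this model family, binary responses with persons crossed with items, can be fit in lme4 with the Laplace approximation as sketched below; the data frame and covariate names are placeholders, and the full MIRP structure with between- and within-person factors is not reproduced. Note that glmer permits adaptive quadrature (nAGQ greater than 1) only for models with a single scalar random effect, so crossed designs like this one rely on the Laplace approximation.

```r
# Binary responses with persons crossed with items, fit with the Laplace
# approximation. "mirp_data", "condition", and "item_type" are placeholder names
# for person- and item-level covariates.
library(lme4)

fit <- glmer(response ~ condition * item_type + (1 | person) + (1 | item),
             data = mirp_data, family = binomial)
summary(fit)
```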

20.
Nonlinear Regime-Switching State-Space (RSSS) Models
Nonlinear dynamic factor analysis models extend standard linear dynamic factor analysis models by allowing time series processes to be nonlinear at the latent level (e.g., involving interaction between two latent processes). In practice, it is often of interest to identify the phases—namely, latent “regimes” or classes—during which a system is characterized by distinctly different dynamics. We propose a new class of models, termed nonlinear regime-switching state-space (RSSS) models, which subsumes regime-switching nonlinear dynamic factor analysis models as a special case. In nonlinear RSSS models, the change processes within regimes, represented using a state-space model, are allowed to be nonlinear. An estimation procedure obtained by combining the extended Kalman filter and the Kim filter is proposed as a way to estimate nonlinear RSSS models. We illustrate the utility of nonlinear RSSS models by fitting a nonlinear dynamic factor analysis model with regime-specific cross-regression parameters to a set of experience sampling affect data. The parallels between nonlinear RSSS models and other well-known discrete change models in the literature are discussed briefly.
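Below is a bare-bones extended Kalman filter for a univariate nonlinear state equation, the building block that the proposed estimation procedure combines with the Kim filter; regime switching itself is not implemented here, and the transition function and noise variances are illustrative.

```r
# A univariate extended Kalman filter: linearize a nonlinear state transition at
# the current estimate, then apply the usual predict-update recursions.
f      <- function(x) 0.5 * x + 0.3 * sin(x)       # nonlinear state transition
f_grad <- function(x) 0.5 + 0.3 * cos(x)           # its derivative (EKF linearization)

ekf_step <- function(x_prev, P_prev, y, Q = 0.1, R = 0.5) {
  x_pred <- f(x_prev)                              # predict
  Fj     <- f_grad(x_prev)
  P_pred <- Fj * P_prev * Fj + Q
  K      <- P_pred / (P_pred + R)                  # update (observation matrix = 1)
  list(x = x_pred + K * (y - x_pred),
       P = (1 - K) * P_pred)
}

set.seed(9)
x <- numeric(50); x[1] <- 0.5
for (t in 2:50) x[t] <- f(x[t - 1]) + rnorm(1, 0, sqrt(0.1))
y <- x + rnorm(50, 0, sqrt(0.5))                   # noisy observations

est <- list(x = 0, P = 1); x_filtered <- numeric(50)
for (t in 1:50) { est <- ekf_step(est$x, est$P, y[t]); x_filtered[t] <- est$x }
head(cbind(true = x, filtered = x_filtered))
```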
