Similar documents
 20 similar documents found; search took 31 ms.
1.
钱锦昕, 余嘉元. 《心理学报》 (Acta Psychologica Sinica), 2013, 45(6): 704-714
This study explored how gene expression programming (GEP) can be used to model self-report scale data. The Williams Creativity Assessment Packet and the Need for Cognition Scale were administered to 400 middle-school students; after data cleaning, scores from 383 participants were retained as the modeling data set. Harman's single-factor test detected no common method bias. A uniform design was used to optimize the configuration of five GEP parameters, and the model with the smallest test error was identified under the experimental condition with the highest test fit. The predictive accuracy of the GEP model was then compared with models built by BP neural networks, support vector regression, multiple linear regression, and quadratic polynomial regression. The results show that GEP can be used to model self-report scale data, that the resulting model predicts more accurately than models built with the traditional methods, and that the model is robust.

2.
Some nonparametric allocation methods are proposed for use in computer-aided medical diagnostics. Replacing the widely employed parametric models with these methods can be expected to yield more realistic results, because the assumptions made by parametric models, which are never fulfilled in practice, become unnecessary. The overestimation of discriminating power that arises from the non-fulfillment of parametric assumptions is thereby avoided.

3.
Authors dealing with models of individual choice behavior often point out the pitfalls of using group data to test such models. However, it is possible to derive properties of the choice behavior of the population based on these models of individual choice behavior. The present paper develops a procedure for the aggregation of choice data which takes into account individual differences. The utility of using aggregated choice data and the properties of the present procedure in comparison to other methods which utilized group data in analyzing choice behavior are discussed.

4.
A basic problem in psychophysics is recovering the mean internal response and noise amplitude from sensory discrimination data. Since these components cannot be estimated independently, several indirect methods were suggested to resolve this issue. Here we analyze the two-alternative forced-choice method (2AFC), using a signal detection theory approach, and show analytically that the 2AFC data are not always suitable for a reliable estimation of the mean internal responses and noise amplitudes. Specifically, we show that there is a subspace of internal parameters that are highly sensitive to sampling errors (singularities), which results in a large range of estimated parameters with a finite number of experimental trials. Four types of singular models were identified, including the models where the noise amplitude is independent of the stimulus intensity, a situation often encountered in visual contrast discrimination. Finally, we consider two ways to avoid singularities: (1) adding external noise to the stimuli, and (2) using one-interval forced-choice scaling methods (such as the Thurstonian scaling method for successive intervals).
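The identifiability problem this abstract describes can be illustrated with a minimal sketch (not the authors' actual derivation): under a Gaussian signal detection model, 2AFC proportion correct depends only on the ratio of the mean difference to the combined noise amplitude, so distinct internal-parameter sets produce identical data. The function name and parameter values below are illustrative assumptions.

```python
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def pc_2afc(delta_mu, sigma1, sigma2):
    """Proportion correct in 2AFC for two Gaussian internal responses
    whose means differ by delta_mu, with noise amplitudes sigma1, sigma2."""
    return Phi(delta_mu / sqrt(sigma1 ** 2 + sigma2 ** 2))

# Doubling both the mean difference and the noise amplitudes leaves 2AFC
# performance unchanged -- the parameters are not separately identifiable.
pc_a = pc_2afc(1.0, 1.0, 1.0)
pc_b = pc_2afc(2.0, 2.0, 2.0)
print(pc_a, pc_b)
```

This is exactly the singularity the abstract points to: any external constraint (e.g., injected noise of known amplitude) is needed to pin down the two components separately.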

5.
Statistical prediction of an outcome variable using multiple independent variables is a common practice in the social and behavioral sciences. For example, neuropsychologists are sometimes called upon to provide predictions of preinjury cognitive functioning for individuals who have suffered a traumatic brain injury. Typically, these predictions are made using standard multiple linear regression models with several demographic variables (e.g., gender, ethnicity, education level) as predictors. Prior research has shown conflicting evidence regarding the ability of such models to provide accurate predictions of outcome variables such as full-scale intelligence (FSIQ) test scores. The present study had two goals: (1) to demonstrate the utility of a set of alternative prediction methods that have been applied extensively in the natural sciences and business but have not been frequently explored in the social sciences and (2) to develop models that can be used to predict premorbid cognitive functioning in preschool children. Predictions of Stanford–Binet 5 FSIQ scores for preschool-aged children are used to compare the performance of a multiple regression model with several of these alternative methods. Results demonstrate that classification and regression trees provided more accurate predictions of FSIQ scores than did the more traditional regression approach. Implications of these results are discussed.
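As a rough illustration of the comparison described above (using synthetic data, not the study's preschool sample), a regression tree and an OLS model can be fit to the same demographic-style predictors. The variable names and data-generating process below are invented for illustration, and which model predicts better depends on how nonlinear the true relationship is; this sketch uses scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))  # stand-ins for coded demographic predictors
# Hypothetical ground truth with an interaction that OLS cannot represent.
y = 100 + 8 * X[:, 2] + 5 * (X[:, 0] > 0) * X[:, 1] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ols = LinearRegression().fit(X_tr, y_tr)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)

mae_ols = mean_absolute_error(y_te, ols.predict(X_te))
mae_tree = mean_absolute_error(y_te, tree.predict(X_te))
print(f"OLS MAE:  {mae_ols:.2f}")
print(f"Tree MAE: {mae_tree:.2f}")
```

A held-out test set (rather than in-sample fit) is the appropriate basis for the comparison, since trees can overfit the training data badly.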

6.
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show that these methods can be implemented in a flexible way which requires minimal technical sophistication on the part of the end user. After providing an overview of item factor analysis and MCMC, results from several examples (simulated and real) will be discussed. The bulk of these examples focus on models that are problematic for current “gold-standard” estimators. The results demonstrate that it is possible to obtain accurate parameter estimates using MCMC in a relatively user-friendly package.

7.
Diagnostic models provide a statistical framework for designing formative assessments by classifying student knowledge profiles according to a collection of fine-grained attributes. The context and ecosystem in which students learn may play an important role in skill mastery, and it is therefore important to develop methods for incorporating student covariates into diagnostic models. Including covariates may provide researchers and practitioners with the ability to evaluate novel interventions or understand the role of background knowledge in attribute mastery. Existing research is designed to include covariates in confirmatory diagnostic models, which are also known as restricted latent class models (RLCMs). We propose new methods for including covariates in exploratory RLCMs that jointly infer the latent structure and evaluate the role of covariates on performance and skill mastery. We present a novel Bayesian formulation and report a Markov chain Monte Carlo algorithm using a Metropolis-within-Gibbs algorithm for approximating the model parameter posterior distribution. We report Monte Carlo simulation evidence regarding the accuracy of our new methods and present results from an application that examines the role of student background knowledge on the mastery of a probability data set.

8.
9.
10.
Tijmstra, Jesper; Bolsinova, Maria. 《Psychometrika》2019, 84(3): 846-869

The assumption of latent monotonicity is made by all common parametric and nonparametric polytomous item response theory models and is crucial for establishing an ordinal level of measurement of the item score. Three forms of latent monotonicity can be distinguished: monotonicity of the cumulative probabilities, of the continuation ratios, and of the adjacent-category ratios. Observable consequences of these different forms of latent monotonicity are derived, and Bayes factor methods for testing these consequences are proposed. These methods allow for the quantification of the evidence both in favor and against the tested property. Both item-level and category-level Bayes factors are considered, and their performance is evaluated using a simulation study. The methods are applied to an empirical example consisting of a 10-item Likert scale to investigate whether a polytomous item scoring rule results in item scores that are of ordinal level measurement.

11.
The increasing availability of high-dimensional, fine-grained data about human behaviour, gathered from mobile sensing studies and in the form of digital footprints, is poised to drastically alter the way personality psychologists perform research and undertake personality assessment. These new kinds and quantities of data raise important questions about how to analyse the data and interpret the results appropriately. Machine learning models are well suited to these kinds of data, allowing researchers to model highly complex relationships and to evaluate the generalizability and robustness of their results using resampling methods. The correct usage of machine learning models requires specialized methodological training that considers issues specific to this type of modelling. Here, we first provide a brief overview of past studies using machine learning in personality psychology. Second, we illustrate the main challenges that researchers face when building, interpreting, and validating machine learning models. Third, we discuss the evaluation of personality scales derived using machine learning methods. Fourth, we highlight some key issues that arise from the use of latent variables in the modelling process. We conclude with an outlook on the future role of machine learning models in personality research and assessment.

12.
SUMMARY

Research on spirituality and religiousness has gained growing attention in recent years; however, most studies have used cross-sectional designs. As research on this topic evolves, there has been increasing recognition of the need to examine these constructs and their effects through the use of longitudinal designs. Beyond repeated-measures ANOVA and OLS regression models, what tools are available to examine these constructs over time? The purpose of this paper is to provide an overview of two cutting-edge statistical techniques that will facilitate longitudinal investigations of spirituality and religiousness: latent growth curve analysis using structural equation modeling (SEM) and individual growth curve models. The SEM growth curve approach examines change at the group level, with change over time expressed as a single latent growth factor. In contrast, individual growth curve models consider longitudinal change at the level of the person. While similar results may be obtained using either method, researchers may opt for one over the other due to the strengths and weaknesses associated with these methods. Examples of applications of both approaches to longitudinal studies of spirituality and religiousness are presented and discussed, along with design and data considerations when employing these modeling techniques.

13.
This article examines the possible contribution of behavioral and molecular genetic research to the development of a dimensional classification of personality disorder. It is argued that the results of molecular studies are too preliminary to have immediate nosological significance. However, behavioral genetic methods could play a useful role in constructing a classification that reflects the genetic architecture of personality disorder. It is also argued that the best approach to constructing a valid classification would be to integrate behavioral genetic methods with the construct validation framework used in test construction. An integrative approach is proposed that seeks to combine constructs from alternative dimensional models. It is suggested that strong evidence of a four-dimensional structure to personality disorder provides a way to organize a preliminary model. An initial set of primary traits to define these secondary domains would then be compiled from existing models and refined using a combination of traditional psychometric analyses and behavioral genetic methods. It is concluded that an etiologically based classification is feasible for the DSM-V.

14.
This paper studies three models for cognitive diagnosis, each illustrated with an application to fraction subtraction data. The objective of each of these models is to classify examinees according to their mastery of skills assumed to be required for fraction subtraction. We consider the DINA model, the NIDA model, and a new model that extends the DINA model to allow for multiple strategies of problem solving. For each of these models the joint distribution of the indicators of skill mastery is modeled using a single continuous higher-order latent trait, to explain the dependence in the mastery of distinct skills. This approach stems from viewing the skills as the specific states of knowledge required for exam performance, and viewing these skills as arising from a broadly defined latent trait resembling the θ of item response models. We discuss several techniques for comparing models and assessing goodness of fit. We then implement these methods using the fraction subtraction data with the aim of selecting the best of the three models for this application. We employ Markov chain Monte Carlo algorithms to fit the models, and we present simulation results to examine the performance of these algorithms. The work reported here was performed under the auspices of the External Diagnostic Research Team funded by Educational Testing Service. Views expressed in this paper do not necessarily represent the views of Educational Testing Service.
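The DINA model's item response function is simple enough to sketch directly (the Q-matrix, mastery profile, and slip/guess values below are invented for illustration, not the fraction subtraction data): an examinee answers an item correctly with probability 1 − slip if they master every skill the item requires, and with probability guess otherwise.

```python
import numpy as np

# Hypothetical Q-matrix: which of 3 skills each of 4 items requires.
Q = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])
slip = np.array([0.10, 0.15, 0.20, 0.10])   # P(incorrect | all required skills mastered)
guess = np.array([0.20, 0.10, 0.25, 0.05])  # P(correct | some required skill missing)

def dina_prob(alpha):
    """P(correct) on each item, given a 0/1 mastery profile alpha."""
    eta = np.all(alpha >= Q, axis=1)  # does alpha cover every required skill?
    return np.where(eta, 1 - slip, guess)

# An examinee mastering skills 1 and 2 but not skill 3:
print(dina_prob(np.array([1, 1, 0])))
```

The NIDA model differs in attaching slip and guess parameters to skills rather than items, so partial mastery degrades the success probability gradually instead of all-or-nothing.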

15.
Composition in distributional models of semantics   Cited by: 1 (self-citations: 0, by others: 1)
Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.
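The additive and multiplicative composition functions mentioned above can be sketched in a few lines. The vectors below are toy co-occurrence counts invented for illustration, not trained distributional representations:

```python
import numpy as np

# Toy distributional vectors over a 4-dimensional context vocabulary.
practical = np.array([1.0, 0.0, 6.0, 5.0])
difficulty = np.array([4.0, 2.0, 0.0, 5.0])

additive = practical + difficulty        # p + q: union-like combination
multiplicative = practical * difficulty  # elementwise p * q: intersection-like

def cosine(u, v):
    """Cosine similarity, the standard measure in vector-space semantics."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(additive, multiplicative)
```

Note the qualitative difference: the multiplicative model zeroes out any context dimension not shared by both words, while the additive model accumulates evidence from either word.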

16.
Computerized adaptive testing under nonparametric IRT models   Cited by: 1 (self-citations: 0, by others: 1)
Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using nonparametric item response models. A central issue is that the derivatives of item characteristic curves may not be estimated well, which eliminates the availability of the standard maximum Fisher information criterion. As alternatives, procedures based on Shannon entropy and Kullback–Leibler information are proposed. For a long test, these procedures, which do not require the derivatives of the item characteristic curves, become equivalent to the maximum Fisher information criterion. A simulation study is conducted to study the behavior of these two procedures, compared with random item selection. The study shows that the procedures based on Shannon entropy and Kullback–Leibler information perform similarly in terms of root mean square error, and perform much better than random item selection. The study also shows that item exposure rates need to be addressed for these methods to be practical. The authors would like to thank Hua Chang for his help in conducting this research.
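A minimal sketch of the Shannon entropy criterion described above: administer the item that minimizes the expected entropy of the posterior over ability. The ability grid, the stand-in ICCs, and the three candidate items below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

theta = np.linspace(-3, 3, 61)                   # discrete ability grid
posterior = np.full(theta.size, 1 / theta.size)  # uniform before any items

# Stand-in item characteristic curves; a nonparametric method would estimate
# these monotone curves from data rather than assume a logistic form.
iccs = np.array([1 / (1 + np.exp(-(theta - b))) for b in (-1.0, 0.0, 1.5)])

def expected_entropy(post, p_correct):
    """Expected Shannon entropy of the updated posterior after one item."""
    total = 0.0
    for p_resp in (p_correct, 1 - p_correct):  # correct / incorrect response
        marginal = float(post @ p_resp)        # P(this response)
        updated = post * p_resp / marginal     # Bayes update
        total += marginal * -np.sum(updated * np.log(updated + 1e-12))
    return total

scores = [expected_entropy(posterior, icc) for icc in iccs]
best_item = int(np.argmin(scores))  # most informative item for this posterior
print(best_item, [round(s, 3) for s in scores])
```

With a uniform posterior, the centrally located item is selected; as the posterior sharpens after each response, the criterion shifts toward items informative near the current ability estimate. Note that no ICC derivative is needed anywhere, which is the point of the method.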

17.
Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but use ANCOVA anyway (and, most likely, report misleading results); (b) attempt to employ 1 of several measurement error models with the understanding that no research has examined their relative performance and with the added practical difficulty that several of these models are not available in commonly used statistical software; or (c) not use ANCOVA at all. First, we discuss analytic evidence to explain why using ANCOVA with fallible covariates produces bias and a systematic inflation of Type I error rates that may lead to the incorrect conclusion that treatment effects exist. Second, to provide a solution for this problem, we conduct 2 Monte Carlo studies to compare 4 existing approaches for adjusting treatment effects in the presence of covariate measurement error: errors-in-variables (EIV; Warren, White, & Fuller, 1974), Lord's (1960) method, Raaijmakers and Pieters's (1987) method (R&P), and structural equation modeling methods proposed by Sörbom (1978) and Hayduk (1996). Results show that EIV models are superior in terms of parameter accuracy, statistical power, and keeping Type I error close to the nominal value. Finally, we offer a program written in R that performs all needed computations for implementing EIV models so that ANCOVA can be used to obtain accurate results even when covariates are measured with error.

18.
Eric Maris 《Psychometrika》1993,58(3):445-469
A class of models for gamma distributed random variables is presented. These models are shown to be more flexible than the classical linear models with respect to the structure that can be imposed on the expected value. In particular, additive, multiplicative, and combined additive-multiplicative models can all be formulated. As a special case, a class of psychometric models for reaction times is presented, together with their psychological interpretation. By means of a comparison with existing models, this class of models is shown to offer some possibilities that are not available in existing methods. Parameter estimation by means of maximum likelihood (ML) is shown to have some attractive properties, since the models belong to the exponential family. Then, the results of a simulation study of the bias in the ML estimates are presented. Finally, the application of these models is illustrated by an analysis of the data from a mental rotation experiment. This analysis is preceded by an evaluation of the appropriateness of the gamma distribution for these data.
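ML estimation for gamma-distributed reaction times can be sketched with simulated data. The shape/scale values and sample size below are arbitrary illustrations (not the mental rotation data), and the closed-form approximation to the shape MLE stands in, for brevity, for the full iterative solution:

```python
import numpy as np

rng = np.random.default_rng(42)
true_shape, true_scale = 4.0, 0.12
# Simulated reaction times (seconds) from a gamma distribution.
rt = rng.gamma(shape=true_shape, scale=true_scale, size=2000)

# The shape MLE has no closed form; the standard approximation below, based
# on s = ln(mean) - mean(ln), is accurate to well under 1% for moderate shapes.
s = np.log(rt.mean()) - np.log(rt).mean()
shape_hat = (3 - s + np.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)
scale_hat = rt.mean() / shape_hat  # at the MLE, shape * scale = sample mean
print(round(float(shape_hat), 2), round(float(scale_hat), 3))
```

The exponential-family structure mentioned in the abstract is what makes ML convenient here: the likelihood equations reduce to matching the sufficient statistics (the mean and the mean log).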

19.
The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.

20.
The bifactor model and the higher-order factor model are two competing models that both include a general factor and group factors, and both are widely used in research. Using Monte Carlo simulation, and building on model-fit comparisons, this study compared the two models' predictive accuracy across a range of loading conditions, with the criterion specified either as a manifest variable or as a latent variable. The results showed no significant difference in model fit between the two models. For predictive validity, when the criterion was a manifest variable, the structural coefficient estimates of both models were unbiased; when the criterion was a latent variable, the higher-order factor model outperformed the bifactor model: the higher-order model's structural coefficients were unbiased, whereas the bifactor model's structural coefficient estimates were biased in roughly 50% of the conditions.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号