Similar Literature
20 similar articles found
1.
Multinomial processing tree models are widely used in many areas of psychology. Their application relies on the assumption of parameter homogeneity, that is, on the assumption that participants do not differ in their parameter values. Tests for parameter homogeneity are proposed that can be routinely used as part of multinomial model analyses to defend the assumption. If parameter homogeneity is found to be violated, a new family of models, termed latent-class multinomial processing tree models, can be applied that accommodates parameter heterogeneity and correlated parameters, yet preserves most of the advantages of the traditional multinomial method. Estimation, goodness-of-fit tests, and tests of other hypotheses of interest are considered for the new family of models. The author thanks Bill Batchelder, Edgar Erdfelder, Thorsten Meiser, and Christoph Stahl for helpful comments on a previous version of this paper. The author is also grateful to Edgar Erdfelder for making available the data set analyzed in this paper.
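As a rough illustration of the latent-class idea (not the estimation procedure developed in the paper), the sketch below mixes the multinomial likelihoods of a simple one-high-threshold recognition model over latent classes; the model choice, counts, and parameter values are all invented.

```python
# Minimal sketch of a latent-class MPT likelihood, assuming a one-high-threshold
# recognition model with parameters D (detection) and g (guessing "old").
import numpy as np

def category_probs(D, g):
    """Category probabilities for old items (hit, miss) and new items
    (false alarm, correct rejection)."""
    hit = D + (1 - D) * g
    return np.array([hit, 1 - hit, g, 1 - g])

def latent_class_loglik(counts_per_person, class_weights, class_params):
    """Log-likelihood under a latent-class MPT: each participant belongs to one
    of several classes with its own parameters; class membership is marginalized
    out (multinomial coefficients omitted as constants)."""
    ll = 0.0
    for counts in counts_per_person:
        lik = 0.0
        for w, (D, g) in zip(class_weights, class_params):
            p = category_probs(D, g)
            lik += w * np.prod(p ** counts)
        ll += np.log(lik)
    return ll

data = [np.array([16, 4, 6, 14]), np.array([10, 10, 11, 9])]   # hypothetical counts
print(latent_class_loglik(data, [0.6, 0.4], [(0.7, 0.3), (0.1, 0.5)]))
```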

2.
Psychometrika - Multinomial processing trees (MPTs) are a popular class of cognitive models for categorical data. Typically, researchers compare several MPTs, each equipped with many parameters,...

3.
4.
Multinomial processing tree (MPT) modeling starts from a theoretical model and uses a multinomial model to fit behavioral data and to estimate the probability of each processing stage assumed by the theory. The approach can effectively separate and quantify distinct psychological processes and has been widely applied in social cognition research, for example on attitudes and stereotypes. This article first introduces the basic principles of the model and its implementation, and then illustrates its recent application in social psychology using moral judgment as an example. Finally, it summarizes the model's significance for social psychological research, namely that it can serve as a method to improve the validity and precision of studies and thus has considerable practical value, and it points out the model's potential limitations.

5.
The multinomial processing tree (MPT) model is a cognitive measurement model that can measure and test latent cognitive processes. Previous research has examined the reparameterization of order constraints in two-branch MPT models; the present study develops a quantitative analysis method for order constraints in MPT models and extends it from two branches to multiple branches, and it summarizes conclusions about the quantitative analysis of order constraints between two parameters both within and between the parameter vectors of an MPT model. The data analysis shows that the method not only verifies the order relations among latent parameters within the MPT framework but also provides a quantitative index of the constraint, offering a more meaningful interpretation for latent cognitive measurement.
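One common way to impose an order constraint between two MPT parameters is reparameterization; the sketch below shows that idea and a simple ratio index, purely for illustration (it is not the specific quantitative measure developed in the study).

```python
# Minimal sketch: enforce theta1 <= theta2 by writing theta1 = s * theta2
# with s in [0, 1]; the estimate of s can then serve as a crude index of how
# strongly the order constraint binds. Parameter values are hypothetical.
def constrained_params(s, theta2):
    """Map (s, theta2), both in [0, 1], to (theta1, theta2) with theta1 <= theta2."""
    return s * theta2, theta2

theta1_hat, theta2_hat = 0.42, 0.63        # hypothetical unconstrained estimates
s_hat = theta1_hat / theta2_hat            # near 1: parameters nearly equal; near 0: far apart
print(constrained_params(s_hat, theta2_hat), round(s_hat, 3))
```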

6.
This study shows how to address the problem of trait-unrelated response styles (RS) in rating scales using multidimensional item response theory. The aim is to test and correct data for RS in order to provide fair assessments of personality. Expanding on an approach presented by Böckenholt (2012), observed rating data are decomposed into multiple response processes based on a multinomial processing tree. The data come from a questionnaire consisting of 50 items of the International Personality Item Pool measuring the Big Five dimensions administered to 2,026 U.S. students with a 5-point rating scale. It is shown that this approach can be used to test if RS exist in the data and that RS can be differentiated from trait-related responses. Although the extreme RS appear to be unidimensional after exclusion of only 1 item, a unidimensional measure for the midpoint RS is obtained only after exclusion of 10 items. Both RS measurements show high cross-scale correlations and item response theory-based (marginal) reliabilities. Cultural differences could be found in giving extreme responses. Moreover, it is shown how to score rating data to correct for RS after being proved to exist in the data.
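A schematic version of the tree-based decomposition is sketched below: a 5-point rating is generated by three binary processes, a midpoint process, a direction process, and an extremity process. The exact parameterization used in the study may differ; all numbers are invented.

```python
# Schematic sketch in the spirit of Böckenholt (2012): m = probability of a
# midpoint response, d = probability of agreeing given a non-midpoint response,
# e = probability of an extreme response given a directional response.
def rating_probs(m, d, e):
    return {
        1: (1 - m) * (1 - d) * e,        # strong disagree
        2: (1 - m) * (1 - d) * (1 - e),  # disagree
        3: m,                            # midpoint
        4: (1 - m) * d * (1 - e),        # agree
        5: (1 - m) * d * e,              # strong agree
    }

probs = rating_probs(m=0.2, d=0.6, e=0.3)   # hypothetical process probabilities
assert abs(sum(probs.values()) - 1.0) < 1e-12
print(probs)
```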

7.
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
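The finite-mixture idea can be illustrated with a minimal sketch: the density of a continuous measure within a response category is a mixture of Gaussian components whose weights are branch probabilities from the tree. The two-branch tree and all parameter values below are assumptions for illustration only.

```python
# Minimal GPT-style mixture density for a response time in one category,
# reached via two branches (e.g., a fast "detect" branch and a slow "guess"
# branch); weights are conditioned on the category.
import numpy as np
from scipy.stats import norm

def gpt_density(t, branch_probs, mus, sigmas):
    """Mixture density of observation t for one response category."""
    weights = np.asarray(branch_probs)
    comps = np.array([norm.pdf(t, loc=mu, scale=sd) for mu, sd in zip(mus, sigmas)])
    return float(weights @ comps)

p_detect = 0.7
print(gpt_density(t=0.85, branch_probs=[p_detect, 1 - p_detect],
                  mus=[0.7, 1.1], sigmas=[0.15, 0.25]))
```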

8.
Among theories of human language comprehension, cue-based memory retrieval has proven to be a useful framework for understanding when and how processing difficulty arises in the resolution of long-distance dependencies. Most previous work in this area has assumed that very general retrieval cues like [+subject] or [+singular] do the work of identifying (and sometimes misidentifying) a retrieval target in order to establish a dependency between words. However, recent work suggests that general, handpicked retrieval cues like these may not be enough to explain illusions of plausibility (Cunnings & Sturt, 2018), which can arise in sentences like The letter next to the porcelain plate shattered. Capturing such retrieval interference effects requires lexically specific features and retrieval cues, but handpicking the features is hard to do in a principled way and greatly increases modeler degrees of freedom. To remedy this, we use well-established word embedding methods for creating distributed lexical feature representations that encode information relevant for retrieval using distributed retrieval cue vectors. We show that the similarity between the feature and cue vectors (a measure of plausibility) predicts total reading times in Cunnings and Sturt’s eye-tracking data. The features can easily be plugged into existing parsing models (including cue-based retrieval and self-organized parsing), putting very different models on more equal footing and facilitating future quantitative comparisons.  相似文献   
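A minimal sketch of the similarity-as-plausibility idea follows: the retrieval cue set up by the verb is compared, via cosine similarity, with candidate noun vectors. The toy 4-dimensional embeddings are invented; in practice pretrained word vectors would be used, and the paper's exact cue construction may differ.

```python
# Sketch: score retrieval plausibility as the cosine similarity between a
# distributed retrieval-cue vector (here, the verb's vector) and candidate
# feature vectors. Embeddings below are made-up toy values.
import numpy as np

embeddings = {
    "shattered": np.array([0.9, 0.1, 0.2, 0.0]),
    "plate":     np.array([0.8, 0.2, 0.1, 0.1]),
    "letter":    np.array([0.1, 0.9, 0.0, 0.3]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cue = embeddings["shattered"]          # semantic retrieval cue set up by the verb
for target in ("plate", "letter"):
    print(target, round(cosine(cue, embeddings[target]), 3))
```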

9.
General processing tree (GPT) models are usually used to analyze categorical data collected in psychological experiments. Such models assume functional relations between probabilities of the observed behavior categories and the unobservable choice probabilities involved in a cognitive task. This paper extends GPT models for categorical data to the analysis of continuous data in a class of response time (RT) experiments in cognitive psychology. Suppose that a cognitive task involves several discrete processing stages and both accuracy (categorical) and latency (continuous) measures are obtained for each of the response categories. Furthermore, suppose that the task can be modeled by a GPT model that assumes serialization among the stages. The observed latencies of the response categories are functions of the choice probabilities and processing times (PT) at each of the processing stages. The functional relations are determined by the processing structure of the task. A general framework is presented and it is applied to a set of data obtained from a source monitoring experiment. Copyright 2001 Academic Press.
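The serial-stage idea can be summarized in a small sketch: the mean latency of a response category is a probability-weighted average, over the branches reaching that category, of the summed processing times of the stages on each branch. Branch structure and numbers below are illustrative assumptions, not the source-monitoring model from the paper.

```python
# Sketch: predicted mean latency for one response category under a serial-stage
# processing tree; each branch contributes the sum of its stage times, weighted
# by the probability of reaching the category via that branch.
import numpy as np

branches = [
    (0.6, [150, 300]),        # e.g., detect source, then respond (times in ms)
    (0.4, [150, 200, 250]),   # e.g., fail to detect, guess, then respond
]

weights = np.array([p for p, _ in branches])
weights = weights / weights.sum()               # condition on reaching the category
branch_means = np.array([sum(pts) for _, pts in branches])
print(float(weights @ branch_means))            # predicted mean category latency
```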

10.
11.
Although latent attributes that follow a hierarchical structure are anticipated in many areas of educational and psychological assessment, current psychometric models are limited in their capacity to objectively evaluate the presence of such attribute hierarchies. This paper introduces the Hierarchical Diagnostic Classification Model (HDCM), which adapts the Log-linear Cognitive Diagnosis Model to cases where attribute hierarchies are present. The utility of the HDCM is demonstrated through simulation and by an empirical example. Simulation study results show the HDCM is efficiently estimated and can accurately test for the presence of an attribute hierarchy statistically, a feature not possible with more commonly used DCMs. Empirically, the HDCM is used to test for the presence of a suspected attribute hierarchy in a test of English grammar, confirming that the data are more adequately represented by a hierarchical attribute structure than by a crossed, or nonhierarchical, structure.
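One consequence of an attribute hierarchy is that it restricts which latent attribute profiles are permissible, as the short sketch below shows for an assumed linear hierarchy A1 → A2 → A3 (illustrative only, not the hierarchy tested in the paper).

```python
# Sketch: enumerate permissible attribute profiles under a linear hierarchy,
# where an attribute can be mastered only if its prerequisite is mastered.
from itertools import product

prerequisites = {1: 0, 2: 1}     # attribute index -> index of its prerequisite

def permissible(profile):
    return all(profile[pre] >= profile[att] for att, pre in prerequisites.items())

all_profiles = list(product((0, 1), repeat=3))
allowed = [p for p in all_profiles if permissible(p)]
print(allowed)   # 4 profiles instead of the 8 of a fully crossed structure
```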

12.
13.
The efficacy of a cognitive-based remediation program was investigated with 14 English-as-a-second-language (ESL) poor readers in Grade 4 who had significant difficulty in comprehension and 14 normal ESL readers in Grade 4 who received no remediation. Both groups were selected from 2 English-medium schools in India. We examined pretest-to-posttest changes in word reading, comprehension, and planning–attention–simultaneous–successive cognitive processes. Analyses of variance (ANOVAs) showed marked improvement in comprehension and some improvement in simultaneous processing for the treated group. The results indicate that the cognitive-based remediation program has potential for substantially improving comprehension and its underlying cognitive process among ESL children.

14.
尹华站 《心理科学》2013,36(3):743-747
To investigate the characteristics of temporal processing at different levels within the range of a few seconds, researchers have carried out a series of studies from two perspectives: the processing of temporal information and the temporal properties of information processing. From the former perspective, Münsterberg (1889), Michon (1985), Lewis and Miall (2003), and Vierordt (1868) respectively proposed that 1/3 s, 1/2 s, 1 s, and 3 s may be boundary points separating different mechanisms of duration processing within a few seconds, with processing below and above each boundary relying on different mechanisms. From the latter perspective, Pöppel (1997, 2009) proposed two kinds of temporal windows that constrain information processing: one is a high-frequency system operating with oscillation periods of 20-60 ms, which serves as a primary integration unit; the other is a low-frequency system that mainly handles event sequences within 2-3 s, which serves as a higher-order integration unit. The former window integrates basic mental events for information processing, whereas the latter integrates the mental events occurring within 2-3 s into basic perceptual units. Based on an analysis of previous research, we argue that the validity of the 1/3 s, 1/2 s, and 1 s boundary points still requires further verification, and we further hypothesize that intervals shorter than 40 ms cannot be perceived as durations; that between 40 ms and 3 s, automatic processing weakens and controlled processing strengthens as duration increases; and that above 3 s, processing is mainly controlled and involves memory processes.

15.
16.
The conventional setup for multi-group structural equation modeling requires a stringent condition of cross-group equality of intercepts before mean comparison with latent variables can be conducted. This article proposes a new setup that allows mean comparison without the need to estimate any mean structural model. By projecting the observed sample means onto the space of the common scores and the space orthogonal to that of the common scores, the new setup allows identifying and estimating the means of the common and specific factors, although, without replicate measures, variances of specific factors cannot be distinguished from those of measurement errors. Under the new setup, testing cross-group mean differences of the common scores is done independently from that of the specific factors. Such independent testing eliminates the requirement for cross-group equality of intercepts by the conventional setup in order to test cross-group equality of means of latent variables using chi-square-difference statistics. The most appealing piece of the new setup is a validity index for mean differences, defined as the percentage of the sum of the squared observed mean differences that is due to that of the mean differences of the common scores. By analyzing real data with two groups, the new setup is shown to offer more information than what is obtained under the conventional setup.
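The projection idea can be sketched in a few lines: the observed mean-difference vector is split into the part lying in the column space of the factor loadings and an orthogonal remainder, and the share of squared differences explained gives a validity-type index. This is a simplified, unweighted projection with made-up numbers; the paper's estimator and index may be defined differently.

```python
# Sketch: split a cross-group mean-difference vector into a common-factor part
# and an orthogonal remainder using the loading matrix (hypothetical values).
import numpy as np

Lambda = np.array([[0.8], [0.7], [0.6], [0.5]])   # 4 indicators, 1 common factor
d = np.array([0.30, 0.25, 0.28, 0.05])            # observed cross-group mean differences

alpha_hat, *_ = np.linalg.lstsq(Lambda, d, rcond=None)   # common-factor mean difference
d_common = Lambda @ alpha_hat                             # part due to the common scores
validity = np.sum(d_common**2) / np.sum(d**2)             # share of squared differences explained

print(alpha_hat, round(float(validity), 3))
```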

17.
18.
This survey investigates the relationship between exposure to television portrayals of Latinos and real world perceptions of Latinos in the U.S. To aid in this assessment, contributions from the research on mental models were incorporated into a cultivation framework. From this mental models-based cultivation perspective, it was expected that amount of television exposure and existing cognitions regarding representations of Latinos in the media would interact in predicting real world perceptions of Latinos. Additionally, the amount of real world interracial contact with Latinos was predicted to moderate these effects. Findings provide support for the proposed relationships, indicating that as television consumption rates increase, extant cognitions regarding media depictions of Latinos and real world contact guide subsequent evaluations of Latinos.

19.
Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue, fluctuations in attentional state, etc.) and serial dependencies in the data. The data also exhibit extreme observations that can be attributed to lapses, intrusions from outside the experiment, and errors occurring during the experiment. Any adequate analysis should account for these features and quantify them accurately. Recognizing that Bayesian hierarchical models are an excellent modeling tool, we focus on the elaboration of a realistic likelihood for the data and on a careful assessment of the quality of fit that it provides. We judge quality of fit in terms of the predictive performance of the model. We demonstrate how simple Bayesian hierarchical models can be built for several RT sequences, differentiating between subject-specific and condition-specific effects.
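The flavor of such a likelihood can be sketched as follows: log response times with a subject-specific intercept, a condition effect, a trend over trials, and a small contaminant component for extreme observations. The structure and all parameter values are assumptions for illustration, not the model fitted in the paper.

```python
# Sketch of an RT likelihood with subject and condition effects, a linear trial
# trend, and a broad uniform contaminant for outliers (all values hypothetical).
import numpy as np
from scipy.stats import norm, uniform

def loglik(log_rt, subject, condition, trial, params):
    mu = (params["subject_intercept"][subject]
          + params["condition_effect"][condition]
          + params["trend"] * trial)
    regular = norm.pdf(log_rt, loc=mu, scale=params["sigma"])
    outlier = uniform.pdf(log_rt, loc=5.0, scale=3.0)   # broad contaminant on log scale
    lam = params["outlier_prob"]
    return np.log((1 - lam) * regular + lam * outlier)

params = {"subject_intercept": [6.2, 6.5], "condition_effect": [0.0, 0.1],
          "trend": -0.001, "sigma": 0.3, "outlier_prob": 0.02}
print(loglik(np.log(700.0), subject=0, condition=1, trial=40, params=params))
```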

20.
In this paper, an approach to decision making in managing human resources that integrates multi‐attribute decision making techniques with expert systems is described. The approach is based on the explicit articulation of qualitative decision knowledge which is represented by a tree of attributes and decision rules. The decision making process is supported by DEXi, a specialized expert system shell for interactive construction of the knowledge base, evaluation of options, and explanation of the results. Practical use of the shell is illustrated by an application in the field of personnel selection for a top manager position.
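A minimal sketch of the underlying idea, a tree of qualitative attributes aggregated by rule tables, is shown below; the attributes and rules are invented for illustration and are not the knowledge base described in the paper.

```python
# Sketch of DEXi-style qualitative aggregation: rule tables map child attribute
# values to a parent value, and the tree is evaluated bottom-up for a candidate.
experience_rules = {("low", "low"): "poor", ("low", "high"): "acceptable",
                    ("high", "low"): "acceptable", ("high", "high"): "good"}
overall_rules = {("poor", "low"): "reject", ("poor", "high"): "reject",
                 ("acceptable", "low"): "reject", ("acceptable", "high"): "consider",
                 ("good", "low"): "consider", ("good", "high"): "accept"}

def evaluate(candidate):
    experience = experience_rules[(candidate["education"], candidate["work_history"])]
    return overall_rules[(experience, candidate["leadership"])]

print(evaluate({"education": "high", "work_history": "high", "leadership": "high"}))
```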
