Similar Articles
A total of 20 similar articles were retrieved (search time: 31 ms).
1.
2.

When estimating multiple regression models with incomplete predictor variables, it is necessary to specify a joint distribution for the predictor variables. A convenient assumption is that this distribution is a multivariate normal distribution, which is also the default in many statistical software packages. This distribution will in general be misspecified if predictors with missing data have nonlinear effects (e.g., x²) or are included in interaction terms (e.g., x·z). In the present article, we introduce a factored regression modeling approach for estimating regression models with missing data that is based on maximum likelihood estimation. In this approach, the model likelihood is factorized into a part that is due to the model of interest and a part that is due to the model for the incomplete predictors. In three simulation studies, we showed that the factored regression modeling approach produced valid estimates of interaction and nonlinear effects in regression models with missing values on categorical or continuous predictor variables under a broad range of conditions. We developed the R package mdmb, which facilitates a user-friendly application of the factored regression modeling approach, and present a real-data example that illustrates the flexibility of the software.
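As a minimal illustration of the factorization described above, assume a single incomplete predictor x and a completely observed predictor z; the parameter labels θ_y and θ_x are notation introduced here, not taken from the article:

```latex
\underbrace{f(y, x \mid z; \theta)}_{\text{joint likelihood}}
= \underbrace{f(y \mid x, z; \theta_y)}_{\text{model of interest}}
\times \underbrace{f(x \mid z; \theta_x)}_{\text{model for the incomplete predictor}}
```

Because the distribution of x given z is specified on the scale of x itself, nonlinear terms such as x² or x·z in the model of interest no longer force a misspecified multivariate normal assumption for the predictors.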

3.
Understanding how memory processes contribute to the conscious experience of memory is central to contemporary cognitive psychology. Recently, many investigators (e.g., Gardiner, 1988) have examined the remember-know paradigm to understand the conscious correlates of recognition memory. A variety of studies have demonstrated that variables have different effects on remember and know responses, and these findings have been interpreted in the context of dual-process models of recognition memory. This paper presents a single-process model of the remember-know paradigm, emphasizing the dependence of remember and know judgments on a set of common underlying processes (e.g., criterion setting). We use this model to demonstrate how a single-process model can give rise to the functional dissociations presented in the remember-know literature. We close by detailing procedures for testing our model and describing how those tests may facilitate the development of dual-process models.
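To make the idea of common underlying processes concrete, the following is a minimal sketch of a generic one-dimensional signal-detection account with two response criteria; it is illustrative only, and the parameter values and specific formulation are assumptions rather than the authors' exact model:

```python
from scipy.stats import norm

# One memory-strength axis with two criteria: "remember" = strength above the
# stricter criterion, "know" = strength between the two criteria.
# All parameter values are hypothetical.
d_prime = 1.5   # mean strength of old items (new items ~ N(0, 1))
c_old   = 0.5   # old/new criterion
c_rem   = 1.2   # stricter "remember" criterion (c_rem > c_old)

def remember_know_rates(mu):
    p_remember = 1 - norm.cdf(c_rem, loc=mu)
    p_know = norm.cdf(c_rem, loc=mu) - norm.cdf(c_old, loc=mu)
    return p_remember, p_know

print("old items:", remember_know_rates(d_prime))
print("new items:", remember_know_rates(0.0))
```

Shifting c_rem alone changes the remember/know split with no change in d′, which is one way criterion setting can mimic dissociations between remember and know responses.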

4.
5.
6.
Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.
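A minimal sketch of a multilevel latent covariate model of the kind referred to above, with notation assumed here (individual i in group j):

```latex
x_{ij} = X_j + \varepsilon_{ij}, \qquad X_j \sim N(\mu_x, \tau_x^2), \qquad \varepsilon_{ij} \sim N(0, \sigma_x^2), \\
y_{ij} = \beta_0 + \beta_w\,(x_{ij} - X_j) + \beta_b\,X_j + u_j + e_{ij}.
```

Here the group-level effect β_b refers to the latent group mean X_j rather than the observed group mean; with few groups or a small ICC_x = τ_x² / (τ_x² + σ_x²), X_j is estimated unreliably, which is where the choice of prior for the group-level variance τ_x² becomes consequential.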

7.
Diffusion processes (e.g., Wiener process, Ornstein-Uhlenbeck process) are powerful approaches to model human information processes in a variety of psychological tasks. Lack of mathematical tractability, however, has prevented broad applications of these models to empirical data. This tutorial explains step by step, using a matrix approach, how to construct these models, how to implement them on a computer, and how to calculate the predictions made by these models. In particular, we present models for binary choices between unidimensional and multiattribute choice alternatives, for simple reaction time tasks, and for three-alternative choice problems.
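As a brief illustration of the matrix approach, the sketch below approximates a Wiener diffusion process by a discrete-state random walk and computes choice probabilities from the absorbing Markov chain; all parameter values are hypothetical:

```python
import numpy as np

drift, sigma, dt = 0.3, 1.0, 0.01             # drift rate, diffusion coefficient, time step
dx = sigma * np.sqrt(dt)                       # evidence step size
theta = 1.0                                    # absorbing boundaries at +/- theta
n = int(round(2 * theta / dx))                 # number of steps spanning the interval
states = np.linspace(-theta + dx, theta - dx, n - 1)   # transient evidence states
m = len(states)

p_up = 0.5 * (1 + drift * np.sqrt(dt) / sigma)    # probability of an upward step
Q = np.zeros((m, m))                              # transitions among transient states
for i in range(m - 1):
    Q[i, i + 1] = p_up
    Q[i + 1, i] = 1 - p_up

R = np.zeros((m, 2))                              # one-step absorption probabilities
R[0, 0] = 1 - p_up                                # absorbed at the lower boundary
R[-1, 1] = p_up                                   # absorbed at the upper boundary

# Absorption (choice) probabilities from each starting state: B = (I - Q)^(-1) R
B = np.linalg.solve(np.eye(m) - Q, R)
start = m // 2                                    # unbiased starting point near zero
print("P(lower), P(upper):", B[start])
```

The same fundamental-matrix machinery yields mean response times and response time distributions, and it extends to Ornstein-Uhlenbeck dynamics or multi-alternative choice by changing the transition matrix.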

8.
9.
The purpose of the popular Iowa gambling task is to study decision making deficits in clinical populations by mimicking real-life decision making in an experimental context. Busemeyer and Stout [Busemeyer, J. R., & Stout, J. C. (2002). A contribution of cognitive decision models to clinical assessment: Decomposing performance on the Bechara gambling task. Psychological Assessment, 14, 253-262] proposed an “Expectancy Valence” reinforcement learning model that estimates three latent components which are assumed to jointly determine choice behavior in the Iowa gambling task: weighing of wins versus losses, memory for past payoffs, and response consistency. In this article we explore the statistical properties of the Expectancy Valence model. We first demonstrate the difficulty of applying the model at the level of a single participant; we then propose and implement a Bayesian hierarchical estimation procedure to coherently combine information from different participants; and we finally apply the Bayesian estimation procedure to data from an experiment designed to provide a test of specific influence.
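The following is an illustrative sketch of Expectancy Valence-style updating and choice; the payoff scheme, trial structure, and parameter values are hypothetical placeholders rather than the estimates reported in the article:

```python
import numpy as np

w, a, c = 0.4, 0.2, 0.5        # loss weight, memory/learning rate, response consistency
Ev = np.zeros(4)               # expectancies for the four decks
rng = np.random.default_rng(0)

wins   = np.array([100.0, 100.0, 50.0, 50.0])   # placeholder payoffs
losses = np.array([250.0, 250.0, 50.0, 50.0])
loss_p = np.array([0.5, 0.1, 0.5, 0.1])         # placeholder loss probabilities

for t in range(1, 101):
    theta = (t / 10.0) ** c                     # trial-dependent choice sensitivity
    z = theta * Ev
    p = np.exp(z - z.max())                     # softmax choice rule (stabilized)
    p /= p.sum()
    deck = rng.choice(4, p=p)
    loss = losses[deck] if rng.random() < loss_p[deck] else 0.0
    v = (1 - w) * wins[deck] - w * loss         # valence: weighing of wins versus losses
    Ev[deck] += a * (v - Ev[deck])              # delta-rule update (memory for past payoffs)

print(np.round(Ev, 2))
```

The three estimated components map onto w (weighing of wins versus losses), a (memory for past payoffs), and c (response consistency); hierarchical Bayesian estimation places group-level distributions over these participant-level parameters.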

10.
There is a growing use of noncognitive assessments around the world, and recent research has posited an ideal point response process underlying such measures. A critical issue is whether the typical use of dominance approaches (e.g., average scores, factor analysis, and Samejima's graded response model) in scoring such measures is adequate. This study examined the performance of an ideal point scoring approach (e.g., the generalized graded unfolding model) as compared to the typical dominance scoring approaches in detecting curvilinear relationships between the scored trait and an external variable. Simulation results showed that when data followed the ideal point model, the ideal point approach generally exhibited more power and provided more accurate estimates of curvilinear effects than the dominance approaches. No substantial difference was found between ideal point and dominance scoring approaches in terms of Type I error rate and bias across different sample sizes and scale lengths, although skewness in the distributions of the trait and the external variable can potentially reduce statistical power. For dominance data, the ideal point scoring approach exhibited convergence problems in most conditions and failed to perform as well as the dominance scoring approaches. Practical implications for scoring responses to Likert-type surveys to examine curvilinear effects are discussed.
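As a minimal sketch of the curvilinear-effect test implied above, one can regress the external variable on the scored trait and its square, whichever scoring approach produced the scores; the data below are simulated placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
theta_hat = rng.normal(size=500)                       # scored trait (e.g., GGUM or sum scores)
y = 0.3 * theta_hat - 0.25 * theta_hat**2 + rng.normal(scale=1.0, size=500)

# Test the quadratic (curvilinear) term
X = sm.add_constant(np.column_stack([theta_hat, theta_hat**2]))
fit = sm.OLS(y, X).fit()
print(fit.params)          # intercept, linear, quadratic coefficients
print(fit.pvalues[2])      # significance of the curvilinear effect
```

Power and bias for the quadratic coefficient then depend on how faithfully the scoring approach recovers the underlying trait, which is the comparison the simulation addresses.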

11.
Under anxiety, people sometimes perform poorly. This concerns cognitive performance (e.g., taking an important exam) as well as perceptual-motor performance (e.g., picking up a cup from a table). There is still much debate about how anxiety affects perceptual-motor performance. In the current paper we review the experimental literature on anxiety and perceptual-motor performance, thereby focusing on how anxiety affects the perception, selection, and realization of action possibilities. Based on this review we discuss the merits of two opposing theoretical explanations and build on existing frameworks of anxiety and cognitive performance to develop an integrated model that explains the various ways in which anxiety may specifically affect perceptual-motor performance. This model distinguishes between positive and negative effects of anxiety and, moving beyond previous approaches, recognizes three operational levels (i.e., attentional, interpretational, and behavioral) at which anxiety may affect different aspects of goal-directed action. Finally, predictions are formulated and directions for future research suggested.

12.
詹沛达 (Zhan, Peida). 心理学报 (Acta Psychologica Sinica), 2022, 54(11), 1416-1423.
Multimodal data make it possible to diagnose cognitive structures precisely and to provide comprehensive feedback on other cognitive characteristics (e.g., cognitive style). To enable the joint analysis of item response accuracy, response time (RT), and visual fixation counts (FC), this article proposes three multimodal cognitive diagnosis models based on a joint cross-loading modeling approach. The results of an empirical study and simulation studies show that (1) joint analysis is more appropriate for multimodal data than separate analyses; (2) the new models can directly use the information in RT and FC to improve the estimation accuracy of latent ability or latent attributes; (3) the parameter recovery of the new models is good; and (4) the negative consequences of ignoring cross-loadings are more severe than those of redundantly including them.
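One generic way such cross-loadings are often written, with notation assumed here rather than taken from the article, is to let the latent variable of the accuracy model also load on the response time and fixation count measurement models:

```latex
\log T_{ij} \sim N\!\left(\beta_j - \tau_i + \lambda_j^{(T)}\,\theta_i,\ \sigma_j^2\right), \qquad
F_{ij} \sim \mathrm{Poisson}\!\left(\exp\{\gamma_j + \zeta_i + \lambda_j^{(F)}\,\theta_i\}\right),
```

where θ_i (or the attribute profile) comes from the accuracy model, τ_i and ζ_i are person speed and fixation propensities, and the cross-loadings λ let RT and FC carry information about the cognitive latent variable.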

13.
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMMs in psychology and education are based on a multilevel or hierarchical approach in which the variance-covariance matrix of the random effects is assumed to be positive definite with nonzero values for the variances. When the number of fixed effects and random effects is unknown, the predominant approach to model building is a step-up method in which one starts with a limited model (e.g., few fixed and random intercepts) and then additional fixed effects and random effects are added based on statistical tests. A model building approach that has received less attention in psychology and education is a top-down method. In the top-down method, the initial model has a single random intercept but is loaded with fixed effects (also known as an “overelaborate” model). Based on the overelaborate fixed effects model, the need for additional random effects is determined. There has been little, if any, examination of the ability of these methods to identify a true population model (i.e., identifying the model that generated the data). The purpose of this article is to examine the performance of the step-up and top-down model building approaches for exploratory longitudinal data analysis. Student achievement data sets from the Chicago Longitudinal Study serve as the populations in the simulations.
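A minimal sketch of one step-up comparison for longitudinal data; the data frame, variable names, and effect sizes below are assumptions for illustration, not the Chicago Longitudinal Study data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic long-format data (placeholder): one row per student per occasion
rng = np.random.default_rng(2)
n_students, n_waves = 200, 4
df = pd.DataFrame({
    "student_id": np.repeat(np.arange(n_students), n_waves),
    "time": np.tile(np.arange(n_waves), n_students),
})
u0 = rng.normal(0, 1.0, n_students)[df["student_id"]]       # true random intercepts
u1 = rng.normal(0, 0.3, n_students)[df["student_id"]]       # true random slopes
df["achievement"] = 50 + 2 * df["time"] + u0 + u1 * df["time"] + rng.normal(0, 1, len(df))

# Step-up comparison: random intercept only vs. random intercept + random slope
m0 = smf.mixedlm("achievement ~ time", df, groups=df["student_id"]).fit(reml=False)
m1 = smf.mixedlm("achievement ~ time", df, groups=df["student_id"],
                 re_formula="~time").fit(reml=False)

# Likelihood-ratio test for the added random effect (boundary issues ignored here)
lr = 2 * (m1.llf - m0.llf)
print("LR =", round(lr, 2), "approx. p =", stats.chi2.sf(lr, df=2))
```

A top-down pass would instead start from a fixed-effects-loaded ("overelaborate") model with a single random intercept and then test which random effects are needed.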

14.
Several competing models have been put forth regarding the role of identity in the reasoned action framework. The standard model proposes that identity is a background variable. Under a typical augmented model, identity is treated as an additional direct predictor of intention and behavior. Alternatively, it has been proposed that identity measures are inadvertent indicators of an underlying intention factor (e.g., a manifest-intention model). In order to test these competing hypotheses, we used data from 73 independent studies (total N = 23,917) to conduct a series of meta-analytic structural equation models. We also tested for moderation effects based on whether there was a match between identity constructs and the target behaviors examined (e.g., if the study examined a “smoker identity” and “smoking behavior,” there would be a match; if the study examined a “health conscious identity” and “smoking behavior,” there would not be a match). Average effects among primary reasoned action variables were all substantial, rs = .37–.69. Results provided evidence for the manifest-intention model over the other explanations, as well as a moderation effect of identity-behavior matching.

15.
An important aspect of psychotherapy research is the examination of the theoretical models underlying intervention approaches. Laboratory-based component research is one useful methodology for this endeavor as it provides an experimental means of testing questions related to intervention components and the change process they engage with a high level of control and precision. A meta-analysis was conducted of 66 laboratory-based component studies evaluating treatment elements and processes that are suggested by the psychological flexibility model, which underlies Acceptance and Commitment Therapy (acceptance, defusion, self as context, committed action, values, and present moment) but also informs a variety of contextual forms of cognitive behavior therapy. Significant positive effect sizes were observed for acceptance, defusion, present moment, values, mixed mindfulness components, and values plus mindfulness component conditions compared to inactive comparison conditions. Additional analyses provided further support for the psychological flexibility model, finding larger effect sizes for theoretically specified outcomes, expected differences between theoretically distinct interventions, and larger effect sizes for component conditions that included experiential methods (e.g., metaphors, exercises) than those with a rationale alone. Effect sizes did not differ between at-risk/distressed and convenience samples. Limitations of the meta-analysis and future directions for laboratory-based component research are discussed.

16.
17.
Both the speed and accuracy of responding are important measures of performance. A well-known interpretive difficulty is that participants may differ in their strategy, trading speed for accuracy, with no change in underlying competence. Another difficulty arises when participants respond slowly and inaccurately (rather than quickly but inaccurately), e.g., due to a lapse of attention. We introduce an approach that combines response time and accuracy information and addresses both situations. The modeling framework assumes two latent competing processes. The first, the error-free process, always produces correct responses. The second, the guessing process, results in all observed errors and some of the correct responses (but does so via non-specific processes, e.g., guessing in compliance with instructions to respond on each trial). Inferential summaries of the speed of the error-free process provide a principled assessment of cognitive performance, reducing the influences of both fast and slow guesses. Likelihood analysis is discussed for the basic model and extensions. The approach is applied to a data set on response times in a working memory test. The authors wish to thank Roger Ratcliff, Christopher Chabris, and three anonymous referees for their helpful comments, and Aureliu Lavric for providing the data analyzed in this paper.
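One generic way to write such a two-process account is as a race between an error-free finishing-time density f(t) and a guessing finishing-time density g(t), with a guess correct with probability a; the notation is introduced here for illustration and is not taken from the article:

```latex
p(\text{correct},\, t) = f(t)\,[1 - G(t)] + a\, g(t)\,[1 - F(t)], \qquad
p(\text{error},\, t) = (1 - a)\, g(t)\,[1 - F(t)],
```

where F and G are the corresponding distribution functions. As in the description above, every error and a share of the correct responses come from the guessing process, so summaries of f(t) index error-free speed while discounting both fast and slow guesses.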

18.
Decisions can sometimes have a constructive role, so that the act of, for example, choosing one option over another creates a preference for that option. In this work we explore the constructive role of merely articulating an impression of a presented visual stimulus, as opposed to making a choice (specifically, the judgments we employ are affective evaluations). Using quantum probability theory, we outline a cognitive model formalizing such a constructive process. We predict a simple interaction in how a second image is evaluated following the presentation of a first image, depending on whether or not the first image was rated. The interaction predicted by the quantum model was confirmed across three experiments and a variety of control manipulations. The advantages of using quantum probability theory to model the present results, compared with existing models of sequence order effects in judgment (e.g., Hogarth & Einhorn, 1992) or other theories of constructive processes when a choice is made, are discussed.
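The constructive mechanism can be sketched directly with quantum probability: articulating a rating projects the cognitive state onto the subspace consistent with the response, and the collapsed state is what evaluates the next stimulus. The two-dimensional basis, rotation, and amplitudes below are hypothetical choices for illustration, not the article's fitted model:

```python
import numpy as np

psi = np.array([0.8, 0.6])                    # initial state over {positive, negative}
P_pos = np.diag([1.0, 0.0])                   # projector onto "positive evaluation"
angle = 0.4                                   # change of perspective induced by image 2
U = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])

# No rating of the first image: evaluate the second image from the unchanged state
p_no_rating = np.linalg.norm(P_pos @ U @ psi) ** 2

# First image rated: the state collapses to the subspace matching the rating
p_first_pos = np.linalg.norm(P_pos @ psi) ** 2
psi_pos = (P_pos @ psi) / np.linalg.norm(P_pos @ psi)
psi_neg = (np.eye(2) - P_pos) @ psi
psi_neg = psi_neg / np.linalg.norm(psi_neg)
p_with_rating = (p_first_pos * np.linalg.norm(P_pos @ U @ psi_pos) ** 2
                 + (1 - p_first_pos) * np.linalg.norm(P_pos @ U @ psi_neg) ** 2)

print(p_no_rating, p_with_rating)             # generally unequal: the rating is constructive
```

The inequality of the two totals is a violation of the law of total probability, which is the kind of interaction a quantum model predicts between rating versus not rating the first image.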

19.
Social cognitive career theory (Lent, Brown, & Hackett, 1994) was originally designed to help explain interest development, choice, and performance in career and educational domains. These three aspects of career/academic development were presented in distinct but overlapping segmental models. This article presents a fourth social cognitive model aimed at understanding satisfaction experienced in vocational and educational pursuits. The model posits paths whereby core social cognitive variables (e.g., self-efficacy, goals) function jointly with personality/affective trait and contextual variables that have been linked to job satisfaction. We consider the model’s implications for forging an understanding of satisfaction that bridges the often disparate perspectives of organizational and vocational psychology.

20.
The remarkable successes of the physical sciences have been built on highly general quantitative laws, which serve as the basis for understanding an enormous variety of specific physical systems. How far is it possible to construct universal principles in the cognitive sciences, in terms of which specific aspects of perception, memory, or decision making might be modelled? Following Shepard (e.g., 1987), it is argued that some universal principles may be attainable in cognitive science. Here, two examples are proposed: the simplicity principle (which states that the cognitive system prefers patterns that provide simpler explanations of available data); and the scale-invariance principle, which states that many cognitive phenomena are independent of the scale of relevant underlying physical variables, such as time, space, luminance, or sound pressure. This article illustrates how principles may be combined to explain specific cognitive processes by using these principles to derive SIMPLE, a formal model of memory for serial order (Brown, Neath, & Chater, 2007), and briefly mentions some extensions to models of identification and categorization. This article also considers the scope and limitations of universal laws in cognitive science.
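As a pointer to how such principles become a concrete model, the sketch below computes the core distinctiveness quantity of a SIMPLE-style account: items live on a log-transformed temporal dimension, so rescaling all times leaves the predictions unchanged (scale invariance). The parameter values and the simple ratio rule are illustrative assumptions:

```python
import numpy as np

c = 10.0                                          # distinctiveness (similarity decay) parameter
n_items = 10
present_times = np.arange(n_items, dtype=float)   # one item per second
retrieval_time = present_times[-1] + 5.0          # retrieval 5 s after the last item

# Log-transformed temporal distance of each item at retrieval
log_dist = np.log(retrieval_time - present_times)

# Similarity decays exponentially with separation on the log-time scale;
# multiplying every time by a constant shifts all log distances equally,
# so the similarities (and hence the predictions) do not change.
sim = np.exp(-c * np.abs(log_dist[:, None] - log_dist[None, :]))

# Retrieval probability of each item as its relative distinctiveness
p_recall = 1.0 / sim.sum(axis=1)
print(np.round(p_recall, 3))                      # more recent items are more distinct
```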
