Similar Articles
20 similar articles found.
1.
Attitudes represent object evaluations, comprising complex underlying cognitive and affective knowledge structures. When people are asked to judge an object, they can use their primary response (i.e., the immediate object‐evaluation linkage) or underlying affective and cognitive knowledge structures. In many situations, a primary response satisfices, but if not, more elaboration is required. Both processes are fundamentally different but may lead to the same attitude. For monitoring underlying processes during attitude expression, we developed an innovative eye‐tracking procedure using eye‐gaze on response scale options. This procedure was applied in three studies to identify the extent to which elaboration differs for attitude objects with weak or strong, univalent or mixed object evaluations (i.e., univalent, neutral and ambivalent). In Study 1, the overall judgment preceded processing of more specific affective and cognitive linkage evaluations. In Studies 2 and 3, the order was reversed, and affective and cognitive bases were assessed prior to overall attitude outcomes. For attitude objects with strong univalent or strong mixed object evaluations, we found similar outcomes on underlying processes. For weak object evaluations, cognition was found to be more predictive and easily accessible if an overall judgment was required first; affect for these objects was more predictive if people had to elaborate on affect and cognition first. We concluded that both affective and cognitive attitudes may require substantial elaboration, albeit in different situations. Copyright © 2015 John Wiley & Sons, Ltd.

2.
How might being outcome dependent on another person influence the processes that one uses to form impressions of that person? We designed three experiments to investigate this question with respect to short-term, task-oriented outcome dependency. In all three experiments, subjects expected to interact with a young man formerly hospitalized as a schizophrenic, and they received information about the person's attributes in either written profiles or videotapes. In Experiment 1, short-term, task-oriented outcome dependency led subjects to use relatively individuating processes (i.e., to base their impressions of the patient on his particular attributes), even under conditions that typically lead subjects to use relatively category-based processes (i.e., to base their impressions on the patient's schizophrenic label). Moreover, in the conditions that elicited individuating processes, subjects spent more time attending to the patient's particular attribute information. Experiment 2 demonstrated that the attention effects in Experiment 1 were not merely a function of impression positivity and that outcome dependency did not influence the impression formation process when attribute information in addition to category-level information was unavailable. Finally, Experiment 3 manipulated not outcome dependency but the attentional goal of forming an accurate impression. We found that accuracy-driven attention to attribute information also led to individuating processes. The results of the three experiments indicate that there are important influences of outcome dependency on impression formation. These results are consistent with a model in which the tendency for short-term, task-oriented outcome dependency to facilitate individuating impression formation processes is mediated by an increase in accuracy-driven attention to attribute information.

3.
In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
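The interval estimation discussed above builds on the fact that, in a balanced one-way random-effects design, the ratio of between- to within-group mean squares follows a scaled F distribution. The sketch below shows the standard ICC(1) point estimate and its exact F-based confidence interval; it is an illustration of that classical result, not the article's own sample size algorithm, and the simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

def icc_oneway(data, alpha=0.05):
    """ICC(1) and exact CI for a balanced one-way random-effects design.

    data: (k, n) array -- k groups, n observations per group.
    """
    k, n = data.shape
    grand = data.mean()
    # Between- and within-group mean squares
    msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    icc = (msb - msw) / (msb + (n - 1) * msw)

    # Exact CI: MSB/MSW is F-distributed, so invert the F quantiles
    f_obs = msb / msw
    f_low = f_obs / stats.f.ppf(1 - alpha / 2, k - 1, k * (n - 1))
    f_upp = f_obs / stats.f.ppf(alpha / 2, k - 1, k * (n - 1))
    lo = (f_low - 1) / (f_low + n - 1)
    hi = (f_upp - 1) / (f_upp + n - 1)
    return icc, (lo, hi)

# Simulated data: 30 groups of 10, true ICC = 1 / (1 + 1) = 0.5
rng = np.random.default_rng(0)
data = rng.normal(rng.normal(0.0, 1.0, size=(30, 1)), 1.0, size=(30, 10))
icc, (lo, hi) = icc_oneway(data)
```

Because the interval is obtained by inverting exact F quantiles rather than a normal approximation, it keeps its nominal coverage even with few groups, which is the regime the sample size procedures above target.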

4.
This study examines the precision of conditional maximum likelihood estimates and the quality of model selection methods based on information criteria (AIC and BIC) in mixed Rasch models. The design of the Monte Carlo simulation study included four test lengths (10, 15, 25, 40), three sample sizes (500, 1000, 2500), two simulated mixture conditions (one and two groups), and population homogeneity (equally sized subgroups) or heterogeneity (one subgroup three times larger than the other). The results show that both increasing sample size and increasing number of items lead to higher accuracy; medium-range parameters were estimated more precisely than extreme ones; and the accuracy was higher in homogeneous populations. The minimum-BIC method leads to almost perfect results and is more reliable than AIC-based model selection. The results are compared to findings by Li, Cohen, Kim, and Cho (2009) and practical guidelines are provided.
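The minimum-BIC selection evaluated above amounts to computing BIC = −2·logL + p·ln N for each candidate solution and taking the smallest. A minimal sketch with hypothetical log-likelihoods and parameter counts (the numbers are invented for illustration, not taken from the study):

```python
import math

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: smaller is better."""
    return -2.0 * loglik + n_params * math.log(n_obs)

# Hypothetical fits of 1-, 2-, and 3-class mixed Rasch models to N = 1000:
# {number of classes: (maximized log-likelihood, number of parameters)}
fits = {1: (-10450.2, 11), 2: (-10301.7, 23), 3: (-10295.9, 35)}

bics = {g: bic(ll, p, 1000) for g, (ll, p) in fits.items()}
best = min(bics, key=bics.get)
```

Note how the 3-class model fits slightly better in raw log-likelihood but loses after the ln N penalty on its extra parameters; this harsher penalty, relative to AIC's constant 2 per parameter, is what drives BIC's tendency to pick the more parsimonious mixture.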

5.
《Military psychology》2013,25(3):119-136
In recent years, the military has devoted considerable effort to the development of empirically keyed biodata instruments for use in selection. Although studies using empirical keying procedures are common in the personnel selection literature, relatively few studies have compared these procedures. Using data collected from Naval Academy midshipmen, we compared nine empirical keying procedures: vertical percent (five strategies), horizontal percent, mean criterion, phi coefficient, and rare response. For each keying procedure, five different sample sizes were used to determine the minimum sample size needed to obtain stable results. For the three largest samples, all of the criterion-based methods yielded scales with significant cross-validities. Among these methods, two vertical percent strategies generally produced the most valid scales for the four largest samples. Without exception, the cross-validities for the only noncriterion-based method (rare response) failed to reach significance. The effects of unit versus differential weighting and scale length versus item-alternative validity are discussed.

6.
For mixed models generally, it is well known that modeling data with few clusters will result in biased estimates, particularly of the variance components and fixed effect standard errors. In linear mixed models, small sample bias is typically addressed through restricted maximum likelihood estimation (REML) and a Kenward-Roger correction. Yet with binary outcomes, there is no direct analog of either procedure. With a larger number of clusters, estimation methods for binary outcomes that approximate the likelihood to circumvent the lack of a closed form solution such as adaptive Gaussian quadrature and the Laplace approximation have been shown to yield less-biased estimates than linearization estimation methods that instead linearly approximate the model. However, adaptive Gaussian quadrature and the Laplace approximation are approximating the full likelihood rather than the restricted likelihood; the full likelihood is known to yield biased estimates with few clusters. On the other hand, linearization methods linearly approximate the model, which allows for restricted maximum likelihood and the Kenward-Roger correction to be applied. Thus, the following question arises: Which is preferable, a better approximation of a biased function or a worse approximation of an unbiased function? We address this question with a simulation and an illustrative empirical analysis.
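The small-cluster bias of full-likelihood estimation described above is visible even in the simplest balanced linear case: the ML-type estimator of the between-cluster variance divides the between-groups sum of squares by k rather than k − 1, so with few clusters it is pulled downward, while the REML-type (ANOVA) moment estimator is not. The sketch below contrasts the two closed-form estimators; it illustrates the bias mechanism only, not the Kenward-Roger correction or the binary-outcome approximations compared in the article.

```python
import numpy as np

def variance_components(data):
    """REML-type vs ML-type moment estimators of the between-cluster
    variance in a balanced one-way random-effects layout.

    data: (k, n) array -- k clusters, n observations each.
    """
    k, n = data.shape
    grand = data.mean()
    ssb = n * ((data.mean(axis=1) - grand) ** 2).sum()       # between SS
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    reml_b = (ssb / (k - 1) - msw) / n   # unbiased ANOVA/REML estimator
    ml_b = (ssb / k - msw) / n           # ML-type: divides by k, biased down
    return reml_b, ml_b

# Few clusters (k = 4), true between-cluster variance = 1.0
rng = np.random.default_rng(1)
k, n, sigma_b = 4, 50, 1.0
data = rng.normal(rng.normal(0.0, sigma_b, (k, 1)), 1.0, (k, n))
reml_b, ml_b = variance_components(data)
```

With k = 4 the ML-type estimate is shrunk by roughly the factor (k − 1)/k = 0.75, which is exactly the kind of deflation that REML exists to remove and that has no direct analog for binary outcomes.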

7.
刘红云  李冲  张平平  骆方 《心理学报》2012,44(8):1124-1136
Measurement invariance of an instrument is a prerequisite for multi-group comparisons. Methods for testing invariance fall into two main classes: CFA-based multi-group comparison and IRT-based DIF detection. This article compared the CCFA-based DIFFTEST method with the IRT-LR likelihood-ratio test in the unidimensional case, and DIFFTEST with a MIRT-based chi-square test (MIRT-MG) in the multidimensional case. A simulation study compared the power and Type I error rates of these methods, manipulating total sample size, balance of group sizes, test length, magnitude of threshold differences, and the correlation between dimensions. Results showed: (1) For unidimensional tests, IRT-LR is a stricter test than DIFFTEST; for multidimensional tests, MIRT-MG detects item-threshold differences more readily than DIFFTEST when the test is long and the dimensions are highly correlated, whereas DIFFTEST is slightly more powerful when the test is short and inter-dimension correlations are low. (2) The power of DIFFTEST, IRT-LR, and MIRT-MG all increases with the size of the threshold difference; when the difference is medium or large, all three methods effectively detect threshold noninvariance. (3) The power of all three methods increases with total sample size; holding total sample size constant, power is higher with balanced than with unbalanced group sizes. (4) With the number of noninvariant items held constant, the power of DIFFTEST decreases as the test grows longer, while that of IRT-LR and MIRT-MG increases. (5) The average Type I error rate of DIFFTEST is close to the nominal 0.05, whereas those of IRT-LR and MIRT-MG are far below 0.05.
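The IRT-LR approach above boils down to a likelihood-ratio test: fit once with an item's parameters constrained equal across groups, once with them free, and refer twice the log-likelihood difference to a chi-square distribution. A minimal sketch of that comparison, with hypothetical log-likelihood values (the fitting itself is assumed done elsewhere):

```python
from scipy import stats

def lr_test(loglik_constrained, loglik_free, df):
    """Likelihood-ratio test of cross-group equality constraints.

    loglik_free must come from the less constrained (better-fitting) model;
    df is the number of parameters freed.
    """
    lr = 2.0 * (loglik_free - loglik_constrained)
    p = stats.chi2.sf(lr, df)
    return lr, p

# Hypothetical fits: freeing one item's threshold across two groups
lr, p = lr_test(loglik_constrained=-4821.6, loglik_free=-4815.2, df=1)
```

A small p-value flags the freed threshold as noninvariant (DIF) for that item; repeating the comparison item by item reproduces the basic IRT-LR workflow the simulation evaluates.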

8.
Taxometric procedures and the Factor Mixture Model (FMM) have a complementary set of strengths and weaknesses. Both approaches purport to detect evidence of a latent class structure. Taxometric procedures, popular in psychiatric and psychopathology literature, make no assumptions beyond those needed to compute means and covariances. However, Taxometric procedures assume that observed items are uncorrelated within a class or taxon. This assumption is violated when there are individual differences in the trait underlying the items (i.e., severity differences within class). FMMs can model within-class covariance structures ranging from local independence to multidimensional within-class factor models and permit the specification of more than two classes. FMMs typically rely on normality assumptions for within-class factors and error terms. FMMs are highly parameterized and susceptible to misspecifications of the within-class covariance structure.

The current study compared the Taxometric procedures MAXEIG and the Base-Rate Classification Technique to the FMM in their respective abilities to (1) correctly detect the two-class structure in simulated data, and to (2) correctly assign subjects to classes. Two-class data were simulated under conditions of balanced and imbalanced relative class size, high and low class separation, and 1-factor and 2-factor within-class covariance structures. For the 2-factor data, simple and cross-loaded factor loading structures, and positive and negative factor correlations were considered. For the FMM, both correct and incorrect within-class factor structures were fit to the data.

FMMs generally outperformed Taxometric procedures in terms of both class detection and in assigning subjects to classes. Imbalanced relative class size (e.g., a small minority class and a large majority class) negatively impacted both FMM and Taxometric performance while low class separation was much more problematic for Taxometric procedures than the FMM. Comparisons of alternative FMMs based on information criteria generally resulted in correct model choice but deteriorated when small class separation was combined with imbalanced relative class size.

9.
Can sentence comprehension impairments in aphasia be explained by difficulties arising from dependency completion processes in parsing? Two distinct models of dependency completion difficulty are investigated, the Lewis and Vasishth (2005) activation-based model and the direct-access model (DA; McElree, 2000). These models' predictive performance is compared using data from individuals with aphasia (IWAs) and control participants. The data are from a self-paced listening task involving subject and object relative clauses. The relative predictive performance of the models is evaluated using k-fold cross-validation. For both IWAs and controls, the activation-based model furnishes a somewhat better quantitative fit to the data than the DA. Model comparisons using Bayes factors show that, assuming an activation-based model, intermittent deficiencies may be the best explanation for the cause of impairments in IWAs, although slowed syntax and lexical delayed access may also play a role. This is the first computational evaluation of different models of dependency completion using data from impaired and unimpaired individuals. This evaluation develops a systematic approach that can be used to quantitatively compare the predictions of competing models of language processing.
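The k-fold cross-validation used above scores each candidate model on data it was not fit to, so the comparison rewards genuine predictive ability rather than flexibility. A generic sketch of the procedure with ordinary least-squares stand-ins (simulated data and toy models, not the activation-based or direct-access models themselves):

```python
import numpy as np

def kfold_mse(X, y, k=5, seed=0):
    """Mean held-out squared error of OLS across k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errs))

# Simulated listening times driven by one predictor
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(scale=0.5, size=n)

X_full = np.column_stack([np.ones(n), x])   # model with the true predictor
X_null = np.ones((n, 1))                    # intercept-only competitor
better = kfold_mse(X_full, y) < kfold_mse(X_null, y)
```

For probabilistic models, held-out log-likelihood (the expected log predictive density) replaces squared error, but the train/test rotation is identical.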

10.
Measures of depressive dimensions: are they interchangeable?
Several theorists have posited two foci for depressive experience and/or vulnerability: dependency and rejection, and self-criticism and failure. In turn, three instruments have emerged, each addressing these two components, respectively: the Depressive Experiences Questionnaire (DEQ; Dependent and Self-Critical scales), the Sociotropy-Autonomy Scales (SAS), and the Anaclitic and Introjective Dysfunctional Attitude Scales (DAS). In this study, we addressed the relations within and among these three pairs of scales in a large undergraduate sample. Generally, the DEQ-Dependent, SAS-Sociotropy, and DAS-Anaclitic scales showed substantial convergent and discriminant validity. Although this was true also for the DEQ-Self-Critical and DAS-Introjective scales, neither scale was closely related to the SAS-Autonomy scale, which appeared instead to be a better measure of counterdependency than a measure of self-critical, introjective features.

11.
Two experiments evaluated rate dependency and a neuropharmacological model of timing as explanations of the effects of amphetamine on behavior under discriminative control by time. Four pigeons pecked keys during 60-trial sessions. On each trial, the houselight was lit for a particular duration (5 to 30 s), and then the key was lit for 30 s. In Experiment 1, the key could be lit either green or blue. If the key was lit green and the sample was 30 s, or if the key was lit blue and the sample was 5 s, pecks produced food on a variable-interval 20-s schedule. The rate of key pecking increased as a function of sample duration when the key was green and decreased as a function of sample duration when the key was blue. Acute d-amphetamine (0.1 to 3.0 mg/kg) decreased higher rates of key pecking and increased lower rates of key pecking as predicted by rate dependency, but did not shift the timing functions leftward (toward overestimation) as predicted by the neuropharmacological model. These results were replicated in Experiment 2, in which the key was lit only one color during sessions, indicating that the effects were not likely due to disruption of discriminative control by key color. These results are thus consistent with rate dependency but not with the predictions of the neuropharmacological model.

12.
Classical factor analysis assumes a random sample of vectors of observations. For clustered vectors of observations, such as data for students from colleges, or individuals within households, it may be necessary to consider different within-group and between-group factor structures. Such a two-level model for factor analysis is defined, and formulas for a scoring algorithm for estimation with this model are derived. A simple noniterative method based on a decomposition of the total sums of squares and crossproducts is discussed. This method provides a suitable starting solution for the iterative algorithm, but it is also a very good approximation to the maximum likelihood solution. Extensions for higher levels of nesting are indicated. With judicious application of quasi-Newton methods, the amount of computation involved in the scoring algorithm is moderate even for complex problems; in particular, no inversion of matrices with large dimensions is involved. The methods are illustrated on two examples. Suggestions and corrections of three anonymous referees and of an Associate Editor are acknowledged. Discussions with Bob Jennrich on computational aspects were very helpful. Most of research leading to this paper was carried out while the first author was a visiting associate professor at the University of California, Los Angeles.

13.
Interactive procedures are very effective for exploring sets of alternatives with a view to finding the best compromise alternative. In this paper we consider the interactive exploration of implicitly or explicitly given large sets of alternatives. Upon review of classical interactive procedures, which usually assume a utility function preference model, we distinguish three typical operations used in various interactive procedures: contraction of the explored set, exploration of some neighbourhood of a current alternative and reduction of a sample of the explored set. After pointing out some areas for improvement in the traditional procedures, we describe three interactive procedures performing the three operations respectively using an outranking relation preference model. Owing to the proposed ways of building and exploiting the outranking relation, the weak points of the traditional procedures can be overcome. Finally we solve an exemplary problem using all three procedures. © 1997 John Wiley & Sons, Ltd.

14.
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent variables. The response model generalizes GLMMs to incorporate factor structures in addition to random intercepts and coefficients. As in GLMMs, the data can have an arbitrary number of levels and can be highly unbalanced with different numbers of lower-level units in the higher-level units and missing data. A wide range of response processes can be modeled including ordered and unordered categorical responses, counts, and responses of mixed types. The structural model is similar to the structural part of a SEM except that it may include latent and observed variables varying at different levels. For example, unit-level latent variables (factors or random coefficients) can be regressed on cluster-level latent variables. Special cases of this framework are explored and data from the British Social Attitudes Survey are used for illustration. Maximum likelihood estimation and empirical Bayes latent score prediction within the GLLAMM framework can be performed using adaptive quadrature in gllamm, a freely available program running in Stata. gllamm can be downloaded from http://www.gllamm.org. The paper was written while Sophia Rabe-Hesketh was employed at and Anders Skrondal was visiting the Department of Biostatistics and Computing, Institute of Psychiatry, King's College London.
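The quadrature-based estimation mentioned above integrates the latent variables out of the likelihood numerically. The sketch below approximates one cluster's marginal likelihood in the simplest GLLAMM special case, a random-intercept logistic model, using ordinary Gauss–Hermite quadrature (gllamm itself uses the adaptive variant, which recenters the nodes per cluster; the data and parameter values here are hypothetical):

```python
import numpy as np
from scipy.special import roots_hermite

def cluster_marginal_lik(y, x, beta, sigma_u, n_nodes=20):
    """Marginal likelihood of one cluster's binary responses in a
    random-intercept logit model, with intercept u ~ N(0, sigma_u^2)
    integrated out by Gauss-Hermite quadrature.
    """
    nodes, weights = roots_hermite(n_nodes)
    u = np.sqrt(2.0) * sigma_u * nodes            # change of variable
    eta = x[:, None] * beta + u[None, :]          # (n_obs, n_nodes)
    p = 1.0 / (1.0 + np.exp(-eta))
    # Conditional likelihood of the response vector at each node
    cond = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return float(np.sum(weights * cond) / np.sqrt(np.pi))

# One hypothetical cluster: 4 binary responses with one covariate
y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 1.2, 0.3])
lik = cluster_marginal_lik(y, x, beta=0.8, sigma_u=1.0)
```

Summing the log of this quantity over clusters gives the full marginal log-likelihood that the estimation algorithm maximizes over beta and sigma_u.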

15.
Verbal credibility assessment methods are frequently used in the criminal justice system to investigate the truthfulness of statements. Three of these methods are Criteria Based Content Analysis (CBCA), Reality Monitoring (RM), and Scientific Content Analysis (SCAN). The aim of this study is twofold. First, we investigated the diagnostic accuracy of CBCA, RM, and especially SCAN. Second, we tested whether giving the interviewee an example of a detailed statement can enhance the diagnostic accuracy of these verbal credibility methods. To test the latter, two groups of participants were requested to write down one true and one fabricated statement about a negative event. Prior to this request, one group received a detailed example statement, whereas the other group received no additional information. Results showed that CBCA and RM scores differed between true and fabricated statements, whereas SCAN scores did not. Giving a detailed example statement did not lead to better discrimination between truth tellers and liars for any of the methods but did lead to the participants producing significantly longer statements. The implications of these findings are discussed. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Magnitude estimations were made for the taste intensity of sodium chloride (NaCl) and quinine sulfate (QSO4) presented by three different methods: sip, anterior dorsal tongue flow, and whole-mouth flow. Power functions fitted to the data indicate that, for the anterior tongue stimulus (NaCl), the two flowing procedures produced lower exponents than did the sip procedure. For the posterior tongue stimulus (QSO4), the exponent obtained with dorsal tongue flow was lower than the exponents obtained with either of the whole-mouth procedures, sip or flow. The results are compared to previous experiments on ratio scaling of taste intensity to elucidate the effects of several procedural variables.

17.
Data in social sciences are typically non-normally distributed and characterized by heavy tails. However, most widely used methods in social sciences are still based on the analyses of sample means and sample covariances. While these conventional methods continue to be used to address new substantive issues, conclusions reached can be inaccurate or misleading. Although there is no ‘best method’ in practice, robust methods that consider the distribution of the data can perform substantially better than the conventional methods. This article gives an overview of robust procedures, emphasizing a few that have been repeatedly shown to work well for models that are widely used in social and behavioural sciences. Real data examples show how to use the robust methods for latent variable models and for moderated mediation analysis when a regression model contains categorical covariates and product terms. Results and logical analyses indicate that robust methods yield more efficient parameter estimates, more reliable model evaluation, more reliable model/data diagnostics, and more trustworthy conclusions when conducting replication studies. R and SAS programs are provided for routine applications of the recommended robust method.
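The advantage robust procedures hold under heavy tails can be seen in miniature with a Huber M-estimator of location, which downweights extreme observations instead of averaging them in. The sketch below contrasts it with the sample mean on contaminated data; it illustrates the general idea only, not the latent-variable or mediation procedures the article recommends.

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    # Robust scale: median absolute deviation, rescaled for normal data
    scale = np.median(np.abs(x - mu)) / 0.6745
    for _ in range(max_iter):
        r = (x - mu) / scale
        # Observations within c standard units get weight 1; beyond, c/|r|
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# 95 well-behaved observations plus 5% gross outliers
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0.0, 1.0, size=95),
                    rng.normal(20.0, 1.0, size=5)])
m_robust = huber_location(x)   # stays near the bulk at 0
m_mean = x.mean()              # dragged toward the outliers
```

The same downweighting logic, applied to residuals or to Mahalanobis distances of observation vectors, underlies the robust covariance estimators used for latent variable models.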

18.
Three individuals with mental retardation, who had failed to learn identity matching to sample with standard fading and prompting procedures, were given microcomputer-based programmed instruction. The methods were based on an analysis of two features of typical identity matching procedures: (a) within each trial, the current sample stimulus must control comparison selection, and (b) across trials, specific comparison stimuli must function both as S+ and as S–, depending upon the sample presented (conditional discrimination). During the first phase of training, one-trial acquisition of discriminative stimulus control was established in a nonconditional discrimination context where the S+ or S– functions of specific stimuli did not change from trial to trial. After one-trial learning was established, conditional discrimination was programmed by gradually introducing reversals of S+/S– stimulus functions. All three participants learned to perform conditional identity matching. Avenues for further analysis of the prerequisites for conditional discrimination and continued development of programmed methods are discussed.

19.
Taking the 88 longitudinal studies indexed in the China National Knowledge Infrastructure (CNKI) between 1982 and 2012 as its sample, this review evaluates the application of longitudinal methods in Chinese psychological research in three respects: usage trends, design features, and data analysis. Results show that the use of longitudinal methods grew slowly before 2005 and markedly thereafter, with samples consisting mainly of minors and young adults. Most studies used fixed-panel designs, with two or three measurement occasions, sample sizes between 10 and 300, and durations within three years. Of the 61 studies with missing data, 38 handled missingness by deletion; HLM, ANOVA, t tests, and SEM were the main analytic approaches. A substantial proportion of studies suffer from few measurement occasions, small samples, short durations, severe attrition, and relatively outdated analytic methods. Applications of longitudinal methods should choose the design type and design features according to the theoretical model and validity requirements, and select missing-data handling and longitudinal analysis methods according to the characteristics of the data.

20.
刘玥  刘红云 《心理学报》2017,(9):1234-1246
The bifactor model, which includes one general factor and several group (specific) factors simultaneously, has unique advantages for describing multidimensional test structures and has seen increasingly wide application in recent years. Based on the bifactor model, this article proposes four methods for forming total and subscale scores: the raw-score method, the summation method, general-item weighted summation, and group-item weighted summation. A simulation study compared these methods with the traditional multidimensional IRT (MIRT) approach under varying sample sizes, test lengths, and correlations between dimensions, and the results were verified in an empirical study. Results show: (1) The two weighted summation methods, especially group-item weighted summation, yield total and subscale scores that are closest to the true values and have the highest reliability. (2) When the correlations between dimensions are high and the test is long, group-item weighted summation performs well, in some conditions even better than the MIRT method. (3) Only the subscale scores formed by group-item weighted summation reflect the true correlations among the dimensions.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号