Similar articles
 Found 20 similar articles (search time: 31 ms)
1.
This study aims to evaluate a number of procedures that have been proposed to enhance cross‐cultural comparability of personality and value data. A priori procedures (anchoring vignettes and direct measures of response styles, i.e. acquiescence, extremity, midpoint responding, and social desirability), a posteriori procedures focusing on data transformations prior to analysis (ipsatization and item parcelling), and two data modelling procedures (treating data as continuous vs as ordered categories) were compared using data collected from university students in 16 countries. We found that (i) anchoring vignettes showed lack of invariance, so they were not bias‐free; (ii) anchoring vignettes showed higher internal consistencies than raw scores, whereas all other correction procedures, notably ipsatization, showed lower internal consistencies; (iii) in measurement invariance testing, no procedure yielded scalar invariance; anchoring vignettes and item parcelling slightly improved comparability, response style correction did not affect it, and ipsatization resulted in lower comparability; (iv) treating Likert‐scale data as categorical resulted in higher levels of comparability; (v) factor scores of scales extracted from different procedures showed similar correlational patterning; and (vi) response style correction was the only procedure that suggested improvement in external validity of country‐level conscientiousness. We conclude that, although no procedure resolves all comparability issues, anchoring vignettes, parcelling, and treating data as ordered categories seem promising to alleviate incomparability. We advise caution in uncritically applying any of these procedures. Copyright © 2017 European Association of Personality Psychology.
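Several of the correction procedures compared above transform raw scores before modelling. As a generic illustration (a minimal sketch of the technique in general, not the authors' exact pipeline), ipsatization can be written as within-person standardization:

```python
import numpy as np

def ipsatize(X):
    """Within-person standardization: center and scale each respondent's
    ratings by that respondent's own mean and SD across items.
    X: (n_respondents, n_items) array of Likert responses."""
    mean = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, ddof=1, keepdims=True)
    return (X - mean) / sd

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(4, 10)).astype(float)  # 4 respondents, 10 items, 1-5 scale
Z = ipsatize(X)
```

Ipsatized rows have mean zero by construction, which removes individual differences in scale use but also induces dependencies among items, one reason the abstract reports lower internal consistencies and comparability for this procedure.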

2.
Researchers are often advised to write balanced scales (containing an equal number of positively and negatively worded items) when measuring psychological attributes. This practice is recommended to control for acquiescence bias (ACQ). However, little advice has been given on what to do with such data if the researcher subsequently wants to evaluate a 1-factor model for the scale. This article compares 3 approaches for dealing with the presence of ACQ bias, which make different assumptions: an ipsatization approach based on the work of Chan and Bentler (CB; 1993), a confirmatory factor analysis (CFA) approach that includes an ACQ factor with equal loadings (Billiet & McClendon, 2000; Mirowsky & Ross, 1991), and an exploratory factor analysis (EFA) approach with a target rotation (Ferrando, Lorenzo-Seva, & Chico, 2003). We also examine the “do nothing” approach, which fits the 1-factor model to the data ignoring the presence of ACQ bias. Our main findings are that the CFA method performs best overall and is robust to the violation of its assumptions; the EFA and the CB approaches work well when their assumptions are strictly met; and the “do nothing” approach can be surprisingly robust when the ACQ factor is not very strong.

3.
4.
Psychologists have a recurrent concern that socially desirable responding (SDR) is a form of response distortion that compromises the validity of self‐report measures, especially in high‐stakes situations where participants are motivated to make a good impression. Psychologists have used various strategies to minimise SDR or its impact, for example, forced choice responding, ipsatization, and direct measures of social desirability. However, empirical evidence suggests that SDR is a robust phenomenon existing in many cultures and a substantive variable with meaningful associations with other psychological variables and outcomes. Here, we review evidence of the occurrence of SDR across cultures and tie SDR to the study of cultural normativity and cultural consonance in anthropology. We suggest that cultural normativity is an important component of SDR, which may partly explain the adaptiveness of SDR and its association with positive outcomes.

5.
This study analyzes the robustness of the linear mixed model (LMM) with the Kenward–Roger (KR) procedure to violations of normality and sphericity when used in split-plot designs with small sample sizes. Specifically, it explores the independent effect of skewness and kurtosis on KR robustness for the values of skewness and kurtosis coefficients that are most frequently found in psychological and educational research data. To this end, a Monte Carlo simulation study was designed, considering a split-plot design with three levels of the between-subjects grouping factor and four levels of the within-subjects factor. Robustness is assessed in terms of the probability of type I error. The results showed that (1) the robustness of the KR procedure does not differ as a function of the violation or satisfaction of the sphericity assumption when small samples are used; (2) the LMM with KR can be a good option for analyzing total sample sizes of 45 or larger when their distributions are normal, slightly or moderately skewed, and with different degrees of kurtosis violation; (3) the effect of skewness on the robustness of the LMM with KR is greater than the corresponding effect of kurtosis for common values; and (4) when data are not normal and the total sample size is 30, the procedure is not robust. Alternative analyses should be performed when the total sample size is 30.
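The robustness criterion used above, the empirical probability of a type I error, is estimated by simulating data under the null and counting rejections. The sketch below shows that general Monte Carlo recipe with a deliberately simple stand-in test (a two-group z-test with known unit variance), not the LMM/Kenward–Roger analysis studied in the paper:

```python
import numpy as np

def type1_error_rate(n_per_group=15, reps=2000, seed=1):
    """Empirical type I error: simulate data under H0 (equal group means)
    and count how often the test rejects at the two-sided 5% level.
    A z-test on the difference of two group means (known unit variance)
    stands in for the full LMM/KR analysis."""
    rng = np.random.default_rng(seed)
    crit = 1.959964  # two-sided 5% critical value of the standard normal
    rejections = 0
    for _ in range(reps):
        g1 = rng.normal(0.0, 1.0, n_per_group)
        g2 = rng.normal(0.0, 1.0, n_per_group)
        z = (g1.mean() - g2.mean()) / np.sqrt(2.0 / n_per_group)
        if abs(z) > crit:
            rejections += 1
    return rejections / reps

rate = type1_error_rate()  # should land near the nominal 0.05
```

Replacing `rng.normal` with skewed or heavy-tailed generators, and checking whether the empirical rate stays near the nominal level, is the essence of the robustness design the abstract describes.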

6.
In this article, we show that the underlying dimensions obtained when factor analyzing cross-sectional data actually form a mix of within-person state dimensions and between-person trait dimensions. We propose a factor analytical model that distinguishes between four independent sources of variance: common trait, unique trait, common state, and unique state. We show that by testing whether there is weak factorial invariance across the trait and state factor structures, we can tackle the fundamental question first raised by Cattell; that is, are within-person state dimensions qualitatively the same as between-person trait dimensions? Furthermore, we discuss how this model is related to other trait-state factor models, and we illustrate its use with two empirical data sets. We end by discussing the implications for cross-sectional factor analysis and suggest potential future developments.

7.
Simultaneous factor analysis in several populations
This paper is concerned with the study of similarities and differences in factor structures between different groups. A common situation occurs when a battery of tests has been administered to samples of examinees from several populations. A very general model is presented, in which any parameter in the factor analysis models (factor loadings, factor variances, factor covariances, and unique variances) for the different groups may be assigned an arbitrary value or constrained to be equal to some other parameter. Given such a specification, the model is estimated by the maximum likelihood method, yielding a large-sample χ2 test of goodness of fit. By computing several solutions under different specifications one can test various hypotheses. The method is capable of dealing with any degree of invariance, from the one extreme, where nothing is invariant, to the other extreme, where everything is invariant. Neither the number of tests nor the number of common factors need be the same for all groups, but to be at all interesting, it is assumed that there is a common core of tests in each battery that is the same or at least content-wise comparable. This research was supported by grant NSF-GB-12959 from the National Science Foundation. My thanks are due to Michael Browne for his comments on an earlier draft of this paper and to Marielle van Thillo, who checked the mathematical derivations and wrote and debugged the computer program SIFASP. Now at the Statistics Department, University of Uppsala, Sweden.

8.
There is no agreement regarding the nature or number of dimensions that make up the social effectiveness domain. We inductively explore the relationships between a set of social effectiveness measures with the intention of identifying an initial set of dimensions. An exploratory factor analysis of the Social Competence Inventory (SCI; Schneider, 2001) resulted in the identification of four factors: Social Potency, Social Appropriateness, Social Emotional Expression, and Social Reputation. A joint factor analysis between the SCI and a set of extant measures resulted in the identification of the same four factors. A fifth factor emerged when a set of scales from an emotional intelligence measure was included in the analysis, suggesting that emotional intelligence is not captured within the common factor space defined by measures of social effectiveness. This study represents a first step in the establishment of a set of common social effectiveness dimensions.

9.
This research is concerned with task-oriented decision situations where the decision maker faces two options, one superior on a factor directly related to the given task (called the A factor) and the other superior on a factor not central to the accomplishment of the task but tempting to the decision maker (called the B factor). According to the elastic justification notion, the decision maker may find it unjustifiable to choose the B-superior option over the A-superior option if there is no uncertainty in the A values of the two options, but will construct a justification and become more likely to choose the B-superior option if there is uncertainty. In support of this proposition, two experiments employing a simulated decision situation found that subjects were indeed more likely to choose the B-superior option when there was uncertainty in the A factor than when there was not, regardless of whether the uncertainty resided in one of the options (Experiment 1) or in both options (Experiment 2).

10.
Suganuma, M., & Yokosawa, K. (2006). Perception, 35(4), 483-495.
In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.

11.
Our previous research on auditory time perception showed that the duration of empty time intervals shorter than about 250 ms can be greatly underestimated if they are immediately preceded by shorter time intervals. We named this illusion 'time-shrinking' (TS). This study comprises four experiments in which the preceding interval, t1, was followed by a standard interval, t2. When t1 ≤ 200 ms and t1 ≤ t2, the underestimation of t2 emerged clearly. The absolute difference between t2 and t1 was the crucial factor for the illusion to appear. The underestimation increased when t2 increased from t1 to t1 + 65 ms, stayed at about 45 ms when t2 was between t1 + 65 ms and t1 + 95 ms, and disappeared suddenly when t2 exceeded t1 + 95 ms. This pattern of results was observed across all values of t1 ≤ 200 ms. A model was fit to the data to elucidate the underlying process of the illusion. The model states that the perceived duration difference between t1 and t2 is reduced by cutting mental processing time for t2; in other words, t2 assimilates to t1.
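The reported pattern can be summarized as a piecewise function of the difference d = t2 − t1. The sketch below is purely descriptive, built from the breakpoints stated in the abstract (a plateau of about 45 ms between t1 + 65 ms and t1 + 95 ms); the linear rise up to the plateau is an assumption, not the authors' fitted model:

```python
def predicted_shrinking(t1, t2):
    """Descriptive sketch of the time-shrinking illusion: predicted
    underestimation (ms) of t2 as a function of d = t2 - t1.
    Breakpoints (65 ms, 95 ms) and plateau (~45 ms) follow the abstract;
    the linear rise to the plateau is an assumption."""
    if t1 > 200 or t2 < t1:
        return 0.0               # illusion reported only for t1 <= 200 ms, t1 <= t2
    d = t2 - t1
    if d <= 65:
        return 45.0 * d / 65.0   # assumed linear rise to the ~45 ms plateau
    if d <= 95:
        return 45.0              # plateau between t1 + 65 and t1 + 95 ms
    return 0.0                   # illusion disappears beyond t1 + 95 ms
```

For example, with t1 = 100 ms and t2 = 180 ms the function sits on the plateau and predicts a 45 ms underestimation, while t2 = 300 ms falls outside the illusion's range and predicts none.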

12.
The covariances of observed variables reproduced from conventional factor score predictors are generally not the same as the covariances reproduced from the common factors. We sought to find a factor score predictor that optimally reproduces the common part of the observed covariances. It was found algebraically that—under some conditions—the single observed variable with highest loading on a factor reproduces the non-diagonal elements of the observed covariance matrix more exactly than the conventional factor score predictors. This finding is linked to Spearman's and Wilson's 1929 debate on the use of single variables as factor score predictors. A population-based and a sample-based simulation study confirmed the algebraic result that taking a single variable can outperform conventional factor score predictors in reproducing the non-diagonal covariances when the nonzero loading size and the number of nonzero loadings per factor are small. The results indicated that a weighted aggregation of variables does not necessarily lead to an improvement of the score over the variable with the highest loading.

13.
Learners demonstrate superior recognition of faces of their own race or ethnicity, compared to faces of other races or ethnicities, a finding termed the own-race bias. Accounts of the own-race bias differ on whether the effect reflects acquired expertise with own-race faces or enhanced motivation to individuate own-race faces. Learners have previously been motivated to demonstrate increased recall for highly important items through a value-based paradigm, in which item importance is designated using high (vs. low) point values. Learners receive point values by correctly recalling the corresponding items at test, and are given the goal of achieving a high total point score. In two experiments we examined whether a value-based paradigm can motivate learners to differentiate between other-race faces, reducing or eliminating the own-race bias. In Experiment 1, participants studied own- and other-race faces paired with high or low point values. High point values (12 points) indicated that a face was highly important to learn, whereas low point values (1 point) indicated that a face was less important to learn. Participants demonstrated increased recognition for high-value own-race (but not other-race) faces, suggesting that motivation alone is not enough to reduce the own-race bias. In Experiment 2, we examined whether participants could use value to enhance recognition when permitted to self-pace their study. Recognition did not differ between high-value own- and other-race faces, reducing the own-race bias. Such data suggest that motivation can influence the own-race bias when participants can control encoding.

14.
Principal component analysis (PCA) and common factor analysis are often used to model latent data structures. Typically, such analyses assume a single population whose correlation or covariance matrix is modelled. However, data may sometimes be unwittingly sampled from mixed populations containing a taxon (nonarbitrary subpopulation) and its complement class. One derives relations between values of PCA parameters within subpopulations and their values in the mixed population. These results are then extended to factor analysis in mixed populations. As relationships between subpopulation and mixed-population principal components and factors sensitively depend on within-subpopulation structures and between-subpopulation differences, naive interpretation of PCA or factor analytic findings can potentially mislead. Several analyses, better suited to the dimensional analysis of admixture data structures, are presented and compared.
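The core hazard described above is easy to reproduce numerically: when data are unwittingly sampled from a mixture of a taxon and its complement, the leading principal component can track the between-group mean difference rather than any within-group structure. A minimal numpy demonstration with assumed toy parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two subpopulations with identical isotropic within-group covariance
# but well-separated means along the first axis (assumed toy values).
n = 500
sep = np.array([4.0, 0.0, 0.0])
taxon = rng.normal(0.0, 1.0, (n, 3)) + sep
complement = rng.normal(0.0, 1.0, (n, 3))
mixed = np.vstack([taxon, complement])

def first_pc(X):
    """First principal component direction via SVD of centered data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[0]

# In the mixture, the leading component aligns with the mean-difference
# axis, even though within each subpopulation no direction dominates.
pc_mixed = first_pc(mixed)
```

Within each subpopulation the covariance is isotropic, so no component is privileged there; the dominant mixed-sample component is created entirely by the admixture, which is exactly why naive interpretation of such findings can mislead.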

15.
Common region: a new principle of perceptual grouping.
A new principle of grouping is proposed that is based on elements being located within a common region of space. Demonstrations analogous to Wertheimer's original displays show that this factor strongly influences perceived grouping and is capable of overcoming the effects of other powerful grouping factors such as proximity and similarity. Grouping by common region is further shown to depend on perceived depth relations, indicating that it is influenced by processes that occur after at least some depth perception has been achieved. Further demonstrations suggest that it is dominated by the smallest background area and that it can follow a hierarchical embedding scheme. It is argued that common region cannot be reduced to the effects of proximity, closure, or any other previously known factor and therefore constitutes a genuinely new principle of grouping.

16.
In the psychological literature, there are two seemingly different approaches to inference: that from estimation of posterior intervals and that from Bayes factors. We provide an overview of each method and show that a salient difference is the choice of models. The two approaches as commonly practiced can be unified with a certain model specification, now popular in the statistics literature, called spike-and-slab priors. A spike-and-slab prior is a mixture of a null model, the spike, with an effect model, the slab. The estimate of the effect size here is a function of the Bayes factor, showing that estimation and model comparison can be unified. The salient difference is that common Bayes factor approaches provide for privileged consideration of theoretically useful parameter values, such as the value corresponding to the null hypothesis, while estimation approaches do not. Both approaches, either privileging the null or not, are useful depending on the goals of the analyst.
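For a single normal observation, the unification described above can be written in a few lines: under a spike-and-slab prior, the model-averaged posterior mean of the effect is a direct function of the Bayes factor between the slab and the spike. The prior settings below are illustrative, not taken from the paper:

```python
import math

def spike_and_slab(y, sigma2=1.0, tau2=1.0, prior_slab=0.5):
    """One-observation spike-and-slab sketch: y ~ N(theta, sigma2), with
    spike: theta = 0 and slab: theta ~ N(0, tau2). Returns the Bayes
    factor (slab vs spike) and the model-averaged posterior mean of theta.
    All settings are illustrative."""
    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    m0 = normal_pdf(y, sigma2)          # marginal likelihood under the spike
    m1 = normal_pdf(y, sigma2 + tau2)   # marginal likelihood under the slab
    bf10 = m1 / m0
    post_slab = bf10 * prior_slab / (bf10 * prior_slab + (1 - prior_slab))
    shrink = tau2 / (tau2 + sigma2)     # slab posterior mean shrinks y
    post_mean = post_slab * shrink * y  # spike contributes exactly 0
    return bf10, post_mean

bf, est = spike_and_slab(2.0)
```

With `prior_slab = 0.5` the posterior odds equal the Bayes factor, so the estimate is pulled toward zero exactly to the extent that the data favour the spike, which is the sense in which estimation and model comparison are unified.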

17.
Observers performed lightness matches for physically equivalent gray targets of a simultaneous lightness contrast display and displays in which both targets were on the same background. Targets either shared a common line-texture pattern with their respective backgrounds or did not. Results indicate that when targets share a line-texture pattern with their respective backgrounds, a contrast effect is obtained. However, when the target's pattern is different from the background's pattern, perceived contrast is significantly reduced and the target appears as a separate 3-D entity. This result applies to both vertically and horizontally oriented displays, to targets that are increments or decrements, and to line-texture patterns that are black or white. Line patterns that are shared by targets and backgrounds result in T-junctions that provide occlusion information. We conclude that targets and backgrounds perceived to be on separate planes because of T-junctions are less likely to be perceptually grouped together and that their luminance values are less likely to be compared with one another.

18.
In this paper we characterise a tension between two views about how an agent could achieve efficient action selection. On one hand, it is common in some of the cognitive and behavioural sciences to maintain that efficient action selection requires that the value of all actions or options available to an agent are represented on a unidimensional scale of values, in other words that action selection make use of a “common currency”. On the other hand, early work in situated, embodied robotics and distributed control associated with Rodney Brooks maintained that “intelligence” could be achieved without the instantiation of any representations at all, and without centralised control systems. This line of thinking has exerted significant influence in situated and enactivist approaches to human cognition. If what situated roboticists count as “intelligence” includes capacity for efficient action selection, then their claim that intelligence can be achieved without representations is in tension with the views of those who argue that efficient action selection requires that a common currency be represented. We argue here that the apparent tension is genuine, develop an analysis of the tension itself, and offer a preliminary overview of the considerations relevant to navigating it.

19.
In their recent paper, Marchant, Simons, and De Fockert (2013) claimed that the ability to average between multiple items of different sizes is limited by small samples of arbitrarily attended members of a set. This claim is based on a finding that observers are good at representing the average when an ensemble includes only two sizes distributed among all items (regular sets), but their performance gets worse when the number of sizes increases with the number of items (irregular sets). We argue that an important factor not considered by Marchant et al. (2013) is the range of size variation that was much bigger in their irregular sets. We manipulated this factor across our experiments and found almost the same efficiency of averaging for both regular and irregular sets when the range was stabilized. Moreover, highly regular sets consisting only of small and large items (two-peaks distributions) were averaged with greater error than sets with small, large, and intermediate items, suggesting a segmentation threshold determining whether all variable items are perceived as a single ensemble or distinct subsets. Our results demonstrate that averaging can actually be parallel but the visual system has some difficulties with it when some items differ too much from others.

20.
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite—taking multiple parameter values—such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
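The frequentist effect of optional stopping on Bayes factors is straightforward to simulate. The sketch below uses a simple normal model with a N(0, 1) slab prior on the mean (illustrative settings, not the paper's exact scenarios) and compares the chance that BF10 ever crosses a threshold under sequential monitoring with the chance under a single fixed-n analysis:

```python
import math
import numpy as np

def bf10(ybar, n, tau2=1.0):
    """Bayes factor for H1: mu ~ N(0, tau2) vs H0: mu = 0, given n
    observations with sample mean ybar and known unit variance."""
    def normal_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    return normal_pdf(ybar, 1.0 / n + tau2) / normal_pdf(ybar, 1.0 / n)

def prob_bf_exceeds(threshold=3.0, n_max=200, reps=500, optional_stop=True, seed=7):
    """Simulate null data (mu = 0) and estimate how often BF10 exceeds
    `threshold` when checked after every observation (optional stopping)
    versus only once at n_max (fixed n)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n_max)
        if optional_stop:
            means = np.cumsum(x) / np.arange(1, n_max + 1)
            if any(bf10(m, n) > threshold
                   for n, m in enumerate(means, start=1)):
                hits += 1
        elif bf10(x.mean(), n_max) > threshold:
            hits += 1
    return hits / reps

p_stop = prob_bf_exceeds(optional_stop=True)
p_fixed = prob_bf_exceeds(optional_stop=False)
```

Because every dataset that crosses the threshold at n_max is also checked at n_max under monitoring, the monitored hit rate can only be at least as large; running the simulation shows how much larger it gets even when the null is exactly true.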


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)