Similar Literature
Found 20 similar documents (search time: 24 ms)
1.
This study tests between two modern theories of decision making. Rank- and sign-dependent utility (RSDU) models, including cumulative prospect theory (CPT), imply stochastic dominance and two cumulative independence conditions. Configural weight models, with parameters estimated in previous research, predict systematic violations of these properties for certain choices. Experimental data systematically violate all three properties, contrary to RSDU but consistent with configural weight models. This study also tests whether violations of stochastic dominance can be explained by violations of transitivity. Violations of transitivity may be evidence of a dominance-detecting mechanism. Although some transitivity violations were observed, most choice triads violated stochastic dominance without violating transitivity. Judged differences between gambles were not consistent with the CPT model. Data were not consistent with the editing principles of cancellation and combination. The main findings are interpreted in terms of coalescing, the principle that equal outcomes can be combined in a gamble by adding their probabilities. RSDU models imply coalescing but configural weight models violate it, allowing configural weighting to explain violations of stochastic dominance and cumulative independence.
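Coalescing, as defined in this abstract, lends itself to a compact illustration. The sketch below (not from the study; the gamble representation and values are invented) combines equal-outcome branches by summing their probabilities:

```python
from collections import defaultdict

def coalesce(gamble):
    """Combine branches with equal outcomes by adding their probabilities.

    `gamble` is a list of (outcome, probability) branches; the result is the
    coalesced gamble, in which each distinct outcome appears exactly once.
    """
    combined = defaultdict(float)
    for outcome, prob in gamble:
        combined[outcome] += prob
    return sorted(combined.items())

# A "split" presentation of a gamble and its coalesced equivalent:
split = [(96, 0.85), (96, 0.05), (12, 0.10)]
print(coalesce(split))  # outcome 96 coalesces to probability 0.9
```

Theories that satisfy coalescing must treat `split` and its coalesced form identically; configural weight models need not, which is what lets them predict dominance violations.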

2.
In two studies, we used structural equation models to test the hypothesis that a General Factor of Personality (GFP) occupies the apex of the hierarchy of personality. In Study 1, we found a GFP that explained 45% of the reliable variance in a model that went from the Big Five to the Big Two to the Big One in the 14 studies of inter-scale correlations (N = 4496) assembled by Digman (1997). A higher order factor of Alpha/Stability was defined by Conscientiousness, Emotional Stability, and Agreeableness, with loadings from 0.61 to 0.70, while Beta/Plasticity was defined by Openness and Extraversion with loadings of 0.55 and 0.77. In turn, the GFP was defined by Alpha and Beta with loadings of 0.67. In Study 2, a GFP explained 44% of the reliable variance in a similar model using data from a published meta-analysis of the Big Five (N = 4000) by Mount, Barrick, Scullen, and Rounds (2005). Strong general factors such as these, based on large data sets with good model fits that cross validate perfectly, are unlikely to be due to artifacts and response sets.

3.
James, Demaree, Mulaik, and Ladd (1992) proposed that situational variables may act as substantive ("common") causes of relationships between individual difference variables as well as statistical artifacts (i.e., measurement unreliability) associated with these variables, thus invalidating assumptions of current validity generalization/meta-analysis procedures. In this investigation, we report the results of two large-scale studies designed to test hypothesized relationships derived from two "common cause" models. Study 1 examines relationships between store-level organizational climate variables and employee satisfaction and performance variables for 33,097 sales personnel in 537 retail stores. Study 2 investigates relationships between store-level situational constraints and customer service perception and shopping variables for 31,611 customers from 564 retail stores. The results of these studies did not support the proposition that situational variables act as substantive causes of correlations among the respective employee and customer variables or the variances and reliabilities of these variables. The implications of these findings for meta-analyses in applied psychology as well as the generalizability of the findings are discussed.

4.
Tett, Jackson, and Rothstein (1991) recently presented a meta-analysis of the relationship between personality and job performance. Many of their findings, particularly those pertaining to the Big Five personality dimensions, are at odds with one other large-scale meta-analytic study (Barrick & Mount, 1991) investigating the relation between personality and performance. In order to reconcile these new results with previous findings, we examined differences in the sample sizes used, the process for assigning pre-existing scales to personality dimensions, and the nature of the jobs investigated. In addition, we found four technical errors in the Tett et al. moderator meta-analyses: in computing sampling error, the bias correction, sampling error for bias-corrected correlations, and sampling error variance across studies. These errors raise serious questions about the interpretation of their results for various moderators of the personality-job performance relationship.

5.
Categorical moderators are often included in mixed-effects meta-analysis to explain heterogeneity in effect sizes. An assumption in tests of categorical moderator effects is that of a constant between-study variance across all levels of the moderator. Although it rarely receives serious thought, there can be statistical ramifications to upholding this assumption. We propose that researchers should instead default to assuming unequal between-study variances when analysing categorical moderators. To achieve this, we suggest using a mixed-effects location-scale model (MELSM) to allow group-specific estimates for the between-study variance. In two extensive simulation studies, we show that in terms of Type I error and statistical power, little is lost by using the MELSM for moderator tests, but there can be serious costs when an equal variance mixed-effects model (MEM) is used. Most notably, in scenarios with balanced sample sizes or equal between-study variance, the Type I error and power rates are nearly identical between the MEM and the MELSM. On the other hand, with imbalanced sample sizes and unequal variances, the Type I error rate under the MEM can be grossly inflated or overly conservative, whereas the MELSM does comparatively well in controlling the Type I error across the majority of cases. A notable exception where the MELSM did not clearly outperform the MEM was in the case of few studies (e.g., 5). With respect to power, the MELSM had similar or higher power than the MEM in conditions where the latter produced non-inflated Type I error rates. Together, our results support the idea that assuming unequal between-study variances is preferred as a default strategy when testing categorical moderators.
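The contrast between a pooled and a group-specific between-study variance can be illustrated with a per-group DerSimonian-Laird estimator — a crude moment-based analogue of what the MELSM does in a model-based way (the estimator is standard; the effect sizes and sampling variances below are invented):

```python
def dl_tau2(effects, variances):
    """DerSimonian-Laird moment estimate of between-study variance (tau^2)."""
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)         # truncate at zero

# Two moderator levels with visibly unequal heterogeneity (invented data);
# a single pooled tau^2 would misrepresent both groups:
group_a = dl_tau2([0.10, 0.11, 0.09], [0.01, 0.01, 0.01])  # homogeneous
group_b = dl_tau2([0.00, 0.50, 1.00], [0.01, 0.01, 0.01])  # heterogeneous
print(group_a, group_b)
```

Estimating tau^2 separately per moderator level is what "assuming unequal between-study variances" amounts to in this simplified setting.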

6.

In this article we examine the relationship between perceptions of intergroup distinctiveness and intergroup differentiation. Research in this area has highlighted two contrasting hypotheses: high distinctiveness is predicted to lead to increased intergroup differentiation (self-categorisation theory), while low distinctiveness or too much similarity can also underlie positive differentiation (social identity theory). We argue for a theoretical integration of these predictions and outline their domains of applicability. In addition to empirical studies from our own laboratory, support for these hypotheses in the literature is examined meta-analytically, and we assess the power of a number of moderators of the distinctiveness-differentiation relation. We focus on group identification and salience of the superordinate category as the most powerful moderators of this relation. We report evidence that low group distinctiveness leads to more differentiation for high identifiers, while high group distinctiveness leads to more differentiation for low identifiers. In addition, our meta-analysis revealed that when the superordinate category was not salient, low distinctiveness tended to lead to differentiation (albeit not significantly so) while high distinctiveness led to differentiation when the salience of the superordinate category was high. A model is proposed integrating our predictions concerning moderators of the distinctiveness-differentiation relation. Theoretical implications of these findings are discussed and we suggest directions for future research.

7.
This study's purpose was to meta-analytically estimate the magnitude of the relationship between typical and maximum job performance to determine if this distinction deserves greater attention. We also tested several moderators including three associated with the temporal boundaries of this relationship and examined theoretical antecedents of typical and maximum performance (ability, motivation, and personality). This meta-analysis revealed a moderate typical–maximum performance association (ρ = .42), suggesting that a meaningful distinction does exist. Although the examined temporal moderators did not meaningfully affect the typical–maximum performance relationship, task complexity, type of performance measure, and study setting were significant moderators. Antecedent analyses confirmed that both ability and Openness to Experience are more strongly related to maximum than typical performance. The implications of these findings are discussed.

8.
The common consequence paradox of Allais can be decomposed into three simpler principles: transitivity, coalescing, and restricted branch independence. Different theories attribute such paradoxes to violations of restricted branch independence only, to coalescing only, or to both. This study separates tests of these two properties in order to compare these theories. Although rank-dependent utility (RDU) theories, including cumulative prospect theory (CPT), violate branch independence, the empirical pattern of violations is opposite that required by RDU theories to account for Allais paradoxes. Data also show systematic violations of coalescing, which refute RDU theories. The findings contradict both original and cumulative prospect theories, with or without their editing principles of combination and cancellation. Modal choices were well predicted by Birnbaum's RAM and TAX models with parameters estimated from previous data. The effects of event framing on these tests were also assessed and found to be negligible.

9.
Three plausible assumptions of conditional independence in a hierarchical model for responses and response times on test items are identified. For each of the assumptions, a Lagrange multiplier test of the null hypothesis of conditional independence against a parametric alternative is derived. The tests have closed-form statistics that are easy to calculate from the standard estimates of the person parameters in the model. In addition, simple closed-form estimators of the parameters under the alternatives of conditional dependence are presented, which can be used to explore model modification. The tests were applied to a data set from a large-scale computerized exam and showed excellent power to detect even minor violations of conditional independence.

10.
This article examines the relationship between personality disorder (PD) symptoms and personality traits using a variety of distributional assumptions. Prior work in this area relies almost exclusively on linear models that treat PD symptoms as normally distributed and continuous. However, these assumptions rarely hold, and thus the results of prior studies are potentially biased. Here we explore the effect of varying the distributions underlying regression models relating PD symptomatology to personality traits using the initial wave of the Longitudinal Study of Personality Disorders (N = 250; Lenzenweger, 1999), a university-based sample selected to include PD rates resembling epidemiological samples. PD symptoms were regressed on personality traits. The distributions underlying the dependent variable (i.e., PD symptoms) were variously modeled as normally distributed, as counts (Poisson, negative binomial), and with two-part mixture distributions (zero-inflated, hurdle). We found that treating symptoms as normally distributed resulted in violations of model assumptions, that the negative binomial and hurdle models were empirically equivalent, but that the coefficients achieving significance often differ depending on which part of the mixture distributions are being predicted (i.e., presence vs. severity of PD). Results have implications for how the relationship between normal and abnormal personality is understood.
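The core point — that distributional assumptions matter for count-like symptom data — can be illustrated with intercept-only models, a deliberately simplified analogue of the article's regressions (the counts below are invented):

```python
import math

def poisson_loglik(counts):
    """Log-likelihood of an intercept-only Poisson model (MLE rate = sample mean)."""
    mu = sum(counts) / len(counts)
    return sum(y * math.log(mu) - mu - math.lgamma(y + 1) for y in counts)

def normal_loglik(counts):
    """Log-likelihood of a normal model with MLE mean and variance."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((y - m) ** 2 for y in counts) / n
    return sum(-0.5 * math.log(2 * math.pi * var) - (y - m) ** 2 / (2 * var)
               for y in counts)

# Poisson-like symptom counts: the count model should attain the higher
# log-likelihood, while the normal model pays for its wrong support.
counts = [0, 0, 1, 1, 1, 2, 2, 2, 3, 4]
print(poisson_loglik(counts), normal_loglik(counts))
```

In a full analysis one would compare regression models (and zero-inflated or hurdle variants) rather than intercept-only fits, but the likelihood comparison works the same way.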

11.
Loglinear Rasch model tests
Existing statistical tests for the fit of the Rasch model have been criticized because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch model are discussed and the Rasch model is reformulated as a quasi-independence model. The model is a quasi-loglinear model for the incomplete subgroup × score × item 1 × item 2 × … × item k contingency table. Using ordinary contingency table methods, the Rasch model can be tested generally or against less restrictive quasi-loglinear models to investigate specific violations of its assumptions.

12.
This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.

13.
Four experiments were conducted to demonstrate that embarrassment and shame are distinct emotions that result from violations of different types of internalized standards. Embarrassment results from violating one's particular persona; shame results from violating a shared, objective ideal. Subjects vividly imagined themselves in situations and indicated their emotional reactions. In Experiment 1, we demonstrate that people differentiate between embarrassment and shame systematically (F(1,27) = 74.4, p < 0.001). In Experiments 2 and 3, we demonstrate that embarrassment results from violating a persona (n = 34, p < 0.001; n = 23, p < 0.001), and shame results from violating an objective ideal (n = 34, p < 0.001; n = 23, p < 0.001). In Experiment 4, we demonstrate that it is the type of standard that is violated (n = 30, p < 0.001), not whether or not the violation was intentional, that determines whether one experiences embarrassment or shame. We argue that both shame and embarrassment play an important role in maintaining personal identity.

14.
Scruggs and Mastropieri (Behaviour Research and Therapy, 32, 879–883, 1994) take issue with criticisms of their PND (Percent of Nonoverlapping Data) statistic that we offered in our recent article (Allison & German, Behaviour Research and Therapy, 31, 621–631, 1993), which advocated a regression-based method for obtaining effect sizes in single-subject studies. They contend that their PND approach has several advantages over our approach because: (1) they believe that, unlike ours, it can take advantage of the small number of observations that are typically available in single-case studies; (2) it is simple to compute; (3) it frees researchers from traditional regression assumptions of normality, homogeneity of variance, and independence of observations and residuals; and (4) it correlates with visual judgements made by experts. As we shall argue, these claims are built upon very questionable assumptions and they are very difficult to substantiate. In addition, we show that the expected value of the PND is so strongly related to sample size as to be rendered meaningless.
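PND itself is simple to state: the percentage of treatment-phase observations that exceed the most extreme baseline observation. A minimal sketch of the standard definition, assuming an intervention intended to increase the behaviour (data invented):

```python
def pnd(baseline, treatment):
    """Percent of Nonoverlapping Data: share of treatment-phase points
    exceeding the highest baseline point (therapeutic direction = increase)."""
    ceiling = max(baseline)
    above = sum(1 for x in treatment if x > ceiling)
    return 100.0 * above / len(treatment)

print(pnd([3, 5, 4], [6, 7, 5, 8]))  # 75.0 — three of four points clear the baseline max
```

Because the baseline maximum is an extreme order statistic, its expected value grows with the number of baseline points, which is the sample-size dependence criticized above.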

15.
Signal detection theory offers several indexes of sensitivity (d', Az, and A') that are appropriate for two-choice discrimination when data consist of one hit rate and one false alarm rate per condition. These measures require simplifying assumptions about how target and lure evidence is distributed. We examine three statistical properties of these indexes: accuracy (good agreement between the parameter and the sampling distribution mean), precision (small variance of the sampling distribution), and robustness (small influence of violated assumptions on accuracy). We draw several conclusions from the results. First, a variety of parameters (sample size, degree of discriminability, and magnitude of hits and false alarms) influence statistical bias in these indexes. Comparing conditions that differ in these parameters entails discrepancies that can be reduced by increasing N. Second, unequal variance of the evidence distributions produces significant bias that cannot be reduced by increasing N, a serious drawback to the use of these sensitivity indexes when variance is unknown. Finally, their relative statistical performances suggest that Az is preferable to A'.
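Two of the indexes named here have simple closed forms from a single hit rate and false alarm rate: d' under the equal-variance Gaussian assumption, and the nonparametric A' of Pollack and Norman (formula valid for H ≥ F). A sketch with illustrative rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(H) - z(F), assuming equal-variance Gaussian evidence."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def a_prime(hit_rate, fa_rate):
    """Nonparametric A' (Pollack & Norman), valid for H >= F."""
    h, f = hit_rate, fa_rate
    return 0.5 + (h - f) * (1 + h - f) / (4 * h * (1 - f))

print(d_prime(0.84, 0.16))  # roughly 1.99
print(a_prime(0.75, 0.25))  # about 0.833
```

Both indexes are undefined or degenerate at rates of 0 or 1, so applied work typically applies a correction (e.g., replacing 0 and 1 with 1/(2N) and 1 - 1/(2N)) before computing them.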

16.
Observers completed perceptual categorization tasks in which base rates and payoffs were manipulated separately or simultaneously across a range of category discriminabilities. Decision criterion estimates from the simultaneous base-rate/payoff conditions were closer to optimal than those predicted from the independence assumption, in line with predictions from the flat-maxima hypothesis. A hybrid model that instantiated the flat-maxima and competition between reward and accuracy maximization hypotheses was applied to the data as well as used in a reanalysis of C. J. Bohil and W. J. Maddox's (2001) study. The hybrid model was superior to a model that incorporated the independence assumption, suggesting that violations of the independence assumption are to be expected and are well captured by the flat-maxima hypothesis, without requiring any additional assumptions.

17.
The purpose of this study was to apply a set of rarely reported psychometric indices that, nevertheless, are important to consider when evaluating psychological measures. All can be derived from a standardized loading matrix in a confirmatory bifactor model: omega reliability coefficients, factor determinacy, construct replicability, explained common variance, and percentage of uncontaminated correlations. We calculated these indices and extended the findings of 50 recent bifactor model estimation studies published in psychopathology, personality, and assessment journals. These bifactor derived indices (most not presented in the articles) provided a clearer and more complete picture of the psychometric properties of the assessment instruments. We reached two firm conclusions. First, although all measures had been tagged “multidimensional,” unit-weighted total scores overwhelmingly reflected variance due to a single latent variable. Second, unit-weighted subscale scores often have ambiguous interpretations because their variance mostly reflects the general, not the specific, trait. Finally, we review the implications of our evaluations and consider the limits of inferences drawn from a bifactor modeling approach.
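Two of the listed indices are direct functions of a standardized bifactor loading matrix. The sketch below uses the standard formulas for explained common variance (the general factor's share of common variance) and omega-hierarchical (the general factor's share of unit-weighted total-score variance); it assumes each item loads on the general factor and exactly one specific factor, and the loadings are invented:

```python
def bifactor_indices(general, specifics):
    """ECV and omega-hierarchical from standardized bifactor loadings.

    `general`: general-factor loading per item, in item order.
    `specifics`: specific-factor loadings grouped by factor; flattened,
    they must line up item-for-item with `general`.
    """
    flat = [l for grp in specifics for l in grp]
    assert len(flat) == len(general)
    sum_g_sq = sum(l * l for l in general)
    sum_s_sq = sum(l * l for l in flat)
    # ECV: proportion of common variance attributable to the general factor.
    ecv = sum_g_sq / (sum_g_sq + sum_s_sq)
    # Omega-hierarchical: (sum of general loadings)^2 over model-implied
    # total-score variance (general + specific + unique parts).
    unique = sum(1 - g * g - s * s for g, s in zip(general, flat))
    total = sum(general) ** 2 + sum(sum(grp) ** 2 for grp in specifics) + unique
    return ecv, sum(general) ** 2 / total

ecv, omega_h = bifactor_indices([0.7] * 6, [[0.3] * 3, [0.3] * 3])
print(round(ecv, 3), round(omega_h, 3))  # 0.845 0.81
```

High values of both indices are the pattern behind the first conclusion above: total scores dominated by a single latent variable even in nominally multidimensional measures.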

18.
On the law of Regular Minimality: Reply to Ennis
Ennis's critique touches on issues important for psychophysics, but the points he makes against the hypothesis that Regular Minimality is a basic property of sensory discrimination are not tenable. (1) Stimulus variability means that one and the same apparent stimulus value (as measured by experimenter) is a probabilistic mixture of true stimulus values. The notion of a true stimulus value is a logical necessity: variability and distribution presuppose the values that vary and are distributed (even if these values are represented by processes or sets rather than real numbers). Regular Minimality is formulated for true stimulus values. That a mixture of probabilities satisfying Regular Minimality does not satisfy this principle (unless it also satisfies Constant Self-Similarity) is an immediate consequence of my 2003 analysis. Stimulus variability can be controlled or estimated: the cases when observed violations of Regular Minimality can be accounted for by stimulus variability corroborate rather than falsify this principle. In this respect stimulus variability is no different from fatigue, perceptual learning, and other factors creating mixtures of discrimination probabilities in an experiment. (2) Could it be that well-behaved Thurstonian-type models are true models of discrimination but their parameters are so adjusted that the violations of Regular Minimality they lead to (due to my 2003 theorems) are too small to be detected experimentally? This is possible, but this amounts to admitting that Regular Minimality is a law after all, albeit only approximate: nothing in the logic of the Thurstonian-type representations per se prevents them from violating Regular Minimality grossly rather than slightly. Moreover, even very small violations predicted by a given class of Thurstonian-type models can be tested in specially designed experiments (perhaps under additional, independently testable assumptions).
The results of one such experiment, in which observers were asked to alternately adjust to each other the values of stimuli in two observation areas, indicate that violations of Regular Minimality, if any, are far below limits of plausible interpretability.

19.
20.
This paper reports on two studies that investigated the relationship between the Big Five personality traits, self-estimates of intelligence (SEI), and scores on two psychometrically validated intelligence tests. In study 1 a total of 100 participants completed the NEO-PI-R, the Wonderlic Personnel Test and the Baddeley Reasoning Test, and estimated their own intelligence on a normal distribution curve. Multiple regression showed that psychometric intelligence was predicted by Conscientiousness and SEI, while SEI was predicted by gender, Neuroticism (notably anxiety) and Agreeableness (notably modesty). Personality was a better predictor of SEI than of psychometric intelligence itself. Study 2 attempted to explore the relationship between SEI and psychometric intelligence. A total of 130 participants completed the NEO-PI-R, the Baddeley Reasoning Test, and the S & M Spatial intelligence test. In addition, SEI and participants' conceptions of intelligence were also examined. In combination with gender and previous IQ test experience, these variables were found to predict about 11% of the variance in SEI. SEI was the only significant predictor of psychometrically measured intelligence. Inconsistencies between results of the two studies, theoretical and applied implications, and limitations of this work are discussed.
