20 similar references found.
1.
《Quarterly journal of experimental psychology (2006)》2013,66(12):2388-2408
Contingency is an important cue to causation. Research shows that people unequally weight the cells of a 2 × 2 contingency table as follows: cause-present/effect-present (A) > cause-present/effect-absent (B) > cause-absent/effect-present (C) > cause-absent/effect-absent (D). Although some models of causal judgement can accommodate that fact, most of them assume that the weighting of information is invariant as a function of whether one is assessing a hypothesized generative versus preventive relationship. An experiment was conducted that tested the hypothesis-independence assumption against the predictions of a novel weighted-positive-test-strategy account, which predicts hypothesis dependence in cell weighting. Supporting that account, judgements of hypothesized generative causes showed the standard A > B > C > D inequality, but judgements of hypothesized preventive causes showed the predicted B > A > D > C inequality. The findings reveal that cell weighting in causal judgement is both unequal and hypothesis dependent.
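For concreteness, here is a minimal sketch of how unequal cell weights could enter a ΔP-style contingency judgment; the weight values and the weighted-ΔP form are illustrative assumptions, not the model tested in the paper:

    # Illustrative weighted contingency judgment; the default weights follow the
    # A > B > C > D ordering for a generative hypothesis (values invented).
    def weighted_delta_p(a, b, c, d, w=(1.0, 0.8, 0.6, 0.4)):
        wa, wb, wc, wd = w
        p_e_cause = (wa * a) / (wa * a + wb * b)       # weighted P(effect | cause present)
        p_e_no_cause = (wc * c) / (wc * c + wd * d)    # weighted P(effect | cause absent)
        return p_e_cause - p_e_no_cause

    # For a preventive hypothesis, the account predicts reordered weights: B > A > D > C.
    print(weighted_delta_p(15, 5, 5, 15))              # cell counts A, B, C, D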
2.
Low numerical probabilities tend to be directionally ambiguous, meaning they can be interpreted either positively, suggesting the occurrence of the target event, or negatively, suggesting its non-occurrence. High numerical probabilities, however, are typically interpreted positively. We argue that the greater directional ambiguity of low numerical probabilities may make them more susceptible than high probabilities to contextual influences. Results from five experiments supported this premise, with perceived base rate affecting the interpretation of an event’s numerical posterior probability more when it was low than high. The effect is consistent with a confirmatory hypothesis testing process, with the relevant perceived base rate suggesting the directional hypothesis which people then test in a confirmatory manner.
3.
Statistical consistency and hypothesis testing for nonmetric multidimensional scaling
Henry E. Brady 《Psychometrika》1985,50(4):509-537
The properties of nonmetric multidimensional scaling (NMDS) are explored by specifying statistical models, proving statistical consistency, and developing hypothesis testing procedures. Statistical models with errors in the dependent and independent variables are described for quantitative and qualitative data. For these models, statistical consistency often depends crucially upon how error enters the model and how data are collected and summarized (e.g., by means, medians, or rank statistics). A maximum likelihood estimator for NMDS is developed, and its relationship to the standard Shepard-Kruskal estimation method is described. This maximum likelihood framework is used to develop a method for testing the overall fit of the model.
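For reference, the Shepard-Kruskal method mentioned above fits a configuration by minimizing Kruskal's Stress-1 over monotone transformations of the data (standard textbook formulation, not quoted from the paper):

    \text{Stress-1} = \sqrt{ \frac{\sum_{i<j} \left( d_{ij} - \hat{d}_{ij} \right)^2}{\sum_{i<j} d_{ij}^2} }

where d_{ij} are the distances in the fitted configuration and \hat{d}_{ij} are the disparities obtained by monotone regression on the observed dissimilarities.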
4.
IGOR KNEZ 《Scandinavian journal of psychology》1992,33(1):56-67
In a recent paper, Knez (1991) showed an interaction of data and hypotheses in probabilistic inference tasks. The results revealed two previously unobtained significant main effects on subjects' hypothesis sampling: the effect of different forms of data presentation and the effect of subjects' exercise of cognitive control over their hypothesis pool throughout the series of trials. The present paper follows up on these results by subjecting the subjects' hypothesis testing in Knez (1991) to analysis, to see whether the effects mentioned above influenced hypothesis testing as they did hypothesis sampling. The results were consistent with Knez (1991): they underscore the interaction of data and hypotheses in probabilistic inference tasks, as well as the role of subjects' cognitive control over their hypothesis pool, in both hypothesis sampling and hypothesis testing.
5.
Evaluating traditional hypothesis testing as a data-analysis tool for psychological experiments involves two criteria: first, whether it is legitimate, and second, whether it is useful. Traditional hypothesis testing, situated within the frequentist statistical framework, is in fact logically legitimate; in terms of usefulness, however, it suffers from two defects: the alternative hypothesis is unfalsifiable, and only qualitative conclusions can be drawn. Confidence intervals can directly remedy and compensate for these defects. Clarifying the errors in the use of traditional hypothesis testing has also led researchers to attend to the PSI problem, shifting the design and data analysis of psychological experiments from a focus on populations to a focus on individuals.
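A minimal sketch of the remedy the abstract describes: a confidence interval delivers the test's qualitative verdict plus a quantitative estimate of effect magnitude (the one-sample design and all numbers are invented for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(0.4, 1.0, 40)                  # simulated sample, true mean 0.4
    t, p = stats.ttest_1samp(x, 0.0)              # traditional test: qualitative verdict only
    lo, hi = stats.t.interval(0.95, len(x) - 1,   # CI: adds magnitude and precision
                              loc=x.mean(), scale=stats.sem(x))
    print(f"p = {p:.3f}; 95% CI for the mean = [{lo:.2f}, {hi:.2f}]")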
6.
Gaillard C 《The Journal of analytical psychology》2012,57(3):299-334
This essay on The Red Book seeks to underscore a characteristic specific to Jung's approach to psychoanalysis. In this book, and more generally, in all of his writings, Jung's thinking is based on his personal experience of the unconscious, in which he leaves himself open to progressive encounters. Some of them, in the years 1913-14 and 1929-30, particularly his meeting with the giant Izdubar, were quite threatening. As a result, he forged an original way of thinking that is qualified here as 'imaging' and 'emergent'. The Red Book served as the first vessel for theories Jung would later express. His way of thinking, with its failures and semi-successes, all of which are always temporary, of course, is compared to the art of the potter. The author shows the kinship between the formation of the main Jungian concepts and the teachings of the French poet, professor, and art critic Yves Bonnefoy. He also considers certain recurrent formal themes in the work of contemporary German painter and sculptor Anselm Kiefer. Lastly, this epistemological study, constantly aware of the demands of Jungian clinical practice, demonstrates the continuity in Jung's work, from The Red Book to Answer to Job, where Jung ultimately elaborated a conception of history that defines our ethical position today.
7.
Although many common uses of p-values for making statistical inferences in contemporary scientific research have been shown to be invalid, no one, to our knowledge, has adequately assessed the main original justification for their use, which is that they can help to control the Type I error rate (Neyman & Pearson, 1928, 1933). We address this issue head-on by asking a specific question: Across what domain, specifically, do we wish to control the Type I error rate? For example, do we wish to control it across all of science, across all of a specific discipline such as psychology, across a researcher's active lifetime, across a substantive research area, across an experiment, or across a set of hypotheses? In attempting to answer these questions, we show that each one leads to troubling dilemmas wherein controlling the Type I error rate turns out to be inconsistent with other scientific desiderata. This inconsistency implies that we must make a choice. In our view, the other scientific desiderata are much more valuable than controlling the Type I error rate and so it is the latter, rather than the former, with which we must dispense. But by doing so—that is, by eliminating the Type I error justification for computing and using p-values—there is even less reason to believe that p is useful for validly rejecting null hypotheses than previous critics have suggested.
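The arithmetic behind the domain question is simple: under independence, the familywise Type I error rate grows quickly with the number m of true-null tests in whatever domain one chooses (a standard calculation, not taken from the paper):

    alpha = 0.05
    for m in (1, 5, 20, 100):   # tests in the chosen domain (hypothesis set, paper, career...)
        fwer = 1 - (1 - alpha) ** m
        print(f"m = {m:>3}: P(at least one Type I error) = {fwer:.3f}")

With alpha = .05, the rate is already about .64 at m = 20, which is why the choice of domain carries so much weight.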
8.
William O'Donohue Jeff Szymanski 《Journal of Rational-Emotive & Cognitive-Behavior Therapy》1993,11(4):207-222
The effectiveness of two hypothesized change mechanisms in cognitive therapy was investigated: logical analysis and empirical hypothesis testing. Thirty-eight spider phobics, as determined by performance on a behavioral avoidance test, were randomly assigned to either one of these two conditions or to a no-treatment control condition. Subjects participated in three group sessions. Outcome phobia questionnaire data suggested that both mechanisms produced desirable changes in a short period of time, with stronger evidence that logical analysis was superior to the control. Outcomes from the behavior avoidance test and self-efficacy ratings failed to reach statistical significance, but the trends were in the direction of positive change. Results are discussed in terms of the tripartite response desynchrony hypothesis. Suggestions for future process research in cognitive therapy are provided.
William O'Donohue, Ph.D., is an assistant professor of psychology at Northern Illinois University. Jeff Szymanski is a graduate student in clinical psychology at Northern Illinois University.
The authors would like to thank Christine Casselles, Melissa McKelvie, Thomas M. Brown, Jill C. Rudman, Bonnie Schrieber, Amy Ray, Anne Valle, Lisa Herold, Jacqueline Ryan, Heather Barta, and Angela Leek for their assistance in this project. Moreover, the authors are grateful to Sol Feldman and Jane Fisher for their comments on an earlier version of this paper.
9.
Using the recognition-without-cued-recall paradigm, this study explored familiarity-based recognition of the semantic features of Chinese characters, examining how repeated study and repeated testing affect the recognition-without-cued-recall (RWCR) effect for those features. Experiment 1 used an immediate test and Experiment 2 a delayed test. The results showed that: (1) the RWCR effect for the semantic features of Chinese characters was present in both immediate and delayed tests; (2) on the immediate test, repeated study significantly affected familiarity whereas repeated testing did not, and both repeated study and repeated testing improved recollection, with no difference between them; (3) on the delayed test, familiarity ratings declined in both the repeated-study and repeated-testing groups, but faster in the former, and the forgetting rate for recollection was lower in the repeated-testing group and higher in the repeated-study group. These results indicate a stable RWCR effect for the semantic features of Chinese characters, with repeated study mainly affecting familiarity and repeated testing mainly affecting recollection, providing further evidence for the dual-process theory of recognition memory.
10.
This research used two behavioral experiments to examine differences in decision performance among deciding for oneself, deciding for others, and predicting others' decisions under unconscious versus conscious modes of thought in complex situations. Experiment 1 found that, under unconscious thought in complex situations, performance when deciding for oneself and deciding for others was significantly better than when predicting others' decisions, with no significant difference between deciding for oneself and deciding for others. Experiment 2 found that, under conscious thought in complex situations, performance when deciding for others was significantly higher than when deciding for oneself or predicting others' decisions in the stranger condition, with no significant difference between the latter two; in the friend condition, performance when deciding for others and predicting others' decisions was significantly better than when deciding for oneself, with no significant difference between the former two. The results support the Perspective-Distance Effect Hypothesis (PDEH) proposed in this paper.
11.
In a previous experiment, the authors demonstrated that kindergarten and first-grade children can be trained to test hypotheses sequentially within the context of a discrimination learning task. The present experiment is concerned with delineating various aspects of the pretraining that contribute to the improved hypothesis-testing strategies of kindergarten children (mean CA = 71.6 months). It was found that children who have learned to anticipate an invariant cue-reward relation in such tasks manifest improved hypothesis-testing behavior, as well as improved discrimination performance, whereas children who have been trained to identify and name the various stimulus components of the discriminanda do not perform better than those without such training. It was also found that children who have had practice in shifting from an irrelevant to a relevant dimension perform better than those who have not had such experience. Moreover, children who have been given explicit instruction and training in the use of win-stay and lose-shift rules, as well as in the use of valid hypotheses, manifest strategies superior to those without such training. Finally, extensive pretraining over two sessions, administered on separate days, resulted in a marked reduction in the proportion of children who were dimensionally fixated while solving discrimination problems with two genuine dimensions.
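A toy sketch of the win-stay/lose-shift rules the training targeted; feedback is simplified here to whether the hypothesized dimension is the relevant one, and the dimension names are invented:

    import random

    def wsls_trial(hypotheses, current, relevant):
        # Simplified feedback: a "win" iff the hypothesized dimension is the relevant one.
        if current == relevant:
            return current, hypotheses                   # win-stay: keep the hypothesis
        remaining = [h for h in hypotheses if h != current]
        return random.choice(remaining), remaining       # lose-shift: resample a valid one

    hyps = ["color", "shape", "size", "position"]
    current = random.choice(hyps)
    for _ in range(6):
        current, hyps = wsls_trial(hyps, current, relevant="shape")
    print("settled on:", current)                        # converges on the relevant dimension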
12.
The two‐sample Student t test of location was performed on random samples of scores and on rank‐transformed scores from normal and non‐normal population distributions with unequal variances. The same test also was performed on scores that had been explicitly selected to have nearly equal sample variances. The desired homogeneity of variance was brought about by repeatedly rejecting pairs of samples having a ratio of standard deviations that exceeded a predetermined cut‐off value of 1.1, 1.2, or 1.3, while retaining pairs with ratios less than the cut‐off value. Despite this forced conformity with the assumption of equal variances, the tests on the selected samples were no more robust than tests on unselected samples, and in most cases substantially less robust. Under conditions where sample sizes were unequal, so that Type I error rates were inflated and power curves were atypical, the selection procedure produced still greater inflation and distortion of the power curves.
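A simulation sketch of the selection procedure described above (population SDs, sample sizes, cut-off, and replication count are illustrative; pairing the smaller sample with the larger population variance is the configuration known to inflate Type I error rates):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    cutoff, reps, sig = 1.3, 2000, 0
    for _ in range(reps):
        while True:                                    # reject pairs until variances look equal
            x = rng.normal(0, 1.25, 10)                # smaller n, larger population SD
            y = rng.normal(0, 1.00, 30)
            r = np.std(x, ddof=1) / np.std(y, ddof=1)
            if max(r, 1 / r) < cutoff:                 # retain only near-equal sample SDs
                break
        if stats.ttest_ind(x, y).pvalue < 0.05:        # pooled-variance Student t test
            sig += 1
    print("empirical Type I error rate:", sig / reps)  # nominal rate is 0.05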
13.
14.
In DeCaro et al. [DeCaro, M. S., Thomas, R. D., & Beilock, S. L. (2008). Individual differences in category learning: Sometimes less working memory capacity is better than more. Cognition, 107, 284-294] we demonstrated that sometimes less working memory (WM) has its advantages. The lower individuals’ WM, the faster they achieved success on an information-integration (II) category learning task adopted from Waldron and Ashby [Waldron, E. M., & Ashby, F. G. (2001). The effects of concurrent task interference on category learning: Evidence for multiple category learning systems. Psychonomic Bulletin & Review, 8, 168-176]. We attributed this success to the inability of lower WM individuals to employ explicit learning strategies heavily reliant on executive control. This in turn, we hypothesized, might push lower WM individuals to readily adopt procedural-based strategies thought to lead to success on the II task. Tharp and Pickering [Tharp, I. J., & Pickering, A. D. (2009). A note on DeCaro, Thomas, and Beilock (2008): Further data demonstrate complexities in the assessment of information-integration category learning. Cognition] recently questioned whether the II category learning task DeCaro et al. used really reflects procedural learning. In an effort to investigate Tharp and Pickering’s assertions with respect to individual differences in WM, we replicate and extend our previous work, in part by modeling participants’ response strategies during learning. We once again reveal that lower WM individuals demonstrate earlier II learning than their higher WM counterparts. However, we also show that low WM individuals’ initial success is not because of procedural-based responding. Instead, individuals lower in WM capacity perseverate in using simple rule-based strategies that circumvent heavy demands on WM while producing above-chance accuracy.
15.
16.
Dack and Astington (Journal of Experimental Child Psychology, 110, 2011, 94–114) attempted to replicate the deontic reasoning advantage among preschoolers reported by Cummins (Memory & Cognition, 24, 1996, 823–829) and by Harris and Nuñez (Child Development, 67, 1996, 1572–1591). Dack and Astington argued that the apparent deontic advantage reported by these studies was in fact an artifact due to a methodological confound, namely, inclusion of an authority in the deontic condition only. Removing this confound attenuated the effect in young children but had no effect on the reasoning of 7-year-olds and adults. Thus, removing reference to authority “explains away” young children’s apparent precocity at this type of reasoning. But this explanation rests on (a) a misunderstanding of norms as targets of deontic reasoning and (b) conclusions based on a sample size that was too small to detect the effect in young children.
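On point (b), a quick power calculation shows how easily a real effect is missed at small n (the 75% vs. 50% success rates and the group sizes are invented, purely to illustrate the argument):

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    es = proportion_effectsize(0.75, 0.50)   # Cohen's h for 75% vs. 50% correct
    for n in (15, 40, 100):
        power = NormalIndPower().power(es, nobs1=n, alpha=0.05, ratio=1.0)
        print(f"n per group = {n}: power = {power:.2f}")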
17.
George Karabatsos 《Journal of mathematical psychology》2005,49(1):51-69
The multinomial (Dirichlet) model, derived from de Finetti's concept of exchangeability, is proposed as a general Bayesian framework to test axioms on data, in particular, deterministic axioms characterizing theories of choice or measurement. For testing, the proposed framework does not require a deterministic axiom to be cast in a probabilistic form (e.g., casting deterministic transitivity as weak stochastic transitivity). The generality of this framework is demonstrated through empirical tests of 16 different axioms, including transitivity, consequence monotonicity, segregation, additivity of joint receipt, stochastic dominance, coalescing, restricted branch independence, double cancellation, triple cancellation, and the Thomsen condition. The model generalizes many previously proposed methods of axiom testing under measurement error, is analytically tractable, and provides a Bayesian framework for the random relation approach to probabilistic measurement (J. Math. Psychol. 40 (1996) 219). A hierarchical and nonparametric generalization of the model is discussed.
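The underlying computation, posterior mass on a constrained region of the parameter space, can be sketched with a simplified stand-in. The paper's framework tests deterministic axioms directly, but for illustration here the region is weak stochastic transitivity over three binary choice pairs, with Beta posteriors and invented data:

    import numpy as np

    rng = np.random.default_rng(0)
    k = np.array([14, 13, 9])                # row-option choices for pairs (A,B), (B,C), (A,C)
    n = np.array([20, 20, 20])
    draws = rng.beta(k + 1, n - k + 1, size=(100_000, 3))    # Beta(1,1)-prior posteriors
    p_ab, p_bc, p_ac = draws.T
    cycle = ((p_ab >= .5) & (p_bc >= .5) & (p_ac < .5)) | \
            ((p_ab < .5) & (p_bc < .5) & (p_ac >= .5))       # the two intransitive patterns
    print("posterior P(weak stochastic transitivity):", 1 - cycle.mean())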
18.
In a 2–4–6-like reasoning task, 69 subjects tested hypotheses following exposure to a low-expertise source proposing an alternative hypothesis. Subjects compared self- and source's competence either independently or interdependently. Results show that interdependence leads subjects to assert self-validity and the source's invalidity, and to test hypotheses through confirmation. Independence produces a conflict between incompetences, i.e. doubt concerning self- and source's validity, leading to disconfirmatory testing.
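For readers unfamiliar with the paradigm, here is a toy version of the contrast between confirmatory and disconfirmatory tests (both rules are invented for the example):

    def true_rule(t):                  # hidden rule: any ascending triple
        return list(t) == sorted(t)

    def hypothesis(t):                 # tester's hypothesis: "increases by exactly 2"
        return t[1] - t[0] == 2 and t[2] - t[1] == 2

    for triple in [(3, 5, 7), (1, 2, 10)]:   # confirmatory test, then disconfirmatory test
        print(triple, "fits hypothesis:", hypothesis(triple),
              "| rule accepts:", true_rule(triple))
    # (1, 2, 10) violates the hypothesis yet satisfies the rule, exposing the
    # hypothesis as too narrow; the confirmatory triple could never reveal this.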
19.
We examined whether individual differences in working memory influence the facility with which individuals learn new categories. Participants learned two different types of category structures: rule-based and information-integration. Successful learning of the former category structure is thought to be based on explicit hypothesis testing that relies heavily on working memory. Successful learning of the latter category structure is believed to be driven by procedural learning processes that operate largely outside of conscious control. Consistent with a widespread literature touting the positive benefits of working memory and attentional control, the higher one’s working memory, the fewer trials one took to learn rule-based categories. The opposite occurred for information-integration categories - the lower one’s working memory, the fewer trials one took to learn this category structure. Thus, the positive relation commonly seen between individual differences in working memory and performance can not only be absent, but reversed. As such, a comprehensive understanding of skill learning - and category learning in particular - requires considering the demands of the tasks being performed and the cognitive abilities of the performer.
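The two category structures are easy to picture with generic stand-ins (these stimuli are invented, not the study's materials): a rule-based structure is separable on one dimension, whereas an information-integration structure requires combining dimensions in a way no simple verbal rule captures:

    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 1, size=(200, 2))              # two stimulus dimensions
    rb_label = (x[:, 0] > 0.5).astype(int)            # rule-based: one-dimension criterion
    ii_label = (x[:, 0] + x[:, 1] > 1.0).astype(int)  # information-integration: diagonal bound
    print(rb_label[:10], ii_label[:10])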
20.
Gregory R. Hancock 《Psychometrika》2001,66(3):373-388
While effect size estimates, post hoc power estimates, and a priori sample size determination are becoming a routine part of univariate analyses involving measured variables (e.g., ANOVA), such measures and methods have not been articulated for analyses involving latent means. The current article presents standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct. These measures are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability.
I wish to convey my appreciation to the reviewers and Associate Editor, whose suggestions extended and strengthened the article's content immensely, and to Ralph Mueller of The George Washington University for enhancing the clarity of its presentation.
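In spirit, the standardized measure is a latent analogue of Cohen's d. In generic notation (an illustration of the idea, not necessarily the article's own symbols):

    \delta = \frac{\kappa_1 - \kappa_2}{\sqrt{\phi}}

where \kappa_1 and \kappa_2 are the two groups' means on the latent construct and \phi is its common variance, so that the difference is expressed in latent standard-deviation units.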