18 results found (search time: 15 ms)
3.
This study evaluated the validity and reliability of the Perceived Ethnic Discrimination Questionnaire-Community Version (PEDQ-CV) Lifetime Exposure scale in a multiethnic Asian sample (N = 509). The 34-item scale measures perceived interpersonal racial/ethnic discrimination and includes four subscales assessing different types of discrimination: Social Exclusion, Stigmatization, Discrimination at Work/School, and Threat/Aggression. The Lifetime Exposure scale demonstrated excellent reliability across the full group and in all major subgroups. Subscales displayed good reliability across the full group and moderate-to-good reliability in each subgroup. The Lifetime Exposure scale was significantly correlated with the depression and anxiety subscales of the SCL-90-R, providing preliminary evidence of construct validity. The data suggest the Lifetime Exposure scale, previously validated in Black and Latino adults, is also appropriate for use with Asian samples, and can be used to examine both within-group and between-groups differences in discrimination.
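Scale reliability of the kind summarized above is conventionally quantified with Cronbach's alpha, α = k/(k−1) · (1 − Σ item variances / variance of total scores) for k items. A minimal illustrative sketch (the function name and toy data are assumptions, not drawn from the study):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(col) for col in items)
    # per-respondent total score across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))
```

Perfectly consistent items yield a coefficient of 1; values above roughly .9 are conventionally labeled "excellent," the range the abstract reports for the full Lifetime Exposure scale.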
4.
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma–Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information‐theoretic formalism.
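The Sharma–Mittal family mentioned above has the closed form SM_{r,t}(P) = [(Σᵢ pᵢʳ)^((1−t)/(1−r)) − 1]/(1 − t), with Shannon (r, t → 1), Rényi (t → 1), and Tsallis (t = r) entropies as limiting cases. A sketch of how the unification works (the function name and limit handling are our own, not the paper's code):

```python
import math

def sharma_mittal(p, r, t, eps=1e-9):
    """Sharma-Mittal entropy of order r and degree t, in nats.

    Special cases: r = t gives Tsallis entropy, t -> 1 gives Renyi
    entropy of order r, and r, t -> 1 gives Shannon entropy.
    """
    probs = [pi for pi in p if pi > 0]
    s = sum(pi ** r for pi in probs)  # sum of p_i^r
    shannon = -sum(pi * math.log(pi) for pi in probs)
    if abs(r - 1.0) < eps and abs(t - 1.0) < eps:
        return shannon                            # Shannon limit
    if abs(t - 1.0) < eps:
        return math.log(s) / (1.0 - r)            # Renyi entropy of order r
    if abs(r - 1.0) < eps:
        # r -> 1 limit: (exp((1 - t) * H_Shannon) - 1) / (1 - t)
        return (math.exp((1.0 - t) * shannon) - 1.0) / (1.0 - t)
    return (s ** ((1.0 - t) / (1.0 - r)) - 1.0) / (1.0 - t)
```

For a uniform distribution over four outcomes, the Shannon and Rényi cases give ln 4, while r = t = 2 gives the Quadratic measure 1 − Σpᵢ² = 0.75.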
5.
Crupi, Vincenzo; Iacona, Andrea. Studia Logica (2022) 110(1): 47–93
Studia Logica - This paper develops a probabilistic analysis of conditionals which hinges on a quantitative measure of evidential support. In order to spell out the interpretation of...
6.
Zhao, J.; Crupi, V.; Tentori, K.; Fitelson, B.; Osherson, D. Cognition (2012) 124(3): 373–378
Bayesian orthodoxy posits a tight relationship between conditional probability and updating. Namely, the probability of an event A after learning B should equal the conditional probability of A given B prior to learning B. We examine whether ordinary judgment conforms to the orthodox view. In three experiments we found substantial differences between the conditional probability of an event A supposing an event B compared to the probability of A after having learned B. Specifically, supposing B appears to have less impact on the credibility of A than learning that B is true.
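The orthodox identity under test can be stated mechanically: given a joint distribution over A and B, the posterior of A after learning B should equal P(A∧B)/P(B). A minimal sketch with illustrative numbers (not taken from the reported experiments):

```python
def condition_on(joint, event):
    """Condition a discrete joint distribution {outcome: prob} on an event (a set of outcomes)."""
    z = sum(p for outcome, p in joint.items() if outcome in event)
    return {outcome: p / z for outcome, p in joint.items() if outcome in event}

# Outcomes are (A, B) truth-value pairs; the numbers are illustrative only.
joint = {(True, True): 0.30, (True, False): 0.20,
         (False, True): 0.10, (False, False): 0.40}
B = {(True, True), (False, True)}
posterior = condition_on(joint, B)
p_a_after_learning_b = posterior[(True, True)]  # 0.30 / 0.40 = 0.75
```

The reported finding is that judged probabilities of A under the mere supposition of B fall systematically short of this conditioned value.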
7.
The most prominent research program in inductive logic – here just labeled The Program, for simplicity – relies on probability theory as its main building block and aims at a proper generalization of deductive-logical relations by a theory of partial entailment. We prove a representation theorem by which a class of ordinally equivalent measures of inductive support or confirmation is singled out as providing a uniquely coherent way to work out these two major sources of inspiration of The Program.
8.
Theory change is a central concern in contemporary epistemology and philosophy of science. In this paper, we investigate the relationships between two ongoing research programs providing formal treatments of theory change: the (post-Popperian) approach to verisimilitude and the AGM theory of belief change. We show that appropriately construed accounts emerging from those two lines of epistemological research do yield convergences relative to a specified kind of theories, here labeled “conjunctive”. In this domain, a set of plausible conditions are identified which demonstrably capture the verisimilitudinarian effectiveness of AGM belief change, i.e., its effectiveness in tracking truth approximation. We conclude by indicating some further developments and open issues arising from our results.
9.
Although evidence in real life is often uncertain, the psychology of inductive reasoning has, so far, been confined to certain evidence. The present study extends previous research by investigating whether people properly estimate the impact of uncertain evidence on a given hypothesis. Two experiments are reported, in which the uncertainty of evidence is explicitly (by means of numerical values) versus implicitly (by means of ambiguous pictures) manipulated. The results show that people’s judgments are highly correlated with those predicted by normatively sound Bayesian measures of impact. This sensitivity to the degree of evidential uncertainty supports the centrality of inductive reasoning in cognition and opens the path to the study of this issue in more naturalistic settings.
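A standard normative treatment of uncertain evidence is Jeffrey conditionalization, which mixes the two ordinary posteriors by the updated credence q in the evidence; the abstract does not name the specific Bayesian impact measures used, so this is a sketch of the general idea only:

```python
def jeffrey_update(p_h_given_e, p_h_given_not_e, q):
    """P_new(H) = P(H|E) * q + P(H|not-E) * (1 - q), where q = P_new(E).

    q = 1 recovers ordinary (certain-evidence) conditioning; q < 1 dampens
    the evidence's impact in proportion to its uncertainty.
    """
    return p_h_given_e * q + p_h_given_not_e * (1.0 - q)
```

For example, with P(H|E) = 0.9 and P(H|¬E) = 0.2, fully certain evidence yields 0.9, while evidence believed only to degree 0.5 yields 0.55.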
10.
Bayesian confirmation measures give numerical expression to the impact of evidence E on a hypothesis H. All measures proposed to date are formal—that is, functions of the probabilities Pr(E∧H), Pr(E∧¬H), Pr(¬E∧H), Pr(¬E∧¬H), and nothing more. Experiments reported in Tentori, Crupi, and Osherson (2007) suggest that human confirmation judgment is not formal, but this earlier work leaves open the possibility that formality holds relative to a given semantic domain. The present study discredits even this weaker version of formality by demonstrating the role in confirmation judgments of a probability distribution defined over the possible values of Pr(E∧H), Pr(E∧¬H), Pr(¬E∧H), and Pr(¬E∧¬H)—that is, a second-order probability. Specifically, when for each of the latter quantities a pointwise value is fixed with a maximal second-order probability, evidence impact is rated in accordance with formal and normatively credible confirmation measures; otherwise evidence impact is systematically judged as more moderate.
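A formal measure in the sense defined above is any function of the four joint probabilities alone. One common instance, chosen here purely for illustration (the abstract does not specify which measures the study tested), is the difference measure d(H, E) = P(H|E) − P(H):

```python
def difference_measure(p_eh, p_e_noth, p_note_h, p_note_noth):
    """d(H, E) = P(H|E) - P(H), from the four joint probabilities
    Pr(E&H), Pr(E&~H), Pr(~E&H), Pr(~E&~H).

    The fourth argument completes the formality schema but is redundant
    here, since the four probabilities sum to 1.
    """
    p_e = p_eh + p_e_noth      # marginal P(E)
    p_h = p_eh + p_note_h      # marginal P(H)
    return p_eh / p_e - p_h
```

The paper's point is that ratings track such measures only when the four probabilities are themselves fixed with maximal second-order probability; otherwise judged impact is more moderate than any formal measure predicts.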

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号