Similar literature: 20 similar documents found
1.
A “no ethics” principle has long been prevalent in science and has demotivated deliberation on scientific ethics. This paper argues the following: (1) An understanding of a scientific “ethos” based on actual “value preferences” and “value repugnances” prevalent in the scientific community permits and demands critical accounts of the “no ethics” principle in science. (2) The roots of this principle may be traced to a repugnance of human dignity, which was instilled at a historical breaking point in the interrelation between science and ethics. This breaking point involved granting science the exclusive mandate to pass judgment on the life worth living. (3) By contrast, respect for human dignity, in its Kantian definition as “the absolute inner worth of being human,” should be adopted as the basis to ground science ethics. (4) The pathway from this foundation to the articulation of an ethical duty specific to scientific practice, i.e., respect for objective truth, is charted by Karl Popper’s discussion of the ethical principles that form the basis of science. This also permits an integrated account of the “external” and “internal” ethical problems in science. (5) Principles of the respect for human dignity and the respect for objective truth are also safeguards of epistemic integrity. Plain defiance of human dignity by genetic determinism has compromised integrity of claims to knowledge in behavioral genetics and other behavioral sciences. Disregard of the ethical principles that form the basis of science threatens epistemic integrity.

2.
One of the most popular paradigms for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., “If there is an A on one side of the card, there is always a 2 on the other side”). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article.
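A minimal sketch (not the authors' R or WinBUGS code) of the maximum-likelihood idea under the independence assumption stated above: each card is selected independently with some probability, and the likelihood of aggregate selection counts is a product of binomials. The counts below are invented, and the selection probabilities are treated as free parameters rather than derived from expected information gain as in the information gain model.

import numpy as np
from scipy.optimize import minimize

# Hypothetical aggregate data: how many of n_total participants selected each
# of the four cards (p, not-p, q, not-q). Values are invented for illustration.
n_total = 100
selections = np.array([89, 16, 62, 25])

def neg_log_likelihood(logit_theta):
    theta = 1.0 / (1.0 + np.exp(-logit_theta))  # per-card selection probabilities
    ll = selections * np.log(theta) + (n_total - selections) * np.log(1.0 - theta)
    return -ll.sum()  # independent binomial likelihoods (constants dropped)

result = minimize(neg_log_likelihood, x0=np.zeros(4), method="BFGS")
theta_hat = 1.0 / (1.0 + np.exp(-result.x))
print("Estimated selection probabilities:", np.round(theta_hat, 3))
# For this saturated version the MLE is just the observed proportions; the same
# optimization scaffolding applies when the probabilities are constrained to
# follow a substantive model such as the information gain model.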

3.
This article is about the epistemic basing relation, which is the relation that obtains between beliefs and the reasons for which they are held. We need an adequate account of the basing relation if we want to have a satisfactory account of doxastic justification, which we should want to have. To that end, this article aims to achieve two goals. The first is to show that a plausible account of the basing relation must invoke counterfactual concepts. The second is to set out two related analyses of the basing relation, each of which seems quite plausible.

4.
Jeremy Gwiazda made two criticisms of my formulation, in terms of Bayes’s theorem, of my probabilistic argument for the existence of God. The first criticism depends on his assumption that I claim that the intrinsic probabilities of all propositions depend almost entirely on their simplicity; however, my claim is that this holds only insofar as those propositions are explanatory hypotheses. The second criticism depends on the claim that the intrinsic probabilities of exclusive and exhaustive explanatory hypotheses of a phenomenon must sum to 1; however, it is only those probabilities plus the intrinsic probability of the non-occurrence of the phenomenon that must sum to 1.

5.
This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

6.
Exemplar, prototype, and connectionist models typically assume that events constitute the basic unit of learning and representation in categorization. In these models, each learning event updates a statistical representation of a category independently of other learning events. An implication is that events involving the same individual affect learning independently and are not integrated into a single structure that represents the individual in an internal model of the world. A series of experiments demonstrates that human subjects track individuals across events, establish representations of them, and use these representations in categorization. These findings are consistent with “representationalism,” the view that an internal model of the world constitutes a physical level of representation in the brain, and that the brain does not simply capture the statistical properties of events in an undifferentiated dynamical system. Although categorization is an inherently statistical process that produces generalization, pattern completion, frequency effects, and adaptive learning, it is also an inherently representational process that establishes an internal model of the world. As a result, representational structures evolve in memory to track the histories of individuals, accumulate information about them, and simulate them in events.

7.
Several recent works have tackled the estimation issue for the unidimensional four-parameter logistic model (4PLM). Despite these efforts, the issue remains a challenge for the multidimensional 4PLM (M4PLM). Fu et al. (2021) proposed a Gibbs sampler for the M4PLM, but it is time-consuming. In this paper, a mixture-modelling-based Bayesian MH-RM (MM-MH-RM) algorithm is proposed for the M4PLM to obtain the maximum a posteriori (MAP) estimates. In a comparison of the MM-MH-RM algorithm to the original MH-RM algorithm, two simulation studies and an empirical example demonstrated that the MM-MH-RM algorithm possessed the benefits of the mixture-modelling approach and could produce more robust estimates with guaranteed convergence rates and fast computation. The MATLAB codes for the MM-MH-RM algorithm are available in the online appendix.
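For reference, one common parameterization of the multidimensional four-parameter logistic model (notation varies, and the exact form used by Fu et al. (2021) or in this article may differ) is

\[
P(X_{ij}=1 \mid \boldsymbol{\theta}_i)
= \gamma_j + (\delta_j - \gamma_j)\,
\frac{\exp(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)}
     {1+\exp(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)},
\]

where \(\gamma_j\) is the lower asymptote (guessing), \(\delta_j < 1\) the upper asymptote (one minus the slipping probability), \(\mathbf{a}_j\) the vector of discriminations, and \(d_j\) the intercept of item \(j\); the unidimensional 4PLM is recovered when \(\boldsymbol{\theta}_i\) is a scalar.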

8.
Mark Sargent, Erkenntnis, 2009, 70(2): 237-252
This essay answers the “Bayesian Challenge,” which is an argument offered by Bayesians that concludes that belief is not relevant to rational action. Patrick Maher and Mark Kaplan argued that this is so because there is no satisfactory way of making sense of how it would matter. The two ways considered so far, acting as if a belief is true and acting as if a belief has a probability over a threshold, do not work. Contrary to Maher and Kaplan, Keith Frankish argued that there is a way to make sense of how belief matters by introducing a dual process theory of mind in which decisions are made at the conscious level using premising policies. I argue that Bayesian decision theory alone shows that it is sometimes rational to base decisions on beliefs; we do not need a dual process theory of mind to solve the Bayesian Challenge. This point is made clearer when we consider decision levels: acting as if a belief is true is sometimes rational at higher decision levels.

9.
The article presents a Bayesian model of causal learning that incorporates generic priors--systematic assumptions about abstract properties of a system of cause-effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes--causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
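For context, the causal generating function referred to here is usually written as a noisy-OR for generative causes (notation may differ from the article):

\[
P(e^{+}\mid c) = 1 - (1 - w_0)\,(1 - w_1 c),
\]

where \(w_0\) is the strength of the always-present background cause, \(w_1\) the causal power of the candidate cause, and \(c \in \{0,1\}\) indicates whether the candidate cause is present; the form embodies the assumption that independent causal influences combine like independent probabilistic events.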

10.
We introduce the fundamental tenets of Bayesian inference, which derive from two basic laws of probability theory. We cover the interpretation of probabilities, discrete and continuous versions of Bayes’ rule, parameter estimation, and model comparison. Using seven worked examples, we illustrate these principles and set up some of the technical background for the rest of this special issue of Psychonomic Bulletin & Review. Supplemental material is available via https://osf.io/wskex/.
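As a minimal worked illustration of the discrete form of Bayes’ rule covered in the article (the numbers below are invented; the article's own worked examples are in the linked supplemental material):

# Discrete Bayes' rule: posterior is prior times likelihood, normalized over hypotheses.
# Invented example: a test with 95% sensitivity and 90% specificity for a
# condition with a 2% base rate.
prior = {"condition": 0.02, "no_condition": 0.98}
likelihood_positive = {"condition": 0.95, "no_condition": 0.10}  # P(positive | hypothesis)

unnormalized = {h: prior[h] * likelihood_positive[h] for h in prior}
evidence = sum(unnormalized.values())  # P(positive result)
posterior = {h: unnormalized[h] / evidence for h in unnormalized}

print(posterior)  # P(condition | positive) is roughly 0.16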

11.
We tested a method for solving Bayesian reasoning problems in terms of spatial relations as opposed to mathematical equations. Participants completed Bayesian problems in which they were given a prior probability and two conditional probabilities and were asked to report the posterior odds. After a pretraining phase in which participants completed problems with no instruction or external support, participants watched a video describing a visualization technique that used the length of bars to represent the probabilities provided in the problem. Participants then completed more problems with a chance to implement the technique by clicking interactive bars on the computer screen. Performance improved dramatically from the pretraining phase to the interactive-bar phase. Participants maintained improved performance in transfer phases in which they were required to implement the visualization technique with either pencil-and-paper or no external medium. Accuracy levels for participants using the visualization technique were very similar to participants trained to solve the Bayes theorem equation. The results showed no evidence of learning across problems in the pretraining phase or for control participants who did not receive training, so the improved performance of participants using the visualization method could be uniquely attributed to the method itself. A classroom sample demonstrated that these benefits extend to instructional settings. The results show that people can quickly learn to perform Bayesian reasoning without using mathematical equations. We discuss ways that a spatial solution method can enhance classroom instruction on Bayesian inference and help students apply Bayesian reasoning in everyday settings.
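A worked example of the task structure described above (a prior probability plus two conditional probabilities, with the answer reported as posterior odds); the numbers are invented and the computation is the standard odds form of Bayes' rule, not taken from the article:

# Posterior odds = prior odds * likelihood ratio.
p_h = 0.10               # prior probability of the hypothesis
p_d_given_h = 0.80       # P(data | hypothesis)
p_d_given_not_h = 0.20   # P(data | not hypothesis)

prior_odds = p_h / (1 - p_h)                      # about 0.11
likelihood_ratio = p_d_given_h / p_d_given_not_h  # 4.0
posterior_odds = prior_odds * likelihood_ratio    # about 0.44, i.e. odds of 4:9

print(round(posterior_odds, 3))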

12.
Association models constitute an attractive alternative to the usual log-linear models for modeling the dependence between classification variables. They impose special structure on the underlying association by assigning scores to the levels of each classification variable, which can be fixed or parametric. Under the general row-column (RC) association model, both row and column scores are unknown parameters without any restriction concerning their ordinality. However, when the classification variables are ordinal, order restrictions on the scores arise naturally. Under such restrictions, we adopt an alternative parameterization and draw inferences about the equality of adjacent scores using the Bayesian approach. To achieve that, we construct a reversible jump Markov chain Monte Carlo algorithm for moving across models of different dimensions and accurately estimate the posterior model probabilities, which can be used either for model comparison or for model averaging. The proposed methodology is evaluated through a simulation study and illustrated using actual datasets.
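For reference, the RC association model for an I x J table is commonly written (notation may differ from the article) as

\[
\log m_{ij} = \lambda + \lambda_i^{R} + \lambda_j^{C} + \varphi\, \mu_i \nu_j ,
\]

where \(m_{ij}\) is the expected count in cell \((i,j)\), \(\mu_i\) and \(\nu_j\) are the row and column scores (subject to centering and scaling constraints for identifiability), and \(\varphi\) measures the strength of the association; ordinal classification variables motivate order restrictions such as \(\mu_1 \le \mu_2 \le \dots \le \mu_I\).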

13.
The Minkowski property of psychological space has long been of interest to researchers. A common strategy has been to calculate the stress in multidimensional scaling for many values of the Minkowski exponent and choose the one that results in the lowest stress. However, this strategy is arbitrary in that it depends on the choice of a loss function (the stress). Although a recently proposed Bayesian approach could solve this problem, the method was intended for individual-subject data. It is unknown whether this method is directly applicable to averaged or single data, which are common in psychology and behavioral science. Therefore, we first conducted a simulation study to evaluate the applicability of the method to the averaged-data problem and found that it failed to recover the true Minkowski exponent. We therefore propose a new method that is a simple extension of the existing Euclidean Bayesian multidimensional scaling to the Minkowski metric. Another simulation study revealed that the proposed method could successfully recover the true Minkowski exponent. BUGS codes used in this study are given in the Appendix.
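For reference, the Minkowski distance between points \(i\) and \(j\) in a \(K\)-dimensional configuration is

\[
d_{ij} = \Big( \sum_{k=1}^{K} \lvert x_{ik} - x_{jk} \rvert^{p} \Big)^{1/p},
\]

where \(p\) is the Minkowski exponent being estimated; \(p = 2\) gives the Euclidean metric and \(p = 1\) the city-block metric.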

14.
Progress in science often comes from discovering invariances in relationships among variables; these invariances often correspond to null hypotheses. As is commonly known, it is not possible to state evidence for the null hypothesis in conventional significance testing. Here we highlight a Bayes factor alternative to the conventional t test that will allow researchers to express preference for either the null hypothesis or the alternative. The Bayes factor has a natural and straightforward interpretation, is based on reasonable assumptions, and has better properties than other methods of inference that have been advocated in the psychological literature. To facilitate use of the Bayes factor, we provide an easy-to-use, Web-based program that performs the necessary calculations.
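The sketch below is only a rough BIC-based approximation to a Bayes factor for a one-sample t test (in the spirit of Wagenmakers, 2007); it is not the default Bayes factor or the Web-based program described in the article, which rests on a different prior, and the data are simulated.

import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=0.0, scale=1.0, size=40)  # simulated data with true mean 0

n = len(y)
sse_null = np.sum(y ** 2)              # H0: mean fixed at 0
sse_alt = np.sum((y - y.mean()) ** 2)  # H1: mean estimated freely

# BIC for a Gaussian model (constants common to both models are dropped).
bic_null = n * np.log(sse_null / n) + 1 * np.log(n)  # one free parameter (sigma)
bic_alt = n * np.log(sse_alt / n) + 2 * np.log(n)    # two free parameters (mean, sigma)

bf01 = np.exp((bic_alt - bic_null) / 2.0)  # values above 1 favor the null
print(f"Approximate BF01 = {bf01:.2f}")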

15.
Multilevel covariance structure models have become increasingly popular in the psychometric literature in the past few years to account for population heterogeneity and complex study designs. We develop practical simulation-based procedures for Bayesian inference of multilevel binary factor analysis models. We illustrate how Markov chain Monte Carlo procedures such as Gibbs sampling and Metropolis-Hastings methods can be used to perform Bayesian inference, model checking, and model comparison without the need for multidimensional numerical integration. We illustrate the proposed estimation methods using three simulation studies and an application involving students’ achievement results in different areas of mathematics. The authors thank Ian Westbury, University of Illinois at Urbana-Champaign, for kindly providing the SIMS data for the application.

16.
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam’s window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
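The general model-averaging identity behind both approaches (written here generically; the article's notation and implementation details differ) is

\[
p(\Delta \mid \mathbf{y}) = \sum_{k=1}^{K} p(\Delta \mid M_k, \mathbf{y})\, p(M_k \mid \mathbf{y}),
\qquad
p(M_k \mid \mathbf{y}) \propto p(\mathbf{y} \mid M_k)\, p(M_k),
\]

where \(\Delta\) is the quantity of interest (here, the treatment effect), the \(M_k\) are candidate propensity score models, and Occam's window restricts the sum to models whose posterior probability is within a chosen factor of that of the best-supported model.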

17.
A Bayesian random effects model for testlets
Standard item response theory (IRT) models fit to dichotomous examination responses ignore the fact that sets of items (testlets) often come from a single common stimulus (e.g., a reading comprehension passage). In this setting, all items given to an examinee are unlikely to be conditionally independent (given examinee proficiency). Models that assume conditional independence will overestimate the precision with which examinee proficiency is measured. Overstatement of precision may lead to inaccurate inferences, such as prematurely ending an examination in which the stopping rule is based on the estimated standard error of examinee proficiency (e.g., an adaptive test). To model examinations that may be a mixture of independent items and testlets, we modified one standard IRT model to include an additional random effect for items nested within the same testlet. We use a Bayesian framework to facilitate posterior inference via a Data Augmented Gibbs Sampler (DAGS; Tanner & Wong, 1987). The modified and standard IRT models are both applied to a data set from a disclosed form of the SAT. We also provide simulation results indicating that the degree of precision bias is a function of the variability of the testlet effects, as well as the testlet design. The authors wish to thank Robert Mislevy, Andrew Gelman, and Donald B. Rubin for their helpful suggestions and comments, Ida Lawrence and Miriam Feigenbaum for providing us with the SAT data analyzed in Section 5, and the two anonymous referees for their careful reading and thoughtful suggestions on an earlier draft. We are also grateful to the Educational Testing Service for providing the resources to do this research.
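A commonly cited form of the testlet extension (the exact parameterization in the article may differ) augments a two-parameter model with a person-by-testlet random effect:

\[
\operatorname{logit} P(y_{ij}=1) = a_j\big(\theta_i - b_j - \gamma_{i\,d(j)}\big),
\qquad \gamma_{i\,d(j)} \sim N\big(0, \sigma^2_{d(j)}\big),
\]

where \(d(j)\) indexes the testlet containing item \(j\); setting \(\sigma^2_{d(j)} = 0\) for all testlets recovers the standard conditionally independent model, while larger testlet variances produce greater overstatement of precision when the effect is ignored.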

18.
Several authors have suggested the use of multilevel models for the analysis of data from single-case designs. Multilevel models are a logical approach to analyzing such data and deal well with the differing time points and treatment phases across subjects. However, they are limited in several ways that are addressed by Bayesian methods. For small samples, Bayesian methods fully take into account uncertainty in random effects when estimating fixed effects; the computational methods now in use can fit complex models that accurately represent the behavior being modeled; groups of parameters can be more accurately estimated with shrinkage methods; prior information can be included; and interpretation is more straightforward. The computer programs for Bayesian analysis allow many (nonstandard) nonlinear models to be fit; an example using floor and ceiling effects is discussed here.

19.
Bayesian theories of perception provide a link between observed response distributions and theoretical constructs from Bayesian decision theory. Using Bayesian psychophysics, we derive response distributions for two cases, one based on a normal distribution and one on a von Mises distribution for angular variables. Interestingly, whereas the theoretical response distribution is always unimodal in the normal case, it can become bimodal in the angular setting when prior and likelihood are about equally strong.
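In the angular case, the product of a von Mises prior and a von Mises likelihood is again von Mises (a standard identity, not specific to this article):

\[
\exp\{\kappa_{p}\cos(\theta-\mu_{p})\}\,\exp\{\kappa_{\ell}\cos(\theta-m)\}
= \exp\{R\cos(\theta-\mu_{\mathrm{post}})\},
\]

with \(R = \sqrt{\kappa_{p}^{2}+\kappa_{\ell}^{2}+2\kappa_{p}\kappa_{\ell}\cos(\mu_{p}-m)}\) and \(\mu_{\mathrm{post}} = \operatorname{atan2}\big(\kappa_{p}\sin\mu_{p}+\kappa_{\ell}\sin m,\; \kappa_{p}\cos\mu_{p}+\kappa_{\ell}\cos m\big)\). When the prior mean and the observation \(m\) point in nearly opposite directions and \(\kappa_{p} \approx \kappa_{\ell}\), the combined concentration \(R\) collapses and the posterior mean becomes highly unstable across samples, which gives one intuition for how the response distribution can become bimodal.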

20.
Owen (1975) proposed an approximate empirical Bayes procedure for item selection in computerized adaptive testing (CAT). The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach but is no longer necessary given the computational power currently available for adaptive testing. This paper suggests several item selection criteria for adaptive testing, all of which are based on the use of the true posterior. Some of the statistical properties of the ability estimator produced by these criteria are discussed and empirically characterized. Portions of this paper were presented at the 60th annual meeting of the Psychometric Society, Minneapolis, Minnesota, June 1995. The author is indebted to Wim M. M. Tielen for his computational support.
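An illustrative sketch (not the author's implementation) of item selection based on the full posterior: ability is represented on a grid, and for each candidate 2PL item the expected posterior variance after administering it is computed, choosing the item that minimizes it. This is just one posterior-based criterion, and the item bank and prior below are invented.

import numpy as np

theta_grid = np.linspace(-4, 4, 161)
posterior = np.exp(-0.5 * theta_grid ** 2)  # standard normal prior as the current posterior
posterior /= posterior.sum()

# Hypothetical item bank: 2PL discriminations (a) and difficulties (b).
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-1.0, 0.0, 0.5, 1.5])

def p_correct(a_j, b_j, theta):
    return 1.0 / (1.0 + np.exp(-a_j * (theta - b_j)))  # 2PL response function

def posterior_variance(p):
    m = np.sum(p * theta_grid)
    return np.sum(p * (theta_grid - m) ** 2)

def expected_posterior_variance(a_j, b_j, post):
    p_theta = p_correct(a_j, b_j, theta_grid)
    p_corr = np.sum(post * p_theta)  # predictive probability of a correct response
    post_correct = post * p_theta
    post_correct /= post_correct.sum()
    post_incorrect = post * (1.0 - p_theta)
    post_incorrect /= post_incorrect.sum()
    return p_corr * posterior_variance(post_correct) + (1 - p_corr) * posterior_variance(post_incorrect)

epv = [expected_posterior_variance(a[j], b[j], posterior) for j in range(len(a))]
best = int(np.argmin(epv))
print("Administer item", best, "with expected posterior variance", round(min(epv), 4))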
