Similar documents
20 similar documents found (search time: 15 ms)
1.
A model of cue-based probability judgment is developed within the framework of support theory. Cue diagnosticity is evaluated from experience as represented by error-free frequency counts. When presented with a pattern of cues, the diagnostic implications of each cue are assessed independently and then summed to arrive at an assessment of the support for a hypothesis, with greater weight placed on present than on absent cues. The model can also accommodate adjustment of support in light of the base rate or prior probability of a hypothesis. Support for alternatives packed together in a "residual" hypothesis is discounted; fewer cues are consulted in assessing support for alternatives as support for the focal hypothesis increases. Results of fitting this and several alternative models to data from four new multiple-cue probability learning experiments are reported.
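The summation mechanism described above can be sketched in code. Everything numeric here is hypothetical (the cue names, diagnosticities, weights, and the discount factor are illustrative choices, not values from the paper); the sketch only mirrors the qualitative structure: present cues weighted more than absent ones, a discounted residual, and fewer cues consulted for the alternatives.

```python
# Toy sketch of a cue-based support model in the spirit of support theory.
# Cue diagnosticities and all weights below are hypothetical illustrations.

def support(cues_present, cues_absent, diagnosticity,
            w_present=1.0, w_absent=0.4):
    """Sum the diagnostic implications of each cue independently,
    weighting present cues more heavily than absent cues."""
    s = 0.0
    for cue in cues_present:
        s += w_present * diagnosticity[cue]
    for cue in cues_absent:
        s -= w_absent * diagnosticity[cue]
    return max(s, 0.0)  # support is non-negative

def judged_probability(s_focal, s_residual, discount=0.8):
    """Support for alternatives packed into the residual is discounted."""
    return s_focal / (s_focal + discount * s_residual)

diag = {"fever": 2.0, "rash": 1.0, "cough": 0.5}
s_flu = support(["fever", "cough"], ["rash"], diag)   # focal: all cues consulted
s_other = support(["rash"], [], diag)                 # residual: fewer cues consulted
p = judged_probability(s_flu, s_other)                # roughly 0.72
```

The `max(s, 0.0)` floor and the `discount` parameter are modeling conveniences for the sketch; the paper's fitted functional forms may differ.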

2.
A taxonomy of latent structure assumptions (LSAs) for probability matrix decomposition (PMD) models is proposed which includes the original PMD model (Maris, De Boeck, & Van Mechelen, 1996) as well as a three-way extension of the multiple classification latent class model (Maris, 1999). It is shown that PMD models involving different LSAs are actually restricted latent class models with latent variables that depend on some external variables. For parameter estimation a combined approach is proposed that uses both a mode-finding algorithm (EM) and a sampling-based approach (Gibbs sampling). A simulation study is conducted to investigate the extent to which information criteria, specific model checks, and checks for global goodness of fit may help to specify the basic assumptions of the different PMD models. Finally, an application is described with models involving different latent structure assumptions for data on hostile behavior in frustrating situations.

Note: The research reported in this paper was partially supported by the Fund for Scientific Research-Flanders (Belgium) (project G.0207.97 awarded to Paul De Boeck and Iven Van Mechelen), and the Research Fund of K.U. Leuven (F/96/6 fellowship to Andrew Gelman, OT/96/10 project awarded to Iven Van Mechelen and GOA/2000/02 awarded to Paul De Boeck and Iven Van Mechelen). We thank Marcel Croon and Kristof Vansteelandt for commenting on an earlier draft of this paper.
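The EM side of the combined estimation approach can be illustrated on a much simpler relative of these models. The sketch below fits a plain two-class latent class model with binary items; it is an illustrative stand-in, not the PMD estimator from the paper, and the simulated data and all settings are hypothetical.

```python
import numpy as np

# Minimal EM for a two-class latent class model with binary items.
# Illustrative only: the PMD models in the paper are restricted latent
# class models with additional structure not represented here.

rng = np.random.default_rng(0)
n, J = 500, 4
true_class = rng.random(n) < 0.5
probs = np.where(true_class[:, None], 0.9, 0.2)       # item probs per true class
X = (rng.random((n, J)) < probs).astype(float)        # simulated binary responses

pi = 0.5                                              # class-1 mixing weight
theta = np.array([[0.7] * J, [0.3] * J])              # item probs, class 1 / class 2

for _ in range(100):
    # E-step: posterior probability of class 1 given current parameters
    l1 = pi * np.prod(theta[0] ** X * (1 - theta[0]) ** (1 - X), axis=1)
    l2 = (1 - pi) * np.prod(theta[1] ** X * (1 - theta[1]) ** (1 - X), axis=1)
    post = l1 / (l1 + l2)
    # M-step: update mixing weight and item probabilities
    pi = post.mean()
    theta[0] = (post[:, None] * X).sum(0) / post.sum()
    theta[1] = ((1 - post)[:, None] * X).sum(0) / (1 - post).sum()
```

With well-separated classes, the recovered item probabilities land near the generating values (0.9 and 0.2), up to a possible label swap. The Gibbs-sampling half of the paper's combined approach would replace the deterministic E- and M-steps with draws from full conditional distributions.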

3.
Probability judgment is a vital part of many aspects of everyday life. In the present paper, we present a new theory of the way in which individuals produce probability estimates for joint events: conjunctive and disjunctive. We propose that a majority of individuals produce conjunctive (disjunctive) estimates by making a quasi-random adjustment, positive or negative, from the less (more) likely component probability, with the other component playing no obvious role. In two studies, we produce evidence supporting propositions that follow from our theory. First, the component probabilities do appear to play the distinct roles we propose in determining the joint event probabilities. Second, contrary to probability theory and other accounts of probability judgment, we show that the difference between the conjunctive estimate and the less likely component is unrelated to the difference between the disjunctive estimate and the more likely component (in normative theory these quantities are identical). In conclusion, we argue that, although they violate the norms of probability judgment, estimates produced in the manner we propose will be close enough to the normative values, especially given the changing nature of the external environment and the incomplete nature of available information.
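The proposed estimation process can be sketched directly: anchor on the less likely component for conjunctions and the more likely one for disjunctions, then make a quasi-random adjustment in either direction. The adjustment distribution (uniform with a hypothetical spread) is an assumption of this sketch, not a claim from the paper.

```python
import random

# Sketch of the quasi-random adjustment account of joint-event estimates.
# The uniform adjustment and its spread are hypothetical choices.

random.seed(1)

def conjunctive_estimate(p_a, p_b, spread=0.15):
    """Anchor on the LESS likely component; adjust up or down at random."""
    anchor = min(p_a, p_b)
    return min(max(anchor + random.uniform(-spread, spread), 0.0), 1.0)

def disjunctive_estimate(p_a, p_b, spread=0.15):
    """Anchor on the MORE likely component; adjust up or down at random."""
    anchor = max(p_a, p_b)
    return min(max(anchor + random.uniform(-spread, spread), 0.0), 1.0)

est_and = conjunctive_estimate(0.7, 0.4)   # lands near 0.4
est_or = disjunctive_estimate(0.7, 0.4)    # lands near 0.7
```

Note that any positive adjustment from the less likely component produces a conjunction estimate exceeding that component, a normative violation of exactly the kind the abstract argues is often harmless in practice.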

4.
Igor Douven, Synthese, 2008, 164(1): 19-44
According to so-called epistemic theories of conditionals, the assertability/acceptability/acceptance of a conditional requires the existence of an epistemically significant relation between the conditional’s antecedent and its consequent. This paper points to some linguistic data that our current best theories of the foregoing type appear unable to explain. Further, it presents a new theory of the same type that does not have that shortcoming. The theory is then defended against some seemingly obvious objections.

5.
6.
7.
8.
Joel Pust, Synthese, 2013, 190(9): 1489-1501
Terence Horgan defends the thirder position on the Sleeping Beauty problem, claiming that Beauty can, upon awakening during the experiment, engage in “synchronic Bayesian updating” on her knowledge that she is awake now in order to justify a 1/3 credence in heads. In a previous paper, I objected that epistemic probabilities are equivalent to rational degrees of belief given a possible epistemic situation and so the probability of Beauty’s indexical knowledge that she is awake now is necessarily 1, precluding such updating. In response, Horgan maintains that the probability claims in his argument are to be taken, not as claims about possible rational degrees of belief, but rather as claims about “quantitative degrees of evidential support.” This paper argues that the most plausible account of quantitative degree of support, when conjoined with any of the three major accounts of indexical thought in such a way as to plausibly constrain rational credence, contradicts essential elements of Horgan’s argument.
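For readers unfamiliar with where the 1/3 figure comes from, a frequency gloss helps: heads produces one awakening per run of the experiment and tails two, so heads-awakenings make up about one third of all awakenings over repeated runs. This simulation illustrates that arithmetic only; it does not settle the dispute over synchronic updating that the paper addresses.

```python
import random

# Long-run frequency of heads-awakenings in repeated Sleeping Beauty runs:
# heads -> one awakening, tails -> two awakenings.

random.seed(0)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:      # heads: Beauty is woken once
        heads_awakenings += 1
        total_awakenings += 1
    else:                          # tails: Beauty is woken twice
        total_awakenings += 2

fraction = heads_awakenings / total_awakenings   # close to 1/3
```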

9.
Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost abolished when they evaluate the probability that the same intervals include the quantity. The authors successfully apply a method for adaptive adjustment of probability intervals as a debiasing tool and discuss a tentative explanation in terms of a naive sampling model. According to this view, people report their experiences accurately, but they are naive in that they treat both sample proportion and sample dispersion as unbiased estimators, yielding small bias in probability evaluation but strong bias in interval production.
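A small simulation shows why naive reliance on sample dispersion biases interval production even when experiences are reported accurately. The range of n draws covers a fresh draw with probability (n - 1)/(n + 1), so a judge who treats the range of a mental sample of n = 5 observations as a wide interval will undercover badly. The sample size and distribution here are hypothetical, not taken from the experiments.

```python
import random

# Coverage of the sample range of n = 5 draws from a standard normal:
# expected coverage is (n-1)/(n+1) = 2/3, far below a nominal 90% interval,
# even though each sample is "reported" with perfect accuracy.

random.seed(2)
n, trials, hits = 5, 20_000, 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    new = random.gauss(0, 1)
    if min(sample) <= new <= max(sample):
        hits += 1

coverage = hits / trials   # roughly 0.67
```

The (n - 1)/(n + 1) coverage figure follows from exchangeability: the new draw is equally likely to fall in any of the n + 1 gaps defined by the ordered sample.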

10.
The Qolla Indians have high rates of involvement in agonistic forms of interaction. In previous reports the author suggested that ecological and physiological factors are causally associated with intracommunity differential participation in aggressive behavior. The present article describes tests of hypotheses using other variables to explain this behavioral differentiation. The hypothesis that aggressiveness and participation in litigation are a function of the amount of social support the individual can potentially mobilize is tested. The relationships between indicators of social status (wealth, education, age, political activities, and ritual participation), on the one hand, and aggressiveness and litigiousness, on the other, are also examined.

11.
James Openshaw & Assaf Weksler, Philosophical Studies, 2022, 179(11): 3325-3348
According to capacitism, to perceive is to employ personal-level, perceptual capacities. In a series of publications, Schellenberg (2016, 2018, 2019b, 2020) has argued that...

12.
Subjective career success: A study of managers and support personnel
Despite the popular belief that managers are successful by virtue of their positions, few studies have examined the position-success relationship. In this research, it was predicted that subjective career success is a multi-dimensional construct whose facets can be measured by several factors. Moreover, the phenomenon of career success was tested to see whether it would relate to an employee's perception of occupational self-concept and job features. The study also investigated whether these dimensions would predict some aspects of career success more accurately for managers or for support personnel. The confirmatory results obtained in this study and their implications for future research and for practitioners are discussed.

An earlier version of this article was presented at the 93rd Annual Convention of the American Psychological Association, Los Angeles, California, 1985.

13.
The probability score (PS) can be used to measure the overall accuracy of probability judgments for a single event, e.g., “Rain falls,” or “This patient has cancer.” It has been shown previously how a “covariance decomposition” of the mean of PS over many occasions indexes several distinct aspects of judgment performance (J. F. Yates, Organizational Behavior and Human Performance, 30, 132–156 (1982)). There are many situations in which probability judgments are reported for sample space partitions containing more than one event and its complement, e.g., medical situations in which a patient might suffer from Disease X, Disease Y, or Disease Z, or testing situations in which the correct answer to an item might be any one of alternatives (a) through (e). The probability score for multiple events (PSM) serves as a measure of the overall accuracy of probability judgments for the events in partitions of any size. The present article describes and interprets an extension of the covariance decomposition to the mean of PSM. The decomposition is illustrated with data from two contexts, medicine and education.
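The PSM itself is straightforward to compute: it is the squared distance between the judged probability vector over the partition and the indicator vector of the event that occurred. The three-disease example below is hypothetical; the covariance decomposition of the mean PSM discussed in the article operates on many such scores and is not reproduced here.

```python
# Probability score for multiple events (PSM): squared distance between the
# judged probability vector and the outcome indicator vector.

def psm(forecast, outcome_index):
    """forecast: judged probabilities over the events of a partition;
    outcome_index: index of the event that actually occurred."""
    assert abs(sum(forecast) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum((f - (1.0 if j == outcome_index else 0.0)) ** 2
               for j, f in enumerate(forecast))

# Hypothetical three-disease partition; the judge says (0.5, 0.3, 0.2)
# and Disease X (index 0) turns out to be correct.
score = psm([0.5, 0.3, 0.2], 0)       # (0.5-1)^2 + 0.3^2 + 0.2^2 = 0.38
perfect = psm([1.0, 0.0, 0.0], 0)     # a perfectly confident, correct judge
```

Lower is better: the score is 0 for a confident correct forecast and 2 for a confident wrong one, and for a two-event partition it reduces to twice the single-event PS.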

14.
We report four experiments investigating conjunctive inferences (from a conjunction and two conditional premises) and disjunctive inferences (from a disjunction and the same two conditionals). The mental model theory predicts that the conjunctive inferences, which require one model, should be easier than the disjunctive inferences, which require multiple models. Formal rule theories predict either the opposite result or no difference between the inferences. The experiments showed that the inferences were equally easy when the participants evaluated given conclusions, but that the conjunctive inferences were easier than the disjunctive inferences (1) when the participants drew their own conclusions, (2) when the conjunction and disjunction came last in the premises, (3) in the time the participants spent reading the premises and in responding to given conclusions, and (4) in their ratings of the difficulty of the inferences. The results support the model theory and demonstrate the importance of reasoners' inferential strategies.
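The model-count asymmetry behind the prediction can be made concrete with a truth-table enumeration: a conjunction "A and B" holds in a single row (one fully explicit model), while an inclusive disjunction "A or B" holds in three. This is a standard illustration of the model theory's counting, not code from the paper.

```python
from itertools import product

# Fully explicit mental models as satisfying truth-table rows:
# a conjunction requires one model, an inclusive disjunction three.

rows = list(product([True, False], repeat=2))
conjunction_models = [(a, b) for a, b in rows if a and b]
disjunction_models = [(a, b) for a, b in rows if a or b]
```

On the model theory, each additional model is an extra load on working memory, which is why inferences from the disjunctive premise are predicted to be harder.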

15.
This paper studies consistency in the judged probability of a target hypothesis across lists of mutually exclusive, nonexhaustive hypotheses. Specifically, it controls the support of the displayed competing hypotheses and the relatedness between the target hypothesis and its alternatives. Three experiments are reported. In each, groups of people were presented with a list of mutually exclusive, nonexhaustive causes of a person's death. In the first two experiments, they were asked to judge the probability that each listed cause was the cause of the person's death. In the third experiment, they performed a frequency estimation task. The target causes appeared in all lists, while the alternative causes accompanying them differed across lists. Findings show that the judged probability/frequency of a target cause changes as a function of the support of the displayed competing causes: it is higher when the competing causes have low rather than high support. The findings are consistent with the contrastive support hypothesis within support theory.

16.
In this study, decomposition is used as a tool for the assessment of continuous probability distributions. The goal of using decomposition is to obtain better calibrated probability distributions by expressing a variable as a known function of several component variables. Three target quantities were used in the study. Each was assessed holistically and using two different decompositions. Thus, each subject provided three distributions for each of the three target quantities. The recomposed assessments were compared to holistic assessments. The distributions obtained using decomposition were found to be much better calibrated than those obtained holistically. Two methods of aggregating distributions from multiple subjects were also examined. One involves aggregating (averaging) distributions before recomposing while the second method involves recomposing and then averaging distributions for the target variable. The second method was found to be slightly better, although both showed better calibration than was found in the individual assessments.
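The two aggregation orders can be sketched with Monte Carlo draws. Here the target is expressed as a product of two components (a hypothetical decomposition), and each "subject" contributes component distributions that are made up for illustration; note the two orders generally yield different target distributions because pooling components first breaks within-subject pairing.

```python
import random

# Two aggregation orders for decomposed assessments, target = A * B.
# All distributions below are hypothetical illustrations.

random.seed(3)
N = 10_000

# Two subjects' assessed component distributions, as samplers.
subjects = [
    (lambda: random.gauss(10, 1.0), lambda: random.gauss(5, 0.5)),
    (lambda: random.gauss(12, 2.0), lambda: random.gauss(4, 0.5)),
]
a_samplers = [a for a, _ in subjects]
b_samplers = [b for _, b in subjects]

def draw_mixture(samplers):
    """Draw from the equal-weight average of the subjects' distributions."""
    return random.choice(samplers)()

# Method 1: average the component distributions first, then recompose.
method1 = [draw_mixture(a_samplers) * draw_mixture(b_samplers) for _ in range(N)]

# Method 2: recompose within each subject, then average the target distributions.
target_samplers = [lambda a=a, b=b: a() * b() for a, b in subjects]
method2 = [draw_mixture(target_samplers) for _ in range(N)]

mean1 = sum(method1) / N   # pooled components: mean near 11 * 4.5 = 49.5
mean2 = sum(method2) / N   # paired components: mean near (50 + 48) / 2 = 49
```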

17.
For comparing nested covariance structure models, the standard procedure is the likelihood ratio test of the difference in fit, where the null hypothesis is that the models fit identically in the population. A procedure for determining statistical power of this test is presented where effect size is based on a specified difference in overall fit of the models. A modification of the standard null hypothesis of zero difference in fit is proposed allowing for testing an interval hypothesis that the difference in fit between models is small, rather than zero. These developments are combined yielding a procedure for estimating power of a test of a null hypothesis of small difference in fit versus an alternative hypothesis of larger difference.
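The general shape of such a power calculation can be sketched by Monte Carlo: express fit via RMSEA (with fit function value F = df * rmsea^2), treat the difference test as noncentral chi-square, take the critical value under the "small difference" null, and compute the probability of exceeding it under the larger true difference. All numeric settings below (RMSEA values, df, N) are hypothetical, and this sketch stands in for, rather than reproduces, the paper's procedure.

```python
import random

# Monte Carlo sketch of power for an interval null of "small difference in
# fit" between nested models A (more constrained, df_a > df_b) and B.

rng = random.Random(4)

def ncx2_draw(df, nc):
    """One draw from a noncentral chi-square(df, nc): one squared normal
    with mean sqrt(nc) plus df-1 squared standard normals."""
    total = rng.gauss(nc ** 0.5, 1.0) ** 2
    for _ in range(df - 1):
        total += rng.gauss(0.0, 1.0) ** 2
    return total

def power_small_difference(df_a, df_b, eps_a0, eps_b0, eps_a1, eps_b1,
                           n, alpha=0.05, reps=50_000):
    d = df_a - df_b                                        # df of difference test
    nc0 = (n - 1) * (df_a * eps_a0**2 - df_b * eps_b0**2)  # small-difference null
    nc1 = (n - 1) * (df_a * eps_a1**2 - df_b * eps_b1**2)  # true, larger difference
    null = sorted(ncx2_draw(d, nc0) for _ in range(reps))
    crit = null[int((1 - alpha) * reps)]                   # critical value under H0
    return sum(ncx2_draw(d, nc1) > crit for _ in range(reps)) / reps

power = power_small_difference(df_a=22, df_b=20, eps_a0=0.06, eps_b0=0.05,
                               eps_a1=0.10, eps_b1=0.05, n=500)
```

With these illustrative numbers, the test of the small-difference null has high power against the larger true difference; in practice the noncentral chi-square quantiles would be taken from a statistics library rather than simulated.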

18.
Jan von Plato, Synthese, 1982, 53(3): 419-432
De Finetti's representation theorem is a special case of the ergodic decomposition of stationary probability measures. The problems of the interpretation of probabilities centred around de Finetti's theorem are extended to this more general situation. The ergodic decomposition theorem has a physical background in the ergodic theory of dynamical systems. Thereby the interpretations of probabilities in the case of de Finetti's theorem, in its generalization, and in ergodic theory are systematically connected to each other.

This paper is an extended version of footnote 5 of von Plato (1981).
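For reference, the special case can be written out. De Finetti's theorem represents an exchangeable 0-1 sequence as a mixture of i.i.d. Bernoulli sequences, and the ergodic decomposition generalizes the mixing measure to a measure over ergodic stationary laws. These are the standard statements, not formulas quoted from the paper.

```latex
% De Finetti: an exchangeable binary sequence is a mixture of i.i.d. coins
P(X_1 = x_1, \dots, X_n = x_n)
  = \int_0^1 \theta^{k} (1-\theta)^{\,n-k} \, d\mu(\theta),
  \qquad k = \textstyle\sum_{i=1}^{n} x_i .

% Ergodic decomposition: a stationary measure P is a mixture of ergodic ones
P = \int P_e \, d\nu(e),
  \qquad P_e \ \text{stationary and ergodic}.
```

Exchangeable sequences are stationary, and the i.i.d. Bernoulli laws are exactly the ergodic measures among them, which is the sense in which the first formula is a special case of the second.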

19.
20.
According to Bayesians, the null hypothesis significance-testing procedure is not deductively valid because it involves the retention or rejection of the null hypothesis under conditions where the posterior probability of that hypothesis is not known. Other criticisms are that this procedure is pointless and encourages imprecise hypotheses. However, according to non-Bayesians, there is no way of assigning a prior probability to the null hypothesis, and so Bayesian statistics do not work either. Consequently, no procedure has been accepted by both groups as providing a compelling reason to accept or reject hypotheses. The author aims to provide such a method. In the process, the author distinguishes between probability and epistemic estimation and argues that, although both are important in a science that is not completely deterministic, epistemic estimation is most relevant for hypothesis testing. Based on this analysis, the author proposes that hypotheses be evaluated via epistemic ratios and explores the implications of this proposal. One implication is that it is possible to encourage precise theorizing by imposing a penalty for imprecise hypotheses.
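The abstract does not define the author's "epistemic ratio," so the sketch below uses a generic likelihood ratio as a loose analogy for ratio-based hypothesis evaluation, merely to illustrate how a penalty for imprecision can arise: a maximally imprecise hypothesis spreads its likelihood over all possible outcomes. The data and hypotheses are hypothetical.

```python
from math import comb

# Generic ratio-based evaluation (NOT the paper's epistemic ratio):
# precise hypothesis p = 0.6 versus a maximally imprecise hypothesis
# (p uniform on [0, 1]) for k = 60 successes in n = 100 trials.

n, k = 100, 60
lik_precise = comb(n, k) * 0.6**k * 0.4**(n - k)
# Under a uniform prior on p, every value of k is equally likely: 1/(n+1).
lik_imprecise = 1 / (n + 1)
ratio = lik_precise / lik_imprecise   # favors the precise hypothesis
```

Because the imprecise hypothesis must cover all 101 possible outcomes, the precise hypothesis that happens to match the data is favored by a factor of roughly 8, which is one mechanical way a ratio-based criterion penalizes imprecision.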

