Similar Documents
 Found 20 similar documents (search time: 39 ms)
1.
People's numeric probability estimates for 2 mutually exclusive and exhaustive events commonly sum to 1.0, which seems to indicate the full complementarity of subjective certainty in the 2 events (i.e., increases in certainty for one event are accompanied by decreases in certainty for the other). In this article, however, a distinction is made between the additivity of probability estimates and the complementarity of internal perceptions of certainty. In Experiment 1, responses on a verbal measure of certainty provide evidence of binary noncomplementarity in the perceived likelihoods of possible scenario outcomes, and a comparison of verbal and numeric certainty estimates suggests that numeric probabilities overestimated the complementarity of people's certainty. Experiment 2 used a choice task to detect binary noncomplementarity. Soliciting numeric probability estimates prior to the choice task changed the participants' choices in a direction consistent with complementarity. Possible mechanisms yielding (non)complementarity are discussed.
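The distinction above — additive numeric reports versus possibly noncomplementary underlying certainty — can be sketched in a few lines. This is a hypothetical illustration, not the paper's procedure: internal certainties for two exhaustive outcomes are sampled independently (noncomplementary by construction), yet mapping them onto the probability scale forces every reported pair to sum to 1.

```python
import random

random.seed(1)

# Hypothetical: independent internal certainties for two exhaustive outcomes,
# so the two feelings are noncomplementary by construction.
certainty_a = [random.uniform(0.2, 1.0) for _ in range(1000)]
certainty_b = [random.uniform(0.2, 1.0) for _ in range(1000)]

# Reporting numeric probabilities forces each pair onto the probability
# scale, e.g. via implicit normalization.
pairs = [(a / (a + b), b / (a + b)) for a, b in zip(certainty_a, certainty_b)]

# Every reported pair sums to 1.0 exactly, regardless of how the underlying
# certainties covary: additivity of the estimates is guaranteed by the
# response format, not by complementarity of the perceptions.
assert all(abs(pa + pb - 1.0) < 1e-9 for pa, pb in pairs)
```

The point of the sketch is that additivity of the overt responses carries no information about the covariation of the covert certainties.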

2.
3.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
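The CML idea for the ex-Gaussian (the sum of a normal and an exponential component) can be sketched without the Fortran program. This is a minimal stdlib-only sketch with illustrative parameter values, not QMPE itself: it writes down the ex-Gaussian log-density and checks that the generating parameters beat a clearly wrong candidate in likelihood.

```python
import math
import random

def exgauss_logpdf(x, mu, sigma, tau):
    """Log density of the ex-Gaussian distribution (normal(mu, sigma) plus
    an independent exponential with mean tau)."""
    z = (x - mu) / sigma - sigma / tau
    # log Phi(z) via erfc, which stays accurate when z is far below zero
    log_phi = math.log(0.5 * math.erfc(-z / math.sqrt(2)))
    return -math.log(tau) + sigma**2 / (2 * tau**2) - (x - mu) / tau + log_phi

def neg_log_likelihood(data, mu, sigma, tau):
    return -sum(exgauss_logpdf(x, mu, sigma, tau) for x in data)

# Simulate response times: normal component plus exponential tail.
random.seed(7)
mu, sigma, tau = 0.40, 0.05, 0.15   # seconds; illustrative values
data = [random.gauss(mu, sigma) + random.expovariate(1 / tau)
        for _ in range(5000)]

# CML chooses parameters minimizing the negative log-likelihood; the
# generating values should fit better than a shifted candidate.
assert neg_log_likelihood(data, mu, sigma, tau) < neg_log_likelihood(data, 0.60, 0.05, 0.15)
```

A full CML fit would hand `neg_log_likelihood` to a numerical optimizer; the sketch only verifies the objective function behaves as the method requires.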

4.
5.
Abstract: Previous studies on subjective probability judgment indicate that pair‐wise comparison between the focal and the strongest alternative outcome plays an important role in probability judgment. This study, however, found that the randomness of alternative outcomes affected probability judgments for the focal outcome. In the present study, 182 participants provided probability estimates for winning on hypothetical slot machines where both successes and losses were composed of multiple outcomes. The randomness of both the focal and alternative outcomes was defined by the expression used in Rappoport and Budescu (1997). The analysis indicated that the more random the distributions of both focal and alternative outcomes, the higher the estimated probability for the focal outcome. Some theoretical implications are discussed.

6.
Jeffrey (1983) proposed a generalisation of conditioning as a means of updating probability distributions when new evidence drives no event to certainty. His rule requires the stability of certain conditional probabilities through time. We tested this assumption (“invariance”) from the psychological point of view. In Experiment 1 participants offered probability estimates for events in Jeffrey's candlelight example. Two further scenarios were investigated in Experiment 2, one in which invariance seems justified, the other in which it does not. Results were in rough conformity to Jeffrey's (1983) principle.
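Jeffrey's rule itself is short: if new evidence merely redistributes probability over a partition {E_i}, then P_new(A) = Σ_i P(A | E_i) · P_new(E_i), with the conditionals P(A | E_i) held invariant. A sketch with made-up numbers (not Jeffrey's own figures), in the spirit of the candlelight example where a glimpse shifts belief about a cloth's colour without making any colour certain:

```python
# Prior over the partition (colour of the cloth) and invariant conditionals.
p_colour = {"green": 0.30, "blue": 0.30, "violet": 0.40}      # P(E_i)
p_sold_given = {"green": 0.40, "blue": 0.60, "violet": 0.50}  # P(A | E_i)

# A glimpse by candlelight redistributes probability over the partition
# without driving any colour to certainty.
p_colour_new = {"green": 0.70, "blue": 0.25, "violet": 0.05}

# Jeffrey's rule: P_new(A) = sum_i P(A | E_i) * P_new(E_i)
p_sold_new = sum(p_sold_given[c] * p_colour_new[c] for c in p_colour_new)
print(round(p_sold_new, 3))  # -> 0.455
```

The "invariance" the experiments test is precisely whether people's P(A | E_i) stay fixed across the update; if they drift, the rule's answer no longer matches the judged posterior.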

7.
One goal of survey research is to optimize sampling procedures so that the collected data will produce accurate population estimates. In this context, sampling bias is a primary threat to a study's validity. If individuals who do not respond are a random sample of the population, then the estimates obtained from such a subsample are unbiased. However, as the percentage of nonrespondents increases, the assumption of unbiased estimation becomes increasingly tenuous. At this point an investigator has two choices: delete all subjects who have not provided data as part of the first data collection, or allow a respondent's point of entry to define his baseline measures for the study. No previous discussion of the latter option has been noted in the methods literature. Therefore the authors have termed this approach to baseline the "first record". Conditions under which the "first record" technique would be appropriate or inappropriate are discussed.

8.
Jeremy Gwiazda made two criticisms of my formulation in terms of Bayes's theorem of my probabilistic argument for the existence of God. The first criticism depends on his assumption that I claim that the intrinsic probabilities of all propositions depend almost entirely on their simplicity; however, my claim is that that holds only insofar as those propositions are explanatory hypotheses. The second criticism depends on a claim that the intrinsic probabilities of exclusive and exhaustive explanatory hypotheses of a phenomenon must sum to 1; however, it is only those probabilities plus the intrinsic probability of the non-occurrence of the phenomenon which must sum to 1.

9.
Numerous studies have found that likelihood judgment typically exhibits subadditivity in which judged probabilities of events are less than the sum of judged probabilities of constituent events. Whereas traditional accounts of subadditivity attribute this phenomenon to deterministic sources, this paper demonstrates both formally and empirically that subadditivity is systematically influenced by the stochastic variability of judged probabilities. First, making rather weak assumptions, we prove that regressive error (or variability) in mapping covert probability judgments to overt responses is sufficient to produce subadditive judgments. Experiments follow in which participants provided repeated probability estimates. The results support our model assumption that stochastic variability is regressive in probability estimation tasks and show the contribution of such variability to subadditivity. The theorems and the experiments focus on within-respondent variability, but most studies use between-respondent designs. Numerical simulations extend the work to contrast within- and between-respondent measures of subadditivity. Methodological implications of all the results are discussed, emphasizing the importance of taking stochastic variability into account when estimating the role of other factors (such as the availability bias) in producing subadditive judgments.
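The core mechanism — regressive error producing subadditivity — can be sketched with a deterministic shrinkage toward the scale midpoint, which is the mean effect of zero-mean regressive noise. The numbers are ours, not the paper's: covert judgments are perfectly additive, yet the overt responses are subadditive.

```python
# Regressive covert-to-overt mapping: responses shrink toward `center`.
def overt(p, lam=0.7, center=0.5):
    return center + lam * (p - center)

covert_whole = 0.9                  # covert judgment of the packed event
covert_parts = [0.3, 0.3, 0.3]      # an additive unpacking into constituents

judged_whole = overt(covert_whole)                   # 0.5 + 0.7*0.4 = 0.78
judged_parts = sum(overt(p) for p in covert_parts)   # 3 * 0.36 = 1.08

# Small constituent probabilities get inflated, the large packed probability
# deflated, so the parts sum to more than the whole: subadditivity, with no
# deterministic bias built into the covert judgments at all.
assert judged_parts > judged_whole
```

The paper's formal result is stronger (it works with genuinely stochastic, regressive response error), but the shrinkage sketch shows why regression alone suffices.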

10.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.

11.
A probabilistic explication is offered of equipoise and uncertainty in clinical trials. In order to be useful in the justification of clinical trials, equipoise has to be interpreted in terms of overlapping probability distributions of possible treatment outcomes, rather than point estimates representing expectation values. Uncertainty about treatment outcomes is shown to be a necessary but insufficient condition for the ethical defensibility of clinical trials. Additional requirements are proposed for the nature of that uncertainty. The indecisiveness of our criteria for cautious decision-making under uncertainty creates the leeway that makes clinical trials defensible.
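The "overlapping distributions" reading of equipoise can be made concrete with an overlap coefficient, ∫ min(f, g) dx, computed here by simple grid integration. The outcome distributions and thresholds are illustrative assumptions, not from the paper:

```python
from statistics import NormalDist

def overlap(d1, d2, lo=-10.0, hi=10.0, n=4000):
    """Overlap coefficient of two densities via grid integration."""
    step = (hi - lo) / n
    return sum(min(d1.pdf(lo + i * step), d2.pdf(lo + i * step))
               for i in range(n)) * step

# Hypothetical treatment-outcome distributions under therapies A and B.
# On this reading, equipoise requires substantial overlap of the whole
# distributions, not merely closeness of the two mean point estimates.
near = overlap(NormalDist(0.0, 1.0), NormalDist(0.5, 1.0))  # heavy overlap
far = overlap(NormalDist(0.0, 1.0), NormalDist(6.0, 1.0))   # nearly disjoint
assert near > 0.7 and far < 0.05
```

For two unit normals the overlap has a closed form, 2Φ(−|Δμ|/2), so the grid values can be checked by hand (≈0.80 and ≈0.003 here).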

12.
The distribution of figural "goodness" in 2 mental shape spaces, the space of triangles and the space of quadrilaterals, was examined. In Experiment 1, participants were asked to rate the typicality of visually presented triangles and quadrilaterals (perceptual task). In Experiment 2, participants were asked to draw triangles and quadrilaterals by hand (production task). The rated typicality of a particular shape and the probability that that shape was generated by participants were each plotted as a function of shape parameters, yielding estimates of the subjective distribution of shape goodness in shape space. Compared with neutral distributions of random shapes in the same shape spaces, these distributions showed a marked bias toward regular forms (equilateral triangles and squares). Such psychologically modal shapes apparently represent ideal forms that maximize the perceptual preference for regularity and symmetry.

13.
Judgments of proportions (total citations: 2; self-citations: 0; citations by others: 2)
This study investigated the processes that underlie estimates of relative frequency. Ss performed 4 tasks using the same stimuli (squares containing black and white dots); they judged "percentages" of white dots, "percentages" of black dots, "ratios" of black dots to white dots, and "differences" between the number of black and white dots. Results were consistent with the theory that Ss used the instructed operations with the same scale values in all tasks. Despite the use of the correct operation, Ss consistently overestimated small proportions and underestimated large proportions. Variations in the distributions of actual proportions affected the extent to which Ss overestimated small proportions and underestimated large proportions in the direction predicted by range-frequency theory. Results suggest that proportion judgments, and by analogy probability judgments, should not be taken at face value.
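The over/underestimation pattern reported above is often summarized descriptively with a linear-in-log-odds response function; this is a standard modeling device, not the paper's range-frequency account, and the parameter values below are illustrative.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def judged(p, gamma=0.6, p0=0.5):
    """Linear in log odds: logit(j) = gamma*logit(p) + (1-gamma)*logit(p0)."""
    z = gamma * logit(p) + (1 - gamma) * logit(p0)
    return 1 / (1 + math.exp(-z))

# With gamma < 1 the judged proportion is pulled toward the anchor p0:
# small proportions are overestimated, large ones underestimated.
assert judged(0.1) > 0.1 and judged(0.9) < 0.9
```

Shifting the anchor p0 (or letting gamma depend on the stimulus distribution) is one way to mimic the range-frequency effects the study reports.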

14.
Decisions under risk in the medical domain have been found to systematically diverge from decisions in the monetary domain. When making choices between monetary options, people commonly rely on a decision strategy that trades off outcomes with their probabilities; when making choices between medical options, people tend to neglect probability information. In two experimental studies, we tested to what extent differences between medical and monetary decisions also emerge when the decision outcomes affect another person. Using a risky choice paradigm for medical and monetary decisions, we compared hypothetical decisions that participants made for themselves to decisions for a socially distant other (Study 1) and to recommendations as financial advisor or doctor (Study 2). In addition, we examined people's information search in a condition in which information about payoff distributions had to be learned from experiential sampling. Formal modeling and analyses of search behavior revealed a similarly pronounced gap between medical and monetary decisions in decisions for others as in decisions for oneself. Our results suggest that when making medical decisions, people try to avoid the worst outcome while neglecting its probability—even when the outcomes affect others rather than themselves.

15.
This study investigated developmental differences in the relationship of probability and cost estimates to worrying. Adults, younger children (M age = 8.67 years) and older children (M age = 11.06 years) rated the extent to which they worry about a list of negative social and physical outcomes and provided subjective probability and cost estimates for the same outcomes. Adults reported worrying more about social outcomes and rated them as less ‘bad’ (or costly) but more likely to occur than physical outcomes. Unlike adults, children in both age groups reported worrying more about physical outcomes. However, similar to adults, they also rated social outcomes as less ‘bad’ but more likely to occur than physical outcomes. Regression analyses showed that probability ratings were the best predictors of worry in adults, both probability and cost ratings equally predicted worry in older children, but only cost ratings predicted worry in younger children.
Marianna Szabó

16.
E. Maris, Psychometrika, 1998, 63(1), 65-71
In the context of conditional maximum likelihood (CML) estimation, confidence intervals can be interpreted in three different ways, depending on the sampling distribution under which these confidence intervals contain the true parameter value with a certain probability. These sampling distributions are (a) the distribution of the data given the incidental parameters, (b) the marginal distribution of the data (i.e., with the incidental parameters integrated out), and (c) the conditional distribution of the data given the sufficient statistics for the incidental parameters. Results on the asymptotic distribution of CML estimates under sampling scheme (c) can be used to construct asymptotic confidence intervals using only the CML estimates. This is not possible for the results on the asymptotic distribution under sampling schemes (a) and (b). However, it is shown that the conditional asymptotic confidence intervals are also valid under the other two sampling schemes. I am indebted to Theo Eggen, Norman Verhelst and one of Psychometrika's reviewers for their helpful comments.

17.
18.
Research on estimation of a psychometric function psi has usually focused on comparing alternative algorithms to apply to the data, rarely addressing how best to gather the data themselves (i.e., what sampling plan best deploys the affordable number of trials). Simulation methods were used here to assess the performance of several sampling plans in yes-no and forced-choice tasks, including the QUEST method and several variants of up-down staircases and of the method of constant stimuli (MOCS). We also assessed the efficacy of four parameter estimation methods. Performance comparisons were based on analyses of usability (i.e., the percentage of times that a plan yields usable data for the estimation of all the parameters of psi) and of the resultant distributions of parameter estimates. Maximum likelihood turned out to be the best parameter estimation method. As for sampling plans, QUEST never exceeded 80% usability even when 1000 trials were administered and rendered accurate estimates of threshold but misestimated the remaining parameters. MOCS and up-down staircases yielded similar and acceptable usability (above 95% with 400-500 trials) and, although neither type of plan allowed estimating all parameters with optimal precision, each type appeared well suited to estimating a distinct subset of parameters. An analysis of the causes of this differential suitability allowed designing alternative sampling plans (all based on up-down staircases) for yes-no and forced-choice tasks. These alternative plans rendered near optimal distributions of estimates for all parameters. The results just described apply when the fitted psi has the same mathematical form as the actual psi generating the data; in case of form mismatch, all parameters except threshold were generally misestimated but the relative performance of all the sampling plans remained identical. Detailed practical recommendations are given.
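The up-down staircase logic the study builds on is easy to sketch. This toy uses a deterministic observer (responds "yes" whenever the stimulus level reaches threshold) so the behavior is exact; real observers are stochastic, and real plans target specific points on psi. All numbers are illustrative.

```python
# 1-up/1-down staircase with a deterministic observer.
threshold = 3.5          # in arbitrary stimulus steps
level = 10               # start well above threshold
levels, responses = [], []
for _ in range(30):
    yes = level >= threshold
    levels.append(level)
    responses.append(yes)
    level += -1 if yes else 1   # step down after "yes", up after "no"

# Reversals: trials where the response differs from the previous one.
# The staircase settles into oscillation around threshold, so averaging
# the last reversal levels recovers it.
reversals = [levels[i] for i in range(1, len(levels))
             if responses[i] != responses[i - 1]]
estimate = sum(reversals[-8:]) / len(reversals[-8:])
assert abs(estimate - threshold) <= 0.5
```

A staircase like this concentrates trials near one point of psi, which is why it estimates threshold well but (as the study notes for single plans) leaves other parameters, such as slope and lapse rate, poorly constrained.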

19.
Discriminability measures such as d′ and log d become infinite when performance is extremely accurate and no errors are recorded. Different arbitrary corrections can be applied to produce finite values, but how well do these values estimate true performance? To answer this question, we directly calculated the effects of a range of different corrections on the sampling distributions of \(\hat d'\) and \(\log \hat d\). Many arbitrary corrections produced better estimates of discriminability than did the intuitively plausible technique of rerunning problem conditions. We concluded that when it is not possible to run more trials and when other techniques are not appropriate, the best correction overall is to add a correction constant between 0.25 and 0.5 to all response counts, regardless of their value.
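The recommended correction can be sketched directly: add a constant c (between 0.25 and 0.5, per the conclusion above) to every response count before computing the rates, which keeps hit and false-alarm rates strictly inside (0, 1). The trial counts below are illustrative.

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal CDF

def dprime(hits, misses, false_alarms, correct_rejections, c=0.5):
    """d' with a constant c added to all four response counts, so neither
    rate can reach 0 or 1 and d' stays finite."""
    hit_rate = (hits + c) / (hits + misses + 2 * c)
    fa_rate = (false_alarms + c) / (false_alarms + correct_rejections + 2 * c)
    return z(hit_rate) - z(fa_rate)

# A perfect block: 50/50 hits, 0/50 false alarms. Uncorrected, the hit
# rate of 1.0 would make z(hit_rate), and hence d', infinite.
d = dprime(hits=50, misses=0, false_alarms=0, correct_rejections=50)
assert math.isfinite(d) and d > 0
```

With c = 0.5 this is the familiar "add one half to each cell" device; the paper's contribution is showing via sampling distributions that constants in this range beat rerunning the problem conditions.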

20.
Teigen KH, Keren G, Cognition, 2003, 87(2), 55-71
Outcome expectations can be expressed prospectively in terms of probability estimates, and retrospectively in terms of surprise. Surprise ratings and probability estimates differ, however, in some important ways. Surprises are generally created by low-probability outcomes, yet, as shown by several experiments, not all low-probability outcomes are equally surprising. To account for surprise, we propose a contrast hypothesis according to which the level of surprise associated with an outcome is mainly determined by the extent to which it contrasts with the default, expected alternative. Three ways by which contrasts can be established are explored: contrasts due to relative probabilities, where the obtained outcome is less likely than a default alternative; contrasts formed by novelty and change, where a contrast exists between the obtained outcome and the individual's previous experience; and contrasts due to the perceptual or conceptual distance between the expected and the obtained. In all these cases, greater contrast was accompanied by higher ratings of surprise.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号