Similar Documents
20 similar documents retrieved.
1.
This article considers procedures for combining individual probability distributions that belong to some “family” into a “group” probability distribution that belongs to the same family. The procedures considered are Vincentizing, in which quantiles are averaged across distributions; generalized Vincentizing, in which the quantiles are transformed before averaging; and pooling based on the distribution function or the probability density function. Some of these results are applied to models of reaction time in psychological experiments.
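Plain Vincentizing, as described above, can be sketched in a few lines. The function name and the choice of decile levels are illustrative, not taken from the article:

```python
import numpy as np

def vincentize(samples_per_subject, probs=None):
    """Group quantile function by Vincentizing: average each quantile
    level across the individual distributions (equal weights assumed)."""
    if probs is None:
        probs = np.linspace(0.1, 0.9, 9)   # deciles, an illustrative choice
    q = np.array([np.quantile(s, probs) for s in samples_per_subject])
    return q.mean(axis=0)

# Two shifted-exponential "reaction time" distributions (illustrative)
rng = np.random.default_rng(0)
a = 0.2 + rng.exponential(0.3, size=100_000)
b = 0.4 + rng.exponential(0.3, size=100_000)
group_q = vincentize([a, b])
```

By construction, the group quantile at each level is the average of the individual quantiles, so the group distribution inherits the common shape of the individual distributions when they differ only in location and scale.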

2.
Fabio G. Cozman 《Synthese》2012,186(2):577-600
This paper analyzes concepts of independence and assumptions of convexity in the theory of sets of probability distributions. The starting point is Kyburg and Pittarelli’s discussion of “convex Bayesianism” (in particular their proposals concerning E-admissibility, independence, and convexity). The paper offers an organized review of the literature on independence for sets of probability distributions; new results on graphoid properties and on the justification of “strong independence” (using exchangeability) are presented. Finally, the connection between Kyburg and Pittarelli’s results and recent developments on the axiomatization of non-binary preferences, and its impact on “complete” independence, are described.

3.
We consider the problems arising from using sequences of experiments to discover the causal structure among a set of variables, none of which are known ahead of time to be an “outcome”. In particular, we present various approaches to resolve conflicts in the experimental results arising from sampling variability in the experiments. We provide a sufficient condition that allows for pooling of data from experiments with different joint distributions over the variables. Satisfaction of the condition allows for an independence test with greater sample size that may resolve some of the conflicts in the experimental results. The pooling condition has its own problems, but should—due to its generality—be informative to techniques for meta-analysis.

4.
Probabilistic independence among multiple random variables (e.g., among the outputs of multiple spatial-frequency channels) has been invoked to explain two effects found with many kinds of stimuli: increments in detection performance due to “probability summation” and decrements in detection and identification performance due to “extrinsic uncertainty.” Quantitative predictions of such effects, however, depend on the precise assumptions. Here we calculate predictions from multidimensional signal-detection theory assuming any of several different probability distributions characterizing the random variables (including two-state, Gaussian, exponential, and double-exponential distributions) and either of two rules for combining the multiple random variables into a single decision variable (taking the maximum or summing them). In general, the probability distributions predicting shallower ROC curves predict greater increments due to summation but smaller decrements due to extrinsic uncertainty. Some probability distributions yield steep-enough ROC curves to actually predict decrements due to summation in blocked-summation experiments. Probability distribution matters much less for intermixed-summation than for blocked-summation predictions. Of the two combination rules, the sum-of-outputs rule usually predicts both greater increments due to summation and greater decrements due to extrinsic uncertainty. Put another way, of the two combination rules, the sum-of-outputs rule usually predicts better performance on the compound stimulus under any condition but worse performance on simple stimuli under intermixed conditions.
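One of the comparisons described above can be sketched by Monte Carlo, assuming two independent unit-variance Gaussian channels and a fixed criterion; all parameter values are illustrative, and the article's actual models cover more distributions and conditions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, c = 200_000, 1.0, 1.0   # trials, signal strength, decision criterion

def hit_rate(channels_with_signal, rule):
    """Hit rate for two independent unit-variance Gaussian channels.
    The signal adds d to the first `channels_with_signal` channels; the
    decision variable is the max or the sum of the channel outputs."""
    x = rng.normal(size=(n, 2))
    x[:, :channels_with_signal] += d
    dv = x.max(axis=1) if rule == "max" else x.sum(axis=1)
    return (dv > c).mean()

# Probability-summation increment: compound (both channels) vs simple (one)
inc_max = hit_rate(2, "max") - hit_rate(1, "max")
inc_sum = hit_rate(2, "sum") - hit_rate(1, "sum")
```

With these illustrative parameters the sum-of-outputs rule yields a larger summation increment than the max rule, consistent with the abstract's general claim.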

5.
Ruurik Holm 《Synthese》2013,190(18):4001-4007
This article discusses the classical problem of zero probability of universal generalizations in Rudolf Carnap’s inductive logic. A correction rule for updating the inductive method on the basis of evidence will be presented. It will be shown that this rule has the effect that infinite streams of uniform evidence assume a non-zero limit probability. Since Carnap’s inductive logic is based on finite domains of individuals, the probability of the corresponding universal quantification changes accordingly. This implies that universal generalizations can receive positive prior and posterior probabilities, even for (countably) infinite domains.

6.
王玉龙  申继亮 《心理科学》2012,35(1):238-242
A questionnaire survey of 203 family caregivers of stroke patients was conducted to explore the mechanism by which patients' functional independence affects caregivers' sense of burden. The results showed that 82.3% of the caregivers reported a clear sense of burden; patients' functional independence was significantly negatively correlated with caregiver burden; social support from the patient mediated the relation between functional independence and caregiver burden, whereas social support from sources other than the patient moderated it. The findings suggest that interventions targeting family caregiving burden should distinguish social support by its source and the corresponding mechanism.

7.
In this article we present symmetric diffusion networks, a family of networks that instantiate the principles of continuous, stochastic, adaptive and interactive propagation of information. Using methods of Markovian diffusion theory, we formalize the activation dynamics of these networks and then show that they can be trained to reproduce entire multivariate probability distributions on their outputs using the contrastive Hebbian learning rule (CHL). We show that CHL performs gradient descent on an error function that captures differences between desired and obtained continuous multivariate probability distributions. This allows the learning algorithm to go beyond expected values of output units and to approximate complete probability distributions on continuous multivariate activation spaces. We argue that learning continuous distributions is an important task underlying a variety of real-life situations that were beyond the scope of previous connectionist networks. Deterministic networks, such as backpropagation networks, cannot learn this task because they are limited to learning average values of independent output units. Previous stochastic connectionist networks could learn probability distributions but they were limited to discrete variables. Simulations show that symmetric diffusion networks can be trained with the CHL rule to approximate discrete and continuous probability distributions of various types.

8.
Yellott (1978) has shown that there are Thurstone models with probability distributions of different types that are equivalent for complete experiments with three alternatives. This note generalizes and extends his findings by showing that for any number of alternatives n, there exists a pair of Thurstone models with probability distributions of different types that are equivalent for complete experiments with n alternatives, but which are not equivalent for complete experiments with n + 1 alternatives.

9.
Peter Vallentyne 《Synthese》2000,122(3):261-290
Where there are infinitely many possible basic states of the world, a standard probability function must assign zero probability to each state – since assigning any positive probability to each of infinitely many states would make the total exceed one. This generates problems for any decision theory that appeals to expected utility or related notions. For it leads to the view that a situation in which one wins a million dollars if any of a thousand of the equally probable states is realized has an expected value of zero (since each such state has probability zero). But such a situation dominates the situation in which one wins nothing no matter what (which also has an expected value of zero), and so surely is more desirable. I formulate and defend some principles for evaluating options where standard probability functions cannot strictly represent probability – and in particular for cases where there is an infinitely spread, uniform distribution of probability. The principles appeal to standard probability functions, but overcome at least some of their limitations in such cases.

10.
We present a hierarchical Bayes approach to modeling parameter heterogeneity in generalized linear models. The model assumes that there are relevant subpopulations and that within each subpopulation the individual-level regression coefficients have a multivariate normal distribution. However, class membership is not known a priori, so the heterogeneity in the regression coefficients becomes a finite mixture of normal distributions. This approach combines the flexibility of semiparametric, latent class models that assume common parameters for each subpopulation and the parsimony of random effects models that assume normal distributions for the regression parameters. The number of subpopulations is selected to maximize the posterior probability of the model being true. Simulations are presented which document the performance of the methodology for synthetic data with known heterogeneity and number of subpopulations. An application is presented concerning preferences for various aspects of personal computers.
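The heterogeneity structure described above (a finite mixture of multivariate normals over individual-level coefficients) can be sketched as a generative draw; all parameter values and names are illustrative, and none of the paper's fitting machinery is reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two latent classes; within each, individual-level coefficients are
# multivariate normal. All numbers below are illustrative.
weights = np.array([0.6, 0.4])
means = np.array([[1.0, -0.5], [-2.0, 0.8]])
covs = np.array([np.eye(2) * 0.1, np.eye(2) * 0.2])

def draw_coefficients(n_units):
    """Draw unit-level regression coefficients from the finite mixture."""
    z = rng.choice(2, size=n_units, p=weights)   # latent class labels
    beta = np.empty((n_units, 2))
    for k in (0, 1):
        members = z == k
        beta[members] = rng.multivariate_normal(means[k], covs[k],
                                                size=members.sum())
    return beta, z

beta, z = draw_coefficients(20_000)
```

In the paper's setting the class labels `z` are unobserved; inference recovers the mixture from the `beta` implied by each unit's responses.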

11.
When item characteristic curves are nondecreasing functions of a latent variable, the conditional or local independence of item responses given the latent variable implies nonnegative conditional covariances between all monotone increasing functions of a set of item responses given any function of the remaining item responses. This general result provides a basis for testing the conditional independence assumption without first specifying a parametric form for the nondecreasing item characteristic curves. The proposed tests are simple, have known asymptotic null distributions, and possess certain optimal properties. In an example, the conditional independence hypothesis is rejected for all possible forms of monotone item characteristic curves.
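The theorem's implication can be illustrated by simulation under an assumed Rasch model: the covariance of two items, conditional on a function of the remaining items (here, their rest-score), should be nonnegative. Difficulties and sample size below are illustrative, and this is a demonstration of the implication, not the paper's test statistics:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
theta = rng.normal(size=n)                       # latent variable
b = np.array([-1.0, -0.3, 0.4, 1.0])             # illustrative difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))  # nondecreasing ICCs (Rasch)
X = (rng.uniform(size=(n, 4)) < p).astype(int)

# Covariance of items 0 and 1, conditional on the rest-score X2 + X3:
# local independence plus monotone ICCs implies each is nonnegative.
rest = X[:, 2] + X[:, 3]
cond_covs = [np.cov(X[rest == r, 0], X[rest == r, 1])[0, 1]
             for r in (0, 1, 2)]
```

Because the latent trait still varies within each rest-score group, the conditional covariances come out strictly positive here; a violation of local independence could drive them negative.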

12.
There are a number of reasons for being interested in uncertainty, and there are also a number of uncertainty formalisms. These formalisms are not unrelated. It is argued that they can all be represented as special cases of the approach of taking probabilities to be determined by sets of probability functions defined on an algebra of statements. Thus, interval probabilities should be construed as maximum and minimum probabilities within a set of distributions, Glenn Shafer's belief functions should be construed as lower probabilities, etc. Updating probabilities introduces new considerations, and it is shown that the representation of belief as a set of probabilities conflicts in this regard with the updating procedures advocated by Shafer. The attempt to make subjectivistic probability plausible as a doctrine of rational belief by making it more flowery — i.e., by adding new dimensions — does not succeed. But, if one is going to represent beliefs by sets of distributions, those sets of distributions might as well be based in statistical knowledge, as they are in epistemological or evidential probability.

13.
Cognitive psychologists have characterized the temporal properties of human information processing in terms of discrete and continuous models. Discrete models postulate that component mental processes transmit a finite number of intermittent outputs (quanta) of information over time, whereas continuous models postulate that information is transmitted in a gradual fashion. These postulates may be tested by using an adaptive response-priming procedure and analysis of reaction-time mixture distributions. Three experiments based on this procedure and analysis are reported. The experiments involved varying the temporal interval between the onsets of a prime stimulus and a subsequent test stimulus to which a response had to be made. Reaction time was measured as a function of the duration of the priming interval and the type of prime stimulus. Discrete models predict that manipulations of the priming interval should yield a family of reaction-time mixture distributions formed from a finite number of underlying basis distributions, corresponding to distinct preparatory states. Continuous models make a different prediction. Goodness-of-fit tests between these predictions and the data supported either the discrete or the continuous models, depending on the nature of the stimuli and responses being used. When there were only two alternative responses and the stimulus-response mapping was a compatible one, discrete models with two or three states of preparation fit the results best. For larger response sets with an incompatible stimulus-response mapping, a continuous model fit some of the data better. These results are relevant to the interpretation of reaction-time data in a variety of contexts and to the analysis of speed-accuracy trade-offs in mental processes.

14.
Experience with real data indicates that psychometric measures often have heavy-tailed distributions. This is known to be a serious problem when comparing the means of two independent groups because heavy-tailed distributions can have a serious effect on power. Another problem that is common in some areas is outliers. This paper suggests an approach to these problems based on the one-step M-estimator of location. Simulations indicate that the new procedure provides very good control over the probability of a Type I error even when distributions are skewed, have different shapes, and the variances are unequal. Moreover, the new procedure has considerably more power than Welch's method when distributions have heavy tails, and it compares well to Yuen's method for comparing trimmed means. Wilcox's median procedure has about the same power as the proposed procedure, but Wilcox's method is based on a statistic that has a finite sample breakdown point of only 1/n, where n is the sample size.
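One common textbook form of a one-step M-estimator of location uses Huber's ψ with MAD scaling: start at the median and take a single Newton-type step. This is a sketch of that general idea; the paper's exact estimator and tuning constants may differ:

```python
import numpy as np

def one_step_m(x, k=1.28):
    """One-step M-estimator of location with Huber's psi: start at the
    median, scale by the normalized MAD, take one Newton-type step."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) / 0.6745   # consistent at the normal
    u = (x - med) / mad
    psi = np.clip(u, -k, k)                     # Huber psi
    in_core = (np.abs(u) <= k).sum()            # sum of psi'(u)
    return med + mad * psi.sum() / in_core
```

On symmetric data the estimate coincides with the median, while a gross outlier moves it only slightly, which is the breakdown behavior that motivates the comparison with Wilcox's median procedure.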

15.
The aim of the present secondary data analysis was to explore antecedents and consequences of family socialization values emphasizing independence or interdependence, using a Taiwanese national probability sample. Analysis of variance revealed that those who were male, older, and less educated emphasized greater interdependence values. In contrast, those who were younger, of higher social status, and living in urban areas emphasized greater independence values. Multiple regression analysis further revealed that valuing interdependence was related to preferring a greater number of offspring, a higher endorsement of filial piety, and greater marital and life satisfaction. Finally, in this national sample, endorsement of independence and interdependence values was equivalent.

16.
Epistemology and probability
John L. Pollock 《Synthese》1983,55(2):231-252
Probability is sometimes regarded as a universal panacea for epistemology. It has been supposed that the rationality of belief is almost entirely a matter of probabilities. Unfortunately, those philosophers who have thought about this most extensively have tended to be probability theorists first, and epistemologists only secondarily. In my estimation, this has tended to make them insensitive to the complexities exhibited by epistemic justification. In this paper I propose to turn the tables. I begin by laying out some rather simple and uncontroversial features of the structure of epistemic justification, and then go on to ask what we can conclude about the connection between epistemology and probability in the light of those features. My conclusion is that probability plays no central role in epistemology. This is not to say that probability plays no role at all. In the course of the investigation, I defend a pair of probabilistic acceptance rules which enable us, under some circumstances, to arrive at justified belief on the basis of high probability. But these rules are of quite limited scope. The effect of there being such rules is merely that probability provides one source for justified belief, on a par with perception, memory, etc. There is no way probability can provide a universal cure for all our epistemological ills.

17.
We study various axioms of discrete probabilistic choice, measuring how restrictive they are, both alone and in the presence of other axioms, given a specific class of prior distributions over a complete collection of finite choice probabilities. We do this by using Monte Carlo simulation to compute, for a range of prior distributions, probabilities that various simple and compound axioms hold. For example, the probability of the triangle inequality is usually many orders of magnitude higher than the probability of random utility. While neither the triangle inequality nor weak stochastic transitivity imply the other, the conditional probability that one holds given the other holds is greater than the marginal probability, for all priors in the class we consider. The reciprocal of the prior probability that an axiom holds is an upper bound on the Bayes factor in favor of a restricted model, in which the axiom holds, against an unrestricted model. The relatively high prior probability of the triangle inequality limits the degree of support that data from a single decision maker can provide in its favor. The much lower probability of random utility implies that the Bayes factor in favor of it can be much higher, for suitable data.
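The Monte Carlo strategy can be illustrated for a single triple of alternatives under an independent uniform prior. For three alternatives, the six triangle inequalities p(x,y) + p(y,z) - p(x,z) <= 1 reduce, via p(y,x) = 1 - p(x,y), to one two-sided bound, whose prior probability works out to 2/3. This is a sketch of the idea only, not the paper's computation over complete collections of choice probabilities and classes of priors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
# Independent uniform prior on the binary choice probabilities
# p(a,b), p(b,c), p(a,c) for one triple of alternatives.
p_ab, p_bc, p_ac = rng.uniform(size=(3, n))

# All six triangle inequalities reduce to this two-sided bound.
holds = (p_ac >= p_ab + p_bc - 1) & (p_ac <= p_ab + p_bc)
prior_prob = holds.mean()   # analytic value under this prior: 2/3
```

The reciprocal of `prior_prob` (here 3/2) then bounds the Bayes factor a single decision maker's data can provide in favor of the axiom, which is the sense in which a permissive axiom can earn only limited support.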

18.
It has been argued by Shepard that there is a robust psychological law that relates the distance between a pair of items in psychological space and the probability that they will be perceived as similar. Specifically, this probability is a negative exponential function of the distance between the pair of items. In experimental contexts, distance is typically defined in terms of a multidimensional space—but this assumption seems unlikely to hold for complex stimuli. We show that, nonetheless, the Universal Law of Generalization can be derived in the more complex setting of arbitrary stimuli, using a much more universal measure of distance. This universal distance is defined as the length of the shortest program that transforms the representations of the two items of interest into one another: The algorithmic information distance. It is universal in the sense that it minorizes every computable distance: It is the smallest computable distance. We show that the Universal Law of Generalization holds with probability going to one—provided the probabilities concerned are computable. We also give a mathematically more appealing form of the Universal Law.
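The algorithmic information distance itself is uncomputable, so practical illustrations typically substitute a real compressor, giving the normalized compression distance (NCD). The sketch below pairs NCD with Shepard's negative-exponential law; zlib is a weak compressor and the numbers are illustrative only, not the construction used in the paper:

```python
import math
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (uncomputable) algorithmic information distance."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

def similarity(x: bytes, y: bytes) -> float:
    """Shepard-style generalization: negative exponential of distance."""
    return math.exp(-ncd(x, y))

near = ncd(b"abracadabra" * 200, b"abracadabra" * 200)
far = ncd(b"abracadabra" * 200, b"0123456789" * 220)
```

A pair of identical strings gets a small distance (and hence high generalization probability), while unrelated strings of similar length get a larger one.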

19.
Observers completed perceptual categorization tasks that included 25 base-rate/payoff conditions constructed from the factorial combination of five base-rate ratios (1:3, 1:2, 1:1, 2:1, and 3:1) with five payoff ratios (1:3, 1:2, 1:1, 2:1, and 3:1). This large database allowed an initial comparison of the competition between reward and accuracy maximization (COBRA) hypothesis with a competition between reward maximization and probability matching (COBRM) hypothesis, and an extensive and critical comparison of the flat-maxima hypothesis with the independence assumption of the optimal classifier. Model-based instantiations of the COBRA and COBRM hypotheses provided good accounts of the data, but there was a consistent advantage for the COBRM instantiation early in learning and for the COBRA instantiation later in learning. This pattern held in the present study and in a reanalysis of Bohil and Maddox (2003). Strong support was obtained for the flat-maxima hypothesis over the independence assumption, especially as the observers gained experience with the task. Model parameters indicated that observers' reward-maximizing decision criterion rapidly approaches the optimal value and that more weight is placed on accuracy maximization in separate base-rate/payoff conditions than in simultaneous base-rate/payoff conditions. The superiority of the flat-maxima hypothesis suggests that violations of the independence assumption are to be expected, and are well captured by the flat-maxima hypothesis, with no need for any additional assumptions.

20.
Permutation tests are based on all possible arrangements of observed data sets. Consequently, such tests yield exact probability values obtained from discrete probability distributions. An exact nondirectional method to combine independent probability values that obey discrete probability distributions is introduced. The exact method is the discrete analog to Fisher's classical method for combining probability values from independent continuous probability distributions. If the combination of probability values includes even one probability value that obeys a sparse discrete probability distribution, then Fisher's classical method may be grossly inadequate.
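Fisher's classical method, the continuous baseline referred to above, combines k independent p-values via X = -2 * sum(ln p_i), which is chi-square with 2k degrees of freedom when each p-value is continuous Uniform(0,1) under the null; for even degrees of freedom the survival function has a closed form. A minimal sketch of the continuous method (the abstract's point is precisely that this uniformity assumption can fail badly for sparse discrete distributions):

```python
import math

def fisher_combine(pvals):
    """Fisher's method for k independent continuous p-values:
    X = -2 * sum(ln p_i) ~ chi-square with 2k df under the null.
    For even df the survival function has a closed form."""
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)        # X / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

combined = fisher_combine([0.08, 0.12, 0.20])
```

With a single p-value the method returns that p-value unchanged, and smaller component p-values always yield a smaller combined value.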

