Similar Articles
20 similar articles found (search time: 15 ms).
1.
The multinomial (Dirichlet) model, derived from de Finetti's concept of exchangeability, is proposed as a general Bayesian framework to test axioms on data, in particular, deterministic axioms characterizing theories of choice or measurement. For testing, the proposed framework does not require a deterministic axiom to be cast in a probabilistic form (e.g., casting deterministic transitivity as weak stochastic transitivity). The generality of this framework is demonstrated through empirical tests of 16 different axioms, including transitivity, consequence monotonicity, segregation, additivity of joint receipt, stochastic dominance, coalescing, restricted branch independence, double cancellation, triple cancellation, and the Thomsen condition. The model generalizes many previously proposed methods of axiom testing under measurement error, is analytically tractable, and provides a Bayesian framework for the random relation approach to probabilistic measurement (J. Math. Psychol. 40 (1996) 219). A hierarchical and nonparametric generalization of the model is discussed.
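As a minimal sketch of testing an axiom as a region of the multinomial parameter space, the Python code below estimates the posterior probability that three pairwise choice probabilities satisfy weak stochastic transitivity, using the binary (Beta) special case of the Dirichlet model. The counts and uniform priors are hypothetical, and the sketch deliberately uses the probabilistic recasting that the abstract contrasts with, purely to illustrate region-based Bayesian axiom testing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired-comparison counts: (wins for the first option,
# wins for the second) for the pairs (a,b), (b,c), and (a,c).
counts = {"ab": (32, 18), "bc": (29, 21), "ac": (26, 24)}

def posterior_prob_wst(counts, n_draws=100_000, prior=(1.0, 1.0)):
    """Monte Carlo posterior probability that the three choice
    probabilities satisfy weak stochastic transitivity, under
    independent Beta posteriors (the binary case of the Dirichlet)."""
    draws = {k: rng.beta(prior[0] + w, prior[1] + l, n_draws)
             for k, (w, l) in counts.items()}
    p_ab, p_bc, p_ac = draws["ab"], draws["bc"], draws["ac"]
    # WST fails for a triple exactly when the majority relation cycles.
    cycle1 = (p_ab >= .5) & (p_bc >= .5) & (p_ac < .5)  # a>b, b>c, yet c>a
    cycle2 = (p_ab < .5) & (p_bc < .5) & (p_ac >= .5)   # b>a, c>b, yet a>c
    return 1.0 - np.mean(cycle1 | cycle2)

print(posterior_prob_wst(counts))
```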

2.
3.
The axioms of additive conjoint measurement provide a means of testing the hypothesis that test data can be placed onto a scale with equal-interval properties. However, the axioms are difficult to verify given that item responses may be subject to measurement error. A Bayesian method exists for imposing order restrictions from additive conjoint measurement while estimating the probability of a correct response. In this study an improved version of that methodology is evaluated via simulation. The approach is then applied to data from a reading assessment intentionally designed to support an equal-interval scaling.
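For concreteness, the sketch below checks one of those order restrictions, the double cancellation condition, on a hypothetical 3 × 3 table of correct-response probabilities; the table is generated from an additive structure, so zero violations are expected. All numbers are illustrative, not from the study.

```python
import numpy as np
from itertools import permutations

def double_cancellation_violations(P):
    """Count double cancellation violations in a conjoint table P: for
    rows x1, x2, x3 and columns y1, y2, y3, P[x1,y2] >= P[x2,y3] and
    P[x2,y1] >= P[x3,y2] must together imply P[x1,y1] >= P[x3,y3]."""
    r, c = P.shape
    violations = 0
    for x1, x2, x3 in permutations(range(r), 3):
        for y1, y2, y3 in permutations(range(c), 3):
            if P[x1, y2] >= P[x2, y3] and P[x2, y1] >= P[x3, y2]:
                violations += P[x1, y1] < P[x3, y3]
    return violations

# A table built as a monotone transform of row + column effects is
# additively representable, so it satisfies the axiom by construction.
u = np.array([0.0, 0.6, 1.1])   # hypothetical row (ability) effects
v = np.array([0.0, 0.4, 0.9])   # hypothetical column (item) effects
P = 1.0 / (1.0 + np.exp(-(u[:, None] + v[None, :] - 1.0)))
print(double_cancellation_violations(P))  # 0
```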

4.
Multinomial random variables are used across many disciplines to model categorical outcomes. Under this framework, investigators often use a likelihood ratio test to determine goodness-of-fit. If the permissible parameter space of such models is defined by inequality constraints, then the maximum likelihood estimator may lie on the boundary of the parameter space. Under this condition, the asymptotic distribution of the likelihood ratio test is no longer a simple χ2 distribution. This article summarizes recent developments in the constrained inference literature as they pertain to the testing of multinomial random variables, and extends existing results by considering the case of jointly independent multinomial random variables of varying categorical size. It provides an application of this methodology to axiomatic measurement theory as a means of evaluating properly operationalized measurement axioms. The article generalizes Iverson and Falmagne's [Iverson, G. J. & Falmagne, J. C. (1985). Statistical issues in measurement. Mathematical Social Sciences, 10, 131-153] seminal work on the empirical evaluation of measurement axioms and provides a classical counterpart to Myung, Karabatsos, and Iverson's [Myung, J. I., Karabatsos, G. & Iverson, G. J. (2005). A Bayesian approach to testing decision making axioms. Journal of Mathematical Psychology, 49, 205-225] Bayesian methodology on the same topic.
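A minimal sketch of the boundary problem and one pragmatic response: fit a multinomial under a hypothetical ordering constraint, form the likelihood ratio statistic against the unconstrained fit, and calibrate it by parametric bootstrap from the constrained estimate rather than a plain χ2 reference (the asymptotic law is a chi-bar-squared mixture). The counts are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
counts = np.array([30, 22, 48])   # hypothetical counts violating p1 <= p2
n = counts.sum()

def neg_loglik(p, counts):
    return -np.sum(counts * np.log(np.clip(p, 1e-12, 1.0)))

def constrained_mle(counts):
    """Multinomial MLE under p1 <= p2 <= p3 (an operationalized axiom)."""
    k = len(counts)
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
    cons += [{"type": "ineq", "fun": lambda p, i=i: p[i + 1] - p[i]}
             for i in range(k - 1)]
    return minimize(neg_loglik, np.full(k, 1.0 / k), args=(counts,),
                    method="SLSQP", bounds=[(1e-9, 1.0)] * k,
                    constraints=cons).x

def lr_stat(counts):
    ll0 = -neg_loglik(constrained_mle(counts), counts)
    ll1 = -neg_loglik(counts / counts.sum(), counts)  # unconstrained MLE
    return 2.0 * (ll1 - ll0)

observed = lr_stat(counts)
p_hat = constrained_mle(counts)
boot = np.array([lr_stat(rng.multinomial(n, p_hat)) for _ in range(1000)])
print(f"LR = {observed:.3f}, bootstrap p = {np.mean(boot >= observed):.3f}")
```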

5.
Quantitative opponent-colors theory is based on cancellation of redness by admixture of a standard green, of greenness by admixture of a standard red, of yellowness by blue, and of blueness by yellow. The fundamental data are therefore the equilibrium colors: the set A1 of lights that are in red/green equilibrium and the set A2 of lights that are in yellow/blue equilibrium. The result that a cancellation function is linearly related to the color-matching functions can be proved from more basic axioms, particularly, the closure of the set of equilibrium colors under linear operations. Measurement analysis treats this as a representation theorem, in which the closure properties are axioms and in which the colorimetric homomorphism has the cancellation functions as two of its coordinates. Consideration of equivalence relations based on opponent cancellation leads to a further step: analysis of equivalence relations based on direct matching of hue attributes. For additive whiteness matching, this yields a simple extension of the representation theorem, in which the third coordinate is luminance. For other attributes, precise representation theorems must await a better qualitative characterization of various nonlinear phenomena, especially the veiling of one hue attribute by another and the various hue shifts.

6.
This article examines a Bayesian nonparametric approach to model selection and model testing, which is based on concepts from Bayesian decision theory and information theory. The approach can be used to evaluate the predictive utility of any model that is either probabilistic or deterministic, with that model analyzed under either the Bayesian or classical-frequentist approach to statistical inference. Conditional on an observed set of data, generated from some unknown true sampling density, the approach identifies the “best” model as the one that predicts a sampling density that explains the most information about the true density. Furthermore, in the approach, the decision is to reject a model when it does not explain enough information about the true density (according to a straightforward calibration of the Kullback-Leibler divergence measure). The posterior estimate of the true density is based on a Bayesian nonparametric prior that can give positive support to the entire space of sampling densities (defined on some sample space). This article also discusses the theoretical and practical advantages of the Bayesian nonparametric approach over all other types of model selection procedures, and over any model testing procedure that depends on interpreting a p-value. Finally, the Bayesian nonparametric approach is illustrated on four real data sets, in the comparison and testing of order-constrained models, cognitive models, models of choice-behavior, and a test of a general psychometric model.
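The scoring idea can be sketched numerically. Below, a kernel density estimate stands in for the Bayesian nonparametric posterior estimate of the true density (an admitted simplification), and each candidate model's predictive density is scored by its Kullback-Leibler divergence from that estimate; the data and candidate models are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.gamma(shape=3.0, scale=1.0, size=300)

grid = np.linspace(0.01, 15.0, 2000)
dx = grid[1] - grid[0]
# Stand-in for the nonparametric estimate of the true sampling density.
true_hat = np.clip(stats.gaussian_kde(data)(grid), 1e-300, None)
true_hat /= true_hat.sum() * dx

models = {"normal": stats.norm(data.mean(), data.std()).pdf(grid),
          "gamma": stats.gamma(*stats.gamma.fit(data)).pdf(grid)}

for name, dens in models.items():
    dens = np.clip(dens, 1e-300, None)
    kl = np.sum(true_hat * np.log(true_hat / dens)) * dx
    print(f"D_KL(true || {name}) = {kl:.4f}")   # smaller explains more
```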

7.
A method is developed for determining the absolute and relative strengths of qualitative preference axioms in normative Bayesian decision theory. These strengths are calculated for the three most common qualitative axioms: transitivity, the sure-thing principle, and dominance. The relative strength of the latter two axioms with respect to transitivity is calculated for special cases, and a bound is derived which is applicable to a larger class of decision problems. Possible implications of this theoretical work for decision heuristics are discussed.

8.
Klotzke, Konrad, & Fox, Jean-Paul (2019). Psychometrika, 84(3), 649-672.

A multivariate generalization of the log-normal model for response times is proposed within an innovative Bayesian modeling framework. A novel Bayesian Covariance Structure Model (BCSM) is proposed, in which the inclusion of random-effect variables is avoided and their implied dependencies are instead modeled directly through an additive covariance structure. This makes it possible to jointly model complex dependencies due to, for instance, the test format (e.g., testlets, complex constructs), time limits, or features of digitally based assessments. A class of conjugate priors is proposed for the random-effect variance parameters in the BCSM framework. They give support to testing the presence of random effects, reduce boundary effects by allowing non-positive (co)variance parameters, and support accurate estimation even for very small true variance parameters. The conjugate priors under the BCSM lead to efficient posterior computation. Bayes factors and the Bayesian Information Criterion are discussed for the purpose of model selection in the new framework. In two simulation studies, satisfactory performance of the MCMC algorithm and of the Bayes factor is shown. In comparison with parameter expansion through a half-Cauchy prior, estimates of variance parameters close to zero show no bias, and undercoverage of credible intervals is avoided. An empirical example showcases the utility of the BCSM for response times to test the influence of item presentation formats on the test performance of students in a Latin square experimental design.
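The core modeling device is easy to state: instead of introducing testlet random effects, each testlet contributes an additive constant to the covariance of its items. A small sketch of the implied covariance matrix, with hypothetical dimensions and parameter values:

```python
import numpy as np

def bcsm_covariance(n_items, testlets, sigma2, thetas):
    """Implied covariance of (log) response times under an additive
    covariance structure: a measurement-variance diagonal plus one
    common additive component per testlet, with no explicit
    random-effect variables."""
    Sigma = sigma2 * np.eye(n_items)
    for items, theta in zip(testlets, thetas):
        Sigma[np.ix_(items, items)] += theta  # theta may be negative
    return Sigma

# Hypothetical six-item test with two three-item testlets; the second
# covariance parameter is slightly negative, which the BCSM priors allow.
print(bcsm_covariance(6, [[0, 1, 2], [3, 4, 5]], 0.50, [0.12, -0.03]))
```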


9.
This paper addresses conditions for the existence of additive separable utilities. It considers mainly two-dimensional Cartesian products in which restricted solvability holds w.r.t. one component, but some results are extended to n-dimensional spaces. The main result shows that, in general, cancellation axioms of any order are required to ensure additive representability. More precisely, a generic family of counterexamples is provided, proving that the (m+1)st order cancellation axiom cannot be derived from the mth order cancellation axiom when m is even. However, a special case is considered in which the existence of additive representations can be derived from the independence axiom alone. Unlike the classical representation theorems, these representations are not unique up to strictly positive affine transformations, but follow Fishburn's (1981) uniqueness property.

10.
In the field of cognitive psychology, the p-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the p-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is conveyed by a Bayesian hypothesis test, which prefers the model with the highest average likelihood. One of the main problems with this Bayesian hypothesis test, however, is that it often requires relatively sophisticated numerical methods for its computation. Here we draw attention to the Savage–Dickey density ratio method, a method that can be used to compute the result of a Bayesian hypothesis test for nested models and under certain plausible restrictions on the parameter priors. Practical examples demonstrate the method’s validity, generality, and flexibility.
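The mechanics are compact enough to show in full for a conjugate case. For a binomial rate with H0: θ = 0.5 nested in H1: θ ~ Beta(a, b), the Savage–Dickey Bayes factor in favor of H0 is the posterior density at the test value divided by the prior density there; the counts below are hypothetical.

```python
from scipy import stats

k, n = 62, 100            # hypothetical successes and trials
a, b = 1.0, 1.0           # uniform Beta prior under H1
theta0 = 0.5              # the nested null value

# Posterior under H1 is Beta(a + k, b + n - k); the Savage-Dickey ratio
# evaluates posterior and prior densities at theta0.
bf01 = (stats.beta.pdf(theta0, a + k, b + n - k)
        / stats.beta.pdf(theta0, a, b))
print(f"BF01 = {bf01:.3f}  (BF10 = {1 / bf01:.3f})")
```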

11.
Boris Čulina (2018). Axiomathes, 28(2), 155-180.
In this article I develop an elementary system of axioms for Euclidean geometry. On one hand, the system is based on symmetry principles that express our a priori ignorant approach to space: all places are the same to us (the homogeneity of space), all directions are the same to us (the isotropy of space), and all units of length we use to create geometric figures are the same to us (the scale invariance of space). On the other hand, through a process of algebraic simplification, this system of axioms directly yields Weyl's system of axioms for Euclidean geometry. The system of axioms, together with its a priori interpretation, offers new perspectives on the philosophy and pedagogy of mathematics: (1) it supports the thesis that Euclidean geometry is a priori, (2) it supports the thesis that in modern mathematics Weyl's system of axioms is preferred over Euclid's because it reflects the underlying a priori symmetries, and (3) it suggests a new and promising approach to learning geometry which, through Weyl's system of axioms, leads directly from the essential symmetry principles of geometry to modern mathematics.

12.
A new theoretical approach to Aristotelian Logic (AL) based on three axioms has been recently introduced. This formalization of the theory allowed for the unification of its previously disconnected traditional branches, thus restoring the theoretical unity of AL. In this brief paper, the applicability of the three AL axioms to Propositional Logic (PL) is explored. First, it is shown how the AL axioms can be applied to some simple PL arguments in a straightforward manner. Second, the development of a proof method for PL inspired by the AL axioms is presented. This method mimics the underlying mechanics of the proof method from AL, and offers a complementary alternative to proof methods such as truth trees.

13.
The tendency to test outcomes that are predicted by our current theory (the confirmation bias) is one of the best-known biases of human decision making. We prove that the confirmation bias is an optimal strategy for testing hypotheses when those hypotheses are deterministic, each making a single prediction about the next event in a sequence. Our proof applies for two normative standards commonly used for evaluating hypothesis testing: maximizing expected information gain and maximizing the probability of falsifying the current hypothesis. This analysis rests on two assumptions: (a) that people predict the next event in a sequence in a way that is consistent with Bayesian inference; and (b) when testing hypotheses, people test the hypothesis to which they assign highest posterior probability. We present four behavioral experiments that support these assumptions, showing that a simple Bayesian model can capture people's predictions about numerical sequences (Experiments 1 and 2), and that we can alter the hypotheses that people choose to test by manipulating the prior probability of those hypotheses (Experiments 3 and 4).
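The optimality argument can be made concrete for a toy sequence. With deterministic hypotheses that each predict a single next element, a yes/no test of a predicted value is answered "yes" with probability equal to the posterior mass on the hypotheses predicting it, and by the entropy chain rule the expected information gain of that test is the binary entropy of this mass. The hypotheses and posterior probabilities below are hypothetical.

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Hypothetical posterior over deterministic rules for "2, 4, 6, ?",
# each predicting one next element.
hypotheses = {"add 2": (0.5, 8),
              "add the last two": (0.3, 10),
              "repeat 2, 4, 6": (0.2, 2)}

# Pool posterior mass by predicted value; testing the most probable
# prediction maximizes expected information gain when no pooled mass
# exceeds one half -- the sense in which the positive test is optimal.
mass = {}
for _, (p, v) in hypotheses.items():
    mass[v] = mass.get(v, 0.0) + p
for v, m in sorted(mass.items(), key=lambda kv: -kv[1]):
    print(f"test 'is the next element {v}?': "
          f"P(yes) = {m:.2f}, EIG = {binary_entropy(m):.3f} bits")
```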

14.
An expression is given for the number of ways of ranking the cells of an r by c factorial design so as to satisfy independence. For selected values of r and c, estimates are given of the number of rankings that satisfy both independence and double cancellation, and also of the number of rankings allowing an additive representation. These results may be used in at least two ways: first, in evaluating the probability of the satisfaction of certain measurement axioms by chance; and second, in placing a lower bound on the amount of information necessary to establish the ordering of the cells of a factorial design when it is known that these axioms are satisfied.
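For very small designs the count can be brute-forced, which makes the chance-satisfaction idea tangible: enumerate all rankings of the cells and keep those in which the rows are ordered the same way in every column and vice versa. The sketch below verifies, for instance, that 8 of the 24 rankings of a 2 by 2 design satisfy independence; larger designs require the estimation methods the article discusses.

```python
from itertools import permutations

def satisfies_independence(rank, r, c):
    """rank[i][j] is the rank of cell (i, j); independence requires the
    row order to agree across all columns and the column order to agree
    across all rows."""
    for i1 in range(r):
        for i2 in range(r):
            if len({rank[i1][j] > rank[i2][j] for j in range(c)}) > 1:
                return False
    for j1 in range(c):
        for j2 in range(c):
            if len({rank[i][j1] > rank[i][j2] for i in range(r)}) > 1:
                return False
    return True

def count_independent_rankings(r, c):
    total = 0
    for perm in permutations(range(r * c)):
        rank = [list(perm[i * c:(i + 1) * c]) for i in range(r)]
        total += satisfies_independence(rank, r, c)
    return total

print(count_independent_rankings(2, 2))   # 8 of 4! = 24 rankings
```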

15.
In the field of psychology, the practice of p value null-hypothesis testing is as widespread as ever. Despite this popularity, or perhaps because of it, most psychologists are not aware of the statistical peculiarities of the p value procedure. In particular, p values are based on data that were never observed, and these hypothetical data are themselves influenced by subjective intentions. Moreover, p values do not quantify statistical evidence. This article reviews these p value problems and illustrates each problem with concrete examples. The three problems are familiar to statisticians but may be new to psychologists. A practical solution to these p value problems is to adopt a model selection perspective and use the Bayesian information criterion (BIC) for statistical inference (Raftery, 1995). The BIC provides an approximation to a Bayesian hypothesis test, does not require the specification of priors, and can be easily calculated from SPSS output.
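The proposed solution amounts to two lines of arithmetic once each model's maximized log-likelihood is in hand; the numbers below are hypothetical, and Raftery's (1995) approximation converts the BIC difference into a Bayes factor.

```python
import math

def bic(loglik, k, n):
    """BIC from the maximized log-likelihood, the number of free
    parameters k, and the sample size n."""
    return -2.0 * loglik + k * math.log(n)

bic0 = bic(-520.3, k=2, n=120)   # hypothetical null model
bic1 = bic(-516.9, k=3, n=120)   # hypothetical alternative model

bf01 = math.exp(-(bic0 - bic1) / 2.0)   # approximate Bayes factor for H0
posterior_h0 = bf01 / (1.0 + bf01)      # assuming equal prior odds
print(f"BF01 = {bf01:.2f}; P(H0 | data) = {posterior_h0:.2f}")
```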

16.
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.
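The "approximate zero" is just a normal prior whose variance is calibrated to a substantive tolerance. A sketch of one common calibration (the ±0.2 tolerance is a hypothetical choice, not prescribed by the article):

```python
from scipy import stats

# Choose the prior variance so that 95% of the prior mass for a
# cross-loading lies within +/-0.2 of zero.
bound = 0.2
sigma = bound / stats.norm.ppf(0.975)
print(f"cross-loading prior: N(0, {sigma ** 2:.4f})")  # roughly N(0, 0.01)
print("P(|loading| <= 0.2) =",
      stats.norm.cdf(bound, 0, sigma) - stats.norm.cdf(-bound, 0, sigma))
```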

17.
Schulte, Oliver (1999). Synthese, 118(3), 329-361.
This paper analyzes the notion of a minimal belief change that incorporates new information. I apply the fundamental decision-theoretic principle of Pareto-optimality to derive a notion of minimal belief change, for two different representations of belief: First, for beliefs represented by a theory – a deductively closed set of sentences or propositions – and second for beliefs represented by an axiomatic base for a theory. Three postulates exactly characterize Pareto-minimal revisions of theories, yielding a weaker set of constraints than the standard AGM postulates. The Levi identity characterizes Pareto-minimal revisions of belief bases: a change of belief base is Pareto-minimal if and only if the change satisfies the Levi identity (for “maxichoice” contraction operators). Thus for belief bases, Pareto-minimality imposes constraints that the AGM postulates do not. The Ramsey test is a well-known way of establishing connections between belief revision postulates and axioms for conditionals (“if p, then q”). Pareto-minimal theory change corresponds exactly to three characteristic axioms of counterfactual systems: a theory revision operator that satisfies the Ramsey test validates these axioms if and only if the revision operator is Pareto-minimal.
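The Levi identity itself is mechanical enough to sketch for a finite propositional belief base: contract by the negation of the new sentence (here with a greedy maxichoice contraction), then add the sentence. The base and revision below are hypothetical, and sympy's satisfiability check stands in for a theorem prover.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Not, Or
from sympy.logic.inference import satisfiable

p, q = symbols("p q")

def maxichoice_contract(base, phi):
    """One maxichoice contraction: greedily keep each sentence while the
    kept set remains consistent with Not(phi), yielding a maximal
    subset of the base that does not entail phi."""
    kept = []
    for s in base:
        if satisfiable(And(*(kept + [s, Not(phi)]))):
            kept.append(s)
    return kept

def levi_revision(base, phi):
    """Levi identity: revise by phi = contract by Not(phi), then add phi."""
    return maxichoice_contract(base, Not(phi)) + [phi]

base = [p, Or(Not(p), q)]            # believes p and p -> q (hence q)
print(levi_revision(base, Not(q)))   # [p, ~q]: p -> q was retracted
```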

18.
Empirical Bayes meta-analysis provides a useful framework for examining test validation. The fixed-effects case in which ρ has a single value corresponds to the inference that the situational specificity hypothesis can be rejected in a validity generalization study. A Bayesian analysis of such a case provides a simple and powerful test of ρ = 0; such a test has practical implications for significance testing in test validation. The random-effects case in which the between-study variance σ²ρ > 0 provides an explicit method with which to assess the relative importance of local validity studies and previous meta-analyses. Simulated data are used to illustrate both cases. Results of published meta-analyses are used to show that local validation becomes increasingly important as σ²ρ increases. The meaning of the term validity generalization is explored, and the problem of what can be inferred about test transportability in the random-effects case is described.
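The random-effects trade-off has a closed form worth seeing: the empirical Bayes estimate for a local study is a precision-weighted compromise between the local correlation and the meta-analytic mean. The sketch below uses hypothetical values; as the between-study variance grows, the local study dominates, which is the sense in which local validation becomes more important.

```python
def shrunken_validity(r_local, se2_local, rho_bar, tau2):
    """Empirical Bayes (normal-normal) posterior mean for a local
    validity study: tau2 is the between-study variance of rho and
    se2_local the sampling variance of the local correlation."""
    w = tau2 / (tau2 + se2_local)       # weight on the local study
    return w * r_local + (1.0 - w) * rho_bar

# Hypothetical local r = .15 (SE^2 = .02) against a meta-analytic
# mean of .30, for increasing between-study variance.
for tau2 in (0.0, 0.01, 0.05):
    print(tau2, round(shrunken_validity(0.15, 0.02, 0.30, tau2), 3))
```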

19.
In recent years there has been notable interest in additive models of sensory integration. Binaural additivity has emerged as a main hypothesis in the loudness-scaling literature and has recently been asserted by authors using an axiomatic approach to psychophysics. Restrictions on the range of stimuli used in the majority of former experiments, and inherent weaknesses of the axiomatic study by Levelt, Riemersma, and Bunt (1972), are discussed as motivating the present investigation. A limited binaural additivity (LBA) model is proposed that assumes contralateral binaural inhibition for interaural intensity differences that exceed a critical level. Experimental data are reported for 12 subjects in a loudness-matching task designed to test the axioms of cancellation and of commutativity, both necessary for the existence of strict binaural additivity. In a 2 × 2 design, frequencies of 200 Hz and 2 kHz were used, and mean intensity levels were 20 dB apart. Additivity was found to be violated in 33 out of 48 possible tests. The LBA model is shown to predict the systematic nonadditivity in the loudness judgments and to conform to results from other studies.

20.
The (univariate) isotonic psychometric (ISOP) model (Scheiblechner, 1995) is a nonparametric IRT model for dichotomous and polytomous (rating scale) psychological test data. A weak subject independence axiom W1 postulates that the subjects are ordered in the same way except for ties (i.e., similarly or isotonically) by all items of a psychological test. A weak item independence axiom W2 postulates that the order of the items is similar for all subjects. Local independence (LI or W3) is assumed in all models. With these axioms, sample-free unidimensional ordinal measurements of items and subjects become feasible. A cancellation axiom (Co) gives, as a result, the additive isotonic psychometric (ADISOP) model and interval scales for subjects and items, and an independence axiom (W4) gives the completely additive isotonic psychometric (CADISOP) model with an interval scale for the response variable (Scheiblechner, 1999). The d-ISOP, d-ADISOP, and d-CADISOP models are generalizations to d-dimensional dependent variables (e.g., speed and accuracy of response).
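The W1 axiom translates directly into a pairwise check on a data matrix: no two subjects may be ordered in opposite directions by two different items. A minimal sketch on a hypothetical polytomous response matrix:

```python
import numpy as np

def satisfies_w1(X):
    """Weak subject independence (W1): every item orders the subjects
    isotonically, i.e., no two subjects are ranked in opposite
    directions by two items (ties allowed). X[s, i] is the response
    of subject s to item i."""
    n_subjects = X.shape[0]
    for s in range(n_subjects):
        for t in range(s + 1, n_subjects):
            d = np.sign(X[s] - X[t])
            if (d == 1).any() and (d == -1).any():
                return False
    return True

X = np.array([[1, 2, 2],     # hypothetical 3 subjects x 3 items
              [2, 2, 3],
              [3, 3, 3]])
print(satisfies_w1(X))       # True
```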

