Found 20 similar articles (search time: 15 ms)
1.
Haruhiko Ogasawara 《Psychometrika》1996,61(1):73-92
As a multivariate model of the number of events, Rasch's multiplicative Poisson model is extended such that the parameters
for individuals in the prior gamma distribution have continuous covariates. The parameters for individuals are integrated
out and the hyperparameters in the prior distribution are estimated by a numerical method separately from difficulty parameters
that are treated as fixed parameters or random variables. In addition, a method is presented for estimating parameters in
Rasch's model with missing values.
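The gamma-Poisson structure underlying this model can be illustrated with a small simulation: integrating a gamma-distributed person parameter out of a Poisson count model yields a negative binomial marginal, visible as overdispersion (variance exceeding the mean). This is a minimal sketch, not the author's estimation method; the shape and rate values are illustrative assumptions.

```python
import random
from math import exp

def poisson_draw(lam, rng):
    """Knuth's inversion sampler for a Poisson variate."""
    limit, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def gamma_poisson_sample(shape, rate, n, rng):
    """Each person's rate is Gamma(shape, rate); counts are Poisson given
    that rate, so the marginal distribution of counts is negative binomial."""
    return [poisson_draw(rng.gammavariate(shape, 1.0 / rate), rng)
            for _ in range(n)]

rng = random.Random(42)
counts = gamma_poisson_sample(shape=2.0, rate=1.0, n=20000, rng=rng)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(mean, var)  # mean near shape/rate = 2; variance near mean + mean^2/shape = 4
```

The overdispersion relative to a pure Poisson model is exactly what the gamma prior over individuals contributes.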
The author is now affiliated with the Otaru University of Commerce.
The author is grateful to Yoshio Takane, Haruo Yanai, Eiji Muraki, the editor and referees for their careful readings and
helpful suggestions on earlier versions of this paper. Part of this work was presented at the third European Congress of Psychology
at Tampere, Finland in 1993.
2.
Bruce Buchanan 《Psychometrika》1987,52(1):61-78
A non-forced choice model is developed that describes subject behavior on repeat trial discrimination tests of the pick 1 of k form. The model is developed from the Dirichlet distribution, and it allows for the derivation of individual true scores and of sampling properties for various constructs of interest. These results permit the analysis and comparison of test designs. The model is applied to issues such as forced vs. non-forced choice formats, the best number of alternatives at a choice point, and the selection of expert panels.
3.
Werner Ebeling 《World Futures: Journal of General Evolution》2013,69(1-4):467-481
We investigate first several entropy concepts, from static to dynamic entropy. Dynamic entropy is introduced as a measure of predictability. Then the relation between entropy and information is studied. Following Wiener, information is considered as a non-physical quantity, neither matter nor energy. Information is understood as a binary relation between a sender and a receiver. We differentiate between bound and free information. Information can be created by self-organization; historically it is connected with the origin of life. The structure and predictability of informational strings is investigated. For example, we study symbolic sequences generated by evolution, such as texts. It is shown that several information carriers show criticality connected with the existence of long-range correlations, long memory in time and historical behaviour (self-tuned criticality). The higher-order Shannon entropies and the conditional entropies (dynamical entropies and their limit) are calculated, and characteristic scaling laws are found.
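The higher-order (block) Shannon entropies and conditional entropies mentioned above can be estimated from a symbolic sequence with a simple plug-in calculation; the toy periodic "text" below is an illustrative assumption, not data from the paper.

```python
from collections import Counter
from math import log2

def block_entropy(seq, n):
    """Shannon entropy H_n of length-n blocks (in bits)."""
    blocks = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def conditional_entropy(seq, n):
    """h_n = H_{n+1} - H_n: uncertainty of the next symbol given n predecessors."""
    return block_entropy(seq, n + 1) - block_entropy(seq, n)

text = "abab" * 64  # a perfectly periodic toy sequence
print(block_entropy(text, 1))        # 1.0 bit: two equally frequent symbols
print(conditional_entropy(text, 1))  # ~0: the next symbol is fully predictable
```

For real texts, h_n decays slowly with n; that slow decay is the long-range correlation the abstract refers to.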
4.
Bruce Buchanan 《Psychometrika》1988,53(2):209-221
A model is proposed that describes subject behavior on repeat paired comparison preference tests. The model extends prior work in this area in that it explicitly allows for abstentions and permits the derivation of individual true scores for discrimination ability as well as conditional estimates of proportionate preference. With these results, the properties of a paired comparison test can be thoroughly explored. An empirical example is presented, and test design issues are considered. In particular, repeat paired comparison preference tests are shown to be inherently less efficient discrimination tests than are pick 1 of 2 tests.
5.
When the underlying distribution is discrete with a limited number of categories, methods for interval estimation of the intraclass correlation which assume normality are theoretically inadequate. On the basis of large sample theory, this paper develops an asymptotic closed-form interval estimate of the intraclass correlation for the case where there is a natural score associated with each category. This paper employs Monte Carlo simulation to demonstrate that when the underlying intraclass correlation is large, the traditional interval estimator which assumes normality can be misleading. We find that when the number of classes is 20, the interval estimator proposed here can generally perform reasonably well in a variety of situations. This paper further notes that the proposed interval estimator is invariant with respect to a linear transformation. When the data are on a nominal scale, an extension of the proposed method to account for this case is given, along with a discussion of the relationship between the intraclass correlation and a kappa-type measure defined here, and of the limitations of the corresponding kappa-type estimator. The authors wish to thank the Editor, the Associate Editor, and the three referees for many valuable comments and suggestions to improve the clarity of this paper. The work of the first, third, and fourth authors was partially supported by grant #R01AR43025-01 from the National Institute of Arthritis and Musculoskeletal and Skin Diseases.
6.
Estimating latent distributions in recurrent choice data
Ulf Böckenholt 《Psychometrika》1993,58(3):489-509
This paper introduces a flexible class of stochastic mixture models for the analysis and interpretation of individual differences in recurrent choice and other types of count data. These choice models are derived by specifying elements of the choice process at the individual level. Probability distributions are introduced to describe variations in the choice process among individuals and to obtain a representation of the aggregate choice behavior. Due to the explicit consideration of random effect sources, the choice models are parsimonious and readily interpretable. An easy to implement EM algorithm is presented for parameter estimation. Two applications illustrate the proposed approach.
7.
The inference process in a probabilistic and conditional environment under minimum relative entropy permits the acquisition of basic knowledge, the consideration of (even uncertain) ad hoc knowledge, and the response to queries. Even if these procedures are well known in the relevant literature, their realisation for large-scale applications needs a sophisticated tool, allowing the communication with the user as well as all relevant logical transformations and numerical calculations. SPIRIT is an Expert-System-Shell for these purposes. Even for hundreds of consistent facts about the involved variables' dependencies, the shell automatically generates the corresponding epistemic state, thus permitting the derivation of conclusions from the acquired knowledge. These conclusions' reliability or precision can be checked, inviting the user to enrich the knowledge by further facts, if desired. Any inconsistencies among provided facts are detected, and their elimination will be supported by the shell. Knowledge acquisition can come from facts provided by a knowledge engineer as well as from real-world data; inductive learning supports the use of such data. An important capability of the shell is the calculation of impacts upon ideas or concepts from a given stimulus. This paper is a brief survey of theoretical concepts and the corresponding features of the system, which are accompanied by illustrative examples.
8.
Greg Jensen Ryan D. Ward Peter D. Balsam 《Journal of the experimental analysis of behavior》2013,100(3):408-431
9.
Jolien Cremers Kees Tim Mulder Irene Klugkist 《The British journal of mathematical and statistical psychology》2018,71(1):75-95
The interpretation of the effect of predictors in projected normal regression models is not straightforward. The main aim of this paper is to make this interpretation easier such that these models can be employed more readily by social scientific researchers. We introduce three new measures: the slope at the inflection point (bc), the average slope (AS) and the slope at the mean (SAM), which help us assess the marginal effect of a predictor in a Bayesian projected normal regression model. The SAM or the AS is preferably used in situations where the data for a specific predictor do not lie close to the inflection point of a circular regression curve; in this case bc is an unstable, extrapolated effect. In addition, we outline how the projected normal regression model allows us to distinguish between an effect on the mean and an effect on the spread of a circular outcome variable. We call these types of effects location and accuracy effects, respectively. The performance of the three new measures and of the methods to distinguish between location and accuracy effects is investigated in a simulation study. We conclude that the new measures and methods to distinguish between accuracy and location effects work well in situations with a clear location effect. In situations where the location effect is not clearly distinguishable from an accuracy effect, not all measures work equally well, and we recommend the use of the SAM.
10.
An information-theoretic framework is used to analyze the knowledge content in multivariate cross-classified data. Several related measures based directly on the information concept are proposed: the knowledge content (S) of a cross classification, its terseness (Zeta), and the separability (Gamma_X) of one variable, given all others. Exemplary applications are presented which illustrate the solutions obtained where classical analysis is unsatisfactory, such as optimal grouping, the analysis of very skew tables, or the interpretation of well-known paradoxes. Further, the separability suggests a solution for the classic problem of inductive inference which is independent of sample size.
11.
In this paper, we present a Bayesian approach to model uncertainty about a group's priorities in a multicriteria evaluation problem and develop a methodology to quantify the amount of information provided by a sample of priorities. In so doing, we discuss how the quantification of the information content can be used to decide whether to elicit additional priorities from the group. We illustrate the implementation of our approach and discuss additional insights that it provides using real-life data from an academic department's priority analysis. Copyright © 2013 John Wiley & Sons, Ltd.
12.
This paper studies the problem of scaling ordinal categorical data observed over two or more sets of categories measuring a single characteristic. Scaling is obtained by solving a constrained entropy model which finds the most probable values of the scales given the data. A Kullback-Leibler statistic is generated which operationalizes a measure for the strength of consistency among the sets of categories. A variety of data of two and three sets of categories are analyzed using the entropy approach. This research was partially supported by the Air Force Office of Scientific Research under grant AFOSR-83-0234. The comments of the editor and referees have been most helpful in improving the paper, and in bringing several additional references to our attention.
13.
Daniel J. Navarro Thomas L. Griffiths Michael D. Lee 《Journal of mathematical psychology》2006,50(2):101-122
We introduce a Bayesian framework for modeling individual differences, in which subjects are assumed to belong to one of a potentially infinite number of groups. In this model, the groups observed in any particular data set are not viewed as a fixed set that fully explains the variation between individuals, but rather as representatives of a latent, arbitrarily rich structure. As more people are seen, and more details about the individual differences are revealed, the number of inferred groups is allowed to grow. We use the Dirichlet process—a distribution widely used in nonparametric Bayesian statistics—to define a prior for the model, allowing us to learn flexible parameter distributions without overfitting the data, or requiring the complex computations typically required for determining the dimensionality of a model. As an initial demonstration of the approach, we present three applications that analyze the individual differences in category learning, choice of publication outlets, and web-browsing behavior.
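The "number of inferred groups grows as more people are seen" property of the Dirichlet process can be sketched via its Chinese restaurant process representation; the concentration parameter and seed below are illustrative assumptions, not values from the paper.

```python
import random

def crp_assignments(n_people, alpha, rng):
    """Sample group assignments from a Chinese restaurant process:
    person i joins an existing group with probability proportional to its
    current size, or founds a new group with probability proportional to alpha."""
    groups = []  # groups[k] = number of people currently in group k
    labels = []
    for _ in range(n_people):
        weights = groups + [alpha]  # last slot = open a brand-new group
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(groups):
            groups.append(1)  # a previously unseen group appears
        else:
            groups[k] += 1
        labels.append(k)
    return labels, groups

rng = random.Random(0)
labels, groups = crp_assignments(200, alpha=2.0, rng=rng)
print(len(groups))  # the number of groups grows slowly (roughly log n)
```

The rich-get-richer weighting is what makes the observed groups "representatives of a latent, arbitrarily rich structure" rather than a fixed partition.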
14.
Katalin Martinás 《World Futures: Journal of General Evolution》2013,69(1-4):483-493
How information can be employed in thermodynamics, and how the tools of thermodynamics can be utilized in information theory, is the subject of our paper. We show that one side of information, the information embodied in material, can be quantitatively measured in entropic terms. As an example, the information balance for aluminum chloride production is given.
15.
We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate
binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility
in modeling preference utilities, cross-sectional heterogeneity and parameter-driven dynamics. Each component of our model
is specified semiparametrically using Dirichlet process (DP) priors. The utility (latent variable) component of our model
allows the alternative-specific utility errors to semiparametrically deviate from a normal distribution. This generates a
robust alternative to popular Thurstonian specifications that are based on underlying normally distributed latent variables.
Our second component focuses on flexibly modeling cross-sectional heterogeneity. The semiparametric specification allows the
heterogeneity distribution to mimic either a finite mixture distribution or a continuous distribution such as the normal,
whichever is supported by the data. Thus, special features such as multimodality can be readily incorporated without the need
to overtly search for the best heterogeneity specification across a series of models. Finally, we allow for parameter-driven
dynamics using a semiparametric state-space approach. This specification adds to the literature on robust Kalman filters.
The resulting framework is very general and integrates divergent strands of the literatures on flexible choice models, Bayesian
nonparametrics and robust time series specifications. Given this generality, we show how several existing Thurstonian models
can be obtained as special forms of our model. We describe Markov chain Monte Carlo methods for the inference of model parameters,
report results from two simulation studies and apply the model to consumer choice data from a frequently purchased product
category. The results from our simulations and application highlight the benefits of using our semiparametric approach.
16.
Carl S. Helrich 《Zygon》1999,34(3):501-514
Thermodynamics is the foundation of many of the topics of interest in the religion-science dialogue. Here a nonmathematical outline of the principles of thermodynamics is presented, providing a historical and conceptually understandable development that can serve teachers from disciplines other than physics. The contributions of Gibbs to both classical and rational thermodynamics, emphasizing the importance of the ensemble in statistical mechanics, are discussed. The seminal ideas of Boltzmann on statistical mechanics are contrasted to those of Gibbs in a discussion of the microscopic interpretation of the second law. The role of information theory is discussed, and the modern ideas of Prigogine and nonequilibrium are outlined in some detail with further reference to the second law. Implications for our interaction with God are considered.
17.
Sean Devine 《Zygon》2014,49(1):42-65
William Dembski claims to have established a decision process to determine when highly unlikely events observed in the natural world are due to Intelligent Design. This article argues that, as no implementable randomness test is superior to a universal Martin‐Löf test, this test should be used to replace Dembski's decision process. Furthermore, Dembski's decision process is flawed, as natural explanations are eliminated before chance. Dembski also introduces a fourth law of thermodynamics, his “law of conservation of information,” to argue that information cannot increase by natural processes. However, this article, using algorithmic information theory, shows that this law is no more than the second law of thermodynamics. The article concludes that any discussions on the possibilities of design interventions in nature should be articulated in terms of the algorithmic information theory approach to randomness and its robust decision process.
18.
1 Introduction. Suchman, an anthropologist, argued that actions were always situated in particular social and physical circumstances [1]. In this view actions emerge from moment-by-moment interactions between actors, and between actors and their environments. The social and environmental aspects of cognition have been stressed in Hutchins's work [2]. He had studied relatively structured decision environments, for example ship navigation and aeroplane piloting. His conclusions are that cognition in such situa…
19.
This paper starts by considering an argument for thinking that predictive processing (PP) is representational. This argument suggests that the Kullback–Leibler (KL)-divergence provides an accessible measure of misrepresentation, and therefore, a measure of representational content in hierarchical Bayesian inference. The paper then argues that while the KL-divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL-divergence is a measure of relative entropy, which can be shown to be the same as covariance (through a set of additional steps). It is well known that facts about covariance do not entail facts about representational content. So there is no reason to think that the KL-divergence is a measure of (mis-)representational content. This paper thus provides an enactive, non-representational account of Bayesian belief optimisation in hierarchical PP.
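For reference, the KL-divergence (relative entropy) whose status as a content measure the paper disputes is, for discrete distributions, a short computation; the example distributions below are illustrative.

```python
from math import log2

def kl_divergence(p, q):
    """D_KL(p || q) = sum_i p_i * log2(p_i / q_i), in bits.
    Zero iff p equals q; asymmetric, so it is not a distance metric."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]  # e.g. a "posterior" belief distribution
q = [1/3, 1/3, 1/3]    # e.g. a "prior" belief distribution
print(kl_divergence(p, p))  # 0.0: no divergence from itself
print(kl_divergence(p, q))  # > 0, and generally != kl_divergence(q, p)
```

The asymmetry visible here is one reason the quantity measures relative information rather than any distance-like notion of representational accuracy.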
20.
Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using nonparametric item response models. A central issue is that the derivatives of item characteristic curves may not be estimated well, which eliminates the availability of the standard maximum Fisher information criterion. As alternatives, procedures based on Shannon entropy and Kullback–Leibler information are proposed. For a long test, these procedures, which do not require the derivatives of the item characteristic curves, become equivalent to the maximum Fisher information criterion. A simulation study is conducted to study the behavior of these two procedures, compared with random item selection. The study shows that the procedures based on Shannon entropy and Kullback–Leibler information perform similarly in terms of root mean square error, and perform much better than random item selection. The study also shows that item exposure rates need to be addressed for these methods to be practical. The authors would like to thank Hua Chang for his help in conducting this research.
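A hedged sketch of the Shannon-entropy selection idea: administer the item that minimizes the expected entropy of the posterior over ability. The logistic item curves, ability grid, and flat prior below are illustrative stand-ins for the paper's nonparametrically estimated curves, not its actual procedure.

```python
from math import exp, log

GRID = [g / 10 for g in range(-30, 31)]  # ability grid from -3 to 3

def icc(a, b, theta):
    """Illustrative logistic item characteristic curve P(correct | theta)."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def entropy(post):
    return -sum(p * log(p) for p in post if p > 0)

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def expected_posterior_entropy(post, a, b):
    """Expected entropy of the ability posterior after administering item (a, b),
    averaging over the two possible responses."""
    p_correct = [icc(a, b, t) for t in GRID]
    m_correct = sum(p * w for p, w in zip(p_correct, post))  # marginal P(correct)
    post_c = normalize([p * w for p, w in zip(p_correct, post)])
    post_w = normalize([(1 - p) * w for p, w in zip(p_correct, post)])
    return m_correct * entropy(post_c) + (1 - m_correct) * entropy(post_w)

# Flat posterior; choose among three items differing only in difficulty b.
post = normalize([1.0] * len(GRID))
items = [(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]  # (discrimination, difficulty)
best = min(items, key=lambda ab: expected_posterior_entropy(post, *ab))
print(best)  # with a flat posterior, the medium-difficulty item wins
```

Because posterior entropy minus expected posterior entropy is the mutual information between ability and the response, this criterion picks the most informative item without ever differentiating the item curves.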