Similar Documents
20 similar documents retrieved.
1.
The G-DINA (generalized deterministic inputs, noisy "and" gate) model is a generalization of the DINA model with more relaxed assumptions. In its saturated form, the G-DINA model is equivalent to other general models for cognitive diagnosis based on alternative link functions. When appropriate constraints are applied, several commonly used cognitive diagnosis models (CDMs) can be shown to be special cases of the general models. In addition to model formulation, the G-DINA model as a general CDM framework includes a component for item-by-item model estimation based on design and weight matrices, and a component for item-by-item model comparison based on the Wald test. The paper illustrates the estimation and application of the G-DINA model as a framework using real and simulated data. It concludes by discussing several potential implications of, and relevant issues concerning, the proposed framework.
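A minimal sketch, for illustration only, of the DINA item response function that the G-DINA model generalizes; the Q-matrix row, attribute profile, and guessing/slip values below are invented for the example and are not taken from the paper.

```python
import numpy as np

def dina_prob(alpha, q_row, guess, slip):
    """P(correct) under DINA: an examinee mastering every required attribute
    responds correctly with probability 1 - slip, everyone else with guess."""
    eta = np.all(alpha[q_row == 1] == 1)      # mastered all attributes the item requires?
    return (1 - slip) if eta else guess

q_row = np.array([1, 0, 1, 0])                # hypothetical item requiring attributes 1 and 3
alpha = np.array([1, 1, 1, 0])                # hypothetical examinee attribute profile
print(dina_prob(alpha, q_row, guess=0.2, slip=0.1))   # -> 0.9
```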

2.
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from "70% of As are Bs" and "a is an A" infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey's rule and the principle of cross-entropy minimization. Adequately formalizing inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e., by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that allows one to express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation ⊨ that models strict, probabilistically valid inferences, and a second relation that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred model semantics. A main objective of our approach is to ensure that complete proof systems exist for both entailment relations. This is achieved by allowing probability distributions in our semantic models that use non-standard probability values. A number of results are presented showing that in several important respects the resulting logic behaves just like a logic based on real-valued probabilities alone.
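As a concrete illustration of the two inference patterns the abstract begins with (direct inference and its generalization by Jeffrey's rule), here is a small sketch with made-up numbers.

```python
# Direct inference: from "70% of As are Bs" and "a is an A",
# assign subjective probability 0.7 to "a is a B".
p_B_given_A = 0.70
prob_a_is_B = p_B_given_A                       # -> 0.70

# Jeffrey's rule generalizes this: weight the statistical conditionals by an
# uncertain (subjective) distribution over the partition {A, not-A}.
p_B_given_notA = 0.10
q_A = 0.80                                      # new subjective probability that a is an A
prob_a_is_B = p_B_given_A * q_A + p_B_given_notA * (1 - q_A)
print(prob_a_is_B)                              # -> 0.58
```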

3.
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate the parameters of a wide variety of confirmatory item factor analysis models. Further, I show that these methods can be implemented in a flexible way that requires minimal technical sophistication on the part of the end user. After providing an overview of item factor analysis and MCMC, results from several examples (simulated and real) will be discussed. The bulk of these examples focus on models that are problematic for current "gold-standard" estimators. The results demonstrate that it is possible to obtain accurate parameter estimates using MCMC in a relatively user-friendly package.

4.
We study the identification and consistency of Bayesian semiparametric IRT-type models, where the uncertainty about the abilities' distribution is modeled using a prior distribution on the space of probability measures. We show that for the semiparametric Rasch Poisson counts model, simple restrictions ensure the identification of a general distribution generating the abilities, even for a finite number of probes. For the semiparametric Rasch model, only a finite number of properties of the general abilities' distribution can be identified by a finite number of items; these properties are completely characterized. Full identification of the semiparametric Rasch model can only be achieved when an infinite number of items is available. The results are illustrated using simulated data.
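For orientation, a minimal simulation sketch of the Rasch Poisson counts model discussed in the abstract, with counts drawn as Poisson with rate equal to the product of a person parameter and an item parameter; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_persons = 500
theta = rng.gamma(shape=2.0, scale=1.0, size=n_persons)    # person abilities (mean 2.0)
epsilon = np.array([0.5, 1.0, 1.5, 2.0, 2.5])              # item easiness parameters

# Rasch Poisson counts model: X_pi ~ Poisson(theta_p * epsilon_i)
rates = np.outer(theta, epsilon)
counts = rng.poisson(rates)
print(counts.mean(axis=0))    # empirical item means track 2.0 * epsilon
```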

5.
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of “exemplars” as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed a new simulated annealing heuristic for the p-median problem and completed a thorough investigation of its computational performance. The salient findings from our experiments are that our new method substantially outperforms a previous implementation of simulated annealing and is competitive with the most effective metaheuristics for the p-median problem.
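A generic simulated annealing sketch for the p-median problem of the kind evaluated in the abstract; the swap-move neighbourhood, geometric cooling schedule, and random toy data are common textbook choices, not the authors' heuristic.

```python
import numpy as np

def p_median_cost(dist, medians):
    """Sum over all objects of the distance to the nearest selected exemplar."""
    return dist[:, list(medians)].min(axis=1).sum()

def anneal_p_median(dist, p, temp=1.0, cooling=0.999, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medians = set(rng.choice(n, size=p, replace=False).tolist())
    cost = p_median_cost(dist, medians)
    best, best_cost = set(medians), cost
    for _ in range(steps):
        out = rng.choice(list(medians))                    # swap move: drop one median,
        inn = rng.choice(list(set(range(n)) - medians))    # bring in a non-median
        cand = (medians - {out}) | {inn}
        cand_cost = p_median_cost(dist, cand)
        if cand_cost < cost or rng.random() < np.exp((cost - cand_cost) / temp):
            medians, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = set(medians), cost
        temp *= cooling
    return best, best_cost

# Toy example: 100 random points in the plane, choose p = 5 exemplars.
pts = np.random.default_rng(1).random((100, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(anneal_p_median(dist, p=5)[1])
```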

6.
If one formulates Helmholtz's ideas about perception in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics, it can be shown that the problems of inferring what causes our sensory inputs and learning causal regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of the brain's organisation and responses. In this paper, we suggest that these perceptual processes are just one emergent property of systems that conform to a free-energy principle. The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by the system's state or configuration. A system can minimise free-energy by changing its configuration to change the way it samples the environment, or by changing its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at the models entailed by the brain and how minimisation of free-energy can explain its dynamics and structure.
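The bound on surprise mentioned in the abstract can be written in a standard variational form; the notation below is an illustrative gloss rather than the paper's own.

```latex
F \;=\; \mathbb{E}_{q(\vartheta)}\!\left[-\ln p(\tilde{y},\vartheta)\right]
  \;-\; \mathbb{E}_{q(\vartheta)}\!\left[-\ln q(\vartheta)\right]
  \;=\; -\ln p(\tilde{y}) \;+\; D_{\mathrm{KL}}\!\left[\,q(\vartheta)\,\Vert\,p(\vartheta \mid \tilde{y})\,\right]
  \;\ge\; -\ln p(\tilde{y}).
```

Because the Kullback–Leibler term is non-negative, the free-energy F upper-bounds the surprise −ln p(ỹ); lowering F by adjusting the recognition density q corresponds to perception, while lowering it by changing how the data ỹ are sampled corresponds to action.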

7.
In this paper we show how recent concepts from Dynamic Logic, and in particular from Dynamic Epistemic Logic, can be used to model and interpret quantum behavior. Our main thesis is that all the non-classical properties of quantum systems are explainable in terms of the non-classical flow of quantum information. We give a logical analysis of quantum measurements (formalized using modal operators) as triggers for quantum information flow, and we compare them with other logical operators previously used to model various forms of classical information flow: the “test” operator from Dynamic Logic, the “announcement” operator from Dynamic Epistemic Logic and the “revision” operator from Belief Revision theory. The main points stressed in our investigation are the following: (1) The perspective and the techniques of “logical dynamics” are useful for understanding quantum information flow. (2) Quantum mechanics does not require any modification of the classical laws of “static” propositional logic, but only a non-classical dynamics of information. (3) The main such non-classical feature is that, in a quantum world, all information-gathering actions have some ontic side-effects. (4) This ontic impact can in turn affect the flow of information, leading to non-classical epistemic side-effects (e.g. a type of non-monotonicity) and to states of “objectively imperfect information”. (5) Moreover, the ontic impact is non-local: an information-gathering action on one part of a quantum system can have ontic side-effects on other, far-away parts of the system.

8.
It is known that the Restricted Predicate Calculus (RPC) can be embedded in an elementary theory whose signature consists of exactly two equivalences. Some special models for this theory were constructed to prove the fact. Beyond the formal adequacy of these models, a question may be posed concerning their conceptual simplicity, that is, the "transparency" of the interpretations they assign to the two equivalences. In the works known to us, these interpretations are rather complex and can be called "technical", serving only the purpose of the embedding. We propose a conversion method that transforms an arbitrary model of RPC into a model of an elementary theory TR which includes three equivalences. RPC is embeddable in TR, and it turns out to be possible to assign "natural" interpretations to the three equivalences using the "Track of Relation" concept (from which the abbreviation TR derives).

9.
Computerized adaptive testing for cognitive diagnosis (CD-CAT) needs to be efficient and responsive in real time to meet the requirements of practical applications. For high-dimensional data, the number of categories to be recognized in a test grows exponentially as the number of attributes increases, which can easily make system response time so long that it adversely affects examinees and seriously impairs measurement efficiency. More importantly, the long CPU times and heavy memory usage of computationally intensive item selection in CD-CAT are impractical and cannot fully meet practical needs. This paper proposes two new efficient selection strategies (HIA and CEL) for high-dimensional CD-CAT to address this issue, by incorporating the max-marginals from the maximum a posteriori query and by integrating the ensemble learning approach into previous efficient selection methods, respectively. The performance of the proposed selection methods was compared with conventional selection methods using simulated and real item pools. The results showed that the proposed methods could significantly improve measurement efficiency, requiring about 1/2 to 1/200 of the conventional methods' computation time while retaining similar measurement accuracy. As the number of attributes and the size of the item pool increase, the computation time advantage of the proposed methods becomes more significant.
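To make the dimensionality issue concrete, here is a small sketch of the Bayesian attribute-profile update that any CD-CAT item-selection rule works on top of; the DINA response model, guessing/slip values, and Q-vector are illustrative assumptions, and the enumeration of all 2**K profiles is what grows exponentially with the number of attributes K.

```python
import itertools
import numpy as np

K = 4                                                             # number of attributes
profiles = np.array(list(itertools.product([0, 1], repeat=K)))    # all 2**K latent classes
posterior = np.full(len(profiles), 1.0 / len(profiles))           # uniform prior

def dina_p(profile, q_row, guess=0.2, slip=0.1):
    """Correct-response probability under a simple DINA model."""
    eta = np.all(profile[q_row == 1] == 1)
    return (1 - slip) if eta else guess

def update(posterior, q_row, response):
    """One Bayes step after observing a response to an item with Q-vector q_row."""
    like = np.array([dina_p(a, q_row) if response else 1 - dina_p(a, q_row)
                     for a in profiles])
    post = posterior * like
    return post / post.sum()

posterior = update(posterior, np.array([1, 1, 0, 0]), response=1)
print(profiles[np.argmax(posterior)])    # current most plausible attribute profile
```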

10.
In order to look more closely at the many particular skills examinees utilize to answer items, cognitive diagnosis models have received much attention, and are perhaps preferable to item response models that ordinarily involve just one or a few broadly defined skills, when the objective is to hasten learning. If these fine-grained skills can be identified, a sharpened focus on learning and remediation can be achieved. The focus here is on how to detect when learning has taken place for a particular attribute and efficiently guide a student through a sequence of items to ultimately attain mastery of all attributes while administering as few items as possible. This can be seen as a problem in sequential change-point detection, for which there is a long history and a well-developed literature. Though some ad hoc rules for determining learning may be used, such as stopping after M consecutive items have been successfully answered, more efficient methods that are optimal under various conditions are available. The CUSUM, Shiryaev–Roberts and Shiryaev procedures can dramatically reduce the time required to detect learning while maintaining rigorous Type I error control, and they are studied in this context through simulation. Future directions for modelling and detection of learning are discussed.
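A minimal CUSUM sketch for the learning-detection setting the abstract describes, monitoring a shift in an examinee's success probability from a pre-mastery to a post-mastery level; the two probabilities and the decision threshold are illustrative assumptions.

```python
import math

def cusum_learning(responses, p0=0.3, p1=0.8, threshold=3.0):
    """Return the index of the first item at which the CUSUM of Bernoulli
    log-likelihood ratios (post- vs. pre-mastery success rate) crosses the
    threshold, or None if no change is detected."""
    s = 0.0
    for n, x in enumerate(responses, start=1):
        llr = math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        s = max(0.0, s + llr)
        if s > threshold:
            return n          # learning declared after item n
    return None

print(cusum_learning([0, 0, 1, 0, 1, 1, 1, 1, 1, 1]))   # -> 8
```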

11.
Spreadsheet implementations of two different types of cognitive models—a neural network model and a statistical model—are presented. The two examples illustrate how to employ the facilities of spreadsheets, the spreadsheet data structure, array functions, the built-in function library, and the integrated optimizer, for building cognitive models. The two presented models are new extensions of existing models. They are used for simulating data from experiments illustrating that the extended versions are able to explain experimental results that could not be simulated by the original models. The whole simulation study demonstrates that spreadsheets are a handy tool, especially for researchers without programming knowledge who want to build cognitive models and for instructors teaching cognitive modeling.

12.
Alisa Bokulich, Synthese, 2011, 180(1): 33–45
Scientific models invariably involve some degree of idealization, abstraction, or fictionalization of their target system. Nonetheless, I argue that there are circumstances under which such false models can offer genuine scientific explanations. After reviewing three different proposals in the literature for how models can explain, I shall introduce a more general account of what I call model explanations, which specify the conditions under which models can be counted as explanatory. I shall illustrate this new framework by applying it to the case of Bohr's model of the atom, and conclude by drawing some distinctions between phenomenological models, explanatory models, and fictional models.

13.
Elaine Landry, Synthese, 2007, 158(1): 1–17
Recent semantic approaches to scientific structuralism, aiming to make precise the concept of shared structure between models, formally frame a model as a type of set-structure. This framework is then used to provide a semantic account of (a) the structure of a scientific theory, (b) the applicability of a mathematical theory to a physical theory, and (c) the structural realist's appeal to the structural continuity between successive physical theories. In this paper, I challenge the idea that, to be so used, the concept of a model and so the concept of shared structure between models must be formally framed within a single unified framework, set-theoretic or other. I first investigate the Bourbaki-inspired assumption that structures are types of set-structured systems and next consider the extent to which this problematic assumption underpins both Suppes' and recent semantic views of the structure of a scientific theory. I then use this investigation to show that, when it comes to using the concept of shared structure, there is no need to agree with French that "without a formal framework for explicating this concept of 'structure-similarity' it remains vague, just as Giere's concept of similarity between models does ..." (French, 2000, Synthese, 125, pp. 103–120, p. 114). Neither concept is vague; either can be made precise by appealing to the concept of a morphism, but it is the context (and not any set-theoretic type) that determines the appropriate kind of morphism. I make use of French's (1999, From physics to philosophy (pp. 187–207), Cambridge: Cambridge University Press) own example from the development of quantum theory to show that, for both Weyl's and Wigner's programmes, it was the context of considering the 'relevant symmetries' that determined that the appropriate kind of morphism was the one that preserved the shared Lie-group structure of both the theoretical and phenomenological models. I wish to thank Katherine Brading, Anjan Chakravartty, Steven French, Martin Thomson-Jones, Antigone Nounou, Stathis Psillos, Dean Rickles, Mauricio Suarez and two anonymous referees for valuable comments and criticisms, and Gregory Janzen for editorial suggestions. Research for this paper was funded by a generous SSHRC grant, for which I am grateful.

14.
A Two-Tier Full-Information Item Factor Analysis Model with Applications
Li Cai, Psychometrika, 2010, 75(4): 581–612
Motivated by Gibbons et al.'s (Appl. Psychol. Meas. 31:4–19, 2007) full-information maximum marginal likelihood item bifactor analysis for polytomous data, and Rijmen, Vansteelandt, and De Boeck's (Psychometrika 73:167–182, 2008) work on constructing computationally efficient estimation algorithms for latent variable models, a two-tier item factor analysis model is developed in this research. The modeling framework subsumes standard multidimensional IRT models, bifactor IRT models, and testlet response theory models as special cases. Features of the model lead to a reduction in the dimensionality of the latent variable space, and consequently significant computational savings. An EM algorithm for full-information maximum marginal likelihood estimation is developed. Simulations and real data demonstrations confirm the accuracy and efficiency of the proposed methods. Three real data sets from a large-scale educational assessment, a longitudinal public health survey, and a scale development study measuring patient reported quality of life outcomes are analyzed as illustrations of the model's broad range of applicability.

15.
16.
This paper uses a non-distributive system of Boolean fractions (a|b), where a and b are 2-valued propositions or events, to express uncertain conditional propositions and conditional events. These Boolean fractions, 'a if b' or 'a given b', ordered pairs of events that did not exist for the founders of quantum logic, can better represent uncertain conditional information, just as integer fractions can better represent partial distances on a number line. Since the indeterminacy of some pairs of quantum events is due to the mutual inconsistency of their experimental conditions, this algebra of conditionals can express indeterminacy. In fact, this system is able to express the crucial quantum concepts of orthogonality, simultaneous verifiability, compatibility, and the superposition of quantum events, all without resorting to Hilbert space. A conditional (a|b) is said to be "inapplicable" (or "undefined") in those instances or models for which b is false. Otherwise the conditional takes the truth-value of proposition a. Thus the system is technically 3-valued, but the third value has nothing to do with a state of ignorance or some half-truth. People already routinely put statements into three categories: true, false, or inapplicable. As such, this system applies to macroscopic as well as microscopic events. Two conditional propositions turn out to be simultaneously verifiable just in case the truth of one implies the applicability of the other. Furthermore, two conditional propositions (a|b) and (c|d) reside in a common Boolean sub-algebra of the non-distributive system of conditional propositions just in case b = d, i.e., their conditions are equivalent. Since all aspects of quantum mechanics can be represented with this near classical logic, there is no need to adopt Hilbert space logic as ordinary logic, just a need perhaps to adopt propositional fractions to do logic, just as we long ago adopted integer fractions to do arithmetic. The algebra of Boolean fractions is a natural, near-Boolean extension of Boolean algebra adequate to express quantum logic. While this paper explains one group of quantum anomalies, it nevertheless leaves the 'influence-at-a-distance' quantum entanglement phenomena no less mysterious. A quantum realist must still embrace non-local influences to hold that "hidden variables" are the measured properties of particles. But that seems easier than imagining wave-particle duality and instant collapse, as offered by proponents of the standard interpretation of quantum mechanics. Partial support for this work is gratefully acknowledged from the In-House Independent Research Program and from Code 2737 at the Space & Naval Warfare Systems Center (SSC-SD), San Diego, CA 92152-5001. Presently this work is supported by Data Synthesis, 2919 Luna Avenue, San Diego, CA 92117.
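A small sketch of the three-valued reading of a Boolean fraction (a|b) described above, with None standing for 'inapplicable'; the simultaneous-verifiability check encodes the abstract's criterion (the truth of one conditional implies the applicability of the other) over a finite toy set of models, and is an illustrative reading rather than the paper's formal definition.

```python
def conditional(a, b):
    """Evaluate (a | b): inapplicable (None) when b is false, otherwise the value of a."""
    return a if b else None

def simultaneously_verifiable(cond1, cond2, worlds):
    """The truth of either conditional must imply that the other one is applicable."""
    for w in worlds:
        v1, v2 = cond1(w), cond2(w)
        if (v1 is True and v2 is None) or (v2 is True and v1 is None):
            return False
    return True

# Toy models: truth assignments to atoms a, b, c, d, with d identical to b.
worlds = [dict(a=x, b=y, c=z, d=y) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
c1 = lambda w: conditional(bool(w["a"]), bool(w["b"]))
c2 = lambda w: conditional(bool(w["c"]), bool(w["d"]))   # same condition, b = d
print(simultaneously_verifiable(c1, c2, worlds))          # -> True
```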

17.
In two studies, we investigated to what extent typicalities in conjunctive concepts phrased as relative clauses—such as pets that are also birds—can be predicted from simple functions of constituent typicalities and from extensions of such functions. In a first study, analyses of a large aggregated data set, based on seven different experiments, showed that a calibrated minimum rule model and some extensions of this model accounted for a very large part of the variance in the conjunction typicalities. The same models can also account for the so-called guppy effect. A psychological explanation is presented, which states that typicalities in contrast categories, like pets that are not birds and birds that are not pets, further improve the prediction of conjunction typicalities. This hypothesis is tested in a second study.
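A minimal sketch of a calibrated minimum rule of the kind tested in the abstract, regressing conjunction typicality on the minimum of the two constituent typicalities; the rating values are invented for illustration.

```python
import numpy as np

# Hypothetical typicality ratings (0-10): as a pet, as a bird, and as a "pet bird".
ratings = np.array([
    [9.0, 2.0, 3.1],
    [8.5, 8.0, 8.3],
    [3.0, 9.0, 4.2],
    [6.0, 7.5, 6.4],
    [2.0, 3.5, 2.6],
])
pet, bird, conj = ratings.T
m = np.minimum(pet, bird)

# Calibrated minimum rule: conj is approximated by intercept + slope * min(pet, bird).
X = np.column_stack([np.ones_like(m), m])
coef, *_ = np.linalg.lstsq(X, conj, rcond=None)
r2 = np.corrcoef(X @ coef, conj)[0, 1] ** 2
print(coef, r2)    # fitted intercept/slope and variance accounted for
```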

18.
What is common to all languages is notation, so Universal Grammar can be understood as a system of notational types. Given that infants acquire language, it can be assumed to arise from some a priori mental structure. Viewing language as having the two layers of calculus and protocol, we can set aside the communicative habits of speakers. Accordingly, an analysis of notation results in the three types of Identifier, Modifier and Connective. Modifiers are further interpreted as Quantifiers and Qualifiers. The resulting four notational types constitute the categories of Universal Grammar. Its ontology is argued to consist in the underlying cognitive schema of Essence, Quantity, Quality and Relation. The four categories of Universal Grammar are structured as polysemous fields and are each constituted as a radial network centred on some root concept which, however, need not be lexicalized. The branches spread out along troponymic vectors and together map out all possible lexemes. The notational typology of Universal Grammar is applied in a linguistic analysis of the ‘parts of speech’ using the English language. The analysis constitutes a ‘proof of concept’ in (1) showing how the schema of Universal Grammar is capable of classifying the so-called ‘parts of speech’, (2) presenting a coherent analysis of the verb, and (3) showing how the underlying cognitive schema allows for a sub-classification of the auxiliaries.

19.
Signal detection accounts of recognition assume that all item endorsements arise from the assessment of a single continuous indication of memory strength, even when subjects claim to categorically separate items accompanied by contextual recollection from those that are not (viz., remembering vs. knowing). Dissociations of these response types are held to occur because the former require a higher response criterion for item strength than does the latter. Meta-analytic and individual subject data suggest that when the A′ metric is used, accuracy for remembering can systematically deviate from that of overall responding for individual subjects. This occurs because, unlike the symmetric and rigid receiver operating characteristic (ROC) implied under A′, empirical ROCs are asymmetric and plastic. A dual-process model predicted that the magnitude of the deviation would vary as a systematic function of the proportion of overall recognition accompanied by subjective remember reports for individual subjects. The predictions were confirmed using multiple regression on Monte Carlo and experimental data sets and were also shown to generalize to the double equal-threshold, single high-threshold [i.e., H − FA; (H − FA)/(1 − FA)], and the equal variance signal detection d′ corrections. The unequal variance signal detection model was also shown to mirror the data, but only under the post hoc assumption that every subject adopts a very similar remember criterion placement rule. The results demonstrate that the systematic failure of tightly constrained models of recognition constitutes valuable regression data for more complex models and simultaneously highlights why single-point measures of accuracy are unsuitable as summaries across conditions or groups. Furthermore, the results show that remember rates carry unique information regarding the underlying processes governing individual subject performance that cannot be gleaned from the overall hit and false alarm rates in isolation.
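For reference, a short sketch computing two of the single-point accuracy measures named above (A′ and the equal-variance d′) from hit and false-alarm rates; the input rates are made-up numbers, and the A′ formula shown is the usual one for the case H ≥ FA.

```python
from statistics import NormalDist

def a_prime(hit, fa):
    """A' accuracy index, valid here for hit >= fa."""
    return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))

def d_prime(hit, fa):
    """Equal-variance signal detection d' = z(hit) - z(fa)."""
    z = NormalDist().inv_cdf
    return z(hit) - z(fa)

hit, fa = 0.75, 0.20    # hypothetical hit and false-alarm rates
print(a_prime(hit, fa), d_prime(hit, fa))
```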

20.
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier statistic can take both the effects of estimation of the item parameters and the estimation of the person parameters into account. The Lagrange multiplier statistic has an asymptotic χ²-distribution. The Type I error rate and power are investigated using simulation studies. Results show that test statistics that ignore the effects of estimation of the persons' ability parameters have decreased Type I error rates and power. Incorporating a correction to account for the effects of the estimation of the persons' ability parameters results in acceptable Type I error rates and power characteristics; incorporating a correction for the estimation of the item parameters has very little additional effect. It is investigated to what extent the three models give comparable results, both in the simulation studies and in an example using data from the NEO Personality Inventory-Revised.
