Similar Documents
20 similar documents found
1.
Jones M, Love BC 《The Behavioral and brain sciences》2011,34(4):169-88; discussion 188-231
The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology - namely, Behaviorism and evolutionary psychology - that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.

2.
This paper shows how to use the log-linear subroutine of SPSS to fit the Rasch model. It also shows how to fit less restrictive models obtained by relaxing specific assumptions of the Rasch model. Conditional maximum likelihood estimation was achieved by including dummy variables for the total scores as covariates in the models. This approach greatly simplifies the specification of the Rasch models. We illustrate these procedures in an analysis of four items selected from the Reiss Premarital Sexual Permissiveness Scale. We found that a modified version of the Rasch model with item dependencies fits the data significantly better than the simple Rasch model. We also found that the item difficulties are the same for men and women, but that the item dependencies are significantly greater for men. Apart from any substantive issues these results raise, the value of this exercise lies in its demonstration of how researchers can use the procedures of popular, accessible software packages to study an increasingly important set of measurement models.
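The same log-linear trick carries over from SPSS to other packages. Below is a minimal sketch in Python (our own illustration, not the paper's SPSS syntax; the item names and pattern counts are hypothetical): the table of response patterns is fitted with a Poisson log-linear model whose total-score dummies yield conditional maximum likelihood estimates of the item difficulties.

import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
patterns = list(itertools.product([0, 1], repeat=4))        # all 2^4 response patterns for 4 items
df = pd.DataFrame(patterns, columns=["item1", "item2", "item3", "item4"])
df["count"] = rng.integers(5, 60, size=len(df))             # hypothetical pattern frequencies
df["score"] = df[["item1", "item2", "item3", "item4"]].sum(axis=1)

# Poisson log-linear model: a dummy for each total score, C(score), plus item main
# effects. item4 is omitted as the reference item to avoid collinearity between the
# item columns and the score dummies. The score dummies absorb the person distribution,
# so the item coefficients are conditional ML estimates of difficulty relative to item4.
rasch = smf.glm("count ~ C(score) + item1 + item2 + item3",
                data=df, family=sm.families.Poisson()).fit()
print(rasch.params.filter(like="item"))

Less restrictive models, such as the item-dependency version described above, would simply add interaction terms (for example item1:item2) to the same formula.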

3.
Two basic approaches to explaining the nature of the mind are the rational and the mechanistic approaches. Rational analyses attempt to characterize the environment and the behavioral outcomes that humans seek to optimize, whereas mechanistic models attempt to simulate human behavior using processes and representations analogous to those used by humans. We compared these approaches with regard to their accounts of how humans learn the variability of categories. The mechanistic model departs in subtle ways from rational principles. In particular, the mechanistic model incrementally updates its estimates of category means and variances through error-driven learning, based on discrepancies between new category members and the current representation of each category. The model yields a prediction, which we verify, regarding the effects of order manipulations that the rational approach does not anticipate. Although both rational and mechanistic models can successfully postdict known findings, we suggest that psychological advances are driven primarily by consideration of process and representation and that rational accounts trail these breakthroughs.
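To make the error-driven mechanism concrete, here is a minimal sketch (our own delta-rule illustration, not the authors' exact model; the learning rate and stimulus values are hypothetical) of incrementally updating a category's mean and variance, and of why presentation order matters:

import numpy as np

def update_category(mu, var, x, lr=0.1):
    """One error-driven update of a category's mean and variance estimates."""
    error = x - mu                       # discrepancy between new member and current mean
    mu = mu + lr * error                 # nudge the mean toward the new exemplar
    var = var + lr * (error ** 2 - var)  # nudge the variance toward the squared error
    return mu, var

# The same exemplars in a different order leave different final estimates, an order
# effect that a purely rational, order-independent account does not predict.
items = np.array([2.0, 4.0, 6.0, 8.0])
for order in (items, items[::-1]):
    mu, var = 0.0, 1.0
    for x in order:
        mu, var = update_category(mu, var, x)
    print(list(order), "->", round(mu, 2), round(var, 2))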

4.
Loglinear Rasch model tests
Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch model are discussed and the Rasch model is reformulated as a quasi-independence model. The model is a quasi-loglinear model for the incomplete subgroup × score × item 1 × item 2 × ... × item k contingency table. Using ordinary contingency table methods the Rasch model can be tested generally or against less restrictive quasi-loglinear models to investigate specific violations of its assumptions.
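In one common notation (ours, not necessarily the paper's), the quasi-loglinear representation of the Rasch model for a response pattern (x_1, ..., x_k) with total score r is

    \ln m_{x_1 x_2 \cdots x_k} = \lambda + \lambda^{S}_{r} + \sum_{i=1}^{k} x_i \lambda^{I}_{i}, \qquad r = \sum_{i=1}^{k} x_i,

with an identification constraint such as \sum_i \lambda^{I}_{i} = 0. The score terms \lambda^{S}_{r} absorb the ability distribution, the item terms \lambda^{I}_{i} play the role of (minus) the item difficulties, and adding further terms (for example item-by-subgroup effects) gives the less restrictive quasi-loglinear models against which the Rasch model can be tested.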

5.
Simple models and algorithms based on restrictive assumptions are often used in the field of neuroimaging for studies involving functional magnetic resonance imaging, voxel based morphometry, and diffusion tensor imaging. Nonparametric statistical methods or flexible Bayesian models can be applied rather easily to yield more trustworthy results. The spatial normalization step required for multisubject studies can also be improved by taking advantage of more robust algorithms for image registration. A common drawback of algorithms based on weaker assumptions, however, is the increase in computational complexity. In this short overview, we will therefore present some examples of how inexpensive PC graphics hardware, normally used for demanding computer games, can be used to enable practical use of more realistic models and accurate algorithms, such that the outcome of neuroimaging studies really can be trusted.
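As a flavor of the nonparametric methods mentioned above, the sketch below (our own toy example, not the paper's code; the group sizes and effect size are hypothetical) runs a permutation test at a single voxel. Repeating the inner loop over thousands of relabelings and hundreds of thousands of voxels is exactly the kind of embarrassingly parallel workload that maps well onto graphics hardware.

import numpy as np

rng = np.random.default_rng(0)
patients = rng.normal(1.0, 1.0, size=20)     # hypothetical voxel values, group 1
controls = rng.normal(0.3, 1.0, size=20)     # hypothetical voxel values, group 2
observed = patients.mean() - controls.mean()

pooled = np.concatenate([patients, controls])
n_perm, exceed = 10000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                       # relabel the subjects at random
    diff = pooled[:20].mean() - pooled[20:].mean()
    if abs(diff) >= abs(observed):
        exceed += 1

print("permutation p-value:", (exceed + 1) / (n_perm + 1))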

6.
Analyzing the rate at which languages change can clarify whether similarities across languages are solely the result of cognitive biases or might be partially due to descent from a common ancestor. To demonstrate this approach, we use a simple model of language evolution to mathematically determine how long it should take for the distribution over languages to lose the influence of a common ancestor and converge to a form that is determined by constraints on language learning. We show that modeling language learning as Bayesian inference of n binary parameters or the ordering of n constraints results in convergence in a number of generations that is on the order of n log n. We relax some of the simplifying assumptions of this model to explore how different assumptions about language evolution affect predictions about the time to convergence; in general, convergence time increases as the model becomes more realistic. This allows us to characterize the assumptions about language learning (given the models that we consider) that are sufficient for convergence to have taken place on a timescale that is consistent with the origin of human languages. These results clearly identify the consequences of a set of simple models of language evolution and show how analysis of convergence rates provides a tool that can be used to explore questions about the relationship between accounts of language learning and the origins of similarities across languages.
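The n log n scaling has a coupon-collector flavor that a toy simulation can convey. The sketch below is our own illustration, not the paper's model: if each of n binary parameters is independently resampled from the learner's prior with probability 1/n per generation, the common ancestor's influence is gone once every parameter has been resampled at least once, and the expected number of generations for that grows like n log n.

import numpy as np

rng = np.random.default_rng(1)

def generations_to_forget(n, trials=2000):
    # Each parameter's resampling time is geometric with success probability 1/n;
    # the ancestor is forgotten at the maximum of these times over the n parameters.
    times = rng.geometric(1.0 / n, size=(trials, n)).max(axis=1)
    return times.mean()

for n in (5, 10, 20, 40):
    print(n, round(generations_to_forget(n), 1), "vs n*log(n) =", round(n * np.log(n), 1))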

7.
The assumptions underlying item response theory (IRT) models may be expressed as a set of equality and inequality constraints on the parameters of a latent class model. It is well known that the same assumptions imply that the parameters of the manifest distribution have to satisfy a more complicated set of inequality constraints which, however, are necessary but not sufficient. In this paper, we describe how the theory for likelihood-based inference under equality and inequality constraints may be used to test the underlying assumptions of IRT models. It turns out that the analysis based directly on the latent structure is simpler and more flexible. In particular, we indicate how several interesting extensions of the Rasch model may be obtained by partial relaxation of the basic constraints. An application to a data set provided by Educational Testing Service is used to illustrate the approach. We thank Dr. Gorman and Dr. Rogers of the Educational Testing Service for providing the data analyzed in Section 4. We also thank three reviewers for comments and suggestions.
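To fix ideas (our notation, not necessarily the paper's), write the latent class model for k dichotomous items with classes c = 1, ..., C as

    P(X_1 = x_1, \ldots, X_k = x_k) = \sum_{c=1}^{C} \pi_c \prod_{i=1}^{k} p_{ic}^{\,x_i} (1 - p_{ic})^{\,1 - x_i}.

IRT-style assumptions then appear as constraints on the p_{ic}: monotonicity is the inequality constraint p_{i1} \le p_{i2} \le \cdots \le p_{iC} for every item, while the Rasch structure adds the equality constraint \mathrm{logit}\, p_{ic} = \theta_c - b_i. Testing the IRT assumptions amounts to likelihood-based inference under these equality and inequality constraints, and partially relaxing them yields the Rasch extensions mentioned above.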

8.
A statewide survey of moderate and severe behavior disorders in persons with mental retardation in institutional and community settings was conducted. Information on the treatment procedures used and the adequacy of available resources in both settings was also gathered. Results indicated that community staff had considerably less experience than institutional staff in dealing with serious behavior disorders exhibited by persons with substantial cognitive and physical impairments. The types of aberrant behaviors with the highest prevalence rates showed differences in the institutions and the community. The largest differences in prevalence rates for severe behavior disorders in the two settings occurred for aggressive and self-injurious behaviors. Community staff thus had appreciably less experience than institutional staff in designing interventions for severe aggressive and self-injurious behaviors. About half of the identified individuals in both settings received psychotropic medications. Institutional staff were more likely than community staff to use restrictive behavioral procedures. Findings indicated that the most restrictive procedures were used primarily with only certain severe behavior disorders. Subjective ratings of the overall effectiveness of interventions were lower by community than institutional staff. The usefulness of the obtained data base for statewide planning in the area of behavioral supports is discussed.

9.
10.
L J Rips 《Cognition》1990,36(3):291-314
People's performance on knight/knave problems is deliberate. They make assumptions, draw deductive inferences from them, and evaluate the consequences of these inferences. In an initial paper on this topic (Rips, 1989), I proposed a model for a subset of such problems that depend on sentential reasoning. The main component of the model is a set of natural-deduction rules, drawn from prior work on propositional inference. This natural-deduction framework seems well suited to explain the reasoning that subjects display on these problems, since it incorporates a mechanism for making assumptions and following them up. Moreover, the number of assumptions and rule applications needed to solve a problem yields an intuitively appealing measure of how difficult the problem should be. In accord with this prediction, the experiments found increases in error rates and reaction times as a function of the assumptions-plus-inferences measure. In their note, Johnson-Laird and Byrne sketch a possible alternative. Their account posits five processing strategies tailored to this problem domain and a mechanism for evaluating sentential arguments based on mental models. The mental-model component is a variation on the usual truth-table method, where individual models correspond to truth-table rows. The main prediction of this component is that the more models subjects must consider, the harder the problem. However, the experiment reported here found no evidence for this prediction. Problems with larger numbers of models do not yield higher error rates than those with few. What does cause difficulties for subjects is scope relations among connectives, a fact that inference-rule theories can easily explain. Given these findings, it's not surprising that the predictive burden for knight/knave problems must be carried by Johnson-Laird and Byrne's strategies, rather than by mental models. These strategies control the order in which subjects consider parts of the problem, and they provide possible stopping points. There are, however, several difficulties with these strategies. Of their four new strategies, Johnson-Laird and Byrne offer no evidence at all for two of them. Of the remaining two, only one accounts for a significant proportion of the variance when allowance is made for confounding variables. Moreover, all four strategies are ad hoc, rather than being derived from some more general theory. Certainly, much remains to be done in filling out the picture of how such problems are handled, as both Evans and Johnson-Laird and Byrne point out. (ABSTRACT TRUNCATED AT 400 WORDS)
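For readers unfamiliar with the domain, the toy sketch below (our own illustration, not Rips's natural-deduction model or Johnson-Laird and Byrne's programs) enumerates the "models", i.e. truth-table rows, consistent with a small knight/knave puzzle; the number of such models is the quantity the mental-model account predicts should drive difficulty.

from itertools import product

# Hypothetical puzzle: A says "A is a knave or B is a knight."
def a_statement(a_is_knight, b_is_knight):
    return (not a_is_knight) or b_is_knight

consistent = []
for a, b in product([True, False], repeat=2):
    # A knight's assertion must be true; a knave's assertion must be false.
    if a_statement(a, b) == a:
        consistent.append({"A is a knight": a, "B is a knight": b})

print(consistent)   # a single consistent model: both A and B are knights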

11.
It is shown that deterministic models can compete effectively with stochastic models in summarizing concept identification behavior. Three groups of deterministic models are examined. Examination of individual learners' trial by trial behavior in a concept experiment shows: (1) One person exhibited behavior consistent with a Hypothesis Permutation (HP) model despite being a nonlearner who showed no evidence of improvement over a period of 24 trials. However, when all 50 persons studied in each of two treatment groups were examined, only 22 members of one group and 10 of the other showed no inconsistencies with deterministic local consistency assumptions. (2) Certain deterministic computer programs could find at least one satisfactory order for predicting all responses by 18 of the 22 consistent solvers and 6 of the 10 consistent solvers, respectively, in the two groups just mentioned. For these 24 persons, then, a less restrictive deterministic model is adequate than is needed for the others. (3) Those 38 original members of the first treatment group who met a stringent learning criterion were compared with respect to predictions generated by stochastic and mathematized deterministic models. One deterministic model (RSS-U 9-state) is in some respects the best of the models examined, but this success is a partial reflection of estimating eight parameters from the data.

12.
This article addresses the central paradox of hypnotic pain reduction: the discrepancy between self-reports and physiological measures of pain, using a constructionist perspective. The analysis is embedded in the ethogenic assumptions about the nature of the social and in realist rather than positivist notions of causality. It reformulates the induction procedure as an entrance ritual, considers the hypnosis experiment proper as a social episode, reveals the causal powers that subjects bring to hypnosis experiments and specifies how the act of hypnotic pain reduction is achieved. The analysis uses, in part, the Olympic athlete and the Olympic competition as analogic models to show that the hypnosis experiment is merely a public occasion for talented hypnotic subjects to display their peculiar powers without risking their standing as rational beings. The paradox is seen to be an artifact of positivist assumptions concerning both the nature of hypnosis and of traditional experimental methods. Two distinct lines of investigation, based on the competence/performance distinction, are recommended as guides for future research.

13.
Through the application of finite mixture distribution models, we investigated the existence of distinct modes of behavior in learning a simple discrimination. The data were obtained in a repeated measures study in which subjects aged 6 to 10 years carried out a simple discrimination learning task. In contrast to distribution models of exclusively rational learners or exclusively incremental learners, a mixture distribution model of rational learners and slow learners was found to fit the data of all measurement occasions and all age groups. Hence, the finite mixture distribution analysis provides strong support for the existence of distinct modes of learning behavior. The results of a second experiment support this conclusion by cross-validation of the models that fit the data of the first experiment. The effect of verbally labeling the values on the relevant stimulus dimension and the consistency of behavior over measurement occasions are related to the mixture model estimates.
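As a minimal sketch of the finite-mixture logic (our own toy example, not the authors' model; the scores and component parameters are hypothetical), the snippet below compares one- and two-component Gaussian mixtures on trials-to-criterion data and reads off each subject's posterior component membership.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
fast = rng.normal(5, 1.5, size=60)       # hypothetical "rational" learners: few trials to criterion
slow = rng.normal(20, 5.0, size=40)      # hypothetical "slow" learners: many trials to criterion
scores = np.concatenate([fast, slow]).reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    print(k, "component(s), BIC =", round(gm.bic(scores), 1))   # lower BIC favors two modes

two_modes = GaussianMixture(n_components=2, random_state=0).fit(scores)
posterior = two_modes.predict_proba(scores)   # column j: each subject's probability of mode j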

14.
The demand that epistemic support be explicated as rational compulsion has consistently undermined the dialogue between theology and science. Rational compulsion entails too restrictive a form of epistemic support for most scientific theorizing, let alone interdisciplinary dialogue. This essay presents a less restrictive form of epistemic support, explicated not as rational compulsion but as explanatory power. Once this notion of epistemic support is developed, a genuinely productive interdisciplinary dialogue between theology and science becomes possible. This essay closes by sketching how the Big Bang model from cosmology and the Christian doctrine of Creation can be viewed as supporting each other.

15.
It was proposed that people attribute an individual's behavior more to internal factors when that individual's actions are influenced by reward than when those actions are influenced by punishment. Previous research has failed to control for the power of reward versus punishment which, in effect, creates a confounding of behavioral base rates (consensus) with the reward-punishment manipulation. The current research created reward and punishment contingencies that were equal in their base rates for producing a compliant response. In Experiment 1, subjects (n = 63) who produced the base-rate data also made attributions regarding a compliant target person. The results supported the reward-punishment attributional asymmetry hypothesis in that the target person was held more responsible for his actions in the reward than in the punishment conditions. A second experiment (n = 72) provided some attributors with information regarding base rates for compliance and measured perceived base rates for compliance. Knowledge of the base rates for compliance eliminated the reward-punishment attributional asymmetry phenomenon. Subjects not provided with such knowledge erroneously assumed different base rates for reward and punishment and maintained the perception of reward-punishment attributional asymmetry. Using subjects' estimates of base rate for compliance as a covariate eliminated the attributional asymmetry effect. It is suggested that erroneous base-rate assumptions mediate the attributional asymmetry phenomenon.

16.
Previous research has uncovered many conditions that encourage base-rate use. The present research investigates how base-rates are used when conditions are manipulated to encourage their use in the lawyer/engineer paradigm. To examine the functional form of the response to base-rate, a factorial design was employed in which both base-rate and the individuating information were varied within-subject. We compared the performance of several models of base-rate use, including a model that allows base-rate and individuating information to be combined in a strictly additive fashion, and a model which presumes that respondents use Bayes' Rule in forming their judgments. Results from 1493 respondents showed that the additive model is a stronger predictor of base-rate use than any other model considered, suggesting that the base-rate and individuating information are processed independently in the lawyer/engineer paradigm. A possible mechanism for this finding is discussed.
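In our notation (not necessarily the paper's), the two accounts being compared can be written, for the judged probability that a sampled description D belongs to an engineer E rather than a lawyer L, as

    Bayes' Rule model:        P(E \mid D) = \frac{P(D \mid E)\,P(E)}{P(D \mid E)\,P(E) + P(D \mid L)\,P(L)}
    Strictly additive model:  \hat{P}(E \mid D) = w_0 + w_1\,P(E) + w_2\,s(D)

where P(E) is the base-rate and s(D) is the diagnostic strength of the individuating description. Under Bayes' Rule the two sources combine multiplicatively, so a description shifts the judgment most at intermediate base-rates, whereas under the additive model each source contributes a fixed increment regardless of the other, which is the independence favored by the data from 1493 respondents.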

17.
Despite the growing call for new models of politics grounded in the capacities of real-world decision-makers, much international relations theory still incorporates rationalist assumptions. Scholars defend such assumptions as the best way to produce parsimonious theoretical structures. Recent attempts to deploy prospect theory in the study of international politics are consistent with the call for empirically grounded models of political behavior. However, past attempts have often emphasized individualized comparisons of prospect theory with rational choice at the expense of building deductive theory. The analysis here demonstrates that prospect theory can produce deductive models for empirical comparison with those already manufactured under rational choice. The result is a new set of propositions concerning international politics securely anchored to the actual capacities of human actors.

18.
19.
This paper examines some mathematical implications of two process models of concept identification, the 1-element strategy selection model with a local consistency assumption and Chumbley's hypothesis manipulation (HM) model. Under slightly restrictive assumptions, each process model (Level 1) is shown to imply a stochastic model (Level 3), making predictions of behavior in experimental situations in which the stimulus presented to a subject on any trial is randomly selected and independent of that presented on any other trial. In addition, each model is shown to make predictions at an intermediate level (Level 2) about performance on successive trials with specific stimulus sequences presented. At Level 2, each model is shown to falsely predict zero probabilities for particular response patterns when stated stimulus sequences are used. Fewer such problems arise with the HM model than with the 1-element model. Minimum squared error fits of one set of experimental data show relatively good correspondence of predictions and observations when a Chumbley model with different saliences for different hypotheses is employed.

20.
As Bayesian methods become more popular among behavioral scientists, they will inevitably be applied in situations that violate the assumptions underpinning typical models used to guide statistical inference. With this in mind, it is important to know something about how robust Bayesian methods are to the violation of those assumptions. In this paper, we focus on the problem of contaminated data (such as data with outliers or conflicts present), with specific application to the problem of estimating a credible interval for the population mean. We evaluate five Bayesian methods for constructing a credible interval, using toy examples to illustrate the qualitative behavior of different approaches in the presence of contaminants, and an extensive simulation study to quantify the robustness of each method. We find that the “default” normal model used in most Bayesian data analyses is not robust, and that approaches based on the Bayesian bootstrap are only robust in limited circumstances. A simple parametric model based on Tukey’s “contaminated normal model” and a model based on the t-distribution were markedly more robust. However, the contaminated normal model had the added benefit of estimating which data points were discounted as outliers and which were not.
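To see the qualitative point, here is a minimal sketch (our own grid approximation with a flat prior and fixed scale parameters, not the paper's five methods or code; the data are hypothetical) comparing credible intervals for the mean under a normal likelihood and a heavier-tailed Student-t likelihood when one observation is a gross outlier.

import numpy as np
from scipy import stats

data = np.array([4.8, 5.1, 5.3, 4.9, 5.2, 5.0, 25.0])   # last value is a contaminant
mu_grid = np.linspace(0, 30, 3001)

def credible_interval(loglike, prob=0.95):
    """Grid posterior over mu under a flat prior; return an equal-tailed interval."""
    logpost = np.array([loglike(mu) for mu in mu_grid])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    cdf = np.cumsum(post)
    lo = mu_grid[np.searchsorted(cdf, (1 - prob) / 2)]
    hi = mu_grid[np.searchsorted(cdf, 1 - (1 - prob) / 2)]
    return round(lo, 2), round(hi, 2)

def normal_ll(mu):
    return stats.norm.logpdf(data, loc=mu, scale=data.std()).sum()

def t_ll(mu):
    return stats.t.logpdf(data, df=3, loc=mu, scale=1.0).sum()

print("normal model:", credible_interval(normal_ll))   # dragged toward the outlier
print("t model     :", credible_interval(t_ll))        # stays near the bulk of the data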
