Similar Articles
20 similar articles found (search time: 109 ms)
1.
A model for the multiple dual-pair method, a generalization of the traditional dual-pair (4IAX) paradigm, is given. This model is expressed in terms of normal and beta distributions. This generalization allows for the simultaneous estimation of the perceptual distances among three or more stimuli. This model has applications in cases in which multiple two-sample comparisons would be too time consuming and labor intensive. The theory discussed shows how unequal variances can be estimated on the basis of results from that method. Two numerical examples that illustrate the ability of the beta distribution-based model to retrieve the appropriate parameters are given. It is also shown how the traditional dual-pair model is a special case of the multiple dual-pair model.

2.
Generalization, similarity, and Bayesian inference   (total citations: 1; self-citations: 0; citations by others: 1)
Tenenbaum JB, Griffiths TL. The Behavioral and Brain Sciences, 2001, 24(4): 629-640; discussion 652-791
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
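The abstract above describes Bayesian generalization with the "size principle" (smaller hypotheses get proportionally more likelihood). The sketch below is a minimal illustrative reconstruction, not the authors' code: it enumerates interval hypotheses on a 1D grid, weights those containing the trained stimulus by an Erlang-like size prior and a 1/|h| likelihood, and produces the characteristic negatively accelerated gradient. Grid limits, step, and prior scale are my own assumptions.

```python
import math

def generalization_gradient(x, ys, lo=0.0, hi=10.0, step=0.25, prior_scale=2.0):
    """P(y in C | x in C): average over interval hypotheses [a, b],
    weighted by an Erlang-like prior on interval size and the
    size-principle likelihood 1/|h| for the single example x."""
    hyps = []
    a = lo
    while a < hi:
        b = a + step
        while b <= hi:
            size = b - a
            prior = size * math.exp(-size / prior_scale)
            if a <= x <= b:
                hyps.append((a, b, prior / size))  # size principle: p(x|h) = 1/|h|
            b += step
        a += step
    z = sum(w for _, _, w in hyps)
    return [sum(w for a, b, w in hyps if a <= y <= b) / z for y in ys]
```

Probing farther from the trained stimulus yields a monotonically decreasing, roughly exponential gradient, in line with Shepard's law.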

3.
This article provides a synthetic account of the likelihood ratio, optimal decision rules, and correct response probabilities in a signal detection theoretic model of the observer in the dual-pair comparison, or four-interval AX (4IAX), paradigm. The model assumes a static sampling process, resulting in four equal-variance, normally distributed (i.e., Gaussian) observations on each trial. First, a likelihood ratio equation allowing for an arbitrary degree of correlation between observations is provided. Specific solutions for the cases of independent and highly correlated observations are then derived. It is shown that these solutions, and the associated decision rules, correspond to those provided independently in earlier publications. A modified 4IAX paradigm involving, as a standard, an additional stimulus (C) located medially between the A and the B stimuli is also considered. It is shown that the optimal (static, equal-variance, Gaussian) decision model for this paradigm is unaffected by correlation between observations and is equivalent to the standard 4IAX with highly correlated observations. Finally, we consider how, under the considered (static, equal-variance, Gaussian) model, the proportion of correct responses in the different versions of the 4IAX paradigm is related to d', and a solution for the case of independent observations is provided.
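As a rough companion to the model described above, here is a Monte Carlo sketch (not the article's analytic solution) of the 4IAX task under the independent-observation, equal-variance Gaussian assumptions, using the common differencing rule: respond with the pair whose within-pair difference is larger in magnitude. Trial counts and the seed are arbitrary choices of mine.

```python
import random

def simulate_4iax(dprime, trials=20000, seed=1):
    """Estimate proportion correct in the dual-pair (4IAX) task under the
    independent-observation, equal-variance Gaussian model, with the
    differencing decision rule."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        diff_first = rng.random() < 0.5  # which pair is <A, B>?
        def pair_abs_diff(same):
            x1 = rng.gauss(0.0, 1.0)
            x2 = rng.gauss(0.0 if same else dprime, 1.0)
            return abs(x1 - x2)
        d1 = pair_abs_diff(same=not diff_first)
        d2 = pair_abs_diff(same=diff_first)
        if (d1 > d2) == diff_first:
            correct += 1
    return correct / trials
```

With d' = 0 the estimate hovers at chance (0.5), and it rises well above chance as d' grows, as the signal detection analysis predicts.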

4.
This study examined the utility of brief academic assessments to identify effective generalization procedures for individual students. Specifically, the study built on the proposal that brief assessments of antecedent and consequence manipulations can identify the most effective generalization strategy for individual students. The design was an alternating treatments design nested within a multiple baseline across six students. Students learned how to solve a set of multiplication facts using a common strategy while spontaneous generalization to other sets of facts was measured. Next, researchers determined whether an antecedent- or consequent-based generalization strategy would be more effective for increasing generalization across multiplication skills and conducted an extended analysis with an alternating treatment phase to confirm results of the brief assessment. Results indicated that the assessment correctly identified the most effective generalization strategy for five of the six students.

5.
Investigating pedestrian crossing and driver yielding decisions is an important focus, given the high risks pedestrians face when exposed to motorized traffic. Previous studies, however, have limitations: the variables considered have been limited, and how pedestrian and driver behaviors affect each other (defined as interactive impacts) has not been sufficiently considered. This paper provides a methodological approach for modeling pedestrian crossing and driver yielding decisions during their interactions, considering different variable types, including interactive impact variables, traffic condition variables, road design variables, and environment variables. A Distance-Velocity (DV) framework proposed in an earlier study is introduced for the definitions and concepts used in studying pedestrian-vehicle interactions. Logistic regression, support vector machines, neural networks, and random forests are introduced as candidate models. A case study involving six crosswalk locations is conducted, focusing on interactions between pedestrians and right-turn vehicles. The proposed methodological approach is applied, and the performance of the four machine learning methods is compared in terms of model generalization and confusion matrices. The best-performing model is further compared to the typical gap-based model. Results show that the random forest and logistic regression models performed best in modeling pedestrian crossing and driver yielding decisions, respectively, in terms of model generalization. Moreover, the DV-based modeling method (average accuracy of over 90% for pedestrians and 80% for drivers) outperformed the traditional gap-based method on all test seeds. As a key finding, the interactive impacts of the pedestrian and the driver on each other act as a key contributing variable in their decisions.
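The study's data are not available here, but the logistic-regression candidate model on DV-style features can be sketched in a few lines. Everything below is a hypothetical stand-in: synthetic distance/velocity data generated by a simple margin rule, fitted with plain stochastic gradient descent, not the paper's actual variables or pipeline.

```python
import math, random

def make_synthetic(n=300, seed=0):
    """Hypothetical DV-style data: the pedestrian crosses when the vehicle's
    (scaled) distance minus its (scaled) speed exceeds a safety margin."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        d = rng.uniform(0.0, 5.0)
        v = rng.uniform(0.0, 3.0)
        data.append(((d, v), 1 if d - v > 1.0 else 0))
    return data

def train_logistic(data, epochs=200, lr=0.1):
    """Plain logistic regression fitted by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (d, v), y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * d + w[1] * v + b)))
            err = p - y
            w[0] -= lr * err * d
            w[1] -= lr * err * v
            b -= lr * err
    return w, b

def predict(w, b, d, v):
    return 1 if w[0] * d + w[1] * v + b >= 0.0 else 0
```

On this linearly separable toy rule the fitted model recovers the crossing decision with high training accuracy; the paper's comparison additionally covers SVMs, neural networks, and random forests on real crosswalk data.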

6.
What accounts for how we know that certain rules of reasoning, such as reasoning by Modus Ponens, are valid? If our knowledge of validity must be based on some reasoning, then we seem to be committed to the legitimacy of rule‐circular arguments for validity. This paper raises a new difficulty for the rule‐circular account of our knowledge of validity. The source of the problem is that, contrary to traditional wisdom, a universal generalization cannot be inferred just on the basis of reasoning about an arbitrary object. I argue in favor of a more sophisticated constraint on reasoning by universal generalization, one which undermines a rule‐circular account of our knowledge of validity.

7.
It is unclear how children learn labels for multiple overlapping categories such as “Labrador,” “dog,” and “animal.” Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and adults. Here, we report data testing a developmental prediction of the Bayesian model—that more knowledge should lead to narrower category inferences when presented with multiple subordinate exemplars. Two experiments did not support this prediction. Children with more category knowledge showed broader generalization when presented with multiple subordinate exemplars, compared to less knowledgeable children and adults. This implies a U‐shaped developmental trend. The Bayesian model was not able to account for these data, even with inputs that reflected the similarity judgments of children. We discuss implications for the Bayesian model, including a combined Bayesian/morphological knowledge account that could explain the demonstrated U‐shaped trend.

8.
Generalization, deciding whether to extend a property from one stimulus to another, is a fundamental problem faced by cognitive agents in many different settings. Shepard (1987) provided a mathematical analysis of generalization in terms of Bayesian inference over the regions of psychological space that might correspond to a given property. He proved that in the unidimensional case, where regions are intervals of the real line, generalization will be a negatively accelerated function of the distance between stimuli, such as an exponential function. These results have been extended to rectangular consequential regions in multiple dimensions, but not to circular consequential regions, which play an important role in explaining generalization for stimuli that are not represented in terms of separable dimensions. We analyze Bayesian generalization with circular consequential regions, providing bounds on the generalization function and proving that this function is negatively accelerated.
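The paper proves properties of the circular-region case analytically; as an informal numerical check (my own construction, not the authors' proof), one can Monte Carlo the same quantity: sample circle hypotheses in the plane, keep those containing the trained stimulus at the origin, weight them by a size-principle likelihood 1/area, and score probe points at increasing distance. Sampling box, radius range, and counts are arbitrary assumptions.

```python
import math, random

def circular_gradient(distances, n_hyp=40000, seed=3, box=6.0, rmax=3.0):
    """Monte Carlo Bayesian generalization with circular consequential
    regions: probability that a probe at (d, 0) falls in the same region
    as the trained stimulus at (0, 0)."""
    rng = random.Random(seed)
    hyps = []
    for _ in range(n_hyp):
        cx = rng.uniform(-box, box)
        cy = rng.uniform(-box, box)
        r = rng.uniform(0.05, rmax)
        if cx * cx + cy * cy <= r * r:           # circle contains x = (0, 0)
            hyps.append((cx, cy, r, 1.0 / (math.pi * r * r)))  # size principle
    z = sum(w for _, _, _, w in hyps)
    return [sum(w for cx, cy, r, w in hyps
                if (cx - d) ** 2 + cy ** 2 <= r ** 2) / z
            for d in distances]
```

The estimated gradient starts at 1 for zero distance and decreases with distance, consistent with the negatively accelerated form the paper establishes.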

9.
Individuals with autism spectrum disorder (ASD) show atypical patterns of learning and generalization. We explored the possible impacts of autism-related neural abnormalities on perceptual category learning using a neural network model of visual cortical processing. When applied to experiments in which children or adults were trained to classify complex two-dimensional images, the model can account for atypical patterns of perceptual generalization. This is only possible, however, when individual differences in learning are taken into account. In particular, analyses performed with a self-organizing map suggested that individuals with high-functioning ASD show two distinct generalization patterns: one that is comparable to typical patterns, and a second in which there is almost no generalization. The model leads to novel predictions about how individuals will generalize when trained with simplified input sets and can explain why some researchers have failed to detect learning or generalization deficits in prior studies of category learning by individuals with autism. On the basis of these simulations, we propose that deficits in basic neural plasticity mechanisms may be sufficient to account for the atypical patterns of perceptual category learning and generalization associated with autism, but they do not account for why only a subset of individuals with autism would show such deficits. If variations in performance across subgroups reflect heterogeneous neural abnormalities, then future behavioral and neuroimaging studies of individuals with ASD will need to account for such disparities.

10.
Real-life decision problems are usually so complex that they cannot be modelled with a single objective function, thus creating a need for clear and efficient techniques for handling multiple criteria to support the decision process. A widely used technique and one commonly taught in general OR/MS courses is goal programming, which is clear and appealing. On the other hand, goal programming is strongly criticized by multiple-criteria optimization specialists for its non-compliance with the efficiency (Pareto-optimality) principle. In this paper we show how the implementation techniques of goal programming can be used to model the reference point method and its extension, aspiration/reservation-based decision support. Thereby we show a congruence between these approaches and suggest how the GP model with relaxation of some traditional assumptions can be extended to an efficient decision support technique meeting the efficiency principle and other standards of multiobjective optimization theory.
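The efficiency criticism mentioned above can be illustrated with a toy discrete example (my own numbers, not from the paper): once all goals are met, plain goal programming is indifferent between a solution and one that dominates it, whereas a reference-point-style augmented Chebyshev achievement function always prefers the efficient alternative.

```python
def gp_score(f, goals):
    """Plain goal programming objective for maximization criteria:
    total underachievement of the goals; overachievement is ignored."""
    return sum(max(g - v, 0.0) for v, g in zip(f, goals))

def reference_point_score(f, aspiration, eps=1e-3):
    """Augmented Chebyshev achievement function of the reference point
    method: worst individual achievement plus a small regularizing sum,
    which makes the selected solution Pareto-efficient."""
    ach = [v - a for v, a in zip(f, aspiration)]
    return min(ach) + eps * sum(ach)

alternatives = {
    "A": (5.0, 5.0),   # meets both goals exactly
    "B": (7.0, 6.0),   # dominates A on both criteria
}
goals = (5.0, 5.0)

gp_best = min(alternatives, key=lambda k: gp_score(alternatives[k], goals))
rp_best = max(alternatives, key=lambda k: reference_point_score(alternatives[k], goals))
```

Both alternatives get a goal-programming score of zero, so GP may return the dominated "A"; the augmented achievement function strictly prefers "B", which is the behavior the paper's extended GP formulation aims to guarantee.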

11.
Driving is a highly complex task that involves executing multiple cognitive tasks at different levels of abstraction. Traffic emerges from the interaction of a large number of agents implementing these behaviours, and until recent years modelling it through the interaction of these agents in so-called micro-simulators was nearly impossible as their number grew. With growing computing power, however, it is possible to model increasingly large numbers of individual vehicles according to their individual behaviours. These models are usually composed of two sub-models for two well-defined tasks: car-following and lane-change. For lane-change, the literature proposes many different models, but few of them use Computational Intelligence (CI) techniques, and even fewer use personalization to reach individual granularity. This study explores one of the two aspects of lane-change, called lane-change acceptance, in which the driver does or does not perform a lane-change given his intention and the vehicle's environment. We demonstrate how the lane-change acceptance of a specific driver can be learned from his lane-change intention and surrounding environment in an urban scenario using CI techniques such as feed-forward Artificial Neural Networks (ANNs). We work with Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN) architectures, and study how they perform against each other and how the different topologies affect both the generalization of the problem and the learning process.
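To make the MLP idea concrete, here is a minimal fixed-weight sketch: a 2-input, 2-hidden-unit, 1-output feed-forward network scoring lane-change acceptance from two toy features. The features, the hand-set weights, and the "accept if the gap is large and the blind spot is free" rule they encode are all my own illustration, not the study's trained networks.

```python
import math

def mlp_accept(gap_m, blind_spot_occupied):
    """Tiny fixed-weight MLP (2 inputs, 2 tanh hidden units, sigmoid
    output) scoring lane-change acceptance from a toy feature pair:
    gap to the next vehicle (metres) and blind-spot occupancy (0/1)."""
    x = (gap_m / 10.0, float(blind_spot_occupied))
    # Hand-set weights encoding "accept if the gap is large and the
    # blind spot is free" -- illustrative, not trained.
    hidden = [((1.5, -2.0), -1.0), ((2.0, -3.0), -1.5)]
    out_w, out_b = (2.5, 2.5), -2.0
    h = [math.tanh(wx * x[0] + wy * x[1] + b) for (wx, wy), b in hidden]
    z = out_w[0] * h[0] + out_w[1] * h[1] + out_b
    return 1.0 / (1.0 + math.exp(-z))   # acceptance probability
```

A large free gap scores well above 0.5 while a short gap, or any gap with an occupied blind spot, scores below it; in the study these weights are instead learned per driver from recorded intention and environment data.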

12.
This paper proposes an ordinal generalization of the hierarchical classes model originally proposed by De Boeck and Rosenberg (1988). Any hierarchical classes model implies a decomposition of a two-way two-mode binary array M into two component matrices, called bundle matrices, which represent the association relation and the set-theoretical relations among the elements of both modes in M. Whereas the original model restricts the bundle matrices to be binary, the ordinal hierarchical classes model assumes that the bundles are ordinal variables with a prespecified number of values. This generalization results in a classification model with classes ordered along ordinal dimensions. The ordinal hierarchical classes model is shown to subsume Coombs and Kao's (1955) model for nonmetric factor analysis. An algorithm is described to fit the model to a given data set and is subsequently evaluated in an extensive simulation study. An application of the model to student housing data is discussed.
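The binary decomposition underlying any hierarchical classes model is easy to state in code: M is reconstructed as the Boolean product of the two bundle matrices, so M[i][j] = 1 exactly when object i and attribute j share at least one bundle. The small example matrices below are hypothetical, chosen only to exercise the reconstruction.

```python
def boolean_product(obj_bundles, attr_bundles):
    """Reconstruct a two-way binary array M from two binary bundle
    matrices: M[i][j] = 1 iff object i and attribute j share a bundle."""
    return [[int(any(a and b for a, b in zip(row_o, row_a)))
             for row_a in attr_bundles]
            for row_o in obj_bundles]

# Hypothetical 4 objects x 3 attributes array with a rank-2 decomposition.
O = [[1, 0], [1, 1], [0, 1], [0, 0]]   # objects x bundles
A = [[1, 0], [0, 1], [1, 1]]           # attributes x bundles
M = boolean_product(O, A)
```

The ordinal model described in the abstract relaxes exactly this step, letting the bundle entries take ordered values rather than 0/1.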

13.
Within a skill-theory framework, the traditional opposition between generalization and specificity is resolved. Neither generalization nor specificity is considered the normal state. Instead, they are both phenomena that can be predicted and explained in terms of skill structures and functional mechanisms of development or learning. A person acquires a skill in a specific context and must work to gradually extend it to other contexts. Within a task domain and across related domains, a set of structural transformations predict the order of generalization of the skill. Range of generalization of a given skill at a point in time varies widely across people and situations as a function of specified functional mechanisms. Generalization is maximized when (a) tasks are similar and familiar, (b) the environment provides opportunities for practice and support, (c) the person has had time to consolidate skills at the relevant developmental level, and (d) he or she is intelligent and in an emotional state facilitative of the particular skill. True generalization must be distinguished from optimal-level synchrony, where new capacities emerge across domains as a new developmental level emerges.

14.
The aim of this paper is to apply the accuracy based approach to epistemology to the case of higher order evidence: evidence that bears on the rationality of one's beliefs. I proceed in two stages. First, I show that the accuracy based framework that is standardly used to motivate rational requirements supports steadfastness—a position according to which higher order evidence should have no impact on one's doxastic attitudes towards first order propositions. The argument for this will require a generalization of an important result by Greaves and Wallace for the claim that conditionalization maximizes expected accuracy. The generalization I provide will, among other things, allow us to apply the result to cases of self‐locating evidence. In the second stage, I develop an alternative framework. Very roughly, what distinguishes the traditional approach from the alternative one is that, on the traditional picture, we're interested in evaluating the expected accuracy of conforming to an update procedure. On the alternative picture that I develop, instead of considering how good an update procedure is as a plan to conform to, we consider how good it is as a plan to make. I show how, given the use of strictly proper scoring rules, the alternative picture vindicates calibrationism: a view according to which higher order evidence should have a significant impact on our beliefs. I conclude with some thoughts about why higher order evidence poses a serious challenge for standard ways of thinking about rationality.
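The Greaves and Wallace result mentioned above, that conditionalization maximizes expected accuracy under a strictly proper scoring rule, can be checked numerically on a tiny finite case. The sketch below is my own illustration with the Brier score and a three-world toy space, not the paper's generalization to self-locating evidence.

```python
def brier_inaccuracy(credence, truth):
    """Brier inaccuracy of a credence function over a finite set of
    worlds: summed squared distance to the truth-value vector."""
    return sum((credence[w] - (1.0 if w == truth else 0.0)) ** 2
               for w in credence)

def conditionalize(prior, cell):
    """Bayesian conditionalization on learning that the true world is
    in `cell`."""
    z = sum(prior[w] for w in cell)
    return {w: (prior[w] / z if w in cell else 0.0) for w in prior}

def expected_inaccuracy(prior, partition, update):
    """Expected Brier inaccuracy, from the prior's standpoint, of
    adopting update(prior, cell) upon learning which cell is true."""
    total = 0.0
    for cell in partition:
        for w in cell:
            total += prior[w] * brier_inaccuracy(update(prior, cell), w)
    return total
```

On a prior over three worlds with a two-cell evidence partition, conditionalizing yields strictly lower expected inaccuracy than ignoring the evidence, which is the core of the accuracy argument the paper generalizes.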

15.
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization of the procedure to a model with multidimensional ability parameters are presented. The procedure is a generalization of a procedure by Albert (1992) for estimating the two-parameter normal ogive model. The procedure supports analyzing data from multiple populations and incomplete designs. It is shown that restrictions can be imposed on the factor matrix for testing specific hypotheses about the ability structure. The technique is illustrated using simulated and real data. The authors would like to thank Norman Verhelst for his valuable comments and ACT, CITO group and SweSAT for the use of their data.
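The estimation machinery in the paper is a Gibbs sampler in the style of Albert (1992); what can be shown compactly here is just the response function being estimated, the three-parameter normal ogive: P(correct) = c + (1 - c) * Phi(a * (theta - b)), with discrimination a, difficulty b, and lower asymptote (guessing) c.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ogive3(theta, a, b, c):
    """Three-parameter normal ogive IRT model: probability of a correct
    response at ability theta, with discrimination a, difficulty b, and
    guessing parameter c."""
    return c + (1.0 - c) * normal_cdf(a * (theta - b))
```

At theta = b the probability is c + (1 - c)/2, and as theta falls the curve flattens onto the guessing floor c; Albert-style estimation augments observed responses with latent normal variables so each conditional draw is conjugate.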

16.
Ben-Yashar R, Nitzan S, Vos HJ. Psicothema, 2006, 18(3): 652-660
This paper compares the determination of optimal cutoff points for single and multiple tests in the field of personnel selection. Decisional skills of predictor tests composing the multiple test are assumed to be endogenous variables that depend on the cutting points to be set. It is shown how the predictor cutoffs and the collective decision rule are determined dependently by maximizing the multiple test's common expected utility. Our main result specifies the condition that determines the relationship between the optimal cutoff points for single and multiple tests, given the number of predictor tests, the collective decision rule (aggregation procedure of predictor tests' recommendations) and the function relating the tests' decisional skills to the predictor cutoff points. The proposed dichotomous decision-making method is illustrated by an empirical example of selecting trainees by means of the Assessment Center method.
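For the single-test case, the expected-utility logic of an optimal cutoff can be sketched numerically (my own toy setup, not the paper's model): suitable and unsuitable applicants get unit-variance normal score distributions, each accept/reject outcome gets a utility, and the cutoff is found by grid search over expected utility.

```python
import math

def normal_sf(z):
    """P(Z >= z) for a standard normal variate."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def expected_utility(cut, base_rate, mu_s, mu_u, u_tp, u_fn, u_fp, u_tn):
    """Expected utility of accepting applicants scoring >= cut, with
    unit-variance normal score distributions for suitable (mean mu_s)
    and unsuitable (mean mu_u) applicants."""
    p_acc_s = normal_sf(cut - mu_s)
    p_acc_u = normal_sf(cut - mu_u)
    return (base_rate * (p_acc_s * u_tp + (1 - p_acc_s) * u_fn)
            + (1 - base_rate) * (p_acc_u * u_fp + (1 - p_acc_u) * u_tn))

def optimal_cutoff(grid_lo=-3.0, grid_hi=4.0, step=0.01, **kw):
    """Grid-search the cutoff that maximizes expected utility."""
    cuts = []
    c = grid_lo
    while c <= grid_hi:
        cuts.append(c)
        c += step
    return max(cuts, key=lambda c: expected_utility(c, **kw))
```

With a 50% base rate, means 0 and 1, and symmetric utilities, the optimum lands at the midpoint between the two means, as the likelihood-ratio analysis predicts; the paper's multiple-test result adds the interplay between per-predictor cutoffs and the collective decision rule.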

17.
This commentary argues that the quality and usefulness of student‐recruited data can be evaluated by examining the external validity and generalization issues related to this sampling method. Therefore, we discuss how the sampling methods of student‐ and non‐student‐recruited samples can enhance or diminish external validity and generalization. Next, we present the advantages of the student‐recruited sampling method (heterogeneity of the sample, student learning, cost reduction, and elaborate research designs) and conclude with making additional suggestions on how to improve the quality of these data.

18.
19.
There is much empirical evidence that randomized response methods improve the cooperation of the respondents when asking sensitive questions. The traditional methods for analysing randomized response data are restricted to univariate data and only allow inferences at the group level due to the randomized response sampling design. Here, a novel beta‐binomial model is proposed for analysing multivariate individual count data observed via a randomized response sampling design. This new model allows for the estimation of individual response probabilities (response rates) for multivariate randomized response data utilizing an empirical Bayes approach. A common beta prior specifies that individuals in a group are tied together and the beta prior parameters are allowed to be cluster‐dependent. A Bayes factor is proposed to test for group differences in response rates. An analysis of a cheating study, where 10 items measure cheating or academic dishonesty, is used to illustrate application of the proposed model.  
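The traditional group-level analysis the abstract contrasts itself with can be sketched directly (the design parameters below are a common forced-response setup of my choosing, not the cheating study's): the randomizing device masks individual answers, and the population rate is recovered from the observed "yes" proportion by a moment correction.

```python
import random

def simulate_randomized_response(true_rate, n, p_truth=0.75,
                                 p_forced_yes=0.125, seed=7):
    """Simulate a forced-response randomized design: with probability
    p_truth the respondent answers honestly; otherwise the device forces
    'yes' (p_forced_yes) or 'no'. Returns the observed 'yes' proportion."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        carrier = rng.random() < true_rate
        u = rng.random()
        if u < p_truth:
            yes += carrier                # honest answer
        elif u < p_truth + p_forced_yes:
            yes += 1                      # forced 'yes'
    return yes / n

def estimate_rate(obs_yes, p_truth=0.75, p_forced_yes=0.125):
    """Moment estimator from E[yes] = p_truth * pi + p_forced_yes."""
    return (obs_yes - p_forced_yes) / p_truth
```

This recovers only the group rate; the paper's beta-binomial model goes further, pooling an individual's multivariate item counts through a common beta prior to obtain individual response-rate estimates.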

20.
As family systems research has expanded, so have investigations into how marital partners coparent together. Although coparenting research has increasingly found support for the influential role of coparenting on both marital relationships and parenting practices, coparenting has traditionally been investigated as part of an indirect system which begins with marital health, is mediated by coparenting processes, and then culminates in each partner's parenting. The field has not tested how this traditional model compares with the equally plausible alternative model, in which coparenting simultaneously predicts both marital relationships and parenting practices. Furthermore, statistical and practical limitations have typically resulted in only one parent being analyzed in these models. This study used model-fitting analyses to include both wives and husbands in a test of these two alternative models of the role of coparenting in the family system. Our data suggested that both the traditional indirect model (marital health to coparenting to parenting practices), and the alternative predictor model where coparenting alliance directly and simultaneously predicts marital health and parenting practices, fit for both spouses. This suggests that dynamic and multiple roles may be played by coparenting in the overall family system, and raises important practical implications for family clinicians.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号