Similar Documents
20 similar documents found (search time: 0 ms)
1.
SUSTAIN: a network model of category learning   Cited by: 5 (self-citations: 0, citations by others: 5)
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes, attractors, or rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but also by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
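The recruitment rule described above — add a new cluster only when the existing clusters mispredict — can be sketched in a few lines of Python. This is a minimal illustration of the surprise-driven recruitment idea, not the published SUSTAIN equations; the city-block distance, learning rate, and winner-update rule are simplifying assumptions.

```python
import numpy as np

def sustain_sketch(stimuli, labels, lr=0.1):
    """Surprise-driven cluster recruitment in the spirit of SUSTAIN.

    If the best-matching cluster predicts the wrong label (a
    'surprising event'), a new cluster is recruited at the stimulus;
    otherwise the winner is nudged toward the stimulus.
    """
    clusters, cluster_labels = [], []
    for x, y in zip(stimuli, labels):
        win = None
        if clusters:
            dists = [np.abs(x - c).sum() for c in clusters]
            win = int(np.argmin(dists))
        if win is None or cluster_labels[win] != y:
            clusters.append(np.array(x, dtype=float))   # recruit a new cluster
            cluster_labels.append(y)
        else:
            clusters[win] += lr * (x - clusters[win])   # tune the winning cluster
    return clusters, cluster_labels
```

Run on two well-separated categories, the sketch settles on one cluster per category, each drifting toward a prototype of its members.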

2.
Recency effects (REs) have been well established in memory and probability learning paradigms but have received little attention in category learning research. Extant categorization models predict REs to be unaffected by learning, whereas a functional interpretation of REs, suggested by results in other domains, predicts that people are able to learn sequential dependencies and incorporate this information into their responses. These contrasting predictions were tested in 2 experiments involving a classification task in which outcome sequences were autocorrelated. Experiment 1 showed that reliance on recent outcomes adapts to the structure of the task, in contrast to models' predictions. Experiment 2 provided constraints on how sequential information is learned and suggested possible extensions to current models to account for this learning.
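The key manipulation — outcome sequences with positive autocorrelation, where relying on recent outcomes actually pays off — is easy to simulate. A minimal sketch (the repeat probability and the one-back "recency" strategy are illustrative assumptions, not the experiments' design):

```python
import random

def autocorrelated_outcomes(n, p_repeat=0.8, seed=0):
    """Binary outcome sequence in which each outcome repeats the
    previous one with probability p_repeat (positive autocorrelation)."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1)]
    for _ in range(n - 1):
        seq.append(seq[-1] if rng.random() < p_repeat else 1 - seq[-1])
    return seq

def recency_accuracy(seq):
    """Accuracy of the simple 'predict the last outcome' recency strategy."""
    hits = sum(seq[t] == seq[t - 1] for t in range(1, len(seq)))
    return hits / (len(seq) - 1)
```

On such a sequence the one-back strategy scores near p_repeat, whereas on an independent sequence it would score near chance — which is why a learner who adapts reliance on recency to the task structure outperforms one who does not.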

3.
Cognitive Processing - We use a feature-based association model to fit group-level and individual-level category learning and transfer data. The model assumes that people use corrective feedback to...

4.
ALCOVE: an exemplar-based connectionist model of category learning.   Cited by: 16 (self-citations: 0, citations by others: 16)
ALCOVE (attention learning covering map) is a connectionist model of category learning that incorporates an exemplar-based representation (Medin & Schaffer, 1978; Nosofsky, 1986) with error-driven learning (Gluck & Bower, 1988; Rumelhart, Hinton, & Williams, 1986). ALCOVE selectively attends to relevant stimulus dimensions, is sensitive to correlated dimensions, can account for a form of base-rate neglect, does not suffer catastrophic forgetting, and can exhibit 3-stage (U-shaped) learning of high-frequency exceptions to rules, whereas such effects are not easily accounted for by models using other combinations of representation and learning method.
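The exemplar-plus-attention representation at ALCOVE's core can be sketched as follows. The similarity-gradient form is the standard one for this model family, but the parameter values and the simple summed-similarity decision rule are illustrative; the full ALCOVE network also learns the attention and association weights by gradient descent on error, which this sketch omits.

```python
import numpy as np

def alcove_activation(stimulus, exemplars, attention, c=2.0, r=1, q=1):
    """Exemplar-node activations: a_j = exp(-c * (sum_i alpha_i *
    |x_i - e_ji|^r)^(q/r)), with attention weights alpha_i scaling
    each stimulus dimension's contribution to distance."""
    d = np.sum(attention * np.abs(exemplars - stimulus) ** r, axis=1) ** (q / r)
    return np.exp(-c * d)

def classify(stimulus, exemplars, labels, attention, n_cats=2):
    """Assign the category whose exemplars are most similar in total."""
    acts = alcove_activation(stimulus, exemplars, attention)
    scores = np.array([acts[labels == k].sum() for k in range(n_cats)])
    return int(np.argmax(scores))
```

With attention concentrated on the one relevant dimension, classification ignores the irrelevant dimension entirely — the mechanism behind ALCOVE's selective attention to relevant stimulus dimensions.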

5.
This paper reviews a body of work conducted in our laboratory that applies functional magnetic resonance imaging (fMRI) to better understand the biological response and change that occurs during prototype-distortion learning. We review results from two experiments (Little, Klein, Shobat, McClure, & Thulborn, 2004; Little & Thulborn, 2005) that provide support for increasing neuronal efficiency by way of a two-stage model that includes an initial period of recruitment of tissue across a distributed network that is followed by a period of increasing specialization with decreasing volume across the same network. Across the two studies, participants learned to classify patterns of random-dot distortions (Posner & Keele, 1968) into categories. At four points across this learning process subjects underwent examination by fMRI using a category-matching task. A large-scale network, altered across the protocol, was identified to include the frontal eye fields, both inferior and superior parietal lobules, and visual cortex. As behavioral performance increased, the volume of activation within these regions first increased and later in the protocol decreased. Based on our review of this work we propose that: (i) category learning is reflected as specialization of the same network initially implicated to complete the novel task, and (ii) this network encompasses regions not previously reported to be affected by prototype-distortion learning.  相似文献   

6.
The assumption that people possess a strategy repertoire for inferences has been raised repeatedly. The strategy selection learning theory specifies how people select strategies from this repertoire. The theory assumes that individuals select strategies proportional to their subjective expectations of how well the strategies solve particular problems; such expectations are assumed to be updated by reinforcement learning. The theory is compared with an adaptive network model that assumes people make inferences by integrating information according to a connectionist network. The network's weights are modified by error correction learning. The theories were tested against each other in 2 experimental studies. Study 1 showed that people substantially improved their inferences through feedback, which was appropriately predicted by the strategy selection learning theory. Study 2 examined a dynamic environment in which the strategies' performances changed. In this situation a quick adaptation to the new situation was not observed; rather, individuals got stuck on the strategy they had successfully applied previously. This "inertia effect" was most strongly predicted by the strategy selection learning theory.  相似文献   
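The selection mechanism — choose a strategy with probability proportional to its expectancy, then update the chosen expectancy by reinforcement — can be sketched as below. The initial expectancies, payoffs, and learning rate are illustrative assumptions, not the fitted model from the studies.

```python
import random

def run_ssl(payoffs, n_trials=500, lr=0.2, seed=1):
    """Toy strategy selection learning: expectancies start equal, a
    strategy is sampled proportional to its expectancy, and the chosen
    expectancy moves toward the payoff it produced."""
    rng = random.Random(seed)
    expectancies = [1.0] * len(payoffs)
    for _ in range(n_trials):
        # sample a strategy proportional to expectancy
        r, chosen = rng.random() * sum(expectancies), 0
        for i, e in enumerate(expectancies):
            r -= e
            if r <= 0:
                chosen = i
                break
        # reinforcement update of the chosen strategy only
        expectancies[chosen] += lr * (payoffs[chosen] - expectancies[chosen])
    return expectancies
```

Because a strategy with a shrunken expectancy is rarely sampled, it is also rarely updated — one way to read the "inertia effect" above: after the environment changes, the model keeps applying the previously successful strategy rather than re-exploring.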

7.
Recent approaches to human category learning have often (re)invoked the notion of systematic search for good rules. The RULEX model of category learning is emblematic of this renewed interest in rule-based categorization, and is able to account for crucial findings previously thought to provide evidence in favor of prototype or exemplar models. However, a major difficulty in comparing RULEX to other models is that RULEX is framed in terms of a stochastic search process, with no analytic expressions available for its predictions. The result is that RULEX predictions can only be found through time-consuming simulations, making model-fitting very difficult, and all but prohibiting more detailed investigations of the model. To remedy this problem, this paper describes an algorithmic method of calculating RULEX predictions that does not rely on numerical simulation, and yields some insight into the behavior of the model itself.

8.
A new connectionist model (named RASHNL) accounts for many "irrational" phenomena found in nonmetric multiple-cue probability learning, wherein people learn to utilize a number of discrete-valued cues that are partially valid indicators of categorical outcomes. Phenomena accounted for include cue competition, effects of cue salience, utilization of configural information, decreased learning when information is introduced after a delay, and effects of base rates. Experiments 1 and 2 replicate previous experiments on cue competition and cue salience, and fits of the model provide parameter values for making qualitatively correct predictions for many other situations. The model also makes 2 new predictions, confirmed in Experiments 3 and 4. The model formalizes 3 explanatory principles: rapidly shifting attention with learned shifts, decreasing learning rates, and graded similarity in exemplar representation.
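Cue competition of the kind such a model must capture already falls out of plain error-driven (delta-rule) learning, the family RASHNL builds on. A minimal blocking demonstration (the cue indices, rates, and trial structure are illustrative, not the article's experiments):

```python
def delta_rule(trials, n_cues, w=None, lr=0.3, epochs=50):
    """Delta-rule associative learning over sparse binary cue vectors.
    Each trial is (list of present cue indices, outcome). Cues that
    co-occur with an already predictive cue acquire little weight."""
    w = list(w) if w is not None else [0.0] * n_cues
    for _ in range(epochs):
        for cues, outcome in trials:
            err = outcome - sum(w[i] for i in cues)  # shared prediction error
            for i in cues:
                w[i] += lr * err
    return w
```

Training cue A alone to predict the outcome and then training the compound AB leaves cue B with almost no weight (blocking), because A already absorbs the prediction error; a control trained on AB from scratch splits the weight evenly.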

9.
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets were analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation relies not only on the maximal likelihood but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.

10.
The ALCOVE model of category learning, despite its considerable success in accounting for human performance across a wide range of empirical tasks, is limited by its reliance on spatial stimulus representations. Some stimulus domains are better suited to featural representation, characterizing stimuli in terms of the presence or absence of discrete features, rather than as points in a multidimensional space. We report on empirical data measuring human categorization performance across a featural stimulus domain and show that ALCOVE is unable to capture fundamental qualitative aspects of this performance. In response, a featural version of the ALCOVE model is developed, replacing the spatial stimulus representations that are usually generated by multidimensional scaling with featural representations generated by additive clustering. We demonstrate that this featural version of ALCOVE is able to capture human performance where the spatial model failed, explaining the difference in terms of the contrasting representational assumptions made by the two approaches. Finally, we discuss ways in which the ALCOVE categorization model might be extended further to use “hybrid” representational structures combining spatial and featural components.

11.
The authors propose a reinforcement-learning mechanism as a model for recurrent choice and extend it to account for skill learning. The model was inspired by recent research in neurophysiological studies of the basal ganglia and provides an integrated explanation of recurrent choice behavior and skill learning. The behavior includes effects of differential probabilities, magnitudes, variabilities, and delay of reinforcement. The model can also produce the violation of independence, preference reversals, and the goal gradient of reinforcement in maze learning. An experiment was conducted to study learning of action sequences in a multistep task. The fit of the model to the data demonstrated its ability to account for complex skill learning. The advantages of incorporating the mechanism into a larger cognitive architecture are discussed.
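A minimal version of such a mechanism — action values updated by a reward prediction error, choices drawn from a softmax over values — reproduces the basic effect of differential reinforcement probabilities. The two-armed task and all parameter values below are illustrative, not the article's fitted model.

```python
import math
import random

def simulate_choice(p_reward, n_trials=2000, lr=0.1, temp=0.2, seed=3):
    """Reinforcement-learning model of recurrent choice: each action's
    value tracks its reward rate via a prediction-error update, and
    choices are sampled from a softmax over the current values."""
    rng = random.Random(seed)
    q = [0.0] * len(p_reward)
    counts = [0] * len(p_reward)
    for _ in range(n_trials):
        # softmax action selection
        z = [math.exp(v / temp) for v in q]
        r, a = rng.random() * sum(z), 0
        for i, w in enumerate(z):
            r -= w
            if r <= 0:
                a = i
                break
        # probabilistic reward and prediction-error update
        reward = 1.0 if rng.random() < p_reward[a] else 0.0
        q[a] += lr * (reward - q[a])
        counts[a] += 1
    return q, counts
```

With reward probabilities 0.8 versus 0.2, the learned values approach the true rates and choice frequency concentrates on the richer option — the effect of differential reinforcement probabilities mentioned above.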

12.
13.
14.
After learning to categorize a set of alien-like stimuli in the context of a story, a group of 5-year-old children and adults judged pairs of stimuli from different categories to be less similar than did groups not learning the category distinction. In a same-different task, the learning group made more errors on pairs of non-identical stimuli from the same category than did the other groups, suggesting increased within-category item similarity, or compression. These expansion and compression effects add further support to the view that concept formation involves systematic changes in the metric of similarity space within which objects are represented. They also suggest that these processes do not vary with age, which is at least consistent with the hypothesis that they are fundamental to the mechanisms underlying concept formation.

15.
This article introduces a connectionist model of category learning that takes into account the prior knowledge that people bring to new learning situations. In contrast to connectionist learning models that assume a feedforward network and learn by the delta rule or backpropagation, this model, the knowledge-resonance model, or KRES, employs a recurrent network with bidirectional symmetric connections whose weights are updated according to a contrastive Hebbian learning rule. We demonstrate that when prior knowledge is represented in the network, KRES accounts for a considerable range of empirical results regarding the effects of prior knowledge on category learning, including (1) the accelerated learning that occurs in the presence of knowledge, (2) the better learning in the presence of knowledge of category features that are not related to prior knowledge, (3) the reinterpretation of features with ambiguous interpretations in light of error-corrective feedback, and (4) the unlearning of prior knowledge when that knowledge is inappropriate in the context of a particular category.
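The contrastive Hebbian rule itself can be sketched in a stripped-down setting. The snippet keeps only the two-phase structure (outputs free in the "minus" phase, clamped to the teacher in the "plus" phase) with no hidden layer or recurrent settling, so it is a sketch of the learning principle rather than the KRES network; the toy task and rates are illustrative.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def chl_train(data, n_in, n_out, lr=0.5, epochs=200):
    """Minimal contrastive Hebbian learning: weights change by the
    difference between the plus-phase (output clamped to target) and
    minus-phase (output free) Hebbian co-products. With no hidden
    units this reduces to a delta-like rule."""
    w = [[0.0] * n_in for _ in range(n_out)]
    for _ in range(epochs):
        for x, t in data:
            y_minus = [sigmoid(sum(w[i][j] * x[j] for j in range(n_in)))
                       for i in range(n_out)]           # minus phase: free output
            y_plus = t                                   # plus phase: clamped output
            for i in range(n_out):
                for j in range(n_in):
                    w[i][j] += lr * (y_plus[i] - y_minus[i]) * x[j]
    return w

def predict(w, x):
    return [sigmoid(sum(wi[j] * x[j] for j in range(len(x)))) for wi in w]
```

The third input unit acts as an always-on bias. In the full KRES model the same phase-difference rule operates over a recurrent network whose settling lets prior-knowledge units "resonate" with perceptual features.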

16.
Stress is often seen as a negative factor that affects every individual's quality of life and decision making. To help avoid or deal with extreme emotions caused by an external stressor, a number of practices have been introduced. In the scope of this paper, we take three kinds of therapy into account: mindfulness, humor, and music therapy. This paper aims to see how various practices help people to cope with stress, using mathematical modelling. We present practical implementations in the form of client–server software, incorporating the computational model which describes therapy effects for overcoming stress based on quantitative neuropsychological research. The underlying network model simulates the elicitation of an extremely stressful emotion due to a strong stress-inducing event as an external stimulus, followed by a therapy practice simulation leading to a reduction of the stress level. Each simulation is based on user input and preferences, integrating a parameter tuning process that fits the simulation to a particular user. The client–server architecture software which has been designed and developed completely fulfills this objective. It includes a server part with embedded MATLAB interaction and an API for client communication.

17.
In what follows, we explore the general relationship between eye gaze during a category learning task and the information conveyed by each member of the learned category. To understand the nature of this relationship empirically, we used eye tracking during a novel object classification paradigm. Results suggest that the average fixation time per object during learning is inversely proportional to the amount of information that object conveys about its category. This inverse relationship may seem counterintuitive; however, objects that have a high-information value are inherently more representative of their category. Therefore, their generality captures the essence of the category structure relative to less representative objects. As such, it takes relatively less time to process these objects than their less informative companions. We use a general information measure referred to as representational information theory (Vigo, 2011a, 2013a) to articulate and interpret the results from our experiment and compare its predictions to those of three models of prototypicality.

18.
A novel theoretical approach to human category learning is proposed in which categories are represented as coordinated statistical models of the properties of the members. Key elements of the account are learning to recode inputs as task-constrained principal components and evaluating category membership in terms of model fit, that is, the fidelity of the reconstruction after recoding and decoding the stimulus. The approach is implemented as a computational model called DIVA (for DIVergent Autoencoder), an artificial neural network that uses reconstructive learning to solve N-way classification tasks. DIVA shows good qualitative fits to benchmark human learning data and provides a compelling theoretical alternative to established models.
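The divergent-reconstruction idea — one reconstruction channel per category, classification by which channel reconstructs a stimulus best — can be sketched as below. The real DIVA shares a hidden layer across categories and uses nonlinear units; this sketch keeps only the classify-by-reconstruction-fit principle, and the architecture, rates, and toy data are illustrative assumptions.

```python
import numpy as np

class DivaSketch:
    """One tiny linear autoencoder ('channel') per category, trained
    only on that category's members; a stimulus is assigned to the
    channel that reconstructs it with the least squared error."""

    def __init__(self, n_cats, n_dims, n_hidden=1, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.enc = [rng.normal(0, 0.1, (n_hidden, n_dims)) for _ in range(n_cats)]
        self.dec = [rng.normal(0, 0.1, (n_dims, n_hidden)) for _ in range(n_cats)]
        self.lr = lr

    def _recon(self, k, x):
        return self.dec[k] @ (self.enc[k] @ x)

    def train(self, X, y, epochs=200):
        for _ in range(epochs):
            for x, k in zip(X, y):
                h = self.enc[k] @ x
                err = self._recon(k, x) - x            # reconstruction error
                self.dec[k] -= self.lr * np.outer(err, h)
                self.enc[k] -= self.lr * np.outer(self.dec[k].T @ err, x)
        return self

    def classify(self, x):
        errs = [np.sum((self._recon(k, x) - x) ** 2) for k in range(len(self.enc))]
        return int(np.argmin(errs))
```

Each channel's bottleneck forces it to learn the statistical regularities of its own category, so a stimulus that fits one category's model is reconstructed faithfully by that channel and poorly by the others.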

19.
Bod, R. (2009). Cognitive Science, 33(5), 752-793.
While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase-structure trees should be assigned to initial sentences, s/he allows (implicitly) for all possible trees and lets linguistic experience decide which is the "best" tree for each sentence. The best tree is obtained by maximizing "structural analogy" between a sentence and previous sentences, which is formalized by the most probable shortest combination of subtrees from all trees of previous sentences. Corpus-based experiments with this model on the Penn Treebank and the Childes database indicate that it can learn both exemplar-based and rule-based aspects of language, ranging from phrasal verbs to auxiliary fronting. By having learned the syntactic structures of sentences, we have also learned the grammar implicit in these structures, which can in turn be used to produce new sentences. We show that our model mimics children's language development from item-based constructions to abstract constructions, and that the model can simulate some of the errors made by children in producing complex questions.

20.
Participants learned simple and complex category structures under typical single-task conditions and when performing a simultaneous numerical Stroop task. In the simple categorization tasks, each set of contrasting categories was separated by a unidimensional explicit rule, whereas the complex tasks required integrating information from three stimulus dimensions and resulted in implicit rules that were difficult to verbalize. The concurrent Stroop task dramatically impaired learning of the simple explicit rules, but did not significantly delay learning of the complex implicit rules. These results support the hypothesis that category learning is mediated by multiple learning systems.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号