Similar articles
20 similar articles found (search time: 31 ms)
1.
2.
The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and the number of words in the languages being learned. Although the results are intuitive from the perspective of a language learner (longer sentences, less training, and a larger language all make learning more difficult), standard computational proposals fail to capture several of these results. We describe how probabilistic models of segmentation can be modified to take into account some notion of memory or resource limitations in order to provide a closer match to human performance.
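As a concrete flavour of the distributional computations such experiments probe, here is a minimal transitional-probability segmenter (a toy sketch, not the probabilistic models the paper evaluates; the syllable corpus is invented): syllable pairs that reliably follow one another stay inside a word, and a boundary is posited wherever the transitional probability dips.

```python
from collections import Counter

def transitional_probs(utterances):
    """Estimate P(next syllable | current syllable) from unsegmented input."""
    bigrams, unigrams = Counter(), Counter()
    for utt in utterances:
        sylls = utt.split()
        unigrams.update(sylls)
        bigrams.update(zip(sylls, sylls[1:]))
    return {pair: bigrams[pair] / unigrams[pair[0]] for pair in bigrams}

def segment(utterance, tp, threshold=0.75):
    """Insert a word boundary wherever transitional probability dips below threshold."""
    sylls = utterance.split()
    words, current = [], [sylls[0]]
    for a, b in zip(sylls, sylls[1:]):
        if tp.get((a, b), 0.0) < threshold:
            words.append(" ".join(current))
            current = []
        current.append(b)
    words.append(" ".join(current))
    return words
```

Within-word pairs ("ba bi") occur with high conditional probability; pairs spanning a word boundary ("bu go") occur less reliably, which is the only cue the segmenter uses.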

3.
Processing language requires the retrieval of concepts from memory in response to an ongoing stream of information. This retrieval is facilitated if one can infer the gist of a sentence, conversation, or document and use that gist to predict related concepts and disambiguate words. This article analyzes the abstract computational problem underlying the extraction and use of gist, formulating this problem as a rational statistical inference. This leads to a novel approach to semantic representation in which word meanings are represented in terms of a set of probabilistic topics. The topic model performs well in predicting word association and the effects of semantic association and ambiguity on a variety of language-processing and memory tasks. It also provides a foundation for developing more richly structured statistical models of language, as the generative process assumed in the topic model can easily be extended to incorporate other kinds of semantic and syntactic structure.
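The generative process a topic model assumes can be sketched in a few lines (hand-set toy topics, not the fitted model from the article): each word is produced by first drawing a topic from the document's topic mixture, then drawing a word from that topic's distribution. Ambiguous words such as "bank" can live in several topics at once.

```python
import random

def generate_document(topics, topic_weights, n_words, rng):
    """Generate words by sampling a topic, then a word from that topic."""
    names = list(topics)
    words = []
    for _ in range(n_words):
        topic = rng.choices(names, weights=topic_weights)[0]
        dist = topics[topic]
        words.append(rng.choices(list(dist), weights=list(dist.values()))[0])
    return words

# Two hand-set topics; the ambiguous word "bank" appears in both.
topics = {
    "finance": {"bank": 0.5, "money": 0.5},
    "river": {"bank": 0.5, "water": 0.5},
}
doc = generate_document(topics, [0.9, 0.1], 50, random.Random(0))
```

Inference runs this process in reverse: observing many "money" tokens raises the posterior weight of the finance topic, which in turn disambiguates "bank".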

4.
Human memory and Internet search engines face a shared computational problem, needing to retrieve stored pieces of information in response to a query. We explored whether they employ similar solutions, testing whether we could predict human performance on a fluency task using PageRank, a component of the Google search engine. In this task, people were shown a letter of the alphabet and asked to name the first word beginning with that letter that came to mind. We show that PageRank, computed on a semantic network constructed from word-association data, outperformed word frequency and the number of words for which a word is named as an associate as a predictor of the words that people produced in this task. We identify two simple process models that could support this apparent correspondence between human memory and Internet search, and relate our results to previous rational models of memory.
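PageRank over a word-association network is straightforward to sketch; the graph below is a hypothetical four-word toy, not the norms used in the study. Each iteration redistributes rank along outgoing edges, with a damping term, so that words named as associates by many well-connected cues accumulate rank.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Iterative PageRank over a directed graph given as {node: [out-neighbours]}."""
    nodes = set(graph) | {n for outs in graph.values() for n in outs}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, outs in graph.items():
            if outs:
                share = damping * rank[node] / len(outs)
                for n in outs:
                    new[n] += share
            else:  # dangling node: spread its rank uniformly
                for n in nodes:
                    new[n] += damping * rank[node] / len(nodes)
        rank = new
    return rank

# Tiny hypothetical word-association graph: cue -> named associates.
graph = {
    "cat": ["dog", "animal"],
    "dog": ["animal", "cat"],
    "animal": ["dog"],
    "apple": ["animal"],
}
ranks = pagerank(graph)
```

"animal", which receives associations from three cues, ends up with higher rank than "apple", which receives none; this ordering, not raw frequency, is what the fluency prediction rests on.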

5.
Chunking mechanisms in human learning
Pioneering work in the 1940s and 1950s suggested that the concept of 'chunking' might be important in many processes of perception, learning and cognition in humans and animals. We summarize here the major sources of evidence for chunking mechanisms, and consider how such mechanisms have been implemented in computational models of the learning process. We distinguish two forms of chunking: the first deliberate, under strategic control, and goal-oriented; the second automatic, continuous, and linked to perceptual processes. Recent work with discrimination-network computational models of long- and short-term memory (EPAM/CHREST) has produced a diverse range of applications of perceptual chunking. We focus on recent successes in verbal learning, expert memory, language acquisition and learning multiple representations, to illustrate the implementation and use of chunking mechanisms within contemporary models of human learning.
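The automatic, perceptual form of chunking can be given a minimal flavour with a greedy pair-merging sketch (far simpler than the EPAM/CHREST discrimination networks the abstract describes): the most frequent adjacent pair of elements is repeatedly fused into a single chunk, so frequently co-occurring material comes to be encoded as one unit.

```python
from collections import Counter

def learn_chunks(sequence, n_merges):
    """Greedily merge the most frequent adjacent pair into a chunk, repeatedly."""
    seq = list(sequence)
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(a + b)  # fuse the pair into one chunk
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq
```

After one merge the repeated pair "ab" becomes a unit; after a second, two "ab" units fuse into "abab", illustrating how chunks build hierarchically.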

6.
In learning the meanings of words, children are guided by a set of constraints that give privilege to some potential meanings over others. These word-learning constraints are sometimes viewed as part of a specifically linguistic endowment. However, several recent computational models suggest concretely how word learning (constraints included) might emerge from more general aspects of cognition, such as associative learning, attention and rational inference. This article reviews these models, highlighting the link between general cognitive forces and the word learning they subserve. Ultimately, these cognitive forces might leave their mark not just on language learning, but also on language itself: in constraining the space of possible meanings, they place limits on cross-linguistic semantic variation.

7.
Understanding a sentence requires a working memory of the partial products of comprehension, so that linguistic relations between temporally distal parts of the sentence can be rapidly computed. We describe an emerging theoretical framework for this working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item (but not order) information subject to interference from similar items, and activation decay (forgetting over time). A computational model embodying these principles provides an explanation of the functional capacities and severe limitations of human processing, as well as accounts of reading times. The broad implication is that the detailed nature of cross-linguistic sentence processing emerges from the interaction of general principles of human memory with the specialized task of language comprehension.
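Two of the abstract's principles, activation decay and similarity-based interference, can be sketched numerically (an ACT-R-flavoured toy; the decay and noise parameters are illustrative assumptions, not the model's fitted values): a trace's activation falls as a power law of its age, and retrieval becomes harder as similar competitors accumulate.

```python
import math

def activation(times_since_encoding, decay=0.5):
    """Base-level activation: log-sum of power-law decayed trace strengths."""
    return math.log(sum(t ** (-decay) for t in times_since_encoding))

def retrieval_prob(target_act, distractor_acts, noise=0.4):
    """Probability of retrieving the target under similarity-based competition
    (a softmax over the activations of target and interfering items)."""
    acts = [target_act] + list(distractor_acts)
    exps = [math.exp(a / noise) for a in acts]
    return exps[0] / sum(exps)
```

A recently encoded item outcompetes an older one, and each added similar distractor lowers the chance of retrieving the target, which is the interference signature the framework uses to explain reading-time effects.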

8.
Biological and computational concepts that underlie the nature of working memory are briefly reviewed. The conceptualization of working memory has changed dramatically in the last 30 years. Current biological work has monitored several aspects of memory, including activation decay, sustained activation, long-term connection change, and differential structures for episodic (hippocampal formation) and procedural learning. Current connectionist modeling has identified factors including multiple-region-based processing, control processing as well as data storage, tradeoffs between fast- and slow-connection-change learning effects, and the speeding of acquisition via multiple levels of learning. The need to relate the biological, behavioral, and computational constraints into models of working memory is discussed. Finally, conceptualizations of working memory must acknowledge the need for human learning systems to be robust enough to operate in a dynamic world.

9.
Detailed computational modeling of human memory has typically been aimed at either short-term (working) memory or long-term memory in isolation. However, recent research highlights the importance of interactions between these systems for both item and order information. At the same time, computational models of both systems are beginning to converge onto a common framework in which items are associated with an evolving "context" signal and subsequently compete with one another at recall. We review some of these models, and discuss a common mechanism capable of modelling working memory and its interaction with long-term memory, focussing on memory for verbal sequences.

10.
The complementary learning systems framework provides a simple set of principles, derived from converging biological, psychological and computational constraints, for understanding the differential contributions of the neocortex and hippocampus to learning and memory. The central principles are that the neocortex has a low learning rate and uses overlapping distributed representations to extract the general statistical structure of the environment, whereas the hippocampus learns rapidly using separated representations to encode the details of specific events while minimizing interference. In recent years, we have instantiated these principles in working computational models, and have used these models to address human and animal learning and memory findings, across a wide range of domains and paradigms. Here, we review a few representative applications of our models, focusing on two domains: recognition memory and animal learning in the fear-conditioning paradigm. In both domains, the models have generated novel predictions that have been tested and confirmed.
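The learning-rate contrast at the heart of the framework can be demonstrated with a one-line delta rule (a deliberately crude sketch, not the authors' network models): a high learning rate makes a single weight vector overwrite earlier items (fast, hippocampus-like, but interference-prone on shared weights), while a low learning rate blends items into their shared statistical structure (slow, cortex-like).

```python
def learn(items, lr, epochs=1):
    """Train one weight vector sequentially on items with a delta rule."""
    w = [0.0] * len(items[0])
    for _ in range(epochs):
        for item in items:
            # move each weight a fraction lr of the way toward the current item
            w = [wi + lr * (xi - wi) for wi, xi in zip(w, item)]
    return w
```

With lr = 1.0 the vector ends up equal to the last item seen: the first memory is gone, which is why rapid learning needs the separated representations the abstract describes. With lr = 0.1 over many passes, the vector settles near the average of the two items, extracting their shared structure.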

11.
For a long time, the prevailing view was that although human cognition can be regarded as a non-deterministic process of inferential computation, its knowledge representation, model structure, and computational methods differ fundamentally from probability and statistics theory; a wide gulf therefore separated cognitive science from probabilistic and statistical methods, and the two fields developed largely independently. In recent years, with a series of breakthrough results in Bayesian statistical modeling and the continuing discovery of the nature of cognitive processes, the relevance and complementarity of the two fields have become increasingly prominent. Many researchers now hold that cognition approximately follows the principles of probabilistic and statistical inference, and some studies suggest that combining the two may have a profound impact on the development of artificial intelligence. This article systematically surveys the current state of statistical theories of cognition and their applications, and, drawing on the authors' own research, offers a perspective on future developments.
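The core operation of the Bayesian models this survey covers is posterior updating over a hypothesis space; a minimal sketch (toy hypotheses and likelihoods invented for illustration):

```python
def posterior(prior, likelihoods, data):
    """Bayes' rule over a discrete hypothesis space: P(h | d) ∝ P(d | h) P(h)."""
    unnorm = {h: prior[h] * likelihoods[h](data) for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Toy example: is the coin fair, or a two-headed trick coin?
prior = {"fair": 0.5, "trick": 0.5}
likelihoods = {
    "fair": lambda d: 0.5 ** len(d),
    "trick": lambda d: 1.0 if all(x == "H" for x in d) else 0.0,
}
post = posterior(prior, likelihoods, ["H", "H", "H"])
```

Three heads in a row shift belief toward "trick" without ruling out "fair", illustrating the graded, approximately rational inference the survey attributes to cognition.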

12.
Rey A, Perruchet P, Fagot J. Cognition, 2012, 123(1): 180-184.
Influential theories have claimed that the ability for recursion forms the computational core of human language faculty distinguishing our communication system from that of other animals (Hauser, Chomsky, & Fitch, 2002). In the present study, we consider an alternative view on recursion by studying the contribution of associative and working memory processes. After an intensive paired-associate training with visual shapes, we observed that baboons spontaneously ordered their responses in keeping with a recursive, centre-embedded structure. This result suggests that the human ability for recursion might partly if not entirely originate from fundamental processing constraints already present in nonhuman primates and that the critical distinction between animal communication and human language should more likely be found in working memory capacities than in an ability to produce recursive structures per se.

13.
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures, ambiguity and distinctiveness, derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena.
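One plausible reading of the two measures can be sketched in code. The names "ambiguity" and "distinctiveness" come from the paper, but the formulas below (entropy over candidate meanings, and symmetrised KL divergence between the word distributions each meaning predicts) are only an illustrative interpretation, and the toy distributions are invented.

```python
import math

def ambiguity(meaning_probs):
    """Entropy (bits) of the distribution over candidate sentence meanings."""
    return -sum(p * math.log2(p) for p in meaning_probs if p > 0)

def distinctiveness(words_given_m1, words_given_m2):
    """Symmetrised KL divergence between the word distributions two meanings
    predict: high when each meaning is supported by different words."""
    def kl(p, q):
        return sum(p[w] * math.log2(p[w] / q[w]) for w in p if p[w] > 0)
    return 0.5 * (kl(words_given_m1, words_given_m2) + kl(words_given_m2, words_given_m1))
```

On this reading, a pun scores high on both: two meanings stay live (high entropy), and each is backed by a distinct set of supporting words.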

14.
Bilingual memory research in the past decade and, particularly, in the past five years, has developed a range of sophisticated experimental, neuropsychological and computational techniques that have allowed researchers to begin to answer some of the major long-standing questions of the field. We explore bilingual memory along the lines of the conceptual division of language knowledge and organization, on the one hand, and the mechanisms that operate on that knowledge and organization, on the other. Various interactive-activation and connectionist models of bilingual memory that attempt to incorporate both organizational and operational considerations will serve to bridge these two divisions. Much progress has been made in recent years in bilingual memory research, which also serves to illuminate general (language-independent) memory processes.

15.
The hippocampal region, a group of brain structures important for learning and memory, has been the focus of a large number of computational models. These tend to fall into two groups: (1) models of the role of the hippocampal region in incremental learning, which focus on the development of new representations that are sensitive to stimulus regularities and environmental context; (2) models that focus on the role of the hippocampal region in the rapid storage and retrieval of episodic memories. Rather than being in conflict, it is becoming apparent that both approaches are partially correct and might reflect the different functions of substructures of the hippocampal region. Future computational models will help to elaborate how these different substructures interact.

16.
A key component of research on human sentence processing is to characterize the processing difficulty associated with the comprehension of words in context. Models that explain and predict this difficulty can be broadly divided into two kinds, expectation-based and memory-based. In this work, we present a new model of incremental sentence processing difficulty that unifies and extends key features of both kinds of models. Our model, lossy-context surprisal, holds that the processing difficulty at a word in context is proportional to the surprisal of the word given a lossy memory representation of the context—that is, a memory representation that does not contain complete information about previous words. We show that this model provides an intuitive explanation for an outstanding puzzle involving interactions of memory and expectations: language-dependent structural forgetting, where the effects of memory on sentence processing appear to be moderated by language statistics. Furthermore, we demonstrate that dependency locality effects, a signature prediction of memory-based theories, can be derived from lossy-context surprisal as a special case of a novel, more general principle called information locality.  相似文献
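The core quantity, surprisal under a lossy memory of the context, can be sketched with a crude noise model (a toy: the context word either survives in memory or is erased, and the probabilities below are invented, not the paper's fitted language model):

```python
import math

def lossy_surprisal(word, context, bigram, unigram, retained):
    """Surprisal (bits) of `word`: condition on the previous word only if it
    survives in the lossy memory representation; otherwise fall back to the
    word's marginal probability."""
    if context and context[-1] in retained:
        p = bigram.get((context[-1], word), 1e-6)
    else:
        p = unigram.get(word, 1e-6)
    return -math.log2(p)

# Invented toy probabilities: "dog" is likely after "the", rare out of context.
bigram = {("the", "dog"): 0.4}
unigram = {"dog": 0.05}
```

When the context word is forgotten, the predictive boost it provided is lost and surprisal (hence predicted difficulty) rises, which is the mechanism behind the structural-forgetting and locality results the abstract describes.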

17.
Batchelder EO. Cognition, 2002, 83(2): 167-206.
Prelinguistic infants must find a way to isolate meaningful chunks from the continuous streams of speech that they hear. BootLex, a new model which uses distributional cues to build a lexicon, demonstrates how much can be accomplished using this single source of information. This conceptually simple probabilistic algorithm achieves significant segmentation results on various kinds of language corpora - English, Japanese, and Spanish; child- and adult-directed speech, and written texts; and several variations in coding structure - and reveals which statistical characteristics of the input have an influence on segmentation performance. BootLex is then compared, quantitatively and qualitatively, with three other groups of computational models of the same infant segmentation process, paying particular attention to functional characteristics of the models and their similarity to human cognition. Commonalities and contrasts among the models are discussed, as well as their implications both for theories of the cognitive problem of segmentation itself, and for the general enterprise of computational cognitive modeling.

18.
A capacity theory of comprehension: individual differences in working memory.
A theory of the way working memory capacity constrains comprehension is proposed. The theory proposes that both processing and storage are mediated by activation and that the total amount of activation available in working memory varies among individuals. Individual differences in working memory capacity for language can account for qualitative and quantitative differences among college-age adults in several aspects of language comprehension. One aspect is syntactic modularity: The larger capacity of some individuals permits interaction among syntactic and pragmatic information, so that their syntactic processes are not informationally encapsulated. Another aspect is syntactic ambiguity: The larger capacity of some individuals permits them to maintain multiple interpretations. The theory is instantiated as a production system model in which the amount of activation available to the model affects how it adapts to the transient computational and storage demands that occur in comprehension.

19.
Individual differences in reasoning: implications for the rationality debate?
Stanovich KE, West RF. The Behavioral and Brain Sciences, 2000, 23(5): 645-665; discussion 665-726.

20.
Computational models of lexical semantics, such as latent semantic analysis, can automatically generate semantic similarity measures between words from statistical redundancies in text. These measures are useful for experimental stimulus selection and for evaluating a model’s cognitive plausibility as a mechanism that people might use to organize meaning in memory. Although humans are exposed to enormous quantities of speech, practical constraints limit the amount of data that many current computational models can learn from. We follow up on previous work evaluating a simple metric of pointwise mutual information. Controlling for confounds in previous work, we demonstrate that this metric benefits from training on extremely large amounts of data and correlates more closely with human semantic similarity ratings than do publicly available implementations of several more complex models. We also present a simple tool for building simple and scalable models from large corpora quickly and efficiently.
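Pointwise mutual information itself is a one-formula metric, PMI(a, b) = log2 P(a, b) / (P(a) P(b)); a minimal corpus-based sketch (a toy with windowed co-occurrence counting, not the paper's large-scale tool):

```python
import math
from collections import Counter

def pmi(corpus, w1, w2, window=2):
    """PMI between w1 and w2, with co-occurrence counted inside a
    forward sliding window over each sentence."""
    word_counts, pair_count, total, pair_total = Counter(), 0, 0, 0
    for sent in corpus:
        words = sent.split()
        word_counts.update(words)
        total += len(words)
        for i, a in enumerate(words):
            for b in words[i + 1 : i + 1 + window]:
                pair_total += 1
                if {a, b} == {w1, w2}:
                    pair_count += 1
    if pair_count == 0:
        return float("-inf")  # never co-occurred within the window
    p_pair = pair_count / pair_total
    return math.log2(p_pair / ((word_counts[w1] / total) * (word_counts[w2] / total)))

corpus = ["the cat sat", "the cat ran", "a dog ran"]
```

PMI rewards pairs that co-occur more often than their individual frequencies predict, which is why it scales so gracefully: the only state needed is counts.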


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号