

Grounding co-occurrence: Identifying features in a lexical co-occurrence model of semantic memory
Authors: Kevin Durda, Lori Buchanan, Richard Caron
Affiliation: (1) Department of Psychology, Social Science Centre, University of Western Ontario, London, ON N6A 5C2, Canada; (2) University of Toronto, Scarborough, Ontario, Canada; (3) University of Wisconsin, Madison, Wisconsin
Abstract: Lexical co-occurrence models of semantic memory represent word meaning by vectors in a high-dimensional space. These vectors are derived from word usage, as found in a large corpus of written text. Typically, these models are fully automated, an advantage over models whose semantic representations are based on human judgments (e.g., feature-based models). A common criticism of co-occurrence models is that the representations are not grounded: Concepts exist only relative to each other in the space produced by the model. It has been claimed that feature-based models offer an advantage in this regard. In this article, we take a step toward grounding a co-occurrence model. A feed-forward neural network is trained using backpropagation to provide a mapping from co-occurrence vectors to feature norms collected from subjects. We show that this network is able to retrieve the features of a concept from its co-occurrence vector with high accuracy and is able to generalize this ability to produce an appropriate list of features from the co-occurrence vector of a novel concept.
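The abstract describes a feed-forward network trained with backpropagation to map co-occurrence vectors onto feature-norm vectors. The sketch below illustrates that general setup only; it is not the authors' implementation. The dimensions, the randomly generated training data, the single hidden layer, the learning rate, and all variable names (X, Y, W1, W2, and so on) are illustrative assumptions.

# Minimal sketch (not the authors' implementation): a single-hidden-layer
# feed-forward network trained with backpropagation to map high-dimensional
# co-occurrence vectors onto feature-norm activations. All dimensions, data,
# and hyperparameters below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 500-dimensional co-occurrence vectors, 100 binary features.
n_in, n_hidden, n_out = 500, 50, 100

# Placeholder training data: co-occurrence vectors X and feature-norm targets Y.
X = rng.normal(size=(200, n_in))            # one row per concept
Y = (rng.random((200, n_out)) < 0.1) * 1.0  # sparse binary feature vectors

# Small random weights for the two layers.
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(500):
    # Forward pass: co-occurrence vector -> hidden layer -> feature activations.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Backpropagation of squared error through the sigmoid units.
    err_out = (y_hat - Y) * y_hat * (1 - y_hat)
    err_hid = (err_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ err_out / len(X)
    b2 -= lr * err_out.mean(axis=0)
    W1 -= lr * X.T @ err_hid / len(X)
    b1 -= lr * err_hid.mean(axis=0)

# A novel concept's co-occurrence vector can then be mapped to a ranked
# feature list by sorting (or thresholding) the network's output activations.
novel = rng.normal(size=(1, n_in))
features = sigmoid(sigmoid(novel @ W1 + b1) @ W2 + b2)
top_features = np.argsort(-features[0])[:10]  # indices of the 10 strongest features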
This article is indexed in SpringerLink and other databases.