

Learning representations of wordforms with recurrent networks: Comment on Sibley, Kello, Plaut, & Elman (2008)
Authors: Jeffrey S. Bowers, Colin J. Davis
Institutions: Department of Experimental Psychology, University of Bristol; Department of Psychology, Royal Holloway, University of London.
Abstract:Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we agree this is a critical problem with many slot‐coding schemes, we question whether the sequence encoder model addresses this limitation, and we highlight another deficiency of the model. We conclude that alternative theories are more promising.
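The slot-coding limitation described in the abstract can be illustrated with a minimal sketch (the representation below is a generic letter-in-position scheme, not the specific encoding of any model discussed in the commentary):

```python
def slot_code(word):
    """Slot-coding: represent a word as a set of letter-in-position features,
    e.g. CAT -> {('C', 0), ('A', 1), ('T', 2)}."""
    return {(letter, position) for position, letter in enumerate(word)}

def overlap(a, b):
    """Count shared letter-in-position features between two slot-coded words."""
    return len(slot_code(a) & slot_code(b))

# CAT and TOMCAT contain the same letters C, A, T, but in different
# absolute positions, so strict slot-coding detects no overlap at all.
print(overlap("CAT", "TOMCAT"))  # 0 shared features

# By contrast, words that align position-by-position do overlap.
print(overlap("CAT", "CAP"))  # 2 shared features (C-in-0, A-in-1)
```

This is the alignment problem in miniature: knowledge tied to absolute letter positions cannot generalize to the same letter sequence occurring elsewhere in a word.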
Keywords: Slot-coding, Connectionism, Alignment problem, Word identification, Symbols, Position-invariance
This article is indexed in PubMed and other databases.