Information Density and Dependency Length as Complementary Cognitive Models
Authors: Michael Xavier Collins
Affiliation: Norfolk, VA, USA
Abstract: Certain English constructions alternate between two word orders:
  1. a. I looked up the number. b. I looked the number up.
  2. a. He is often at the office. b. He often is at the office.
This study investigates the relationship between syntactic alternations and processing difficulty. What cognitive mechanisms are responsible for our attraction to some alternations and our aversion to others? This article reviews three psycholinguistic models of the relationship between syntactic alternations and processing: Maximum Per-Word Surprisal (building on the ideas of Hale, in Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, Pittsburgh, PA, pp 159–166, 2001), Uniform Information Density (UID) (Levy and Jaeger in Adv Neural Inf Process Syst 19:849–856, 2007; inter alia), and Dependency Length Minimization (DLM) (Gildea and Temperley in Cognit Sci 34:286–310, 2010). Each theory makes predictions about which alternations native speakers should favor. Subjects were recruited using Amazon Mechanical Turk and asked to judge which of two competing syntactic alternations sounded more natural. Logistic regression analysis of the resulting data suggests that both UID and DLM are powerful predictors of human preferences. We conclude that alternations that approach uniform information density and minimize dependency length are easier to process than those that do not.
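
To make the two supported metrics concrete, the following minimal Python sketch scores the alternations in example 1 under UID (operationalized here as the variance of per-word surprisal; lower means more uniform) and DLM (total head-dependent distance; lower means shorter). The surprisal values and dependency arcs are hypothetical placeholders, not figures from the study; a real analysis would estimate surprisal from a trained language model and derive arcs from a syntactic parse.

  from statistics import pvariance

  def uid_cost(surprisals):
      # UID proxy: population variance of per-word surprisal;
      # lower variance = information spread more uniformly.
      return pvariance(surprisals)

  def dependency_length(arcs):
      # DLM metric: total linear distance between heads and dependents.
      return sum(abs(head - dep) for head, dep in arcs)

  # Hypothetical per-word surprisal values in bits, -log2 P(word | context).
  # 1a: "I looked up the number."   1b: "I looked the number up."
  surprisals = {"1a": [2.0, 6.5, 4.0, 1.5, 5.0],
                "1b": [2.0, 6.5, 2.5, 5.0, 7.0]}

  # Dependency arcs as (head, dependent) word indices, 0-based.
  # In 1a the particle "up" is adjacent to "looked";
  # in 1b it is three words away.
  arcs = {"1a": [(1, 0), (1, 2), (4, 3), (1, 4)],
          "1b": [(1, 0), (3, 2), (1, 3), (1, 4)]}

  for variant in ("1a", "1b"):
      print(variant,
            "UID variance:", round(uid_cost(surprisals[variant]), 2),
            "dependency length:", dependency_length(arcs[variant]))

Under these illustrative numbers both metrics favor 1a; with a long object noun phrase the dependency-length ranking can flip, since separating "looked" from "up" by a long constituent stretches the verb-particle dependency.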
This article is indexed in SpringerLink and other databases.