Encoder: a connectionist model of how learning to visually encode fixated text images improves reading fluency |
| |
Authors: | Gale L. Martin |
| |
Affiliation: | Motorola Corporation, Austin, TX, USA. gale_l_martin@yahoo.com |
| |
Abstract: | This article proposes that visual encoding learning improves reading fluency by widening the span over which letters are recognized from a fixated text image, so that fewer fixations are needed to cover a text line. Encoder is a connectionist model that learns to convert images, like the fixated text images human readers encode, into the corresponding letter sequences. The computational theory of classification learning predicts that fixated text-image size makes this learning difficult, but that reducing image variability and biasing learning should help. Encoder confirms these predictions. It fails to learn as image size increases, but it achieves humanlike visual encoding accuracy when image variability is reduced by regularities in fixation positions and letter sequences, and when learning is biased to discover mapping functions based on the sequential, componential structure of text. After training, Encoder exhibits many humanlike text familiarity effects. |
| |
Keywords: | |
|
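The abstract describes a network that maps a fixated text image to the letter sequence falling within its recognition span. Below is a minimal, illustrative sketch of that input-output mapping, not the author's Encoder implementation: a one-hidden-layer network with one softmax output group per letter position in the span, trained by gradient descent on synthetic stand-in data. All names, dimensions, the span width, and the data are assumptions chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

IMAGE_DIM = 20 * 60   # assumed pixel grid for one flattened fixated text image
SPAN = 9              # assumed number of letter positions decoded per fixation
ALPHABET = 27         # 26 letters plus a "blank"/space class
HIDDEN = 128

# One-hidden-layer network with SPAN softmax output groups (one per letter slot).
W1 = rng.normal(0.0, 0.01, (IMAGE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.01, (HIDDEN, SPAN * ALPHABET))
b2 = np.zeros(SPAN * ALPHABET)

def forward(x):
    """Map flattened fixated images to per-position letter distributions."""
    h = np.tanh(x @ W1 + b1)                                 # hidden code of the image
    logits = (h @ W2 + b2).reshape(-1, SPAN, ALPHABET)
    logits -= logits.max(axis=-1, keepdims=True)             # numerical stability
    p = np.exp(logits)
    return h, p / p.sum(axis=-1, keepdims=True)

def train_step(x, y, lr=0.1):
    """One gradient step on the summed per-position cross-entropy loss."""
    global W1, b1, W2, b2
    h, p = forward(x)
    n = x.shape[0]
    t = np.zeros_like(p)                                      # one-hot letter targets
    t[np.arange(n)[:, None], np.arange(SPAN)[None, :], y] = 1.0
    d_logits = (p - t).reshape(n, SPAN * ALPHABET) / n        # softmax + CE gradient
    dW2, db2 = h.T @ d_logits, d_logits.sum(0)
    dh = (d_logits @ W2.T) * (1.0 - h ** 2)                   # back through tanh
    dW1, db1 = x.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float(-np.sum(t * np.log(p + 1e-12)) / n)

# Synthetic stand-in data: random "images" paired with random letter sequences.
# A real experiment would render text lines and sample fixation windows instead.
x = rng.normal(size=(32, IMAGE_DIM))
y = rng.integers(0, ALPHABET, size=(32, SPAN))
for step in range(5):
    print(f"step {step}: loss {train_step(x, y):.3f}")
```

In this toy setup, widening SPAN stands in for the wider recognition span the abstract discusses; the article's claim is that such a mapping becomes learnable at humanlike spans only when input variability is constrained (regular fixation positions and letter sequences) and learning is biased toward text's sequential, componential structure.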