Connectionist and Memory-Array Models of Artificial Grammar Learning |
| |
Authors: | Zoltan Dienes |
| |
Abstract: | Subjects exposed to strings of letters generated by a finite state grammar can later classify grammatical and nongrammatical test strings, even though they cannot adequately say what the rules of the grammar are (e.g., Reber, 1989). The MINERVA 2 (Hintzman, 1986) and Medin and Schaffer (1978) memory-array models and a number of connectionist autoassociator models are tested against experimental data by deriving largely parameter-free predictions from the models of the rank order of classification difficulty of test strings. The importance of different assumptions regarding the coding of features (How should the absence of a feature be coded? Should single letters or digrams be coded?), the learning rule used (Hebb rule vs. delta rule), and the connectivity (Should features be predicted only by previous features in the string, or by all features simultaneously?) is investigated by determining the performance of the models with and without each assumption. Only one class of connectionist model (the simultaneous delta rule) passes all the tests. It is shown that this class of model can be regarded as abstracting a set of representative but incomplete rules of the grammar. |
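To make the "simultaneous delta rule" autoassociator concrete, the sketch below shows one minimal way such a model could be set up: every feature unit is connected to every other, the delta rule adjusts weights so that each feature is predicted from all the others at once, and test strings are scored by reconstruction error (lower error suggesting a more "grammatical" string). The alphabet, string length, one-hot coding of letter positions, learning rate, and example strings are all illustrative assumptions, not the paper's actual simulation details.

```python
import numpy as np

# Hypothetical alphabet and fixed string length (assumptions for illustration).
LETTERS = "MTVRX"
STRING_LENGTH = 6
N_FEATURES = STRING_LENGTH * len(LETTERS)

def encode(string):
    """One-hot code each letter at each position; absent features are coded
    as 0 (one of the coding assumptions the paper varies)."""
    vec = np.zeros(N_FEATURES)
    for pos, letter in enumerate(string[:STRING_LENGTH]):
        vec[pos * len(LETTERS) + LETTERS.index(letter)] = 1.0
    return vec

def train(strings, epochs=50, lr=0.05):
    """Delta-rule learning on a fully connected autoassociator: weights move
    to reduce the error between each unit's actual activation and the value
    predicted simultaneously from all other units."""
    W = np.zeros((N_FEATURES, N_FEATURES))
    for _ in range(epochs):
        for s in strings:
            x = encode(s)
            err = x - W @ x          # prediction error for every feature
            W += lr * np.outer(err, x)
            np.fill_diagonal(W, 0.0)  # a unit does not predict itself
    return W

def reconstruction_error(W, string):
    """Lower error = the string better fits the learned regularities, so it
    would be ranked as easier to classify as grammatical."""
    x = encode(string)
    return float(np.sum((x - W @ x) ** 2))

# Illustrative training and test strings (not from the actual grammar).
training = ["MTVRX", "MTTVX", "VXVRX"]
W = train(training)
for test in ["MTVRX", "XRVTM"]:
    print(test, reconstruction_error(W, test))
```

Ranking test strings by this reconstruction error gives the kind of parameter-free rank-order prediction of classification difficulty that the abstract describes, without fitting the model's output to the human data.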
| |
Keywords: | |
|