Similar Documents
5 similar documents found.
1.
This study examines French-learning infants’ sensitivity to grammatical non-adjacent dependencies involving subject-verb agreement (e.g., le/les garçons lit/lisent ‘the boy(s) read(s)’), where number is audible both on the determiner of the subject DP and on the agreeing verb, and the dependency spans two syntactic phrases. A further particularity of this subsystem of French subject-verb agreement is that number marking on the verb is phonologically highly irregular. Despite this challenge, headturn preference procedure (HPP) results for 24- and 18-month-olds demonstrate knowledge of both number dependencies: between the singular determiner le and non-adjacent singular verb forms, and between the plural determiner les and non-adjacent plural verb forms. A control experiment suggests that the infants are responding to known verb forms, not to phonological regularities. Given the paucity of such forms in the adult input, documented through a corpus study, these results are interpreted as evidence that 18-month-olds can extract complex patterns across a range of morphophonologically inconsistent and infrequent items in natural language.

2.
3.
A central issue in cognitive neuroscience today concerns how distributed neural networks in the brain that are used in language learning and processing can also be involved in non-linguistic cognitive sequence learning. This issue is informed by a wealth of functional neurophysiology studies of sentence comprehension, along with a number of recent studies that examined the brain processes involved in learning non-linguistic sequences, or artificial grammar learning (AGL). The current research attempts to reconcile these data with several current neurophysiologically based models of sentence processing, through the specification of a neural network model whose architecture is constrained by the known cortico-striato-thalamo-cortical (CSTC) neuroanatomy of the human language system. The challenge is to develop simulation models that take into account constraints both from neuroanatomical connectivity and from functional imaging data, and that can actually learn and perform the same kinds of language and artificial-syntax tasks. In our proposed model, structural cues encoded in a recurrent cortical network in BA47 activate a CSTC circuit to modulate the flow of lexical-semantic information from BA45 to an integrated representation of meaning at the sentence level in BA44/6. During language acquisition, corticostriatal plasticity allows closed-class structure to drive thematic role assignment. From the AGL perspective, repetitive internal structure in the AGL strings is encoded in BA47 and activates the CSTC circuit to predict the next element in the sequence. Simulation results are presented for Caplan et al.'s test of syntactic comprehension [Caplan, D., Baker, C., & Dehaut, F. (1985). Syntactic determinants of sentence comprehension in aphasia. Cognition, 21, 117-175] and for Gomez and Schvaneveldt's artificial grammar learning experiments [Gomez, R. L., & Schvaneveldt, R. W. (1994). What is learned from artificial grammars? Transfer tests of simple association. Journal of Experimental Psychology: Learning, Memory and Cognition, 20, 396-410]. These results are discussed in the context of a brain architecture for learning grammatical structure for multiple natural languages, as well as non-linguistic sequences.
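At its core, the architecture described above is a recurrent "cortical" network whose sequence-sensitive states drive a plastic readout that assigns thematic roles. The following is a minimal reservoir-style sketch of that idea, not the authors' implementation; all layer sizes, variable names, and the delta-rule readout are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a fixed recurrent network encodes the
# sequence of input tokens (standing in for closed-class structural cues), and a
# trainable readout (a stand-in for corticostriatal plasticity) maps the final
# state to thematic-role labels. Sizes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res, n_out = 20, 200, 6            # token vocabulary, recurrent units, role labels
W_in = rng.normal(0.0, 1.0, (n_res, n_in))  # fixed input projection
W_rec = rng.normal(0.0, 1.0, (n_res, n_res))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))  # keep the dynamics stable
W_out = np.zeros((n_out, n_res))            # plastic readout weights

def run_sentence(token_ids):
    """Return the recurrent state after presenting a sequence of token ids."""
    x = np.zeros(n_res)
    for t in token_ids:
        u = np.zeros(n_in)
        u[t] = 1.0
        x = np.tanh(W_in @ u + W_rec @ x)
    return x

def train_readout(sentences, role_targets, lr=0.05, epochs=50):
    """Delta-rule learning of the readout; only these weights are plastic."""
    global W_out
    for _ in range(epochs):
        for toks, target in zip(sentences, role_targets):
            x = run_sentence(toks)
            y = W_out @ x
            W_out += lr * np.outer(target - y, x)
```

In a sketch like this, the fixed recurrent weights play the role of the structure-encoding cortical network, while only the readout is trained, which is the analogy to restricting plasticity to the corticostriatal pathway.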

4.
Hsu, A. S., Chater, N., & Vitányi, P. M. (2011). Cognition, 120(3), 380-390.
There is much debate over the degree to which language learning is governed by innate, language-specific biases or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels. We outline a novel theoretical result showing that it is possible to learn the exact generative model underlying a wide class of languages purely from observing samples of the language. We then describe a recently proposed practical framework that quantifies natural-language learnability, allowing specific learnability predictions to be made for the first time. In previous work, this framework was used to make learnability predictions for a wide variety of linguistic constructions whose learnability has been much debated. Here, we present a new experiment that tests these learnability predictions. We find that our experimental results support the possibility that these linguistic constructions are acquired probabilistically from cognition-general principles.
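The learnability framework referred to here is simplicity-based: roughly, a restriction on the grammar is learnable if stating it saves more code length on the observed data than the restriction itself costs to state. The toy comparison below illustrates that trade-off; it is an assumption-laden sketch, not the authors' actual framework, and the bit costs, corpus size, and probabilities are made up for illustration.

```python
# Toy MDL-style comparison of two hypotheses about a construction (illustrative only).
import math

def code_length_bits(prob):
    """Ideal code length, in bits, for an event with the given probability."""
    return -math.log2(prob)

def total_cost(grammar_bits, event_probs):
    """Description length of the grammar plus the data encoded under that grammar."""
    return grammar_bits + sum(code_length_bits(p) for p in event_probs)

# Hypothetical corpus: 1000 uses of a verb, none in the disputed construction.
n_uses = 1000
# H1: construction allowed with probability 0.05, so each attested use costs -log2(0.95).
h1 = total_cost(grammar_bits=10, event_probs=[0.95] * n_uses)
# H2: construction explicitly disallowed (extra grammar bits, but the data are cheaper).
h2 = total_cost(grammar_bits=30, event_probs=[1.0] * n_uses)

print(f"no-restriction hypothesis: {h1:.1f} bits; restriction hypothesis: {h2:.1f} bits")
# With enough data and no attested uses, the restriction pays for itself, i.e. it is learnable.
```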

5.
Bod, R. (2009). Cognitive Science, 33(5), 752-793.
While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning based on the following idea: if a language learner does not know which phrase-structure trees should be assigned to initial sentences, s/he (implicitly) allows for all possible trees and lets linguistic experience decide which is the "best" tree for each sentence. The best tree is obtained by maximizing "structural analogy" between a sentence and previous sentences, formalized as the most probable shortest combination of subtrees from all trees of previous sentences. Corpus-based experiments with this model on the Penn Treebank and the CHILDES database indicate that it can learn both exemplar-based and rule-based aspects of language, ranging from phrasal verbs to auxiliary fronting. By having learned the syntactic structures of sentences, we have also learned the grammar implicit in those structures, which can in turn be used to produce new sentences. We show that our model mimics children's language development from item-based constructions to abstract constructions, and that it can simulate some of the errors children make in producing complex questions.
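Because the model treats rules and exemplars alike as (partial) trees, its basic operations are extracting tree fragments from previously analyzed sentences and ranking derivations that combine them, preferring the shortest derivation and then the most probable one. The sketch below illustrates those two operations on a toy treebank; it is an illustrative reconstruction under my own assumptions about tree encoding and scoring, not Bod's implementation.

```python
# Illustrative sketch of fragment extraction and "shortest, then most probable"
# derivation scoring. Trees are nested tuples: (label, child, child, ...), with
# string leaves for words.
from collections import Counter
from itertools import product
import math

def fragments_at(tree):
    """All fragments rooted at this node: keep the node's label and, for each
    child, either stop at the child's label/word or expand it further."""
    label, *kids = tree
    child_options = []
    for kid in kids:
        if isinstance(kid, str):
            child_options.append([kid])                        # words stay as-is
        else:
            child_options.append([kid[0]] + list(fragments_at(kid)))
    for combo in product(*child_options):
        yield (label, *combo)

def all_fragments(tree):
    """Fragments rooted at every internal node of the tree."""
    if isinstance(tree, str):
        return
    yield from fragments_at(tree)
    for kid in tree[1:]:
        yield from all_fragments(kid)

# Toy "previous sentences".
treebank = [
    ("S", ("NP", "she"), ("VP", ("V", "sees"), ("NP", "it"))),
    ("S", ("NP", "he"), ("VP", ("V", "sleeps"))),
]
counts = Counter(frag for t in treebank for frag in all_fragments(t))
root_totals = Counter()
for frag, c in counts.items():
    root_totals[frag[0]] += c

def derivation_score(fragments):
    """Rank a derivation (a list of treebank fragments): fewer fragments first,
    then higher probability (relative fragment frequency per root label)."""
    logp = sum(math.log(counts[f] / root_totals[f[0]]) for f in fragments)
    return (len(fragments), -logp)     # smaller tuple = better derivation

# A one-fragment derivation of a remembered sentence beats a multi-fragment one.
best = min([
    [("S", ("NP", "she"), ("VP", ("V", "sees"), ("NP", "it")))],
    [("S", "NP", "VP"), ("NP", "she"), ("VP", ("V", "sees"), ("NP", "it"))],
], key=derivation_score)
```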
