121.
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this manner takes the form of a directed weighted graph, whose nodes are recursively (hierarchically) defined patterns over the elements of the input stream. We evaluated the model in seventeen experiments, grouped into five studies, which examined, respectively, (a) the generative ability of a grammar learned from a corpus of natural language, (b) the characteristics of the learned representation, (c) sequence segmentation and chunking, (d) artificial grammar learning, and (e) certain types of structure dependence. The model's performance largely vindicates our design choices, suggesting that progress in modeling language acquisition can be made on a broad front, ranging from issues of generativity to the replication of human experimental findings, by bringing biological and computational considerations, as well as lessons from prior efforts, to bear on the modeling approach.
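The data structure the abstract describes (a directed weighted graph whose nodes are recursively defined patterns over input elements) can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the bigram-chunking rule, and the threshold are invented for illustration only.

```python
# Minimal sketch (not the authors' system): a directed weighted graph over
# input elements. Edge weights count observed transitions; transitions seen
# often enough are promoted to composite (hierarchical) pattern nodes.
from collections import defaultdict

class PatternGraph:
    def __init__(self, chunk_threshold=2):
        self.edges = defaultdict(int)   # (node, node) -> transition count
        self.chunks = {}                # (node, node) -> composite node
        self.chunk_threshold = chunk_threshold

    def observe(self, stream):
        """Incrementally add a token stream, chunking frequent bigrams."""
        prev = None
        for tok in stream:
            if prev is not None:
                self.edges[(prev, tok)] += 1
                if self.edges[(prev, tok)] >= self.chunk_threshold:
                    # promote a frequent transition to a hierarchical pattern
                    self.chunks[(prev, tok)] = (prev, tok)
            prev = tok

    def parse(self, tokens):
        """Greedily rewrite adjacent tokens using learned chunk nodes."""
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in self.chunks:
                out.append(self.chunks[(tokens[i], tokens[i + 1])])
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        return out

g = PatternGraph()
g.observe("the dog ran , the dog sat".split())
print(g.parse("the dog ran".split()))  # the repeated bigram is chunked
```

Here "the dog" recurs, so it becomes a single composite node; a real model would apply the same promotion recursively, so that chunks themselves become elements of higher-level patterns.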
122.
According to usage-based approaches to language acquisition, linguistic knowledge is represented in the form of constructions (form-meaning pairings) at multiple levels of abstraction and complexity. The emergence of syntactic knowledge is assumed to be a result of the gradual abstraction of lexically specific and item-based linguistic knowledge. In this article, we explore how the gradual emergence of a network consisting of constructions at varying degrees of complexity can be modeled computationally. Linguistic knowledge is learned by observing natural language utterances in an ambiguous context. To determine the meanings of constructions starting from ambiguous contexts, we rely on the principle of cross-situational learning. While this mechanism has been implemented in several computational models, these models typically focus on learning mappings between words and referents. In contrast, we show how cross-situational learning can be applied consistently in our model to learn form-meaning correspondences that go beyond such simple word-referent mappings.
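The core of cross-situational learning is that each utterance is ambiguous in isolation, but the meanings consistent with a form across all of its occurrences shrink toward the correct one. A minimal sketch of the word-referent case (the forms, meanings, and intersection rule here are illustrative assumptions, not the article's model, which extends the principle to larger constructions):

```python
# Hedged sketch of cross-situational learning: each form is observed with a
# set of candidate meanings; intersecting the candidate sets across
# situations narrows each form's hypothesis space.
def cross_situational(observations):
    """observations: list of (form, set_of_candidate_meanings) pairs."""
    hypotheses = {}
    for form, context in observations:
        if form in hypotheses:
            hypotheses[form] &= context   # keep only meanings seen every time
        else:
            hypotheses[form] = set(context)
    return hypotheses

obs = [
    ("ball", {"BALL", "DOG", "RED"}),
    ("ball", {"BALL", "CAT"}),
    ("dog",  {"DOG", "BALL"}),
    ("dog",  {"DOG", "RUN"}),
]
print(cross_situational(obs))  # "ball" -> {"BALL"}, "dog" -> {"DOG"}
```

Two ambiguous exposures per form suffice here; real corpora need graded (probabilistic) versions of the same intersection idea, and the article's point is that the mechanism also applies when "forms" are multi-word constructions rather than single words.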
123.
Laakso, A., & Calvo, P. (2011). Cognitive Science, 35(7), 1243-1281.
Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating that a single statistical mechanism can mimic the apparent discovery of structural regularities, beyond the segmentation of speech. We argue that our results undermine one argument for the MOM hypothesis.
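The statistical segmentation mechanism in (a) is often modeled with forward transitional probabilities: word boundaries fall where the probability of the next syllable drops. A minimal sketch under that assumption (the abstract's studies use neural networks; the syllable vocabulary and threshold below are invented):

```python
# Sketch of a purely statistical segmentation mechanism: compute forward
# transitional probabilities TP(a -> b) and posit a word boundary wherever
# the TP falls below a threshold. Not the network model used in the study.
from collections import Counter

def transitional_probs(syllables):
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, threshold):
    """Insert a word boundary wherever TP(a -> b) drops below threshold."""
    tp = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Continuous stream built from the "words" badi, gutu, piro in varied order
stream = "ba di gu tu pi ro ba di pi ro gu tu ba di gu tu pi ro".split()
print(segment(stream, 1.0))  # within-word TPs are 1.0; boundaries are lower
```

Within each invented word the TP is exactly 1.0, while across word boundaries it is lower because word order varies, so thresholding at 1.0 recovers the words exactly in this toy stream.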
124.
As analysts, we strive to say what we mean, which involves understanding the other person's communication, finding the appropriate form of words to articulate what we have understood, and expressing this in a tone of voice that can be heard. Meaning what we say refers to the authenticity of our response: that what we say is sincere. My theme touches on different ways of saying what we mean and how these can affect the meaning of what we say. Some of the issues are aesthetic, some grammatical. I consider how some sentence structures close off communication while others open it.
125.
Lai, J., & Poletiek, F. H. (2011). Cognition, (2), 265-273.
A theoretical debate in artificial grammar learning (AGL) concerns the learnability of hierarchical structures. Recent studies using an AnBn grammar draw conflicting conclusions (Bahlmann & Friederici, 2006; de Vries et al., 2008). We argue that two conditions crucially affect the learning of AnBn structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and a staged input. In two AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results may help explain how complex natural structures are learned from exemplars.
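The two manipulations the abstract names, 0-LoE exemplars and staged input, can be made concrete with a small generator. This is an illustrative sketch (function names, indexing scheme, and per-level counts are assumptions, not the study's materials): a 0-LoE exemplar is a plain "A B" pair, and each additional level of embedding nests one more A...B pair inside.

```python
# Hedged sketch: generate center-embedded AnBn exemplars by level of
# embedding (LoE), and build a staged training set ordered from simple
# (0-LoE) to complex, as the abstract's conditions suggest.
def anbn(loe):
    """LoE 0 -> ['A1', 'B1']; LoE 1 -> ['A1', 'A2', 'B2', 'B1']; etc."""
    n = loe + 1
    a = [f"A{i}" for i in range(1, n + 1)]
    b = [f"B{i}" for i in range(n, 0, -1)]   # mirrored: center-embedding
    return a + b

def staged_input(max_loe, per_level=2):
    """Order exemplars from 0-LoE upward, per_level items per level."""
    return [anbn(l) for l in range(max_loe + 1) for _ in range(per_level)]

print(anbn(1))            # ['A1', 'A2', 'B2', 'B1']
print(staged_input(1))    # two 0-LoE items before any 1-LoE item
```

The indices make the long-distance dependencies visible: A1 pairs with B1 across the embedded A2-B2, which is exactly the structure whose learnability is debated.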
126.
I show how a conversational process that takes simple, intuitively meaningful steps may be understood as a sophisticated computation that derives the richly detailed, complex representations implicit in our knowledge of language. To develop the account, I argue that natural language is structured in a way that lets us formalize grammatical knowledge precisely in terms of rich primitives of interpretation. Primitives of interpretation can be correctly viewed intentionally, as explanations of our choices of linguistic actions; the model therefore fits our intuitions about meaning in conversation. Nevertheless, interpretations for complex utterances can be built from these primitives by simple operations of grammatical derivation. In bridging analyses of meaning at semantic and symbol-processing levels, this account underscores the fundamental place for computation in the cognitive science of language use.
127.
128.
In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed to positive examples. The objective of this study was to investigate whether brain regions related to language processing overlap with the brain regions activated by the grammaticality classification task used in the present study. Recent meta-analyses of functional neuroimaging studies indicate that syntactic processing is related to the left inferior frontal gyrus (Brodmann's areas 44 and 45) or Broca's region. In the present study, we observed that artificial grammaticality violations activated Broca's region in all participants. This observation lends some support to the suggestions that artificial grammar learning represents a model for investigating aspects of language learning in infants [TICS 4 (2000) 178] and adults [Proceedings of the National Academy of Sciences of the United States of America 99 (2002) 529].
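An artificial regular grammar of the kind used in such studies can be represented as a finite-state machine that both generates well-formed strings and classifies test strings. The transition table below is a Reber-style machine invented for illustration; the study's actual grammar may differ.

```python
# Sketch of an artificial regular grammar as a finite-state machine.
# Generation walks the machine to the accepting state; grammaticality
# classification simulates the machine over a candidate string.
import random

# state -> list of (symbol, next_state); state 5 (no transitions) accepts
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
    5: [],
}

def generate(rng=random):
    """Produce a well-formed string by a random walk to the accept state."""
    state, out = 0, []
    while GRAMMAR[state]:
        sym, state = rng.choice(GRAMMAR[state])
        out.append(sym)
    return "".join(out)

def grammatical(s):
    """Classify a string by simulating the (deterministic) automaton."""
    state = 0
    for ch in s:
        nxt = dict(GRAMMAR[state]).get(ch)  # symbols are unique per state
        if nxt is None:
            return False                    # violation: no such transition
        state = nxt
    return not GRAMMAR[state]               # must end in the accepting state

print(grammatical("TXS"))   # a shortest well-formed string
print(grammatical("TP"))    # violation: no P transition from state 1
```

In the implicit-acquisition paradigm, only `generate` output (positive examples) is shown during exposure; `grammatical` corresponds to the classification participants must later perform without having seen the rules.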
129.
The ancient text of the 《周易》 (Zhouyi, or Book of Changes) is a rare and valuable resource for the study of Old Chinese, yet the linguistics community has paid it insufficient attention and neglected its study. This article, a sequel to "An Analysis of the Syntax of the Ancient Text of the 《周易》," is a further study of the grammar of the ancient text. It systematically discusses the word formation of the text, its various word classes, and their grammatical functions; it points out the rudimentary and incomplete character of the text's morphology as a work of Old Chinese, as well as the uneven development of its morphology and syntax, thereby revealing one characteristic of the development of Old Chinese grammar.