312 query results in total; showing items 311–312.
311.
In this paper, we propose a Vector Semiotic Model as a possible solution to the symbol grounding problem in the context of Visual Question Answering. The Vector Semiotic Model combines the advantages of the Semiotic Approach, implemented in the Sign-Based World Model, with those of Vector Symbolic Architectures. The Sign-Based World Model represents information about the scene depicted in an input image in a structured way and grounds abstract objects in the agent's sensory input. We use a Vector Symbolic Architecture to represent the elements of the Sign-Based World Model at the computational level. The properties of a high-dimensional space and the operations defined on high-dimensional vectors allow the whole scene to be encoded into a single high-dimensional vector while preserving its structure. This makes it possible to apply explainable reasoning to answer an input question. We conducted experiments on the CLEVR dataset and obtained results comparable to the state of the art. The proposed combination of approaches, first, offers a possible solution to the symbol grounding problem and, second, allows the current results to be extended to other intelligent tasks (collaborative robotics, embodied intelligent assistance, etc.).
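To make the "encode a structured scene into one high-dimensional vector" step concrete, below is a minimal sketch of the binding and bundling operations typical of Vector Symbolic Architectures, using the common bipolar (multiply/add) variant. The dimensionality, the codebook symbols (COLOR, SHAPE, red, cube, ...), and the two-object scene are illustrative assumptions, not the paper's actual encoding.

```python
# Minimal Vector Symbolic Architecture sketch (bipolar multiply/add variant).
# Assumed, illustrative encoding -- not the paper's exact scheme.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def random_hv():
    """Random bipolar hypervector in {-1, +1}^DIM."""
    return rng.choice([-1, 1], size=DIM)

def bind(a, b):
    """Binding (role-filler association); self-inverse: bind(bind(a, b), b) ~ a."""
    return a * b

def bundle(*vs):
    """Bundling (superposition): the sum remains similar to each operand."""
    return np.sum(vs, axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Codebook of atomic symbols (hypothetical roles and fillers).
symbols = {name: random_hv() for name in
           ["COLOR", "SHAPE", "red", "blue", "cube", "sphere"]}

# Encode a two-object scene as a superposition of role-filler bindings.
obj1 = bundle(bind(symbols["COLOR"], symbols["red"]),
              bind(symbols["SHAPE"], symbols["cube"]))
obj2 = bundle(bind(symbols["COLOR"], symbols["blue"]),
              bind(symbols["SHAPE"], symbols["sphere"]))
scene = bundle(obj1, obj2)

# Query: what is the shape of obj1? Unbinding the SHAPE role yields a noisy
# vector that is cleaned up by comparing against the codebook.
probe = bind(obj1, symbols["SHAPE"])
best = max(symbols, key=lambda name: cosine(probe, symbols[name]))
print(best)  # -> 'cube' with high probability at this dimensionality
```

The point of the sketch is the structural claim in the abstract: after binding and bundling, the scene is one fixed-width vector, yet individual role-filler pairs remain recoverable by unbinding, which is what makes symbolic-style reasoning over the encoding possible.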
312.
Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: It is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that models use language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary. Instead, simple regularities of word use in children's language input that they have the capacity to learn can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children's growing word knowledge. A video abstract of this article can be viewed at https://youtu.be/dT83dmMffnM.

Research Highlights

  • Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use.
  • Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words.
  • We show that infants' and children's language input is rich in simple co-occurrences that can support learning similarities in meaning between words.
  • We find that simple co-occurrences can explain infants' and children's knowledge that words are similar in meaning.
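As a concrete illustration of the distributional idea the abstract and highlights describe, the sketch below counts simple word co-occurrences within a small window over a toy corpus, then compares words by the cosine similarity of their co-occurrence vectors. The corpus, window size, and word choices are illustrative assumptions, not the authors' materials or model.

```python
# Minimal distributional-semantics sketch: words used in similar contexts
# end up with similar co-occurrence vectors. Toy corpus and window size
# are assumptions for illustration only.
from collections import Counter, defaultdict
import math

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the kitten chased the mouse",
]
WINDOW = 2  # words within +/-2 positions count as co-occurring

# cooc[w] is a Counter mapping context words to co-occurrence counts.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - WINDOW), min(len(words), i + WINDOW + 1)):
            if j != i:
                cooc[w][words[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v)

print(cosine(cooc["cat"], cooc["kitten"]))  # relatively high: similar contexts
print(cosine(cooc["cat"], cooc["chased"]))  # lower: dissimilar contexts
```

Even this bare counting scheme, with no neural machinery, assigns "cat" and "kitten" similar vectors because they appear in similar contexts, which is the kind of simple regularity the studies argue is within children's learning capacity.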