341.
Despite data showing that teacher victimization is at least as great a problem as student victimization, far less research exists regarding teacher victimization than student victimization and overall school crime, particularly with regard to the application of criminological theory to explain the victimization of teachers. We address this gap by examining the hierarchical relationship between communal school organization and teacher victimization in a nationally representative sample of 37,497 teachers from 7,488 public schools in the United States. Results showed that teachers experienced less victimization in schools that were more communally organized. We discuss these findings and present implications for school-based delinquency prevention.
342.
Layla Unger Hyungwook Yim Olivera Savic Simon Dennis Vladimir M. Sloutsky 《Developmental science》2023,26(4):e13373
Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: It is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that models use language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary. Instead, simple regularities of word use in children's language input that they have the capacity to learn can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children's growing word knowledge. A video abstract of this article can be viewed at https://youtu.be/dT83dmMffnM.
Research Highlights
- Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use.
- Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words.
- We show that infants' and children's language input is rich in simple co-occurrences that can support learning similarities in meaning between words.
- We find that simple co-occurrences can explain infants' and children's knowledge that words are similar in meaning.
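The distributional idea underlying the abstract — that words similar in meaning occur in similar contexts, so even simple co-occurrence counts can reveal semantic similarity — can be illustrated with a small sketch. This is a hypothetical toy example, not the study's code or data: the corpus, the sentence-level co-occurrence window, and the cosine comparison are all illustrative choices.

```python
from collections import defaultdict
from math import sqrt

# Toy corpus standing in for child-directed speech (hypothetical data,
# not the study's actual input).
corpus = [
    "the dog chased the cat",
    "the cat chased the dog",
    "the dog ate the food",
    "the cat ate the food",
    "the truck carried the load",
]

# Count simple co-occurrences: two word tokens co-occur if they appear
# in the same sentence (a deliberately simple window choice).
cooc = defaultdict(lambda: defaultdict(int))
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def cosine(a, b):
    """Cosine similarity between two words' co-occurrence vectors."""
    dims = sorted(vocab)
    va = [cooc[a][d] for d in dims]
    vb = [cooc[b][d] for d in dims]
    dot = sum(x * y for x, y in zip(va, vb))
    na = sqrt(sum(x * x for x in va))
    nb = sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

# "dog" and "cat" share contexts (chased, ate, food), so their vectors
# end up more similar than those of "dog" and "truck".
print(cosine("dog", "cat") > cosine("dog", "truck"))  # → True
```

Even with no learning mechanism beyond counting, the shared contexts of "dog" and "cat" make their vectors more alike than those of words used in different contexts — the kind of simple regularity the abstract argues is available to children.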