4,598 search results found (search time: 15 ms)
878.
Communicating with multiple addressees poses a problem for speakers: Each addressee necessarily comes to the conversation with a different perspective—different knowledge, different beliefs, and a distinct physical context. Despite the ubiquity of multiparty conversation in everyday life, little is known about the processes by which speakers design language in multiparty conversation. While prior evidence demonstrates that speakers design utterances to accommodate addressee knowledge in multiparty conversation, it is unknown if and how speakers encode and combine different types of perspective information. Here we test whether speakers encode the perspective of multiple addressees, and then simultaneously consider their knowledge and physical context during referential design in a three‐party conversation. Analyses of referential form—expression length, disfluency, and elaboration rate—in an interactive multiparty conversation demonstrate that speakers do take into consideration both addressee knowledge and physical context when designing utterances, consistent with a knowledge‐scene integration view. These findings point to an audience design process that takes as input multiple types of representations about the perspectives of multiple addressees, and that bases the informational content of the to‐be‐designed utterance on a combination of the perspectives of the intended addressees.
879.
Syntactic priming in language production is the increased likelihood of using a recently encountered syntactic structure. In this paper, we examine two theories of why speakers can be primed: error‐driven learning accounts (Bock, Dell, Chang, & Onishi, 2007; Chang, Dell, & Bock, 2006) and activation‐based accounts (Pickering & Branigan, 1999; Reitter, Keller, & Moore, 2011). Both theories predict that speakers should be primed by the syntactic choices of others, but only activation‐based accounts predict that speakers should be able to prime themselves. Here we test whether speakers can be primed by their own productions in three behavioral experiments and find evidence of structural persistence following both comprehension and speakers’ own productions. We also find that comprehension‐based priming effects are larger for rarer syntactic structures than for more common ones, which is most consistent with error‐driven accounts. Because neither error‐driven accounts nor activation‐based accounts fully explain the data, we propose a hybrid model.
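As a rough illustration of why the inverse‐frequency result favors error‐driven accounts, the toy sketch below (not taken from the paper; the learning rate, activation increment, and baseline probabilities are made‐up values) contrasts the two accounts: an error‐driven update scales with prediction error and therefore gives a larger boost to a rarer structure, whereas a fixed activation increment does not depend on how expected the structure was.

```python
# Toy sketch (illustrative only): how priming magnitude depends on a
# structure's baseline probability under the two accounts.

def error_driven_boost(prior_prob, learning_rate=0.1):
    """Error-driven learning: the update scales with prediction error,
    so rarer (less expected) structures receive a larger boost."""
    prediction_error = 1.0 - prior_prob
    return learning_rate * prediction_error

def activation_boost(prior_prob, increment=0.05):
    """Activation-based account: each encounter adds a roughly constant
    increment, independent of how expected the structure was."""
    return increment

for name, prior in [("common structure (e.g., active)", 0.8),
                    ("rare structure (e.g., passive)", 0.2)]:
    print(f"{name}: prior={prior:.2f}  "
          f"error-driven boost={error_driven_boost(prior):.3f}  "
          f"activation boost={activation_boost(prior):.3f}")
```

Under these illustrative numbers, the rare structure receives a four‐times larger error‐driven boost than the common one, while the activation increment is identical for both, which is the qualitative contrast the comprehension‐based priming data speak to.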
880.
Several studies have illuminated how processing manual action verbs (MaVs) affects the programming or execution of concurrent hand movements. Here, to circumvent key confounds in extant designs, we conducted the first assessment of motor–language integration during handwriting—a task in which linguistic and motoric processes are co‐substantiated. Participants copied MaVs, non‐manual action verbs, and non‐action verbs as we collected measures of motor programming and motor execution. Programming latencies were similar across conditions, but execution was faster for MaVs than for the other categories, regardless of whether word meanings were accessed implicitly or explicitly. In line with the Hand‐Action‐Network Dynamic Language Embodiment (HANDLE) model, such findings suggest that effector‐congruent verbs can prime manual movements even during highly automatized tasks in which motoric and verbal processes are naturally intertwined. Our paradigm opens new avenues for fine‐grained explorations of embodied language processes.
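For readers interested in how programming and execution measures can be separated in copying tasks of this kind, the toy sketch below (hypothetical field names and timing values, not the study's analysis pipeline) treats programming latency as the interval from stimulus onset to first pen contact and execution duration as the interval from pen‐down to pen‐up, averaged per verb category.

```python
# Toy sketch (illustrative only): separating motor programming from motor
# execution in handwriting-copy trials. Trial records and values are invented.
from collections import defaultdict
from statistics import mean

trials = [
    {"category": "manual_action_verb",     "onset_ms": 0, "pen_down_ms": 480, "pen_up_ms": 1820},
    {"category": "non_manual_action_verb", "onset_ms": 0, "pen_down_ms": 475, "pen_up_ms": 1960},
    {"category": "non_action_verb",        "onset_ms": 0, "pen_down_ms": 490, "pen_up_ms": 1975},
]

latencies = defaultdict(list)   # programming: stimulus onset -> first pen contact
durations = defaultdict(list)   # execution: pen-down -> pen-up
for t in trials:
    latencies[t["category"]].append(t["pen_down_ms"] - t["onset_ms"])
    durations[t["category"]].append(t["pen_up_ms"] - t["pen_down_ms"])

for cat in latencies:
    print(f"{cat}: programming latency {mean(latencies[cat]):.0f} ms, "
          f"execution duration {mean(durations[cat]):.0f} ms")
```

With the invented numbers above, latencies are near‐identical across categories while the MaV trial finishes execution sooner, mirroring the pattern of results described in the abstract.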