103.
Numerous investigators have reported that listeners can perceptually differentiate the fluent speech of adult stutterers from that of nonstutterers. However, findings from similar studies with children aged 3 to 9 years indicate that perceptual discrimination of child stutterers is difficult. A logical extension of this line of investigation is to determine when, during maturation from childhood to adulthood, stutterers' fluent speech becomes perceptibly different from nonstutterers'. In this study, comparable fluent speech samples from seven 12- to 16-year-old adolescent male stutterers and seven matched nonstutterers were judged perceptually in a paired-stimulus paradigm by 15 sophisticated listeners. Individual-subject analyses using signal detection theory revealed that five of the seven stutterers were discriminated. When averaged for group comparison, the findings indicated that listeners successfully discriminated between the fluent speech of the two groups. The perceptual difference in fluent speech production previously reported for adults therefore appears to be present by adolescence.
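The individual-subject analysis described above rests on standard equal-variance signal detection theory, where a listener's sensitivity is summarized as d′, the difference between the z-transformed hit and false-alarm rates. The following is a minimal generic sketch of that computation, not the authors' analysis code, and the example rates are invented for illustration.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance SDT sensitivity index: d' = z(hit rate) - z(false-alarm rate).
    Rates must be strictly between 0 and 1 (corrected beforehand if at floor/ceiling)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener data: proportion of "stutterer" responses to
# stutterer samples (hits) vs. nonstutterer samples (false alarms).
print(round(d_prime(0.80, 0.30), 2))  # → 1.37
```

A d′ reliably above zero for a given speaker would indicate that listeners discriminated that speaker's fluent speech from the matched nonstutterer's, as reported for five of the seven adolescents.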
107.
Communicating with multiple addressees poses a problem for speakers: Each addressee necessarily comes to the conversation with a different perspective—different knowledge, different beliefs, and a distinct physical context. Despite the ubiquity of multiparty conversation in everyday life, little is known about the processes by which speakers design language in multiparty conversation. While prior evidence demonstrates that speakers design utterances to accommodate addressee knowledge in multiparty conversation, it is unknown whether and how speakers encode and combine different types of perspective information. Here we test whether speakers encode the perspectives of multiple addressees and then simultaneously consider their knowledge and physical context during referential design in a three-party conversation. Analyses of referential form—expression length, disfluency, and elaboration rate—in an interactive multiparty conversation demonstrate that speakers take both addressee knowledge and physical context into consideration when designing utterances, consistent with a knowledge-scene integration view. These findings point to an audience design process that takes as input multiple types of representations about the perspectives of multiple addressees, and that bases the informational content of the to-be-designed utterance on a combination of the perspectives of the intended addressees.
108.
How do speakers design what they say in order to communicate effectively with groups of addressees who vary in their background knowledge of the topic at hand? Prior findings indicate that when a speaker addresses a pair of listeners with discrepant knowledge, the speaker Aims Low, designing utterances for the less knowledgeable of the two addressees. Here, we test the hypothesis that speakers will depart from an Aim Low approach in order to communicate efficiently with larger groups of interacting partners. Further, we ask whether the cognitive demands of tracking multiple conversational partners' perspectives place limitations on successful audience design. We find that speakers can successfully track what up to four of their partners do and do not know in conversation. When addressing groups of 3–4 addressees at once, speakers design language based on the combined knowledge of the group. These findings point to an audience design process that simultaneously represents the perspectives of multiple other individuals and combines these representations in order to design utterances that strike a balance between the different needs of the individuals within the group.
110.
Unfamiliar simultaneous face matching is error prone, and reducing incorrect identification decisions would benefit forensic and security contexts. The absence of view-independent information in static images likely contributes to the difficulty of unfamiliar face matching. We tested whether a novel interactive viewing procedure that provides the user with 3D structural information as they rotate a facial image to different orientations would improve face matching accuracy. We tested 'typical' (Experiment 1) and 'superior' (Experiment 2) face recognizers, and compared performance using high-quality (Experiment 3) and pixelated (Experiment 4) Facebook profile images. In each trial, participants judged whether two images featured the same person, with one of the images being either a static face, a video providing orientation information, or an interactive image. Taken together, the results show that fluid orientation information and interactivity prompt shifts in criterion and support matching performance. Because typical and superior face recognizers both benefited from the structural information provided by the novel viewing procedures, our results point to qualitatively similar reliance on pictorial encoding in these groups. This also suggests that interactive viewing tools can be valuable in assisting face matching in high-performing practitioner groups.
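The "shifts in criterion" reported above refer to the response criterion c of signal detection theory, which separates response bias from sensitivity. The sketch below shows the standard formula for c; it is a generic illustration under the equal-variance SDT assumption, not the analysis code from this study, and the example rates are hypothetical.

```python
from statistics import NormalDist

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response criterion c = -(z(H) + z(F)) / 2.
    Positive c indicates a conservative bias (fewer "same person" responses);
    negative c indicates a liberal bias. c = 0 is unbiased responding."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical match-task data: hits = correct "same person" on match trials,
# false alarms = incorrect "same person" on mismatch trials.
print(round(criterion(0.80, 0.30), 2))
```

Comparing c across the static, video, and interactive viewing conditions is one way a criterion shift of the kind described here could be quantified, independently of any change in sensitivity.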