Sort order: 1,222 results found in total (search time: 187 ms)
911.
912.
Pomegranate     
Jean Janzen. Cross currents, 2010, 60(1): 131-133
913.
Continental Philosophy Review - The article "The logic of hatred and its social and historical expressions: From the great witch-hunt to terror and present-day djihadism," written by Jean...
914.
Theories of sentence production that involve a convergence of activation from conceptual‐semantic and syntactic‐sequential units inspired a connectionist model that was trained to produce simple sentences. The model used a learning algorithm that resulted in a sharing of responsibility (or “division of labor”) between syntactic and semantic inputs for lexical activation according to their predictive power. Semantically rich, or “heavy”, verbs in the model came to rely on semantic cues more than on syntactic cues, whereas semantically impoverished, or “light”, verbs relied more on syntactic cues. When the syntactic and semantic inputs were lesioned, the model exhibited patterns of production characteristic of agrammatic and anomic aphasic patients, respectively. Anomic models tended to lose the ability to retrieve heavy verbs, whereas agrammatic models were more impaired in retrieving light verbs. These results obtained in both sentence production and single‐word naming simulations. Moreover, simulated agrammatic lexical retrieval was more impaired overall in sentences than in single‐word tasks, in agreement with the literature. The results provide a demonstration of the division‐of‐labor principle, as well as general support for the claim that connectionist learning principles can contribute to the understanding of non‐transparent neuropsychological dissociations.
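To make the division-of-labor principle concrete, here is a minimal sketch (my own illustration, not the authors' model): a single lexical unit is trained with a delta rule on a pool of semantic cues and one syntactic-frame cue, and the share of weight each cue pool earns depends on how diagnostic its cues are. The cue counts, learning rate, and probabilities are illustrative assumptions.

```python
# A minimal sketch (not the authors' model): one lexical unit trained with a
# delta rule on semantic cues and a syntactic-frame cue. Only the
# diagnosticity of the semantic cues ("heaviness") differs between verbs.
import numpy as np

rng = np.random.default_rng(0)

def train_verb(sem_diagnosticity, n_sem=5, n_trials=20000, lr=0.05):
    """Return (semantic, syntactic) weight mass for one verb.

    The syntactic frame cue is present on every trial, but the verb is the
    correct word only half the time; semantic cues are present with
    probability `sem_diagnosticity` when the verb is correct and with
    probability 1 - sem_diagnosticity otherwise.
    """
    w_sem = np.zeros(n_sem)
    w_syn = 0.0
    for _ in range(n_trials):
        target = float(rng.random() < 0.5)
        p = sem_diagnosticity if target else 1.0 - sem_diagnosticity
        sem = (rng.random(n_sem) < p).astype(float)
        out = w_sem @ sem + w_syn          # syntactic cue is always on
        err = target - out
        w_sem += lr * err * sem            # delta rule: active cues share credit
        w_syn += lr * err
    return w_sem.sum(), w_syn

heavy = train_verb(0.9)   # semantically rich verb: diagnostic semantic cues
light = train_verb(0.55)  # semantically light verb: nearly uninformative cues
print("heavy verb (sem, syn):", heavy)   # semantic weights dominate
print("light verb (sem, syn):", light)   # the syntactic cue carries the load
# 'Lesioning' the semantic input (zeroing w_sem) therefore costs heavy verbs
# more, while lesioning the syntactic input costs light verbs more.
```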
915.
916.
917.
THE DISRUPTIVE EFFECT OF SELF-OBJECTIFICATION ON PERFORMANCE   Total citations: 1; self-citations: 1; citations by others: 0
Self-objectification is the act of viewing the self, particularly the body, from a third-person perspective. Objectification theory proposes numerous negative consequences for those who self-objectify, including decreased performance through the disruption of focused attention. In the current study, we examined whether women in a state of self-objectification were slower to respond to a basic Stroop color-naming task. Results showed that regardless of the type of word (color words, body words, or neutral words), participants in a state of self-objectification exhibited decreased performance. This study lends further evidence to objectification theory and highlights the negative performance ramifications of state self-objectification.
918.
For most multisensory events, observers perceive synchrony among the various senses (vision, audition, touch), despite the naturally occurring lags in arrival and processing times of the different information streams. A substantial amount of research has examined how the brain accomplishes this. In the present article, we review several key issues about intersensory timing, and we identify four mechanisms of how intersensory lags might be dealt with: by ignoring lags up to some point (a wide window of temporal integration), by compensating for predictable variability, by adjusting the point of perceived synchrony on the longer term, and by shifting one stream directly toward the other.
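Two of the four mechanisms listed above lend themselves to a toy illustration (a sketch of my own, not a model from the article): ignoring lags that fall inside a fixed integration window, and slowly recalibrating the point of subjective simultaneity (PSS) after exposure to a constant lag. The window size, adaptation rate, and lag values are assumptions.

```python
# Toy illustration of two mechanisms: a fixed window of temporal integration
# and gradual recalibration of the PSS after exposure to a constant lag.

WINDOW_MS = 200.0      # lags within +/-100 ms of the PSS are fused
ADAPT_RATE = 0.02      # fraction of the residual lag absorbed per exposure

def perceived_synchronous(lag_ms, pss_ms):
    """Mechanism 1: ignore lags that fall inside the integration window."""
    return abs(lag_ms - pss_ms) <= WINDOW_MS / 2

def recalibrate(pss_ms, exposure_lag_ms):
    """Mechanism 3: shift the PSS a little toward a repeatedly experienced lag."""
    return pss_ms + ADAPT_RATE * (exposure_lag_ms - pss_ms)

pss = 0.0
print(perceived_synchronous(80, pss))    # True: inside the window
print(perceived_synchronous(150, pss))   # False: outside the window

# Repeated exposure to sound lagging vision by 150 ms shifts the PSS...
for _ in range(100):
    pss = recalibrate(pss, 150.0)
print(round(pss, 1))                      # PSS has moved toward the adapted lag
print(perceived_synchronous(150, pss))    # ...and the same lag is now fused
```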
919.
920.
Eyetracking facilities are typically restricted to monitoring a single person viewing static images or prerecorded video. In the present article, we describe a system that makes it possible to study visual attention in coordination with other activity during joint action. The software links two eyetracking systems in parallel and provides an on-screen task. By locating eye movements against dynamic screen regions, it permits automatic tracking of moving on-screen objects. Using existing SR technology, the system can also cross-project each participant’s eyetrack and mouse location onto the other’s on-screen work space. Keeping a complete record of eyetrack and on-screen events in the same format as subsequent human coding, the system permits the analysis of multiple modalities. The software offers new approaches to spontaneous multimodal communication: joint action and joint attention. These capacities are demonstrated using an experimental paradigm for cooperative on-screen assembly of a two-dimensional model. The software is available under an open source license.
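The key bookkeeping step described above, locating each gaze sample against dynamic screen regions, can be sketched generically as follows. This is not the authors' software (which builds on SR Research eyetrackers); the class, field names, and the linear-motion assumption are hypothetical choices for illustration.

```python
# Generic sketch: decide, for each timestamped gaze sample, which moving
# on-screen region (if any) it falls on. Regions move linearly here.
from dataclasses import dataclass

@dataclass
class MovingRegion:
    name: str
    x0: float      # top-left x at t = 0 (pixels)
    y0: float      # top-left y at t = 0 (pixels)
    w: float       # width (pixels)
    h: float       # height (pixels)
    vx: float      # horizontal velocity (pixels per ms)
    vy: float      # vertical velocity (pixels per ms)

    def contains(self, gx, gy, t_ms):
        """Is gaze point (gx, gy) inside this region at time t_ms?"""
        x = self.x0 + self.vx * t_ms
        y = self.y0 + self.vy * t_ms
        return x <= gx <= x + self.w and y <= gy <= y + self.h

def label_gaze(samples, regions):
    """Tag each (t_ms, gx, gy) gaze sample with the region it lands on, if any."""
    out = []
    for t_ms, gx, gy in samples:
        hit = next((r.name for r in regions if r.contains(gx, gy, t_ms)), None)
        out.append((t_ms, gx, gy, hit))
    return out

regions = [MovingRegion("block_A", 100, 100, 80, 80, 0.05, 0.0)]
samples = [(0, 120, 130), (2000, 230, 140), (2000, 50, 50)]
print(label_gaze(samples, regions))
# [(0, 120, 130, 'block_A'), (2000, 230, 140, 'block_A'), (2000, 50, 50, None)]
```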