51.
Positive psychotherapy (PPT) is a therapeutic approach broadly based on the principles of positive psychology. Rooted in Chris Peterson’s groundbreaking work on character strengths, PPT integrates symptoms with strengths, resources with risks, weaknesses with values, and hopes with regrets in order to understand the inherent complexities of human experience in a way that is more balanced than the traditional deficit-oriented approach to psychotherapy. This paper makes the case for an alternative approach to psychotherapy that pays equal attention and effort to negatives and positives. It discusses PPT’s assumptions and describes in detail how PPT exercises work in clinical settings. The paper summarizes the results of pilot studies using this approach, discusses caveats in conducting PPT, and suggests potential directions.
52.
Recent work on social injustice has focused on implicit bias as an important factor in explaining persistent injustice in spite of achievements on civil rights. In this paper, I argue that because of its individualism, the implicit-bias explanation, taken alone, is inadequate to explain ongoing injustice; and, more importantly, it fails to call attention to what is morally at stake. An adequate account of how implicit bias functions must situate it within a broader theory of social structures and structural injustice; changing structures is often a precondition for changing patterns of thought and action, and it is certainly required for durable change.
53.
It is unclear how children learn labels for multiple overlapping categories such as “Labrador,” “dog,” and “animal.” Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and adults. Here, we report data testing a developmental prediction of the Bayesian model—that more knowledge should lead to narrower category inferences when presented with multiple subordinate exemplars. Two experiments did not support this prediction. Children with more category knowledge showed broader generalization when presented with multiple subordinate exemplars, compared to less knowledgeable children and adults. This implies a U‐shaped developmental trend. The Bayesian model was not able to account for these data, even with inputs that reflected the similarity judgments of children. We discuss implications for the Bayesian model, including a combined Bayesian/morphological knowledge account that could explain the demonstrated U‐shaped trend.
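For readers unfamiliar with the Bayesian account being tested, the sketch below illustrates the core size-principle computation from Xu and Tenenbaum's framework: hypotheses about a word's extension (subordinate, basic, superordinate) are scored by how well they cover the observed exemplars, and smaller consistent hypotheses gain weight as subordinate exemplars accumulate. The category extensions, flat prior, and item names here are illustrative assumptions, not the stimuli or parameters of the studies reported above.

```python
# Minimal size-principle sketch (after Xu & Tenenbaum); all extensions are made up.
HYPOTHESES = {
    "subordinate (Labrador)": {"labrador1", "labrador2", "labrador3"},
    "basic (dog)":            {"labrador1", "labrador2", "labrador3",
                               "poodle", "terrier", "beagle"},
    "superordinate (animal)": {"labrador1", "labrador2", "labrador3",
                               "poodle", "terrier", "beagle",
                               "cat", "horse", "parrot"},
}
PRIOR = {h: 1 / len(HYPOTHESES) for h in HYPOTHESES}  # flat prior for simplicity

def posterior(examples):
    """P(h | examples): smaller consistent hypotheses are favoured more
    strongly as the number of observed exemplars grows (the size principle)."""
    scores = {}
    for h, extension in HYPOTHESES.items():
        if all(x in extension for x in examples):
            scores[h] = PRIOR[h] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[h] = 0.0
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

def generalize(examples, probe):
    """Probability that the newly learned word also applies to `probe`."""
    post = posterior(examples)
    return sum(p for h, p in post.items() if probe in HYPOTHESES[h])

# Three subordinate exemplars sharply narrow the inferred meaning:
print(posterior(["labrador1", "labrador2", "labrador3"]))
print(generalize(["labrador1", "labrador2", "labrador3"], "poodle"))
```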
54.
Words refer to objects in the world, but this correspondence is not one‐to‐one: Each word has a range of referents that share features on some dimensions but differ on others. This property of language is called underspecification. Parts of the lexicon have characteristic patterns of underspecification; for example, artifact nouns tend to specify shape, but not color, whereas substance nouns specify material but not shape. These regularities in the lexicon enable learners to generalize new words appropriately. How does the lexicon come to have these helpful regularities? We test the hypothesis that systematic backgrounding of some dimensions during learning and use causes language to gradually change, over repeated episodes of transmission, to produce a lexicon with strong patterns of underspecification across these less salient dimensions. This offers a cultural evolutionary mechanism linking individual word learning and generalization to the origin of regularities in the lexicon that help learners generalize words appropriately.
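The transmission mechanism described here can be made concrete with a minimal iterated-learning sketch: if learners notice a specified dimension only with a probability tied to its salience, words gradually stop specifying the backgrounded dimension over generations. The dimensions, salience values, and other parameters below are illustrative assumptions, not the authors' simulation.

```python
# Illustrative iterated-learning sketch: colour is the backgrounded dimension.
import random

random.seed(1)
DIMS = ["shape", "material", "colour"]
SALIENCE = {"shape": 0.9, "material": 0.9, "colour": 0.2}  # colour is rarely noticed
N_WORDS, N_EXAMPLES, N_GENERATIONS = 20, 5, 30

def initial_lexicon():
    # Each word starts out specifying a value on every dimension.
    return [{d: 1 for d in DIMS} for _ in range(N_WORDS)]

def transmit(lexicon):
    """One generation of transmission: the learner notices a specified
    dimension with probability SALIENCE[d] on each example; a dimension
    never noticed across the examples is learned as unspecified (None)."""
    new_lex = []
    for word in lexicon:
        learned = {}
        for d in DIMS:
            if word[d] is None:
                learned[d] = None
            else:
                noticed = any(random.random() < SALIENCE[d] for _ in range(N_EXAMPLES))
                learned[d] = 1 if noticed else None
        new_lex.append(learned)
    return new_lex

def specified_fraction(lexicon, dim):
    return sum(w[dim] is not None for w in lexicon) / len(lexicon)

lex = initial_lexicon()
for _ in range(N_GENERATIONS):
    lex = transmit(lex)
print({d: specified_fraction(lex, d) for d in DIMS})
# Expected pattern: shape and material remain specified; colour drifts to unspecified.
```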
55.
Michael Ruse, Zygon, 2015, 50(2): 361–375
There is a strong need for a reasoned defense of what was known as the “independence” position on the science–religion relationship but that more recently has been denigrated as the “accommodationist” position, namely that while there are parts of religion—fundamentalist Christianity in particular—that clash with modern science, the essential parts of religion (Christianity) do not and could not clash with science. A case for this position is made on the grounds of the essentially metaphorical nature of science. Modern science functions because of its root metaphor of the machine: the world is seen in mechanical terms. As Thomas Kuhn insisted, metaphors function in part by ruling some questions outside their domain. In the case of modern science, four questions go unasked and hence unanswered: Why is there something rather than nothing? What is the foundation of morality? What is mind and its relationship to matter? What is the meaning of it all? You can remain a nonreligious skeptic on these questions, but it is open for the Christian to offer his or her answers, so long as they are not scientific answers. Here then is a way that science and religion can coexist.
56.
The thesis that meaning is normative has come under much scrutiny of late. However, aspects of the view centring on meaning’s capacity to guide and justify linguistic action have received comparatively little critical attention. Call such a view ‘justification normativity’ (JN). I outline Zalabardo’s (1997) account of JN and his corresponding argument against reductive-naturalistic meaning-factualism, and argue that the argument presents a genuine challenge to account for the guiding role of meaning in linguistic action. I then present a proposal for how this challenge may be met. This proposal is compared to recent work by Ginsborg (2011; 2012), who has outlined an alternative view of the normativity of meaning that explicitly rejects the idea that meanings guide and justify linguistic use. I outline how Ginsborg utilises this notion of normativity to provide a positive account of what it is to mean something by an expression, an account intended to serve as a response to Kripke’s semantic sceptic. Finally, I argue that Ginsborg’s response to the sceptic is unsatisfactory and that, insofar as my proposal is able to preserve our intuitive view of meaning’s capacity to guide linguistic action, it is to be preferred.
57.
Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli, using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one …”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of the best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence.
58.
In their paper “The case for neuropsychoanalysis,” Yovell, Solms, and Fotopoulou (2015) respond to our critique of neuropsychoanalysis (Blass & Carmeli, 2007), setting forth evidence and arguments which, they claim, demonstrate why neuroscience is relevant and important for psychoanalysis and hence why dialogue between the fields is necessary. In the present paper we carefully examine their evidence and arguments and demonstrate how and why their claim is completely mistaken. In fact, Yovell, Solms, and Fotopoulou's paper only confirms our position on the irrelevance and harmfulness to psychoanalysis of the contemporary neuroscientific trend. We show how this trend perverts the essential nature of psychoanalysis and of how it is practiced. The clinical impact and its detrimental nature are highlighted by a discussion of clinical material presented by Yovell et al. (2015). In the light of this we argue that the debate over neuropsychoanalysis should be of interest to all psychoanalysts, not only those concerned with biology or interdisciplinary dialogue.
59.
The objective of this research was to explore whether orthographic learning occurs as a result of phonological recoding, as expected from the self-teaching hypothesis. The participants were 32 fourth- and fifth-graders (mean age = 10 years 0 months, SD = 7 months) who performed lexical decisions for monosyllabic real words and pseudowords under two matched experimental conditions: a read aloud condition, wherein items were named prior to lexical decision to promote phonological recoding, and a concurrent articulation condition, presumed to attenuate phonological recoding. Later, orthographic learning of the pseudowords was evaluated using orthographic choice, spelling, and naming tasks. Consistent with the self-teaching hypothesis, targets learned with phonological recoding in the read aloud condition yielded greater orthographic learning than those learned with concurrent articulation. The research confirms the critical nature of phonological recoding in the development of visual word recognition skills and an orthographic lexicon.
60.
The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.
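As a rough illustration of this modeling approach (not the authors' implementation), the toy network below lets a naming pathway (visual input) and a repetition pathway (auditory input) share one intermediate layer that maps to a phonological output, and shows how "lesioning" part of that shared machinery degrades naming. All layer sizes, training patterns, and the lesion procedure are illustrative assumptions.

```python
# Toy two-pathway network with a shared hidden layer; trained by backpropagation.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_vis, n_aud, n_hid, n_phon = 6, 12, 12, 16, 10

vis = rng.integers(0, 2, (n_items, n_vis)).astype(float)    # "visual" object codes
aud = rng.integers(0, 2, (n_items, n_aud)).astype(float)    # "auditory" word codes
phon = rng.integers(0, 2, (n_items, n_phon)).astype(float)  # target phonological forms

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

W1 = rng.normal(0, 0.1, (n_vis + n_aud, n_hid))  # inputs -> shared intermediate layer
W2 = rng.normal(0, 0.1, (n_hid, n_phon))         # shared layer -> phonological output

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def train(epochs=3000, lr=0.5):
    global W1, W2
    for _ in range(epochs):
        for i in range(n_items):
            for x in (np.concatenate([vis[i], np.zeros(n_aud)]),   # naming trial
                      np.concatenate([np.zeros(n_vis), aud[i]])):  # repetition trial
                h, y = forward(x)
                delta_out = (phon[i] - y) * y * (1 - y)
                W2 += lr * np.outer(h, delta_out)
                W1 += lr * np.outer(x, (delta_out @ W2.T) * h * (1 - h))

def naming_accuracy():
    correct = 0
    for i in range(n_items):
        _, y = forward(np.concatenate([vis[i], np.zeros(n_aud)]))
        correct += np.array_equal((y > 0.5).astype(float), phon[i])
    return correct / n_items

train()
print("naming accuracy, intact:", naming_accuracy())
W2[: n_hid // 2, :] = 0.0  # crude "lesion": silence half of the shared layer's outputs
print("naming accuracy, lesioned:", naming_accuracy())
```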