41.
The eye movements of Finnish first and second graders were monitored as they read sentences in which polysyllabic words were either hyphenated at syllable boundaries, alternately coloured (every second syllable black, every second red), or carried no explicit syllable boundary cues (e.g., hyphenated ta-lo vs. colour-coded talo vs. plain talo = “house”). The results showed that hyphenation at syllable boundaries slows down the reading of first and second graders, even though syllabification by hyphens is very common in Finnish reading instruction: all first-grade textbooks include hyphens at syllable boundaries. When hyphens were positioned within a syllable (t-alo vs. ta-lo), beginning readers were even more disrupted. Alternate colouring did not affect reading speed, whether or not the colours signalled syllable structure. The results show that beginning Finnish readers prefer to process polysyllabic words via syllables rather than letter by letter. At the same time, they imply that hyphenation encourages sequential syllable processing, which slows down the reading of children who are already capable of parallel syllable processing or of recognising words directly via the whole-word route.
42.
It is unclear how children learn labels for multiple overlapping categories such as “Labrador,” “dog,” and “animal.” Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and adults. Here, we report data testing a developmental prediction of the Bayesian model—that more knowledge should lead to narrower category inferences when presented with multiple subordinate exemplars. Two experiments did not support this prediction. Children with more category knowledge showed broader generalization when presented with multiple subordinate exemplars, compared to less knowledgeable children and adults. This implies a U‐shaped developmental trend. The Bayesian model was not able to account for these data, even with inputs that reflected the similarity judgments of children. We discuss implications for the Bayesian model, including a combined Bayesian/morphological knowledge account that could explain the demonstrated U‐shaped trend.
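The “narrower inference” prediction being tested here follows from the size principle in Xu and Tenenbaum’s Bayesian model, and can be sketched in a few lines. This is an illustrative toy, not the authors’ implementation: the category names, category sizes, and uniform prior below are invented for the example. The likelihood of n independently sampled exemplars under a hypothesis of size |h| is (1/|h|)^n, so repeated subordinate exemplars (e.g., several Labradors) concentrate the posterior on the narrowest consistent category.

```python
# Toy sketch of the size principle in Bayesian word learning.
# Hypotheses are nested categories; sizes and priors are hypothetical.
hypotheses = {
    "labrador": {"size": 4,   "prior": 1 / 3},
    "dog":      {"size": 20,  "prior": 1 / 3},
    "animal":   {"size": 100, "prior": 1 / 3},
}

def posterior(n_exemplars):
    """Posterior over hypotheses after n exemplars, all of which are
    consistent with every hypothesis (e.g., n Labradors)."""
    # Likelihood of n samples from a category of size s is (1/s)^n.
    scores = {h: v["prior"] * (1.0 / v["size"]) ** n_exemplars
              for h, v in hypotheses.items()}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# One Labrador leaves broader hypotheses viable; three Labradors
# concentrate belief on the subordinate category — i.e., the model
# predicts narrower generalization with more subordinate exemplars.
p1 = posterior(1)
p3 = posterior(3)
```

Under these assumptions the posterior on “labrador” rises sharply from one exemplar to three, which is exactly the prediction the two experiments above failed to confirm in more knowledgeable children.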
43.
Words refer to objects in the world, but this correspondence is not one‐to‐one: Each word has a range of referents that share features on some dimensions but differ on others. This property of language is called underspecification. Parts of the lexicon have characteristic patterns of underspecification; for example, artifact nouns tend to specify shape, but not color, whereas substance nouns specify material but not shape. These regularities in the lexicon enable learners to generalize new words appropriately. How does the lexicon come to have these helpful regularities? We test the hypothesis that systematic backgrounding of some dimensions during learning and use causes language to gradually change, over repeated episodes of transmission, to produce a lexicon with strong patterns of underspecification across these less salient dimensions. This offers a cultural evolutionary mechanism linking individual word learning and generalization to the origin of regularities in the lexicon that help learners generalize words appropriately.
44.
Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli, using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one …”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of the best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence.
45.
In two experiments, we explore how recent experience with particular syntactic constructions affects the strength of the structural priming observed for those constructions. The results suggest that (1) the strength of structural priming observed for double object and prepositional object constructions is affected by the relative frequency with which each construction was produced earlier in the experiment, and (2) the effects of relative frequency are not modulated by the temporal placement of the tokens of each construction within the experiment.
46.
Studies of functional hemispheric asymmetries have suggested that the right and left hemispheres are predominantly involved in low and high spatial frequency (SF) analysis, respectively. By manipulating the exposure duration of filtered natural scene images, we examined whether the temporal characteristics of SF analysis (i.e., the temporal precedence of low over high spatial frequencies) may interfere with hemispheric specialization. Results showed the classical hemispheric specialization pattern for brief exposure durations and a trend towards a right hemisphere advantage, irrespective of SF content, for longer exposure durations. The present study suggests that the hemispheric specialization pattern for visual information processing should be considered a dynamic system, wherein the superiority of one hemisphere over the other can change according to the level of temporal constraints: the higher the temporal constraints of the task, the more the hemispheres are specialized in SF processing.
47.
The objective of this research was to explore whether orthographic learning occurs as a result of phonological recoding, as expected from the self-teaching hypothesis. The participants were 32 fourth- and fifth-graders (mean age = 10 years 0 months, SD = 7 months) who performed lexical decisions for monosyllabic real words and pseudowords under two matched experimental conditions: a read aloud condition, wherein items were named prior to lexical decision to promote phonological recoding, and a concurrent articulation condition, presumed to attenuate phonological recoding. Later, orthographic learning of the pseudowords was evaluated using orthographic choice, spelling, and naming tasks. Consistent with the self-teaching hypothesis, targets learned with phonological recoding in the read aloud condition yielded greater orthographic learning than those learned with concurrent articulation. The research confirms the critical nature of phonological recoding in the development of visual word recognition skills and an orthographic lexicon.
48.
The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.
49.
Sound symbolism is the idea that the relationship between word sounds and word meanings is not arbitrary for all words: there are subsets of words in the world’s languages for which sounds and meanings have some degree of correspondence. The present research investigates sound symbolism as a possible route to learning an unknown word’s meaning. Three studies compared the guesses that adult participants made regarding the potential meanings of sound-symbolic and non-sound-symbolic obsolete words. In each study, participants generated better definitions for sound-symbolic words than for non-sound-symbolic words. Participants were also more likely to recognize the meanings of sound-symbolic words. The superior performance on sound-symbolic words held even when definitions generated on the basis of sound association were eliminated. It is concluded that sound symbolism is a word property that influences word learning.
50.
What is the source of the mutual exclusivity bias whereby infants map novel labels onto novel objects? In an intermodal preferential looking task, we found that novel labels support 10-month-olds’ attention to a novel object over a familiar object. In contrast, familiar labels and a neutral phrase gradually reduced attention to a novel object. Markman (1989, 1990) argued that infants must recall the name of a familiar object to exclude it as the referent of a novel label. We argue that 10-month-olds’ attention is guided by the novelty of objects and labels rather than knowledge of the names for familiar objects. Mutual exclusivity, as a language-specific bias, might emerge from a more general constraint on attention and learning.