  Subscription full text: 1,526 articles
  Free: 83 articles
  Free (domestic): 5 articles
  Total: 1,614 articles
  By publication year:
  2023: 15
  2022: 9
  2021: 42
  2020: 49
  2019: 34
  2018: 34
  2017: 40
  2016: 51
  2015: 46
  2014: 59
  2013: 160
  2012: 61
  2011: 79
  2010: 37
  2009: 92
  2008: 111
  2007: 96
  2006: 47
  2005: 42
  2004: 45
  2003: 34
  2002: 23
  2001: 8
  2000: 4
  1999: 4
  1998: 5
  1997: 9
  1996: 2
  1995: 1
  1994: 1
  1993: 2
  1987: 1
  1985: 16
  1984: 27
  1983: 24
  1982: 31
  1981: 24
  1980: 41
  1979: 33
  1978: 41
  1977: 29
  1976: 28
  1975: 25
  1974: 29
  1973: 23
Sort order: 1,614 query results found; search time 15 ms.
71.
Extant accounts of visually situated language processing make general predictions about visual context effects on incremental sentence comprehension; these, however, are not sufficiently detailed to accommodate potentially different visual context effects, such as a scene–sentence mismatch based on actions versus thematic role relations (e.g., Altmann & Kamide, 2007; Knoeferle & Crocker, 2007; Taylor & Zwaan, 2008; Zwaan & Radvansky, 1998). To provide additional data for theory testing and development, we collected event-related brain potentials (ERPs) as participants read a subject–verb–object sentence (500 ms SOA in Experiment 1 and 300 ms SOA in Experiment 2), together with post-sentence verification times indicating whether or not the verb and/or the thematic role relations matched a preceding picture (depicting two participants engaged in an action). Though both were processed incrementally, these two types of mismatch yielded different ERP effects. Role–relation mismatch effects emerged at the subject noun as anterior negativities to the mismatching noun, preceding action mismatch effects, which manifested as larger centro-parietal N400s to the mismatching verb, regardless of SOA. The two types of mismatch manipulation also yielded different effects post-verbally, correlated differently with participants' mean accuracy, verbal working memory, and visual-spatial scores, and differed in their interactions with SOA. Taken together, these results clearly implicate more than a single mismatch mechanism for extant accounts of picture–sentence processing to accommodate.
72.
The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e.g., Glaser & Düngelhoff, 1984; Roelofs, 2003). Recently, however, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hànzì) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between the two languages is attributed to processing costs caused by the multiple pronunciations of Japanese kanji.
73.
This study investigated the role of verbal ability and fluid intelligence in children's emotion understanding, testing the hypothesis that fluid intelligence predicts the development of emotion comprehension over and above age and verbal ability. One hundred and two children (48 girls) aged 3.6–6 years completed the Test of Emotion Comprehension (TEC; Pons & Harris, 2000), which comprises external and mental components, the Coloured Progressive Matrices, and the Test for Reception of Grammar. Regression analysis showed that fluid intelligence was not equally related to the external and mental components of the TEC. Specifically, the results indicated that the external component was related to age and verbal ability only, whereas recognition of mental emotional patterns required abstract reasoning skills over and above age and verbal ability. It is concluded that fluid intelligence plays a significant role in the development of the mental component of emotion comprehension.
74.
We investigate whether infant-directed speech (IDS) could facilitate word form learning when compared to adult-directed speech (ADS). To study this, we examine the distribution of word forms at two levels, acoustic and phonological, using a large database of spontaneous speech in Japanese. At the acoustic level we show that, as has been documented before for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS. At the phonological level, we find an effect in the opposite direction: the IDS lexicon contains more distinctive words (such as onomatopoeias) than the ADS counterpart. Combining the acoustic and phonological metrics into a global discriminability score reveals that the greater separation of lexical categories in the phonological space does not compensate for the opposite effect observed at the acoustic level. As a result, IDS word forms are still globally less discriminable than ADS word forms, even though the effect is numerically small. We discuss the implications of these findings for the view that the functional role of IDS is to improve language learnability.
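As a rough illustration of what an acoustic discriminability score of this kind can look like (the abstract does not specify the study's own measure, so the metric, the toy feature space, and every name below are assumptions made purely for illustration), the Python sketch below scores word categories by how often a token's nearest acoustic neighbour belongs to the same word; greater within-word variability, as reported here for IDS, lowers the score.

```python
import numpy as np

def nn_discriminability(features, labels):
    """Leave-one-out 1-nearest-neighbour score: the fraction of tokens whose
    closest other token (Euclidean distance) belongs to the same word category.
    Higher values mean word categories are easier to tell apart."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude each token itself
    nearest = np.argmin(dists, axis=1)
    return float(np.mean(labels[nearest] == labels))

if __name__ == "__main__":
    # Toy "acoustic" exemplars of three hypothetical word types (2-D features).
    rng = np.random.default_rng(0)
    ids_spread, ads_spread = 1.5, 0.5        # assumption: IDS tokens vary more
    centres = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
    labels = np.repeat([0, 1, 2], 20)
    ids_tokens = centres[labels] + rng.normal(scale=ids_spread, size=(60, 2))
    ads_tokens = centres[labels] + rng.normal(scale=ads_spread, size=(60, 2))
    print("ADS discriminability:", nn_discriminability(ads_tokens, labels))
    print("IDS discriminability:", nn_discriminability(ids_tokens, labels))
```

A leave-one-out nearest-neighbour score is only one of many possible proxies; ABX-style or classifier-based measures would follow the same logic of asking how separable word categories remain once token variability is taken into account.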
75.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that context effects are due to domain-general auditory processes. However, prior research examining the effects of context on children's speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children's use of context in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children's speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children's speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.
76.
Previous research has shown that handwriting production is mediated by linguistically oriented processing units such as syllables and graphemes. The goal of this study was to investigate whether French adults also activate another kind of unit that is more related to semantics than to phonology, namely morphemes. Experiment 1 revealed that letter durations and inter-letter intervals were longer for suffixed words than for pseudo-suffixed words. These results suggest that the handwriting production system chunks the letter components of the root and suffix into morpheme-sized units. Experiment 2 compared the production of prefixed and pseudo-prefixed words; here, the results did not yield significant differences. This asymmetry between suffix and prefix processing has also been observed in other linguistic tasks. In suffixed words, the suffix would be processed on-line during production of the root, in an analytic fashion. Prefixed words, in contrast, seem to be processed without decomposition, as pseudo-affixed words are.
77.
Qian T, Jaeger TF. Cognitive Science, 2012, 36(7): 1312–1336
Recent years have seen a surge in accounts motivated by information theory that consider language production to be partially driven by a preference for communicative efficiency. Evidence from discourse production (i.e., production beyond the sentence level) has been argued to suggest that speakers distribute information across a discourse so as to hold the conditional per-word entropy associated with each word constant, which would facilitate efficient information transfer (Genzel & Charniak, 2002). This hypothesis implies that the conditional (contextualized) probabilities of linguistic units affect speakers' preferences during production. Here, we extend this work in two ways. First, we explore how preceding cues are integrated into contextualized probabilities, a question which has so far received little to no attention. Specifically, we investigate how a cue's maximal informativity about upcoming words (the cue's effectiveness) decays as a function of the cue's recency. Based on properties of linguistic discourses as well as properties of human memory, we analytically derive a model of cue effectiveness decay and evaluate it against cross-linguistic data from 12 languages. Second, we relate the information-theoretic accounts of discourse production to well-established mechanistic (activation-based) accounts: we relate contextualized probability distributions over words to their relative activation in a lexical network given the preceding discourse.
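To make the notion of conditional per-word entropy concrete, a minimal sketch follows; it is not the authors' model (which works with far larger n-gram models estimated over multilingual corpora), and the toy discourse, the add-alpha smoothing, and all function names are assumptions for illustration only. The sketch estimates each word's surprisal, -log2 P(word | previous word), under a bigram model and reports the mean per-word value for each sentence in discourse order, the quantity the constancy hypothesis concerns.

```python
import math
from collections import Counter

def train_bigram(sentences):
    """Count unigrams and bigrams over tokenized sentences (lists of words)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def surprisal(prev, word, unigrams, bigrams, vocab_size, alpha=1.0):
    """-log2 P(word | prev) with add-alpha smoothing (an illustrative choice)."""
    numerator = bigrams[(prev, word)] + alpha
    denominator = unigrams[prev] + alpha * vocab_size
    return -math.log2(numerator / denominator)

def per_sentence_entropy(discourse, unigrams, bigrams):
    """Mean per-word surprisal of each sentence, in discourse order."""
    vocab_size = len(unigrams)
    means = []
    for sent in discourse:
        tokens = ["<s>"] + sent
        costs = [surprisal(p, w, unigrams, bigrams, vocab_size)
                 for p, w in zip(tokens, tokens[1:])]
        means.append(sum(costs) / len(costs))
    return means

if __name__ == "__main__":
    # A toy discourse: sentences in the order a speaker produced them.
    discourse = [
        "the speaker planned the next sentence".split(),
        "the listener predicted the next word".split(),
        "the speaker then repeated the word".split(),
    ]
    unigrams, bigrams = train_bigram(discourse)
    for i, h in enumerate(per_sentence_entropy(discourse, unigrams, bigrams), start=1):
        print(f"sentence {i}: mean per-word surprisal {h:.2f} bits")
```

Genzel and Charniak's (2002) argument is that out-of-context estimates of this kind should rise with a sentence's position in the discourse if the true, context-conditioned per-word entropy is being held constant, because later sentences lean more heavily on discourse context that a local model cannot see.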
78.
Previous studies have shown that the Spatial–Musical Association of Response Codes (SMARC) effect depends on various features, such as task conditions (whether pitch height is implicit or explicit), response dimension (horizontal vs. vertical), the presence or absence of a reference tone, and the participants' prior musical training. In the present study, we investigated the effects of pitch range and timbre: in particular, how timbre (piano vs. vocal) contributes to the horizontal and vertical SMARC effect in nonmusicians under varied pitch-range conditions. Nonmusicians performed a timbre judgement task in which the pitch range was either small (6 or 8 semitone steps) or large (9 or 12 semitone steps) in a horizontal and a vertical response setting. For piano sounds, SMARC effects were observed in all conditions. For vocal sounds, in contrast, SMARC effects depended on pitch range. We concluded that the occurrence of the SMARC effect, especially in horizontal response settings, depends on the interaction of timbre (vocal vs. piano) and pitch range when vocal and instrumental sounds are combined in one experiment: the human voice enhances attention to both the vocal and the instrumental sounds.
79.
We tested an embodied account of language proposing that comprehenders create perceptual simulations of the events they hear and read about. In Experiment 1, children (ages 7–13 years) performed a picture verification task. Each picture was preceded by a prerecorded spoken sentence describing an entity whose shape or orientation matched or mismatched the depicted object. Responses were faster for matching pictures, suggesting that participants had formed perceptual-like situation models of the sentences. The advantage for matching pictures did not increase with age. Experiment 2 extended these findings to the domain of written language. Participants (ages 7–10 years) of high and low word reading ability verified pictures after reading sentences aloud. The results suggest that even when reading is effortful, children construct a perceptual simulation of the described events. We propose that perceptual simulation plays a more central role in developing language comprehension than was previously thought.
80.
We assessed the relationship between brain structure and function in 10 individuals with specific language impairment (SLI), compared to six unaffected siblings and 16 unrelated control participants with typical language. Voxel-based morphometry indicated that grey matter in the SLI group, relative to controls, was increased in the left inferior frontal cortex and decreased in the right caudate nucleus and in the superior temporal cortex bilaterally. The unaffected siblings also showed reduced grey matter in the caudate nucleus relative to controls. In an auditory covert naming task, the SLI group showed reduced activation in the left inferior frontal cortex, the right putamen, and the superior temporal cortex bilaterally. Despite spatially coincident structural and functional abnormalities in frontal and temporal areas, the relationships between structure and function in these regions differed. These findings suggest multiple structural and functional abnormalities in SLI that are differentially associated with receptive and expressive language processing.