261.
Using traditional face perception paradigms, the current study explores unfamiliar face processing in two neurodevelopmental disorders. Previous research indicates that autism and Williams syndrome (WS) are both associated with atypical face processing strategies. The current research involves these groups in an exploration of feature salience for processing the eye and mouth regions of unfamiliar faces. The tasks specifically probe unfamiliar face matching by using (a) upper or lower face features, (b) the Thatcher illusion, and (c) featural and configural face modifications to the eye and mouth regions. Across tasks, individuals with WS mirror the typical pattern of performance, with greater accuracy for matching faces using the upper than the lower features, susceptibility to the Thatcher illusion, and greater detection of eye than mouth modifications. Participants with autism show a generalized performance decrement alongside specific atypicalities: deficits in using the eye region and configural face cues to match unfamiliar faces. The results are discussed in terms of feature salience, structural encoding, and the phenotypes typically associated with these neurodevelopmental disorders.
262.
In English, new information typically appears late in the sentence, as does primary accent. Because of this tendency, perceivers might expect the final constituent or constituents of a sentence to contain informational focus. This expectation should in turn affect how they comprehend focus-sensitive constructions such as ellipsis sentences. Results from four experiments on sluicing sentences (e.g., The mobster implicated the thug, but we can't find out who else) suggest that perceivers do prefer to place focus late in the sentence, though that preference can be mitigated by prosodic information (pitch accents, Experiment 2) or syntactic information (clefted sentences, Experiment 3) indicating that focus is located elsewhere. Furthermore, it is not necessarily the direct object, but the informationally focused constituent that is the preferred antecedent (Experiment 4). Expectations regarding the information structure of a sentence, which are only partly cancellable by means of overt focus markers, may explain persistent biases in ellipsis resolution.
263.
264.
Most theories of reference assume that a referent's saliency in the linguistic context determines the choice of referring expression. However, it is less clear whether cognitive factors relating to the nonlinguistic context also have an effect. We investigated whether visual context influences the choice of a pronoun over a repeated noun phrase when speakers refer back to a referent in a preceding sentence. In Experiment 1, linguistic mention as well as visual presence of a competitor with the same gender as the referent resulted in fewer pronouns for the referent, suggesting that both linguistic and visual context determined the choice of referring expression. Experiment 2 showed that even when the competitor had a different gender from the referent, its visual presence reduced pronoun use, indicating that visual context plays a role even if the use of a pronoun is unambiguous. Thus, both linguistic and nonlinguistic information affect the choice of referring expression.
265.
In four experiments, a total of 384 undergraduates incidentally learned a list of 24 nouns twice, either in the same context (same-context repetition) or in different contexts (different-context repetition). Free recall was measured in a neutral context. Experiments 1, 2, and 3 used a context repetition (same- or different-context repetition) × inter-study and retention interval (10 min or 1 day) between-participants design. Context was manipulated by the combination of place, social environment, and encoding task (Experiment 1), place and social environment (Experiment 2), or place alone (Experiment 3). Experiment 4 used a context repetition × type of context (context manipulated by place alone or by place, social environment, and encoding task) between-participants design, with a 10-min inter-study interval and a one-day retention interval. The present results indicate that the determinant of the superiority of same- or different-context repetition in recall is the type of context. Implications of the results are discussed.
266.
Two experiments investigated the role that different face regions play in a variety of social judgements that are commonly made from facial appearance (sex, age, distinctiveness, attractiveness, approachability, trustworthiness, and intelligence). These judgements lie along a continuum from those with a clear physical basis and high consequent accuracy (sex, age) to judgements that can achieve a degree of consensus between observers despite having little known validity (intelligence, trustworthiness). Results from Experiment 1 indicated that the face's internal features (eyes, nose, and mouth) provide information that is more useful for social inferences than the external features (hair, face shape, ears, and chin), especially when judging traits such as approachability and trustworthiness. Experiment 2 investigated how judgement agreement was affected when the upper head, eye, nose, or mouth regions were presented in isolation or when these regions were obscured. A different pattern of results emerged for different characteristics, indicating that different types of facial information are used in the various judgements. Moreover, the informativeness of a particular region/feature depends on whether it is presented alone or in the context of the whole face. These findings provide evidence for the importance of holistic processing in making social attributions from facial appearance.
267.
Although linguistic traditions of the last century assumed that there is no link between sound and meaning (i.e., arbitrariness), recent research has established a nonarbitrary relation between sound and meaning (i.e., sound symbolism). For example, some sounds (e.g., /u/ as in took) suggest bigness whereas others (e.g., /i/ as in tiny) suggest smallness. We tested whether sound symbolism only marks contrasts (e.g., small versus big things) or whether it marks object properties in a graded manner (e.g., small, medium, and large things). In two experiments, participants viewed novel objects (i.e., greebles) of varying size and chose the most appropriate name for each object from a list of visually or auditorily presented nonwords that varied incrementally in the number of "large" and "small" phonemes. For instance, "wodolo" contains all large-sounding phonemes, whereas "kitete" contains all small-sounding phonemes. Participants' choices revealed a graded relationship between sound and size: The size of the object linearly predicted the number of large-sounding phonemes in its preferred name. That is, small, medium, and large objects elicited names with increasing numbers of large-sounding phonemes. The results are discussed in relation to cross-modal processing, gesture, and vocal pitch.
268.
The current study investigated the effects of phonologically related context pictures on the naming latencies of target words in Japanese and Chinese. Reading bare words in alphabetic languages has been shown to be rather immune to effects of context stimuli, even when these stimuli are presented in advance of the target word (e.g., Glaser & Düngelhoff, 1984; Roelofs, 2003). Recently, however, semantic context effects of distractor pictures on the naming latencies of Japanese kanji (but not Chinese hànzì) words have been observed (Verdonschot, La Heij, & Schiller, 2010). In the present study, we further investigated this issue using phonologically related (i.e., homophonic) context pictures when naming target words in either Chinese or Japanese. We found that pronouncing bare nouns in Japanese is sensitive to phonologically related context pictures, whereas this is not the case in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
269.
Autism spectrum disorder (ASD) and typically developed (TD) adult participants viewed pairs of scenes for a simple "spot the difference" (STD) task and a complex "which one's weird" (WOW) task. There were no group differences in the STD task. In the WOW task, the ASD group took longer to respond manually and to begin fixating the target "weird" region. Additionally, as indexed by the duration of the first fixation on the target region, the ASD group failed to "pick up" immediately on what was "weird". The findings are discussed with reference to the complex information processing theory of ASD (Minshew & Goldstein, 1998).
270.
Imagining a counterfactual world using conditionals (e.g., If Joanne had remembered her umbrella . . .) is common in everyday language. However, such utterances are likely to involve fairly complex reasoning processes to represent both the explicit hypothetical conjecture and its implied factual meaning. Online research into these mechanisms has so far been limited. The present paper describes two eye movement studies that investigated the time-course with which comprehenders can set up and access factual inferences based on a realistic counterfactual context. Adult participants were eye-tracked while they read short narratives, in which a context sentence set up a counterfactual world (If . . . then . . .), and a subsequent critical sentence described an event that was either consistent or inconsistent with the implied factual world. A factual consistent condition (Because . . . then . . .) was included as a baseline of normal contextual integration. Results showed that within a counterfactual scenario, readers quickly inferred the implied factual meaning of the discourse. However, initial processing of the critical word led to clear, but distinct, anomaly detection responses for both contextually inconsistent and consistent conditions. These results provide evidence that readers can rapidly make a factual inference from a preceding counterfactual context, despite maintaining access to both counterfactual and factual interpretations of events.