  Subscription full text   1,411 articles
  Free access   99 articles
  Free access (domestic)   186 articles
  2023   23 articles
  2022   32 articles
  2021   41 articles
  2020   79 articles
  2019   98 articles
  2018   113 articles
  2017   89 articles
  2016   81 articles
  2015   51 articles
  2014   62 articles
  2013   318 articles
  2012   41 articles
  2011   73 articles
  2010   48 articles
  2009   77 articles
  2008   56 articles
  2007   67 articles
  2006   53 articles
  2005   47 articles
  2004   42 articles
  2003   32 articles
  2002   35 articles
  2001   17 articles
  2000   19 articles
  1999   16 articles
  1998   13 articles
  1997   13 articles
  1996   13 articles
  1995   6 articles
  1994   7 articles
  1993   5 articles
  1992   6 articles
  1991   2 articles
  1990   5 articles
  1989   1 article
  1987   1 article
  1986   3 articles
  1985   4 articles
  1984   2 articles
  1983   3 articles
  1978   1 article
  1977   1 article
Sort order: 1,696 results in total; search took 15 ms
281.
To test the hypothesis that native language (L1) phonology can affect the lexical representations of nonnative words, a visual semantic-relatedness decision task in English was given to native speakers and nonnative speakers whose L1 was Japanese or Arabic. In the critical conditions, the word pair contained a homophone or near-homophone of a semantically associated word, where a near-homophone was defined as a phonological neighbor involving a contrast absent in the speaker’s L1 (e.g., ROCK-LOCK for native speakers of Japanese). In all participant groups, homophones elicited more false positive errors and slower processing than spelling controls. In the Japanese and Arabic groups, near-homophones also induced relatively more false positives and slower processing. The results show that, even when auditory perception is not involved, recognition of nonnative words and, by implication, their lexical representations are affected by the L1 phonology.
282.
The working memory model for Ease of Language Understanding (ELU) proposes that language understanding under taxing conditions is related to explicit cognitive capacity. We refer to this as the mismatch hypothesis, since phonological representations based on the processing of speech under established conditions may not be accessed so readily when input conditions change and a match becomes problematic. Then, cognitive capacity requirements may differ from those used for processing speech hitherto. In the present study, we tested this hypothesis by investigating the relationship between aided speech recognition in noise and cognitive capacity in experienced hearing aid users when there was either a match or mismatch between processed speech input and established phonological representations. The settings in the existing digital hearing aids of the participants were adjusted to one of two different compression settings which processed the speech signal in qualitatively different ways ("fast" or "slow"). Testing took place after a 9-week period of experience with the new setting. Speech recognition was tested under different noise conditions and with match or mismatch (i.e. alternative compression setting) manipulations of the input signal. Individual cognitive capacity was measured using a reading span test and a letter monitoring test. Reading span, a reliable measure of explicit cognitive capacity, predicted speech recognition performance under mismatch conditions when processed input was incongruent with recently established phonological representations, due to the specific hearing aid setting. Cognitive measures were not main predictors of performance under match conditions. These findings are in line with the ELU model.
283.
The current study examined the abilities of children (6 and 8 years of age) and adults to freely categorize and label dynamic bodily/facial expressions designed to portray happiness, pleasure, anger, irritation, and neutrality and controlled for their level of valence, arousal, intensity, and authenticity. Multidimensional scaling and cluster analyses showed that children (n = 52) and adults (n = 33) structured expressions in systematic and broadly similar ways. Between 6 and 8 years of age, there was a quantitative, but not a qualitative, improvement in labeling. When exposed to rich and dynamic emotional cues, children as young as 6 years can successfully perceive differences between close expressions (e.g., happiness, pleasure), and can categorize them with clear boundaries between them, with the exception of irritation, which had fuzzier borders. Children’s classifications were not reliant on lexical semantic abilities and were consistent with a model of emotion categories based on their degree of valence and arousal.
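As a purely illustrative sketch (not code or data from the study above), the analysis pipeline the abstract names, multidimensional scaling plus cluster analysis over free-categorization data, could look roughly as follows; the expression labels, the toy dissimilarity matrix, and all parameter choices are assumptions made for the example.

```python
# Illustrative sketch only: a toy expression-by-expression dissimilarity
# matrix standing in for free-categorization data (values are invented).
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

labels = ["happiness", "pleasure", "anger", "irritation", "neutral"]
dissim = np.array([
    [0.0, 0.2, 0.9, 0.8, 0.6],
    [0.2, 0.0, 0.8, 0.7, 0.5],
    [0.9, 0.8, 0.0, 0.3, 0.6],
    [0.8, 0.7, 0.3, 0.0, 0.5],
    [0.6, 0.5, 0.6, 0.5, 0.0],
])

# Metric MDS on the precomputed dissimilarities -> a 2-D configuration
# (axes often interpreted in terms of valence and arousal).
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Agglomerative clustering on the same dissimilarities (condensed form).
condensed = dissim[np.triu_indices(len(labels), k=1)]
clusters = fcluster(linkage(condensed, method="average"),
                    t=2, criterion="maxclust")

for name, xy, c in zip(labels, coords, clusters):
    print(f"{name:<11} cluster={c} position={np.round(xy, 2)}")
```

With the invented values above, happiness/pleasure and anger/irritation fall into separate clusters, which is the kind of structure the scaling-plus-clustering approach is meant to reveal.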
284.
This study investigated whether semantic information presented along with novel printed nonwords facilitates orthographic learning and examined predictors of individual differences in this important literacy skill. A sample of 35 fourth graders was tested on a variety of language and literacy tests, and participants were then exposed to 10 target nonwords, 5 of which were presented with semantic information. Children were tested 1 and 4 days later on their ability to correctly recognize and spell the target nonwords. Results revealed a significant main effect on the recognition task, where items presented with semantic information were identified correctly more often than were words presented in isolation. No significant effect of training condition was found for the spelling posttests. Furthermore, multiple regression analyses revealed that both phonological and semantic factors were significant predictors of orthographic learning. The findings support the view that orthographic learning, as measured through visual recognition, involves the integration of phonological, orthographic, and semantic representations.
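As an illustrative sketch only (not the authors' analysis or data), a multiple regression of an orthographic-learning score on phonological and semantic predictors could be set up as below; the variable names, the simulated data, and the effect sizes are assumptions made for the example.

```python
# Illustrative sketch only: regressing a simulated orthographic-learning
# score on phonological and semantic predictor variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 35  # sample size matching the abstract's fourth-grade sample
df = pd.DataFrame({
    "phonological": rng.normal(size=n),
    "semantic": rng.normal(size=n),
})
# Simulated outcome in which both predictors contribute.
df["orthographic_learning"] = (0.5 * df["phonological"]
                               + 0.4 * df["semantic"]
                               + rng.normal(scale=0.5, size=n))

model = smf.ols("orthographic_learning ~ phonological + semantic",
                data=df).fit()
print(model.summary())  # coefficients and p-values for each predictor
```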
285.
Despite a large literature on infants’ memory for visually presented stimuli, the processes underlying visual memory are not well understood. Two studies with 4-month-olds (N = 60) examined the effects of providing opportunities for comparison of items on infants’ memory for those items. Experiment 1 revealed that 4-month-olds failed to show evidence of memory for an item presented during familiarization in a standard task (i.e., when only one item was presented during familiarization). In Experiment 2, infants showed robust memory for one of two different items presented during familiarization. Thus, infants’ memory for the distinctive features of individual items was enhanced when they could compare items.
286.
Aggressive responding following benzodiazepine ingestion has been recorded in both experimental and client populations; however, the mechanism responsible for this outcome is unclear. The goal of this study was to identify an affective concomitant linked to diazepam‐induced aggression that might be responsible for this relationship. Thirty males (15 diazepam and 15 placebo) participated in the Taylor Aggression Paradigm while covertly being videotaped. The videotapes were analyzed using the Facial Action Coding System with the goal of identifying facial expression differences between the two groups. Relative to placebo participants, diazepam participants selected significantly higher shock settings for their opponents, consistent with past findings using this paradigm. Diazepam participants also engaged in significantly fewer appeasement expressions (associated with the self‐conscious emotions) during the task, although there were no group differences for other emotion expressions or for movements in general. Aggr. Behav. 35:203–212, 2009. © 2008 Wiley‐Liss, Inc.
287.
Visual scanpath recording was used to investigate the information processing strategies used by a prosopagnosic patient, SC, when viewing faces. Compared to controls, SC showed an aberrant pattern of scanning, directing attention away from the internal configuration of facial features (eyes, nose) towards peripheral regions (hair, forehead) of the face. The results suggest that SC's face recognition deficit can be linked to an inability to assemble an accurate and unified face percept due to an abnormal allocation of attention away from the internal face region. Extraction of stimulus attributes necessary for face identity recognition is compromised by an aberrant face scanning pattern.
288.
Recent research suggests that emotion effects in word processing resemble those in other stimulus domains such as pictures or faces. The present study aims to provide more direct evidence for this notion by comparing emotion effects in word and face processing in a within-subject design. Event-related brain potentials (ERPs) were recorded as participants made decisions on the lexicality of emotionally positive, negative, and neutral German verbs or pseudowords, and on the integrity of intact happy, angry, and neutral faces or slightly distorted faces. Relative to neutral and negative stimuli both positive verbs and happy faces elicited posterior ERP negativities that were indistinguishable in scalp distribution and resembled the early posterior negativities reported by others. Importantly, these ERP modulations appeared at very different latencies. Therefore, it appears that similar brain systems reflect the decoding of both biological and symbolic emotional signals of positive valence, differing mainly in the speed of meaning access, which is more direct and faster for facial expressions than for words.
289.
Expert chess players, specialized in different openings, recalled positions and solved problems within and outside their area of specialization. While their general expertise was at a similar level, players performed better with stimuli from their area of specialization. The effect of specialization on both recall and problem solving was strong enough to override general expertise—players remembering positions and solving problems from their area of specialization performed at around the level of players 1 standard deviation (SD) above them in general skill. Their problem-solving strategy also changed depending on whether the problem was within their area of specialization. When it was, they searched more in depth and less in breadth; with problems outside their area of specialization, the reverse. The knowledge that comes from familiarity with a problem area is more important than general purpose strategies in determining how an expert will tackle it. These results demonstrate the link in experts between problem solving and memory of specific experiences and indicate that the search for context-independent general purpose problem-solving strategies to teach to future experts is unlikely to be successful.
290.
Limb apraxia is a neurological disorder of higher cognitive function characterized by an inability to perform purposeful skilled movements that is not attributable to an elementary sensorimotor dysfunction or comprehension difficulty. Corticobasal Syndrome (CBS) is an akinetic rigid syndrome with asymmetric onset and progression with at least one basal ganglia feature (rigidity, limb dystonia or myoclonus) and one cortical feature (limb apraxia, alien hand syndrome or cortical sensory loss). Even though limb apraxia is highly prevalent in CBS (70–80%), very few studies have examined the performance of CBS patients on praxis measures in detail. This review aims to (1) briefly summarize the clinical, neuroanatomical and pathological findings in CBS, (2) briefly outline what limb apraxia is and how it is assessed, (3) comprehensively review the literature on limb apraxia in CBS to date, and (4) briefly summarize the literature on other forms of apraxia, such as limb-kinetic apraxia and buccofacial apraxia. Overall, the goal of the review is to bring a model-based perspective to the findings available in the literature to date on limb apraxia in CBS.