31.
Previous studies have found that subjects diagnosed with verbal auditory agnosia (VAA) from bilateral brain lesions may experience difficulties at the prephonemic level of acoustic processing. In this case study, we administered a series of speech and nonspeech discrimination tests to an individual with unilateral VAA resulting from left-temporal-lobe damage. The results indicated that the subject's ability to perceive steady-state acoustic stimuli was relatively intact, but his ability to perceive dynamic stimuli was drastically reduced. We conclude that this particular aspect of acoustic processing may be a major contributing factor in the impaired speech perception of subjects with unilateral VAA.
32.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat'+point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.
33.
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a similar manner as natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.
34.
The origin and functions of the hand and arm gestures that accompany speech production are poorly understood. It has been proposed that gestures facilitate lexical retrieval, but little is known about when retrieval is accompanied by gestural activity and how this activity is related to the semantics of the word to be retrieved. Electromyographic (EMG) activity of the dominant forearm was recorded during a retrieval task in which participants tried to identify target words from their definitions. EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and other conceptual attributes of the target words was examined. EMG was positively related to a word's judged spatiality, concreteness, drawability, and manipulability. The implications of these findings for theories of the relation between speech production and gesture are discussed.
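To make the analysis concrete, here is a minimal sketch of the kind of comparison the abstract describes: mean forearm-EMG amplitudes for concrete versus abstract target words, tested with an independent-samples t-test, plus a correlation between amplitude and a rated word attribute. All data values, sample sizes, and variable names below are hypothetical; this is not the study's actual data or analysis code.

```python
# Hypothetical sketch of the concrete-vs-abstract EMG comparison.
# Values are simulated; only the shape of the analysis is illustrated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated mean forearm-EMG amplitudes (arbitrary units), one value
# per retrieval trial, for the two word classes.
emg_concrete = rng.normal(loc=5.2, scale=1.0, size=30)  # e.g. "hammer"
emg_abstract = rng.normal(loc=4.5, scale=1.0, size=30)  # e.g. "justice"

# Independent-samples t-test: are amplitudes greater for concrete words?
t, p = stats.ttest_ind(emg_concrete, emg_abstract)
print(f"concrete mean = {emg_concrete.mean():.2f}, "
      f"abstract mean = {emg_abstract.mean():.2f}, t = {t:.2f}, p = {p:.4f}")

# The abstract also relates EMG amplitude to rated word attributes
# (spatiality, concreteness, drawability, manipulability); with per-word
# ratings in hand that is a simple Pearson correlation.
ratings = rng.uniform(1, 7, size=30)            # hypothetical 1-7 ratings
r, p_r = stats.pearsonr(ratings, emg_concrete)  # illustrative pairing
print(f"r = {r:.2f}, p = {p_r:.4f}")
```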
35.
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input.
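As an illustration of the ASR-style machinery such a model can draw on (not the authors' model, whose internals the abstract does not specify), the sketch below implements one classic ASR technique: isolated-word recognition by dynamic time warping (DTW) over acoustic feature sequences. The random "features" stand in for real acoustic measurements such as MFCCs, and the toy lexicon and all names are hypothetical.

```python
# Minimal DTW-based isolated-word recognizer, for illustration only.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW alignment cost between two (frames x dims) feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                 cost[i, j - 1],      # skip a frame of b
                                 cost[i - 1, j - 1])  # align the two frames
    return float(cost[n, m])

def recognize(utterance: np.ndarray, templates: dict) -> str:
    """Return the template word whose stored features align best."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

rng = np.random.default_rng(1)
# Hypothetical stored templates: one feature sequence (frames x 13 dims,
# standing in for MFCCs) per word in a toy three-word lexicon.
lexicon = {w: rng.normal(size=(int(rng.integers(20, 40)), 13))
           for w in ("cat", "cap", "cab")}
# A new token of "cat": the stored template plus a little acoustic noise.
token = lexicon["cat"] + rng.normal(scale=0.1, size=lexicon["cat"].shape)
print(recognize(token, lexicon))  # -> cat
```

Real systems replace templates with statistical models (HMMs or neural networks), but the computational-level task is the same: map a variable-length acoustic sequence onto the best-matching lexical entry.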
36.
37.
Kim J, Davis C, Krins P. Cognition, 2004, 93(1): B39-B47.
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. Visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in naming, written lexical decision, and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical-level processes. The finding of priming in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.
38.
The main objectives of the present study were to examine the effects of meaningful irrelevant speech and road traffic noise on episodic and semantic memory, and to evaluate whether gender differences in memory performance interact with noise. A total of 96 subjects aged 13-14 years (16 boys and 16 girls in each of three groups) were randomly assigned to a silent condition or one of two noise conditions. Noise effects were restricted to impairments from meaningful irrelevant speech on recognition and cued recall of a text in episodic memory and on word comprehension in semantic memory. This noise effect suggests that the pupils processed the meaning of the speech semantically, which reduced their ability to comprehend a text that also required processing of meaning. Meaningful irrelevant speech was also assumed to impair access to the knowledge base in semantic memory. Girls outperformed boys on the episodic and semantic memory materials, but these differences did not interact with noise.
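A minimal sketch of the analysis this design implies (three listening conditions crossed with gender, 16 pupils per cell): a two-way ANOVA testing the main effects of noise condition and gender and their interaction, which the study reports as absent. The scores below are simulated and the effect sizes invented; this is not the study's data or analysis code.

```python
# Hypothetical noise-condition x gender ANOVA on simulated recall scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
rows = []
for cond in ("silence", "traffic", "speech"):
    for gender in ("girl", "boy"):
        # Invented effects: speech lowers recall, girls score higher.
        base = 20 - 4 * (cond == "speech") + 2 * (gender == "girl")
        for _ in range(16):  # 16 pupils per cell
            rows.append({"condition": cond, "gender": gender,
                         "cued_recall": base + rng.normal(scale=3)})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects plus the condition x gender interaction.
model = smf.ols("cued_recall ~ C(condition) * C(gender)", data=df).fit()
print(anova_lm(model, typ=2))
```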
39.
Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
40.
To explore why noise has reliable effects on delayed recall in a certain text-reading task, this episodic memory task was employed with other memory tests in a study of road traffic noise and meaningful but irrelevant speech. Context-dependent memory was tested and self-reports of affect were taken. Participants were 96 high school students. The results showed that both road traffic noise and meaningful irrelevant speech impaired recall of the text. Retrieval in noise from semantic memory was also impaired. Attention was impaired by both noise sources, but attention did not mediate the noise effects on episodic memory. Recognition was not affected by noise. Context-dependent memory was not shown. The lack of mediation by attention, and road traffic noise being as harmful as meaningful irrelevant speech, are discussed in relation to where in the input/storing/output sequence noise has its effect and what the distinctive feature of the disturbing noise is.