  Subscription full text   455 articles
  Free   41 articles
  Free (domestic)   51 articles
  2024   1 article
  2023   24 articles
  2022   4 articles
  2021   13 articles
  2020   24 articles
  2019   27 articles
  2018   21 articles
  2017   32 articles
  2016   22 articles
  2015   19 articles
  2014   14 articles
  2013   40 articles
  2012   10 articles
  2011   20 articles
  2010   11 articles
  2009   21 articles
  2008   23 articles
  2007   24 articles
  2006   22 articles
  2005   19 articles
  2004   23 articles
  2003   24 articles
  2002   19 articles
  2001   18 articles
  2000   4 articles
  1999   5 articles
  1998   7 articles
  1997   2 articles
  1996   7 articles
  1995   1 article
  1994   2 articles
  1993   2 articles
  1992   7 articles
  1991   3 articles
  1990   4 articles
  1989   7 articles
  1988   6 articles
  1987   1 article
  1986   1 article
  1985   2 articles
  1984   2 articles
  1983   2 articles
  1981   1 article
  1979   1 article
  1978   1 article
  1976   2 articles
  1975   2 articles
Sort order: 547 results found (search time: 15 ms)
21.
Despite the lack of invariance problem (the many-to-many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side-stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real-world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to provide direct insights into mechanisms that may support HSR. In this brief article, we report preliminary results from a two-layer network that borrows one element from ASR, long short-term memory nodes, which provide dynamic memory for a range of temporal spans. This allows the model to learn to map real speech from multiple talkers to semantic targets with high accuracy, with a human-like time course of lexical access and phonological competition. Internal representations emerge that resemble phonetically organized responses in human superior temporal gyrus, suggesting that the model develops a distributed phonological code despite receiving no explicit training on phonetic or phonemic targets. The ability to work with real speech is a major advance for cognitive models of HSR.
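Below is a minimal sketch, in PyTorch, of the kind of architecture this abstract describes: a small two-layer LSTM that maps acoustic feature frames to a fixed-dimensional semantic target. The feature dimensionality, layer sizes, and semantic-vector size are illustrative assumptions, not the configuration reported by the authors.

```python
# Minimal sketch (not the authors' model): a two-layer LSTM mapping acoustic
# frames to a distributed semantic target. All sizes are illustrative.
import torch
import torch.nn as nn

class SpeechToSemantics(nn.Module):
    def __init__(self, n_features=80, hidden=256, semantic_dim=300):
        super().__init__()
        # Stacked LSTM layers provide dynamic memory over a range of temporal spans.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        # Linear readout to a semantic vector (e.g., a word embedding) at every frame.
        self.readout = nn.Linear(hidden, semantic_dim)

    def forward(self, frames):            # frames: (batch, time, n_features)
        states, _ = self.lstm(frames)     # hidden state for every input frame
        return self.readout(states)       # per-frame semantic prediction

model = SpeechToSemantics()
dummy = torch.randn(4, 200, 80)           # 4 utterances, 200 frames, 80 spectral bands
print(model(dummy).shape)                  # torch.Size([4, 200, 300])
```

Because the readout is produced at every frame, the output can be compared with semantic targets over time, which is what allows a time course of lexical activation and competition to be examined.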
22.
The present study addresses the question of how practice in expressing the content to be conveyed in a specific situation influences speech production planning processes. A comparison of Japanese slips of the tongue collected from spontaneous everyday conversation with those collected from largely preplanned conversation in live-broadcast TV programs reveals that, although some aspects of speech production planning are unaffected by practice, there are various practice effects, most of which can be explained in terms of automatization of content processing, resulting in shifts in the loci of errors.
23.
This study assessed intelligibility in a dysarthric patient with Parkinson's disease (PD) across five speech production tasks: spontaneous speech, repetition, reading, repeated singing, and spontaneous singing, using the same phrases for all but spontaneous singing. The results show that this speaker was significantly less intelligible when speaking spontaneously than in the other tasks. Acoustic analysis suggested that relative intensity and word duration were not independently linked to intelligibility, but dysfluencies (from perceptual analysis) and articulatory/resonance patterns (from acoustic records) were related to intelligibility in predictable ways. These data indicate that speech production task may be an important variable to consider during the evaluation of dysarthria. As speech production efficiency was found to vary with task in a patient with Parkinson's disease, these results can be related to recent models of basal ganglia function in motor performance.
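As a rough illustration of two of the acoustic measures mentioned (relative intensity and phrase duration), the sketch below computes a mean level relative to the peak and a silence-trimmed duration from a recording. The librosa-based pipeline, file name, and silence threshold are assumptions for illustration, not the study's actual measurement procedure.

```python
# Illustrative only: relative intensity and duration of a recorded phrase.
# The file name and threshold are hypothetical; this is not the study's pipeline.
import librosa
import numpy as np

def intensity_and_duration(path, top_db=30):
    audio, sr = librosa.load(path, sr=None)
    rms = librosa.feature.rms(y=audio)[0]                     # frame-level RMS energy
    rel_intensity_db = 20 * np.log10(rms.mean() / rms.max())  # mean level relative to peak
    trimmed, _ = librosa.effects.trim(audio, top_db=top_db)   # drop leading/trailing silence
    return rel_intensity_db, len(trimmed) / sr

# Hypothetical usage:
# print(intensity_and_duration("spontaneous_phrase.wav"))
```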
24.
This study focused on the potential influence of task-related factors on oral motor performance in patients with speech disorders. Sentence production was compared with a nonspeech oral motor task, oral diadochokinesis (DDK). Perceptual and acoustic measures of speech impairment were used as dependent variables. Between-task comparisons were made for subsamples of a population of 140 patients with different motor speech syndromes, including apraxia of speech and cerebellar dysarthria. In a further analysis, subgroups were matched for speaking rate. Overall, dysdiadochokinesis was correlated with the degree of speech impairment, but there was a strong interaction between task type and motor speech syndrome. In particular, cerebellar pathology affected DDK to a relatively greater extent than sentence production, while apraxic pathology spared the ability to repeat syllables at maximum speed.
25.
Previous studies have found that subjects diagnosed with verbal auditory agnosia (VAA) from bilateral brain lesions may experience difficulties at the prephonemic level of acoustic processing. In this case study, we administered a series of speech and nonspeech discrimination tests to an individual with unilateral VAA resulting from left-temporal-lobe damage. The results indicated that the subject's ability to perceive steady-state acoustic stimuli was relatively intact, but his ability to perceive dynamic stimuli was drastically reduced. We conclude that this particular aspect of acoustic processing may be a major contributing factor that impairs speech perception in subjects with unilateral VAA.
26.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat' + point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.
27.
In face-to-face conversation, speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine-wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of the auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli much as they would natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.
28.
The origin and functions of the hand and arm gestures that accompany speech production are poorly understood. It has been proposed that gestures facilitate lexical retrieval, but little is known about when retrieval is accompanied by gestural activity and how this activity is related to the semantics of the word to be retrieved. Electromyographic (EMG) activity of the dominant forearm was recorded during a retrieval task in which participants tried to identify target words from their definitions. EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and other conceptual attributes of the target words was examined. EMG was positively related to a word’s judged spatiality, concreteness, drawability, and manipulability. The implications of these findings for theories of the relation between speech production and gesture are discussed. This experiment was done by the first author under the supervision of the second author in partial completion of the Ph.D. degree at Columbia University. We gratefully acknowledge the advice and comments of Lois Putnam, Robert Remez, James Magnuson, Michele Miozzo, and Robert B. Tallarico, and the assistance of Stephen Krieger, Lauren Walsh, Jennifer Kim, and Jillian White.
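A sketch of the general kind of analysis described: per-trial mean rectified forearm EMG amplitude, compared between concrete-word and abstract-word retrieval trials. The band-pass settings, sampling rate, synthetic stand-in data, and the independent-samples t-test are assumptions for illustration, not the study's exact preprocessing or statistics.

```python
# Illustrative analysis sketch, not the study's pipeline: compare mean rectified
# EMG amplitude between concrete-word and abstract-word retrieval trials.
import numpy as np
from scipy import signal, stats

def emg_amplitude(raw, fs=1000):
    """Band-pass filter, rectify, and average one trial of forearm EMG."""
    b, a = signal.butter(4, [20, 450], btype="bandpass", fs=fs)
    return np.mean(np.abs(signal.filtfilt(b, a, raw)))

# Synthetic stand-in data; real trials would be recorded EMG traces per target word.
rng = np.random.default_rng(0)
concrete = [emg_amplitude(rng.normal(size=2000)) for _ in range(30)]
abstract = [emg_amplitude(rng.normal(size=2000)) for _ in range(30)]
t, p = stats.ttest_ind(concrete, abstract)
print(f"t = {t:.2f}, p = {p:.3f}")
```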
29.
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, the emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input.
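To make the HSR/ASR connection concrete, here is a toy sketch of recognizing a word from real speech with standard ASR machinery: MFCC features compared against stored word templates by dynamic time warping. This is not the model presented in the article; it only illustrates recognition from real speech input with off-the-shelf techniques, and the file paths and template dictionary are hypothetical placeholders.

```python
# Toy illustration only (not the article's model): recognize a spoken word by
# comparing MFCC features against per-word templates with dynamic time warping.
import librosa
import numpy as np

def mfcc_features(path, sr=16000):
    audio, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)

def recognize(test_path, templates):
    """Return the template word whose MFCC sequence aligns best with the input."""
    test = mfcc_features(test_path)
    costs = {}
    for word, template_path in templates.items():
        D, _ = librosa.sequence.dtw(X=test, Y=mfcc_features(template_path))
        costs[word] = D[-1, -1]               # total alignment cost
    return min(costs, key=costs.get)

# Hypothetical usage, with one recorded template per candidate word:
# templates = {"cat": "templates/cat.wav", "dog": "templates/dog.wav"}
# print(recognize("test_utterance.wav", templates))
```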