Paid full text: 451 articles
Free: 41 articles
Free (domestic): 51 articles
Total: 543 articles
Articles by year:
2024: 1
2023: 24
2022: 4
2021: 13
2020: 25
2019: 27
2018: 21
2017: 31
2016: 24
2015: 20
2014: 14
2013: 39
2012: 10
2011: 20
2010: 11
2009: 20
2008: 23
2007: 22
2006: 21
2005: 18
2004: 23
2003: 23
2002: 19
2001: 18
2000: 4
1999: 5
1998: 7
1997: 2
1996: 7
1995: 1
1994: 2
1993: 2
1992: 7
1991: 3
1990: 4
1989: 7
1988: 6
1987: 1
1986: 1
1985: 2
1984: 2
1983: 2
1981: 1
1979: 1
1978: 1
1976: 2
1975: 2
Sort order: a total of 543 results, search time 0 ms.
21.
This study assessed intelligibility in a dysarthric patient with Parkinson's disease (PD) across five speech production tasks: spontaneous speech, repetition, reading, repeated singing, and spontaneous singing, using the same phrases for all tasks except spontaneous singing. The results show that this speaker was significantly less intelligible when speaking spontaneously than in the other tasks. Acoustic analysis suggested that relative intensity and word duration were not independently linked to intelligibility, but dysfluencies (from perceptual analysis) and articulatory/resonance patterns (from acoustic records) were related to intelligibility in predictable ways. These data indicate that speech production task may be an important variable to consider during the evaluation of dysarthria. Because speech production efficiency varied with task in this patient, the results can also be related to recent models of basal ganglia function in motor performance.
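The two acoustic measures named here, relative intensity and word duration, are easy to derive from a time-aligned recording. Below is a minimal sketch assuming a mono NumPy signal and hand-aligned word boundaries; the study's actual measurement pipeline is not described in the abstract.

```python
import numpy as np

def word_measures(signal, sr, word_spans):
    """Compute per-word intensity (dB relative to the utterance mean)
    and duration (s) from hand-aligned word boundaries.

    word_spans: list of (label, start_s, end_s) tuples; hypothetical
    annotations, not from the original study.
    """
    utter_rms = np.sqrt(np.mean(signal ** 2))
    results = []
    for label, start, end in word_spans:
        seg = signal[int(start * sr):int(end * sr)]
        rms = np.sqrt(np.mean(seg ** 2))
        rel_db = 20 * np.log10(rms / utter_rms)  # intensity relative to utterance
        results.append((label, rel_db, end - start))
    return results

# Toy example: 1 s of noise standing in for speech, two "words"
sr = 16000
sig = np.random.default_rng(0).normal(size=sr)
print(word_measures(sig, sr, [("word1", 0.1, 0.4), ("word2", 0.5, 0.9)]))
```

Relative rather than absolute intensity keeps recordings made at different gain settings comparable, which is presumably why the study reports it that way.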
22.
This study focused on the potential influence of task-related factors on oral motor performance in patients with speech disorders. Sentence production was compared with a nonspeech oral motor task, oral diadochokinesis (DDK). Perceptual and acoustic measures of speech impairment served as dependent variables. Between-task comparisons were made for subsamples of a population of 140 patients with different motor speech syndromes, including apraxia of speech and cerebellar dysarthria; in a further analysis, subgroups were matched for speaking rate. Overall, dysdiadochokinesis was correlated with the degree of speech impairment, but there was a strong interaction between task type and motor speech syndrome. In particular, cerebellar pathology affected DDK to a relatively greater extent than sentence production, whereas apraxic pathology spared the ability to repeat syllables at maximum speed.
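DDK performance is conventionally reported as a repetition rate in syllables per second. The sketch below shows one common estimation approach, counting peaks in a smoothed energy envelope; the frame size, peak threshold, and minimum peak gap are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from scipy.signal import find_peaks

def ddk_rate(signal, sr, frame_s=0.01, min_gap_s=0.12):
    """Estimate syllable repetitions per second from energy peaks.
    min_gap_s caps the detectable rate (~8 syll/s here); an assumption."""
    frame = int(frame_s * sr)
    n = len(signal) // frame
    # Frame-wise RMS energy envelope
    env = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    peaks, _ = find_peaks(env, height=0.5 * env.max(),
                          distance=int(min_gap_s / frame_s))
    return len(peaks) / (len(signal) / sr)

# Toy example: amplitude-modulated tone mimicking /pa-pa-pa.../ at 5 syll/s
sr = 16000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
sig = np.abs(np.sin(2 * np.pi * 2.5 * t)) * np.sin(2 * np.pi * 200 * t)
print(ddk_rate(sig, sr))  # approximately 5 repetitions per second
```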
23.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat' + point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.
24.
In face-to-face conversation, speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine-wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in the same way as they did natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.
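Sine-wave replicas of this kind replace the speech signal with a small number of sinusoids that track the formant center frequencies. Below is a minimal synthesis sketch that assumes formant frequency and amplitude tracks are already available from an external tracker; the abstract does not detail how the actual stimuli were constructed.

```python
import numpy as np

def sine_wave_replica(formant_tracks, amp_tracks, sr, frame_s=0.01):
    """Synthesize sine-wave speech: one sinusoid per formant track.

    formant_tracks / amp_tracks: arrays of shape (n_formants, n_frames)
    holding frame-wise frequencies (Hz) and linear amplitudes, assumed
    to come from an external formant tracker.
    """
    hop = int(frame_s * sr)
    n_formants, n_frames = formant_tracks.shape
    n_samples = n_frames * hop
    out = np.zeros(n_samples)
    t_frames = np.arange(n_frames) * frame_s
    t = np.arange(n_samples) / sr
    for k in range(n_formants):
        freq = np.interp(t, t_frames, formant_tracks[k])  # sample-rate frequency track
        amp = np.interp(t, t_frames, amp_tracks[k])
        phase = 2 * np.pi * np.cumsum(freq) / sr  # integrate frequency to get phase
        out += amp * np.sin(phase)
    return out / max(1e-9, np.abs(out).max())  # normalize to avoid clipping

# Toy example: three static "formants" (500, 1500, 2500 Hz) for 0.5 s
f = np.tile([[500.0], [1500.0], [2500.0]], (1, 50))
a = np.ones_like(f)
replica = sine_wave_replica(f, a, sr=16000)
```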
25.
The origin and functions of the hand and arm gestures that accompany speech production are poorly understood. It has been proposed that gestures facilitate lexical retrieval, but little is known about when retrieval is accompanied by gestural activity and how this activity is related to the semantics of the word to be retrieved. Electromyographic (EMG) activity of the dominant forearm was recorded during a retrieval task in which participants tried to identify target words from their definitions. EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and other conceptual attributes of the target words was also examined: EMG was positively related to a word's judged spatiality, concreteness, drawability, and manipulability. The implications of these findings for theories of the relation between speech production and gesture are discussed.
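The core dependent measure, mean forearm EMG amplitude per word condition, amounts to rectifying each trial's signal, averaging it, and comparing conditions. The sketch below runs on simulated data; the numbers are invented and only the scoring logic is illustrated.

```python
import numpy as np
from scipy.stats import ttest_ind

def mean_rectified_amplitude(emg_trial):
    """Mean of the full-wave-rectified EMG signal for one trial."""
    return np.mean(np.abs(emg_trial))

rng = np.random.default_rng(1)
# Simulated trials: concrete words assumed to elicit slightly larger EMG
concrete = [mean_rectified_amplitude(rng.normal(0, 1.2, 2000)) for _ in range(30)]
abstract = [mean_rectified_amplitude(rng.normal(0, 1.0, 2000)) for _ in range(30)]

t, p = ttest_ind(concrete, abstract)
print(f"concrete mean={np.mean(concrete):.3f}, "
      f"abstract mean={np.mean(abstract):.3f}, t={t:.2f}, p={p:.4f}")
```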
26.
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals close parallels between research on HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, unlike existing models of HSR, recognizes words from real speech input.
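The computational-level claim, that HSR and ASR both solve a mapping from acoustic input to lexical identity, can be made concrete with a toy recognizer: a front end outputs a phone string, and the lexicon entry at minimal edit distance wins. This is a deliberately reduced illustration, not the model the article presents.

```python
def edit_distance(a, b):
    """Levenshtein distance between two phone sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

def recognize(phones, lexicon):
    """Pick the lexicon word whose pronunciation best matches the input."""
    return min(lexicon, key=lambda w: edit_distance(phones, lexicon[w]))

# Toy lexicon: word -> phone sequence (assumed transcriptions)
lexicon = {"speech": ["s", "p", "iy", "ch"],
           "speed":  ["s", "p", "iy", "d"],
           "peach":  ["p", "iy", "ch"]}
print(recognize(["s", "b", "iy", "ch"], lexicon))  # -> "speech"
```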
27.
The main objectives of the present study were to examine the effects of meaningful irrelevant speech and road traffic noise on episodic and semantic memory, and to evaluate whether gender differences in memory performance interact with noise. A total of 96 subjects aged 13-14 years (16 boys and 16 girls in each of three groups) were randomly assigned to a silent condition or one of two noise conditions. Noise effects were restricted to impairments from meaningful irrelevant speech on recognition and cued recall of a text in episodic memory and on word comprehension in semantic memory. The obtained noise effect suggests that the meaning of the speech was processed semantically by the pupils, which reduced their ability to comprehend a text that also involved processing of meaning. Meaningful irrelevant speech was also assumed to impair access to the knowledge base in semantic memory. Girls outperformed boys on episodic and semantic memory materials, but these differences did not interact with noise.
28.
Listeners cannot recognize highly reduced word forms in isolation, but they can do so when these forms are presented in context (Ernestus, Baayen, & Schreuder, 2002). This suggests that not all possible surface forms of words have equal status in the mental lexicon. The present study shows that the reduced forms are linked to the canonical representations in the mental lexicon, and that these latter representations induce reconstruction processes. Listeners restore suffixes that are partly or completely missing in reduced word forms. A series of phoneme-monitoring experiments reveals the nature of this restoration: the basis for suffix restoration is mainly phonological in nature, but orthography has an influence as well.
29.
To explore why noise reliably affects delayed recall in a certain text-reading task, this episodic memory task was employed together with other memory tests in a study of road traffic noise and meaningful but irrelevant speech. Context-dependent memory was tested and self-reports of affect were taken. Participants were 96 high school students. The results showed that both road traffic noise and meaningful irrelevant speech impaired recall of the text. Retrieval in noise from semantic memory was also impaired. Attention was impaired by both noise sources, but attention did not mediate the noise effects on episodic memory. Recognition was not affected by noise. Context-dependent memory was not shown. The lack of mediation by attention, and the finding that road traffic noise was as harmful as meaningful irrelevant speech, are discussed in relation to where in the input/storage/output sequence noise has its effect and what the distinctive feature of the disturbing noise is.
30.
Despite considerable speculation in the research literature regarding the complementarity of functional lateralization of prosodic and linguistic processes in the normal intact brain, few studies have directly addressed this issue. In the present study, behavioral laterality indices of emotional prosodic and traditional linguistic speech functions were obtained for a sample of healthy young adults, using the dichotic listening method. After screening for adequate emotional prosody and linguistic recognition abilities, participants completed the Fused Rhymed Words Test (FRWT; Wexler & Halwes, 1983) and the Dichotic Emotion Recognition Test (DERT; McNeely & Netley, 1998). Examination of the difference in ear asymmetries for these measures within individuals revealed a complementary pattern in 78% of the sample. However, the correlation between laterality quotients for the FRWT and DERT was near zero, supporting Bryden's model of "statistical" complementarity (e.g., Bryden, 1990).
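Dichotic-listening asymmetries such as those on the FRWT and DERT are conventionally summarized as a laterality quotient, LQ = 100 * (R - L) / (R + L), where R and L are the numbers of correct right-ear and left-ear reports; within-subject complementarity then appears as opposite-signed LQs on the two tests. A minimal scoring sketch with illustrative numbers follows.

```python
def laterality_quotient(right_correct, left_correct):
    """LQ = 100 * (R - L) / (R + L); positive means a right-ear
    (left-hemisphere) advantage."""
    total = right_correct + left_correct
    return 100.0 * (right_correct - left_correct) / total if total else 0.0

def complementary(lq_linguistic, lq_prosodic):
    """Opposite-signed asymmetries across the two tests."""
    return lq_linguistic * lq_prosodic < 0

# Illustrative scores: right-ear advantage for words, left-ear for prosody
lq_frwt = laterality_quotient(right_correct=34, left_correct=22)  # > 0
lq_dert = laterality_quotient(right_correct=18, left_correct=27)  # < 0
print(lq_frwt, lq_dert, complementary(lq_frwt, lq_dert))
```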