Sorted results: 693 records found (search time: 46 ms)
31.
Kawachi K. Journal of Psycholinguistic Research, 2002, 31(4): 363-390
The present study addresses how practice in expressing the content to be conveyed in a specific situation influences speech production planning. A comparison of slips of the tongue in Japanese, collected from spontaneous everyday conversation and from largely preplanned conversation in live-broadcast TV programs, reveals that although some aspects of speech production planning are unaffected by practice, there are various practice effects. Most of these can be explained in terms of automatization of the processing of content, resulting in shifts in the loci of errors.
32.
Effect of speech task on intelligibility in dysarthria: a case study of Parkinson's disease
This study assessed intelligibility in a dysarthric patient with Parkinson's disease (PD) across five speech production tasks: spontaneous speech, repetition, reading, repeated singing, and spontaneous singing, using the same phrases for all tasks except spontaneous singing. The results show that this speaker was significantly less intelligible when speaking spontaneously than in the other tasks. Acoustic analysis suggested that relative intensity and word duration were not independently linked to intelligibility, but dysfluencies (from perceptual analysis) and articulatory/resonance patterns (from acoustic records) were related to intelligibility in predictable ways. These data indicate that speech production task may be an important variable to consider during the evaluation of dysarthria. Because speech production efficiency was found to vary with task in a patient with Parkinson's disease, these results can be related to recent models of basal ganglia function in motor performance.
33.
Ziegler W. Brain and Language, 2002, 80(3): 556-575
This study focused on the potential influence of task-related factors on oral motor performance in patients with speech disorders. Sentence production was compared with a nonspeech oral motor task, oral diadochokinesis (DDK). Perceptual and acoustic measures of speech impairment were used as dependent variables. Between-task comparisons were made for subsamples of a population of 140 patients with different motor speech syndromes, including apraxia of speech and cerebellar dysarthria; in a further analysis, subgroups were matched for speaking rate. Overall, dysdiadochokinesis was correlated with the degree of speech impairment, but there was a strong interaction between task type and motor speech syndrome. In particular, cerebellar pathology affected DDK to a relatively greater extent than sentence production, while apraxic pathology spared the ability to repeat syllables at maximum speed.
34.
Sex differences in judgement of facial affect: a multivariate analysis of recognition errors
The present paper investigated recognition errors in affective judgement of facial emotional expressions. Twenty-eight females and sixteen males participated in the study. The results showed that both males and females could correctly classify emotional displays, but females had a higher rate of correct classification; males were more likely to have difficulty distinguishing one emotion from another. Females rated emotions identically regardless of whether the emotion was displayed by a male or female face. Furthermore, the two-factor structure of emotion, based on a valence and an arousal dimension, was present only for female subjects. These results further extend our knowledge about gender differences in affective information processing.
35.
Previous studies have found that subjects diagnosed with verbal auditory agnosia (VAA) from bilateral brain lesions may experience difficulties at the prephonemic level of acoustic processing. In this case study, we administered a series of speech and nonspeech discrimination tests to an individual with unilateral VAA as a result of left-temporal-lobe damage. The results indicated that the subject's ability to perceive steady-state acoustic stimuli was relatively intact but his ability to perceive dynamic stimuli was drastically reduced. We conclude that this particular aspect of acoustic processing may be a major contributing factor that disables speech perception in subjects with unilateral VAA.
36.
Children who produce one word at a time often use gesture to supplement their speech, turning a single word into an utterance that conveys a sentence-like meaning ('eat'+point at cookie). Interestingly, the age at which children first produce supplementary gesture-speech combinations of this sort reliably predicts the age at which they first produce two-word utterances. Gesture thus serves as a signal that a child will soon be ready to begin producing multi-word sentences. The question is what happens next. Gesture could continue to expand a child's communicative repertoire over development, combining with words to convey increasingly complex ideas. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. We addressed this question in a sample of 40 typically developing children, each observed at 14, 18, and 22 months. The number of supplementary gesture-speech combinations the children produced increased significantly from 14 to 22 months. More importantly, the types of supplementary combinations the children produced changed over time and presaged changes in their speech. Children produced three distinct constructions across the two modalities several months before these same constructions appeared entirely within speech. Gesture thus continues to be at the cutting edge of early language development, providing stepping-stones to increasingly complex linguistic constructions.
37.
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a similar manner as natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.
38.
The origin and functions of the hand and arm gestures that accompany speech production are poorly understood. It has been proposed that gestures facilitate lexical retrieval, but little is known about when retrieval is accompanied by gestural activity and how this activity is related to the semantics of the word to be retrieved. Electromyographic (EMG) activity of the dominant forearm was recorded during a retrieval task in which participants tried to identify target words from their definitions. EMG amplitudes were significantly greater for concrete than for abstract words. The relationship between EMG amplitude and other conceptual attributes of the target words was examined: EMG was positively related to a word's judged spatiality, concreteness, drawability, and manipulability. The implications of these findings for theories of the relation between speech production and gesture are discussed.

This experiment was done by the first author under the supervision of the second author in partial fulfillment of the Ph.D. degree at Columbia University. We gratefully acknowledge the advice and comments of Lois Putnam, Robert Remez, James Magnuson, Michele Miozzo, and Robert B. Tallarico, and the assistance of Stephen Krieger, Lauren Walsh, Jennifer Kim, and Jillian White.
39.
Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR, that, in contrast to existing models of HSR, recognizes words from real speech input.