Paid full text: 498 articles
Free full text: 10 articles
Results by publication year: 2023 (2); 2022 (1); 2021 (16); 2020 (7); 2019 (9); 2018 (4); 2017 (10); 2016 (9); 2015 (10); 2014 (26); 2013 (62); 2012 (20); 2011 (34); 2010 (5); 2009 (26); 2008 (34); 2007 (23); 2006 (13); 2005 (8); 2004 (18); 2003 (10); 2002 (7); 2001 (5); 2000 (2); 1999 (1); 1998 (1); 1997 (1); 1996 (2); 1995 (1); 1993 (1); 1986 (1); 1985 (14); 1984 (18); 1983 (18); 1982 (16); 1981 (17); 1980 (18); 1979 (15); 1978 (15); 1977 (3); 1976 (3); 1974 (1); 1973 (1).
A total of 508 query results were found (search time: 93 ms).
121.
This paper outlines portions of the writer's therapeutics in the field of stuttering, with emphasis on the use of masking of stutterers' hearing as part of a total therapeutic approach. The approach began with devices providing continuous noise to override the voice of the stutterer so that he cannot hear himself, thereby eliciting fluency. The use of the voice-actuated Edinburgh Auditory Masker within an intensive stuttering program in a prison setting since September 1978 is then described. It is concluded that the use of masking and the Edinburgh device has been helpful and productive with severe stutterers.
122.
Seven language tests were constructed or adapted to assess the performance of three groups of 10 right-handed adult subjects: a right hemisphere lesion (RHL) group, a left hemisphere lesion (LHL) group, and a neurologically normal (NN) control group. Both the LHL and RHL groups produced poorer scores than the NN group on six of the seven tests. On two of the six significant tests, the RHL group performed more poorly than the NN group. Analyses of words uttered during an oral story-telling test indicated that the RHL group told significantly fewer complete stories using significantly more nouns, adjectives, and conjunctions than the NN group. On a 7-point scale, three judges rated the overall communication abilities of the RHL group as having “mild problems,” a rating significantly different from the ratings of the LHL and NN groups. The findings suggest that underlying visual spatial and perceptual deficits may be accompanied by clearly recognizable language differences in certain subjects.
123.
124.
This case report describes an unusual combination of speech and language deficits secondary to bilateral infarctions in a 62-year-old woman. The patient was administered an extensive series of speech, language, and audiologic tests and was found to exhibit a fluent aphasia in which reading and writing were extremely well preserved in comparison to auditory comprehension and oral expression, and a severe auditory agnosia. In spite of her auditory processing deficits, the patient exhibited unexpected self-monitoring ability and the capacity to form acoustic images on visual tasks. The manner in which she corrected and attempted to correct her phonemic errors, while ignoring semantic errors, suggests that different mechanisms may underlie the monitoring of these errors.
125.
The method of vibrotactile magnitude production scaling was used to determine the tactile sensory-perceptual integrity for the dorsum of the tongue and thenar eminence of the right hand for 10 fluent speakers and 10 stutterers. It was discovered that both groups performed the task in a similar manner for the thenar eminence of the hand (a nonoral structure) but in a dissimilar manner for the tongue (an oral structure). From these data, it is suggested that the stutterers may maintain a different internal sensory-perceptual process for the tactile system involved in the speech process. The possibility exists that stuttering, for some, may be an “internal disorder” of the tactile-proprioceptive feedback mechanism that is directly involved in speech production.
126.
The ultimate test of the adequacy of linguistic models of fluency breakdown is the degree to which they may account for patterns of dysfluency in the speech of a stutterer who is fluent in more than one language. We present the case of an adult bilingual stutterer (Spanish-English), whose spontaneous language in both Spanish and English was structurally analyzed to assess the relationships of phonological and syntactic structure to the frequency and location of fluency breakdown. Our findings suggest that syntax is probably a greater determinant of stuttered moments than is phonology. Additionally, similarities and differences between English and Spanish sentence structure were associated with similarities and differences in the loci of dysfluencies across the two languages. The need for crosslinguistic research utilizing monolingual and bilingual speakers of languages other than English is emphasized.
127.
Previous research shows that simultaneously executed grasp and vocalization responses are faster when the precision grip is performed with the vowel [i] and the power grip is performed with the vowel [ɑ]. Research also shows that observing an object that is graspable with a precision or power grip can activate the grip congruent with the object. Given the connection between vowel articulation and grasping, this study explores whether grasp‐related size of observed objects can influence not only grasp responses but also vowel pronunciation. The participants had to categorize small and large objects into natural and manufactured categories by pronouncing the vowel [i] or [ɑ]. As predicted, [i] was produced faster when the object's grasp‐related size was congruent with the precision grip while [ɑ] was produced faster when the size was congruent with the power grip (Experiment 1). The effect was not, however, observed when the participants were presented with large objects that are not typically grasped by the power grip (Experiment 2). This study demonstrates that vowel production is systematically influenced by grasp‐related size of a viewed object, supporting the account that sensory‐motor processes related to grasp planning and representing grasp‐related properties of viewed objects interact with articulation processes. The paper discusses these findings in the context of size–sound symbolism, suggesting that mechanisms that transform size‐grasp affordances into corresponding grasp‐ and articulation‐related motor programs might provide a neural basis for size‐sound phenomena that links small objects with closed‐front vowels and large objects with open‐back vowels.
128.
Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant–ant for big–small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.
129.
Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such “quantized” views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
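The mixture-model analysis in the abstract above lends itself to a brief illustration. The following sketch is not the authors' implementation; it is a minimal, hypothetical Python example (invented formant axis, prototype locations, noise levels, and mixture weight) of how recall responses could be modelled as a mixture of a continuous component centred on the target vowel and a categorical component centred on vowel prototypes, fitted by maximum likelihood with SciPy.

```python
# Minimal sketch (not the authors' code): recall of a vowel quality on a
# 1-D formant axis is modelled as a mixture of continuous storage (response
# centred on the true target) and categorical storage (response centred on
# a vowel prototype). All numbers below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, responses, targets, prototypes):
    """Negative log-likelihood of a continuous + categorical mixture."""
    w_cat, sigma_cont, sigma_cat = params
    # Continuous component: Gaussian noise around the target value.
    p_cont = norm.pdf(responses, loc=targets, scale=sigma_cont)
    # Categorical component: Gaussian noise around each prototype,
    # averaged over prototypes with equal weight.
    p_cat = np.mean(
        [norm.pdf(responses, loc=p, scale=sigma_cat) for p in prototypes],
        axis=0,
    )
    likelihood = (1 - w_cat) * p_cont + w_cat * p_cat
    return -np.sum(np.log(likelihood + 1e-12))

rng = np.random.default_rng(0)
prototypes = np.array([0.2, 0.5, 0.8])      # assumed vowel category centres
targets = rng.uniform(0.0, 1.0, size=200)   # targets along the continuum
nearest = prototypes[np.abs(prototypes[:, None] - targets).argmin(axis=0)]
responses = np.where(
    rng.random(200) < 0.3,                  # 30% categorical responses
    nearest + rng.normal(0.0, 0.03, 200),
    targets + rng.normal(0.0, 0.08, 200),   # 70% continuous responses
)

fit = minimize(
    neg_log_likelihood,
    x0=[0.5, 0.1, 0.05],
    args=(responses, targets, prototypes),
    bounds=[(0.01, 0.99), (1e-3, 1.0), (1e-3, 1.0)],
)
w_cat, sigma_cont, sigma_cat = fit.x
print(f"categorical weight = {w_cat:.2f}, continuous SD = {sigma_cont:.2f}")
```

On data generated this way, refitting at larger set sizes would be expected to show the categorical weight rising with memory load, mirroring the shift from continuous to categorical storage reported above.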
130.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly in voice-only singing but accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song yet equivalently in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, while also highlighting differences in perception and acoustic-motor production.