402.
In a severely withdrawn schizophrenic patient, a combination of instructions, modeling, informational feedback, and noncontingent reinforcement was associated with a low rate of appropriate verbalizations. However, an increase in speech output was obtained using a combination of instructions, modeling, informational feedback, and contingent reinforcement. The design of this study thus permitted the conclusion that contingent reinforcement was crucial in bringing about the increase in appropriate verbalizations. The three verbal behaviors that increased in frequency were: (a) the number of socially appropriate words emitted, (b) declarative statements, and (c) appropriate replies to questions. The verbal behaviors of conversational questions and positive conversational feedback were not significantly affected by the experimental procedures. An attempt was also made to establish whether having the subject engage in the observable, information-gathering response of reading aloud would increase speech output during a subsequent conversation period; reading aloud had no discernible positive effect on his speech output. A withdrawal design was used to evaluate the effectiveness of the experimental procedures. The importance of assessing changes in verbal output with a variety of verbal response measures is discussed, along with stereotyped verbal behaviors and carryover effects.
404.
A prominent hypothesis holds that by speaking to infants in infant-directed speech (IDS) as opposed to adult-directed speech (ADS), parents help them learn phonetic categories. Specifically, two characteristics of IDS have been claimed to facilitate learning: hyperarticulation, which makes the categories more separable, and variability, which makes generalization more robust. Here, we test the separability and robustness of vowel category learning on acoustic representations of speech uttered by Japanese adults in ADS, IDS (addressed to 18- to 24-month-olds), or read speech (RS). Separability is determined by means of a distance measure computed between the five short vowel categories of Japanese, while robustness is assessed by testing the ability of six different machine learning algorithms trained to classify vowels to generalize to stimuli spoken by a novel speaker in ADS. Using two different speech representations, we find that hyperarticulated speech, in the case of RS, can yield better separability, and that increased between-speaker variability in ADS can yield, for some algorithms, more robust categories. However, these conclusions do not apply to IDS, which turned out to yield neither more separable nor more robust categories compared to ADS inputs. We discuss the usefulness of machine learning algorithms run on real data to test hypotheses about the functional role of IDS.
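As a rough illustration of the two measures described above, the sketch below computes separability as the mean Euclidean distance between vowel-category centroids and robustness as held-out accuracy on a novel speaker. Both the distance metric and the nearest-centroid classifier are simplified stand-ins, not the specific distance measure or the six algorithms the study actually used.

```python
import numpy as np

def separability(vectors, labels):
    """Mean pairwise Euclidean distance between category centroids.

    Higher values indicate vowel categories that are farther apart
    in the acoustic representation space.
    """
    cats = sorted(set(labels))
    centroids = {c: np.mean([v for v, l in zip(vectors, labels) if l == c], axis=0)
                 for c in cats}
    dists = [np.linalg.norm(centroids[a] - centroids[b])
             for i, a in enumerate(cats) for b in cats[i + 1:]]
    return float(np.mean(dists))

def robustness(train_vecs, train_labels, test_vecs, test_labels):
    """Accuracy of a nearest-centroid classifier on a held-out speaker.

    Train on one register/speaker set, test on tokens from a speaker
    never seen during training.
    """
    cats = sorted(set(train_labels))
    centroids = np.stack([np.mean([v for v, l in zip(train_vecs, train_labels) if l == c],
                                  axis=0)
                          for c in cats])
    preds = [cats[int(np.argmin(np.linalg.norm(centroids - np.asarray(v), axis=1)))]
             for v in test_vecs]
    return sum(p == t for p, t in zip(preds, test_labels)) / len(test_labels)
```

In this framing, the paper's finding would correspond to RS scoring higher on `separability` than ADS, while IDS improves neither score relative to ADS.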
405.
The existence of the Language Familiarity Effect (LFE), where talkers of a familiar language are easier to identify than talkers of an unfamiliar language, is well documented and uncontroversial. However, a closely related phenomenon known as the Other Accent Effect (OAE), where accented talkers are more difficult to recognize, is less well understood. There are several possible explanations for why the OAE exists, but to date, little data exist to adjudicate between them. Here, we begin to address this issue by directly comparing listeners' recognition of talkers who speak in different types of accents, and by examining both the LFE and OAE in the same set of listeners. Specifically, Canadian English listeners were tested on their ability to recognize talkers within four types of voice line-ups: Canadian English talkers, Australian English talkers, Mandarin-accented English talkers, and Mandarin talkers. We predicted that the OAE would be present for talkers of Mandarin-accented English but not for talkers of Australian English, which is precisely what we observed. We also observed a disconnect between listeners' confidence and performance across different types of accents; that is, listeners performed equally poorly with Mandarin and Mandarin-accented talkers, but they were more confident in their performance with the latter group of talkers. The present findings set the stage for further investigation into the nature of the OAE by exploring a range of potential explanations for the effect, and carry important implications for forensic scientists' evaluation of earwitness testimony.
407.
Blind Speakers Show Language-Specific Patterns in Co-Speech Gesture but Not Silent Gesture
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture: blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language, an organization that relies on neither visuospatial cues nor language structure.
408.
An acoustic-perceptual investigation of a phonological phenomenon in which stress is retracted in double-stressed words (e.g., thirTEEN vs THIRteen MEN) was undertaken to identify the locus of functional impairments in speech prosody. Subjects included left-hemisphere-damaged (LHD) and right-hemisphere-damaged (RHD) patients and nonneurological controls. They were instructed to read sentences containing double-stressed target words in the presence or absence of a clause boundary. Whereas all three groups of subjects were capable of manipulating the acoustic parameters that signal a shift in stress, there were some differences between the performance of the patient groups and that of the normal controls. Further, stress production deficits were more severe in LHD aphasic patients than in RHD patients. LHD speakers exhibited deficits in the control of both temporal and F0 cues. Their F0 disturbance appears to be secondary to a primary deficit in temporal control at the phrase or sentence level, as an increased number of continuation rises found for the LHD patients seemed to arise from lengthy pauses within sentences. Findings are highlighted to address the nature of breakdown in speech prosody and the competing views of prosodic lateralization.
409.
C. Kitamura, C. Thanavishuth, D. Burnham, & S. Luksaneeyanawin, Infant Behavior & Development, 2001, 24(4), 566
The aim of this study was to investigate the prosodic characteristics of infant-directed speech (IDS) to boys and girls in a tonal (Thai) and a non-tonal (Australian English) language. Speech was collected from mothers speaking to infants at birth and at 3, 6, 9, and 12 months, and also to another adult. Mean F0, pitch range, and utterance F0 slope were extracted, and the integrity of the tonal information in Thai was investigated. The age trends across the two languages differed for each of these measures, but Australian English IDS was generally more exaggerated than Thai IDS. With respect to sex differences, Australian English mothers used higher mean F0, wider pitch range, and more rising utterances for girls than boys, but Thai mothers used more subdued mean F0 and more falling utterances for girls than boys. Despite variations in pitch modifications by Thai and Australian English mothers, overall IDS is more exaggerated than adult-directed speech (ADS) in both languages. Furthermore, tonal information in Thai was only slightly less identifiable in Thai IDS than in Thai ADS. The universal features and language-specific differences in IDS are discussed in terms of facilitating infant socialization at younger ages, and language acquisition later in infancy.
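The three prosodic measures can be sketched from a voiced-frame F0 contour as follows. The least-squares fit used here to operationalize utterance F0 slope is an assumption for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def prosody_measures(f0, times):
    """Summary prosodic measures from an utterance's F0 contour.

    f0    -- fundamental frequency (Hz) at each voiced frame
    times -- frame times (s), same length as f0
    Returns (mean F0, pitch range, F0 slope in Hz/s); a negative
    slope indicates a falling utterance, a positive one a rising
    utterance.
    """
    f0 = np.asarray(f0, dtype=float)
    times = np.asarray(times, dtype=float)
    mean_f0 = float(f0.mean())
    pitch_range = float(f0.max() - f0.min())
    slope = float(np.polyfit(times, f0, 1)[0])  # least-squares linear fit
    return mean_f0, pitch_range, slope
```

On a contour rising linearly from 200 Hz to 240 Hz over one second, this yields a mean F0 of 220 Hz, a pitch range of 40 Hz, and a slope of +40 Hz/s (a rising utterance).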
410.
Barbara E. Esch, James E. Carr, & Laura L. Grow, Journal of Applied Behavior Analysis, 2009, 42(2), 225-241
Evidence to support stimulus-stimulus pairing (SSP) in speech acquisition is less than robust, calling into question the ability of SSP to reliably establish automatically reinforcing properties of speech and limiting the procedure's clinical utility for increasing vocalizations. We evaluated the effects of a modified SSP procedure on low-frequency within-session vocalizations that were further strengthened through programmed reinforcement. Procedural modifications (e.g., interspersed paired and unpaired trials) were designed to increase stimulus salience during SSP. All 3 participants, preschoolers with autism, showed differential increases of target over nontarget vocal responses during SSP. Results suggested an automatic reinforcement effect of SSP, although alternative interpretations are discussed, and suggestions are made for future research to determine the utility of SSP as a clinical intervention for speech-delayed children.