231.
Are speakers equipped with preferences concerning grammatical structures that are absent in their language? We examine this question by investigating the sensitivity of English speakers to the sonority of onset clusters. Linguistic research suggests that certain onset clusters are universally preferred (e.g., bd>lb). We demonstrate that such preferences modulate the perception of unattested onsets by English speakers: Monosyllabic auditory nonwords with onsets that are universally dispreferred (e.g., lbif) are more likely to be classified as disyllabic and misperceived as identical to their disyllabic counterparts (e.g., lebif) compared to onsets that are relatively preferred across languages (e.g., bdif). Consequently, dispreferred onsets benefit from priming by their epenthetic counterpart (e.g., lebif-lbif) as much as they benefit from identity priming (e.g., lbif-lbif). A similar pattern of misperception (e.g., lbif-->lebif) was observed among speakers of Russian, where clusters of this type occur. But unlike English speakers, Russian speakers perceived these clusters accurately on most trials, suggesting that the perceptual illusions of English speakers are partly due to their linguistic experience, rather than phonetic confusion alone. Further evidence against a purely phonetic explanation for our results is offered by the capacity of English speakers to perceive such onsets accurately under conditions that encourage precise phonetic encoding. The perceptual illusions of English speakers are also irreducible to several statistical properties of the English lexicon. The systematic misperception of universally dispreferred onsets might reflect their ill-formedness in the grammars of all speakers, irrespective of linguistic experience. Such universal grammatical preferences implicate constraints on language learning.
232.
Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
233.
The authors qualitatively examine parent experiences in groups for persons seeking parental rights through Child Protective Services (CPS). The study focuses on 16 custody-seeking parent figures who participated in dialogical groups designed from a Collaborative Language Systems perspective. The grounded-theory analysis shows that parents initially described overwhelming emotions and conflictual relationships with CPS. It also identifies five therapeutic group processes that appeared to influence perceptions of hope and personal power and contribute to how parents position themselves relative to CPS: validation, sharing practical information and networking, highlighting strengths and resources, supportive confrontation, and sharing stories of change. The analysis provides insight into CPS parents' experiences, suggests that dialogical approaches may have potential to assist in reshaping experiences in CPS, and draws attention to the need for interventions at the structural and administrative levels.
234.
235.
The picture-word interference paradigm was used to shed new light on the debate concerning slow serial versus fast parallel activation of phonology in silent reading. Prereaders, beginning readers (Grades 1-4), and adults named pictures that had words printed on them. Words and pictures shared phonology either at the beginnings of words (e.g., DOLL-DOG) or at the ends of words (e.g., FOG-DOG). The results showed that phonological overlap between primes and targets facilitated picture naming. This facilitatory effect was present even in beginning readers. More important, from Grade 1 onward, end-related facilitation always was as strong as beginning-related facilitation. This result suggests that, from the beginning of reading, the implicit and automatic activation of phonological codes during silent reading is not serial but rather parallel.
236.
Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre‐recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence‐final verbs evokes larger task‐evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn‐taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
237.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.
238.
Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners’ ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals’ learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations.
239.
240.
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of a speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non‐speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non‐speech, or overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities such as biases for conspecific vocalizations may provide a foundation for proficiency in formal systems such as language, much like the approximate number sense may provide a foundation for formal mathematics.