191.
Bilingualism research has established language non-selective lexical access in comprehension. However, the evidence for such an effect in production remains sparse, and its neural time-course has not yet been investigated. We demonstrate that German-English bilinguals performing a simple picture-naming task exclusively in English spontaneously access the phonological form of unproduced German words. Participants were asked to produce English adjective-noun sequences describing the colour and identity of familiar objects presented as line drawings. We associated adjectives and picture names such that their onsets phonologically overlapped in English (e.g., green goat), in German through translation (e.g., blue flower – ‘blaue Blume’), or in neither language. As expected, phonological priming in English modulated event-related brain potentials over the frontocentral scalp region from around 440 ms after picture onset. Phonological priming in German was detectable even earlier, from 300 ms, even though German was never produced and no interaction between language and phonological repetition priming emerged at any point in time. Overall, these results establish non-selective access to the phonological representations of both languages in the domain of speech production.
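As a rough illustration of the stimulus design described above, the sketch below assigns hypothetical adjective-picture pairs to the three priming conditions (English onset overlap, German onset overlap via translation, or unrelated). The item list and the letter-based onset check are illustrative assumptions, not the authors' actual materials or phonological coding.

```python
# Illustrative condition assignment: an adjective primes the picture name if their
# onsets overlap in English, or in German via the noun's translation. The items and
# the crude letter-based "onset" are assumptions for demonstration only.

ITEMS = [
    # (English adjective, German adjective, English noun, German noun)
    ("green", "grüne", "goat",   "Ziege"),   # overlap in English: g- / g-
    ("blue",  "blaue", "flower", "Blume"),   # overlap in German: bl- / Bl-
    ("red",   "rote",  "chair",  "Stuhl"),   # no overlap in either language
]

def onset(word: str, n: int = 1) -> str:
    """Crude stand-in for the phonological onset: the first n letters, lowercased."""
    return word.lower()[:n]

def condition(eng_adj: str, ger_adj: str, eng_noun: str, ger_noun: str) -> str:
    if onset(eng_adj) == onset(eng_noun):
        return "English overlap"
    if onset(ger_adj, 2) == onset(ger_noun, 2):
        return "German overlap (via translation)"
    return "unrelated"

for item in ITEMS:
    print(item[0], item[2], "->", condition(*item))
```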
192.
Previous work has suggested that learners are sensitive to phonetic similarity when learning phonological patterns. We tested 12-month-old infants to see whether their willingness to generalize newly learned phonological alternations depended on the phonetic similarity of the sounds involved. Infants were exposed to words in an artificial language whose distributions provided evidence for a phonological alternation between two relatively dissimilar sounds ([p ∼ v] or [t ∼ z]). Sounds at one place of articulation (labials or coronals) alternated, whereas sounds at the other place of articulation were contrastive. At test, infants generalized the alternation learned during exposure to pairs of sounds that were more similar ([b ∼ v] or [d ∼ z]). Infants in a control group instead learned an alternation between similar sounds ([b ∼ v] or [d ∼ z]). When tested on dissimilar pairs of sounds ([p ∼ v] or [t ∼ z]), the control group did not generalize their learning to the novel sounds. The results are consistent with a learning bias favoring alternations between similar sounds over alternations between dissimilar sounds.
193.
This study provides new experimental evidence that people learn phonological alternations in a biased way. Adult participants were exposed to alternations between phonetically dissimilar sounds (i.e., those differing in both voicing and manner, such as [p] and [v]). After learning these alternations, participants assumed, without evidence in the input, that more similar sounds (e.g., [b] and [v]) also alternated (Exp. 1). Even when provided with explicit evidence that dissimilar sounds (e.g., [p] and [v]) alternated but similar sounds ([b] and [v]) did not, participants tended to make errors in assuming that the similar sounds also alternated (Exp. 2). By comparison, a control group of participants found it easier to learn the opposite pattern, where similar sounds alternated but dissimilar sounds did not. The results are taken as evidence that learners have a soft bias, considering alternations between perceptually similar sounds to be more likely.
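The similarity notion at stake in the two studies above can be made concrete by counting shared phonological features. The sketch below uses a deliberately simplified three-feature specification (an assumption for illustration, not a full feature theory) to show that the "similar" pairs [b ∼ v] and [d ∼ z] share more features than the "dissimilar" pairs [p ∼ v] and [t ∼ z], which differ in both voicing and manner.

```python
# Toy feature table for the six sounds discussed above; the feature choices are a
# simplification for illustration, not a complete phonological analysis.
FEATURES = {
    "p": {"voiced": 0, "continuant": 0, "labial": 1},
    "b": {"voiced": 1, "continuant": 0, "labial": 1},
    "v": {"voiced": 1, "continuant": 1, "labial": 1},
    "t": {"voiced": 0, "continuant": 0, "labial": 0},
    "d": {"voiced": 1, "continuant": 0, "labial": 0},
    "z": {"voiced": 1, "continuant": 1, "labial": 0},
}

def shared_features(a: str, b: str) -> int:
    """Number of features on which the two sounds agree."""
    return sum(FEATURES[a][f] == FEATURES[b][f] for f in FEATURES[a])

for pair in [("p", "v"), ("t", "z"), ("b", "v"), ("d", "z")]:
    print(pair, "->", shared_features(*pair), "shared features")
```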
194.
At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar-sounding words (e.g., bih/dih; Stager & Werker, 1997). However, variability in nonphonetic aspects of the training stimuli seems to aid word learning at this age. Extant theories of early word learning cannot account for this benefit of variability. We offer a simple explanation for this range of effects based on associative learning. Simulations suggest that if infants encode both noncontrastive information (e.g., cues to speaker voice) and meaningful linguistic cues (e.g., place of articulation or voicing), then associative learning mechanisms predict these variability effects in early word learning. Crucially, this means that despite the importance of task variables in predicting performance, this body of work shows that phonological categories are still developing at this age and that the structure of noninformative cues has a critical influence on word learning abilities.
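A minimal sketch of the associative account, assuming a Rescorla-Wagner style cue-competition model (the cue coding, learning rate, and trial counts are illustrative assumptions, not the simulations reported in the paper): when the noncontrastive speaker-voice cue varies across training trials, it accrues little associative strength, so the phonetic cue ends up carrying most of the learned association with the referent.

```python
# Rescorla-Wagner style sketch: cues present on a trial jointly predict the outcome
# (the word's referent) and share the prediction error. A voice cue that is constant
# across trials competes with the phonetic cue; a voice cue that varies does not.
import random

def train(trials, alpha=0.2, lam=1.0):
    weights = {}
    for cues in trials:                                   # each trial: cues present, outcome = 1
        error = lam - sum(weights.get(c, 0.0) for c in cues)
        for c in cues:
            weights[c] = weights.get(c, 0.0) + alpha * error
    return weights

random.seed(0)
single = [["phonetic:bih", "voice:speaker1"] for _ in range(50)]                         # one talker
multi = [["phonetic:bih", f"voice:speaker{random.randint(1, 10)}"] for _ in range(50)]   # many talkers

for label, trials in [("single talker", single), ("multiple talkers", multi)]:
    w = train(trials)
    print(label, "-> weight on phonetic cue:", round(w["phonetic:bih"], 2))
```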
195.
The self-teaching hypothesis suggests that children learn the orthographic structure of words through the experience of phonologically recoding them. The current study is an individual differences analysis of the self-teaching hypothesis. A total of 40 children in Grades 2 and 3 (7-9 years of age) completed tests of phonological recoding, word identification, and orthographic knowledge. The relation between phonological recoding and word identification was significantly mediated by orthographic knowledge. Furthermore, two aspects of orthographic knowledge (plausibly word-specific and general orthographic knowledge) mediated different portions of the variance shared between phonological recoding and word identification. The results support an individual differences version of the self-teaching hypothesis and emphasize the importance of phonological recoding in the primary curriculum.
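The mediation logic reported above can be illustrated with a small regression-based sketch (Baron & Kenny style). The data are simulated and the effect sizes are arbitrary assumptions; this is not the authors' analysis, only a demonstration of how a mediated relation between phonological recoding, orthographic knowledge, and word identification would be decomposed.

```python
# Regression-based mediation sketch with simulated data: does orthographic
# knowledge (M) mediate the effect of phonological recoding (X) on word
# identification (Y)? Illustrative only; not the study's actual analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                                   # sample size matching the study
X = rng.normal(size=n)                                   # phonological recoding
M = 0.6 * X + rng.normal(scale=0.5, size=n)              # orthographic knowledge
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.5, size=n)    # word identification

def slopes(y, *predictors):
    """OLS coefficients (intercept first) of y regressed on the given predictors."""
    design = np.column_stack([np.ones(len(y))] + list(predictors))
    coefs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coefs

c_total = slopes(Y, X)[1]           # total effect of X on Y
a = slopes(M, X)[1]                 # path X -> M
b, c_direct = slopes(Y, M, X)[1:]   # path M -> Y and direct X -> Y

print(f"total effect  c  = {c_total:.2f}")
print(f"indirect a*b     = {a * b:.2f}")
print(f"direct effect c' = {c_direct:.2f}")
```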
196.
Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n = 20) and age-matched controls (n = 19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in silence, stationary noise, and amplitude-modulated noise. Comparable deficits were obtained for fast, intermediate, and slow modulation rates, and this speaks against the various temporal processing accounts of SLI. Children with SLI exhibited normal “masking release” effects (i.e., better performance in fluctuating noise than in stationary noise), again suggesting relatively spared spectral and temporal auditory resolution. In terms of phonetic categories, voicing was more affected than place, manner, or nasality. The specific nature of this voicing deficit is hard to explain with general processing impairments in attention or memory. Finally, speech perception in noise correlated with an oral language component but not with either a memory or IQ component, and it accounted for unique variance beyond IQ and low-level auditory perception. In sum, poor speech perception seems to be one of the primary deficits in children with SLI that might explain poor phonological development, impaired word production, and poor word comprehension.
197.
Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage. On the other hand, re-syllabifications affect only a minimal part of phonological representations and occur only in some languages, depending on speech register. Evidence for these claims comes from analyses of aphasic errors, which not only respect phonotactic constraints but also avoid transformations that move the syllabic structure of the word further away from the original structure, even when equating for segmental complexity. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were computed only after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.
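One way to make the claim about structure-preserving errors concrete is to compare the CV skeleton of a target word with that of an error: segmental substitutions leave the skeleton intact, whereas insertions and deletions change it. The sketch below is a simplification under assumed inputs (orthographic strings standing in for phoneme strings, a five-vowel inventory, Levenshtein distance as the structural distance); it is not the scoring scheme used in the study.

```python
# Sketch: compare the syllabic (CV) skeleton of a target word and an error.
# A structure-preserving substitution keeps the skeleton identical; an insertion
# or deletion changes it. The vowel inventory and distance metric are assumptions.

VOWELS = set("aeiou")

def cv_skeleton(phonemes: str) -> str:
    return "".join("V" if p in VOWELS else "C" for p in phonemes)

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

target = "tavolo"
for error in ["tacolo",    # segmental substitution: same CV skeleton
              "tavlo",     # vowel deletion: skeleton changes
              "travolo"]:  # consonant insertion: skeleton changes
    d = edit_distance(cv_skeleton(target), cv_skeleton(error))
    print(error, cv_skeleton(error), "skeleton distance:", d)
```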
198.
A dissociation between phonological and visual attention (VA) span disorders has been reported in dyslexic children. This study investigates whether this cognitively based dissociation has a neurobiological counterpart by examining two cases of developmental dyslexia. LL showed a phonological disorder but preserved VA span, whereas FG exhibited the reverse pattern. During a phonological rhyme judgement task, LL showed decreased activation of the left inferior frontal gyrus, whereas this region was activated at the level of the controls in FG. Conversely, during a visual categorization task, FG demonstrated decreased activation of the parietal lobules, whereas these regions were activated in LL as in the controls. These contrasting patterns of brain activation thus mirror the dissociation between the cognitive disorders. These findings provide the first evidence for an association between distinct brain mechanisms and distinct cognitive deficits in developmental dyslexia, emphasizing the importance of taking into account the heterogeneity of the reading disorder.
199.
We report a patient showing isolated phonological agraphia after an ischemic stroke involving the left supramarginal gyrus (SMG). In this patient, we investigated the effects of focal repetitive transcranial magnetic stimulation (rTMS) given as theta burst stimulation (TBS) over the left SMG, corresponding to Brodmann area (BA) 40. The patient and ten control subjects performed a word and nonword writing-to-dictation task before, and 5 and 30 min after, they received excitatory intermittent TBS (iTBS) over the left BA 40, the right-hemisphere homologue of BA 40, Wernicke’s area, or the primary visual cortex. iTBS over the left SMG led to a brief facilitation of phonological nonword writing to dictation. This case report illustrates that rTMS is able to influence, among other language functions, the phonological processes involved in written language production in stroke patients.
200.
While cognitive changes in aging and neurodegenerative disease have been widely studied, language changes in these populations are less well understood. Inflecting novel words in a language with complex inflectional paradigms provides a good opportunity to observe how language processes change in normal and abnormal aging. Studies of language acquisition suggest that children inflect novel words based on their phonological similarity to real words they already know. It is unclear whether speakers continue to use the same strategy when encountering novel words throughout the lifespan or whether adult speakers apply symbolic rules. We administered a simple speech elicitation task involving Finnish-conforming pseudo-words and real Finnish words to healthy older adults, individuals with mild cognitive impairment, and individuals with Alzheimer's disease (AD) to investigate inflectional choices in these groups and how linguistic variables and disease severity predict inflection patterns. Phonological resemblance of novel words to both a regular and an irregular inflectional type, as well as bigram frequency of the novel words, significantly influenced participants' inflectional choices for novel words among the healthy elderly group and people with AD. The results support theories of inflection by phonological analogy (single-route models) and contradict theories advocating for formal symbolic rules (dual-route models).
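A minimal sketch of the two predictors mentioned above, under strong simplifying assumptions: bigram frequency is estimated from a tiny made-up lexicon, and the single-route "analogy" choice simply adopts the inflection class of the most similar known words. The words, inflection classes, and similarity measure are hypothetical and are not real Finnish data or the authors' model.

```python
# Sketch of the single-route (analogy) idea: a novel word is inflected like the
# known words it most resembles, and its wordlikeness can be indexed by bigram
# frequency. The mini "lexicon" and its inflection classes are invented for
# illustration only.
from collections import Counter
from difflib import SequenceMatcher

LEXICON = {          # hypothetical word -> inflection class
    "kala": "regular",
    "pala": "regular",
    "sana": "regular",
    "vesi": "irregular",
    "kasi": "irregular",
}

def bigrams(word: str):
    return [word[i:i + 2] for i in range(len(word) - 1)]

BIGRAM_COUNTS = Counter(b for w in LEXICON for b in bigrams(w))

def bigram_frequency(word: str) -> float:
    """Mean lexicon count of the word's bigrams (a crude wordlikeness index)."""
    bs = bigrams(word)
    return sum(BIGRAM_COUNTS[b] for b in bs) / len(bs)

def inflect_by_analogy(novel: str, k: int = 2) -> str:
    """Adopt the majority inflection class of the k most similar known words."""
    neighbours = sorted(LEXICON, key=lambda w: SequenceMatcher(None, novel, w).ratio(),
                        reverse=True)[:k]
    return Counter(LEXICON[w] for w in neighbours).most_common(1)[0][0]

for novel in ["tala", "vusi"]:
    print(novel, f"bigram freq = {bigram_frequency(novel):.1f},",
          "analogical class:", inflect_by_analogy(novel))
```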