Similar Articles
20 similar articles found (search time: 15 ms).
1.
Event-related potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the phonemic-in-English condition, the speech sounds represented two different phonemic categories in English but the same phonemic category in Spanish. In the phonemic-in-Spanish condition, the speech sounds represented two different phonemic categories in Spanish but the same phonemic category in English. Results showed pre-attentive discrimination when the acoustics/phonetics of the speech sounds matched the language context (e.g., the phonemic-in-English condition during the English language context). The results suggest that language contexts can affect pre-attentive auditory change detection. Specifically, bilinguals' mental processing of stop consonants relies on contextual linguistic information.

2.
Perceptual discrimination between speech sounds belonging to different phoneme categories is better than that between sounds falling within the same category. This property, known as "categorical perception," is weaker in children affected by dyslexia. Categorical perception develops from the predispositions of newborns for discriminating all potential phoneme categories in the world's languages. Predispositions that are not relevant for phoneme perception in the ambient language are usually deactivated during early childhood. However, the current study shows that dyslexic children maintain a higher sensitivity to phonemic distinctions irrelevant in their linguistic environment. This suggests that dyslexic children use an allophonic mode of speech perception that, although without straightforward consequences for oral communication, has obvious implications for the acquisition of alphabetic writing. Allophonic perception specifically affects the mapping between graphemes and phonemes, contrary to other manifestations of dyslexia, and may be a core deficit.

3.
Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish). Participants classified the 1st syllable of disyllabic stimuli embedded in lists where the 2nd, task-irrelevant, syllable could contain a Catalan contrastive variation (/ɛ/–/e/) or no variation. Catalan dominants responded more slowly in lists where the 2nd syllable could vary from trial to trial, suggesting an indirect effect of the /ɛ/–/e/ discrimination. Spanish dominants did not suffer this interference, performing indistinguishably from Spanish monolinguals. The present findings provide implicit evidence that even proficient bilinguals categorize L2 sounds according to their L1 representations.

4.
We investigated the effects of visual speech information (articulatory gestures) on the perception of second language (L2) sounds. Previous studies have demonstrated that listeners often fail to hear the difference between certain non-native phonemic contrasts, such as in the case of Spanish native speakers regarding the Catalan sounds /ɛ/ and /e/. Here, we tested whether adding visual information about the articulatory gestures (i.e., lip movements) could enhance this perceptual ability. We found that, for auditory-only presentations, Spanish-dominant bilinguals failed to show sensitivity to the /ɛ/–/e/ contrast, whereas Catalan-dominant bilinguals did. Yet, when the same speech events were presented audiovisually, Spanish-dominants (as well as Catalan-dominants) were sensitive to the phonemic contrast. Finally, when the stimuli were presented only visually (in the absence of sound), neither group showed clear signs of discrimination. Our results suggest that visual speech gestures enhance second language perception at the level of phonological processing, especially by way of multisensory integration.

5.
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language‐specific phoneme categories, but how these categories are learned largely remains a mystery. Peperkamp, Le Calvez, Nadal, and Dupoux (2006) present an algorithm that can discover phonemes using the distributions of allophones as well as the phonetic properties of the allophones and their contexts. We show that a third type of information source, the occurrence of pairs of minimally differing word forms in speech heard by the infant, is also useful for learning phonemic categories and is in fact more reliable than purely distributional information in data containing a large number of allophones. In our model, learners build an approximation of the lexicon consisting of the high‐frequency n‐grams present in their speech input, allowing them to take advantage of top‐down lexical information without needing to learn words. This may explain how infants have already begun to exhibit sensitivity to phonemic categories before they have a large receptive lexicon.
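The minimal-pair cue described in this abstract lends itself to a compact illustration. The sketch below is our own hypothetical code, not the authors' model: it counts word-form pairs in a toy lexicon that differ in exactly one segment. A nonzero count is evidence that the two segments distinguish words (contrastive phonemes), whereas allophones never yield such pairs.

```python
def minimal_pair_evidence(lexicon, seg_a, seg_b):
    """Count word-form pairs in `lexicon` that differ only by seg_a vs seg_b.

    Each word form is a tuple of segments. A nonzero count is evidence that
    the two segments are separate phonemes rather than allophones.
    """
    count = 0
    forms = set(lexicon)
    for form in forms:
        for i, seg in enumerate(form):
            if seg == seg_a:
                # Swap in seg_b at this position and check for a real word.
                swapped = form[:i] + (seg_b,) + form[i + 1:]
                if swapped in forms:
                    count += 1
    return count

# Toy lexicon of high-frequency forms (invented for the example):
# "pat"/"bat" form a minimal pair, so /p/ and /b/ look contrastive;
# plain [t] and aspirated [th] never distinguish two forms.
lexicon = [("p", "a", "t"), ("b", "a", "t"), ("s", "i", "t"), ("s", "i", "p")]
print(minimal_pair_evidence(lexicon, "p", "b"))   # 1: contrastive
print(minimal_pair_evidence(lexicon, "t", "th"))  # 0: no minimal-pair evidence
```

A real learner would run this over high-frequency n-grams rather than a curated word list, but the decision criterion is the same.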

6.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

7.
The phonological processing units of speech production are language-specific. In Indo-European languages, the phoneme is an important functional unit of phonological encoding. A phoneme is the smallest unit of sound that distinguishes meaning in a given language; for example, "big" contains the three phonemes /b/, /i/, and /g/. To date, phonemes have received little attention in research on Chinese speech production. This project will use event-related potential (ERP) techniques to investigate phoneme processing in Chinese speech production, addressing two questions: (1) the psychological reality of phoneme processing in Chinese speech production, and whether phonemic representations are affected by a second language, by the acquisition of Hanyu Pinyin, and by experience using Pinyin; and (2) the mechanism of phoneme processing, specifically its specificity, position encoding, combinatorial structure, and time course. Answers to these questions will deepen our understanding of Chinese speech production and provide a foundation for building computational models of Chinese speech production, for comparing the mechanisms of Indo-European languages and Chinese, and for psychologically grounded methods of teaching Chinese phonology.

8.
Lexical context strongly influences listeners’ identification of ambiguous sounds. For example, a sound midway between /f/ and /s/ is reported as /f/ in “sheri_” but as /s/ in “Pari_.” Norris, McQueen, and Cutler (2003) have demonstrated that after hearing such lexically determined phonemes, listeners expand their phonemic categories to include more ambiguous tokens than before. We tested whether listeners adjust their phonemic categories for a specific speaker: Do listeners learn a particular speaker’s “accent”? Similarly, we examined whether perceptual learning is specific to the particular ambiguous phonemes that listeners hear, or whether the adjustments generalize to related sounds. Participants heard ambiguous /d/ or /t/ phonemes during a lexical decision task. They then categorized sounds on /d/-/t/ and /b/-/p/ continua, either in the same voice that they had heard for lexical decision, or in a different voice. Perceptual learning generalized across both speaker and test continua: Changes in perceptual representations are robust and broadly tuned.

9.
When learning language, young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two‐stage computational analysis of a large corpus of English child‐directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges – the beginning and ending phonemes of words – to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child‐directed speech is possible by attending to the statistics of single phoneme transitions and word‐initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.
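The first stage of the two-stage analysis, segmenting speech with phoneme transition probabilities, can be sketched in a few lines. The Python below is a minimal stand-in of our own, not the paper's actual procedure; the toy corpus, the one-character-per-phoneme encoding, and the 0.6 cutoff are all illustrative assumptions. A word boundary is placed wherever the transitional probability between adjacent phonemes dips below the cutoff.

```python
from collections import Counter

def transition_probs(utterances):
    """Estimate P(next phoneme | current phoneme) from adjacent pairs."""
    pair, first = Counter(), Counter()
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            pair[(a, b)] += 1
            first[a] += 1
    return {ab: n / first[ab[0]] for ab, n in pair.items()}

def segment(utt, tp, threshold=0.6):
    """Insert a word boundary wherever the transitional probability
    between two adjacent phonemes falls below `threshold`."""
    words, word = [], [utt[0]]
    for a, b in zip(utt, utt[1:]):
        if tp.get((a, b), 0.0) < threshold:
            words.append("".join(word))
            word = []
        word.append(b)
    words.append("".join(word))
    return words

# Toy "child-directed" corpus built from the invented words badu, gola, tipi;
# each character stands in for one phoneme.
corpus = ["badugola", "golatipi", "tipibadu", "badutipi", "golabadu", "tipigola"]
tp = transition_probs(corpus)
print(segment("badugola", tp))   # ['badu', 'gola']
print(segment("tipibadu", tp))   # ['tipi', 'badu']
```

The paper's second stage would then inspect the first and last phonemes of the segmented words to predict their lexical category; that step is omitted here for brevity.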

10.
The language environment modifies the speech perception abilities found in early development. In particular, adults have difficulty perceiving many nonnative contrasts that young infants discriminate. The underlying perceptual reorganization apparently occurs by 10-12 months. According to one view, it depends on experiential effects on psychoacoustic mechanisms. Alternatively, phonological development has been held responsible, with perception influenced by whether the nonnative sounds occur allophonically in the native language. We hypothesized that a phonemic process appears around 10-12 months that assimilates speech sounds to native categories whenever possible; otherwise, they are perceived in auditory or phonetic (articulatory) terms. We tested this with English-speaking listeners by using Zulu click contrasts. Adults discriminated the click contrasts; performance on the most difficult (80% correct) was not diminished even when the most obvious acoustic difference was eliminated. Infants showed good discrimination of the acoustically modified contrast even by 12-14 months. Together with earlier reports of developmental change in perception of nonnative contrasts, these findings support a phonological explanation of language-specific reorganization in speech perception.

11.
Forty bilinguals from several language backgrounds were contrasted with a group of English-speaking monolinguals on a verbal-manual interference paradigm. For the monolinguals, concurrent finger-tapping rate during speech output tasks was disrupted only for the right hand, indicating left-hemisphere language dominance. Bilingual laterality patterns were a function of language used: native (L1) versus second acquired (L2), and age of L2 acquisition. Early bilinguals (L1 + L2 acquisition prior to age 6) revealed left-hemisphere dominance for both languages, whereas late bilinguals (L2 acquired beyond age 6) revealed left-hemisphere dominance only for L1 and symmetrical hemispheric involvement for L2.

12.
Earlier experiments have shown that when one or more speech sounds in a sentence are replaced by a noise meeting certain criteria, the listener mislocalizes the extraneous sound and believes he hears the missing phoneme(s) clearly. The present study confirms and extends these earlier reports of phonemic restorations under a variety of novel conditions. All stimuli had some of the context necessary for the appropriate phonemic restoration following the missing sound, and all sentences had the missing phoneme deliberately mispronounced before electronic deletion (so that the neighboring phonemes could not provide acoustic cues to aid phonemic restorations). The results are interpreted in terms of mechanisms normally aiding veridical perception of speech and nonspeech sounds.

13.
Räsänen O. Cognition, 2011, (2): 149–176.
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this work, a computational model for word segmentation and learning of primitive lexical items from continuous speech is presented. The model does not utilize any a priori linguistic or phonemic knowledge such as phones, phonemes or articulatory gestures, but computes transitional probabilities between atomic acoustic events in order to detect recurring patterns in speech. Experiments with the model show that word segmentation is possible without any knowledge of linguistically relevant structures, and that the learned ungrounded word models show a relatively high selectivity towards specific words or frequently co-occurring combinations of short words.
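Because this model works from raw acoustics rather than phonemes, its first step is to discretize the signal into atomic events. The toy sketch below is our own illustration, not Räsänen's implementation: the 1-D "energy" track and the three-entry codebook are invented for the example. It quantizes feature frames against the codebook and counts transitions between the resulting event codes, which is the raw material for detecting recurring patterns.

```python
def quantize(frames, codebook):
    """Map each feature frame to the index of the nearest codebook entry,
    turning continuous acoustics into discrete atomic events."""
    return [min(range(len(codebook)), key=lambda k: abs(f - codebook[k]))
            for f in frames]

def transition_counts(codes):
    """Count transitions between successive atomic events."""
    counts = {}
    for a, b in zip(codes, codes[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

# Hypothetical 1-D "energy" track and a codebook of three acoustic prototypes.
frames = [0.1, 0.1, 0.9, 0.9, 0.5, 0.1, 0.1, 0.9]
codebook = [0.1, 0.5, 0.9]
codes = quantize(frames, codebook)   # [0, 0, 2, 2, 1, 0, 0, 2]
trans = transition_counts(codes)     # the 0 -> 2 transition recurs twice
```

In the full model, recurring high-probability transition sequences would then be grouped into candidate word patterns; real codebooks are built over multidimensional spectral features rather than a single scalar.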

14.
Ward J, Simner J. Cognition, 2003, 89(3): 237–261.
This study documents an unusual case of developmental synaesthesia, in which speech sounds induce an involuntary sensation of taste that is subjectively located in the mouth. JIW shows a highly structured, non-random relationship between particular combinations of phonemes (rather than graphemes) and the resultant taste, and this is influenced by a number of fine-grained phonemic properties (e.g. allophony, phoneme ordering). The synaesthesia is not found for environmental sounds. The synaesthesia, in its current form, is likely to have originated during vocabulary acquisition, since it is guided by learned linguistic and conceptual knowledge. The phonemes that trigger a given taste tend to also appear in the name of the corresponding foodstuff (e.g. /ɪ/, /n/ and /s/ can trigger a taste of mince /mɪns/) and there is often a semantic association between the triggering word and taste (e.g. the word blue tastes "inky"). The results suggest that synaesthesia does not simply reflect innate connections from one perceptual system to another, but that it can be mediated and/or influenced by a symbolic/conceptual level of representation.

15.
Learning and Motivation, 1987, 18(3): 235–260.
Pigeons were trained on a successive discrimination task using complex visual stimuli. In Experiment 1, each photographic slide that contained a person had a corresponding “matched background” slide, one that showed the same scene with the person removed. Birds trained on a human positive discrimination acquired the matched pairs problem, but birds trained on a human negative discrimination performed poorly. This suggests a feature-positive effect for complex stimulus categories. Memorization control groups that were trained on a human-irrelevant discrimination also performed poorly with matched slides. However, subsequent experiments demonstrated that these effects depended on the use of matched pairs of slides. The human-as-feature effect was not obtained when human positive and human negative groups were subsequently trained with nonmatched slides (Experiment 2), and memorization control groups acquired a human-irrelevant discrimination when trained with nonmatched slides (Experiment 3). Additional tests conducted in Experiments 2 and 3 found that performance was not disrupted when either the reinforced or nonreinforced slides were replaced. This effect was obtained when the category was relevant to the discrimination (Experiment 2) and when the category was irrelevant to the discrimination (Experiment 3). Finally, Experiment 4 demonstrated that memorization of a set of slides is possible when slides are sufficiently dissimilar (i.e., nonmatched), but performance is not as good when the category exemplars are irrelevant to the discrimination.

16.
The current study investigated the effects of semantic context, the cognate status of the word, and second-language age of acquisition and proficiency in two word-naming experiments (Experiments 1 and 2). Three groups of Bodo–Assamese bilinguals named cognate and non-cognate words in their first language, Bodo, and second language, Assamese, which were presented in categorized and randomized lists. Experiment 1 demonstrated significant category interference for both cognate and non-cognate words; whereas, in Experiment 2, category interference was observed only in the case of cognate words, indicating that naming in L2 was more prone to semantic effects. In Experiment 1, the magnitude of the category interference effect was larger for the low-proficient bilinguals, but in Experiment 2, only the high-proficient bilinguals demonstrated a category interference effect. Further, a cognate facilitation effect was not observed in either experiment, which is in line with the findings of previous studies. The findings are discussed in light of the predictions of the Revised Hierarchical Model.

17.
Although mnemonics have been shown to be effective in remembering letter-sound associations, the use of foreign words as cues for English phonemes had not been investigated. Learning phonemes in Japan is challenging because the Japanese language is based on a different sound unit called mora (mostly consonant-vowel combinations). This study investigated the effectiveness of using mnemonic images utilizing Japanese words as cues for the phonemes, and explicit sound contrasting of phonemic sounds with morae they could be confused with, in facilitating children's acquisition of knowledge about alphabet letter-sound correspondence. The participants were 140 6th-grade Japanese students who were taught phoneme-consonant correspondence, with or without the use of mnemonics or explicit sound contrasting. Analysis of the students’ pre- and post-instruction assessments revealed significant interaction effects between types of instruction provided and instruction phase, indicating better performance in letter-sound association as a consequence of the inclusion of both mnemonics and explicit sound contrasting.

18.
English, French, and bilingual English-French 17-month-old infants were compared for their performance on a word learning task using the Switch task. Object names presented a /b/ vs. /g/ contrast that is phonemic in both English and French, and auditory strings comprised English and French pronunciations by an adult bilingual. Infants were habituated to two novel objects labeled 'bowce' or 'gowce' and were then presented with a switch trial where a familiar word and familiar object were paired in a novel combination, and a same trial with a familiar word–object pairing. Bilingual infants looked significantly longer to switch vs. same trials, but English and French monolinguals did not, suggesting that bilingual infants can learn word–object associations when the phonetic conditions favor their input. Monolingual infants likely failed because the bilingual mode of presentation increased phonetic variability and did not match their real-world input. Experiment 2 tested this hypothesis by presenting monolingual infants with nonce word tokens restricted to native language pronunciations. Monolinguals succeeded in this case. Experiment 3 revealed that the presence of unfamiliar pronunciations in Experiment 2, rather than a reduction in overall phonetic variability, was the key factor to success, as French infants failed when tested with English pronunciations of the nonce words. Thus phonetic variability impacts how infants perform in the switch task in ways that contribute to differences in monolingual and bilingual performance. Moreover, both monolinguals and bilinguals are developing adaptive speech processing skills that are specific to the language(s) they are learning.

19.
The perceptual restoration of musical sounds was investigated in 5 experiments with Samuel's (1981a) discrimination methodology. Restoration in familiar melodies was compared to phonemic restoration in Experiment 1. In the remaining experiments, we examined the effect of expectations (generated by familiarity, predictability, and musical schemata) on musical restoration. We investigated restoration in melodies by comparing familiar and unfamiliar melodies (Experiment 2), as well as unfamiliar melodies varying in tonal and rhythmic predictability (Experiment 3). Expectations based on both familiarity and predictability were found to reduce restoration at the melodic level. Restoration at the submelodic level was investigated with scales and chords in Experiments 4 and 5. At this level, key-based expectations were found to increase restoration. Implications for music perception, as well as similarities between restoration in music and speech, are discussed.

20.
There is an ongoing debate whether phonological deficits in dyslexics should be attributed to (a) less specified representations of speech sounds, as suggested by studies in young children with a familial risk for dyslexia, or (b) an impaired access to these phonemic representations, as suggested by studies in adults with dyslexia. These conflicting findings are rooted in between-study differences in sample characteristics and/or testing techniques. The current study uses the same multivariate functional MRI (fMRI) approach as previously used in adults with dyslexia to investigate phonemic representations in 30 beginning readers with a familial risk and 24 beginning readers without a familial risk of dyslexia, of whom 20 were later retrospectively classified as dyslexic. Based on fMRI response patterns evoked by listening to different utterances of /bA/ and /dA/ sounds, multivoxel analyses indicate that the underlying activation patterns of the two phonemes were distinct in children with a low family risk but not in children with high family risk. However, no group differences were observed between children that were later classified as typical versus dyslexic readers, regardless of their family risk status, indicating that poor phonemic representations constitute a risk for dyslexia but are not sufficient to result in reading problems. We hypothesize that poor phonemic representations are trait (family risk) and not state (dyslexia) dependent, and that representational deficits only lead to reading difficulties when they are present in conjunction with other neuroanatomical or -functional deficits.
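The multivoxel logic of this study, asking whether activation patterns for /bA/ and /dA/ are distinguishable, can be caricatured with a nearest-centroid decoder. The sketch below is our own simplification with made-up three-"voxel" patterns, not the study's fMRI pipeline: if held-out patterns are assigned to the correct class centroid above chance, the two phonemes evoke distinct neural patterns.

```python
def nearest_centroid_decode(train, test):
    """Label each test pattern with the class whose mean training
    pattern (centroid) is closest in squared Euclidean distance."""
    centroids = {}
    for label, patterns in train.items():
        dims = len(patterns[0])
        centroids[label] = [sum(p[d] for p in patterns) / len(patterns)
                            for d in range(dims)]

    def dist(p, c):
        return sum((x - y) ** 2 for x, y in zip(p, c))

    return [min(centroids, key=lambda lab: dist(p, centroids[lab]))
            for p in test]

# Made-up "voxel" response patterns: /bA/ and /dA/ evoke distinct patterns,
# as in the low-family-risk children; real analyses use cross-validated
# decoding over hundreds of voxels.
train = {"ba": [[1.0, 0.0, 1.0], [1.0, 0.0, 0.8]],
         "da": [[0.0, 1.0, 0.0], [0.0, 1.0, 0.2]]}
test = [[0.9, 0.1, 0.9], [0.1, 0.9, 0.1]]
print(nearest_centroid_decode(train, test))   # ['ba', 'da']
```

If the two classes' patterns overlapped, as reported for the high-family-risk group, decoding accuracy would fall to chance.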


Copyright©北京勤云科技发展有限公司  京ICP备09084417号