Similar Documents
20 similar documents found (search time: 46 ms)
1.
Previous research has shown that the perception of speech sounds is strongly influenced by the internal structure of maternal language categories. Specifically, stimuli judged as good exemplars of a phonemic category are more difficult to discriminate from similar sounds than bad exemplars are from equally similar sounds. This effect seems to be restricted to phonemes present in the maternal language, and is acquired in the first months of life. The present study investigates the malleability of speech acquisition by analysing the discrimination capacities for L2 phonemes in highly proficient Spanish-Catalan bilinguals born into monolingual families. In Experiment 1, subjects were required to give goodness-of-fit judgments to establish the best exemplars corresponding to three different vowel categories (Catalan /e/ and /ɛ/, Spanish /e/). In Experiments 2 and 3, bilinguals were asked to perform a discrimination task with materials in their maternal language (Exp. 2) and in their second language (Exp. 3). Results reveal that bilinguals show a reduced discrimination capacity only for good exemplars of their maternal language, but not for good exemplars of their second language. The same pattern of results was obtained in Experiment 4, using a within-subjects design and a bias-free discrimination measure (d'). These findings support the hypothesis that phonemic categories are not only acquired early in life but that, under some circumstances, the acquisition of new phonemic categories can be seriously compromised, in spite of early and extensive exposure to the L2.
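The bias-free discrimination measure (d') used in Experiment 4 separates perceptual sensitivity from response bias by comparing z-transformed hit and false-alarm rates. A minimal sketch of the standard computation follows; the function name and the clamping constant are illustrative choices, not taken from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=1e-4):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    # Clamp rates away from 0 and 1 to avoid infinite z-scores.
    hit = min(max(hit_rate, eps), 1 - eps)
    fa = min(max(fa_rate, eps), 1 - eps)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit) - z(fa)
```

A d' of 0 indicates chance-level discrimination regardless of how often a listener says "different"; higher values indicate genuine sensitivity to the contrast.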

2.
We investigated the effects of visual speech information (articulatory gestures) on the perception of second language (L2) sounds. Previous studies have demonstrated that listeners often fail to hear the difference between certain non-native phonemic contrasts, as in the case of Spanish native speakers and the Catalan sounds /ɛ/ and /e/. Here, we tested whether adding visual information about the articulatory gestures (i.e., lip movements) could enhance this perceptual ability. We found that, for auditory-only presentations, Spanish-dominant bilinguals failed to show sensitivity to the /ɛ/–/e/ contrast, whereas Catalan-dominant bilinguals did. Yet, when the same speech events were presented audiovisually, Spanish-dominants (as well as Catalan-dominants) were sensitive to the phonemic contrast. Finally, when the stimuli were presented only visually (in the absence of sound), neither group showed clear signs of discrimination. Our results suggest that visual speech gestures enhance second language perception at the level of phonological processing, likely by way of multisensory integration.

3.
The present study examined the extent to which verbal auditory agnosia (VAA) is primarily a phonemic decoding disorder, as contrasted with a more global defect in acoustic processing. Subjects were six young adults who presented with VAA in childhood and who, at the time of testing, showed varying degrees of residual auditory discrimination impairment. They were compared with a group of young adults with normal language development matched for age and gender. Cortical event-related potentials (ERPs) were recorded to tones and to consonant-vowel stimuli presented in an "oddball" discrimination paradigm. In addition to cortical ERPs, auditory brainstem responses (ABRs) and middle latency responses (MLRs) were recorded. Cognitive and language assessments were obtained for the VAA subjects. ABRs and MLRs were normal. In comparison with the control group, the cortical ERPs of the VAA subjects showed a delay in the N1 component recorded over lateral temporal cortex both to tones and to speech sounds, despite an N1 of normal latency overlying the frontocentral region of the scalp. These electrophysiologic findings indicate a slowing of processing of both speech and nonspeech auditory stimuli and suggest that the locus of this abnormality is within the secondary auditory cortex on the lateral surface of the temporal lobes.

4.
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language‐specific phoneme categories, but how these categories are learned largely remains a mystery. Peperkamp, Le Calvez, Nadal, and Dupoux (2006) present an algorithm that can discover phonemes using the distributions of allophones as well as the phonetic properties of the allophones and their contexts. We show that a third type of information source, the occurrence of pairs of minimally differing word forms in speech heard by the infant, is also useful for learning phonemic categories and is in fact more reliable than purely distributional information in data containing a large number of allophones. In our model, learners build an approximation of the lexicon consisting of the high‐frequency n‐grams present in their speech input, allowing them to take advantage of top‐down lexical information without needing to learn words. This may explain how infants have already begun to exhibit sensitivity to phonemic categories before they have a large receptive lexicon.

5.
Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish). Participants classified the 1st syllable of disyllabic stimuli embedded in lists where the 2nd, task-irrelevant, syllable could contain a Catalan contrastive variation (/ɛ/-/e/) or no variation. Catalan dominants responded more slowly in lists where the 2nd syllable could vary from trial to trial, suggesting an indirect effect of the /ɛ/-/e/ discrimination. Spanish dominants did not suffer this interference, performing indistinguishably from Spanish monolinguals. The present findings provide implicit evidence that even proficient bilinguals categorize L2 sounds according to their L1 representations.

6.
Perceptual discrimination between speech sounds belonging to different phoneme categories is better than that between sounds falling within the same category. This property, known as "categorical perception," is weaker in children affected by dyslexia. Categorical perception develops from the predispositions of newborns for discriminating all potential phoneme categories in the world's languages. Predispositions that are not relevant for phoneme perception in the ambient language are usually deactivated during early childhood. However, the current study shows that dyslexic children maintain a higher sensitivity to phonemic distinctions irrelevant in their linguistic environment. This suggests that dyslexic children use an allophonic mode of speech perception that, although without straightforward consequences for oral communication, has obvious implications for the acquisition of alphabetic writing. Allophonic perception specifically affects the mapping between graphemes and phonemes, contrary to other manifestations of dyslexia, and may be a core deficit.

7.
The language environment modifies the speech perception abilities found in early development. In particular, adults have difficulty perceiving many nonnative contrasts that young infants discriminate. The underlying perceptual reorganization apparently occurs by 10-12 months. According to one view, it depends on experiential effects on psychoacoustic mechanisms. Alternatively, phonological development has been held responsible, with perception influenced by whether the nonnative sounds occur allophonically in the native language. We hypothesized that a phonemic process appears around 10-12 months that assimilates speech sounds to native categories whenever possible; otherwise, they are perceived in auditory or phonetic (articulatory) terms. We tested this with English-speaking listeners by using Zulu click contrasts. Adults discriminated the click contrasts; performance on the most difficult (80% correct) was not diminished even when the most obvious acoustic difference was eliminated. Infants showed good discrimination of the acoustically modified contrast even by 12-14 months. Together with earlier reports of developmental change in perception of nonnative contrasts, these findings support a phonological explanation of language-specific reorganization in speech perception.

8.
Event-related brain potentials (ERPs) were used to determine whether low left-hemisphere arousal or unusual cortical responses to speech stimuli might be associated with anomalies in language function that reportedly occur when psychopaths perform lateralized information-processing tasks. ERPs to phonemic stimuli were recorded while 11 psychopathic (P) and 13 nonpsychopathic (NP) male prison inmates performed a Single-Task and a Dual-Task. In the Single-Task, a speech discrimination 'oddball' paradigm, the subject was required to respond whenever a target stimulus (the less frequent of two phonemes) occurred. In the Dual-Task, he had to respond to target stimuli while simultaneously performing a perceptual-motor (distractor) task. There were no group differences in ERP measures of central arousal (N100) during performance of the Single- and Dual-Tasks. For both groups, the P300 component of the ERP to the target stimulus was smaller and had longer latency during the Dual-Task than during the Single-Task, indicating that in the Dual-Task phonemic discrimination and the perceptual-motor task competed for similar perceptual resources. Overlapping Group P's P300 responses to the target stimulus during the Dual-Task was a vertex-maximal, asymmetric (left-hemisphere) positive slow wave (SW), suggesting unusual speech processing in psychopaths under conditions of distraction, perhaps related to reduced sensitivity to the sequential probabilities associated with events presented in an auditory channel. The results were consistent with the hypothesis that psychopaths have limited left-hemisphere resources for processing linguistic stimuli.

9.
Spanish-English coordinate bilinguals served as subjects in a GSR linguistic conditioning experiment using strong and mild buzzer conditions and spoken stimuli. Each subject was randomly assigned to one of two word lists and one of two buzzer intensities. A Spanish word from the Spanish list and an English word from the English list functioned as conditioned stimuli (CS). The lists contained Spanish and English words that were semantically related, phonemically related, or unrelated to the CS. Generalization was studied under conscious and unconscious conditions. We found that both buzzer conditions resulted in significantly greater GSR responses to semantic and phonemic words than to words unrelated to the CS. Generalization to semantic words was not significantly greater than to phonemic words. There was a tendency toward greater phonemic than semantic generalization in the strong buzzer condition; the opposite was observed with the mild buzzer. The results were the same in both lists and languages. Under a conscious and unstressful condition, generalization to semantic words was found to be more prominent than to phonemic words, suggesting that under normal conditions semantic generalization is mediated by conscious cognition. We concluded that strong emotion produces an increase in phonemic, as compared to semantic, generalization in both languages; hence, primitivization of the subjects' cognitive and linguistic functioning is assumed to have occurred. These results are important for understanding the deleterious effect that stressful situations may have on linguistic functioning and cognition in bilinguals.

10.
The ‘automatic letter‐sound integration hypothesis’ (Blomert, 2011) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio‐visual objects. We tested this hypothesis in a sample of English‐speaking children with dyslexic difficulties (N = 13) and samples of chronological‐age‐matched (CA; N = 17) and reading‐age‐matched controls (RA; N = 17) aged 7–13 years. Each child took part in two priming experiments in which speech sounds were preceded by congruent visual letters (congruent condition) or Greek letters (baseline). In a behavioural experiment, responses to speech sounds in the two conditions were compared using reaction times. These data revealed faster reaction times in the congruent condition in all three groups. In a second electrophysiological experiment, responses to speech sounds in the two conditions were compared using event‐related potentials (ERPs). These data revealed a significant effect of congruency on (1) the P1 ERP over left frontal electrodes in the CA group and over fronto‐central electrodes in the dyslexic group and (2) the P2 ERP in the dyslexic and RA control groups. These findings suggest that our sample of English‐speaking children with dyslexic difficulties demonstrate a degree of letter‐sound integration that is appropriate for their reading level, which challenges the letter‐sound integration hypothesis.

11.
Different kinds of speech sounds are used to signify possible word forms in every language. For example, lexical stress is used in Spanish (/‘be.be/, ‘he/she drinks’ versus /be.’be/, ‘baby’), but not in French (/‘be.be/ and /be.’be/ both mean ‘baby’). Infants learn many such native language phonetic contrasts in their first year of life, likely using a number of cues from parental speech input. One such cue could be parents’ object labeling, which can explicitly highlight relevant contrasts. Here we ask whether phonetic learning from object labeling is abstract—that is, whether learning can generalize to new phonetic contexts. We investigate this issue in the prosodic domain, as the abstraction of prosodic cues (like lexical stress) has been shown to be particularly difficult. One group of 10-month-old French-learners was given consistent word labels that contrasted on lexical stress (e.g., Object A was labeled /‘ma.bu/, and Object B was labeled /ma.’bu/). Another group of 10-month-olds was given inconsistent word labels (i.e., mixed pairings), and stress discrimination in both groups was measured in a test phase with words made up of new syllables. Infants trained with consistently contrastive labels showed an earlier effect of discrimination compared to infants trained with inconsistent labels. Results indicate that phonetic learning from object labeling can indeed generalize, and suggest one way infants may learn the sound properties of their native language(s).

12.
Language experience 'narrows' speech perception by the end of infants' first year, reducing discrimination of non-native phoneme contrasts while improving native-contrast discrimination. Previous research showed that declines in non-native discrimination were reversed by second-language experience provided at 9-10 months, but it is not known whether second-language experience affects first-language speech sound processing. Using event-related potentials (ERPs), we examined learning-related changes in brain activity to Spanish and English phoneme contrasts in monolingual English-learning infants pre- and post-exposure to Spanish from 9.5-10.5 months of age. Infants showed a significant discriminatory ERP response to the Spanish contrast at 11 months (post-exposure), but not at 9 months (pre-exposure). The English contrast elicited an earlier discriminatory response at 11 months than at 9 months, suggesting improvement in native-language processing. The results show that infants rapidly encode new phonetic information, and that improvement in native speech processing can occur during second-language learning in infancy.

13.
English‐monolingual children develop a shape bias early in language acquisition, such that they more often generalize a novel label based on shape than on other features. Spanish‐monolingual children, however, do not show this bias to the same extent (Hahn & Cantrell, 2012). Studying children who are simultaneously learning both Spanish and English presents a unique opportunity to further investigate how this word‐learning bias develops. Thus, we asked how Spanish–English bilingual children (mean age = 21.31 months) perform in a novel‐noun generalization (NNG) task, specifically examining how past language experience (i.e. language exposure and vocabulary size) and present language context (i.e. whether the NNG task was conducted in Spanish or English) influence the strength of the shape bias. Participants completed the NNG task either entirely in English (N = 16) or entirely in Spanish (N = 16), as well as language understanding tasks in both English and Spanish to ensure that they understood what the experimenter was asking them to do. Parents completed a language exposure survey and vocabulary checklists in Spanish and English. There was a significant interaction between condition and choice type: bilingual children in the English condition showed a shape bias in the NNG task, but bilingual children in the Spanish condition showed no reliable biases. No measures of past language experience were related to NNG task performance. These results suggest that when learning new words, bilingual children are attuned to the regularities of the present language context, and prior language experiences may play a more secondary role.

14.
Though bilinguals know many more words than monolinguals, within each language bilinguals exhibit some processing disadvantages, extending to sublexical processes specifying the sound structure of words (Gollan & Goldrick, Cognition, 125(3), 491–497, 2012). This study investigated the source of this bilingual disadvantage. Spanish–English bilinguals, Mandarin–English bilinguals, and English monolinguals repeated tongue twisters composed of English nonwords. Twister materials were made up of sound sequences that are unique to the English language (nonoverlapping) or sound sequences that are highly similar—yet phonetically distinct—in the two languages for the bilingual groups (overlapping). If bilingual disadvantages in tongue-twister production result from competition between phonetic representations in their two languages, bilinguals should have more difficulty selecting an intended target when similar sounds are activated in the overlapping sound sequences. Alternatively, if bilingual disadvantages reflect the relatively reduced frequency of use of sound sequences, bilinguals should have greater difficulty in the nonoverlapping condition (as the elements of such sound sequences are limited to a single language). Consistent with the frequency-lag account, but not the competition account, both Spanish–English and Mandarin–English bilinguals were disadvantaged in tongue-twister production only when producing twisters with nonoverlapping sound sequences. Thus, the bilingual disadvantage in tongue-twister production likely reflects reduced frequency of use of sound sequences specific to each language.

15.
The purpose of the present study was to investigate the extent to which phoneme and grapheme‐phoneme contrasts between English and Spanish are associated with word‐decoding problems in Spanish‐speaking adults who are early readers of English. To test whether such a co‐occurrence exists, word‐decoding exercises were administered to four groups of Spanish‐speaking subjects. The groups were organized on the basis of English‐language proficiency. Significant differences in word‐decoding performance appeared between words containing letter‐sounds found in both Spanish and English and those with letter‐sounds not found in Spanish.

16.
Speech sounds are judged reliably and absolutely, while the judgment of nonspeech stimuli, such as tones, is thought to be unreliable and dependent on contextual cues. Here we demonstrated that the judgment of tonal stimuli may also be reliable and absolute, provided that the subjects are trained musicians. In Experiment 1, musicians with relative pitch identified 21 tonal intervals ranging from unison to major third, and the resulting identification functions were similar to those that have been previously obtained for speech. In Experiment 3, the judgment of intervals by musicians was shown to be free of context effects, since the best subjects gave virtually identical judgments to the same intervals in two stimulus contexts. Similar results were obtained in Experiments 2 and 4 for the judgment of single tones by possessors of absolute pitch. Performance with both notes and intervals by nonmusicians, however, was unreliable and greatly influenced by context. These findings suggest that musicians acquire categories for pitch that are functionally similar to phonemic categories for speech.

17.
Current models of reading and speech perception differ widely in their assumptions regarding the interaction of orthographic and phonological information during language perception. The present experiments examined this interaction through a two-alternative, forced-choice paradigm, and explored the nature of the connections between graphemic and phonemic processing subsystems. Experiments 1 and 2 demonstrated a facilitation-dominant influence (i.e., benefits exceed costs) of graphemic contexts on phoneme discrimination, which is interpreted as a sensitivity effect. Experiments 3 and 4 demonstrated a symmetrical influence (i.e., benefits equal costs) of phonemic contexts on grapheme discrimination, which can be interpreted as either a bias effect or an equally facilitative/inhibitory sensitivity effect. General implications for the functional architecture of language processing models are discussed, as well as specific implications for models of visual word recognition and speech perception.

18.
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

19.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

20.
Many words have more than one meaning, and these meanings vary in their degree of relatedness. In the present experiment, we examined whether this degree of relatedness is influenced by whether or not the two meanings share a translation in a bilingual’s other language. Native English speakers with Spanish as a second language (i.e., English–Spanish bilinguals) and native Spanish speakers with English as a second language (i.e., Spanish–English bilinguals) were presented with pairs of phrases instantiating different senses of ambiguous English words (e.g., dinner date vs. expiration date) and were asked to decide whether the two senses were related in meaning. Critically, for some pairs of phrases, a single Spanish translation encompassed both meanings of the ambiguous word (joint-translation condition; e.g., mercado in Spanish refers to both a flea market and the housing market), but for others, each sense corresponded to a different Spanish translation (split-translation condition; e.g., cita in Spanish refers to a dinner date, but fecha refers to an expiration date). The proportions of “yes” (related) responses revealed that, relative to monolingual English speakers, Spanish–English bilinguals consider joint-translation senses to be less related than split-translation senses. These findings exemplify semantic cross-language influences from a first to a second language and reveal the semantic structure of the bilingual lexicon.
