Similar Articles (20 results)
1.
This event-related potential (ERP) study examined the impact of phonological variation resulting from a vowel merger on phoneme perception. The perception of the /e/-/ε/ contrast, which does not exist in Southern French-speaking regions and is in the process of merging in Northern French-speaking regions, was compared to the /ø/-/y/ contrast, which is stable in all French-speaking regions. French-speaking participants from Switzerland, for whom the /e/-/ε/ contrast is preserved but who are exposed to different regional variants, performed a same-different task. They first heard four phonemically identical but acoustically different syllables (e.g., /be/-/be/-/be/-/be/), and then heard a test syllable that was either phonemically identical to (/be/) or phonemically different from (/bε/) the preceding context stimuli. The results showed that the unstable /e/-/ε/ contrast induced only a mismatch negativity (MMN), whereas the /ø/-/y/ contrast elicited both an MMN and electrophysiological differences on the P200. These findings were in line with the behavioral results, in which responses were slower and more error-prone in the /e/-/ε/ deviant condition than in the /ø/-/y/ deviant condition. Together these findings suggest that the regional variability in the speech input to which listeners are exposed affects the perception of speech sounds in their own accent.

2.
The aim of this study was to determine whether the type of bilingualism affects neural organisation. We performed identification experiments and mismatch negativity (MMN) recordings in Finnish and Swedish language settings to see whether behavioural identification and neurophysiological discrimination of vowels depend on the linguistic context, and whether there is a difference between two kinds of bilinguals. The stimuli were two vowels, which differentiate meaning in Finnish but not in Swedish. The results indicate that Balanced Bilinguals are inconsistent in identification performance and have a longer MMN latency. Moreover, their MMN amplitude is context-independent, while Dominant Bilinguals show a larger MMN in the Finnish context. These results indicate that Dominant Bilinguals inhibit the preattentive discrimination of a native contrast in a context where the distinction is non-phonemic, but this is not possible for Balanced Bilinguals. This implies that Dominant Bilinguals have separate systems, while Balanced Bilinguals have one inseparable system.

3.
The auditory temporal deficit hypothesis predicts that children with reading disability (RD) will exhibit deficits in the perception of speech and nonspeech acoustic stimuli in discrimination and temporal ordering tasks when the interstimulus interval (ISI) is short. Initial studies testing this hypothesis did not account for the potential presence of attention deficit hyperactivity disorder (ADHD). Temporal order judgment and discrimination tasks were administered to children with (1) RD/no-ADHD (n=38), (2) ADHD (n=29), (3) RD and ADHD (RD/ADHD; n=32), and (4) no impairment (NI; n=43). Contrary to predictions, children with RD showed no specific sensitivity to ISI and performed worse than children without RD on speech but not nonspeech tasks. Relationships between perceptual tasks and phonological processing measures were stronger and more consistent for speech than nonspeech stimuli. These results were independent of the presence of ADHD and suggest that children with RD have a deficit in phoneme perception that correlates with reading and phonological processing ability.

4.
Event-related potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the phonemic-in-English condition, the speech sounds represented two different phonemic categories in English but the same phonemic category in Spanish. In the phonemic-in-Spanish condition, the speech sounds represented two different phonemic categories in Spanish but the same phonemic category in English. Results showed pre-attentive discrimination when the acoustics/phonetics of the speech sounds matched the language context (e.g., the phonemic-in-English condition during the English language context). The results suggest that language context can affect pre-attentive auditory change detection; specifically, bilinguals' mental processing of stop consonants relies on contextual linguistic information.

5.
Using the mismatch negativity (MMN) response, we examined how Standard French and Southern French speakers access the meaning of words ending in /e/ or /ε/ vowels which are contrastive in Standard French but not in Southern French. In Standard French speakers, there was a significant difference in the amplitude of the brain response after the deviant-minus-standard subtraction between the frontocentral (FC) and right lateral (RL) recording sites for the final-/ε/ word but not the final-/e/ word. In contrast, the difference in the amplitude of the brain response between the FC and RL recording sites did not significantly vary as a function of the word's final vowel in Southern French speakers. Our findings provide evidence that access to lexical meaning in spoken word recognition depends on the speaker's native regional accent.
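The deviant-minus-standard subtraction used throughout these MMN studies can be illustrated with a minimal sketch: average the epochs for each condition, subtract the standard average from the deviant average, and locate the most negative peak of the difference wave. This is a hypothetical illustration only; the sampling rate, search window, and simulated data below are assumptions for demonstration, not the analysis pipeline of any study above.

```python
import numpy as np

FS = 500          # sampling rate in Hz (assumed for this sketch)
N_SAMPLES = 300   # 600 ms single-channel epochs (assumed)

def difference_wave(standard_epochs, deviant_epochs):
    """Average each condition across trials, then subtract standard from deviant."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mmn_peak(diff_wave, fs=FS, window=(0.100, 0.250)):
    """Most negative amplitude of the difference wave in a typical MMN window."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    segment = diff_wave[lo:hi]
    peak_idx = int(np.argmin(segment))
    return segment[peak_idx], (lo + peak_idx) / fs

# Simulated epochs: deviants carry an extra negative deflection near 150 ms.
rng = np.random.default_rng(0)
t = np.arange(N_SAMPLES) / FS
standard = rng.normal(0.0, 0.5, (100, N_SAMPLES))
deviant = rng.normal(0.0, 0.5, (100, N_SAMPLES)) - 2.0 * np.exp(
    -((t - 0.150) ** 2) / (2 * 0.02 ** 2)
)

dw = difference_wave(standard, deviant)
amp, latency = mmn_peak(dw)
print(f"MMN peak amplitude {amp:.2f} (a.u.) at {latency * 1000:.0f} ms")
```

Averaging across trials before subtracting is what makes the deflection visible: trial-level noise cancels in the mean, leaving the condition difference.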

6.
Previous studies showed that manipulating the speech production system influences speech perception, an influence mediated by task difficulty, listening conditions, and attention. In the present study we investigated the specificity of a somatosensory manipulation (a spoon held over the tongue) during passive listening. We measured the mismatch negativity (MMN) while participants listened to vowels that differed in articulation (tongue height) and familiarity (native vs. unknown vowels). The same participants heard the vowels in a spoon block and a no-spoon block, with block order counterbalanced across participants. Results showed no specific effect of the spoon; instead, starting with the spoon block enhanced the MMN amplitude. A second experiment showed the same MMN enhancement when participants started with a somatosensory manipulation applied to a non-articulator (the hand). This result suggests that starting a study with a somatosensory manipulation raises attention to the task.

7.
Mitterer H, Ernestus M. Cognition, 2008, 109(1): 168-173
This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.

8.
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.

9.
McMurray B, Aslin RN. Cognition, 2005, 95(2): B15-B26
Previous research on speech perception in both adults and infants has supported the view that consonants are perceived categorically; that is, listeners are relatively insensitive to variation below the level of the phoneme. More recent work, on the other hand, has shown adults to be systematically sensitive to within-category variation [McMurray, B., Tanenhaus, M., & Aslin, R. (2002). Gradient effects of within-category phonetic variation on lexical access. Cognition, 86(2), B33-B42]. Additionally, recent evidence suggests that infants are capable of using within-category variation to segment speech and to learn phonetic categories. Here we report two studies of 8-month-old infants, using the head-turn preference procedure, that examine more directly infants' sensitivity to within-category variation. Infants were exposed to 80 repetitions of words beginning with either /b/ or /p/. After exposure, listening times to tokens of the same category with small variations in voice onset time (VOT) differed significantly both from listening times to the originally exposed tokens and from listening times to the cross-category-boundary competitors. Thus infants, like adults, show systematic sensitivity to fine-grained, within-category detail in speech perception.

10.
We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we exposed a group of monolingual Spanish- and Catalan-learning 8-month-old infants to a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video in which the audio stream preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. In both experiments, infants detected the 666 ms and 500 ms asynchronies; that is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to the A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language-familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infants' responses to linguistic input at this age.

11.
The Language Familiarity Effect (LFE), whereby listeners are better at processing talker-voice information in their native language than in an unfamiliar language, has received renewed attention in the past 10 years. Numerous studies have sought to probe the underlying causes of this advantage by cleverly manipulating aspects of the stimuli (using phonologically related languages, backwards speech, and nonwords) and by examining individual differences across listeners (testing reading ability and pitch perception). Most of these studies find evidence for the importance of phonological information or phonological processing as a supporting mechanism for the LFE. What has not been carefully examined, however, is how other methodological considerations, such as task effects and stimulus length, can change performance on talker-voice processing tasks. In this review, I provide an overview of the literature on the LFE and examine how methodological decisions affect the presence or absence of the LFE. This article is categorized under:
  • Linguistics > Language in Mind and Brain
  • Psychology > Language

12.
A man, a woman, or a child saying the same vowel do so with very different voices. The auditory system solves the complex problem of extracting what was said despite substantial differences in the acoustic properties of these voices. Much of the acoustic variation between the voices of men and women is due to differences in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker, it could potentially correct for speaker-sex-related acoustic variation, thus facilitating vowel recognition. This study measured the minimum stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or a woman, and the minimum stimulus duration necessary to accurately recognise which vowel was spoken. Results showed that reliable vowel recognition precedes reliable speaker-sex discrimination, thus questioning the use of speaker-sex information in compensating for speaker-sex-related acoustic variation in the voice. Furthermore, the pattern of performance across experiments in which the fundamental frequency and formant frequency information of speakers' voices were systematically varied differed markedly depending on whether the task was speaker-sex discrimination or vowel recognition. This argues for there being little relationship between perception of speaker sex (indexical information) and perception of what has been said (linguistic information) at short durations.

13.
This article aims to provide a theoretical framework to elucidate the neurophysiological underpinnings of deviance detection as reflected by mismatch negativity. A six-step model of the information processing necessary for deviance detection is proposed. In this model, predictive coding of learned regularities is realized by means of long-term potentiation with a crucial role for NMDA receptors. Mismatch negativity occurs at the last stage of the model, reflecting the increase in free energy associated with the switching on of silent synapses and the formation of new neural circuits required for adaptation to the environmental deviance. The model is discussed with regard to the pathological states most studied in relation to mismatch negativity: alcohol intoxication, alcohol withdrawal, and schizophrenia.

14.
To examine whether the auditory mismatch negativity reflects automatic processing, this experiment improved the cross-modal selective-attention paradigm in which visual and auditory stimuli are presented simultaneously, providing better control over the unattended-auditory condition. The results showed that auditory deviant stimuli elicited a mismatch negativity under both attended and unattended auditory conditions. When auditory stimuli were attended, the mean amplitude of the deviance-related negativity at 140-180 ms did not differ significantly from that of the negativity in the same time window under the unattended condition, whereas the deviance-related negativity at 180-220 ms had a larger mean amplitude when auditory stimuli were attended than in the same window when they were unattended. Under the unattended-auditory condition, the mean amplitude and peak latency of the mismatch negativity were unaffected by the difficulty of the visual task. These results provide further evidence for the view that the auditory mismatch negativity reflects automatic processing.

15.
Purpose: Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Methods: Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], critical in speech perception and language development, were compared between CWS and TDC. Results: There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the mismatch negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. Conclusions: The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in CWS. Findings were linked to existing conceptualizations of stuttering etiology. Educational objectives: The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings on central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter.

16.
Recent research with cotton-top tamarin monkeys has revealed language discrimination abilities similar to those found in human infants, demonstrating that these perceptual abilities are not unique to humans but are also present in non-human primates. Specifically, tamarins could discriminate forward but not backward sentences of Dutch from Japanese, using both natural and synthesized utterances. The present study was designed as a conceptual replication of the work on tamarins. Results show that rats trained in a discrimination learning task readily discriminate forward, but not backward sentences of Dutch from Japanese; the results are particularly robust for synthetic utterances, a pattern that shows greater parallels with newborns than with tamarins. Our results extend the claims made in the research with tamarins that the capacity to discriminate languages from different rhythmic classes depends on general perceptual abilities that evolved at least as far back as the rodents.

17.
Previous studies have shown that children suffering from developmental dyslexia have a deficit in categorical perception of speech sounds. The aim of the current study was to better understand the nature of this categorical perception deficit. In this study, categorical perception skills of children with dyslexia were compared with those of chronological age and reading level controls. Children identified and discriminated /do-to/ syllables along a voice onset time (VOT) continuum. Results showed that children with dyslexia discriminated among phonemically contrastive pairs less accurately than did chronological age and reading level controls and also showed higher sensitivity in the discrimination of allophonic contrasts. These results suggest that children with dyslexia perceive speech with allophonic units rather than phonemic units. The origin of allophonic perception in the course of perceptual development and its implication for reading acquisition are discussed.

18.
Hannon EE. Cognition, 2009, 111(3): 403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture's language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

19.
Listeners must cope with a great deal of variability in the speech signal, and thus theories of speech perception must also account for variability, which comes from a number of sources, including variation between accents. It is well known that there is a processing cost when listening to speech in an accent other than one's own, but recent work has suggested that this cost is reduced when listening to a familiar accent widely represented in the media, and/or when short amounts of exposure to an accent are provided. Little is known, however, about how these factors (long-term familiarity and short-term familiarization with an accent) interact. The current study tested this interaction by playing listeners difficult-to-segment sentences in noise, before and after a familiarization period where the same sentences were heard in the clear, allowing us to manipulate short-term familiarization. Listeners were speakers of either Glasgow English or Standard Southern British English, and they listened to speech in either their own or the other accent, thereby allowing us to manipulate long-term familiarity. Results suggest that both long-term familiarity and short-term familiarization mitigate the perceptual processing costs of listening to an accent that is not one's own, but seem not to compensate for them entirely, even when the accent is widely heard in the media.

20.
Humans often look at other people in natural scenes, and previous research has shown that these looks follow the conversation and that they are sensitive to sound in audiovisual speech perception. In the present experiment, participants viewed video clips of four people involved in a discussion. By removing the sound, we asked whether auditory information would affect when speakers were fixated, how fixations between different observers were synchronized, and whether the eyes or mouth were looked at most often. The results showed that sound changed the timing of looks—by alerting observers to changes in conversation and attracting attention to the speaker. Clips with sound also led to greater attentional synchrony, with more observers fixating the same regions at the same time. However, looks towards the eyes of the people continued to dominate and were unaffected by removing the sound. These findings provide a rich example of multimodal social attention.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号