Similar Articles
20 similar articles found (search time: 764 ms)
1.
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.

2.
Recent work on perceptual learning shows that listeners' phonemic representations dynamically adjust to reflect the speech they hear (Norris, McQueen, & Cutler, 2003). We investigate how the perceptual system makes such adjustments, and what (if anything) causes the representations to return to their pre-perceptual learning settings. Listeners are exposed to a speaker whose pronunciation of a particular sound (either /s/ or /ʃ/) is ambiguous (e.g., halfway between /s/ and /ʃ/). After exposure, participants are tested for perceptual learning on two continua that range from /s/ to /ʃ/, one in the Same voice they heard during exposure, and one in a Different voice. To assess how representations revert to their prior settings, half of Experiment 1's participants were tested immediately after exposure; the other half performed a 25-min silent intervening task. The perceptual learning effect was actually larger after such a delay, indicating that simply allowing time to pass does not cause learning to fade. The remaining experiments investigate different ways that the system might unlearn a person's pronunciations: listeners hear the Same or a Different speaker for 25 min with either: no relevant (i.e., 'good') /s/ or /ʃ/ input (Experiment 2), one of the relevant inputs (Experiment 3), or both relevant inputs (Experiment 4). The results support a view of phonemic representations as dynamic and flexible, and suggest that they interact with both higher- (e.g., lexical) and lower-level (e.g., acoustic) information in important ways.

3.
It is well established that variation in caregivers’ speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish‐learning children's early experiences with language predict efficiency in real‐time comprehension and vocabulary learning. Measures of mothers’ speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner.

4.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non‐speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

5.
Bradlow AR, Bent T. Cognition, 2008, 106(2): 707-729.
This study investigated talker-dependent and talker-independent perceptual adaptation to foreign-accent English. Experiment 1 investigated talker-dependent adaptation by comparing native English listeners' recognition accuracy for Chinese-accented English across single and multiple talker presentation conditions. Results showed that the native listeners adapted to the foreign-accented speech over the course of the single talker presentation condition with some variation in the rate and extent of this adaptation depending on the baseline sentence intelligibility of the foreign-accented talker. Experiment 2 investigated talker-independent perceptual adaptation to Chinese-accented English by exposing native English listeners to Chinese-accented English and then testing their perception of English produced by a novel Chinese-accented talker. Results showed that, if exposed to multiple talkers of Chinese-accented English during training, native English listeners could achieve talker-independent adaptation to Chinese-accented English. Taken together, these findings provide evidence for highly flexible speech perception processes that can adapt to speech that deviates substantially from the pronunciation norms in the native talker community along multiple acoustic-phonetic dimensions.

6.
A class of selective attention models often applied to speech perception is used to study effects of training on the perception of an unfamiliar phonetic contrast. Attention-to-dimension (A2D) models of perceptual learning assume that the dimensions that structure listeners' perceptual space are constant and that learning involves only the reweighting of existing dimensions to emphasize or de-emphasize different sensory dimensions. Multidimensional scaling is used to identify the acoustic-phonetic dimensions listeners use before and after training to recognize the 3 classes of Korean stop consonants. Results suggest that A2D models can account for some observed restructuring of listeners' perceptual space, but listeners also show evidence of directing attention to a previously unattended dimension of phonetic contrast.
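To make the attention-to-dimension idea concrete, the following minimal Python sketch assumes a weighted-Euclidean distance over a fixed two-dimensional perceptual space; the dimension labels, token coordinates, and attention weights are hypothetical placeholders for illustration, not values estimated in the study.

    import numpy as np

    # Hypothetical 2-D perceptual space for Korean stop tokens, with dimensions
    # [VOT, f0 at vowel onset], both scaled to 0-1. Coordinates are placeholders.
    tokens = {
        "lenis":     np.array([0.45, 0.30]),
        "aspirated": np.array([0.85, 0.75]),
        "fortis":    np.array([0.10, 0.80]),
    }

    def a2d_distance(x, y, weights):
        # Weighted Euclidean distance: an A2D model changes only the weights,
        # stretching or shrinking existing dimensions rather than adding new ones.
        w = np.asarray(weights, dtype=float)
        return float(np.sqrt(np.sum(w * (x - y) ** 2)))

    pre_training  = [0.9, 0.1]   # attention concentrated on the VOT dimension
    post_training = [0.4, 0.6]   # attention partly redirected to the f0 dimension

    for label, w in [("pre", pre_training), ("post", post_training)]:
        d = a2d_distance(tokens["lenis"], tokens["fortis"], w)
        print(f"{label}-training lenis/fortis distance: {d:.3f}")

Under this toy reweighting, the lenis/fortis pair becomes more separable after training, which is the kind of restructuring an A2D account can capture without positing new dimensions.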

7.
This study shows that the ratio of voice onset time (VOT) to syllable duration for /t/ and /d/ presents distributions with a stable boundary across speaking rates and that this boundary constitutes a perceptual criterion by which listeners judge the category affiliation of VOT. In Experiment 1, best-fit regression lines for VOT ratios of intervocalic /t/ and /d/ against speaking rate had zero slopes, and there was an inferable boundary between the distributions. In Experiment 2, listeners' identifications of syllable-initial stops conformed to this boundary ratio. In Experiment 3, VOT was held constant, while VOT ratios were altered by modifying the duration of the following vowel. As VOT ratios exceeded the boundary estimated from the data of Experiment 1, listeners' identifications shifted from /d/ to /t/. Timing relations in speech production can determine the identification of voicing categories across speaking rates.
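The rate-invariant criterion described above can be sketched as a simple threshold on the VOT-to-syllable-duration ratio; the boundary value of 0.15 and the durations below are invented placeholders, not the estimates obtained in Experiments 1-3.

    BOUNDARY_RATIO = 0.15  # hypothetical criterion, not the value fit to the data

    def classify_stop(vot_ms: float, syllable_ms: float) -> str:
        # Classify a syllable-initial alveolar stop from the ratio of voice onset
        # time to syllable duration. Because both terms shrink together as speaking
        # rate increases, the same ratio criterion can apply across rates.
        ratio = vot_ms / syllable_ms
        return "/t/" if ratio > BOUNDARY_RATIO else "/d/"

    # The same absolute 30-ms VOT is heard differently once syllable duration changes:
    print(classify_stop(vot_ms=30, syllable_ms=250))  # slower speech -> /d/ (ratio 0.12)
    print(classify_stop(vot_ms=30, syllable_ms=150))  # faster speech -> /t/ (ratio 0.20)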

8.
Adult language users have an enormous amount of experience with speech in their native language. As a result, they have very well-developed processes for categorizing the sounds of speech that they hear. Despite this very high level of experience, recent research has shown that listeners are capable of redeveloping their speech categorization to bring it into alignment with new variation in their speech input. This reorganization of phonetic space is a type of perceptual learning, or recalibration, of speech processes. In this article, we review several recent lines of research on perceptual learning for speech.

9.
The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants’ subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants’ prior experience with the distribution of sounds that make up words in natural languages.
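As a rough illustration of the forward and backward transitional-probability statistics referred to above, the sketch below computes them over an invented syllable stream; the syllables and the "fu ga" example are made up for illustration and are not the Italian materials used with the infants.

    from collections import Counter

    def transition_probs(syllables):
        # Forward TP(x -> y) = freq(xy) / freq(x); backward TP(x -> y) = freq(xy) / freq(y).
        # Sequences with high TPs in both directions cohere as word-like units.
        pair_counts = Counter(zip(syllables, syllables[1:]))
        unigrams = Counter(syllables)
        forward  = {p: c / unigrams[p[0]] for p, c in pair_counts.items()}
        backward = {p: c / unigrams[p[1]] for p, c in pair_counts.items()}
        return forward, backward

    # Invented stream in which "fu ga" recurs as a unit while "ga me" spans a boundary.
    stream = "fu ga me lo fu ga bi ci to me fu ga bi lo".split()
    fwd, bwd = transition_probs(stream)
    print(fwd[("fu", "ga")], bwd[("fu", "ga")])   # 1.0 1.0   -> a good candidate label
    print(fwd[("ga", "me")], bwd[("ga", "me")])   # ~0.33 0.5 -> weaker in both directions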

10.
How does the development and consolidation of perceptual, attentional, and higher cognitive abilities interact with language acquisition and processing? We explored children's (ages 5–17) and adults’ (ages 18–51) comprehension of morphosyntactically varied sentences under several competing speech conditions that varied in the degree of attentional demands, auditory masking, and semantic interference. We also evaluated the relationship between subjects’ syntactic comprehension and their word reading efficiency and general ‘speed of processing’. We found that the interactions between perceptual and attentional processes and complex sentence interpretation changed considerably over the course of development. Perceptual masking of the speech signal had an early and lasting impact on comprehension, particularly for more complex sentence structures. In contrast, increased attentional demand in the absence of energetic auditory masking primarily affected younger children's comprehension of difficult sentence types. Finally, the predictability of syntactic comprehension abilities by external measures of development and expertise is contingent upon the perceptual, attentional, and semantic milieu in which language processing takes place.

11.
Attention orienting effects of hesitations in speech: evidence from ERPs
Filled-pause disfluencies such as um and er affect listeners' comprehension, possibly mediated by attentional mechanisms (J. E. Fox Tree, 2001). However, there is little direct evidence that hesitations affect attention. The current study used an acoustic manipulation of continuous speech to induce event-related potential components associated with attention (mismatch negativity [MMN] and P300) during the comprehension of fluent and disfluent utterances. In fluent cases, infrequently occurring acoustically manipulated target words gave rise to typical MMN and P300 components when compared to nonmanipulated controls. In disfluent cases, where targets were preceded by natural sounding hesitations culminating in the filled pause er, an MMN (reflecting a detection of deviance) was still apparent for manipulated words, but there was little evidence of a subsequent P300. This suggests that attention was not reoriented to deviant words in disfluent cases. A subsequent recognition test showed that nonmanipulated words were more likely to be remembered if they had been preceded by a hesitation. Taken together, these results strongly implicate attention in an account of disfluency processing: Hesitations orient listeners' attention, with consequences for the immediate processing and later representation of an utterance.

12.
The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension of less intelligible/distorted speech with more intelligible speech. Meta-analysis 2 (21 studies) identified areas associated with speech production. The results indicate that difficult comprehension involves, first, increased reliance on cortical regions in which comprehension and production overlapped (bilateral anterior Superior Temporal Sulcus (STS) and anterior Supplementary Motor Area (pre-SMA)) and on an area associated with intelligibility processing (left posterior middle temporal gyrus, MTG), and, second, increased reliance on cortical areas associated with general executive processes (bilateral anterior insulae). Comprehension of distorted speech may be supported by a hybrid neural mechanism combining increased involvement of areas associated with general executive processing and areas shared between comprehension and production.
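The ALE statistic itself can be sketched compactly: each study's reported activation foci are modelled as Gaussian probability blobs, and the ALE value at a voxel is the union of the studies' modelled-activation values. The coordinates, the 10-mm width, and the unnormalized Gaussian in the sketch below are simplifications for illustration, not the parameters of the published analyses.

    import numpy as np

    def modelled_activation(voxel, foci, sigma_mm=10.0):
        # Per-study modelled activation at one voxel: each reported focus contributes
        # a (simplified, unnormalized) Gaussian, and the study's value is the union
        # of those per-focus probabilities.
        v = np.asarray(voxel, dtype=float)
        probs = [np.exp(-np.sum((v - np.asarray(f, dtype=float)) ** 2) / (2 * sigma_mm ** 2))
                 for f in foci]
        return 1.0 - float(np.prod([1.0 - p for p in probs]))

    def ale_value(voxel, studies):
        # ALE at a voxel: the probability that at least one study activates it,
        # i.e. the union of the per-study modelled-activation values.
        ma = [modelled_activation(voxel, foci) for foci in studies]
        return 1.0 - float(np.prod([1.0 - m for m in ma]))

    # Hypothetical foci (MNI coordinates, mm) near left posterior MTG from three studies.
    studies = [
        [(-58, -42, 4)],
        [(-54, -46, 6), (-60, -38, 2)],
        [(-50, -50, 8)],
    ]
    print(round(ale_value((-56, -44, 4), studies), 3))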

13.
In two experiments, the nature of the relation between attention available at learning and subsequent automatic and controlled influences of memory was explored. Participants studied word lists in full and divided encoding conditions. Memory for the word lists was then tested with a perceptually driven task (stem completion) in Experiment 1 and with a conceptually driven task (category association) in Experiment 2. For recall cued with word stems, automatic influences of memory derived using the process-dissociation procedure remained invariant across a manipulation of attention that substantially reduced conscious recollection for the learning episode. In contrast, for recall cued with category names, dividing attention at learning significantly reduced the parameter estimates representing both controlled and automatic memory processes. These findings were similar to those obtained using indirect test instructions. The results suggest that, in contrast to perceptual priming, conceptual priming may be enhanced by semantic processing, and this effect is not an artifact of contamination from conscious retrieval processes.
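For reference, the parameter estimates mentioned above come from the standard process-dissociation equations, P(inclusion) = R + A(1 - R) and P(exclusion) = A(1 - R), usually attributed to Jacoby (1991). The proportions in the sketch below are invented to show the qualitative stem-completion pattern (dividing attention lowers recollection R while leaving the automatic estimate A unchanged); they are not the article's data.

    def process_dissociation(p_inclusion: float, p_exclusion: float):
        # Standard estimates: P(inclusion) = R + A(1 - R) and P(exclusion) = A(1 - R),
        # so R = inclusion - exclusion and A = exclusion / (1 - R).
        recollection = p_inclusion - p_exclusion
        automatic = p_exclusion / (1.0 - recollection) if recollection < 1.0 else float("nan")
        return recollection, automatic

    # Invented proportions illustrating the qualitative pattern for stem completion:
    # dividing attention at study lowers R while leaving A essentially unchanged.
    print(process_dissociation(0.615, 0.315))  # full attention    -> R ~ 0.30, A ~ 0.45
    print(process_dissociation(0.505, 0.405))  # divided attention -> R ~ 0.10, A ~ 0.45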

14.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

15.
Previous research suggests that exposure to accent variability can affect toddlers’ familiar word recognition and word comprehension. The current preregistered study addressed the gap in knowledge on early language development in infants exposed to two dialects from birth and assessed the role of dialect similarity in infants’ word recognition and comprehension. Twelve-month-old Norwegian-learning infants, exposed to native Norwegian parents speaking either the same or two different Norwegian dialects, took part in two eye-tracking tasks assessing familiar word form recognition and word comprehension. Their parents’ speech was assessed for similarity by native Norwegian speakers. First, in contrast to previous research, our results revealed no listening preference for words over nonwords in either monodialectal or bidialectal infants, suggesting potential language-specific differences in the onset of word recognition. Second, the results showed evidence for word comprehension in monodialectal infants, but not in bidialectal infants, suggesting that exposure to dialectal variability impacts early word acquisition. Third, perceptual similarity between parental dialects tended to facilitate bidialectal infants’ word recognition and comprehension. Fourth, the results revealed a strong correlation between the raters’ and parents’ assessments of similarity between dialects, indicating that parental estimations can be reliably used to assess the variability of infants’ speech input at home. Finally, our results revealed a strong relationship between word recognition and comprehension in monodialectal infants and the absence of such a relationship in bidialectal infants, suggesting either that these two skills do not necessarily align in infants exposed to more variable input, or that the alignment might occur at a later stage.

16.
Previous studies have suggested that perceptual information regarding to-be-remembered words in the study phase affects the accuracy of judgement of learning (JOL). However, few have investigated whether the perceptual information in the JOL phase influences JOL accuracy. This study examined the influence of cue word perceptual information in the JOL phase on immediate and delayed JOL accuracy through changes in cue word font size. In Experiment 1, large-cue word pairs had significantly higher mean JOL magnitude than small-cue word pairs in immediate JOLs and higher relative accuracy than small-cue pairs in delayed JOLs, but font size had no influence on recall performance. Experiment 2 increased the JOL time, and mean JOL magnitude did not reliably differ for large-cue compared with small-cue pairs in immediate JOLs. However, the influence on relative accuracy still existed in delayed JOLs. Experiment 3 increased the familiarity of small-cue words in the delayed JOL phase by adding a lexical decision task. The results indicated that cue word font size no longer affected relative accuracy in delayed JOLs. The three experiments in our study indicated that the perceptual information regarding cue words in the JOL phase affects immediate and delayed JOLs in different ways.
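The abstract does not name the statistic behind "relative accuracy", but in the JOL literature it is conventionally the Goodman-Kruskal gamma correlation between item-by-item JOLs and later recall; the sketch below computes it on invented data.

    def goodman_kruskal_gamma(jols, recalled):
        # Gamma = (concordant - discordant) / (concordant + discordant), counted over
        # all item pairs that differ on both the JOL and the recall outcome.
        concordant = discordant = 0
        for i in range(len(jols)):
            for j in range(i + 1, len(jols)):
                product = (jols[i] - jols[j]) * (recalled[i] - recalled[j])
                if product > 0:
                    concordant += 1
                elif product < 0:
                    discordant += 1
        return (concordant - discordant) / (concordant + discordant)

    # Invented data: JOLs (0-100) and later recall (1 = recalled) for five word pairs.
    print(goodman_kruskal_gamma([80, 60, 40, 30, 20], [1, 1, 0, 1, 0]))  # ~0.67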

17.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners' perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener's orientation to speech stimuli.

18.
Both speech production and speech comprehension involve the representation of homophones during lexical access. In speech production research, Levelt and Caramazza derived shared and independent models of homophone representation from, respectively, the two-stage discrete-activation model and the independent-network model of lexical access, and tested them with frequency effects in language experiments and phonological treatment effects in patients. This article reviews recent progress in this research, discusses the disagreements among models of homophone representation, and argues that the lexical representation of homophones depends on factors such as language differences, processing paradigms, and perceptual modality. Research on speech comprehension (speech perception and word recognition) suggests that these two models may be unable to capture the representation of homophones, especially Chinese homophones. Based on recent findings from speech comprehension research, this article proposes several possible representation models.

19.
Young children are frequently exposed to sounds such as speech and music in noisy listening conditions, which have the potential to disrupt their learning. Missing input that is masked by louder sounds can, under the right conditions, be ‘filled in’ by the perceptual system using a process known as perceptual restoration. This experiment compared the ability of 4‐ to 6‐year‐old children, 9‐ to 11‐year‐old children and adults to complete a melody identification task using perceptual restoration. Melodies were presented either intact (complete input), with noise‐filled gaps (partial input; perceptual restoration can occur) or with silence‐filled gaps (partial input; perceptual restoration cannot occur). All age groups could use perceptual restoration to help them interpret partial input, yet perception was the most detrimentally affected by the presentation of partial input for the youngest children. This implies that they might have more difficulty perceiving sounds in noisy environments than older children or adults. Young children had particular difficulty using partial input for identification under conditions where perceptual restoration could not occur. These findings suggest that perceptual restoration is a crucial mechanism in young children, where processes that fill in missing sensory input represent an important part of the way meaning is extracted from a complex sensory world. Copyright © 2011 John Wiley & Sons, Ltd.

20.
Susca M, Healey EC. Journal of Fluency Disorders, 2002, 27(2): 135-60; quiz 160-1.
The purpose of this study was to conduct a phenomenological analysis (a qualitative research method) of unbiased listeners' perceptions of six speech samples across a fluency-disfluency continuum. A sample of 60 individuals heard only one sample chosen from three levels of fluent or three levels of disfluent speech. Listeners were interviewed following the presentation of the speech sample and their comments were analyzed with respect to the perception of the speaker's communicative effectiveness. Communicative effectiveness was supported by three phenomenological categories: speaker attributes, listener attributes, and story attributes. Five theme clusters further supported these categories: speech production, context, speaker identity, listener comfort, and story comprehension. The results showed that listener perceptions within theme clusters varied across the six speech samples. The results also showed that listeners differentially respond to a broad array of information in the speech signal (not simply fluency or disfluency). These findings support Traunmüller's (1994) modulation theory associated with information that can be obtained from the speech signal. Implications for the treatment of stuttering are also discussed. EDUCATIONAL OBJECTIVES: The reader will learn (1) how listeners may have multiple and varying perceptual experiences depending upon where along a fluency-disfluency continuum a speech sample is heard; (2) how perceptual experiences are influenced by speaker, listener, and story attributes; and (3) how phenomenological analysis may expand our understanding of multifactorial issues associated with stuttering.

