Similar Literature
20 similar documents found (search time: 31 ms)
1.
Languages differ in their phonological structure, and psycholinguists have begun to explore the consequences of this fact for speech perception. We review research documenting that listeners attune their perceptual processes finely to exploit the phonological regularities of their native language. As a consequence, these perceptual processes are ill-adapted to listening to languages that do not display such regularities. Thus, not only do late language-learners have trouble speaking a second language, they also do not hear it as native speakers do; worse, they apply their native-language listening procedures, which may actually interfere with successful processing of the non-native input. We also present data from studies on infants showing that the initial attunement occurs early in life; very young infants are sensitive to the relevant phonological regularities which distinguish different languages, and quickly distinguish the native language of their environment from languages with different regularities.

2.
Listening in aging adults: from discourse comprehension to psychoacoustics.   (Cited by: 4; self-citations: 0; citations by others: 4)
Older adults, whether or not they have clinically significant hearing loss, have more trouble than their younger counterparts understanding speech in everyday life. These age-related difficulties in speech understanding may be attributed to changes in higher-level cognitive processes such as language comprehension, memory, attention, and cognitive slowing, or to lower-level sensory and perceptual processes. A complicating factor in determining how these sources might contribute to age-related declines in speech understanding is that they are highly correlated. Experimenters have typically focused either on cognitive declines or sensory declines in artificially optimized test conditions. In contrast, our approach focuses on the complex interactions between age-related changes in cognitive and perceptual factors that affect spoken language comprehension, especially in nonideal, realistic conditions. In this article, we describe our attempts to systematically investigate sensory-cognitive interactions in controlled experimental situations. We begin by looking at experimental conditions that closely approximate everyday listening, and show that older adults do indeed experience deficits in spoken language comprehension relative to younger adults in these conditions. We then review further experiments designed to isolate more precisely the cognitive and perceptual sources of these age-related differences and how they vary with listening condition. In large part, we find that age-related changes in speech understanding are a consequence of auditory declines.

3.
Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than in unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners’ ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals’ learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations.

4.
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners who heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words, suggesting a sublexical locus for learning, and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
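Noise vocoding itself is a standard signal-processing manipulation, so a brief sketch may help readers unfamiliar with it. The following Python sketch is a generic textbook-style vocoder, not the stimulus-generation code used in the study: the signal is split into log-spaced band-pass channels, each channel's amplitude envelope is extracted, and the envelopes modulate band-limited noise, preserving temporal envelope cues while discarding spectral fine structure. The channel count and band edges here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, lo=100.0, hi=4000.0):
    """Noise-vocode `signal`: split it into log-spaced band-pass channels,
    extract each channel's amplitude envelope, and use the envelope to
    modulate band-limited noise. More channels = more spectral detail."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))                  # slow amplitude contour
        noise = sosfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * noise                           # envelope-modulated noise
    return out / np.max(np.abs(out))                      # normalise to +/-1
```

With a small number of channels (roughly 4-8), vocoded sentences are initially hard to report but become substantially more intelligible with brief exposure, which is the perceptual-learning effect this abstract describes.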

5.
A study of implicit learning phenomena in speech production (言语生成中内隐学习现象的研究)   (Cited by: 3; self-citations: 0; citations by others: 3)
杨金鑫. 《心理科学》 (Psychological Science), 2002, 25(3): 322-324, 331
Structural priming and speech errors in language production have conventionally been explained by separate mechanisms: transient activation and phonological constraints, respectively. Research on implicit learning, which began in the late 1960s, offers a fresh perspective on the mechanisms of language production. Linguistic experience and learning mechanisms play an important role across the various phenomena of language production, and structural priming and speech errors are better understood as a form of implicit sequence learning.

6.
ABSTRACT— Early theories of how infants develop spatial concepts focused on the perceptual and cognitive abilities that contribute to this ability. More recent research, however, has centered on whether experience with spatial language might also play a role. The present article reviews how infants learn to form spatial categories, outlining the perceptual and cognitive abilities that drive this learning, and examines the role played by spatial language. I argue that infants' spatial concepts initially are the result of nonlinguistic perceptual and cognitive abilities, but that, as infants build a spatial lexicon, spatial language becomes an important tool in the spatial categories infants learn to form.

7.
Bilingual and monolingual infants differ in how they process linguistic aspects of the speech signal. But do they also differ in how they process non‐linguistic aspects of speech, such as who is talking? Here, we addressed this question by testing Canadian monolingual and bilingual 9‐month‐olds on their ability to learn to identify native Spanish‐speaking females in a face‐voice matching task. Importantly, neither group was familiar with Spanish prior to participating in the study. In line with our predictions, bilinguals succeeded in learning the face‐voice pairings, whereas monolinguals did not. We consider multiple explanations for this finding, including the possibility that simultaneous bilingualism enhances perceptual attentiveness to talker‐specific speech cues in infancy (even in unfamiliar languages), and that early bilingualism delays perceptual narrowing to language‐specific talker recognition cues. This work represents the first evidence that multilingualism in infancy affects the processing of non‐linguistic aspects of the speech signal, such as talker identity.

8.
Infant speech perception bootstraps word learning   (Cited by: 2; self-citations: 0; citations by others: 2)
By their first birthday, infants can understand many spoken words. Research in cognitive development has long focused on the conceptual changes that accompany word learning, but learning new words also entails perceptual sophistication. Several developmental steps are required as infants learn to segment, identify and represent the phonetic forms of spoken words, and map those word forms to different concepts. We review recent research on how infants' perceptual systems unfold in the service of word learning, from initial sensitivity for speech to the learning of language-specific sound patterns. Building on a recent theoretical framework and emerging new methodologies, we show how speech perception is crucial for word learning, and suggest that it bootstraps the development of a separate but parallel phonological system that links sound to meaning.

9.
ABSTRACT— Previous research suggests that a language learned during early childhood is completely forgotten when contact with that language is severed. In contrast with these findings, we report leftover traces of early language exposure in individuals in their adult years, despite a complete absence of explicit memory for the language. Specifically, native English individuals under age 40 selectively relearned subtle Hindi or Zulu sound contrasts that they once knew. However, individuals over 40 failed to show any relearning, and young control participants with no previous exposure to Hindi or Zulu showed no learning. This research highlights the lasting impact of early language experience in shaping speech perception, and the value of exposing children to foreign languages even if such exposure does not continue into adulthood.

10.
ABSTRACT

The perceptual brain is designed around multisensory input. Areas once thought dedicated to a single sense are now known to work with multiple senses. It has been argued that the multisensory nature of the brain reflects a cortical architecture for which task, rather than sensory system, is the primary design principle. This supramodal thesis is supported by recent research on human echolocation and multisensory speech perception. In this review, we discuss the behavioural implications of a supramodal architecture, especially as they pertain to auditory perception. We suggest that the architecture implies a degree of perceptual parity between the senses and that cross-sensory integration occurs early and completely. We also argue that a supramodal architecture implies that perceptual experience can be shared across modalities and that this sharing should occur even without bimodal experience. We finish by briefly suggesting areas of future research.

11.
12.
Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore, the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception, can be enhanced through an implicit task-irrelevant learning procedure that has been shown to produce visual perceptual learning. The single-formant sounds were paired at subthreshold levels with the attended targets in an auditory identification task. Results showed that task-irrelevant learning occurred for the unattended stimuli. Surprisingly, the magnitude of this learning effect was similar to that following explicit training on auditory formant transition detection using discriminable stimuli in an adaptive procedure, whereas explicit training on the subthreshold stimuli produced no learning. These results suggest that in adults learning of speech parts can occur at least partially through implicit mechanisms.

13.
ABSTRACT

Since beginning readers rely on their oral language to gain meaning from text, oral reading is the preferred mode of reading for these students. While there are several reasons for this, one is that reading development seems to parallel Vygotsky's theory of language development. His theory states that language proceeds from a social speech to the development of inner speech. As children internalize language, which allows for abstract thought processes to develop, they go through a period of egocentric speech. During this time, the children use language overtly to control and monitor their learning. Children's reading behavior may go through a similar process.

14.
Werker JF, Pons F, Dietrich C, Kajikawa S, Fais L, Amano S. Cognition, 2007, 103(1): 147-162
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behaviour and Development, 7, 49-63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.

15.
The development of speech perception has a profound influence on an individual's language development. During the first year of life, shaped by language experience, infants' speech perception develops from an initial language-general sensitivity into perception tuned specifically to the native language. Researchers have proposed a statistical learning mechanism to explain this process: infants are highly sensitive to the frequency distributions of speech sounds in their language environment and, by computing over these distributions, can carve the speech continuum into the phonetic categories that carry meaningful contrasts in the native language. In addition, a functional reorganization mechanism and certain social cues also exert important influences on the development of infant speech perception.
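The distributional-learning mechanism described above can be made concrete with a small simulation. The sketch below is an illustration under stated assumptions, not a model from the cited work: synthetic voice-onset-time (VOT) values stand in for the frequency distribution of one acoustic cue, and scikit-learn's GaussianMixture compares a one-category against a two-category description of the continuum by BIC, recovering the category centres when the input is bimodal.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic bimodal input: voice-onset-time values (ms) sampled from a
# short-lag and a long-lag category, standing in for the frequency
# distribution of one cue in the speech an infant hears.
vot = np.concatenate([
    rng.normal(15, 5, 500),    # short-lag tokens (e.g., /b/-like)
    rng.normal(70, 10, 500),   # long-lag tokens (e.g., /p/-like)
]).reshape(-1, 1)

# A distributional learner asks: is this continuum better described by
# one category or by two? Lower BIC = better account of the input.
for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(vot)
    print(f"{k} category(ies): BIC = {gmm.bic(vot):.1f}")

# With bimodal input the two-component model wins, and its component
# means recover the centres of the two phonetic categories.
best = GaussianMixture(n_components=2, random_state=0).fit(vot)
print("recovered category means (ms):", np.sort(best.means_.ravel()).round(1))
```

Unimodal input (a single Gaussian) would instead favour the one-category model, which is how, on this account, exposure statistics determine whether infants maintain or lose a contrast.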

16.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners' perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener's orientation to speech stimuli.

17.
Four children demonstrating speech and language impairments were examined with respect to their ability to learn to identify certain auditory temporal perceptual information. These children listened to six-element temporal patterns and made judgments about the temporal proximity of two of the elements. Subjects listened to the patterns over a number of exposures ranging from 6 to 14, depending on the subject. Performance on the task improved significantly with repeated exposures. However, the disordered subjects' best performance was still significantly poorer than normal children who had only 1 exposure to the task. These results suggest that, in part, performance differences on temporal perceptual tasks between speech and language disordered children and normal children can be accounted for by differences in perceptual learning. However, because the disordered children never reached normal levels, learning differences may be associated with a fundamental deficit in temporal processing or some other mechanism such as impaired attention.

18.

19.
Speech processing requires sensitivity to long-term regularities of the native language yet demands listeners to flexibly adapt to perturbations that arise from talker idiosyncrasies such as nonnative accent. The present experiments investigate whether listeners exhibit dimension-based statistical learning of correlations between acoustic dimensions defining perceptual space for a given speech segment. While engaged in a word recognition task guided by perceptually unambiguous voice-onset time (VOT) acoustics signaling beer, pier, deer, or tear, listeners were exposed incidentally to an artificial "accent" deviating from English norms in its correlation of the pitch onset of the following vowel (F0) to VOT. Results across four experiments are indicative of rapid, dimension-based statistical learning; reliance on the F0 dimension in word recognition was rapidly down-weighted in response to the perturbation of the correlation between F0 and VOT dimensions. However, listeners did not simply mirror the short-term input statistics. Instead, response patterns were consistent with a lingering influence of sensitivity to the long-term regularities of English. This suggests that the very acoustic dimensions defining perceptual space are not fixed and, rather, are dynamically and rapidly adjusted to the idiosyncrasies of local experience, such as might arise from nonnative accent, dialect, or dysarthria. The current findings extend demonstrations of "object-based" statistical learning across speech segments to include incidental, online statistical learning of regularities residing within a speech segment.
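The dimension-based reweighting described in this abstract can also be illustrated with a toy cue-combination learner. The sketch below is a deliberately simplified illustration, not the authors' model: an online logistic (delta-rule) learner categorizes tokens from two cues, VOT and F0, first under a canonical English-like F0/VOT correlation and then under an artificial "accent" that reverses it; the learned F0 weight collapses while the VOT weight persists. One disanalogy flagged by the abstract itself: human listeners did not simply mirror the short-term statistics, whereas this toy learner eventually does.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_block(n, f0_corr):
    """Generate (VOT, F0) cue pairs for category 0 vs. category 1 tokens.
    f0_corr=+1 mimics the canonical English F0/VOT correlation;
    f0_corr=-1 reverses it, like the artificial 'accent'."""
    labels = rng.integers(0, 2, n)
    sign = np.where(labels == 1, 1.0, -1.0)
    vot = sign + rng.normal(0, 0.3, n)
    f0 = f0_corr * sign + rng.normal(0, 0.3, n)
    return np.column_stack([vot, f0]), labels

def delta_rule(X, y, w, lr=0.1):
    """Online logistic updates; returns the cue weights after the block."""
    for x, t in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-w @ x))   # predicted P(category 1)
        w = w + lr * (t - p) * x           # error-driven weight update
    return w

w = np.zeros(2)
X, y = make_block(500, f0_corr=+1)         # long-term, English-like exposure
w = delta_rule(X, y, w)
print("after canonical block (VOT, F0 weights):", np.round(w, 2))

X, y = make_block(500, f0_corr=-1)         # incidental exposure to the 'accent'
w = delta_rule(X, y, w)
print("after accented block  (VOT, F0 weights):", np.round(w, 2))
```

After the accented block the F0 weight is driven toward (and past) zero while the VOT weight remains large: the qualitative signature of down-weighting a secondary dimension whose correlation with the primary cue has been perturbed.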

20.
Previous research suggests that infant speech perception reorganizes in the first year: young infants discriminate both native and non‐native phonetic contrasts, but by 10–12 months difficult non‐native contrasts are less discriminable whereas performance improves on native contrasts. In the current study, four experiments tested the hypothesis that, in addition to the influence of native language experience, acoustic salience also affects the perceptual reorganization that takes place in infancy. Using a visual habituation paradigm, two nasal place distinctions that differ in relative acoustic salience, acoustically robust labial‐alveolar [ma]–[na] and acoustically less salient alveolar‐velar [na]–[ŋa], were presented to infants in a cross‐language design. English‐learning infants at 6–8 and 10–12 months showed discrimination of the native and acoustically robust [ma]–[na] (Experiment 1), but not the non‐native (in initial position) and acoustically less salient [na]–[ŋa] (Experiment 2). Very young (4–5‐month‐old) English‐learning infants tested on the same native and non‐native contrasts also showed discrimination of only the [ma]–[na] distinction (Experiment 3). Filipino‐learning infants, whose ambient language includes the syllable‐initial alveolar (/n/)–velar (/ŋ/) contrast, showed discrimination of native [na]–[ŋa] at 10–12 months, but not at 6–8 months (Experiment 4). These results support the hypothesis that acoustic salience affects speech perception in infancy, with native language experience facilitating discrimination of an acoustically similar phonetic distinction [na]–[ŋa]. We discuss the implications of this developmental profile for a comprehensive theory of speech perception in infancy.
