Similar articles
 20 similar articles found (search time: 31 ms)
1.
Traditionally, models of speech comprehension and production do not depend on concepts and processes from the phonological short-term memory (pSTM) literature. Likewise, in working memory research, pSTM is considered to be a language-independent system that facilitates language acquisition rather than speech processing per se. We discuss couplings between pSTM, speech perception and speech production, and we propose that pSTM arises from the cycling of information between two phonological buffers, one involved in speech perception and one in speech production. We discuss the specific role of these processes in speech processing, and argue that models of speech perception and production, and our understanding of their neural bases, will benefit from incorporating them.

2.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
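The stimulus logic described above, tone sequences sampled from spectral distributions that differ only in their mean frequency, can be illustrated with a minimal synthesis sketch. This is not the authors' stimulus-generation code; the sampling range, tone duration, sample rate, and function names are assumptions made for illustration.

```python
import numpy as np

def tone(freq_hz, dur_s=0.07, sr=22050, amp=0.1):
    """Synthesize one sine-wave tone with brief onset/offset ramps to avoid clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    y = amp * np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * sr)  # 5 ms ramps
    env = np.ones_like(y)
    env[:ramp] = np.linspace(0, 1, ramp)
    env[-ramp:] = np.linspace(1, 0, ramp)
    return y * env

def acoustic_history(mean_hz, spread_hz=300, n_tones=20, seed=0):
    """Concatenate tones whose frequencies are drawn from a distribution
    centered on mean_hz; two histories differ only in that mean."""
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(mean_hz - spread_hz, mean_hz + spread_hz, size=n_tones)
    return np.concatenate([tone(f) for f in freqs])

low_context = acoustic_history(mean_hz=1800)   # lower-frequency history
high_context = acoustic_history(mean_hz=2800)  # higher-frequency history
# Each context would precede the same speech target; only the distribution
# mean differs between conditions.
```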

3.
Phonetic awareness and reading acquisition
Three issues are dealt with: the relationship between phonetic awareness and reading acquisition, the nature of the cognitive capacities that make phonetic awareness possible, and the potential influence of phonetic awareness on language perception and comprehension. Data on the relationships between phonetic awareness and reading acquisition support an interactive view. For most people, learning to read in the alphabetic system stimulates phonetic awareness. On the other hand, phonetic awareness is a critical factor for success in reading acquisition. Indications about the nature of the cognitive capacities that underlie phonetic awareness may be obtained by inspecting its development. At least two capacities seem to be required: the capacity to ignore meaning and focus on the sound properties of speech, and the capacity for segmentation. Lack of phonetic awareness does not imply that phonetic processing does not take place during speech perception. Recent data suggest that speech perception includes a stage of extraction of phonetic features regardless of whether or not the subject is able to segment speech into phones explicitly. However, phonetic awareness may influence perceptual strategies and the relative weight of meaning expectancies in identification. Our work received support from the Belgian Fonds de la Recherche fondamentale collective (F.R.F.C.) under contracts No. 2.4505.76 and 2.4505.80, as well as from the Belgian Ministère de la Politique et de la Programmation scientifiques (Action de Recherche concertée "Processus cognitifs dans la lecture").

4.
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners’ ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.

5.
Preparation of speech in advance of actual production has consistently been shown to result in greater speech fluency. This observation is important given the impact of speech fluency in social perception; however, it raises questions concerning the nature of the processes by which communicative behaviors are prepared and of the representation of those behaviors in the cognitive system. The current research represents an attempt to address these issues. In Experiment I, subjects provided with an abstract problem-solution sequence exhibited less silent pausing during speech than a control group which was not given such a sequence. A second experimental group provided with an abstract solution-problem sequence exhibited less pausing than the control group, but not significantly so. In Experiment II, increasing practice with the solution-problem sequence was found to lead to a decreasing linear trend in silent pausing. These findings are discussed in terms of their implications for understanding the nature of production of communicative behavior.

6.
Speech perception (SP) most commonly refers to the perceptual mapping from the highly variable acoustic speech signal to a linguistic representation, whether it be phonemes, diphones, syllables, or words. This is an example of categorization, in that potentially discriminable speech sounds are assigned to functionally equivalent classes. In this tutorial, we present some of the main challenges to our understanding of the categorization of speech sounds and the conceptualization of SP that has resulted from these challenges. We focus here on issues and experiments that define open research questions relevant to phoneme categorization, arguing that SP is best understood as perceptual categorization, a position that places SP in direct contact with research from other areas of perception and cognition.

7.
Speech segments are highly context-dependent and acoustically variable. One factor that contributes heavily to the variability of speech is speaking rate. Some speech cues are temporal in nature; that is, the distinctions that they signify are defined over time. How can temporal speech cues keep their distinctiveness in the face of extrinsic transformations, such as those wrought by different speaking rates? This issue is explored with respect to the perception, in Icelandic, of Voice Onset Time as a cue for word-initial stop voicing, word-initial aspiration as a cue for [h], and Voice Offset Time as a cue for pre-aspiration. All the speech cues show rate-dependent perception, though to different degrees, with Voice Offset Time being most sensitive to rate changes and Voice Onset Time least sensitive. The differences in the behaviour of these speech cues are related to their different positions in the syllable.

8.
The temporal structure of speech has been shown to be highly variable. Speaking rate, stress, and other factors influence the duration of individual speech sounds. The highly elastic nature of speech would seem to pose a problem for the listener, especially with respect to the perception of temporal speech cues such as voice-onset time (VOT) and quantity: How does the listener disentangle those temporal changes which are linguistically significant from those which are extrinsic to the linguistic message? This paper reports data on the behavior of two Icelandic speech cues at different speaking rates. The results show that manipulations of rate have the effect of slightly blurring the distinction between unaspirated and aspirated stops. Despite great changes in the absolute durations of vowels and consonants, the two categories of syllables, V:C and VC:, are nonetheless kept totally distinct. In two perceptual experiments, it is shown that while the ratio of vowel to rhyme duration is the primary cue to quantity and remains invariant at different rates, no such ratio can be defined for VOT. These results imply that quantity is the only one of these two speech cues that is self-normalizing for rate. Models of rate-dependent speech processing need to address this difference.
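The ratio cue described above can be made concrete with a small worked example. The durations below are invented for illustration, not measurements from the study; the point is only that the vowel-to-rhyme ratio stays constant when overall speaking rate changes, whereas an absolute cue such as VOT has no such built-in normalization.

```python
# Hypothetical durations (ms) for a long-vowel (V:C) and a short-vowel (VC:)
# syllable at a slow and a fast speaking rate. Values are illustrative only.
syllables = {
    ("V:C", "slow"): {"vowel": 180, "coda": 60},
    ("V:C", "fast"): {"vowel": 120, "coda": 40},
    ("VC:", "slow"): {"vowel": 90,  "coda": 150},
    ("VC:", "fast"): {"vowel": 60,  "coda": 100},
}

for (syll_type, rate), d in syllables.items():
    rhyme = d["vowel"] + d["coda"]   # rhyme = vowel + postvocalic consonant
    ratio = d["vowel"] / rhyme       # relational, rate-invariant cue
    print(f"{syll_type} ({rate}): vowel/rhyme = {ratio:.2f}")

# Output: the ratio separates V:C (0.75) from VC: (0.38) at both rates, even
# though every absolute duration has changed with speaking rate.
```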

9.
Previous research has shown that the perception of speech sounds is strongly influenced by the internal structure of maternal language categories. Specifically, it has been shown that stimuli judged as good exemplars of a phonemic category are more difficult to discriminate from similar sounds than bad exemplars from equally similar sounds. This effect seems to be restricted to phonemes present in the maternal language, and is acquired in the first months of life. The present study investigates the malleability of speech acquisition by analysing the discrimination capacities for L2 phonemes in highly proficient Spanish-Catalan bilinguals born in monolingual families. In Experiment 1, subjects were required to give goodness-of-fit judgments to establish the best exemplars corresponding to three different vowel categories (Catalan /e/ and /ε/, Spanish /e/). In Experiments 2 and 3, bilinguals were asked to perform a discrimination task with materials in their maternal language (Exp. 2) and in their second language (Exp. 3). Results reveal that bilinguals show a reduced discrimination capacity only for good exemplars of their maternal language, but not for good exemplars of their second language. The same pattern of results was obtained in Experiment 4, using a within-subjects design and a bias-free discrimination measure (d'). These findings support the hypothesis that phonemic categories are not only acquired early in life, but that under some circumstances the acquisition of new phonemic categories can be seriously compromised, in spite of early and extensive exposure to L2.
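The bias-free discrimination measure mentioned above, d', is computed from hit and false-alarm rates via the inverse of the standard normal cumulative distribution. The sketch below is a generic signal-detection calculation, not the authors' analysis script, and the example rates are invented.

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n_signal, n_noise):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a standard
    log-linear correction so rates of 0 or 1 do not produce infinite z-scores."""
    h = (hit_rate * n_signal + 0.5) / (n_signal + 1)
    f = (fa_rate * n_noise + 0.5) / (n_noise + 1)
    return norm.ppf(h) - norm.ppf(f)

# Invented example: poorer discrimination around good exemplars (lower d')
# than around bad exemplars (higher d').
print(d_prime(hit_rate=0.70, fa_rate=0.40, n_signal=40, n_noise=40))
print(d_prime(hit_rate=0.90, fa_rate=0.20, n_signal=40, n_noise=40))
```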

10.
Research has identified bivariate correlations between speech perception and cognitive measures gathered during infancy as well as correlations between these individual measures and later language outcomes. However, these correlations have not all been explored together in prospective longitudinal studies. The goal of the current research was to compare how early speech perception and cognitive skills predict later language outcomes using a within-participant design. To achieve this goal, we tested 97 5- to 7-month-olds on two speech perception tasks (stress pattern preference, native vowel discrimination) and two cognitive tasks (visual recognition memory, A-not-B) and later assessed their vocabulary outcomes at 18 and 24 months. Frequentist statistical analyses showed that only native vowel discrimination significantly predicted vocabulary. However, Bayesian analyses suggested that evidence was ambiguous between null and alternative hypotheses for all infant predictors. These results highlight the importance of recognizing and addressing challenges related to infant data collection, interpretation, and replication in the developmental field, challenges that remain a roadblock on the route to understanding the contribution of domain-specific and domain-general skills to language acquisition. Future methodological development and research along similar lines are encouraged to assess individual differences in infant speech perception and cognitive skills and their predictive value for language development.
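The contrast drawn above between frequentist and Bayesian conclusions can be illustrated with a generic sketch on simulated data. This is not the authors' analysis; the data, variable names, and effect size are invented, and the Bayes factor uses the standard BIC approximation (Wagenmakers, 2007) rather than whatever prior specification the study used.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 97
vowel_discrim = rng.normal(size=n)                    # infant predictor (simulated)
vocab_24m = 0.2 * vowel_discrim + rng.normal(size=n)  # later outcome (simulated)

# Frequentist test: does the infant measure predict later vocabulary?
X = sm.add_constant(vowel_discrim)
alt_model = sm.OLS(vocab_24m, X).fit()
null_model = sm.OLS(vocab_24m, np.ones((n, 1))).fit()
print("p-value for predictor:", alt_model.pvalues[1])

# BIC-approximated Bayes factor: BF01 ~ exp((BIC_alt - BIC_null) / 2).
# A BF01 near 1 indicates ambiguous evidence, the pattern the abstract reports.
bf01 = np.exp((alt_model.bic - null_model.bic) / 2)
print("BF01 (evidence for the null over the predictor):", bf01)
```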

11.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners' perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener's orientation to speech stimuli.

12.
Previous research in speech perception has yielded two sets of findings which are brought together in the present study. First, it has been shown that normal-hearing listeners use visible as well as acoustical information when processing speech. Second, it has been shown that there is an effect of specific language experience on speech perception such that adults often have difficulty identifying and discriminating non-native phones. The present investigation was designed to extend and combine these two sets of findings. Two studies were conducted using six consonant-vowel syllables (/ba/, /va/, /ða/, /da/, /ʒa/, and /ga/), five of which occur in French and English, and one of which (the interdental fricative /ða/) occurs only in English. In Experiment 1, an effect of specific linguistic experience was evident for the auditory identification of the non-native interdental stimulus by French speakers. In Experiment 2, it was shown that the effect of specific language experience extends to the perception of the visible information in speech. These findings are discussed in terms of their implications for our understanding of cross-language processes in speech perception and for our understanding of the development of bimodal speech perception.

13.
We examined categorical speech perception in school-age children with developmental dyslexia or Specific Language Impairment (SLI), compared to age-matched and younger controls. Stimuli consisted of synthetic speech tokens in which place of articulation varied from 'b' to 'd'. Children were tested on categorization, categorization in noise, and discrimination. Phonological awareness skills were also assessed to examine whether these correlated with speech perception measures. We observed similarly good baseline categorization rates across all groups; however, when noise was added, the SLI group showed impaired categorization relative to controls, whereas dyslexic children showed an intact profile. The SLI group showed poorer than expected between-category discrimination rates, whereas this pattern was only marginal in the dyslexic group. Impaired phonological awareness profiles were observed in both the SLI and dyslexic groups; however, correlations between phonological awareness and speech perception scores were not significant. The results of the study suggest that in children with language and reading impairments, there is a significant relationship between receptive language and speech perception, there is at best a weak relationship between reading and speech perception, and indeed the relationship between phonological and speech perception deficits is highly complex.
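The "categorization in noise" condition mentioned above is, generically, a matter of mixing noise into each synthetic token at a controlled signal-to-noise ratio. The sketch below shows one standard way to do this; it is not the procedure or noise type used in the study, and the SNR value is a placeholder.

```python
import numpy as np

def add_noise_at_snr(speech, snr_db, rng=None):
    """Mix white noise into a speech token at a target signal-to-noise ratio (dB)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.normal(size=speech.shape)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise so that 10*log10(p_speech / p_noise_scaled) equals snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# e.g. token_in_noise = add_noise_at_snr(token, snr_db=0)  # 0 dB SNR condition
```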

14.
Perception of visual speech and the influence of visual speech on auditory speech perception is affected by the orientation of a talker's face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.
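The two image manipulations described above, reducing spatial frequency by blurring and inverting the face, can be sketched generically for a single grayscale video frame. The blur strength and frame size below are placeholders, not the values used in the experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_frame(frame, blur_sigma=None, invert=False):
    """Apply the manipulations to one 2-D grayscale frame: low-pass blurring
    removes fine featural detail; vertical inversion disrupts coarse
    configural information."""
    out = frame.astype(float)
    if blur_sigma is not None:
        out = gaussian_filter(out, sigma=blur_sigma)  # reduce spatial frequency
    if invert:
        out = out[::-1, :]  # flip the face upside down
    return out

# Example on a dummy frame; real stimuli would be frames of a talking face.
frame = np.random.default_rng(0).random((480, 640))
blurred_inverted = degrade_frame(frame, blur_sigma=8, invert=True)
```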

15.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners’ perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener’s orientation to speech stimuli.

16.
Bradlow, A. R., & Bent, T. (2008). Cognition, 106(2), 707-729.
This study investigated talker-dependent and talker-independent perceptual adaptation to foreign-accented English. Experiment 1 investigated talker-dependent adaptation by comparing native English listeners' recognition accuracy for Chinese-accented English across single and multiple talker presentation conditions. Results showed that the native listeners adapted to the foreign-accented speech over the course of the single talker presentation condition, with some variation in the rate and extent of this adaptation depending on the baseline sentence intelligibility of the foreign-accented talker. Experiment 2 investigated talker-independent perceptual adaptation to Chinese-accented English by exposing native English listeners to Chinese-accented English and then testing their perception of English produced by a novel Chinese-accented talker. Results showed that, if exposed to multiple talkers of Chinese-accented English during training, native English listeners could achieve talker-independent adaptation to Chinese-accented English. Taken together, these findings provide evidence for highly flexible speech perception processes that can adapt to speech that deviates substantially from the pronunciation norms in the native talker community along multiple acoustic-phonetic dimensions.

17.
18.
Timing cues present in the acoustic waveform of speech provide critical information for the recognition and segmentation of the ongoing speech signal. Research has demonstrated that deficient temporal perception rates, which have been shown to specifically disrupt acoustic processing of speech, are related to specific language-based learning impairments (LLI). Temporal processing deficits correlate highly with the phonological discrimination and processing deficits of these children. Electrophysiological single-cell mapping studies of sensory cortex in the brains of primates have shown that neural circuitry can be remapped after specific, temporally cohesive training regimens, demonstrating the dynamic plasticity of the brain. Recently, we combined these two lines of research in a series of studies that addressed whether the temporal processing deficits seen in LLIs can be significantly modified through adaptive training aimed at reducing temporal integration thresholds. Simultaneously, we developed a computer algorithm that expanded and enhanced the brief, rapidly changing acoustic segments within ongoing speech and used this to provide intensive speech and language training exercises to these children. Results to date from two independent laboratory experiments, as well as a large national clinical efficacy trial, demonstrate that dramatic improvements in temporal integration thresholds, together with speech and language comprehension abilities of LLI children, result from training with these new computer-based training procedures.
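The speech-modification idea described above, stretching speech in time and amplifying its brief, rapidly changing components, can be approximated crudely with off-the-shelf signal processing. The sketch below is emphatically not the algorithm used in these studies: uniform time-stretching plus pre-emphasis only gestures at the two manipulations, and the file name and parameter values are placeholders.

```python
import librosa

# Load a speech recording (the path is a placeholder).
y, sr = librosa.load("speech_sample.wav", sr=None)

# 1) Slow the signal down, lengthening the brief acoustic segments.
y_slow = librosa.effects.time_stretch(y, rate=0.5)  # roughly twice as long

# 2) Boost rapid spectral change with simple pre-emphasis, a crude stand-in
#    for selectively amplifying fast formant transitions.
y_enhanced = librosa.effects.preemphasis(y_slow, coef=0.97)
```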

19.
Using a developmental approach, two aspects of debate in the speech perception literature were tested: (a) the nature of adult speech processing, the dichotomy being along nonlinguistic versus linguistic lines, and (b) the nature of speech processing by children of different ages, the hypotheses here implying detector-like processes in infancy and "adult-like" speech perception reorganizations at age four. Children ranging in age from 4 up to 18 years discriminated native and foreign speech contrasts. Results confirm the hypotheses for adults. It is clear that different processes are operating at different ages; however, more complex processes may come into play around the ages of 6 to 10 years, boys may use different strategies than girls, and with age a multiplicity of processes may be concurrently active.

20.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

