Similar Articles
 20 similar articles found
1.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

2.
Speech perception deficits are commonly reported in dyslexia but longitudinal evidence that poor speech perception compromises learning to read is scant. We assessed the hypothesis that phonological skills, specifically phoneme awareness and RAN, mediate the relationship between speech perception and reading. We assessed longitudinal predictive relationships between categorical speech perception, phoneme awareness, RAN, language, attention and reading at ages 5½ and 6½ years in 237 children, many of whom were at high risk of reading difficulties. Speech perception at 5½ years correlated with language, attention, phoneme awareness and RAN concurrently and was a predictor of reading at 6½ years. There was no significant indirect effect of speech perception on reading via phoneme awareness, suggesting that its effects are separable from those of phoneme awareness. Children classified with dyslexia at 8 years had poorer speech perception than age‐controls at 5½ years and children with language disorders (with or without dyslexia) had more severe difficulties with both speech perception and attention control. Categorical speech perception tasks tap factors extraneous to perception, including decision‐making skills. Further longitudinal studies are needed to unravel the complex relationships between categorical speech perception tasks and measures of reading and language and attention.
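
The indirect-effect test referred to here is a mediation analysis. As a hedged illustration only, the sketch below computes a product-of-coefficients indirect effect with a percentile-bootstrap confidence interval; the variable names, sample values, and effect sizes are simulated assumptions, not the study's data or analysis code.

```python
# Minimal sketch of a product-of-coefficients mediation test with a
# bootstrap confidence interval. Variable names and simulated effect
# sizes are hypothetical; this is not the study's data or analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 237
speech_perception = rng.normal(size=n)
phoneme_awareness = 0.4 * speech_perception + rng.normal(size=n)
reading = 0.5 * phoneme_awareness + 0.1 * speech_perception + rng.normal(size=n)

def ols_coef(y, X):
    """Coefficients of an OLS fit of y on X (intercept prepended here)."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(x, m, y):
    """a*b: the x -> m path times the m -> y path (controlling for x)."""
    a = ols_coef(m, x)[1]
    b = ols_coef(y, np.column_stack([x, m]))[2]
    return a * b

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(speech_perception[idx],
                                phoneme_awareness[idx], reading[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
# A CI excluding 0 would indicate a reliable indirect (mediated) effect.
```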

3.
The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control, simulated pSTM deficit, simulated speech perception deficit, or simulated double deficit condition. On long sentences, the double deficit group had lower scores than the control and speech perception deficit groups, and the pSTM deficit group had lower scores than the control group and marginally lower scores than the speech perception deficit group. The pSTM and speech perception groups performed similarly to groups with real deficits in these areas, who completed the control condition. Overall, scores were lowest on noncanonical long sentences. Results show pSTM has a greater effect than speech perception on sentence comprehension, at least in the tasks employed here.

4.
We examined categorical speech perception in school‐age children with developmental dyslexia or Specific Language Impairment (SLI), compared to age‐matched and younger controls. Stimuli consisted of synthetic speech tokens in which place of articulation varied from ‘b’ to ‘d’. Children were tested on categorization, categorization in noise, and discrimination. Phonological awareness skills were also assessed to examine whether these correlated with speech perception measures. We observed similarly good baseline categorization rates across all groups; however, when noise was added, the SLI group showed impaired categorization relative to controls, whereas dyslexic children showed an intact profile. The SLI group showed poorer than expected between‐category discrimination rates, whereas this pattern was only marginal in the dyslexic group. Impaired phonological awareness profiles were observed in both the SLI and dyslexic groups; however, correlations between phonological awareness and speech perception scores were not significant. The results of the study suggest that in children with language and reading impairments there is a significant relationship between receptive language and speech perception, that there is at best a weak relationship between reading and speech perception, and that the relationship between phonological and speech perception deficits is highly complex.

5.
Speech perception deficits in developmental dyslexia were investigated in quiet and various noise conditions. Dyslexics exhibited clear speech perception deficits in noise but not in silence. Place‐of‐articulation was more affected than voicing or manner‐of‐articulation. Speech‐perception‐in‐noise deficits persisted when performance of dyslexics was compared to that of much younger children matched on reading age, underscoring the fundamental nature of speech‐perception‐in‐noise deficits. The deficits were not due to poor spectral or temporal resolution because dyslexics exhibited normal ‘masking release’ effects (i.e. better performance in fluctuating than in stationary noise). Moreover, speech‐perception‐in‐noise predicted significant unique variance in reading even after controlling for low‐level auditory, attentional, speech output, short‐term memory and phonological awareness processes. Finally, the presence of external noise did not seem to be a necessary condition for speech perception deficits to occur because similar deficits were obtained when speech was degraded by eliminating temporal fine‐structure cues without using external noise. In conclusion, the core deficit of dyslexics seems to be a lack of speech robustness in the presence of external or internal noise.
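
The "unique variance" result corresponds to a hierarchical regression step: speech-perception-in-noise is entered after the control measures and the increase in R² is examined. The sketch below is a generic, hedged illustration of that step with simulated data; the predictor names, sample size, and effects are assumptions for illustration, not the authors' analysis.

```python
# Minimal sketch of a hierarchical regression R^2-change ("unique variance")
# test. Predictor names, sample size, and simulated effects are hypothetical.
import numpy as np

def r_squared(y, X):
    """R^2 of an OLS fit of y on X (intercept added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 60
# Step-1 control block: e.g. low-level auditory, attention, speech output,
# short-term memory, phonological awareness (all simulated here).
controls = rng.normal(size=(n, 5))
speech_in_noise = 0.7 * controls[:, 4] + rng.normal(size=n)
reading = controls @ rng.normal(size=5) + 0.6 * speech_in_noise + rng.normal(size=n)

r2_controls = r_squared(reading, controls)
r2_full = r_squared(reading, np.column_stack([controls, speech_in_noise]))
# The increment is the variance in reading uniquely attributable to
# speech-in-noise after the control block has been entered.
print(f"R^2 change for speech-in-noise: {r2_full - r2_controls:.3f}")
```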

6.
A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech perception is unclear. Using paired-coil focal transcranial magnetic stimulation (TMS) in healthy subjects, we demonstrate that Tpj → M1 and pIFG → M1 effective connectivity increased when listening to speech compared to white noise. A virtual lesion induced by continuous theta-burst TMS (cTBS) of the pIFG abolished the task-dependent increase in pIFG → M1 but not Tpj → M1 effective connectivity during speech perception, whereas cTBS of Tpj abolished the task-dependent increase of both effective connectivities. We conclude that speech perception enhances effective connectivity between areas of the auditory dorsal stream and M1. Tpj is situated at a hierarchically high level, integrating speech perception into motor activation through the pIFG.

7.
The development of speech perception profoundly influences an individual's language development. During the first year of life, shaped by language experience, infants' speech perception gradually shifts from an initially language-general sensitivity to perception tuned to the native language. Researchers have proposed a statistical learning mechanism to explain this process: infants are highly sensitive to the frequency distributions of speech sounds in their language environment and, by computing these distributions, can partition the continuous speech signal into the phonetic categories that are contrastive in the native language. In addition, a functional reorganization mechanism and certain social cues also exert important influences on the development of infants' speech perception.
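
As a concrete sketch of this statistical-learning account, the toy example below fits a two-component Gaussian mixture to a simulated bimodal distribution of voice-onset-time (VOT) values, recovering two phonetic categories from distributional information alone. All values, counts, and mixture settings are invented for illustration and are not taken from infant data.

```python
# Toy sketch of distributional learning: a bimodal frequency distribution
# along a voice-onset-time (VOT) continuum is split into two categories.
# The simulated VOT values and mixture settings are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# English-like bimodal input: short-lag (/b/-like) and long-lag (/p/-like) VOTs in ms.
vot_ms = np.concatenate([rng.normal(10, 5, 500), rng.normal(60, 10, 500)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(vot_ms.reshape(-1, 1))
print("learned category means (ms):", np.sort(gmm.means_.ravel()).round(1))
# A learner that tracks only the frequency distribution recovers two
# categories here; a unimodal input distribution would support just one.
```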

8.
Perception of visual speech and the influence of visual speech on auditory speech perception are affected by the orientation of a talker's face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.

9.
One important contribution of Carol Fowler's direct approach to speech perception is its account of multisensory perception. This supramodal account proposes a speech function that detects supramodal information available across audition, vision, and touch. This detection allows for the recovery of articulatory primitives that provide the basis of a common currency shared between modalities as well as between perception and production. Common currency allows for perceptual experience to be shared between modalities and supports perceptually guided speaking as well as production-guided perception. In this report, we discuss the contribution and status of the supramodal approach relative to recent research in multisensory speech perception. We argue that the approach has helped motivate a multisensory revolution in perceptual psychology. We then review the new behavioral and neurophysiological research on (a) supramodal information, (b) cross-sensory sharing of experience, and (c) perceptually guided speaking as well as production-guided speech perception. We conclude that Fowler's supramodal theory has fared quite well in light of this research.

10.
The visible movement of a talker's face is an influential component of speech perception. However, the ability of this influence to function when large areas of the face (~50%) are covered by simple substantial occlusions, and so are not visible to the observer, has yet to be fully determined. In Experiment 1, both visual speech identification and the influence of visual speech on identifying congruent and incongruent auditory speech were investigated using displays of a whole (unoccluded) talking face and of the same face occluded vertically so that the entire left or right hemiface was covered. Both the identification of visual speech and its influence on auditory speech perception were identical across all three face displays. Experiment 2 replicated and extended these results, showing that visual and audiovisual speech perception also functioned well with other simple substantial occlusions (horizontal and diagonal). Indeed, displays in which entire upper facial areas were occluded produced performance levels equal to those obtained with unoccluded displays. Occluding entire lower facial areas elicited some impairments in performance, but visual speech perception and visual speech influences on auditory speech perception were still apparent. Finally, implications of these findings for understanding the processes supporting visual and audiovisual speech perception are discussed.

11.
This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) Neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left hemisphere systems which includes not only frontal regions, but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed regarding the question of whether input and output phonological systems involve a common network or distinct networks.

12.
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

13.
A critical property of the perception of spoken words is the transient ambiguity of the speech signal. In localist models of speech perception this ambiguity is captured by allowing the parallel activation of multiple lexical representations. This paper examines how a distributed model of speech perception can accommodate this property. Statistical analyses of vector spaces show that coactivation of multiple distributed representations is inherently noisy, and depends on parameters such as sparseness and dimensionality. Furthermore, the characteristics of coactivation vary considerably, depending on the organization of distributed representations within the mental lexicon. This view of lexical access is supported by analyses of phonological and semantic word representations, which provide an explanation of a recent set of experiments on coactivation in speech perception (Gaskell & Marslen-Wilson, 1999).
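
One way to picture this is to blend two random distributed word vectors and ask how well the blend still distinguishes a constituent word from the rest of a toy lexicon as dimensionality and sparseness vary. The sketch below does exactly that with invented parameters (lexicon size, dimensionality, sparseness); it is an illustrative assumption-laden toy, not the paper's vector-space analysis.

```python
# Sketch of coactivating two distributed word representations as a vector
# blend, and of how well the blend separates a constituent word from the
# rest of a toy lexicon. All parameter values are invented for illustration.
import numpy as np

def coactivation_separation(dim, sparseness, n_words=1000, seed=0):
    """Cosine similarity of a two-word blend to one constituent, minus its
    mean similarity to all other (unactivated) words in the lexicon."""
    rng = np.random.default_rng(seed)
    lexicon = (rng.random((n_words, dim)) < sparseness).astype(float)
    blend = lexicon[:2].mean(axis=0)  # coactivate two lexical candidates
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    target = cos(blend, lexicon[0])
    distractor = np.mean([cos(blend, w) for w in lexicon[2:]])
    return target - distractor

for dim in (50, 500):
    for sparseness in (0.05, 0.5):
        sep = coactivation_separation(dim, sparseness)
        print(f"dim={dim:4d} sparseness={sparseness:.2f} separation={sep:.3f}")
```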

14.
刘文理 & 乐国安. 心理学报 (Acta Psychologica Sinica), 2012, 44(5): 585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether nonspeech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant category continuum and found that the tones influenced categorization along the continuum, showing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that tones matching a vowel's formant frequencies speeded identification of that vowel, showing a priming effect. Both experiments found that nonspeech sounds can influence the perception of speech sounds, indicating that speech perception also involves a prelinguistic stage of spectral feature analysis, consistent with the auditory theory of speech perception.

15.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
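
A minimal sketch of such an "acoustic history" is given below: pure tones sampled from a spectral distribution with a chosen mean frequency, followed by a silent gap before the speech target. All sample-rate, duration, tone-count, and frequency values are placeholders assumed for illustration, not the published stimulus parameters.

```python
# Sketch of an "acoustic history" context: pure tones sampled from a spectral
# distribution with a chosen mean frequency, followed by a silent gap before
# the speech target. All numeric values are illustrative placeholders.
import numpy as np

SR = 22050  # sampling rate in Hz

def tone(freq_hz, dur_s=0.07):
    """A short Hanning-windowed sine tone."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.hanning(t.size) * np.sin(2 * np.pi * freq_hz * t)

def acoustic_history(mean_hz, spread_hz=300.0, n_tones=21, seed=0):
    """Concatenated tones drawn from a distribution centred on mean_hz;
    the distribution mean is the manipulated context variable."""
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(mean_hz - spread_hz, mean_hz + spread_hz, n_tones)
    return np.concatenate([tone(f) for f in freqs])

gap = np.zeros(int(SR * 1.3))  # more than a second of silence before the target
low_mean_context = np.concatenate([acoustic_history(1800.0), gap])
high_mean_context = np.concatenate([acoustic_history(2800.0), gap])
# Appending the same ambiguous speech target after each context tests whether
# the mean of the preceding spectral distribution shifts its categorization.
```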

16.

When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception. Covert imitation of speech has been studied with interference paradigms, including the stimulus–response compatibility paradigm (SRC). The SRC paradigm measures covert imitation by comparing articulation of a prompt following exposure to a distracter. Responses tend to be faster for congruent than for incongruent distracters, thus showing evidence of covert imitation. Simulation accounts propose a key role for covert imitation in speech perception. However, covert imitation has thus far only been demonstrated for a select class of speech sounds, namely consonants, and it is unclear whether covert imitation extends to vowels. We aimed to demonstrate that covert imitation effects as measured with the SRC paradigm extend to vowels, in two experiments. We examined whether covert imitation occurs for vowels in a consonant–vowel–consonant context in visual, audio, and audiovisual modalities. We presented the prompt at four time points to examine how covert imitation varied over the distracter’s duration. The results of both experiments clearly demonstrated covert imitation effects for vowels, thus supporting simulation theories of speech perception. Covert imitation was not affected by stimulus modality and was maximal for later time points.


17.
Boatman D. Cognition, 2004, 92(1-2): 47-65
Functional lesion studies have yielded new information about the cortical organization of speech perception in the human brain. We will review a number of recent findings, focusing on studies of speech perception that use the techniques of electrocortical mapping by cortical stimulation and hemispheric anesthetization by intracarotid amobarbital. Implications for recent developments in neuroimaging studies of speech perception will be discussed. This discussion will provide the framework for a developing model of the cortical circuitry critical for speech perception.

18.
A review of research on the relationship between speech and hand movements
There are complex links between speech and hand movements. This paper reviews behavioral and neuroscientific findings on the relationship between speech and two types of hand movement: co-speech gestures and grasping movements. The main findings are: (1) meaningful gestures accompanying speech production facilitate language processing, especially lexical retrieval; (2) observing grasping movements influences lip movements and acoustic components during speech production; (3) word perception influences the early planning stage of grasping movements; and (4) speech production increases the excitability of the hand motor cortex. The authors conclude that the link between speech processing and gesture is reflected not only in overlapping and mutually activating neural pathways but may also involve mutual influences on overt behavior.

19.
A number of studies have reported that developmental dyslexics are impaired in speech perception, especially for speech signals consisting of rapid auditory transitions. These studies mostly made use of a categorical-perception task with synthetic-speech samples. In this study, we show that deficits in the perception of synthetic speech do not generalise to the perception of more naturally sounding speech, even if the same experimental paradigm is used. This contrasts with the assumption that dyslexics are impaired in the perception of rapid auditory transitions.

20.
Spoken-word perception is a current focus of psycholinguistic research, yet most previous studies have tested infants or adults, and research on preschool children's spoken-word perception is lacking. Moreover, existing models of spoken-word perception were built mainly on studies of non-tonal languages and do not fully apply to Chinese. Chinese is a tonal language whose phonological structure differs from that of non-tonal languages. Building on the characteristics of spoken Chinese, this project takes 3- to 5-year-old children as participants and examines the characteristics and neural mechanisms of their perception of spoken Chinese. Using eye-tracking, ERPs, and LORETA source localization, it addresses the following questions: (1) children's auditory phonetic discrimination at the pre-attentive and attentive stages; (2) the roles of segmental and suprasegmental information in children's recognition of spoken Chinese words; and (3) the neural mechanisms underlying children's perception of spoken Chinese. The results will reveal the characteristics of children's perception of spoken Chinese and provide new experimental evidence for refining existing models of spoken-word perception.
