Similar Articles
20 similar articles found (search time: 15 ms).
1.
Speech Perception Within an Auditory Cognitive Science Framework (total citations: 1; self-citations: 0; cited by others: 1)
The complexities of the acoustic speech signal pose many significant challenges for listeners. Although perceiving speech begins with auditory processing, investigation of speech perception has progressed mostly independently of study of the auditory system. Nevertheless, a growing body of evidence demonstrates that cross-fertilization between the two areas of research can be productive. We briefly describe research bridging the study of general auditory processing and speech perception, showing that the latter is constrained and influenced by operating characteristics of the auditory system and that our understanding of the processes involved in speech perception is enhanced by study within a more general framework. The disconnect between the two areas of research has stunted the development of a truly interdisciplinary science, but there is an opportunity for great strides in understanding with the development of an integrated field of auditory cognitive science.

2.
The present study examined the extent to which verbal auditory agnosia (VAA) is primarily a phonemic decoding disorder, as contrasted to a more global defect in acoustic processing. Subjects were six young adults who presented with VAA in childhood and who, at the time of testing, showed varying degrees of residual auditory discrimination impairment. They were compared to a group of young adults with normal language development matched for age and gender. Cortical event-related potentials (ERPs) were recorded to tones and to consonant-vowel stimuli presented in an "oddball" discrimination paradigm. In addition to cortical ERPs, auditory brainstem responses (ABRs) and middle latency responses (MLRs) were recorded. Cognitive and language assessments were obtained for the VAA subjects. ABRs and MLRs were normal. In comparison with the control group, the cortical ERPs of the VAA subjects showed a delay in the N1 component recorded over lateral temporal cortex both to tones and to speech sounds, despite an N1 of normal latency overlying the frontocentral region of the scalp. These electrophysiologic findings indicate a slowing of processing of both speech and nonspeech auditory stimuli and suggest that the locus of this abnormality is within the secondary auditory cortex in the lateral surface of the temporal lobes.
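The N1-latency comparison in item 2 rests on averaging stimulus-locked EEG epochs and locating the negative peak over lateral temporal electrodes. The sketch below is a minimal illustration of that generic measure, assuming a single preprocessed EEG channel, a list of stimulus-onset samples, and an illustrative 80-200 ms search window; it is not the authors' actual analysis pipeline, and the function name and parameters are hypothetical.

import numpy as np

def n1_peak_latency(eeg, events, sfreq, tmin=-0.1, tmax=0.4,
                    search_window=(0.08, 0.20)):
    """Average EEG epochs around stimulus onsets and return the latency (s)
    of the most negative deflection in the N1 search window.

    eeg    : 1-D float array, one channel of continuous EEG
    events : sample indices of stimulus onsets
    sfreq  : sampling rate in Hz
    The search window here is illustrative, not the study's setting.
    """
    n_pre, n_post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[e - n_pre:e + n_post] for e in events])
    # Baseline-correct each epoch using its pre-stimulus interval.
    epochs = epochs - epochs[:, :n_pre].mean(axis=1, keepdims=True)
    evoked = epochs.mean(axis=0)                      # averaged ERP
    times = np.arange(-n_pre, n_post) / sfreq
    mask = (times >= search_window[0]) & (times <= search_window[1])
    # N1 is a negative-going peak: take the minimum within the window.
    peak_idx = np.argmin(evoked[mask])
    return times[mask][peak_idx]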

3.
刘文理  乐国安 《心理学报》2012,44(5):585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether nonspeech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant category continuum: the tones shifted perception of the continuum, producing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception: tones whose frequencies matched the vowels' formant frequencies speeded vowel identification, producing a priming effect. Both experiments showed that nonspeech sounds can influence the perception of speech sounds, indicating that speech perception also involves a prespeech stage of spectral feature analysis, consistent with the auditory theory of speech perception.
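As an illustration of the kind of nonspeech primes described above, the following sketch synthesizes a pure tone and a two-component complex tone with numpy. The sampling rate, duration, amplitude, and formant-like frequencies are hypothetical placeholders, not the stimulus parameters used in the study.

import numpy as np

def pure_tone(freq, dur=0.2, sr=16000, amp=0.1):
    """A sine-wave prime at a single frequency (e.g., near a vowel's F2)."""
    t = np.arange(int(dur * sr)) / sr
    return amp * np.sin(2 * np.pi * freq * t)

def complex_tone(freqs, dur=0.2, sr=16000, amp=0.1):
    """A complex prime: sum of sinusoids at the listed frequencies
    (e.g., components placed near a vowel's first two formants)."""
    t = np.arange(int(dur * sr)) / sr
    sig = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return amp * sig / len(freqs)

# Hypothetical primes for an /i/-like vowel; formant values are illustrative only.
prime_a = pure_tone(2800)            # pure tone near F2
prime_b = complex_tone([300, 2800])  # complex tone near F1 and F2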

4.
In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a similar manner as natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.

5.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants’ audio–visual matching ability for German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might influence the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.

6.
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on the laterality of target presentation and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

7.
万璇  董世华  蒋存梅 《心理科学》2014,37(1):217-224
Autism is a neurodevelopmental disorder characterized mainly by impaired social interaction, difficulties with verbal communication, and stereotyped behaviors. Previous research has shown that, in the domain of music, individuals with autism not only exhibit strong musical pitch perception but also show certain advantages in musical performance. In the domain of speech, however, apart from scoring relatively high on speech pitch-contour discrimination tasks, individuals with autism perceive speech intonation markedly worse than typically developing individuals and are also impaired in producing speech intonation. The present review both advances comparative research on music and speech and offers guidance for speech rehabilitation in individuals with autism.

8.
The development of infants' auditory perception is important for their later language learning and socialization. Most previous research has focused on speech perception, and relatively few studies have considered nonspeech perception, yet understanding the characteristics and mechanisms of nonspeech perception would deepen our knowledge of auditory processing and child development. This paper reviews three preferences in infant speech perception (for speech, for infant-directed speech, and for the native language) and discusses nonspeech sounds in three categories: music, human nonverbal vocalizations, and environmental sounds. Comparing the perception of these two broad classes of sounds suggests that infants may show left-hemisphere lateralization for speech perception and right-hemisphere lateralization for music perception, although this remains controversial; three theoretical accounts (the domain-specific model, the cue-specific model, and the brain-network model) currently attempt to explain the cognitive mechanisms underlying this lateralization.

9.
Older adults with normal hearing are often troubled by declining speech perception in noisy environments. Delaying this form of auditory aging has important psychological and social significance. Older adults with musical training show significantly better speech-in-noise recognition than older adults with comparable hearing but no musical training. In addition, musical training is accompanied by clear improvements in the speed and temporal precision with which the aging auditory brainstem processes speech signals. This enhanced neural representation of the speech signal may play an important role in how musical training slows the decline of speech perception in older adults.

10.
Adults and infants can differentiate communicative messages using the nonlinguistic acoustic properties of infant‐directed (ID) speech. Although the distinct prosodic properties of ID speech have been explored extensively, it is currently unknown whether the visual properties of the face during ID speech similarly convey communicative intent and thus represent an additional source of social information for infants during early interactions. To examine whether the dynamic facial movement associated with ID speech confers affective information independent of the acoustic signal, adults' differentiation of the visual properties of speakers' communicative messages was examined in two experiments in which the adults rated silent videos of approving and comforting ID and neutral adult‐directed speech. In Experiment 1, adults differentiated the facial speech groups on ratings of the intended recipient and the speaker's message. In Experiment 2, an original coding scale identified facial characteristics of the speakers. Discriminant correspondence analysis revealed two factors differentiating the facial speech groups on various characteristics. Implications for perception of ID facial movements in relation to speakers' communicative intent are discussed for both typically and atypically developing infants.

11.
Infant perception often deals with audiovisual speech input and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

12.
Newborns are able to extract and learn repetition-based regularities from the speech input, that is, they show greater brain activation in the bilateral temporal and left inferior frontal regions to trisyllabic pseudowords of the form AAB (e.g., “babamu”) than to random ABC sequences (e.g., “bamuge”). Whether this ability is specific to speech or also applies to other auditory stimuli remains unexplored. To investigate this, we tested whether newborns are sensitive to regularities in musical tones. Neonates listened to AAB and ABC tones sequences, while their brain activity was recorded using functional Near-Infrared Spectroscopy (fNIRS). The paradigm, the frequency of occurrence and the distribution of the tones were identical to those of the syllables used in previous studies with speech. We observed a greater inverted (negative) hemodynamic response to AAB than to ABC sequences in the bilateral temporal and fronto-parietal areas. This inverted response was caused by a decrease in response amplitude, attributed to habituation, over the course of the experiment in the left fronto-temporal region for the ABC condition and in the right fronto-temporal region for both conditions. These findings show that newborns’ ability to discriminate AAB from ABC sequences is not specific to speech. However, the neural response to musical tones and spoken language is markedly different. Tones gave rise to habituation, whereas speech was shown to trigger increasing responses over the time course of the study. Relatedly, the repetition regularity gave rise to an inverted hemodynamic response when carried by tones, while it was canonical for speech. Thus, newborns’ ability to detect repetition is not speech-specific, but it engages distinct brain mechanisms for speech and music.

Research Highlights

  • The ability of newborns to detect repetition-based regularities is not specific to speech but extends to other auditory signals.
  • The brain mechanisms underlying speech and music processing are markedly different.
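The habituation account in item 12 hinges on response amplitudes decreasing across the session. Below is a minimal sketch of one way to quantify that, assuming per-trial oxy-Hb peak amplitudes have already been extracted for a given channel and condition; the function name and values are hypothetical, and this is not the authors' analysis.

import numpy as np

def habituation_slope(trial_amplitudes):
    """Fit a straight line to per-trial response amplitudes; a negative
    slope indicates a decreasing (habituating) response over the session."""
    trials = np.arange(len(trial_amplitudes))
    slope, _ = np.polyfit(trials, trial_amplitudes, 1)
    return slope

# Hypothetical per-trial oxy-Hb peak amplitudes for one channel per condition.
aab = np.array([1.0, 0.8, 0.7, 0.5, 0.4, 0.3])
abc = np.array([0.9, 0.9, 0.8, 0.85, 0.8, 0.75])
print(habituation_slope(aab), habituation_slope(abc))  # more negative = stronger habituation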

13.
The auditory brainstem response is a non-invasive technique for examining neural activity in the auditory brainstem as it processes sound, and in recent years it has been widely used to explore the neural basis of speech perception. Related research has focused mainly on characterizing brainstem activity and its developmental trajectory during speech encoding in adults and typically developing children, and on the speech-encoding deficits and their neural manifestations in developmental dyslexia and other language impairments. Building on this work, future applications of the technique in speech perception research will focus on the interaction between low-level speech encoding and higher-level speech processing, and on the neural basis underlying dyslexia.

14.
Two word-recognition tasks were used to examine the irrelevant speech effect during lexical processing. Experiment 1 used a lexical decision task to examine how meaningful speech, meaningless speech, white noise, and silence in the background affect recognition of words differing in concreteness. Only meaningful speech interfered with word recognition, mainly as significantly longer response times for low-concreteness words. Experiment 2 used a semantic category judgment task and likewise found that response times were significantly longer under meaningful speech than under the other sound conditions. The results indicate that an irrelevant speech effect exists in Chinese lexical processing and that, when the task emphasizes semantic processing, the interference stems mainly from the semantic content of the irrelevant speech, supporting the semantic interference hypothesis.

15.
Cognitive systems face a tension between stability and plasticity. The maintenance of long-term representations that reflect the global regularities of the environment is often at odds with pressure to flexibly adjust to short-term input regularities that may deviate from the norm. This tension is abundantly clear in speech communication when talkers with accents or dialects produce input that deviates from a listener's language community norms. Prior research demonstrates that when bottom-up acoustic information or top-down word knowledge is available to disambiguate speech input, there is short-term adaptive plasticity such that subsequent speech perception is shifted even in the absence of the disambiguating information. Although such effects are well-documented, it is not yet known whether bottom-up and top-down resolution of ambiguity may operate through common processes, or how these information sources may interact in guiding the adaptive plasticity of speech perception. The present study investigates the joint contributions of bottom-up information from the acoustic signal and top-down information from lexical knowledge in the adaptive plasticity of speech categorization according to short-term input regularities. The results implicate speech category activation, whether from top-down or bottom-up sources, in driving rapid adjustment of listeners' reliance on acoustic dimensions in speech categorization. Broadly, this pattern of perception is consistent with dynamic mapping of input to category representations that is flexibly tuned according to interactive processing accommodating both lexical knowledge and idiosyncrasies of the acoustic input.
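To make the idea of flexibly re-tuned reliance on acoustic dimensions concrete, here is a toy sketch assuming a logistic mapping from two generic acoustic dimensions to a category, with a simple error-driven update after each disambiguated trial (whether the disambiguation comes from the acoustics or from lexical knowledge). The dimensions, learning rate, and function names are hypothetical; this illustrates the general idea, not the model or analysis used in the study.

import numpy as np

def categorize(stim, weights, bias=0.0):
    """Probability of category 'b' given two acoustic dimensions,
    under a simple logistic mapping."""
    return 1.0 / (1.0 + np.exp(-(stim @ weights + bias)))

def update_weights(weights, stim, label, lr=0.1):
    """One error-driven step toward recently disambiguated input."""
    error = label - categorize(stim, weights)
    return weights + lr * error * stim

# Toy example: start with heavy reliance on dimension 1, then expose the
# model to input in which only dimension 2 is informative.
w = np.array([2.0, 0.2])
recent = [(np.array([0.0, 1.0]), 1), (np.array([0.0, -1.0]), 0)] * 20
for stim, label in recent:
    w = update_weights(w, stim, label)
print(w)  # the weight on dimension 2 grows: reliance has been re-tuned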

16.
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0 to 11.9 weeks old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a spectral range similar to that of female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized “model-matched” stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC) but not in Heschl's Gyrus that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk.

Research Highlights

  • Responses to music, speech, and control sounds matched for the spectrotemporal modulation-statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
  • Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
  • Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
  • Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.

17.
Recent research suggests an auditory temporal deficit as a possible contributing factor to poor phonemic awareness skills. This study investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in children with a reading disability, aged 8-12 years, using Tallal's tone-order judgement task. Normal performance on the tone-order task was established for 36 normal readers. Forty-two children with developmental reading disability were then subdivided by their performance on the tone-order task. Average and poor tone-order subgroups were then compared on their ability to process speech sounds and visual symbols, and on phonological awareness and reading. The presence of a tone-order deficit did not relate to performance on the order processing of speech sounds, to poorer phonological awareness or to more severe reading difficulties. In particular, there was no evidence of a group by interstimulus interval interaction, as previously described in the literature, and thus little support for a general auditory temporal processing difficulty as an underlying problem in poor readers. In this study, deficient order judgement on a nonverbal auditory temporal order task (tone task) did not underlie phonological awareness or reading difficulties.
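For readers unfamiliar with tone-order judgement tasks, the sketch below generates a single two-tone trial in which the inter-stimulus interval (ISI) can be shortened to increase difficulty. The tone frequencies, durations, and sampling rate are illustrative placeholders rather than the exact parameters of Tallal's task.

import numpy as np

def two_tone_trial(order=("low", "high"), isi=0.150, tone_dur=0.075, sr=22050):
    """Concatenate two brief tones separated by a silent inter-stimulus
    interval; shortening the ISI makes order judgements harder.
    Frequencies and durations here are illustrative only."""
    freqs = {"low": 100.0, "high": 305.0}   # placeholder tone frequencies (Hz)
    def tone(f):
        t = np.arange(int(tone_dur * sr)) / sr
        return 0.1 * np.sin(2 * np.pi * f * t)
    gap = np.zeros(int(isi * sr))
    return np.concatenate([tone(freqs[order[0]]), gap, tone(freqs[order[1]])])

trial = two_tone_trial(order=("high", "low"), isi=0.030)  # a harder, short-ISI trial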

18.
Vocal Expression and Perception of Emotion (total citations: 3; self-citations: 0; cited by others: 3)
Speech is an acoustically rich signal that provides considerable personal information about talkers. The expression of emotions in speech sounds and corresponding abilities to perceive such emotions are both fundamental aspects of human communication. Findings from studies seeking to characterize the acoustic properties of emotional speech indicate that speech acoustics provide an external cue to the level of nonspecific arousal associated with emotional processes and, to a lesser extent, to the relative pleasantness of experienced emotions. Outcomes from perceptual tests show that listeners are able to accurately judge emotions from speech at rates far greater than expected by chance. More detailed characterizations of these production and perception aspects of vocal communication will necessarily involve knowledge about differences among talkers, such as those components of speech that provide comparatively stable cues to individual talkers' identities.

19.
This study examined categorical perception of the Mandarin tone T2-T4 continuum in Mandarin-speaking adults aged 50-80 years, and explored the factors underlying age-related decline in categorical tone perception, using the classic categorical perception paradigm. The results showed that (1) categorical boundary width was significantly larger in every older age band (50-60, 60-70, and 70-80 years) than in the young group, with no significant differences among the older bands; (2) in the older adults, boundary width was significantly negatively correlated with memory-span scores but not with age; and (3) compared with the young group, the older group differed significantly in the slope of the identification function within categories but not between categories. The results indicate that categorical tone perception declines in older adults, reflecting deterioration of phonological-level processing, and that this decline is associated with declining memory span. Moreover, between 50 and 80 years of age, age itself does not directly affect the degree of categorical tone perception.
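Boundary width and identification-function slope, the key measures in item 19, are conventionally obtained by fitting a logistic function to identification proportions along the continuum. The sketch below, using scipy's curve_fit on hypothetical 7-step data, shows one common way to derive the boundary (the 50% point), the boundary width (the distance between the 25% and 75% points), and the slope; it is not the authors' exact procedure.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Identification function: proportion of (e.g.) T4 responses along the
    T2-T4 continuum; x0 is the category boundary, k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def boundary_and_width(steps, prop_t4):
    (x0, k), _ = curve_fit(logistic, steps, prop_t4, p0=[np.mean(steps), 1.0])
    # Boundary width: distance between the 25% and 75% identification points,
    # which for a logistic equals 2*ln(3)/k.
    width = 2 * np.log(3) / k
    return x0, width, k

# Hypothetical identification proportions over a 7-step continuum.
steps = np.arange(1, 8)
prop_t4 = np.array([0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 0.99])
print(boundary_and_width(steps, prop_t4))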

20.
Candidate brain regions constituting a neural network for preattentive phonetic perception were identified with fMRI and multivariate multiple regression of imaging data. Stimuli contrasted along speech/nonspeech, acoustic, or phonetic complexity (three levels each) and natural/synthetic dimensions. Seven distributed brain regions' activity correlated with speech and speech complexity dimensions, including five left-sided foci [posterior superior temporal gyrus (STG), angular gyrus, ventral occipitotemporal cortex, inferior/posterior supramarginal gyrus, and middle frontal gyrus (MFG)] and two right-sided foci (posterior STG and anterior insula). Only the left MFG discriminated natural and synthetic speech. The data also supported a parallel rather than serial model of auditory speech and nonspeech perception.
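As a schematic of what regressing imaging data on stimulus dimensions can look like at its simplest, the sketch below fits per-condition responses of a few toy voxels to stimulus-dimension codes with ordinary least squares. The design matrix, response values, and coding scheme are hypothetical and greatly simplified relative to the study's actual multivariate statistical model.

import numpy as np

# Hypothetical design: one row per stimulus condition, columns coding the
# speech/nonspeech contrast, acoustic complexity level, and phonetic complexity level.
X = np.array([
    [1, 1, 0],   # speech, low acoustic, low phonetic complexity
    [1, 2, 1],
    [1, 3, 2],
    [0, 1, 0],   # nonspeech conditions
    [0, 2, 0],
    [0, 3, 0],
], dtype=float)
X = np.column_stack([np.ones(len(X)), X])        # add an intercept column

# Y: mean response per condition for each voxel (two toy voxels).
Y = np.array([
    [0.2, 0.0],
    [0.5, 0.1],
    [0.9, 0.1],
    [0.1, 0.0],
    [0.2, 0.1],
    [0.3, 0.1],
])

# Ordinary least squares fit of all voxels at once: each column of B holds
# one voxel's regression weights for the stimulus dimensions.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B)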

