Similar Articles
19 similar articles found
1.
The development of auditory perception in infancy has important implications for infants' future language learning and socialization. Most past research has focused on speech perception, and relatively few studies have considered non-speech perception; yet understanding the characteristics and mechanisms of non-speech perception would deepen our knowledge of auditory processing and child development. This article introduces three preferences in infant speech perception (for speech, for "infant-directed speech," and for the native language) and discusses non-speech sounds in three categories: music, human non-verbal vocalizations, and environmental sounds. Comparing the perception of these two broad classes of sound suggests that infants may show left-hemisphere lateralization for speech perception and right-hemisphere lateralization for music perception, although this remains controversial; three theoretical accounts, the domain-specific model, the cue-specific model, and the brain-network model, currently explain the cognitive mechanisms underlying this lateralization.

2.
Music is an art of hearing. In musical performance, the seamless coordination of singer and piano accompanist depends on the accompanist consciously using musical listening to achieve the desired accompaniment and to keep it consistent with the singer's content and emotion. In piano accompaniment it is therefore essential to emphasize the training of musical listening, to cultivate good musical perception, and to develop good listening habits. Musical listening refers to an understanding of the significance of each tone in a musical work, a feel for the subtle relationships between tones, and abilities of musical memory and imitation [1]. Music is an art of hearing that conveys musical imagery to the audience through sound...

3.
Taking second-year junior high school students as participants, this study examined how musical experience facilitates verbal processing at different levels. The findings: at the level of acoustic signal processing, musical experience positively promotes the development of both pitch and duration perception; at the level of phonological awareness, its facilitation is limited to tone awareness; and in verbal memory, musical experience promotes the retention of verbal material in long-term memory.

4.
Aging brings decline in the auditory system and in cognitive function. Older adults show weakened speech comprehension and difficulty parsing prosodic information. Their perception of linguistic prosody such as stress, intonation, and speech rate deteriorates, and their processing of emotional prosody is also impaired, with the processing of negative emotional prosody declining especially quickly. Age-related diseases further increase the difficulty of prosodic processing, and prosodic perception shows correlations with specific diseases. Future research should examine the prosodic perception and underlying mechanisms of older populations with different language backgrounds, the influence of complex communicative environments, the predictive value of prosodic-perception deficits for age-related diseases, and early intervention and rehabilitation for prosodic perception.

5.
Wu Meihong. Acta Psychologica Sinica, 2023, 55(1): 94-105
Dynamic fundamental frequency (F0) contours aid speech recognition in noisy environments and can serve as a perceptual cue for segregating target speech from background sounds. This study assessed the speech recognition of older and younger adults listening, under speech masking, to Mandarin sentences with natural dynamic F0 contours versus manipulated F0 contours, to examine how aging affects the unmasking benefit of F0 contour cues in Mandarin speech recognition. The results showed that under speech masking, natural dynamic F0 contours helped young adults resist informational masking and identify target speech better than flattened or stretched F0 contours did, whereas older adults had difficulty benefiting from dynamic F0 contour cues. The findings reveal how aging affects older adults' ability to use F0 contour cues to facilitate speech perception under masking.

6.
Solfeggio and ear training is a foundational discipline for the systematic development of musical listening. Through singing and listening exercises it trains students' intonation, sense of rhythm, musicality, and inner hearing, aiming to cultivate their abilities in musical perception, aural discrimination and memory, and sight-singing, their understanding of the role each musical element plays in musical expression, and their accumulation of musical vocabulary. It also involves and puts into practice basic music theory, elementary harmony, and musical form, laying a solid foundation for the study of composition, vocal music, piano, and other courses. It is a foundational discipline of music education in which multiple subjects intersect and knowledge structures interpenetrate.

7.
Hearing is a human instinct, but those who study music must possess, beyond ordinary hearing, a musical ear. A musical ear can be acquired only through scientific guidance, systematic and progressive study, gradual accumulation, and rigorous training. Solfeggio and ear training form the foundation for cultivating musical listening, and the most important goal in teaching them is to help students develop good inner hearing.

8.
Statistical learning, a prerequisite for rapidly acquiring the informational regularities of one's environment, helps individuals adapt to that environment at low cost. Musical training, a reinforcing activity that engages multiple senses, is widely regarded as an effective means of enhancing cognitive abilities. With the central role of statistical learning in musical adaptation established and the effects of musical training repeatedly verified, recent studies have demonstrated, at the levels of both behavior and brain mechanisms, that musical training can heighten individuals' sensitivity to statistical regularities in the input and enhance auditory statistical learning. However, findings remain inconsistent on whether musical training can facilitate statistical learning in other modalities such as vision. Future research should further explore the facilitating effect of musical training on cross-modal statistical learning, and could apply musical-training interventions to novice participants to clarify the causal relationship between musical training and enhanced statistical learning across modalities.

9.
The brainstem evoked potential is a non-invasive technique for examining the neural activity of the auditory brainstem as it processes sound signals, and it has been widely used in recent years to explore the neural basis of speech perception. Research has focused on the brainstem activity patterns and developmental trajectories of speech encoding in adults and typically developing children, and on the speech-encoding deficits, and their neural manifestations, of developmental dyslexia and other language impairments. Building on this work, future applications of the technique in speech-perception research will focus on the interaction between low-level speech encoding and higher-level speech processing, and on the underlying neural basis of dyslexia.

10.
Wang Pei, Zhang Lanxin. Journal of Psychological Science, 2013, 36(5): 1078-1084
Research on the relationship between the neural bases of music and language processing has developed rapidly in recent years and attracted growing attention. The "shared syntactic integration resource hypothesis" holds that syntactic processing in music and syntactic processing in language share neural resources to a large degree. The ELAN, an ERP component reflecting syntactic violations in auditory language experiments, closely resembles the ERAN elicited by musical syntactic violations; the only difference lies in their scalp distributions, the ERAN resembling a bilaterally symmetric ELAN. Moreover, the ERAN is elicited regardless of whether listeners have had musical training, although musician participants show larger ERAN amplitudes. Some studies have identified the N400 and N500 as neural correlates of musical semantic processing: the former can be elicited by both musical and linguistic stimuli, whereas the latter is elicited only by the processing of musical meaning. Whether pitch perception in music and pitch perception in language share neural resources, however, remains unresolved.

11.
Age-related decline in auditory perception reflects changes in the peripheral and central auditory systems. These age-related changes include a reduced ability to detect minute spectral and temporal details in an auditory signal, which contributes to a decreased ability to understand speech in noisy environments. Given that musical training in young adults has been shown to improve these auditory abilities, we investigated the possibility that musicians experience less age-related decline in auditory perception. To test this hypothesis we measured auditory processing abilities in lifelong musicians (N = 74) and nonmusicians (N = 89), aged between 18 and 91. Musicians demonstrated less age-related decline in some auditory tasks (i.e., gap detection and speech in noise), and had a lifelong advantage in others (i.e., mistuned harmonic detection). Importantly, the rate of age-related decline in hearing sensitivity, as measured by pure-tone thresholds, was similar between both groups, demonstrating that musicians experience less age-related decline in central auditory processing.

12.
The current study addressed the question of whether audiovisual (AV) speech can improve speech perception in older and younger adults in a noisy environment. Event-related potentials (ERPs) were recorded to investigate age-related differences in the processes underlying AV speech perception. Participants performed an object categorization task in three conditions, namely auditory-only (A), visual-only (V), and AV speech. Both age groups revealed an equivalent behavioral AV speech benefit over unisensory trials. ERP analyses revealed an amplitude reduction of the auditory P1 and N1 on AV speech trials relative to the summed unisensory (A + V) response in both age groups. These amplitude reductions are interpreted as an indication of multisensory efficiency, as fewer neural resources were recruited to achieve better performance. Of interest, the observed P1 amplitude reduction was larger in older adults. Younger and older adults also showed an earlier auditory N1 in AV speech relative to A and A + V trials, an effect that was again greater in the older adults. The degree of multisensory latency shift was predicted by basic auditory functioning (i.e., higher hearing thresholds were associated with larger latency shifts) in both age groups. Together, the results show that AV speech processing is not only intact in older adults, but that the facilitation of neural responses occurs earlier and to a greater extent than in younger adults. Thus, older adults appear to benefit more from additional visual speech cues than younger adults, possibly to compensate for more impoverished unisensory inputs because of sensory aging.

13.
We investigated whether musical competence was associated with the perception of foreign-language phonemes. The sample comprised adult native speakers of English who varied in music training. The measures included tests of general cognitive abilities, melody and rhythm perception, and the perception of consonantal contrasts that were phonemic in Zulu but not in English. Music training was associated positively with performance on the tests of melody and rhythm perception, but not with performance on the phoneme-perception task. In other words, we found no evidence for transfer of music training to foreign-language speech perception. Rhythm perception was not associated with the perception of Zulu clicks, but such an association was evident when the phonemes sounded more similar to English consonants. Moreover, it persisted after controlling for general cognitive abilities and music training. By contrast, there was no association between melody perception and phoneme perception. The findings are consistent with proposals that music and speech perception rely on similar mechanisms of auditory temporal processing, and that this overlap is independent of general cognitive functioning. They provide no support, however, for the idea that music training improves speech perception.

14.
The natural rhythms of speech help a listener follow what is being said, especially in noisy conditions. There is increasing evidence for links between rhythm abilities and language skills; however, the role of rhythm-related expertise in perceiving speech in noise is unknown. The present study assesses musical competence (rhythmic and melodic discrimination), speech-in-noise perception and auditory working memory in young adult percussionists, vocalists and non-musicians. Outcomes reveal that better ability to discriminate rhythms is associated with better sentence-in-noise (but not words-in-noise) perception across all participants. These outcomes suggest that sensitivity to rhythm helps a listener understand unfolding speech patterns in degraded listening conditions, and that observations of a “musician advantage” for speech-in-noise perception may be mediated in part by superior rhythm skills.

15.
To what extent do infants represent the absolute pitches of complex auditory stimuli? Two experiments with 8-month-old infants examined the use of absolute and relative pitch cues in a tone-sequence statistical learning task. The results suggest that, given unsegmented stimuli that do not conform to the rules of musical composition, infants are more likely to track patterns of absolute pitches than of relative pitches. A third experiment tested adults with or without musical training on the same statistical learning tasks used in the infant experiments. Unlike the infants, adult listeners relied primarily on relative pitch cues. These results suggest a shift from an initial focus on absolute pitch to the eventual dominance of relative pitch, which, it is argued, is more useful for both music and speech processing.

16.
Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing – the building blocks of speech – and whether audio–motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data were collected from two groups: one that received audio–motor training, and one that did not. We hypothesised that 1) phonological processing would be enhanced in matching conditions, and 2) audio–motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, thus revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio–motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal processing brain resources.

17.
Vatakis A, Spence C. Perception, 2008, 37(1): 143-160
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli, not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright-visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted, and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (i.e., by changes in orientation).

18.
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners’ auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants’ susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners’ McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

19.
Four experiments are reported investigating previous findings that speech perception interferes with concurrent verbal memory but difficult nonverbal perceptual tasks do not, to any great degree. The forgetting produced by processing noisy speech could not be attributed to task difficulty, since equally difficult nonspeech tasks did not produce forgetting, and the extent of forgetting produced by speech could be manipulated independently of task difficulty. The forgetting could not be attributed to similarity between memory material and speech stimuli, since clear speech, analyzed in a simple and probably acoustically mediated discrimination task, produced little forgetting. The forgetting could not be attributed to a combination of similarity and difficulty, since a very easy speech task involving clear speech produced as much forgetting as noisy speech tasks, as long as overt reproduction of the stimuli was required. By assuming that noisy speech and overtly reproduced speech are processed at a phonetic level but that clear, repetitive speech can be processed at a purely acoustic level, the forgetting produced by speech perception could be entirely attributed to the level at which the speech was processed. In a final experiment, results were obtained which suggest that if prior set induces processing of noisy and clear speech at comparable levels, the difference between the effects of noisy speech processing and clear speech processing on concurrent memory is completely eliminated.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号