Similar articles
20 similar articles found.
1.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non‐speech sounds. In this study, we investigated rhythmic perception of non‐linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants’ biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non‐linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.  相似文献   

2.
Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction – vocal utterances to musical sounds, not vice-versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds where emotion processing of vocal sounds encompasses musical sounds but not vice-versa.  相似文献   

3.
What are the object properties that serve as a basis for the musical instrument classification system, and how do general and specific experience affect knowledge of these properties? In the first study, the multimodal quality of properties underlying children's and adults' perception was investigated. Subjects listened to solos and identified instruments producing the sounds. Even children who did not have experience with all the instruments correctly identified the family of instruments they were listening to. The hypothesis of the second study, that musical instrument families function as a "basic level" in the instrument taxonomy, was confirmed. Variation in the basic level with varying expertise was documented in the third study with musicians. In the fourth study, children and adults identified the source of sounds from unfamiliar objects, Chinese musical instruments. It is suggested that the concept of affordances may be relevant for understanding the importance for behavior of different levels of abstraction of category systems.  相似文献   

4.
Rhythm and pitch in music cognition
Rhythm and pitch are the 2 primary dimensions of music. They are interesting psychologically because simple, well-defined units combine to form highly complex and varied patterns. This article brings together the major developments in research on how these dimensions are perceived and remembered, beginning with psychophysical results on time and pitch perception. Progressively larger units are considered, moving from basic psychological categories of temporal and frequency ratios, to pulse and scale, to metrical and tonal hierarchies, to the formation of musical rhythms and melodies, and finally to the cognitive representation of large-scale musical form. Interactions between the dimensions are considered, and major theoretical proposals are described. The article identifies various links between musical structure and perceptual and cognitive processes, suggesting psychological influences on how sounds are patterned in music.  相似文献   

5.
The perceptual restoration of musical sounds was investigated in 5 experiments with Samuel's (1981a) discrimination methodology. Restoration in familiar melodies was compared to phonemic restoration in Experiment 1. In the remaining experiments, we examined the effect of expectations (generated by familiarity, predictability, and musical schemata) on musical restoration. We investigated restoration in melodies by comparing familiar and unfamiliar melodies (Experiment 2), as well as unfamiliar melodies varying in tonal and rhythmic predictability (Experiment 3). Expectations based on both familiarity and predictability were found to reduce restoration at the melodic level. Restoration at the submelodic level was investigated with scales and chords in Experiments 4 and 5. At this level, key-based expectations were found to increase restoration. Implications for music perception, as well as similarities between restoration in music and speech, are discussed.  相似文献   

6.
A software system developed with HyperCard has been designed to support research relying on musical stimuli. The software accesses a set of digitized sounds in memory that consist of chromatic scale notes for the piano, harp, and guitar. Menu options allow experimenters to create monophonic melodies, to arrange melodies in a prescribed order, and to present them in a flexible format for a variety of psychological tasks. In addition to supporting experimental research projects, the software can also be used to demonstrate certain fundamental principles of auditory perception and cognitive psychology.  相似文献   

7.
Six musicians with relative pitch judged 13 tonal intervals in a magnitude estimation task. Stimuli were spaced in .2-semitone increments over a range spanning three standard musical categories (fourth, tritone, fifth). The judged magnitude of the intervals did not increase regularly with stimulus magnitude. Rather, the psychophysical functions showed three discrete steps corresponding to the musically defined intervals. Although all six subjects had identified in-tune intervals with >95% accuracy, all were very poor at differentiating within a musical category—they could not reliably tell “sharp” from “flat.” After the experiment, they judged 63% of the stimuli to be “in tune,” but in fact only 23% were musically accurate. In a subsequent labeling task, subjects produced identification functions with sharply defined boundaries between each of the three musical categories. Our results parallel those associated with the identification and scaling of speech sounds, and we interpret them as evidence for categorical perception of music.
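As a rough illustration of how sharply bounded identification functions like those reported here are typically quantified, the sketch below fits a logistic curve to labeling proportions along a 0.2-semitone continuum. It is not the authors' analysis; the continuum endpoints and the response data are simulated for illustration only.

```python
# Illustrative sketch (not the authors' analysis): fit a logistic identification
# function to simulated labeling data from a 0.2-semitone interval continuum
# and estimate the category boundary and its sharpness.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'fifth' responses as a function of interval size (semitones)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical continuum from tritone (6.0 st) to fifth (7.0 st) in 0.2-st steps.
interval = np.arange(6.0, 7.01, 0.2)

# Simulated response proportions with a sharp boundary near 6.5 st.
rng = np.random.default_rng(0)
p_true = logistic(interval, 6.5, 12.0)
p_obs = np.clip(p_true + rng.normal(0, 0.03, interval.size), 0, 1)

# Fit: x0 estimates the category boundary, k its sharpness.
(x0, k), _ = curve_fit(logistic, interval, p_obs, p0=[6.5, 5.0])
print(f"Estimated boundary: {x0:.2f} semitones, slope: {k:.1f}")
```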

8.
The INDSCAL multidimensional scaling model was used to investigate the distinctive features involved in the perception of 16 complex nonspeech sounds. The signals differed along four physical dimensions: fundamental frequency, waveform, formant frequency, and number of formants. Scaling results indicated that subjects’ similarity ratings could be accounted for by three psychological or perceptual dimensions. A statistically reliable correspondence was observed between these perceptual dimensions and the physical characteristics of fundamental frequency, waveform, and a combination of the two formant parameters. These results were further explored with Johnson’s (1967) hierarchical clustering analysis. Large differences in featural saliency occurred in the group data with fundamental accounting for more variability than the remaining dimensions. Further analysis of individual subject data revealed large individual differences in featural saliency. These differences were related to past musical experience of the subject and to earlier findings using similar signals. It was concluded that (1) the INDSCAL model provides a useful method for the analysis of auditory perception in the nonspeech mode, and (2) featural saliency in such sounds is likely to be determined by an unspecified attentional mechanism.  相似文献   
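INDSCAL differs from ordinary multidimensional scaling in that it fits subject-specific dimension weights, which is what makes individual differences in featural saliency recoverable from the solution. The simplified sketch below runs ordinary non-metric MDS on a single pooled dissimilarity matrix, so it omits the individual-weighting step that defines INDSCAL; the 16-sound dissimilarity matrix is a placeholder for ratings data.

```python
# Simplified sketch: ordinary non-metric MDS on a pooled dissimilarity matrix.
# INDSCAL additionally fits per-subject dimension weights; that step is omitted here.
import numpy as np
from sklearn.manifold import MDS

n_sounds = 16
rng = np.random.default_rng(1)

# Placeholder: a symmetric dissimilarity matrix averaged over listeners
# (in practice, derived from pairwise similarity ratings of the 16 sounds).
d = rng.random((n_sounds, n_sounds))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Recover a 3-dimensional perceptual space, as in the scaling results described above.
mds = MDS(n_components=3, dissimilarity="precomputed", metric=False, random_state=0)
coords = mds.fit_transform(dissim)  # shape: (16, 3)
print(coords.shape, f"stress = {mds.stress_:.3f}")
```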

9.
In this article I consider the relationship between natural sounds and music. I evaluate two prominent accounts of this relationship. These accounts satisfy an important condition, the difference condition: musical sounds are different from natural sounds. However, they fail to meet an equally important condition, the interaction condition: musical sounds and natural sounds can interact in aesthetically important ways to create unified aesthetic objects. I then propose an alternative account of the relationship between natural sounds and music that meets both conditions. I argue that natural sounds are distinct from music in that they express a kind of alterity or “otherness,” which occurs in two ways. It occurs referentially, because the sources of natural sounds are natural objects rather than artifactual objects, such as instruments; it also occurs acoustically, because natural sounds tend to contain more microtones than macrotones. On my account, the distinction between music and natural sounds is both conventional and vague; it therefore allows music and natural sounds to come together.  相似文献   

10.
It has been suggested that the basic building blocks of music mimic sounds of moving humans, and because the brain was primed to exploit such sounds, they eventually became incorporated in human culture. However, that raises further questions. Why do genetically close, culturally well-developed apes lack musical abilities? Did our switch to bipedalism influence the origins of music? Four hypotheses are raised: (1) Human locomotion and ventilation can mask critical sounds in the environment. (2) Synchronization of locomotion reduces that problem. (3) Predictable sounds of locomotion may stimulate the evolution of synchronized behavior. (4) Bipedal gait and the associated sounds of locomotion influenced the evolution of human rhythmic abilities. Theoretical models and research data suggest that noise of locomotion and ventilation may mask critical auditory information. People often synchronize steps subconsciously. Human locomotion is likely to produce more predictable sounds than those of non-human primates. Predictable locomotion sounds may have improved our capacity of entrainment to external rhythms and to feel the beat in music. A sense of rhythm could aid the brain in distinguishing among sounds arising from discrete sources and also help individuals to synchronize their movements with one another. Synchronization of group movement may improve perception by providing periods of relative silence and by facilitating auditory processing. The adaptive value of such skills to early ancestors may have been keener detection of prey or stalkers and enhanced communication. Bipedal walking may have influenced the development of entrainment in humans and thereby the evolution of rhythmic abilities.  相似文献   

11.
Rhythm perception seems to be crucial to language development. Many studies have shown that children with developmental dyslexia and developmental language disorder have difficulties in processing rhythmic structures. In this study, we investigated the relationships between prosodic and musical processing in Italian children with typical and atypical development. The tasks aimed to reproduce linguistic prosodic structures through musical sequences, offering a direct comparison between the two domains without violating the specificities of each. Sixteen typically developing (TD) children, 16 children with a diagnosis of developmental dyslexia, and 16 with a diagnosis of developmental language disorder (ages 10–13 years) participated in the experimental study. Three tasks were administered: an association task between a sentence and its humming version, a stress discrimination task (between pairs of sounds reproducing the intonation of Italian trisyllabic words), and an association task between trisyllabic nonwords with different stress positions and three-note musical sequences with different musical stress. Children with developmental language disorder performed significantly worse than TD children on the humming test. By contrast, children with developmental dyslexia were significantly slower than TD children in associating nonwords with musical sequences. Accuracy and speed in the experimental tests correlated with metaphonological, language, and word-reading scores. Theoretical and clinical implications are discussed within a multidimensional model of neurodevelopmental disorders that includes prosodic and rhythmic skills at the word and sentence level.

12.
This study explored the influence of several factors, physical and human, on anisochrony thresholds measured with an adaptive two-alternative forced-choice paradigm. The first experiment tested the effect of the number and duration of sounds on anisochrony discrimination, as well as potential interactions between each of these factors and tempo. In the second experiment, the tempo, or inter-onset interval (IOI), was varied systematically between 80 and 1000 ms. The results showed that just noticeable differences increase linearly and proportionally with IOI, in accordance with Weber's law, except at the quickest tempo (IOI of 80 ms). The third experiment investigated the role of musical training on anisochrony thresholds obtained for different IOIs, focusing on differential effects of musical experience by comparing the thresholds of non-musicians, instrumentalists, and percussionists. The results replicated previous findings regarding the adequacy of Weber's law for slow rhythms and provided evidence for its departure at fast tempos. Moreover, percussionists' thresholds were distinguishable from those of other listeners by their higher sensitivity to temporal shifts, suggesting the need to control for the nature of musical experience. The results are discussed in relation to current models of time perception.
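Weber's law predicts that the just noticeable difference grows in proportion to the inter-onset interval; the departure at fast tempi described above is often modeled by adding a constant timing floor. The sketch below is a generic illustration of both forms, not the authors' model, and the parameter values are assumptions.

```python
# Illustrative sketch: Weber's law for anisochrony detection, with and without
# a constant noise floor that captures the departure observed at fast tempi.
# Parameter values (k, t_min_ms) are assumptions for illustration only.
import numpy as np

def jnd_weber(ioi_ms, k=0.05):
    """Strict Weber's law: threshold proportional to the inter-onset interval."""
    return k * ioi_ms

def jnd_generalized(ioi_ms, k=0.05, t_min_ms=6.0):
    """Generalized Weber's law: a constant timing floor dominates at short IOIs."""
    return np.sqrt((k * ioi_ms) ** 2 + t_min_ms ** 2)

iois = np.array([80, 200, 400, 600, 800, 1000])  # tempi tested, in ms
for ioi, w, g in zip(iois, jnd_weber(iois), jnd_generalized(iois)):
    print(f"IOI {ioi:4d} ms  Weber JND {w:5.1f} ms  generalized JND {g:5.1f} ms")
```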

13.
Auditory imagery has begun to receive attention in recent years, and the relevant research covers three types: auditory imagery for speech sounds, for musical sounds, and for environmental sounds. This article reviews cognitive neuroscience research on the brain regions activated by these three types of auditory imagery, compares the similarities and differences between the brain regions involved in auditory imagery and in auditory perception, and outlines future directions for research on auditory imagery.

14.
This essay analyzes the historical development of otology in relation to music. It illustrates the integral role of music perception and appreciation in the study of hearing, where hearing operates not simply as a scientific phenomenon but signifies particular meaningful experiences in society. The four historical moments considered—Helmholtz’s piano-keyed cochlea, the ear phonautograph, the hearing aid, and the cochlear implant—show how the sounds, perceptions, and instruments of music have mediated and continue to mediate our relationships with hearing. To have an ear, one does not just bear a physiological hearing mechanism; one experiences the aesthetics of musical sound.  相似文献   

15.
Music listening often entails spontaneous perception and body movement to a periodic pulse-like meter. There is increasing evidence that this cross-cultural ability relates to neural processes that selectively enhance metric periodicities, even when these periodicities are not prominent in the acoustic stimulus. However, whether these neural processes emerge early in development remains largely unknown. Here, we recorded the electroencephalogram (EEG) of 20 healthy 5- to 6-month-old infants, while they were exposed to two rhythms known to induce the perception of meter consistently across Western adults. One rhythm contained prominent acoustic periodicities corresponding to the meter, whereas the other rhythm did not. Infants showed significantly enhanced representations of meter periodicities in their EEG responses to both rhythms. This effect is unlikely to reflect the tracking of salient acoustic features in the stimulus, as it was observed irrespective of the prominence of meter periodicities in the audio signals. Moreover, as previously observed in adults, the neural enhancement of meter was greater when the rhythm was delivered by low-pitched sounds. Together, these findings indicate that the endogenous enhancement of metric periodicities beyond low-level acoustic features is a neural property that is already present soon after birth. These high-level neural processes could set the stage for internal representations of musical meter that are critical for human movement coordination during rhythmic musical behavior.

Research Highlights

  • 5- to 6-month-old infants were presented with auditory rhythms that induce the perception of a periodic pulse-like meter in adults.
  • Infants showed selective enhancement of EEG activity at meter-related frequencies irrespective of the prominence of these frequencies in the stimulus.
  • Responses at meter-related frequencies were boosted when the rhythm was conveyed by bass sounds.
  • High-level neural processes that transform rhythmic auditory stimuli into internal meter templates emerge early after birth.
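The enhancement of meter periodicities reported above is usually quantified with a frequency-tagging analysis: the trial-averaged EEG is Fourier-transformed, and amplitude at meter-related frequencies is compared with amplitude at the other frequencies present in the rhythmic pattern. The sketch below illustrates only that general approach; it is not the authors' pipeline, and the sampling rate, frequency sets, and simulated signal are assumptions.

```python
# Generic frequency-tagging sketch: compare spectral amplitude at meter-related
# frequencies with amplitude at other rhythm-related frequencies.
# Sampling rate, frequency sets, and the simulated EEG segment are assumptions.
import numpy as np

fs = 512.0          # sampling rate (Hz), assumed
duration = 33.6     # length of the trial-averaged epoch (s), assumed
t = np.arange(0, duration, 1 / fs)

# Placeholder for a trial-averaged EEG channel; real data would replace this.
rng = np.random.default_rng(2)
eeg = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 1.25 * t)

# Amplitude spectrum of the averaged epoch.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(target_hz):
    """Amplitude at the frequency bin closest to target_hz."""
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

meter_freqs = [1.25, 2.5]            # example meter-related frequencies (Hz)
other_freqs = [0.625, 1.875, 3.125]  # example rhythm-related, non-meter frequencies

meter_amp = np.mean([amp_at(f) for f in meter_freqs])
other_amp = np.mean([amp_at(f) for f in other_freqs])
print(f"meter-related / other amplitude ratio: {meter_amp / other_amp:.2f}")
```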

16.
刘文理  乐国安 《心理学报》2012,44(5):585-594
Using a priming paradigm with native Mandarin listeners, this study examined whether non-speech sounds affect the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant category continuum and found that the pure tones influenced perception of the continuum, showing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that pure or complex tones matching the vowels' formant frequencies speeded vowel identification, showing a priming effect. Both experiments consistently demonstrated that non-speech sounds can influence the perception of speech sounds, indicating that speech perception also requires a pre-linguistic stage of spectral feature analysis, consistent with the auditory theory of speech perception.

17.
With musically experienced and inexperienced students in Grade 4 of primary school, Grade 8 (second year of junior high), and Grade 11 (second year of senior high) as participants, two experiments used rating-scale judgments to have participants judge the tension of melodic excerpts, without and with conceptual prompts respectively, in order to examine the effects of conceptual prompts and musical experience on the perception of musical tension. The results showed that: (1) the perception of musical tension develops continuously with individual maturation, changing substantially between Grade 4 and Grade 8 and stabilizing by Grade 11, and musical training facilitated tension perception only for the Grade 4 participants; (2) conceptual prompts helped the Grade 4 participants understand the concept of musical tension but provided no significant benefit for the Grade 11 participants, indicating that mastery of the concept of musical tension

18.
Neurocognitive studies have shown that extensive musical training enhances P3a and P3b event-related potentials for infrequent target sounds, which reflects stronger attention switching and stimulus evaluation in musicians than in nonmusicians. However, it is unknown whether the short-term plasticity of P3a and P3b responses is also enhanced in musicians. We compared the short-term plasticity of P3a and P3b responses to infrequent target sounds in musicians and nonmusicians during auditory perceptual learning tasks. Target sounds, deviating in location, pitch, and duration with three difficulty levels, were interspersed among frequently presented standard sounds in an oddball paradigm. We found that during passive exposure to sounds, musicians had habituation of the P3a, while nonmusicians showed enhancement of the P3a between blocks. Between active tasks, P3b amplitudes for duration deviants were reduced (habituated) in musicians only, and showed a more posterior scalp topography for habituation when compared to P3bs of nonmusicians. In both groups, the P3a and P3b latencies were shortened for deviating sounds. Also, musicians were better than nonmusicians at discriminating target deviants. Regardless of musical training, better discrimination was associated with higher working memory capacity. We concluded that music training enhances short-term P3a/P3b plasticity, indicating training-induced changes in attentional skills.  相似文献   

19.
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory–motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual–motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice.  相似文献   

20.
This paper first reviews briefly the literature on the acoustics of infant cry sounds and then presents two empirical studies on the perception of cry and noncry sounds in their social-communicative context. Acoustic analysis of cry sounds has undergone dramatic changes in the last 35 years, including the introduction of more than a hundred different acoustic measures. The study of cry acoustics, however, remains largely focused on neonates who have various medical problems or are at risk for developmental delays. Relatively little is known about how cry sounds and cry perception change developmentally, or about how they compare with noncry sounds. The data presented here support the notion that both auditory and visual information are important in caregivers' interpretations of infant sounds in naturalistic contexts. When only auditory information is available (Study 1), cry sounds become generally more recognizable from 3 to 12 months of age; perception of noncry sounds, however, generally does not change over age. When auditory and visual information contradict each other (Study 2), adults tend to perform at chance levels, with a few interesting exceptions. It is suggested that broadening studies of acoustic analysis and perception to include both cry and noncry sounds should increase our understanding of the development of communication in infancy. Finally, we suggest that examining the cry in its developmental context holds great possibility for delineating the factors that underlie adults' responses to crying.  相似文献   
