Similar Articles
20 similar articles retrieved.
1.
This paper is an exploration of how we do things with music—that is, the way that we use music as an “esthetic technology” to enact micro-practices of emotion regulation, communicative expression, identity construction, and interpersonal coordination that drive core aspects of our emotional and social existence. The main thesis is: from birth, music is directly perceived as an affordance-laden structure. Music, I argue, affords a sonic world, an exploratory space or “nested acoustic environment” that further affords possibilities for, among other things, (1) emotion regulation and (2) social coordination. When we do things with music, we are engaged in the work of creating and cultivating the self, as well as creating and cultivating a shared world that we inhabit with others. I develop this thesis by first introducing the notion of a “musical affordance”. Next, I look at how “emotional affordances” in music are exploited to construct and regulate emotions. I summon empirical research on neonate music therapy to argue that this is something we emerge from the womb knowing how to do. I then look at “social affordances” in music, arguing that joint attention to social affordances in music alters how music is both perceived and appropriated by joint attenders within social listening contexts. In support, I describe the experience of listening to and engaging with music in a live concert setting. Thinking of music as an affordance-laden structure thus reaffirms the crucial role that music plays in constructing and regulating emotional and social experiences in everyday life.

2.
Music is a sound art produced by higher-order conscious activity and is central to human emotional expression and communication. As the core element linking music and emotion, the basis of consonance remains unsettled. How do humans process harmonies built from multiple notes? Why do some chords sound consonant (pleasant) while others sound dissonant (unpleasant)? Is the sense of consonance a bottom-up acoustic percept or a top-down aesthetic experience? These questions have occupied scholars since ancient Greece: physicists have sought answers in the acoustic differences between consonance and dissonance, physiologists have examined how consonance arises from auditory physiology, and psychologists have asked whether the preference for consonant intervals is innate or acquired. At present, theories of musical consonance are grounded mainly in Western music, and empirical research on traditional Chinese folk music is urgently needed.

3.
Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners' affective reactions to excerpts of music from a wide variety of musical genres. The findings from 3 independent studies converged to suggest that there exists a latent 5-factor structure underlying music preferences that is genre free and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as (a) a Mellow factor comprising smooth and relaxing styles; (b) an Unpretentious factor comprising a variety of different styles of sincere and rootsy music such as is often found in country and singer-songwriter genres; (c) a Sophisticated factor that includes classical, operatic, world, and jazz; (d) an Intense factor defined by loud, forceful, and energetic music; and (e) a Contemporary factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and the auditory characteristics of the music.
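A minimal sketch (not the authors' procedure) of how a latent five-factor structure like the one described above could be recovered from listeners' affective ratings via exploratory factor analysis; the ratings matrix, its dimensions, and the random placeholder data below are hypothetical.

```python
# Sketch: recover a five-factor structure from affective ratings of music
# excerpts with exploratory factor analysis (placeholder data, not the study's).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: rows = listeners, columns = rated music excerpts.
ratings = rng.normal(size=(500, 50))

# Standardize each item before factoring.
z = StandardScaler().fit_transform(ratings)

# Five factors with varimax rotation, mirroring the latent 5-factor
# (MUSIC) structure described in the abstract above.
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
scores = fa.fit_transform(z)       # listeners' factor scores
loadings = fa.components_.T        # item-by-factor loadings

# Inspect which excerpts load most strongly on each factor.
for k in range(5):
    top = np.argsort(-np.abs(loadings[:, k]))[:5]
    print(f"Factor {k + 1}: top-loading items {top.tolist()}")
```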

4.
This article investigates how auditory attention affects inattentional blindness (IB), a failure of conscious awareness in which an observer does not notice an unexpected event because their attention is engaged elsewhere. Previous research using the attentional blink paradigm has indicated that listening to music can reduce failures of conscious awareness. It was proposed that listening to music would decrease IB by reducing observers’ frequency of task-unrelated thoughts (TUTs). Observers completed an IB task that varied both visual and auditory demands. Listening to music was associated with significantly lower IB, but only when observers actively attended to the music. Follow-up experiments suggest this was due to the distracting qualities of the audio task. The results also suggest a complex relationship between IB and TUTs: during demanding tasks, as predicted, noticers of the unexpected stimulus reported fewer TUTs than non-noticers. During less demanding tasks, however, noticers reported more TUTs than non-noticers.

5.
The human central auditory system has a remarkable ability to establish memory traces for invariant features in the acoustic environment despite continual acoustic variations in the sounds heard. By recording the memory-related mismatch negativity (MMN) component of the auditory electric and magnetic brain responses as well as behavioral performance, we investigated how subjects learn to discriminate changes in a melodic pattern presented at several frequency levels. In addition, we explored whether musical expertise facilitates this learning. Our data show that especially musicians who perform music primarily without a score learn easily to detect contour changes in a melodic pattern presented at variable frequency levels. After learning, their auditory cortex detects these changes even when their attention is directed away from the sounds. The present results thus show that, after perceptual learning during attentive listening has taken place, changes in a highly complex auditory pattern can be detected automatically by the human auditory cortex and, further, that this process is facilitated by musical expertise.

6.
Two studies examine the experience of ‘earworms’, unwanted catchy tunes that repeat. Survey data show that the experience is widespread but earworms are not generally considered problematic, although those who consider music to be important to them report earworms as longer, and harder to control, than those who consider music as less important. The tunes which produce these experiences vary considerably between individuals but are always familiar to those who experience them. A diary study confirms these findings and also indicates that, although earworm recurrence is relatively uncommon and unlikely to persist for longer than 24 h, the length of both the earworm and the earworm experience frequently exceed standard estimates of auditory memory capacity. Active attempts to block or eliminate the earworm are less successful than passive acceptance, consistent with Wegner's theory of ironic mental control.

7.
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non‐speech sounds. In this study, we investigated rhythmic perception of non‐linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants’ biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non‐linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.

8.
Immersion in the positive emotional experiences induced by electronic devices seriously affects adolescents' studies and daily life and has drawn wide public concern. This paper reports two studies examining how everyday emotional experience and degree of conscious effort affect self-control, and explores the underlying mechanisms. Study 1, a questionnaire survey of a random sample of 300 college students, found significant individual differences in everyday emotional experience, which was predominantly positive; emotional stimuli requiring low conscious effort were significantly more common than those requiring high conscious effort. Study 2 built on Study 1 by delivering a "conscious effort" intervention to 136 participants. The results showed that everyday emotional experience did not significantly predict self-control, whereas conscious effort positively predicted self-control, and a high level of conscious effort positively moderated the relationship between everyday emotional experience and self-control.

9.
Using a visual and an acoustic sample set that appeared to favour the auditory modality of the monkey subjects, in Experiment 1 retention gradients generated in closely comparable visual and auditory matching (go/no-go) tasks revealed a more durable short-term memory (STM) for the visual modality. In Experiment 2, potentially interfering visual and acoustic stimuli were introduced during the retention intervals of the auditory matching task. Unlike the case of visual STM, delay-interval visual stimulation did not affect auditory STM. On the other hand, delay-interval music decreased auditory STM, confirming that the monkeys maintained an auditory trace during the retention intervals. Surprisingly, monkey vocalizations injected during the retention intervals caused much less interference than music. This finding, which was confirmed by the results of Experiments 3 and 4, may be due to differential processing of “arbitrary” (the acoustic samples) and species-specific (monkey vocalizations) sounds by the subjects. Although less robust than visual STM, auditory STM was nevertheless substantial, even with retention intervals as long as 32 sec.

10.
Phillips-Silver and Trainor (Phillips-Silver, J., & Trainor, L. J. (2005). Feeling the beat: Movement influences infants' rhythm perception. Science, 308, 1430) demonstrated an early cross-modal interaction between body movement and auditory encoding of musical rhythm in infants. Here we show that the way adults move their bodies to music influences their auditory perception of the rhythm structure. We trained adults, while listening to an ambiguous rhythm with no accented beats, to bounce by bending their knees to interpret the rhythm either as a march or as a waltz. At test, adults identified as similar an auditory version of the rhythm pattern with accented strong beats that matched their previous bouncing experience in comparison with a version whose accents did not match. In subsequent experiments we showed that this effect does not depend on visual information, but that movement of the body is critical. Parallel results from adults and infants suggest that the movement-sound interaction develops early and is fundamental to music processing throughout life.

11.
Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.

12.
Duplex perception: a comparison of monosyllables and slamming doors
Duplex perception has been interpreted as revealing distinct systems for general auditory perception and speech perception. The systems yield distinct experiences of the same acoustic signal, the one conforming to the acoustic structure itself and the other to its source in vocal-tract activity. However, this interpretation has not been tested by examining whether duplex perception can be obtained for nonspeech sounds that are not plausibly perceived by a specialized system. In five experiments, we replicate some of the phenomena associated with duplex perception of speech using the sound of a slamming door. Similarities between subjects' responses to syllables and door sounds are striking enough to suggest that some conclusions in the speech literature should be tempered, in particular the conclusions that (a) duplex perception is special to sounds for which there are perceptual modules and (b) duplex perception occurs because distinct systems have rendered different percepts of the same acoustic signal.

13.
This paper deals with the presence and possible 'meaning' of music in dreams. The author explores a possible meaning of music as the most fundamental human symbolic experience, which directly points to the emergence of the Self from the primal union mystique with the Great Mother. The relationships between acoustic and visual experiences are taken into account as two basic human forms of coming into existence, although wholly different from each other. The role of music in dreams seems to be that of the most direct representation of the emerging Self in its pure, pre-representational form. Therefore, when music appears in dreams, providing there is the activation of an emotional tone, all other elements—visual and verbal—should be considered as the expression of the sense to which the music is pointing. A clinical example is described in order to better express the author's opinions.

14.
The power of music is a literary topos, which can be attributed to intense and personally significant experiences, one of them being the state of absorption. Such phenomenal states are difficult to grasp objectively. We investigated the state of musical absorption by using eye tracking. We utilized a load-related definition of state absorption: multimodal resources are committed to create a unified representation of music. Resource allocation was measured indirectly by microsaccade rate, known to indicate cognitive processing load. We showed in Exp. 1 that microsaccade rate also indicates state absorption. Hence, there is cross-modal coupling between an auditory aesthetic experience and fixational eye movements. When the fixational stimulus was removed in Exp. 2, saccades were no longer generated upon visual input and the cross-modal coupling disappeared. Results are interpreted in favor of the load hypothesis of microsaccade rate and against the assumption of general slowing by state absorption.

15.
In two experiments, we investigated how auditory–motor learning influences performers’ memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory–motor (normal performance), and weakly coupled auditory–motor (performing along with auditory recordings). Pianists’ recognition of the learned melodies was better following auditory-only or auditory–motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory–motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory–motor learning. These findings suggest that motor learning can aid performers’ auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

16.
Our understanding of human visual perception generally rests on the assumption that conscious visual states represent the interaction of spatial structures in the environment and our nervous system. This assumption is questioned by circumstances where conscious visual states can be triggered by external stimulation which is not primarily spatially defined. Here, subjective colors and forms are evoked by flickering light while the precise nature of those experiences varies over flicker frequency and phase. What's more, the occurrence of one subjective experience appears to be associated with the occurrence of others. While these data indicate that conscious visual experience may be evoked directly by particular variations in the flow of spatially unstructured light over time, it must be assumed that the systems responsible are essentially temporal in character and capable of representing a variety of visual forms and colors, coded in different frequencies or at different phases of the same processing rhythm.

17.
I argue that the neural realizers of experiences of trying (that is, experiences of directing effort towards the satisfaction of an intention) are not distinct from the neural realizers of actual trying (that is, actual effort directed towards the satisfaction of an intention). I then ask how experiences of trying might relate to the perceptual experiences one has while acting. First, I assess recent zombie action arguments regarding conscious visual experience, and I argue that contrary to what some have claimed, conscious visual experience plays a causal role for action control in some circumstances. Second, I propose a multimodal account of the experience of acting. According to this account, the experience of acting is (at the very least) a temporally extended, co‐conscious collection of agentive and perceptual experiences, functionally integrated and structured both by multimodal perceptual processing as well as by what an agent is, at the time, trying to do.

18.
Prior studies have observed selective neural responses in the adult human auditory cortex to music and speech that cannot be explained by the differing lower-level acoustic properties of these stimuli. Does infant cortex exhibit similarly selective responses to music and speech shortly after birth? To answer this question, we attempted to collect functional magnetic resonance imaging (fMRI) data from 45 sleeping infants (2.0- to 11.9-weeks-old) while they listened to monophonic instrumental lullabies and infant-directed speech produced by a mother. To match acoustic variation between music and speech sounds we (1) recorded music from instruments that had a similar spectral range as female infant-directed speech, (2) used a novel excitation-matching algorithm to match the cochleagrams of music and speech stimuli, and (3) synthesized “model-matched” stimuli that were matched in spectrotemporal modulation statistics to (yet perceptually distinct from) music or speech. Of the 36 infants we collected usable data from, 19 had significant activations to sounds overall compared to scanner noise. From these infants, we observed a set of voxels in non-primary auditory cortex (NPAC) but not in Heschl's Gyrus that responded significantly more to music than to each of the other three stimulus types (but not significantly more strongly than to the background scanner noise). In contrast, our planned analyses did not reveal voxels in NPAC that responded more to speech than to model-matched speech, although other unplanned analyses did. These preliminary findings suggest that music selectivity arises within the first month of life. A video abstract of this article can be viewed at https://youtu.be/c8IGFvzxudk .

Research Highlights

  • Responses to music, speech, and control sounds matched for the spectrotemporal modulation-statistics of each sound were measured from 2- to 11-week-old sleeping infants using fMRI.
  • Auditory cortex was significantly activated by these stimuli in 19 out of 36 sleeping infants.
  • Selective responses to music compared to the three other stimulus classes were found in non-primary auditory cortex but not in nearby Heschl's Gyrus.
  • Selective responses to speech were not observed in planned analyses but were observed in unplanned, exploratory analyses.
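As a rough illustration of the excitation matching described in the abstract above (not the study's algorithm, which also involved model-matched synthesis), the sketch below equates the per-channel average energy of a music excerpt's cochleagram-like representation with that of a speech excerpt; the mel spectrogram stands in for a cochleagram, and the file names are hypothetical.

```python
# Simplified sketch: match per-channel average energy of a music excerpt's
# cochleagram-like representation to that of an infant-directed-speech excerpt.
import librosa
import numpy as np

# Hypothetical file names; substitute real recordings.
speech, sr = librosa.load("infant_directed_speech.wav", sr=16000)
music, _ = librosa.load("lullaby_instrumental.wav", sr=16000)

def cochleagram_like(y, sr, n_mels=64):
    """Mel spectrogram used here as a rough stand-in for a cochleagram."""
    return librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)

S_speech = cochleagram_like(speech, sr)
S_music = cochleagram_like(music, sr)

# Per-channel gain that equates average excitation across the two stimuli.
eps = 1e-10
gain = (S_speech.mean(axis=1) + eps) / (S_music.mean(axis=1) + eps)
S_music_matched = S_music * gain[:, None]

# After matching, each frequency channel carries the same mean energy.
print(np.allclose(S_music_matched.mean(axis=1), S_speech.mean(axis=1)))
```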

19.
Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds—extensively edited samples produced by a French horn and a tenor saxophone—following either resynthesized speech or a short passage of music. Preceding contexts were “colored” by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
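A hedged sketch (not the authors' implementation) of how a spectral envelope difference filter of the sort described above might be constructed and used to "color" a preceding context sound; the file names, gain cap, and filter length are illustrative assumptions, and mono audio at a common sample rate is assumed.

```python
# Sketch: derive a spectral-envelope difference filter from French horn vs.
# tenor saxophone samples and apply it to a context passage (hypothetical files).
import numpy as np
import scipy.signal as sig
import librosa

sr = 22050
horn, _ = librosa.load("french_horn.wav", sr=sr)        # mono, resampled
sax, _ = librosa.load("tenor_sax.wav", sr=sr)
context, _ = librosa.load("context_passage.wav", sr=sr)  # assumed several seconds long

def envelope_db(y, sr, nperseg=2048):
    """Long-term average spectrum in dB, a rough spectral envelope."""
    f, pxx = sig.welch(y, fs=sr, nperseg=nperseg)
    return f, 10 * np.log10(pxx + 1e-12)

f, horn_db = envelope_db(horn, sr)
_, sax_db = envelope_db(sax, sr)

# Difference filter: emphasize regions where the horn spectrum exceeds the
# saxophone spectrum; the +/-20 dB cap is an arbitrary illustrative limit.
diff_db = np.clip(horn_db - sax_db, -20.0, 20.0)
gains = 10.0 ** (diff_db / 20.0)

# Zero-phase FIR filtering of the context with the horn-vs-sax gain profile.
fir = sig.firwin2(1025, f, gains, fs=sr)
horn_colored_context = sig.filtfilt(fir, [1.0], context)
print(horn_colored_context.shape)
```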

20.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
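A minimal sketch (not the authors' model) of how a subset of the seven psychoacoustic features could be computed with librosa and regressed onto continuous emotion ratings; tempo/speech rate, melody/prosody contour, sharpness, and roughness are omitted because they need dedicated estimators, and the audio file and placeholder ratings are hypothetical.

```python
# Sketch: frame-level acoustic features regressed onto continuous emotion
# ratings (placeholder target, hypothetical file; not the study's model).
import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

y, sr = librosa.load("film_music_excerpt.wav", sr=22050)
hop = 512

# Frame-level features, all computed with the same hop size.
rms = librosa.feature.rms(y=y, hop_length=hop)[0]                        # loudness proxy
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
flux = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)          # spectral-flux proxy

n = min(len(rms), len(centroid), len(flux))
X = np.column_stack([rms[:n], centroid[:n], flux[:n]])

# Placeholder for continuous (e.g. arousal) ratings resampled to the frame rate.
ratings = np.random.default_rng(0).normal(size=n)

model = LinearRegression().fit(X, ratings)
print("R^2 on the (placeholder) ratings:", model.score(X, ratings))
```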
