Query returned 20 similar documents (search time: 15 ms)
1.
This experiment was conducted to investigate cross-modal interactions in the emotional experience of music listeners. Previous research showed that visual information present in a musical performance is rich in expressive content and moderates the subjective emotional experience of a participant listening to and/or observing musical stimuli [Vines, B. W., Krumhansl, C. L., Wanderley, M. M., & Levitin, D. J. (2006). Cross-modal interactions in the perception of musical performance. Cognition, 101, 80–113.]. The goal of this follow-up experiment was to replicate this cross-modal interaction by investigating the objective, physiological aspect of emotional response to music, measured via electrodermal activity. The scaled average of electrodermal amplitude for visual-auditory presentation was found to be significantly higher than the sum of the reactions when the music was presented in visual only (VO) and auditory only (AO) conditions, suggesting the presence of an emergent property created by bimodal interaction. Functional data analysis revealed that electrodermal activity generally followed the same contour across modalities of presentation, except during rests (silent parts of the performance), when the visual information took on particular salience. Finally, electrodermal activity and subjective tension judgments were found to be more highly correlated in the audio-visual (AV) condition than in the unimodal conditions. The present study provides converging evidence for the importance of seeing musical performances, and preliminary evidence for the utility of electrodermal activity as an objective measure in studies of continuous music-elicited emotions.
2.
We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles: restrained, standard, and exaggerated intention. Participants used a 5-point Likert scale to rate each performance on 19 different emotional qualities. The data analysis revealed that variations in expressive intention had their greatest impact when the performances could be seen; the ratings from participants who could only hear the performances were the same across the three expressive styles. Evidence was also found for an interaction effect leading to an emergent property, intensity of positive emotion, when participants both heard and saw the musical performances. An exploratory factor analysis revealed orthogonal dimensions for positive and negative emotions, which may account for the subjective experience that many listeners report of having multi-valent or complex reactions to music, such as “bittersweet.”
3.
Jonna K. Vuoskoski Marc R. Thompson Eric F. Clarke Charles Spence 《Attention, perception & psychophysics》2014,76(2):591-604
In musical performance, bodily gestures play an important role in communicating expressive intentions to audiences. Although previous studies have demonstrated that visual information can have an effect on the perceived expressivity of musical performances, the investigation of audiovisual interactions has been held back by the technical difficulties associated with the generation of controlled, mismatching stimuli. With the present study, we aimed to address this issue by utilizing a novel method in order to generate controlled, balanced stimuli that comprised both matching and mismatching bimodal combinations of different expressive intentions. The aim of Experiment 1 was to investigate the relative contributions of auditory and visual kinematic cues in the perceived expressivity of piano performances, and in Experiment 2 we explored possible crossmodal interactions in the perception of auditory and visual expressivity. The results revealed that although both auditory and visual kinematic cues contribute significantly to the perception of overall expressivity, the effect of visual kinematic cues appears to be somewhat stronger. These results also provide preliminary evidence of crossmodal interactions in the perception of auditory and visual expressivity. In certain performance conditions, visual cues had an effect on the ratings of auditory expressivity, and auditory cues had a small effect on the ratings of visual expressivity.
4.
《Journal of Cognitive Psychology》2013,25(1):132-139
The study investigates the cross-modal simultaneous processing of emotional tone of voice and emotional facial expression using event-related potentials (ERPs), across a wide range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated measures ANOVAs revealed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition: it shows a larger peak in response to congruous stimuli than to incongruous ones. It is suggested that P2 may be a cognitive marker of multisensory processing, independent of the emotional content.
5.
In this paper, we argue that music cognition involves the use of acoustic and auditory codes to evoke a variety of conscious experiences. The variety of domains that are encompassed by music is so diverse that it is unclear whether a single domain of structure or experience is defining. Music is best understood as a form of communication in which formal codes (acoustic patterns and their auditory representations) are employed to elicit a variety of conscious experiences. After proposing our theoretical perspective we offer three prominent examples of conscious experiences elicited by the code of music: the recognition of structure itself, affect, and the experience of motion.
6.
In previous work done in our laboratory, we investigated the perceived pitch class of isolated musical triads. We found that as the amount of musical training increased, listeners' perceptions progressed from very confused percepts of pitch class, to analytic percepts corresponding to the pitch class of the highest note in the triad, and finally to synthetic percepts corresponding to the root note for the more harmonic triad types. In the present work, we used a pitch-matching technique to determine the actual pitch, rather than merely the pitch class, perceived when listeners analytically "hear out" a particular note in a major triad. There was a strong tendency for the pitch of the analytically perceived note to be displaced by as much as 60 cents in the direction of the other notes in the triad. The magnitude of this effect decreased as musical training increased, and it was also affected by the relative salience of the individual triad notes. These results have implications for the mechanism of triad perception and for claims regarding the harmonic equivalence of triad inversions.
7.
《Quarterly journal of experimental psychology (2006)》2013,66(11):2141-2155
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.
8.
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory–motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual–motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice.
9.
Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody (total citations: 2; self-citations: 0; citations by others: 2)
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.
10.
Eve-Marie Quintin Anjali Bhatara Hélène Poissant Eric Fombonne 《Child neuropsychology》2013,19(3):250-275
Enhanced pitch perception and memory have been cited as evidence of a local processing bias in autism spectrum disorders (ASD). This bias is argued to account for enhanced perceptual functioning (Mottron & Burack, 2001; Mottron, Dawson, Soulières, Hubert, & Burack, 2006) and central coherence theories of ASD (Frith, 1989; Happé & Frith, 2006). A local processing bias confers a different cognitive style on individuals with ASD (Happé, 1999), which accounts in part for their good visuospatial and visuoconstructive skills. Here, we present analogues in the auditory domain, audiotemporal or audioconstructive processing, which we assess using a novel experimental task: a musical puzzle. This task evaluates the ability of individuals with ASD to process temporal sequences of musical events as well as various elements of musical structure, and thus indexes their ability to employ a global processing style. Musical structures created and replicated by children and adolescents with ASD (10–19 years old) and typically developing children and adolescents (7–17 years old) were found to be similar in global coherence. Presenting a musical template for reference increased accuracy equally for both groups, with performance associated with performance IQ and short-term auditory memory. The overall pattern of performance was similar for both groups: some puzzles were easier than others, and this held for both groups. Task performance was further found to be correlated with the ability to perceive musical emotions, more so for typically developing participants. Findings are discussed in light of the empathizing-systemizing theory of ASD (Baron-Cohen, 2009) and the importance of describing the strengths of individuals with ASD (Happé, 1999; Heaton, 2009).
11.
Recent findings suggest the involvement of the cerebellum in perceptual and cognitive tasks. Our study investigated whether cerebellar patients show musical priming based on implicit knowledge of tonal-harmonic music. Participants performed speeded phoneme identification on sung target chords, which were either related or less-related to prime contexts in terms of the tonal-harmonic system. As groups, both cerebellar patients and age-matched controls showed facilitated processing for related targets, as previously observed for healthy young adults. The outcome suggests that an intact cerebellum is not mandatory for accessing implicit knowledge stored in long-term memory and for its influence on perception. One patient showed facilitated processing for less-related targets (suggesting sensory priming). The findings suggest directions for future research on auditory perception in cerebellar patients to further our understanding of cerebellar functions.
12.
The present paper reviews a set of studies designed to investigate different aspects of the capacity for processing Western music. This includes perceiving the relationships between a theme and its variations, perceiving musical tensions and relaxations, generating musical expectancies, integrating local structures in large-scale structures, learning new compositional systems and responding to music in an emotional (affective) way. The main focus of these studies was to evaluate the influence of intensive musical training on these capacities. The overall set of data highlights that some musical capacities are acquired through exposure to music without the help of explicit training. These capacities reach such a degree of sophistication that they enable untrained listeners to respond to music as "musically experienced listeners" do.
13.
《Journal of Cognitive Psychology》2013,25(4):409-419
The results of one empirical study are presented to investigate whether voice recognition might profitably be integrated into a single interactive activation and competition (IAC) network for person perception. An identity-priming paradigm was used to determine whether face perception and voice perception combined to influence one another. The results revealed within-modality priming of faces by prior presentations of faces, and of voices by prior presentations of voices. Critically, cross-modality priming was also revealed, confirming that the two modalities can be represented within a single system and can influence one another. These results are supported by the results of a simulation, and are discussed in terms of the theoretical development of IAC, and the benefits and future questions that arise from consideration of an integrated multimodal model of person perception.
14.
Both emotional reactivity and categorization have long been studied within the framework of hemispheric asymmetry. However, little attempt has been made to integrate both research areas using any form of neuropsychological research, despite behavioral data suggesting a consistent relationship between affective and categorization processes. The primary goal of the current study was to examine the possibility of a laterally mediated interaction between emotional reactivity and the cognitive process of categorization. Using a split visual-field categorization task combined with affect-inducing procedures, we hypothesized that the relationship between state affect and categorization would be dependent on the nature of state affect and on the hemisphere targeted. Results offered support for this hypothesis, showing that state-affect-related changes in categorization appeared only in the hemisphere commonly associated with both a specific affective state and the categorization strategy employed. Findings are discussed in terms of possible evidence for a hemispheric arousal effect underlying the relationship between affect and categorization.
15.
Following the framework that controlled performance depends upon cognitive and emotional processes that are inherently interlinked, the effects of trait and state negative affect (NA) on inhibitory control (IC) were studied in two experiments using an emotional day-night task (EDNT), an inhibition-based decision-making task embedded with emotional content. It was hypothesized that the effects of processing negatively loaded stimuli would depend on trait levels of negative and positive affect, particularly in conditions that entail IC. In Experiment 1, EDNT performance was compared with performance on an emotionally loaded control task that required performing a dominant response rather than inhibiting it. In Experiment 2, EDNT performance was compared with an emotionally loaded control task that required performing an alternative rule which did not involve inhibiting the dominant response. Results of both experiments showed that participants high on trait NA reactivity showed improved performance while processing ‘sad’ content, but only in the inhibitory task and not in either of the control tasks. Results point to an interaction of trait and state factors in IC, and highlight the notion that heightened NA may subserve inhibition in sad contexts, which require counter-intuitive operations.
16.
Many studies have suggested that structural and functional cerebral neuroplastic processes result from long-term musical training, which in turn may produce cognitive differences between musicians and non-musicians. We aimed to investigate whether intensive, long-term musical practice is associated with improvements in three different forms of visual attention: selective, divided, and sustained attention. Musicians from symphony orchestras (n = 38) and non-musicians (n = 38), who were comparable in age, gender, and education, completed three neuropsychological tests measuring reaction time and accuracy. Musicians showed better performance than non-musicians on four variables of the three visual attention tests, and this advantage could not solely be explained by better sensorimotor integration. Moreover, in the group of musicians, significant correlations were observed between the age at which musical studies began and reaction time in all visual attention tests. The results suggest that musicians present augmented ability in different forms of visual attention, illustrating the possible cognitive benefits of long-term musical training.
17.
Panksepp J 《Integrative psychological & behavioral science》2008,42(1):47-55
This commentary on Dan Shanahan's A New View of Language, Emotion and the Brain basically agrees with an emotion-based view of the evolutionary and developmental basis of language acquisition. It provides a supplementary neuroscience perspective that is more deeply affective and epigenetic, in the sense that all claims about neocortically based language modules need to be tempered by the existing genetic evidence, as well as the robust neuroscience evidence that the cortex resembles random-access-memory space, a tabula rasa upon which epigenetic and learning processes create functional networks. The transition from non-linguistic creatures to linguistic ones may have required the conjunction of social-affective brain mechanisms, morphological changes in the articulatory apparatus, an abundance of cross-modal cortical processing ability, and the initial urge to communicate in coordinated prosodic, gestural, and vocal ways, which may have been more poetic and musical than current propositional language. There may be no language instinct that is independent of these evolutionary pre-adaptations.
18.
《Quarterly journal of experimental psychology (2006)》2013,66(11):2125-2152
A central aim of cognitive psychology is to explain how we integrate stimulus dimensions into a unified percept, but how the dimensions of pitch and time combine in the perception of music remains a largely unresolved issue. The goal of this study was to test the effect of varying the degree of conformity to dimensional structure in pitch and time (specifically, tonality and metre) on goodness ratings and classifications of melodies. The pitches and durations of melodies were either presented in their original order, as a reordered sequence, or replaced with random elements. Musically trained and untrained participants (24 each) rated melodic goodness, attending selectively to the dimensions of pitch, time, or both. Also, 24 trained participants classified whether or not the melodies were tonal, metric, or both. Pitch and temporal manipulations always influenced responses, but participants successfully emphasized either dimension in accordance with instructions. Effects of pitch and time were mostly independent for selective attention conditions, but more interactive when evaluating both dimensions. When interactions occurred, the effect of either dimension increased as the other dimension conformed more to its original structure. Relative main effect sizes (|pitch η² − time η²|) predicted the strength of pitch–time interactions (pitch × time η²); interactions were stronger when main effect sizes were more evenly matched. These results have implications for dimensional integration in several domains. Relative main effect size could serve as an indicator of dimensional salience, such that interactions are more likely when dimensions are equally salient.
19.
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual cues. Emotion perception research has focused on static facial cues; however, dynamic audio-visual (AV) cues mimic real-world social cues more accurately than static and/or unimodal stimuli. Novel dynamic AV stimuli were presented using a block design in two fMRI studies, comparing bimodal stimuli to unimodal conditions, and emotional to neutral stimuli. Results suggest that the bilateral superior temporal region plays distinct roles in the perception of emotion and in the integration of auditory and visual cues. Given the greater ecological validity of the stimuli developed for this study, this paradigm may be helpful in elucidating the deficits in emotion perception experienced by clinical populations.
20.
This study examines the effect of musical experience and family handedness background on the categorization of musical intervals (two-note chords). Right-handed subjects, divided into four groups on the basis of musical training and the presence (or absence) of left-handed family members, categorized musical intervals presented monaurally to the left or right ear. The results, based on consistency and discreteness of categorization, showed that: (1) musicians' performance is superior to nonmusicians'; (2) musicians and nonmusicians differ significantly in their ear of preference; (3) family handedness background significantly affects ear of preference among musicians but not among nonmusicians.