Similar Articles
20 similar articles found (search time: 46 ms)
1.
Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five-parameter differential-weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high-arousal combinations and no interaction effects. This pattern was explained by a three-parameter constant-weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
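The differential-weight averaging model named in this abstract follows the general information-integration form: a weighted average of an initial state and each modality's scale value, where the weights themselves depend on the stimulus. The sketch below shows that general form only; the weight and valence values are illustrative assumptions, not the paper's fitted parameters.

```python
def averaging_model(values, weights, s0=5.0, w0=1.0):
    """Information-integration averaging model: the combined rating is the
    weighted average of the initial state (s0, w0) and each modality's
    scale value."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, values))
    den = w0 + sum(weights)
    return num / den

# Hypothetical differential weights: the visual modality is weighted more
# heavily overall, and negative valence receives extra weight (both
# directions reported in the abstract; the numbers here are made up).
video_valence, music_valence = 2.0, 8.0      # 1 = very negative, 9 = very positive
w_video = 3.0 if video_valence < 5 else 2.0
w_music = 1.5 if music_valence < 5 else 1.0

rating = averaging_model([video_valence, music_valence], [w_video, w_music])
```

With these illustrative numbers the negative video pulls the combined rating well below the midpoint, mirroring the reported dominance of visual and negative information.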

2.
An experiment demonstrated the capacity of sex role stereotyped portrayals of women and men found in popular rock music videos to alter impressions formed of a man and a woman who subsequently were seen interacting. The results indicated that stereotypic rock music videos increased the accessibility of (i.e., primed) sex role stereotypic schemas, dramatically changing impressions of the interactants. Of particular interest were judgments of skill, indicating that impressions formed by subjects who had watched sex role stereotypic videos, but not by those who had watched neutral videos, seemed guided by the principles of chivalry: droit de seigneur for her and noblesse oblige for him. A process whereby rock music videos perpetuate sex role stereotypes was described.
The authors are indebted to M. Kaseta and M. McCliment who served as the actors, and to C. Alexander, G. Bause, M. Case, K. Glassco, J. Grauer, K. Hannula, S. King, F. Kirsch, J. Minor, G. Rinehart, A. Saghy, D. Schnering, J. Sweeney, and R. Unakar who served as experimenters.

3.
An experiment was conducted to test the effects of rock music videos on impressions of a man subsequently seen performing an antisocial act. After neutral music videos, impressions of the target were more negative when he made an obscene gesture toward a female experimenter than when he did not. After videos containing antisocial content, the antisocial act had a negligible, if not opposite, effect on impressions. Results and implications are discussed in terms of social-cognitive theories of information processing.

4.
Team effectiveness and group performance are often defined by standards set by domain experts. Professional musicians consistently report that sound output is the most important standard for evaluating the quality of group performance in the domain of music. However, across six studies, visual information dominated rapid judgments of group performance. Participants (1062 experts and novices) were able to select the actual winners of live ensemble competitions and distinguish top-ranked orchestras from non-ranked orchestras based on 6-s silent video recordings yet were unable to do so from sound recordings or recordings with both video and sound. These findings suggest that judgments of group performance in the domain of music are driven at least in part by visual cues about group dynamics and leadership.

5.
Two experiments examined the effects of priming on appraisal and recall of a subsequent social interaction. Popular rock music videos depicting sex-role stereotypic themes were used to prime sex-role stereotypic schemas. Two commonly available sex-role stereotypic event schemas ("boy-meets-girl" and "boy-dumps-girl") were identified in rock music videos. Subjects were exposed to one of these two types of videos (or neutral videos) before watching an interaction between a man and a woman that had been constructed to be schematically consistent or inconsistent with either the boy-meets-girl or the boy-dumps-girl schema. In Experiment 1, consistent with predictions from contemporary schematic processing theories, greater and more accurate recall of the actors' behaviors was found when the interaction was schema-inconsistent with the priming videos than when it was schema-consistent. In addition, after either type of stereotypic priming video, both actors were liked more when their behavior was schema-consistent than when it was schema-inconsistent. Trait judgments in Experiment 2 showed that more positive traits were also ascribed to both actors when behaviors occurring during the interaction had been made schema-consistent rather than schema-inconsistent by the priming videos. The findings argue that, by serving as priming stimuli, rock music videos can produce strong, predictable, and nonconscious cognitive effects on viewers.

6.
Chapados C, Levitin DJ. Cognition, 2008, 108(3): 639-651
This experiment was conducted to investigate cross-modal interactions in the emotional experience of music listeners. Previous research showed that visual information present in a musical performance is rich in expressive content, and moderates the subjective emotional experience of a participant listening and/or observing musical stimuli [Vines, B. W., Krumhansl, C. L., Wanderley, M. M., & Levitin, D. J. (2006). Cross-modal interactions in the perception of musical performance. Cognition, 101, 80-113]. The goal of this follow-up experiment was to replicate this cross-modal interaction by investigating the objective, physiological aspect of emotional response to music, measuring electrodermal activity. The scaled average of electrodermal amplitude for visual-auditory presentation was found to be significantly higher than the sum of the reactions when the music was presented in visual only (VO) and auditory only (AO) conditions, suggesting the presence of an emergent property created by bimodal interaction. Functional data analysis revealed that electrodermal activity generally followed the same contour across modalities of presentation, except during rests (silent parts of the performance), when the visual information took on particular salience. Finally, electrodermal activity and subjective tension judgments were found to be more highly correlated in the audio-visual (AV) condition than in the unimodal conditions. The present study provides converging evidence for the importance of seeing musical performances, and preliminary evidence for the utility of electrodermal activity as an objective measure in studies of continuous music-elicited emotions.

7.
Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is dependent not only on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ

8.
The present study investigated the effect of sexually objectifying music video exposure on young women's implicit bodily self-perception and the moderating role of self-esteem. Fifty-six college women of normal weight were either exposed to three sexually objectifying music videos or three neutral music videos. Perceived and ideal body size were measured both before and after video exposure, using horizontally stretched and compressed photographs of the participant's own body in swimming garment. As expected, only women low (but not high) in self-esteem were negatively affected by the sexually objectifying content of the music videos: they perceived themselves as bigger and showed an increased discrepancy between their perceived and ideal body size after video exposure. The neutral music videos did not influence women's bodily self-perceptions. These findings suggest that body image is a flexible construct, and that high self-esteem can protect women against the adverse effects of sexually objectifying media.

9.
10.
To evaluate the mediating impact of gender and gender role self-perceptions on affective responses to rock music videos, female and male undergraduates recruited from the predominantly Caucasian population of a southeastern university completed the Bem Sex Role Inventory and then watched and evaluated nine short music video segments. Consistent with previous research, the results highlight the critical importance of gender as a determinant of affective reactions to popular music. Males, in general, showed the strongest positive reactions (i.e., greatest enjoyment, least disturbance) toward hard-rock music videos while females reported the strongest positive reactions toward soft-rock music videos. Furthermore, both genders reported significant misestimations of other-gender peers' reactions. On the other hand, the influence of gender role self-perceptions proved minimal. Some implications of these findings are discussed.
An earlier draft of this paper was presented at the November 1991 meeting of the Speech Communication Association in Atlanta, Georgia.

11.
Music is a stimulus capable of triggering an array of basic and complex emotions. We investigated whether and how individuals employ music to induce specific emotional states in everyday situations for the purpose of emotion regulation. Furthermore, we wanted to examine whether specific emotion-regulation styles influence music selection in specific situations. Participants indicated how likely it would be that they would want to listen to various pieces of music (which are known to elicit specific emotions) in various emotional situations. Data analyses by means of non-metric multidimensional scaling revealed a clear preference for pieces of music that were emotionally congruent with an emotional situation. In addition, we found that specific emotion-regulation styles might influence the selection of pieces of music characterised by specific emotions. Our findings demonstrate emotion-congruent music selection and highlight the important role of specific emotion-regulation styles in the selection of music in everyday situations.

13.
Emotional responses to music: the need to consider underlying mechanisms
Juslin PN, Västfjäll D. The Behavioral and Brain Sciences, 2008, 31(5): 559-75; discussion 575-621
Research indicates that people value music primarily because of the emotions it evokes. Yet, the notion of musical emotions remains controversial, and researchers have so far been unable to offer a satisfactory account of such emotions. We argue that the study of musical emotions has suffered from a neglect of underlying mechanisms. Specifically, researchers have studied musical emotions without regard to how they were evoked, or have assumed that the emotions must be based on the "default" mechanism for emotion induction, a cognitive appraisal. Here, we present a novel theoretical framework featuring six additional mechanisms through which music listening may induce emotions: (1) brain stem reflexes, (2) evaluative conditioning, (3) emotional contagion, (4) visual imagery, (5) episodic memory, and (6) musical expectancy. We propose that these mechanisms differ regarding such characteristics as their information focus, ontogenetic development, key brain regions, cultural impact, induction speed, degree of volitional influence, modularity, and dependence on musical structure. By synthesizing theory and findings from different domains, we are able to provide the first set of hypotheses that can help researchers to distinguish among the mechanisms. We show that failure to control for the underlying mechanism may lead to inconsistent or non-interpretable findings. Thus, we argue that the new framework may guide future research and help to resolve previous disagreements in the field. We conclude that music evokes emotions through mechanisms that are not unique to music, and that the study of musical emotions could benefit the emotion field as a whole by providing novel paradigms for emotion induction.

14.
We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measured in terms of the just noticeable difference, participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating mismatched stimuli than when evaluating the matched-speech stimuli. These results therefore provide the first empirical support for the "unity assumption" in the domain of the multisensory temporal integration of audiovisual speech stimuli.
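In temporal order judgment studies like this one, the just noticeable difference (JND) is conventionally estimated by fitting a cumulative Gaussian to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs). A minimal sketch under that convention follows; the data, grid ranges, and function names are illustrative assumptions, not taken from the paper.

```python
import math

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function: probability of a
    'visual first' response at SOA x (ms), centered on the PSS."""
    return 0.5 * (1.0 + math.erf((x - pss) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, p_visual_first):
    """Brute-force grid search for the PSS (point of subjective
    simultaneity) and sigma minimising squared error; the JND is
    conventionally 0.6745 * sigma (half the 25%-75% SOA range)."""
    best = None
    for pss in range(-100, 101, 2):           # PSS candidates in ms
        for sigma10 in range(50, 2001, 10):   # sigma from 5 to 200 ms
            sigma = sigma10 / 10.0
            sse = sum((cum_gauss(x, pss, sigma) - p) ** 2
                      for x, p in zip(soas, p_visual_first))
            if best is None or sse < best[0]:
                best = (sse, pss, sigma)
    _, pss, sigma = best
    return pss, sigma, 0.6745 * sigma

# Made-up TOJ data: negative SOA = auditory stream led
soas = [-240, -120, -60, 0, 60, 120, 240]
p_vf = [0.05, 0.20, 0.35, 0.55, 0.75, 0.90, 0.98]
pss, sigma, jnd = fit_toj(soas, p_vf)
```

A larger fitted JND for matched (same-gender) stimuli than for mismatched ones is the pattern the abstract reports: unified audiovisual events are harder to pull apart in time.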

15.
To date, numerosity judgments have been studied only under conditions of unimodal stimulus presentation. It is therefore unclear whether the same limitations on correctly reporting the number of unimodal visual or tactile stimuli presented in a display might be expected under conditions in which participants have to count stimuli presented simultaneously in two or more different sensory modalities. In Experiment 1, we investigated numerosity judgments using both unimodal and bimodal displays consisting of one to six vibrotactile stimuli (presented over the body surface) and one to six visual stimuli (seen on the body via mirror reflection). Participants had to count the number of stimuli regardless of their modality of presentation. Bimodal numerosity judgments were significantly less accurate than predicted on the basis of an independent modality-specific resources account, thus showing that numerosity judgments might rely on a unitary amodal system instead. The results of a second experiment demonstrated that divided attention costs could not account for the poor performance in the bimodal conditions of Experiment 1. We discuss these results in relation to current theories of cross-modal integration and to the cognitive resources and/or common higher order spatial representations possibly accessed by both visual and tactile stimuli.

16.
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10 s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5 s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high “notes” indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT > VV) induced “inhibition” of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT > TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information, and sensory brain activation mirrored expectation rather than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms.

17.
Most people are able to identify basic emotions expressed in music and experience affective reactions to music. But does music generally induce emotion? Does it elicit subjective feelings, physiological arousal, and motor reactions reliably in different individuals? In this interdisciplinary study, measurement of skin conductance, facial muscle activity, and self-monitoring were synchronized with musical stimuli. A group of 38 participants listened to classical, rock, and pop music and reported their feelings in a two-dimensional emotion space during listening. The first entrance of a solo voice or choir and the beginning of new sections were found to elicit interindividual changes in subjective feelings and physiological arousal. Quincy Jones' "Bossa Nova" motivated movement and laughing in more than half of the participants. Bodily reactions such as "goose bumps" and "shivers" could be stimulated by the "Tuba Mirum" from Mozart's Requiem in 7 of 38 participants. In addition, the authors repeated the experiment seven times with one participant to examine intraindividual stability of effects. This exploratory combination of approaches throws a new light on the astonishing complexity of affective music listening.

18.
Previous studies have shown that music is a powerful means to convey affective states, but it remains unclear whether and how social context shapes the intensity and quality of emotions perceived in music. Using a within-subject design, we studied this question in two experimental settings, i.e. when subjects were alone versus in the company of others without direct social interaction or feedback. Non-vocal musical excerpts of the emotional qualities happiness or sadness were rated on arousal and valence dimensions. We found evidence for an amplification of perceived emotion in the solitary listening condition, i.e. happy music was rated as happier and more arousing when nobody else was around and, in an analogous manner, sad music was perceived as sadder. This difference might be explained by a shift of attention in the presence of others. The observed interaction of perceived emotion and social context did not differ for stimuli of different cultural origin.

19.
Gallace A, Tan HZ, Spence C. Perception, 2006, 35(2): 247-266
A large body of research now supports the claim that two different and dissociable processes are involved in making numerosity judgments regarding visual stimuli: subitising (fast and nearly errorless) for up to 4 stimuli, and counting (slow and error-prone) when more than 4 stimuli are presented. We studied tactile numerosity judgments for combinations of 1-7 vibrotactile stimuli presented simultaneously over the body surface. In experiment 1, the stimuli were presented once, while in experiment 2 conditions of single presentation and repeated presentation of the stimulus were compared. Neither experiment provided any evidence for a discontinuity in the slope of either the RT or error data, suggesting that subitisation does not occur for tactile stimuli. By systematically varying the intensity of the vibrotactile stimuli in experiment 3, we were able to demonstrate that participants were not simply using the 'global intensity' of the whole tactile display to make their tactile numerosity judgments, but were, instead, using information concerning the number of tactors activated. The results of the three experiments reported here are discussed in relation to current theories of counting and subitising, and potential implications for the design of tactile user interfaces are highlighted.
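A slope discontinuity of the kind this abstract tests for is commonly sought with a two-segment least-squares fit and an exhaustive breakpoint search: if a split fits much better than a single line and the second slope is much steeper, a subitising-to-counting transition is plausible. The sketch below shows that generic analysis; the RT data are made up for illustration and are not the paper's results.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns intercept, slope, and SSE."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def best_breakpoint(ns, rts):
    """Try every split with at least 2 points per segment; return the
    breakpoint (first set size of the second segment) minimising total
    squared error, plus the two segment slopes."""
    best = None
    for k in range(2, len(ns) - 1):
        _, b1, sse1 = fit_line(ns[:k], rts[:k])
        _, b2, sse2 = fit_line(ns[k:], rts[k:])
        if best is None or sse1 + sse2 < best[0]:
            best = (sse1 + sse2, ns[k], b1, b2)
    _, bp, slope1, slope2 = best
    return bp, slope1, slope2

# Illustrative visual-style RT data (ms): nearly flat through 4 items
# (subitising range), then a steep counting slope.
ns  = [1, 2, 3, 4, 5, 6, 7]
rts = [520, 540, 555, 580, 900, 1250, 1600]
bp, s1, s2 = best_breakpoint(ns, rts)
```

For tactile data like that reported here, no split would fit markedly better than a single line, which is what "no discontinuity in the slope" means operationally.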

20.
Emotional events tend to be retained more strongly than other everyday occurrences, a phenomenon partially regulated by the neuromodulatory effects of arousal. Two experiments demonstrated the use of relaxing music as a means of reducing arousal levels, thereby attenuating heightened long-term recall of an emotional story. In Experiment 1, participants (N=84) viewed a slideshow, during which they listened to either an emotional or neutral narration, and were exposed to relaxing or no music. Retention was tested 1 week later via a forced-choice recognition test. Retention for both the emotional content (Phase 2 of the story) and material presented immediately after the emotional content (Phase 3) was enhanced when compared with retention for the neutral story. Relaxing music prevented the enhancement for material presented after the emotional content (Phase 3). Experiment 2 (N=159) provided further support for the neuromodulatory effect of music by post-event presentation of both relaxing music and non-relaxing auditory stimuli (arousing music/background sound). Free recall of the story was assessed immediately afterwards and 1 week later. Relaxing music significantly reduced recall of the emotional story (Phase 2). The findings provide further insight into the capacity of relaxing music to attenuate the strength of emotional memory, offering support for the therapeutic use of music for such purposes.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号