Similar articles
18 similar articles found (search time: 187 ms)
1.
To examine the audio-visual emotional conflict effect in music, the dominant processing channel under conflict, and the influence of musical experience, this study used music performance videos as materials and compared the speed, accuracy, and intensity of emotion ratings by musician and non-musician participants under congruent and incongruent audio-visual conditions. Results showed: (1) emotion ratings were more accurate and more intense in the congruent condition; (2) in the incongruent condition, participants judged emotion type mainly from auditory emotional cues; (3) non-musicians relied on visual emotional cues more than musicians did. The findings indicate that incongruent emotional information across channels hinders the processing of musical emotion, that the auditory channel is the dominant processing channel under musical emotional conflict, and that musical experience reduced the interference of the conflict effect for the musician group.

2.
Music performance videos combine the emotion-inducing strengths of music and visual imagery, yet no affective video database built around music performance currently exists. This study set out to establish an affective music performance video system for use in related research. Drawing on materials from earlier studies, related findings, and suggestions from music professionals, we collected music expressing anger, fear, sadness, and happiness, spanning opera, symphonic music, and Chinese folk music, and edited it into clips of about 30 seconds. Non-music-major participants rated the emotion type and intensity of the materials, and 64 music video clips that effectively induced the target emotions were selected. Future work could improve the objectivity and aesthetic quality of the measures and enlarge the samples of both participants and musical materials.

3.
Inhibition of return (IOR) and emotional stimuli both guide attentional bias and improve search efficiency, but whether the two interact has so far remained unclear. Using a cue-target paradigm with emotional stimuli presented audio-visually, this study examined the interaction between emotional processing and IOR. In Experiment 1, emotional stimuli were presented either as unimodal visual faces or as congruent audio-visual pairs. Experiment 2 presented incongruent audio-visual emotional stimuli to test whether the effect of congruent audio-visual stimuli on IOR was driven by the emotionally congruent auditory stimulus, i.e., whether the auditory emotional stimulus was actually processed. Results showed that congruent audio-visual emotional stimuli attenuated IOR, that incongruent stimuli did not interact with IOR, and that IOR did not differ significantly between unimodal and bimodal presentation. The findings indicate that emotional stimuli affect IOR at the same processing stage only when presented as congruent audio-visual pairs, further supporting the perceptual inhibition account of IOR.

4.
This study recruited two groups of adult participants (graduate students), with and without musical training, each half male and half female, and used electroencephalography (EEG) to examine the electrophysiological signatures of emotion induced by musical forms differing in tempo and mode. Results showed: (1) across musical forms, minor-mode and slow-tempo music induced higher mean power in the δ, θ, β, and γ bands across brain regions than major-mode and medium- or fast-tempo music, with only α power being lower; (2) all musical forms induced higher mean power across brain regions in music-major participants than in non-music majors, suggesting that musical training recruits specialized neural networks; (3) all musical forms induced higher mean power across brain regions in male than in female participants, suggesting neuroanatomical differences and distinct processing strategies between the sexes in domain-specific information processing.

5.
Participants judged the relationship between the emotional valence of simultaneously presented visual and auditory information, in order to characterize how audio-visual emotional information is integrated. In Experiment 1, lexical valence and prosodic valence did not conflict; in Experiment 2 they did. Both experiments found that participants judged the relationship between the audio-visual emotional cues more accurately when the facial expression was positive. Experiment 2 further found that when the facial expression was negative, participants judged the audio-visual relationship faster from semantic cues than from prosodic cues. These results suggest that when audio-visual information is presented simultaneously, visual information may be processed first and then influence subsequent processing of the audio-visual relationship.

6.
The Relationship between Emotion, Emotion Regulation Strategies, and Memory for Emotional Materials (cited 2 times: 0 self-citations, 2 by others)
Using an experimental method with 288 primary and secondary school students, this study examined how emotional state and emotion regulation strategy relate to memory for different types of emotional materials. Results showed: (1) recognition reaction times were shorter, and recognition accuracy higher, under happy than under sad emotion; (2) under expressive suppression, recognition times for words and pictures did not differ; under cognitive reappraisal, words were recognized faster than pictures; the cognitive reappraisal group recognized both words and pictures faster, and more accurately, than the expressive suppression group; (3) happy materials were recognized faster than sad materials; happy materials were recognized more accurately in a happy state, and sad materials more accurately in a sad state.

7.
This study examined the effect of the preparatory effect on audio-visual emotional integration along temporal and emotional-cognitive dimensions. In a temporal discrimination task (Experiment 1), visually led trials were significantly slower than auditorily led trials, and the integration effect was negative. In an emotion discrimination task (Experiment 2), the integration effect was positive; for negative emotions, auditory leading yielded significantly greater integration than visual leading, whereas for positive emotions visual leading yielded significantly greater integration. The findings suggest that audio-visual emotional integration rests on emotional-cognitive processing, while temporal discrimination suppresses integration; moreover, both the cross-modal and the emotional preparatory effects depend on the leading modality.

8.
Music emotion perception refers to listeners' recognition and understanding of the emotion a piece of music expresses. This study tested Chinese participants in four age groups (3-, 4-, and 5-year-olds and college students) on their perception of four emotion categories, anger, sadness, lyricism, and happiness, in both Chinese and Western music, and traced the developmental trajectory. Results showed: (1) children's music emotion perception improves with age, with age 4 a key period for acquiring basic perceptual ability and adult-level performance reached by age 5; (2) children perceived happiness better than the other emotion categories; (3) there was no significant cultural difference between children's perception of emotion in Chinese and in Western music.

9.
Using simple figures as visual stimuli and short pure tones as auditory stimuli, participants were instructed to attend to different modalities (visual, auditory, or both) so as to create different attentional states (selective vs. divided attention), and the effect of attention on multisensory integration was examined. Only under divided attention did participants respond to bimodal targets fastest and most accurately. Race-model (competition-model) analysis showed that this processing advantage for bimodal targets arose from integration of the audio-visual stimuli. The results indicate that multisensory integration occurs only under divided attention.
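The competition-model (race model) analysis mentioned here is typically implemented as a test of Miller's race model inequality: the cumulative reaction-time distribution for bimodal targets is compared against the sum of the two unimodal distributions, and only an excess beyond that bound counts as true integration rather than statistical facilitation. A minimal sketch in Python on simulated (hypothetical) RT data, not the study's own analysis:

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """F_AV(t) minus the race-model bound min(F_A(t) + F_V(t), 1).
    Positive values indicate facilitation beyond what two independent
    unimodal 'racers' could produce, i.e. evidence for integration."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Hypothetical data: bimodal responses faster than either unimodal condition
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 50, 200)   # auditory-only RTs (ms)
rt_v = rng.normal(440, 50, 200)   # visual-only RTs (ms)
rt_av = rng.normal(350, 40, 200)  # audio-visual RTs (ms)
t_grid = np.linspace(200, 600, 81)
viol = race_model_violation(rt_av, rt_a, rt_v, t_grid)
print(f"max violation: {viol.max():.3f}")
```

A positive maximum violation in the early part of the RT distribution is the conventional marker of multisensory integration; when no violation occurs, the bimodal speed-up is attributable to probability summation alone.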

10.
This study had 42 college students (21 Chinese, 21 Polish) judge the emotion type and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral), in order to compare vocal emotion perception across Chinese and Polish cultural backgrounds. Results showed: (1) Chinese participants exceeded Polish participants in both the accuracy of emotion-type judgments and the rated intensity, indicating an in-group advantage in vocal emotion perception; (2) all participants identified emotion type more accurately, and rated intensity higher, for female than for male voices; (3) in emotion-type judgments, fear was identified more accurately than happiness, sadness, and neutrality, with neutrality identified least accurately; (4) in intensity ratings, fear was rated more intense than sadness, and happiness was rated least intense.

11.
With Grade 4, Grade 8, and Grade 11 students with and without musical experience as participants, two experiments used rating-scale judgments of the tension of melodic excerpts, without and with concept prompts respectively, to examine how concept prompts and musical experience affect the perception of musical tension. Results showed: (1) musical tension perception develops as the individual matures, changing substantially between Grade 4 and Grade 8 and stabilizing by Grade 11; musical training facilitated tension perception only for Grade 4 participants; (2) concept prompts helped Grade 4 participants understand the concept of musical tension but had no significant facilitating effect for Grade 11 participants, suggesting that mastery of the concept of musical tension …

12.
The experiment investigated how the addition of emotion information from the voice affects the identification of facial emotion. We presented whole face, upper face, and lower face displays and examined correct recognition rates and patterns of response confusions for auditory-visual (AV), auditory-only (AO), and visual-only (VO) expressive speech. Emotion recognition accuracy was superior for AV compared to unimodal presentation. The pattern of response confusions differed across the unimodal conditions and across display types. For AV presentation, a response confusion occurred only when that confusion was present in each modality separately; response confusions were thus reduced compared to unimodal presentations. Emotion space (calculated from the confusion data) differed across display types for the VO presentations but was more similar for the AV ones, indicating that the added auditory information acted to harmonize the various VO response patterns. These results are discussed with respect to how bimodal emotion recognition combines auditory and visual information.

13.
The ability to recognize musical emotion is a basic precondition for using music in emotion regulation. Traditional Chinese folk music, built on the pentatonic scale and with its distinctive character, reflects emotions and values unique to Chinese people, plays a positive role in emotion regulation and music therapy, and is thus an effective musical stimulus for studying music emotion recognition. Using a cross-modal emotional priming paradigm, this study screened participants with the Interpersonal Reactivity Index into high- and low-empathy groups of 36 each for an EEG experiment examining how empathy affects emotion recognition in Chinese folk music. The EEG data showed that during implicit emotion recognition, Gong-mode and Yu-mode music used as primes elicited the mid-latency P2 and N400 components and the late positive component (LPC). The low-empathy group showed larger P2 and N400 amplitudes than the high-empathy group, whereas the high-empathy group showed a larger LPC amplitude. This is the first study to examine, at the electrophysiological level, the neural differences between individuals of different empathic ability when recognizing emotion in Chinese folk music. Differences in attentional engagement at different stages of emotion recognition may have shaped how the high- and low-empathy groups experienced the musical stimuli and, in turn, their emotion recognition.

14.
15.
With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers’ abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though there have been related studies published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper’s influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g. happy, sad, angry, etc.) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with recordings and rated how well each emotion matched the emotional quality using a 0–10 scale. The same instruments from the original study (i.e. violin, voice, and flute) were used, with the addition of piano. In an effort to increase the accessibility of the experiment and allow for a more ecologically-valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Results found overall high decoding accuracy (57%) when using emotion ratings aggregated for the sample of participants, similar to the method of analysis from the original study. However, when decoding accuracy was scored for each participant individually the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised Linear Mixed Effects Regression modelling revealed that musical training and emotional engagement with music positively influences emotion decoding accuracy.

16.
The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d = 0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
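The pooled sex difference reported above (d = 0.19) is a standardised mean difference (Cohen's d). As a reminder of the convention only, a minimal sketch with made-up illustrative numbers, not the meta-analysis data:

```python
import math

def cohens_d(mean_1, mean_2, sd_1, sd_2, n_1, n_2):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_var = ((n_1 - 1) * sd_1**2 + (n_2 - 1) * sd_2**2) / (n_1 + n_2 - 2)
    return (mean_1 - mean_2) / math.sqrt(pooled_var)

# Hypothetical accuracy on an emotion recognition task (female vs. male group)
d = cohens_d(mean_1=0.78, mean_2=0.74, sd_1=0.20, sd_2=0.21, n_1=120, n_2=110)
print(f"d = {d:.2f}")
```

By the usual benchmarks (0.2 small, 0.5 medium, 0.8 large), a d around 0.19 is a small effect, consistent with how the abstract characterizes the female advantage.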

17.
The claim that many musical works are representational is highly controversial. The formalist view that music is pure form and without any, or any significant, representational content is widely held. Two facts about music are, however, well-established by empirical science: Music is heard as resembling human expressive behaviour and music arouses ordinary emotions. This paper argues that it follows from these facts that music also represents human expressive behaviour and ordinary emotions.

18.
Chapados C, Levitin DJ. Cognition, 2008, 108(3): 639-651
This experiment was conducted to investigate cross-modal interactions in the emotional experience of music listeners. Previous research showed that visual information present in a musical performance is rich in expressive content, and moderates the subjective emotional experience of a participant listening and/or observing musical stimuli [Vines, B. W., Krumhansl, C. L., Wanderley, M. M., & Levitin, D. J. (2006). Cross-modal interactions in the perception of musical performance. Cognition, 101, 80-113.]. The goal of this follow-up experiment was to replicate this cross-modal interaction by investigating the objective, physiological aspect of emotional response to music, measuring electrodermal activity. The scaled average of electrodermal amplitude for visual-auditory presentation was found to be significantly higher than the sum of the reactions when the music was presented in visual only (VO) and auditory only (AO) conditions, suggesting the presence of an emergent property created by bimodal interaction. Functional data analysis revealed that electrodermal activity generally followed the same contour across modalities of presentation, except during rests (silent parts of the performance), when the visual information took on particular salience. Finally, electrodermal activity and subjective tension judgments were found to be more highly correlated in the audio-visual (AV) condition than in the unimodal conditions. The present study provides converging evidence for the importance of seeing musical performances, and preliminary evidence for the utility of electrodermal activity as an objective measure in studies of continuous music-elicited emotions.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号