Similar Articles
17 similar articles found.
1.
To explore the mechanisms of audio-visual processing of musical emotion, and how emotion type and musical training background affect those mechanisms, this study used videos of musical performances expressing happiness or sadness and compared musically trained and untrained participants' speed, accuracy, and intensity of emotion ratings under three conditions: auditory-only, visual-only, and audio-visual. Results showed: (1) the audio-visual condition differed significantly from the visual-only condition but not from the auditory-only condition; (2) untrained participants rated sadness more accurately than trained participants, but rated happiness less accurately. These findings suggest that the audio-visual integration advantage in musical emotion processing exists only relative to the visual-only channel; untrained participants are more sensitive to changes in visual emotional information, whereas trained participants rely more on musical experience. Adding coordinated visual emotional information to musical performances may therefore help listeners without musical training.

2.
Both inhibition of return (IOR) and emotional stimuli guide attentional bias and improve search efficiency, but whether the two interact has so far remained unclear. This study used a cue–target paradigm with emotional stimuli presented in the visual and auditory modalities to examine the interaction between emotional-stimulus processing and IOR. In Experiment 1, emotional stimuli were presented either as unimodal visual faces or as congruent audio-visual pairs. Experiment 2 presented incongruent audio-visual emotional stimuli to test whether the effect of congruent audio-visual stimuli on IOR was driven by the emotionally congruent auditory stimulus, that is, whether the auditory emotional information was actually processed. Results showed that congruent audio-visual emotional stimuli attenuated IOR, whereas incongruent stimuli did not interact with IOR, and IOR did not differ significantly between unimodal and bimodal conditions. These findings indicate that emotional stimuli influence IOR at the same processing stage only when presented as congruent audio-visual pairs, further supporting the perceptual-inhibition account of IOR.

3.
By asking participants to judge the relation between the emotional valence of simultaneously presented visual and auditory information, this study examined the characteristics of audio-visual emotional integration. In Experiment 1, lexical valence and prosodic valence were congruent; in Experiment 2 they conflicted. Both experiments found that participants judged the relation between the audio-visual emotional information more accurately when the facial expression was positive. Experiment 2 further found that when the facial expression was negative, participants judged the audio-visual relation faster on the basis of semantic cues than of prosodic cues. These results suggest that when visual and auditory information are presented simultaneously, the visual information may be processed first and then influence the subsequent processing of the audio-visual relation.

4.
Using a 2 × 3 within-subjects design with attention condition and target-stimulus type as factors, this study examined how attention directed to different sensory modalities affects audio-visual semantic integration. Results showed that participants responded fastest to semantically congruent audio-visual stimuli only when attending to both the visual and the auditory stimuli, producing a redundant-signals effect. When attention was selectively directed to one modality, semantically congruent audio-visual stimuli showed no processing advantage. Further analysis showed that under divided audio-visual attention, the processing advantage for semantically congruent audio-visual stimuli arose from integration of their visual and auditory components. That is, semantically congruent audio-visual stimuli are integrated only when both modalities are attended, and semantically incongruent stimuli are not integrated. Under selective attention to a single modality, audio-visual stimuli are not integrated regardless of semantic congruency.

5.
Using a stimulus-detection task, this study examined whether low-level processing of negative numbers in the auditory modality can elicit an attentional SNARC effect. Results showed: (1) low-level processing of negative numbers produced an attentional SNARC effect: when the cue was a negative number small in absolute value, participants responded faster to targets at the left ear; when the cue was large in absolute value, they responded faster to targets at the right ear. (2) The shift of auditory spatial attention elicited by negative-number processing did not depend on the context of the negative numbers; they were associated with spatial representation only through their absolute value. This conclusion is inconsistent with findings obtained in the visual modality and supports the phylogenetic hypothesis.

6.
Using a modified dot-probe paradigm, this study examined proficient bilinguals' advantage in processing emotional information during emotional attentional bias and the source of that advantage. A 2 (proficient vs. non-proficient bilinguals) × 3 (target and negative word at congruent, incongruent, or neutral positions) × 2 (cue duration of 100 ms or 400 ms) mixed design was used, recording response times and error rates to the target. Results showed no significant group difference in attentional-orienting scores at either cue duration, and no group difference in attentional-disengagement scores at 100 ms; at 400 ms, however, proficient bilinguals' disengagement scores were significantly smaller than those of non-proficient bilinguals. These results indicate that in an emotional attentional-bias task, proficient bilinguals show better inhibitory control over emotional word information than non-proficient bilinguals, a bilingual advantage in emotional information processing. The advantage arises because proficient bilinguals are better at disengaging attention from emotional information at a late attentional stage.

7.
Using simple figures as visual stimuli and brief pure tones as auditory stimuli, participants were instructed to attend to different modalities (visual, auditory, or both) so as to induce different attentional states (selective vs. divided attention), and the effect of attention on multisensory integration was examined. Participants responded fastest and most accurately to bimodal targets only under divided attention. Race-model analysis showed that this bimodal processing advantage arose from integration of the audio-visual stimuli. These results indicate that multisensory integration occurs only under divided attention.

8.
王婷, 植凤英, 陆禹同, 张积家. 《心理学报》 (Acta Psychologica Sinica), 2019, 51(9): 1040-1056
Musical training broadly enhances cognitive abilities. Addressing the three components of executive function (inhibitory control, working memory, and cognitive flexibility) in the context of Chinese ethnic music, this study matched the visual and auditory versions of the experimental tasks to examine how experience with Dong folk songs (侗歌) affects executive function in Dong middle-school students. Results showed that the Dong-song group performed significantly better on inhibition and updating than both the Dong non-singer group and the Han group, and this advantage held in both the visual and the auditory tasks, indicating that the cognitive advantage conferred by Dong-song experience generalizes across sensory modalities. The Dong-song and Dong non-singer groups did not differ significantly in shifting. The Dong non-singer group outperformed the Han group in inhibition and shifting, reflecting an interaction between language and music.

9.
Mu-wave suppression (spanning the alpha and beta bands) recorded at central midline electrodes is an electrophysiological index of human mirror-system activity. Although musical emotional expression is thought to operate by mimicking an individual's psychological state, no study has examined the relation between the human mirror system and musical emotion processing. Using EEG and a cross-modal emotional priming paradigm, this study examined whether the human mirror system participates in the automatic processing of chord emotion. Pleasant or unpleasant chords primed emotionally congruent or incongruent target faces. Behaviorally, participants responded significantly faster to emotionally congruent than to incongruent faces. The EEG results showed that between 500 and 650 ms after auditory-stimulus onset, the incongruent condition elicited beta-band desynchronization relative to the congruent condition; between 300 and 450 ms, both congruent and incongruent conditions elicited alpha-band desynchronization. Source analysis showed that the mu suppression arose mainly in brain regions associated with the human mirror system. These results indicate that the automatic processing of musical emotion is closely linked to human mirror-system activity.

10.
This study examined the influence of advance-preparation effects on emotional audio-visual integration along the temporal and the emotional-cognitive dimensions. A temporal-discrimination task (Experiment 1) found that visually led trials were significantly slower than auditorily led trials, and the integration effect was negative. An emotion-discrimination task (Experiment 2) found a positive integration effect; for negative emotions, auditorily led integration was significantly greater than visually led integration, whereas for positive emotions visually led integration was greater. The findings indicate that emotional audio-visual integration rests on emotional-cognitive processing, whereas temporal discrimination suppresses integration; moreover, both the cross-modal and the emotional advance-preparation effects depend on the leading modality.

11.
The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d = 0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.

12.
The study investigates cross-modal simultaneous processing of emotional tone of voice and emotional facial expression by event-related potentials (ERPs), using a wide range of different emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated measures ANOVAs showed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition: it showed a larger peak in response to congruous stimuli than to incongruous ones. It is suggested that P2 may be a cognitive marker of multisensory processing, independent of the emotional content.

13.
Words that are semantically congruous with their preceding discourse context are easier to process than words that are semantically incongruous with their context. This facilitation of semantic processing is reflected by an attenuation of the N400 event-related potential (ERP). We asked whether this was true of emotional words in emotional contexts where discourse congruity was conferred through emotional valence. ERPs were measured as 24 participants read two-sentence scenarios with critical words that varied by emotion (pleasant, unpleasant, or neutral) and congruity (congruous or incongruous). Semantic predictability, constraint, and plausibility were comparable across the neutral and emotional scenarios. As expected, the N400 was smaller to neutral words that were semantically congruous (vs. incongruous) with their neutral discourse context. No such N400 congruity effect was observed on emotional words following emotional discourse contexts. Rather, the amplitude of the N400 was small to all emotional words (pleasant and unpleasant), regardless of whether their emotional valence was congruous with the valence of their emotional discourse context. However, consistent with previous studies, the emotional words produced a larger late positivity than did the neutral words. These data suggest that comprehenders bypassed deep semantic processing of valence-incongruous emotional words within the N400 time window, moving rapidly on to evaluate the words' motivational significance.

14.
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual cues. Emotion perception research has focused on static facial cues; however, dynamic audio-visual (AV) cues mimic real-world social cues more accurately than static and/or unimodal stimuli. Novel dynamic AV stimuli were presented using a block design in two fMRI studies, comparing bimodal stimuli to unimodal conditions, and emotional to neutral stimuli. Results suggest that the bilateral superior temporal region plays distinct roles in the perception of emotion and in the integration of auditory and visual cues. Given the greater ecological validity of the stimuli developed for this study, this paradigm may be helpful in elucidating the deficits in emotion perception experienced by clinical populations.

15.
A classical experiment of auditory stream segregation is revisited, reconceptualising perceptual ambiguity in terms of affordances and musical engagement. Specifically, three experiments are reported that investigate how listeners' perception of auditory sequences change dynamically depending on emotional context. The experiments show that listeners adapt their attention to higher or lower pitched streams (Experiments 1 and 2) and the degree of auditory stream integration or segregation (Experiment 3) in accordance with the presented emotional context. Participants with and without formal musical training show this influence, although to differing degrees (Experiment 2). Contributing evidence to the literature on interactions between emotion and cognition, these experiments demonstrate how emotion is an intrinsic part of music perception and not merely a product of the listening experience.

16.
This study examined differences in children's use of social cues to make emotional inferences. Children ages 4, 5, and 8 years were presented with stimuli that depicted another child in affectively congruous and affectively incongruous expression/situation combinations. The intensity of positive and negative facial expressions was varied across situations. Subjects judged the target's feelings and selected among the alternative facial expressions or situations the one they had just seen. No significant age-related differences were found in the extent to which children registered and used both the expressive and situational information when making emotional inferences. The main experimental measure asked children to explain their judgments. In explaining their judgments, subjects' rationales indicated that they: (a) used both the situational and expressive cues; and (b) were sensitive to congruous versus incongruous cues, and even to mild versus strong incongruous cues. Children's rationales also reflected a sensitivity to expressive and situational negativity. For each age group, the rationales were more elaborate when the cues were problematic. Characteristic strategies, however, were also found for each age group. These distinct strategies may reflect social-life changes in children's social "theories" of emotion.

17.
In a behavioral study we analyzed the influence of visual action primes on abstract action sentence processing. We thereby aimed at investigating mental motor involvement during processes of meaning constitution of action verbs in abstract contexts. In the first experiment, participants executed either congruous or incongruous movements parallel to a video prime. In the second experiment, we added a no-movement condition. After the execution of the movement, participants rendered a sensibility judgment on action sentence targets. It was expected that congruous movements would facilitate both concrete and abstract action sentence comprehension in comparison to the incongruous and the no-movement condition. Results in Experiment 1 showed a concreteness effect but no effect of motor priming. Experiment 2 revealed a concreteness effect as well as an interaction effect of the sentence and the movement condition. The findings indicate an involvement of motor processes in abstract action language processing on a behavioral level.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号