Similar Literature (20 matching records)
1.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky "sentences" spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky "sentences" in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.

2.
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In Experiment 1, synonyms of "happy" and "sad" were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In Experiment 2, synonyms of "happy" and "sad" were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was also significantly lower when prosody was incongruent with both verbal content and face. This suggests that prosody biases the processing of emotional verbal content even when it conflicts with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.

3.
Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition of healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in recognition of emotional prosody. This effect was partially mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: Men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman as compared to a man.

4.
Three experiments revealed that music lessons promote sensitivity to emotions conveyed by speech prosody. After hearing semantically neutral utterances spoken with emotional (i.e., happy, sad, fearful, or angry) prosody, or tone sequences that mimicked the utterances' prosody, participants identified the emotion conveyed. In Experiment 1 (n=20), musically trained adults performed better than untrained adults. In Experiment 2 (n=56), musically trained adults outperformed untrained adults at identifying sadness, fear, or neutral emotion. In Experiment 3 (n=43), 6-year-olds were tested after being randomly assigned to 1 year of keyboard, vocal, drama, or no lessons. The keyboard group performed equivalently to the drama group and better than the no-lessons group at identifying anger or fear.

5.
Emotional information can influence various cognitive processes, such as attention, motivation, and memory. Differences in the processing of emotion have been observed in individuals with high levels of autism-like traits. The current study aimed to determine the influence of emotional prosody on word learning ability in neurotypical adults who varied in their levels of autism-like traits. Thirty-eight participants learned 30 nonsense words as names for 30 "alien" characters. Alien names were verbally presented with happy, fearful, or neutral prosody. For all participants, recall performance was significantly worse for words spoken with fearful prosody compared to neutral. Recall performance was also worse for words spoken with happy prosody compared to neutral, but only for those with lower levels of autism-like traits. The findings suggest that emotional prosody can interfere with word learning, and that people with fewer autism-like traits may be more susceptible to such interference due to a higher attention bias toward emotion.

6.
Simulating a natural learning context, this study examined how emotional prosody modulates the learning of concrete and abstract words, and why this modulation arises. Participants learned abstract and concrete words under happy, angry, or neutral contextual prosody, while behavioural and EEG responses were recorded during both learning and testing. Abstract words learned against an angry prosodic background were processed with lower accuracy and longer reaction times, and elicited more pronounced ERP components. These results indicate that angry prosody has a marked negative effect on word learning, especially for abstract words. Moreover, the modulatory effect of emotional prosody on word learning is formed during the learning phase, through its influence on semantic retrieval and late semantic integration.

7.
Researchers have evaluated how broad categories of emotion (i.e. positive and negative) influence judgments of learning (JOLs) relative to neutral items. Specifically, JOLs are typically higher for emotional relative to neutral items. The novel goal of the present research was to evaluate JOLs for fine-grained categories of emotion. Participants studied faces with afraid, angry, sad, or neutral expressions (Experiment 1) and with afraid, angry, or sad expressions (Experiment 2). Participants identified the expressed emotion, made a JOL for each, and completed a recognition test. JOLs were higher for the emotional relative to neutral expressions. However, JOLs were insensitive to the categories of negative emotion. Using a survey design in Experiment 3, participants demonstrated idiosyncratic beliefs about emotion. Some people believed the fine-grained emotions were equally memorable, whereas others believed a specific emotion (e.g. anger) was most memorable. Thus, beliefs about emotion are nuanced, which has important implications for JOL theory.

8.
王异芳, 苏彦捷 & 何曲枝. 《心理学报》 (Acta Psychologica Sinica), 2012, 44(11): 1472-1478
Starting from two cues in speech, prosody and semantics, this study explored the development of preschool children's emotion perception from vocal cues. In Experiment 1, 124 children aged 3 to 5 judged the emotion type of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotion from prosodic cues improved with age between 3 and 5, mainly for angry, fearful, and neutral tones. The developmental trajectories differed across emotion types: overall, happy prosody was the easiest to identify, and fearful the hardest. When prosodic and semantic cues conflicted, preschoolers relied more on prosody to judge the speaker's emotional state. Participants were more sensitive to emotions expressed by female voices.

9.
郑茜, 张亭亭, 李量, 范宁 & 杨志刚. 《心理学报》 (Acta Psychologica Sinica), 2023, 55(2): 177-191
Emotional information in speech (emotional prosody and emotional semantics) can release speech from auditory masking, but the underlying mechanism remains unclear. In two experiments using a perceived spatial separation paradigm and manipulating the type of masker, we examined how emotional prosody and emotional semantics each release speech from informational masking. Emotional prosody produced an unmasking effect both under perceptual informational masking and under combined perceptual and cognitive informational masking. Emotional semantics produced no unmasking effect under perceptual informational masking alone, but did so under combined perceptual and cognitive informational masking. These results indicate that emotional prosody and emotional semantics rely on different unmasking mechanisms. Emotional prosody preferentially captures the listener's attention and can overcome the perceptual interference caused by the masker, but does little against interference from the masker's content. Emotional semantics preferentially recruits more of the listener's cognitive processing resources, counteracting cognitive informational masking but not perceptual informational masking.

10.
The purpose of the present research was to examine if anxiety is linked to a memory-based attentional bias, in which attention to threat is thought to depend on implicit learning. Memory-based attentional biases were defined and also demonstrated in two experiments. A total of 168 university students were shown a pair of faces that varied in their emotional content (angry, neutral, and happy), with each type of emotion being consistently preceded by a particular neutral cue face, appearing in the same position. Eye movements were measured during presentation of the cue faces and of the emotional faces. The results of two experiments indicated that anxiety was connected with a tendency to avert one's gaze from the positions of angry faces to the positions of happy faces, before these were shown on the screen. This, in turn, caused a reduced perception of angry relative to happy faces. In Experiment 2, participants were also not aware of having a memory-based attentional bias.

11.
Visual working memory (WM) for face identities is enhanced when faces express negative versus positive emotion. To determine the stage at which emotion exerts its influence on memory for person information, we isolated expression (angry/happy) to the encoding phase (Experiment 1; neutral test faces) or retrieval phase (Experiment 2; neutral study faces). WM was only enhanced by anger when expression was present at encoding, suggesting that retrieval mechanisms are not influenced by emotional expression. To examine whether emotional information is discarded on completion of encoding or sustained in WM, in Experiment 3 an emotional word categorisation task was inserted into the maintenance interval. Emotional congruence between word and face supported memory for angry but not for happy faces, suggesting that negative emotional information is preferentially sustained during WM maintenance. Our findings demonstrate that negative expressions exert sustained and beneficial effects on WM for faces that extend beyond encoding.

12.
We used the remember-know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

13.
Accurately recognising emotional prosody in speech is essential for social interaction. Using functional near-infrared spectroscopy (fNIRS), this study explored cortical activity during the processing of angry, fearful, and happy prosody under explicit and implicit emotion-processing tasks. Angry, fearful, and happy prosody were specifically processed in the left frontal pole/orbitofrontal cortex, the right supramarginal gyrus, and the left inferior frontal gyrus, respectively, with the right supramarginal gyrus modulated by both emotion and task. In addition, the right middle temporal gyrus, inferior temporal gyrus, and temporal pole were activated significantly more strongly in the explicit than in the implicit emotion task. These results partially support the hierarchical model of emotional prosody, but call into question its third level, namely the claim that fine-grained frontal processing of vocal emotional information requires an explicit emotion-processing task.

14.
We assessed dysphoric and clinically distressed individuals' ability to ignore the emotional aspects of facial expressions using the Garner speeded-classification task. Garner's paradigm tests the ability to selectively focus on a single relevant dimension while ignoring variations on other, irrelevant, ones. In the present task, the stimuli were faces of men and women expressing happy, angry, and neutral emotions. In Experiments 1 and 2, dysphoric and nondysphoric participants performed the Garner task, focusing on gender and ignoring emotion (Experiment 1) and focusing on emotion and ignoring gender (Experiment 2). Results suggest that dysphoric individuals exhibited more difficulty ignoring the emotional dimension of social stimuli even under specific instructions to do so than nondysphoric individuals. In Experiments 3 and 4, we replicated these results in clinically distressed and nondistressed individuals. The results of Experiment 3 further suggested that depression was more closely associated with the inability to selectively ignore emotion than was social anxiety. Experiment 4 confirmed that this failure of selective attention was specific to processing emotional, and not gender features. The implications of these findings for cognitive and interpersonal theories of depression are discussed.

15.
Older adults have greater difficulty than younger adults perceiving vocal emotions. To better characterise this effect, we explored its relation to age differences in sensory, cognitive and emotional functioning. Additionally, we examined the role of speaker age and listener sex. Participants (N = 163) aged 19-34 years and 60-85 years categorised neutral sentences spoken by ten younger and ten older speakers with a happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions from younger and older speakers denoted the intended emotion with similar accuracy. As expected, younger participants outperformed older participants and this effect was statistically mediated by an age-related decline in both optimism and working-memory. Additionally, age differences in emotion perception were larger for younger as compared to older speakers and a better perception of younger as compared to older speakers was greater in younger as compared to older participants. Last, a female perception benefit was less pervasive in the older than the younger group. Together, these findings suggest that the role of age for emotion perception is multi-faceted. It is linked to emotional and cognitive change, to processing biases that benefit young and own-age expressions, and to the different aptitudes of women and men.

16.
Studies using facial emotional expressions as stimuli partially support the assumption of biased processing of social signals in social phobia. This pilot study explored for the first time whether individuals with social phobia display a processing bias towards emotional prosody. Fifteen individuals with generalized social phobia and fifteen healthy controls (HC) matched for gender, age, and education completed a recognition test consisting of meaningless utterances spoken in a neutral, angry, sad, fearful, disgusted or happy tone of voice. Participants also evaluated the stimuli with regard to valence and arousal. While these ratings did not differ significantly between groups, analysis of the recognition test revealed enhanced identification of sad and fearful voices and decreased identification of happy voices in individuals with social phobia compared with HC. The two groups did not differ in their processing of neutral, disgust, and anger prosody.

17.
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm where participants reproduced the duration of a facial emotion stimulus using an oval-shape stimulus or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend of under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to a facial emotion produces the relativity of time perception.

18.
The way our brain processes emotional stimuli has been studied intensively. One of the main issues still under debate is the laterality of valence processing. Herein, we exploited the fact that pupil size increases under conditions of higher mental effort and during emotional processing in order to contrast three proposed hypotheses in the field. We used different manual response mappings for emotional stimuli: Participants responded with their right hand for positive and with their left hand for negative facial expressions, or vice versa. Hand position was either regular (Experiment 1) or crossed (Experiment 2) in order to rule out a "spatial-valence association" alternative explanation. A third experiment was conducted by employing a passive viewing procedure of peripheral emotional stimuli. In the first two experiments, pupil size was larger when participants responded to positive stimuli with their left hand and to negative with their right hand, compared with the opposite mapping. Results of Experiment 3 strengthen the findings of Experiments 1 and 2. These findings provide significant psychophysiological evidence for the valence hypothesis: Processing positive stimuli involves the left hemisphere, while processing negative stimuli involves the right hemisphere. These results are discussed in relation to contemporary theories of emotion processing.

19.
黄贤军 & 张伟欣. 《心理科学》 (Journal of Psychological Science), 2014, 37(4): 851-856
Using event-related potentials (ERPs), this study examined the time course of emotional prosody processing under an emotion-judgement task and a gender-judgement task. In the 175-275 ms window, emotional prosody processing was modulated by task: the emotion-judgement task showed a main effect of valence and a negativity bias, with angry prosody eliciting a more positive P2 component than happy and neutral prosody, whereas the gender-judgement task showed no valence effect. In the later evaluative processing and response-preparation stage (400-800 ms), angry prosody elicited a more positive late component than happy and neutral prosody under both tasks. These results indicate that different emotional prosodies are recognised through distinct cognitive mechanisms, which are modulated to some extent by the processing task.

20.
The interpersonal effects of anger and happiness in negotiations
Three experiments investigated the interpersonal effects of anger and happiness in negotiations. In the course of a computer-mediated negotiation, participants received information about the emotional state (anger, happiness, or none) of their opponent. Consistent with a strategic-choice perspective, Experiment 1 showed that participants conceded more to an angry opponent than to a happy one. Experiment 2 showed that this effect was caused by tracking: participants used the emotion information to infer the other's limit, and they adjusted their demands accordingly. However, this effect was absent when the other made large concessions. Experiment 3 examined the interplay between experienced and communicated emotion and showed that angry communications (unlike happy ones) induced fear and thereby mitigated the effect of the opponent's experienced emotion. These results suggest that negotiators are especially influenced by their opponent's emotions when they are motivated and able to consider them.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号