Similar Articles
20 similar articles found (search time: 125 ms)
1.
Research on emotion processing in the visual modality suggests a processing advantage for emotionally salient stimuli, even at early sensory stages; however, results concerning the auditory correlates are inconsistent. We present two experiments that employed a gating paradigm to investigate emotional prosody. In Experiment 1, participants heard successively building segments of Jabberwocky “sentences” spoken with happy, angry, or neutral intonation. After each segment, participants indicated the emotion conveyed and rated their confidence in their decision. Participants in Experiment 2 also heard Jabberwocky “sentences” in successive increments, with half discriminating happy from neutral prosody, and half discriminating angry from neutral prosody. Participants in both experiments identified neutral prosody more rapidly and accurately than happy or angry prosody. Confidence ratings were greater for neutral sentences, and error patterns also indicated a bias for recognising neutral prosody. Taken together, results suggest that enhanced processing of emotional content may be constrained by stimulus modality.

2.
Three experiments revealed that music lessons promote sensitivity to emotions conveyed by speech prosody. After hearing semantically neutral utterances spoken with emotional (i.e., happy, sad, fearful, or angry) prosody, or tone sequences that mimicked the utterances' prosody, participants identified the emotion conveyed. In Experiment 1 (n=20), musically trained adults performed better than untrained adults. In Experiment 2 (n=56), musically trained adults outperformed untrained adults at identifying sadness, fear, or neutral emotion. In Experiment 3 (n=43), 6-year-olds were tested after being randomly assigned to 1 year of keyboard, vocal, drama, or no lessons. The keyboard group performed equivalently to the drama group and better than the no-lessons group at identifying anger or fear.

3.
Simulating a natural learning context, this study examined how emotional prosody modulates the learning of concrete and abstract words and why this modulation arises. Participants learned abstract and concrete words under happy, angry, or neutral prosodic background conditions, while behavioural and EEG responses were recorded during both learning and testing. Abstract words learned against angry prosody were processed with lower accuracy and longer reaction times, and elicited more pronounced ERP components, indicating that angry prosody has a significant negative effect on word learning, especially the learning of abstract words. Moreover, the modulatory effect of emotional prosody on word learning arises during the learning process, through its influence on lexical-semantic retrieval and late semantic integration.

4.
王异芳, 苏彦捷, 何曲枝. Acta Psychologica Sinica (《心理学报》), 2012, 44(11): 1472-1478
Starting from the prosodic and semantic cues of speech, this study explored the development of preschool children's emotion perception based on vocal cues. In Experiment 1, 124 children aged 3-5 years judged the emotion type of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotion from prosodic cues improved with age between 3 and 5 years, mainly for angry, fearful, and neutral emotions. The developmental trajectories differed across emotion types: overall, happy prosody was the easiest to identify, whereas fearful prosody was the hardest. When prosodic and semantic cues conflicted, preschool children relied more on prosody to judge the speaker's emotional state. Children were also more sensitive to emotions expressed by female voices.

5.
The purpose of the present research was to examine if anxiety is linked to a memory-based attentional bias, in which attention to threat is thought to depend on implicit learning. Memory-based attentional biases were defined and also demonstrated in two experiments. A total of 168 university students were shown a pair of faces that varied in their emotional content (angry, neutral, and happy), with each type of emotion being consistently preceded by a particular neutral cue face, appearing in the same position. Eye movements were measured during these cue faces and during the emotional faces. The results of two experiments indicated that anxiety was connected with a tendency to avert one's gaze from the positions of angry faces to the positions of happy faces, before these were shown on the screen. This, in turn, caused a reduced perception of angry relative to happy faces. In Experiment 2, participants were also not aware of having a memory-based attentional bias.

6.
郑茜, 张亭亭, 李量, 范宁, 杨志刚. Acta Psychologica Sinica (《心理学报》), 2023, 55(2): 177-191
Emotional information in speech (emotional prosody and emotional semantics) can release speech from auditory masking, but the mechanism underlying this unmasking remains unclear. In two experiments using a perceived spatial separation paradigm and manipulating the type of masker, we examined how emotional prosody and emotional semantics release speech from informational masking. Emotional prosody produced release from masking both under perceptual informational masking and under combined perceptual and cognitive informational masking. Emotional semantics produced no release under perceptual informational masking alone, but did produce release under combined perceptual and cognitive informational masking. These results indicate that emotional prosody and emotional semantics rely on different unmasking mechanisms. Emotional prosody preferentially captures the listener's attention and can overcome the perceptual interference caused by the masking sound, but has little effect against interference from the masker's content. Emotional semantics preferentially recruits the listener's cognitive processing resources, releasing speech from cognitive, but not perceptual, informational masking.

7.
We used the remember-know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

8.
Studies using facial emotional expressions as stimuli partially support the assumption of biased processing of social signals in social phobia. This pilot study explored for the first time whether individuals with social phobia display a processing bias towards emotional prosody. Fifteen individuals with generalized social phobia and fifteen healthy controls (HC) matched for gender, age, and education completed a recognition test consisting of meaningless utterances spoken in a neutral, angry, sad, fearful, disgusted or happy tone of voice. Participants also evaluated the stimuli with regard to valence and arousal. While these ratings did not differ significantly between groups, analysis of the recognition test revealed enhanced identification of sad and fearful voices and decreased identification of happy voices in individuals with social phobia compared with HC. The two groups did not differ in their processing of neutral, disgust, and anger prosody.

9.
Accurately recognising emotional prosody in speech is vital for social interaction. Using functional near-infrared spectroscopy (fNIRS), this study explored cortical activity during the processing of angry, fearful, and happy prosody under explicit and implicit emotion-processing conditions. The results showed that angry, fearful, and happy prosody were specifically processed in the left frontal pole/orbitofrontal cortex, the right supramarginal gyrus, and the left inferior frontal gyrus, respectively; activity in the right supramarginal gyrus was modulated by both emotion and task. In addition, the right middle temporal gyrus, inferior temporal gyrus, and temporal pole were activated significantly more strongly in the explicit than in the implicit emotion task. These results partly support the hierarchical model of emotional prosody, while challenging its third level, namely the claim that fine-grained frontal processing of vocal emotional information requires an explicit emotion-processing task.

10.
The present paper reports three new experiments suggesting that the valence of a face cue can influence attentional effects in a cueing paradigm. Moreover, heightened trait anxiety resulted in increased attentional dwell-time on emotional facial stimuli, relative to neutral faces. Experiment 1 presented a cueing task, in which the cue was either an "angry", "happy", or "neutral" facial expression. Targets could appear either in the same location as the face (valid trials) or in a different location to the face (invalid trials). Participants did not show significant variations across the different cue types (angry, happy, neutral) in responding to a target on valid trials. However, the valence of the face did affect response times on invalid trials. Specifically, participants took longer to respond to a target when the face cue was "angry" or "happy" relative to neutral. In Experiment 2, the cue-target stimulus onset asynchrony (SOA) was increased and an overall inhibition of return (IOR) effect was found (i.e., slower responses on valid trials). However, the "angry" face cue eliminated the IOR effect for both high and low trait anxious groups. In Experiment 3, threat-related and jumbled facial stimuli reduced the magnitude of IOR for high, but not for low, trait-anxious participants. These results suggest that: (i) attentional bias in anxiety may reflect a difficulty in disengaging from threat-related and emotional stimuli, and (ii) threat-related and ambiguous cues can influence the magnitude of the IOR effect.

11.
Emotional information can influence various cognitive processes, such as attention, motivation, and memory. Differences in the processing of emotion have been observed in individuals with high levels of autism-like traits. The current study aimed to determine the influence of emotional prosody on word learning ability in neurotypical adults who varied in their levels of autism-like traits. Thirty-eight participants learned 30 nonsense words as names for 30 “alien” characters. Alien names were verbally presented with happy, fearful, or neutral prosody. For all participants, recall performance was significantly worse for words spoken with fearful prosody compared to neutral. Recall performance was also worse for words spoken with happy prosody compared to neutral, but only for those with lower levels of autism-like traits. The findings suggest that emotional prosody can interfere with word learning, and that people with fewer autism-like traits may be more susceptible to such interference due to a higher attention bias toward emotion.

12.
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm where participants reproduced the duration of a facial emotion stimulus using an oval-shape stimulus or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend of under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to a facial emotion produces the relativity of time perception.

13.
Older adults have greater difficulty than younger adults perceiving vocal emotions. To better characterise this effect, we explored its relation to age differences in sensory, cognitive and emotional functioning. Additionally, we examined the role of speaker age and listener sex. Participants (N = 163) aged 19–34 years and 60–85 years categorised neutral sentences spoken by ten younger and ten older speakers with a happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions from younger and older speakers denoted the intended emotion with similar accuracy. As expected, younger participants outperformed older participants and this effect was statistically mediated by an age-related decline in both optimism and working-memory. Additionally, age differences in emotion perception were larger for younger as compared to older speakers and a better perception of younger as compared to older speakers was greater in younger as compared to older participants. Last, a female perception benefit was less pervasive in the older than the younger group. Together, these findings suggest that the role of age for emotion perception is multi-faceted. It is linked to emotional and cognitive change, to processing biases that benefit young and own-age expressions, and to the different aptitudes of women and men.

14.
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.

15.
When searching for a discrepant target along a simple dimension such as color or shape, repetition of the target feature substantially speeds search, an effect known as feature priming of pop-out (V. Maljkovic and K. Nakayama, 1994). The authors present the first report of emotional priming of pop-out. Participants had to detect the face displaying a discrepant expression of emotion in an array of four face photographs. On each trial, the target when present was either a neutral face among emotional faces (angry in Experiment 1 or happy in Experiment 2), or an emotional face among neutral faces. Target detection was faster when the target displayed the same emotion on successive trials. This effect occurred for angry and for happy faces, not for neutral faces. It was completely abolished when faces were inverted instead of upright, suggesting that emotional categories rather than physical feature properties drive emotional priming of pop-out. The implications of the present findings for theoretical accounts of intertrial priming and for the face-in-the-crowd phenomenon are discussed.

16.
黄贤军, 张伟欣. Journal of Psychological Science (《心理科学》), 2014, 37(4): 851-856
Using ERPs, this study examined the time course of emotional prosody processing under emotion-judgement and gender-judgement tasks. In the 175-275 ms window, prosody processing was modulated by task: the emotion-judgement task showed a main effect of valence and a negativity bias, with angry prosody eliciting a more positive P2 component than happy and neutral prosody, whereas the gender-judgement task showed no valence effect. In the later evaluation and response-preparation stage (400-800 ms), angry prosody elicited a more positive late component than happy and neutral prosody under both tasks. These results indicate that the recognition of different emotional prosodies involves different cognitive mechanisms, which are modulated to some extent by the processing task.

17.
The interpersonal effects of anger and happiness in negotiations
Three experiments investigated the interpersonal effects of anger and happiness in negotiations. In the course of a computer-mediated negotiation, participants received information about the emotional state (anger, happiness, or none) of their opponent. Consistent with a strategic-choice perspective, Experiment 1 showed that participants conceded more to an angry opponent than to a happy one. Experiment 2 showed that this effect was caused by tracking: participants used the emotion information to infer the other's limit, and they adjusted their demands accordingly. However, this effect was absent when the other made large concessions. Experiment 3 examined the interplay between experienced and communicated emotion and showed that angry communications (unlike happy ones) induced fear and thereby mitigated the effect of the opponent's experienced emotion. These results suggest that negotiators are especially influenced by their opponent's emotions when they are motivated and able to consider them.

18.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier, e.g. perceptual processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent compared to incongruent face–voice pairs in both the Attend Face and in the Attend Voice condition. Moreover, when attending to faces, emotionally congruent bimodal stimuli were more efficiently processed than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

19.
Emotionally intoned sentences (happy, sad, angry, and neutral voices) were dichotically paired with monotone sentences. A left ear advantage was found for recognizing emotional intonation, while a simultaneous right ear advantage was found for recognizing the verbal content of the sentences. The results indicate a right hemispheric superiority in recognizing emotional stimuli. These findings are most reasonably attributed to differential lateralization of emotional functions, rather than to subject strategy effects. No evidence was found to support a hypothesis that each hemisphere is involved in processing different types of emotion.

20.
Researchers have evaluated how broad categories of emotion (i.e. positive and negative) influence judgments of learning (JOLs) relative to neutral items. Specifically, JOLs are typically higher for emotional relative to neutral items. The novel goal of the present research was to evaluate JOLs for fine-grained categories of emotion. Participants studied faces with afraid, angry, sad, or neutral expressions (Experiment 1) and with afraid, angry, or sad expressions (Experiment 2). Participants identified the expressed emotion, made a JOL for each, and completed a recognition test. JOLs were higher for the emotional relative to neutral expressions. However, JOLs were insensitive to the categories of negative emotion. Using a survey design in Experiment 3, participants demonstrated idiosyncratic beliefs about emotion. Some people believed the fine-grained emotions were equally memorable, whereas others believed a specific emotion (e.g. anger) was most memorable. Thus, beliefs about emotion are nuanced, which has important implications for JOL theory.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号