Similar Literature
20 similar articles found (search time: 15 ms)
1.
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody–semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

2.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion category and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones of voice (happy, angry, fearful, sad, and neutral), in order to analyze differences in voice-based emotion perception across Chinese and Polish cultural backgrounds. Results showed: (1) Chinese participants were more accurate than Polish participants in judging emotion category and gave higher ratings of emotion intensity, indicating an in-group advantage in vocal emotion perception; (2) all participants identified emotion categories more accurately, and rated emotion intensity higher, for female voices than for male voices; (3) in emotion-category judgments, fear was recognized more accurately than happiness, sadness, and neutrality, with neutral emotion recognized least accurately; (4) in intensity ratings, fear was rated as more intense than sadness, and happiness received the lowest intensity ratings.

3.
郑茜, 张亭亭, 李量, 范宁, 杨志刚. 《心理学报》2023, 55(2): 177-191
Emotional information in speech (emotional prosody and emotional semantics) can release speech from auditory masking, but the specific mechanism of this release remains unclear. In two experiments using a perceived spatial separation paradigm and manipulating the type of masking sound, this study examined how emotional prosody and emotional semantics release speech from informational masking. Results showed that emotional prosody produced release from masking both under perceptual informational masking and under combined perceptual and cognitive informational masking. Emotional semantics produced no release under perceptual informational masking alone, but did produce release under combined perceptual and cognitive informational masking. These results indicate that emotional prosody and emotional semantics rely on different unmasking mechanisms. Emotional prosody preferentially attracts more of the listener's attention and can overcome the perceptual interference caused by the masking sound, but has little effect against interference from the masker's content. Emotional semantics preferentially recruits more of the listener's cognitive processing resources, releasing cognitive informational masking but not perceptual informational masking.

4.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

5.
王异芳, 苏彦捷, 何曲枝. 《心理学报》2012, 44(11): 1472-1478
Starting from the prosodic and semantic cues in speech, this study explored the development of preschool children's voice-based emotion perception. In Experiment 1, 124 children aged 3 to 5 judged the emotion category of semantically neutral sentences spoken by male and female speakers in five emotional tones of voice (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotion from prosodic cues improved with age between 3 and 5, mainly for angry, fearful, and neutral tones. Developmental trajectories differed across emotion categories: overall, happy prosody was the easiest to recognize and fearful prosody the hardest. When prosodic and semantic cues conflicted, preschoolers relied more on prosodic cues to judge the speaker's emotional state. Participants were more sensitive to emotions expressed by female voices.

6.
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease (PD), with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical ‘pseudo‐utterances’ were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo‐utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

7.
Background: Facial expressions, prosody, and speech content constitute channels by which information is exchanged. Little is known about the simultaneous and differential contribution of these channels to empathy when they provide emotionality or neutrality. Neutralised speech content, in particular, has received little attention with regard to how it influences the perception of other emotional cues. Methods: Participants were presented with video clips of actors telling short stories. One condition conveyed emotionality in all channels, while the other conditions provided neutral speech content, facial expression, or prosody, respectively. Participants judged the emotion and intensity presented, as well as their own emotional state and intensity. Skin conductance served as a physiological measure of emotional reactivity. Results: Neutralising channels significantly reduced empathic responses. Electrodermal recordings confirmed these findings. The differential effect of the communication channels on empathy prerequisites was that recognition of the target's emotion decreased most when the face was neutral, whereas decreased emotional responses attributed to the target emotion were especially present with neutral speech. Conclusion: Multichannel integration supports conscious and autonomous measures of empathy and emotional reactivity. Emotional facial expressions influence emotion recognition, whereas speech content is important for responding with an adequate own emotional state, possibly reflecting contextual emotion appraisal.

8.
Which brain regions are associated with recognition of emotional prosody? Are these distinct from those for recognition of facial expression? These issues were investigated by mapping the overlaps of co-registered lesions from 66 brain-damaged participants as a function of their performance in rating basic emotions. It was found that recognizing emotions from prosody draws on the right frontoparietal operculum, the bilateral frontal pole, and the left frontal operculum. Recognizing emotions from prosody and facial expressions draws on the right frontoparietal cortex, which may be important in reconstructing aspects of the emotion signaled by the stimulus. Furthermore, there were regions in the left and right temporal lobes that contributed disproportionately to recognition of emotion from faces or prosody, respectively.

9.
Emotional prosody recognition refers to the process of inferring another person's emotional state by analyzing acoustic cues in their speech other than semantics. In two experiments, this study explored the characteristics of emotional prosody recognition in college students with delayed social development. Results showed: (1) the delayed-development group recognized disgusted and fearful prosody significantly less accurately than the control group, with no significant group differences for the other emotions; both groups showed the same overall pattern, with recognition accuracy decreasing in the order happy, angry, sad, fearful, and disgusted prosody; (2) when semantics conflicted with prosody, the delayed-development group was more susceptible to semantic interference than the control group.

10.
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.

11.
Theoretical accounts suggest an increased and automatic neural processing of emotional, especially threat-related, facial expressions and emotional prosody. In line with this assumption, several functional imaging studies showed activation to threat-related faces and voices in subcortical and cortical brain areas during attentional distraction or unconscious stimulus processing. Furthermore, electrophysiological studies provided evidence for automatic early brain responses to emotional facial expressions and emotional prosody. However, there is increasing evidence that available cognitive resources modulate brain responses to emotional signals from faces and voices, even though conflicting findings may occur depending on contextual factors, specific emotions, sensory modality, and neuroscientific methods used. The current review summarizes these findings and suggests that further studies should combine information from different sensory modalities and neuroscientific methods such as functional neuroimaging and electrophysiology. Furthermore, it is concluded that the variable saliency and relevance of emotional social signals on the one hand and available cognitive resources on the other hand interact in a dynamic manner, making absolute boundaries of the automatic processing of emotional information from faces and voices unlikely.

12.
This study examined the relative influence of prosody and semantic content in children's inferences about intended listeners. Children (N = 72), who ranged in age from 5 to 10 years, heard greetings with prosody and content that was either infant or adult directed and chose the intended listener from amongst an infant or an adult. While content affected all children's choices, the effect of prosody was stronger (at least for children aged 7–10 years). For conditions in which prosodic cues were suggestive of one listener, and content cues, another, children aged 7–10 years chose the listener according to prosody. In contrast, the youngest age group (5- to 6-year-olds) chose listeners at chance levels in these incongruent conditions. While prosodic cues were most influential in determining children's choices, their ratings of how certain they felt about their choices indicated that content nonetheless influenced their thinking about the intended listener. Results are the first to show the unique influence of prosody in children's thinking about appropriate speech styles. Findings add to work showing children's ability to use prosody to make inferences about speakers' communicative intentions.

13.
郑志伟, 黄贤军, 张钦. 《心理学报》2013, 45(4): 427-437
Using a prosody/lexical interference paradigm and a delayed-matching task, two ERP experiments examined whether, and how, emotional prosody in spoken Mandarin modulates the recognition of emotion words. In Experiment 1, the different types of emotional prosody were presented in blocks; ERP results showed that emotion words whose valence was incongruent with the prosody elicited more negative-going P200, N300, and N400 components than words whose valence was congruent with the prosody. In Experiment 2, the prosody types were presented in random order, and the same valence-congruency effect persisted. The results indicate that emotional prosody can modulate emotion-word recognition, mainly by jointly facilitating both the phonological encoding and the semantic processing of emotion words.

14.
Cochlear implant (CI) devices provide the opportunity for children who are deaf to perceive sound by electrical stimulation of the auditory nerve, with the goal of optimizing oral communication. One part of oral communication concerns meaning, while another part concerns emotion: affective speech prosody in the auditory domain, and facial affect in the visual domain. It is not known whether childhood CI users can identify emotion in speech and faces, so we investigated speech prosody and facial affect in children who had been deaf from infancy and were experienced CI users. METHOD: Study participants were 18 CI users (ages 7–13 years) who received right unilateral CIs and 18 age- and gender-matched controls. Emotion recognition in speech prosody and faces was measured by the Diagnostic Analysis of Nonverbal Accuracy. RESULTS: Compared to controls, children with right CIs could identify facial affect but not affective speech prosody. Age at test and time since CI activation were uncorrelated with overall outcome measures. CONCLUSION: Children with right CIs recognize emotion in faces but have limited perception of affective speech prosody.

15.
The negative compatibility effect (NCE) is the surprising result that low-visibility prime arrows facilitate responses to opposite-direction target arrows. Here we compare the priming obtained with simple arrows to the priming of emotions when categorizing human faces, which represents a more naturalistic set of stimuli and for which there are no preexisting response biases. When inverted faces with neutral expressions were presented alongside emotional prime and target faces, only strong positive priming occurred. However, when the neutral faces were made to resemble the target faces in geometry (upright orientation), time (flashing briefly), and space (appearing in the same location), positive priming gradually weakened and became negative priming. Implications for theories of the NCE are discussed.

16.
The influence of emotional prosody on the evaluation of emotional facial expressions was investigated in an event-related brain potential (ERP) study using a priming paradigm, the facial affective decision task. Emotional prosodic fragments of short (200-msec) and medium (400-msec) duration were presented as primes, followed by an emotionally related or unrelated facial expression (or facial grimace, which does not resemble an emotion). Participants judged whether or not the facial expression represented an emotion. ERP results revealed an N400-like differentiation for emotionally related prime-target pairs when compared with unrelated prime-target pairs. Faces preceded by prosodic primes of medium length led to a normal priming effect (larger negativity for unrelated than for related prime-target pairs), but the reverse ERP pattern (larger negativity for related than for unrelated prime-target pairs) was observed for faces preceded by short prosodic primes. These results demonstrate that brief exposure to prosodic cues can establish a meaningful emotional context that influences related facial processing; however, this context does not always lead to a processing advantage when prosodic information is very short in duration.

17.
Facial attributes such as race, sex, and age can interact with emotional expressions; however, only a few studies have investigated the nature of the interaction between facial age cues and emotional expressions, and these have produced inconsistent results. Additionally, these studies have not addressed the mechanism(s) driving the influence of facial age cues on emotional expression or vice versa. In the current study, participants categorised young and older adult faces expressing happiness and anger (Experiment 1) or sadness (Experiment 2) by their age and their emotional expression. Age cues moderated categorisation of happiness vs. anger and sadness in the absence of an influence of emotional expression on age categorisation times. This asymmetrical interaction suggests that facial age cues are obligatorily processed prior to emotional expressions. A categorisation advantage was found for happiness expressed on young faces relative to both anger and sadness, which are negative in valence but differ in their congruence with old-age stereotypes or structural overlap with age cues. This suggests that the observed influence of facial age cues on emotion perception is due to the congruence between relatively positive evaluations of young faces and happy expressions.

18.
Two sources of information most relevant to guide social decision making are the cooperative tendencies associated with different people and their facial emotional displays. This electrophysiological experiment aimed to study how the use of personal identity and emotional expressions as cues impacts different stages of face processing and their potential isolated or interactive processing. Participants played a modified trust game with 8 different alleged partners, and in separate blocks either the identity or the emotions carried information regarding potential trial outcomes (win or loss). Behaviorally, participants were faster to make decisions based on identity compared to emotional expressions. Also, ignored (nonpredictive) emotions interfered with decisions based on identity in trials where these sources of information conflicted. Electrophysiological results showed that expectations based on emotions modulated processing earlier in time than those based on identity. Whereas emotion modulated the central N1 and VPP potentials, identity judgments heightened the amplitude of the N2 and P3b. In addition, the conflict that ignored emotions generated was reflected on the N170 and P3b potentials. Overall, our results indicate that using identity or emotional cues to predict cooperation tendencies recruits dissociable neural circuits from an early point in time, and that both sources of information generate early and late interactive patterns.

19.
20.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
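The modelling pipeline described in item 20 can be illustrated in outline: extract frame-wise psychoacoustic features from a signal, then fit a linear map from features to continuous emotion ratings. The sketch below is a minimal illustration with NumPy, not the authors' actual model; it computes only two of the seven listed features (an RMS loudness proxy and the spectral centroid), and the "valence ratings" are synthetic data constructed purely for demonstration.

```python
import numpy as np

def frame_features(signal, sr, frame_len=1024, hop=512):
    """Per-frame RMS loudness and spectral centroid (Hz) of a mono signal."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))              # crude loudness proxy
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        # magnitude-weighted mean frequency; epsilon guards against silence
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        feats.append((rms, centroid))
    return np.array(feats)

def fit_linear_model(X, y):
    """Ordinary least-squares weights (with bias term) from features to ratings."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def predict(X, w):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return X1 @ w

if __name__ == "__main__":
    sr = 16000
    t = np.arange(2 * sr) / sr
    # 2-s, 500-Hz tone with a rising amplitude envelope, standing in for audio
    signal = (0.2 + 0.4 * t / t[-1]) * np.sin(2 * np.pi * 500 * t)
    X = frame_features(signal, sr)
    # synthetic "valence ratings": a known linear function of the features
    y = 0.8 * X[:, 0] + 0.001 * X[:, 1] + 0.1
    w = fit_linear_model(X, y)
    print("max |prediction - rating|:", np.max(np.abs(predict(X, w) - y)))
```

A real replication of the second-by-second prediction task would substitute listener ratings for the synthetic targets and add the remaining features (tempo/speech rate, contour, spectral flux, sharpness, roughness), which require more elaborate estimators than this two-feature sketch.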


Copyright©北京勤云科技发展有限公司  京ICP备09084417号