Similar Literature
20 similar documents found (search time: 31 ms)
1.
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody-semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

2.
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody–semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

3.
The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task, even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence on the recognition of emotional facial expressions for both same and different valences.

4.
Previous evidence supports differential event-related brain potential (ERP) responses for emotional prosodic processing and integrative emotional prosodic/semantic processing. While the latter process elicits a negativity similar to the well-known N400 component, transitions in emotional prosodic processing elicit a positivity. To further substantiate this evidence, the current investigation utilized lexical sentences and sentences without lexical content (pseudo-sentences) spoken in six basic emotions by a female and a male speaker. Results indicate that emotional prosodic expectancy violations elicit a right-lateralized positive-going ERP component independent of basic emotional prosodies and speaker voice. In addition, expectancy violations of integrative emotional prosody/semantics elicit a negativity with a whole-head distribution. The current results complement previous evidence and extend it by showing the respective effects for a wider range of emotional prosodies, independent of lexical content and speaker voice.

5.
The neural reuse hypothesis holds that the emotional effect of a word can arise before semantic formation, which better helps humans adapt to their environment. To test this hypothesis, EEG and behavioral experiments were designed to record participants' processing of Chinese disgust-related emotion words and neutral words. Results showed that the ERPs for disgust words and neutral words diverged at around 170 ms, and an EPN associated with visual attention to emotional stimuli appeared over occipital sites; the N400 difference wave between disgust words and neutral words was source-localized near the insula, with the time window of its maximal activation beginning at about 380 ms. This indicates that emotional effects emerge before semantic analysis of emotion words, supporting the neural reuse hypothesis.

6.
Shared acoustic cues in speech, music, and nonverbal emotional expressions have been postulated to code for emotion quality and intensity, favoring the hypothesis of a prehuman origin of affective prosody in human emotional communication. To explore this hypothesis, we examined in playback experiments using a habituation-dishabituation paradigm whether a solitary-foraging, highly vocal mammal, the tree shrew, is able to discriminate two behaviorally defined states of affect intensity (low vs. high) from the voice of conspecifics. Playback experiments with communication calls of two different types (chatter call and scream call) given in the state of low affect intensity revealed that habituated tree shrews dishabituated to one call type (the chatter call), and showed a tendency to do so for the other (the scream call), when the calls were given in the state of high affect intensity. Findings suggest that listeners perceive the acoustic variation linked to defined states of affect intensity as different within the same call type. Our findings in tree shrews provide the first evidence that acoustically conveyed affect intensity is biologically relevant without any other sensory cue, even for solitary foragers. Thus, the perception of affect intensity conveyed in the voice in stressful contexts represents a shared trait of mammals, independent of the complexity of social systems. Findings support the hypothesis that affective prosody in human emotional communication has deep-reaching phylogenetic roots, deriving from precursors already present and relevant in the vocal communication system of early mammals.

7.
An appraisal tendency approach was adopted to explore the influence of emotional certainty on stereotyping and judgment in a workplace context. Across two studies, participants completed an emotional memory task designed to induce emotions representing two different levels of emotional certainty (certain versus uncertain). They then reviewed interview footage, a résumé, and qualifying criteria before rating a hypothetical job candidate’s personality and employability. Study 1 revealed that emotions high in certainty (compared to uncertainty) led to more favorable personality and employability ratings for attractive compared to unattractive candidates. Study 2 produced the same pattern of results for younger (compared to older) candidates. We conclude that certainty appraisals associated with temporary, incidental emotions are a useful predictor of the likelihood that stereotypes will be applied during decision making.

8.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.
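For readers wanting a concrete picture of such a parameter-by-dimension correlation analysis, here is a minimal, hypothetical sketch in Python (pandas). The column names and toy values are illustrative assumptions, not data from the study, and only two of the 43 acoustic parameters are mocked up.

```python
# Hypothetical sketch of a parameter-by-dimension correlation analysis;
# column names and values are invented for illustration only.
import pandas as pd

# One row per laughter sample: two example acoustic parameters plus two
# of the rated emotional dimensions (the study used 43 parameters).
df = pd.DataFrame({
    "f0_mean_hz": [220.0, 310.0, 180.0, 260.0, 295.0],
    "duration_s": [0.8, 1.4, 0.6, 1.1, 1.3],
    "arousal":    [2.1, 4.3, 1.5, 3.2, 4.0],
    "dominance":  [3.0, 3.8, 2.2, 3.5, 3.6],
})

acoustic = ["f0_mean_hz", "duration_s"]
dimensions = ["arousal", "dominance"]

# Spearman correlation of each acoustic parameter with each emotional dimension
corr = df[acoustic + dimensions].corr(method="spearman").loc[acoustic, dimensions]
print(corr)
```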

9.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels, which enhances the accessibility of emotion-related knowledge in memory.

10.
Little is known about the underlying dimensions of impaired recognition of emotional prosody that is frequently observed in patients with Parkinson's disease (PD). Because patients with PD also suffer from working memory deficits and impaired time perception, the present study examined the contribution of (a) working memory (frontal executive functioning) and (b) processing of the acoustic parameter speech rate to the perception of emotional prosody in PD. Two acoustic parameters known to be important for emotional classifications (speech duration and pitch variability) were systematically varied in prosodic utterances. Twenty patients with PD and 16 healthy controls (matched for age, sex, and IQ) participated in the study. The findings imply that (1) working memory dysfunctions and perception of emotional prosody are not independent in PD, (2) PD and healthy control subjects perceived vocal emotions categorically along two acoustic manipulation continua, and (3) patients with PD show impairments in processing of speech rate information.

11.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.

12.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
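To give a rough sense of how such a feature-based prediction pipeline might be assembled, the sketch below is a minimal illustration, not the authors' model: it extracts only three of the seven named features (a loudness proxy, spectral centroid, and a spectral-flux-like onset strength) with librosa and fits a plain ridge regression; `excerpt_paths` and the aligned ratings `y` are assumed to exist.

```python
# Illustrative sketch (not the study's code): extract a few psychoacoustic
# feature trajectories and regress continuous emotion ratings on them.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

def feature_trajectories(path, hop_length=512):
    """Frame-wise loudness proxy (RMS), spectral centroid, and spectral flux."""
    y, sr = librosa.load(path, sr=None, mono=True)
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]                 # loudness proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr,
                                                 hop_length=hop_length)[0]   # brightness
    flux = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop_length)   # flux-like measure
    n = min(len(rms), len(centroid), len(flux))
    return np.column_stack([rms[:n], centroid[:n], flux[:n]])

# Hypothetical usage: X stacks per-frame features for all excerpts, y holds
# the human ratings resampled to the same frame rate (both assumed to exist).
# X = np.vstack([feature_trajectories(p) for p in excerpt_paths])
# model = Ridge(alpha=1.0).fit(X, y)
# predicted = model.predict(feature_trajectories("new_excerpt.wav"))
```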

13.
In this study, we attempted to determine whether phonetic disintegration of speech in Broca's aphasia affects the spectral characteristics of speech sounds as has been shown for the temporal characteristics of speech. To this end, we investigated the production of place of articulation in Broca's aphasics. Acoustic analyses of the spectral characteristics of stop consonants were conducted. Results indicated that the static aspects of speech production were preserved, as Broca's aphasics seemed to be able to reach the articulatory configuration for the appropriate place of articulation. However, the dynamic aspects of speech production seemed to be impaired, as their productions reflected problems with the source characteristics of speech sounds and with the integration of articulatory movements in the vocal tract. Listener perceptions of the aphasics' productions were compared with acoustic analyses of these same productions. The two measures were related; that is, the spectral characteristics of the utterances provided salient cues for place of articulation perception. An analysis of the occurrences of errors along the dimensions of voicing and place showed that aphasics rarely produced utterances containing both voice and place substitutions.

14.
We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles: restrained, standard, and exaggerated intention. Participants used a 5-point Likert scale to rate each performance on 19 different emotional qualities. The data analysis revealed that variations in expressive intention had their greatest impact when the performances could be seen; the ratings from participants who could only hear the performances were the same across the three expressive styles. Evidence was also found for an interaction effect leading to an emergent property, intensity of positive emotion, when participants both heard and saw the musical performances. An exploratory factor analysis revealed orthogonal dimensions for positive and negative emotions, which may account for the subjective experience that many listeners report of having multi-valent or complex reactions to music, such as “bittersweet.”

15.
The purpose of this study was to test the hypothesis that not only do babies use emotional signals from adults in order to relate emotions to specific situations (e.g., Campos & Stenberg, 1981), but also that mothers seek out emotional information from their infants (Emde, 1992). Three groups of mothers and their infants, 3, 5, and 9 months old, were video- and audio-taped while playing in their homes with a soft toy and a remote-control Jack-in-the-box. During surprise-eliciting play with the Jack-in-the-box, maternal and infant gaze direction, as well as their expressions of surprise, pleasure, fear, and neutral affect, were coded in three regions of the face. In addition, the mean fundamental frequency of maternal surprise vocalisations was analysed. As a baseline, maternal exclamations of surprise were compared with similar utterances produced by these mothers while playing with the soft toy. During the surprise event, maternal and infant gaze directions as well as infant age were analysed in relation to maternal pitch. Results are discussed in terms of the mother's use of the pitch of her voice to mark surprising situations, depending on the gaze direction of the infant.

16.
The acknowledged paradox in our emotional response to fictional characters and events is that the very beliefs required by a cognitive account of the emotions are excluded by knowledge that the context is fictional. Various proposed solutions have failed to reconcile cognitivism with respect to the emotions with the facts of that response. Those offered include denying cognitivism by excluding the belief condition on the emotions, denying genuine emotional response in these contexts, or advocating either fictional realism or irrationalism in such response. By specifically examining pity and fear, this paper tries to reconcile genuine emotional response to fictional characters and events with a unified cognitive account of the emotions by arguing that, instead of excluding belief, the existential condition on the beliefs in an emotion can be lifted by the invitation to imagine. At the same time, it shows that the richness of that response need not be denied, and throws some light on further related paradoxes (for instance by indicating why not all emotions are rationally possible in fictive contexts, and that although we can pity fictions we cannot rationally fear them). Then, by explaining why, unlike in ordinary contexts, we do not act on our emotions in fictive ones, it differentiates the reasons for passivity in fictional and in historical circumstances.

17.
史汉文  李雨桐  隋雪 《心理科学进展》2022,30(12):2696-2707
Emotion-label words directly express emotional states, whereas emotion-laden words do not themselves express emotional states but can elicit emotional reactions in the individual. A review of research on emotion-label and emotion-laden words shows inconsistent findings regarding which type enjoys a processing advantage; task demands, language type, and lexical characteristics are the main sources of these inconsistent emotion-word-type effects; and the processing differences between emotion-label and emotion-laden words can be explained by the embodiment hypothesis of semantic representation and the density hypothesis. Future research should further examine the causes of the processing differences between emotion-label and emotion-laden words; investigate their processing differences at the sentence and discourse levels; provide theoretical accounts that directly explain the emotion-word-type effect; compare how Chinese-English bilinguals process Chinese and English emotion-label and emotion-laden words; and continue to probe the neural mechanisms of emotional and semantic information processing using neuroimaging techniques.

18.
Mimicry is a central plank of the emotional contagion theory; however, it was only tested with facial and postural emotional stimuli. This study explores the existence of mimicry in voice‐to‐voice communication by analyzing 8,747 sequences of emotional displays between customers and employees in a call‐center context. We listened live to 967 telephone interactions, registered the sequences of emotional displays, and analyzed them with a Markov chain. We also explored other propositions of emotional contagion theory that were yet to be tested in vocal contexts. Results supported that mimicry is significantly present at all levels. Our findings fill an important gap in the emotional contagion theory; have practical implications regarding voice‐to‐voice interactions; and open doors for future vocal mimicry research.
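As a concrete illustration of the kind of sequence analysis described above, the snippet below builds a first-order transition matrix over customer-to-employee display pairs and inspects its diagonal, where vocal mimicry would show up as elevated same-category transition probabilities. The three display categories and the variable `coded_pairs` are assumptions for the example, not the study's actual coding scheme.

```python
# Illustrative sketch, not the study's analysis code: estimate a first-order
# Markov transition matrix over customer -> employee emotional displays.
import numpy as np

STATES = ["positive", "neutral", "negative"]   # hypothetical display categories

def transition_matrix(pairs):
    """pairs: iterable of (customer_display, employee_display) label pairs."""
    idx = {s: i for i, s in enumerate(STATES)}
    counts = np.zeros((len(STATES), len(STATES)))
    for cust, emp in pairs:
        counts[idx[cust], idx[emp]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)  # row-normalize, avoid /0

# Hypothetical usage with coded display sequences (coded_pairs is assumed):
# P = transition_matrix(coded_pairs)
# Mimicry is suggested when np.diag(P) exceeds the employees' base rates.
```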

19.
The study investigates cross-modal simultaneous processing of emotional tone of voice and emotional facial expression by event-related potentials (ERPs), using a wide range of different emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated measures ANOVAs showed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition. Indeed, it showed a larger peak in response to congruous stimuli than to incongruous ones. It is suggested that P2 can be a cognitive marker of multisensory processing, independent of the emotional content.

20.
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease (PD), with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical ‘pseudo‐utterances’ were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo‐utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).
