Similar Articles
1.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion type and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral), in order to analyze differences in voice-based emotion perception between individuals from Chinese and Polish cultural backgrounds. The results showed that: (1) Chinese participants identified the emotion type more accurately and rated emotional intensity higher than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants identified emotion type more accurately, and rated emotional intensity higher, for female voices than for male voices; (3) in emotion-type judgments, fear was identified more accurately than happiness, sadness, and neutrality, with neutral utterances identified least accurately; (4) in intensity ratings, fear was rated as more intense than sadness, and happiness received the lowest intensity ratings.

2.
An expressive disturbance of speech prosody has long been associated with idiopathic Parkinson's disease (PD), but little is known about the impact of dysprosody on vocal-prosodic communication from the perspective of listeners. Recordings of healthy adults (n=12) and adults with mild to moderate PD (n=21) were elicited in four speech contexts in which prosody serves a primary function in linguistic or emotive communication (phonemic stress, contrastive stress, sentence mode, and emotional prosody). Twenty independent listeners naive to the disease status of individual speakers then judged the intended meanings conveyed by prosody for tokens recorded in each condition. Findings indicated that PD speakers were less successful at communicating stress distinctions, especially words produced with contrastive stress, which were less reliably identifiable to listeners. Listeners were also significantly less able to detect intended emotional qualities of Parkinsonian speech, especially for anger and disgust. Emotional expressions that were correctly recognized by listeners were consistently rated as less intense for the PD group. Utterances produced by PD speakers were frequently characterized as sounding sad or devoid of emotion entirely (neutral). Results argue that motor limitations on the vocal apparatus in PD produce serious and early negative repercussions on communication through prosody, which diminish the social-linguistic competence of Parkinsonian adults as judged by listeners.

3.
Emotions are often accompanied by vocalizations whose acoustic features provide information about the physiological state of the speaker. Here, we ask if perceiving these affective signals in one's own voice has an impact on one's own emotional state, and if it is necessary to identify these signals as self-originated for the emotional effect to occur. Participants had to deliberate out loud about how they would feel in various familiar emotional scenarios, while we covertly manipulated their voices in order to make them sound happy or sad. Perceiving the artificial affective signals in their own voice altered participants' judgements about how they would feel in these situations. Crucially, this effect disappeared when participants detected the vocal manipulation, either explicitly or implicitly. The original valence of the scenarios also modulated the vocal feedback effect. These results highlight the role of the exteroception of self-attributed affective signals in the emergence of emotional feelings.

4.
A key feature of psychopathy is the ability to deceive, manipulate, and con the unwary, while seeming to be perfectly sincere. Is this impression of sincerity achieved solely through body gestures and facial expression, or is there also something different about the voice quality of psychopaths? We analyzed the acoustic characteristics of speech in 20 male offenders (10 psychopaths and 10 nonpsychopaths), assessed with the Psychopathy Checklist—Revised (Hare, 1991). We used a computer program developed by Alpert, Merewether, Homel, Martz, and Lomask (1986) to measure variations in amplitude and prosody. Results indicated that psychopaths spoke more quietly than controls and did not differentiate, in voice emphasis, between neutral and affective words. These findings are consistent with the developing view that psychopaths are insensitive to the emotional connotations of language. In addition, their vocal characteristics may be part of a self-presentation mode designed to manipulate and control interpersonal interactions.

5.
Mimicry is a central plank of emotional contagion theory; however, it had previously been tested only with facial and postural emotional stimuli. This study explores the existence of mimicry in voice-to-voice communication by analyzing 8,747 sequences of emotional displays between customers and employees in a call-center context. We listened live to 967 telephone interactions, registered the sequences of emotional displays, and analyzed them with a Markov chain. We also explored other propositions of emotional contagion theory that had yet to be tested in vocal contexts. Results showed that mimicry was significantly present at all levels. Our findings fill an important gap in emotional contagion theory, have practical implications for voice-to-voice interactions, and open doors for future vocal mimicry research.
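The Markov-chain analysis described above can be sketched in a few lines: estimate first-order transition probabilities between successive emotional displays and check whether same-state transitions (mimicry) are inflated relative to the other cells. This is only a rough illustration; the state labels and toy sequences below are assumptions, not the study's actual coding scheme or data.

```python
from collections import Counter, defaultdict

# Hypothetical display categories; the study's real coding scheme is not reproduced here.
STATES = ["positive", "neutral", "negative"]

def transition_matrix(sequences):
    """Estimate first-order Markov transition probabilities from
    sequences of emotional display labels."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    matrix = {}
    for s in STATES:
        total = sum(counts[s].values())
        matrix[s] = {t: (counts[s][t] / total if total else 0.0) for t in STATES}
    return matrix

# Toy sequences of alternating customer/employee displays within a call.
calls = [
    ["positive", "positive", "neutral", "positive"],
    ["negative", "negative", "neutral", "positive"],
    ["neutral", "neutral", "positive", "positive"],
]

for state, row in transition_matrix(calls).items():
    print(state, {t: round(p, 2) for t, p in row.items()})
# Mimicry shows up as inflated diagonal probabilities, e.g. a high
# P(positive -> positive) relative to transitions into other states.
```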

6.
王异芳, 苏彦捷, 何曲枝. 《心理学报》 (Acta Psychologica Sinica), 2012, 44(11): 1472-1478
Starting from two cues in speech, prosody and semantics, this study explored the development of preschool children's emotion perception based on vocal cues. In Experiment 1, 124 children aged 3 to 5 judged the emotion type of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotions from prosodic cues improved with age between 3 and 5 years, mainly for anger, fear, and neutrality. The developmental trajectories differed across emotion types; overall, happy prosody was the easiest to identify and fearful prosody the hardest. When prosodic and semantic cues conflicted, preschool children relied more on prosody to judge the speaker's emotional state. Participants were more sensitive to emotions expressed by female voices.

7.
Research on the effects of expressive writing about emotional experiences and traumatic events has a long history in the affective and social sciences. However, very little is known about the incidence and impact of affective states when the writing activities are not explicitly emotional or are less emotionally charged. By integrating goal-appraisal and network theories of affect within cognitive process models of writing, we hypothesize that writing triggers a host of affective states, some of which are tied to the topic of the essays (topic affective states), while others are more closely related to the cognitive processes involved in writing (process affective states). We tested this hypothesis with two experiments involving fine-grained tracking of affect while participants wrote short essays on topics that varied in emotional intensity, ranging from topics used in standardized tests, to socially charged issues, to personal emotional experiences. The results indicated that (a) affective states collectively accounted for a majority of the observations compared to neutral states, (b) boredom, engagement/flow, anxiety, frustration, and happiness were the most frequent affective states, (c) there was evidence for a proposed, but not mutually exclusive, distinction between process and topic affective states, (d) certain topic affective states were predictive of the quality of the essays, irrespective of the valence of these states, and (e) individual differences in scholastic aptitude, writing apprehension, and exposure to print correlated with affect frequency in expected directions. Implications of our findings for research focused on monitoring affect during everyday writing activities are discussed.

8.
Three experiments revealed that music lessons promote sensitivity to emotions conveyed by speech prosody. After hearing semantically neutral utterances spoken with emotional (i.e., happy, sad, fearful, or angry) prosody, or tone sequences that mimicked the utterances' prosody, participants identified the emotion conveyed. In Experiment 1 (n=20), musically trained adults performed better than untrained adults. In Experiment 2 (n=56), musically trained adults outperformed untrained adults at identifying sadness, fear, or neutral emotion. In Experiment 3 (n=43), 6-year-olds were tested after being randomly assigned to 1 year of keyboard, vocal, drama, or no lessons. The keyboard group performed equivalently to the drama group and better than the no-lessons group at identifying anger or fear.

9.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

10.
This study investigated the hypothesis that facial expressions are more controllable and closer to one's awareness than vocal cues. Specifically, it was suggested that nonverbal displays are a function of three factors: (a) expressiveness, or the tendency to display spontaneous nonverbal cues; (b) controllability, or the ability to voluntarily suppress or exaggerate one's spontaneous display; and (c) demeanor, or the sender's tendency to convey a particular impression regardless of his/her experienced emotion. Subjects' facial and vocal reactions to affective stimuli were recorded in a spontaneous condition or under instructions to magnify or conceal reactions to these stimuli. It was shown that information conveyed by facial expressions was best accounted for by controllability, whereas information conveyed by tone of voice was best accounted for by expressiveness and demeanor.

11.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.
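As a rough illustration of the correlational approach described above, one can compute, for each acoustic parameter, its correlation with the mean rating of each emotional dimension across laughter samples. The parameter names and values below are invented placeholders (the study's 43 parameters are not listed in the abstract), so this is a sketch of the method rather than a reproduction of it.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical laughter samples: two stand-in acoustic measures and
# mean listener ratings on two of the four emotional dimensions.
samples = [
    {"f0_mean": 280, "duration": 1.2, "arousal": 6.1, "dominance": 3.2},
    {"f0_mean": 310, "duration": 0.9, "arousal": 7.0, "dominance": 2.8},
    {"f0_mean": 220, "duration": 1.8, "arousal": 4.2, "dominance": 4.5},
    {"f0_mean": 250, "duration": 1.5, "arousal": 5.0, "dominance": 4.0},
]

acoustic = ["f0_mean", "duration"]      # placeholders for the 43 parameters
dimensions = ["arousal", "dominance"]   # two of the four dimensions

for a in acoustic:
    for d in dimensions:
        r = pearson([s[a] for s in samples], [s[d] for s in samples])
        print(f"r({a}, {d}) = {r:.2f}")
```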

12.
Little is known about the underlying dimensions of impaired recognition of emotional prosody that is frequently observed in patients with Parkinson's disease (PD). Because patients with PD also suffer from working memory deficits and impaired time perception, the present study examined the contribution of (a) working memory (frontal executive functioning) and (b) processing of the acoustic parameter speech rate to the perception of emotional prosody in PD. Two acoustic parameters known to be important for emotional classifications (speech duration and pitch variability) were systematically varied in prosodic utterances. Twenty patients with PD and 16 healthy controls (matched for age, sex, and IQ) participated in the study. The findings imply that (1) working memory dysfunctions and perception of emotional prosody are not independent in PD, (2) PD and healthy control subjects perceived vocal emotions categorically along two acoustic manipulation continua, and (3) patients with PD show impairments in processing of speech rate information.

13.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements with each of five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were poorly identified from voice-only singing yet accurately identified in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent to the voice in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, as well as differences in perception and acoustic-motor production.

14.
Cognitive dysfunction in patients suffering from multiple sclerosis (MS) is well known and has been described for many years. Cognitive impairment, memory deficits, and attention deficits seem to be features of advanced MS stages, whereas depression and emotional instability already occur in early stages of the disease. However, little is known about the processing of affective prosody in patients in early stages of relapsing–remitting MS (RRMS). In this study, tests assessing attention, memory, and processing of affective prosody were administered to 25 adult patients with a diagnosis of early-stage RRMS and to 25 healthy controls (HC). Early stages of the disease were defined as having been diagnosed with RRMS within the last 2 years and having an Expanded Disability Status Scale (EDSS) score of 2 or lower. Patients and HC were comparable in intelligence quotient (IQ), educational level, age, handedness, and gender. Patients with early-stage RRMS performed below the control group on the subtests ‘discrimination of affective prosody’ and ‘matching of affective prosody to facial expression’ of the ‘Tübingen Affect Battery’ for the emotion ‘angry’. These deficits were not related to executive performance. Our findings suggest that emotional prosody comprehension is deficient in young patients with early-stage RRMS. Deficits in discriminating affective prosody early in the disease may make misunderstandings and poor communication more likely. This might negatively influence interpersonal relationships and quality of life in patients with RRMS.

15.
Accurately decoding emotional information in speech helps individuals adapt to the social environment. This ability is especially important for newborns and infants, because at birth the human auditory system is far more developed than the visual system. Although previous studies have shown that infants aged 5 to 7 months can discriminate vocal emotions of different categories, research on newborns remains scarce. Are humans able to discriminate different categories of emotional speech at birth? Is newborns' emotional processing biased toward positive or negative information? This study used an oddball paradigm to examine event-related potentials evoked by happy, fearful, and angry prosodic speech in the brains of newborns aged 1 to 6 days. Experiment 1 directly compared the three emotional conditions and found that the frontal region of the newborn brain (electrodes F3 and F4) distinguished the valence of emotional speech: positive (happy) speech evoked a significantly larger "mismatch response" than negative (angry and fearful) speech. Experiment 2 used an oddball paradigm with deviant and standard stimuli reversed, confirming that the results of Experiment 1 did not stem from differences in the physical properties of the three emotional voices. These results suggest that the newborn brain can automatically discriminate positive from negative emotional speech, but cannot yet distinguish the two negative emotions, anger and fear. More importantly, happy speech evoked a larger mismatch response than either negative emotion, providing the first neural (electrophysiological) evidence for a positivity bias in newborns' processing of emotional speech.
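For readers unfamiliar with the oddball logic used in this study, the mismatch response is typically quantified as the difference between the trial-averaged ERP to deviant stimuli and to standard stimuli, summarized as the mean amplitude of the difference wave in a time window of interest. The sketch below uses simulated data and arbitrary parameters; it is an illustrative assumption, not the study's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 300                            # fake epochs: 300 samples each
standard = rng.normal(0.0, 1.0, (n_trials, n_samples))    # simulated standard-stimulus epochs
deviant = rng.normal(0.0, 1.0, (n_trials, n_samples))     # simulated deviant-stimulus epochs
deviant[:, 150:200] += 1.5                                 # inject a fake mismatch deflection

erp_standard = standard.mean(axis=0)    # trial-averaged ERP to standards
erp_deviant = deviant.mean(axis=0)      # trial-averaged ERP to deviants
difference_wave = erp_deviant - erp_standard

window = slice(150, 200)                # analysis window (in samples)
mmr_amplitude = difference_wave[window].mean()
print(f"mean mismatch-response amplitude in window: {mmr_amplitude:.2f}")
# Comparing this amplitude across emotion conditions (e.g., happy vs. angry
# deviants) is the kind of contrast the study's valence comparison relies on.
```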

16.
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody–semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

17.
This study focused on how emotional expressions are conveyed through visual and vocal behaviors. The roles of proportion of gaze, glance duration, and vocal loudness in expressing emotional positivity and intensity were examined. Emotional positivity, emotional intensity, and target of the communication were manipulated in a mixed design. Forty-eight female subjects delivered a liking or an anger message, with strong or weak intensity, to a man and to a camera. Videotaped responses were analyzed. Strong emotional intensity conditions evoked more direct gaze regardless of the message positivity or the target of the emotional expression. Longer glances and louder speech were associated only with intense negative emotional expression, regardless of the target of the expression. The proportion-of-gaze data support the view that eye contact serves as an intensifier of affective expression. Methodological considerations and questions about generalizability are discussed.
