Similar Literature
20 similar articles found (search time: 15 ms)
1.
Human beings seem to be able to recognize emotions from speech very well and information communication technology aims to implement machines and agents that can do the same. However, to be able to automatically recognize affective states from speech signals, it is necessary to solve two main technological problems. The former concerns the identification of effective and efficient processing algorithms capable of capturing emotional acoustic features from speech sentences. The latter focuses on finding computational models able to classify, with an approximation as good as human listeners, a given set of emotional states. This paper will survey these topics and provide some insights for a holistic approach to the automatic analysis, recognition and synthesis of affective states.
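The two problems outlined in this abstract (extracting emotion-bearing acoustic features, then classifying affective states) map onto a standard machine-learning pipeline. The sketch below is illustrative only and is not the surveyed authors' method: it assumes the librosa and scikit-learn libraries, and the feature set, classifier choice, and synthetic training data are hypothetical stand-ins.

```python
# Minimal sketch of a speech emotion recognition pipeline (illustrative, not the
# surveyed authors' method): (1) extract acoustic features, (2) train a classifier.
import numpy as np
import librosa                                    # assumed available for audio analysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report


def extract_features(wav_path, sr=16000):
    """Summarize one utterance with simple spectral and prosodic statistics."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral envelope
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    rms = librosa.feature.rms(y=y)                       # energy contour
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),             # 26 spectral statistics
        [np.nanmean(f0), np.nanstd(f0)],                 # 2 pitch statistics
        [rms.mean(), rms.std()],                         # 2 energy statistics
    ])                                                   # 30-dimensional feature vector


# In practice X would be built by running extract_features over a labeled corpus of
# emotional speech; synthetic features stand in here so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.choice(["happy", "angry", "sad", "neutral"], size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)            # any classifier could be used here
print(classification_report(y_test, clf.predict(X_test)))
```

In a real application the split would be speaker-independent, since speaker identity is a strong confound for acoustic emotion cues.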

2.
Shared acoustic cues in speech, music, and nonverbal emotional expressions were postulated to code for emotion quality and intensity, favoring the hypothesis of a prehuman origin of affective prosody in human emotional communication. To explore this hypothesis, we examined in playback experiments using a habituation-dishabituation paradigm whether a solitary foraging, highly vocal mammal, the tree shrew, is able to discriminate two behaviorally defined states of affect intensity (low vs. high) from the voice of conspecifics. Playback experiments with communication calls of two different types (chatter call and scream call) given in the state of low affect intensity revealed that habituated tree shrews dishabituated to one call type (the chatter call) and showed a tendency to do so for the other one (the scream call), both given in the state of high affect intensity. Findings suggest that listeners perceive the acoustic variation linked to defined states of affect intensity as different within the same call type. Our findings in tree shrews provide the first evidence that acoustically conveyed affect intensity is biologically relevant without any other sensory cue, even for solitary foragers. Thus, the perception of affect intensity in voice conveyed in stressful contexts represents a shared trait of mammals, independent of the complexity of social systems. Findings support the hypothesis that affective prosody in human emotional communication has deep-reaching phylogenetic roots, deriving from precursors already present and relevant in the vocal communication system of early mammals.

3.
The ability to recognize others’ facial expressions is critical to the social communication of affective states. The present work examined how transient states of high physiological arousal during aerobic exercise influence recognizing and rating morphed facial expressions. Participants exercised at either a low or high work rate. While exercising and then during cool-down and rest periods, participants performed a version of the morphed faces task that involved animated faces changing into or away from five target affective states (happy, surprise, sadness, anger, and disgust); they were asked to stop the animation when the face first corresponded to a target state, and rate its emotional intensity. Results demonstrated no differences in animation stop data, but overall lower ratings of perceived emotion intensity during high versus low work rate exercise; these effects dissipated through cool-down and rest periods. Results highlight important interactions between physiological states and processing emotional information.

4.
Emotional cues contain important information about the intentions and feelings of others. Despite a wealth of research into children's understanding of facial signals of emotions, little research has investigated the developmental trajectory of interpreting affective cues in the voice. In this study, 48 children ranging in age from 5 to 10 years were tested using forced-choice tasks with non-verbal vocalizations and emotionally inflected speech expressing different positive, neutral and negative states. Children as young as 5 years were proficient in interpreting a range of emotional cues from vocal signals. Consistent with previous work, performance was found to improve with age. Furthermore, the two tasks, examining recognition of non-verbal vocalizations and emotionally inflected speech, respectively, were sensitive to individual differences, with high correspondence of performance across the tasks. From this demonstration of children's ability to recognize emotions from vocal stimuli, we also conclude that this auditory emotion recognition task is suitable for a wide age range of children, providing a novel, empirical way to investigate children's affect recognition skills.

5.
The capacity to interpret other people's behavior and mental states is a vital part of human social communication. This ability, also called mentalizing or Theory of Mind (ToM), may also serve as a protective factor against aggression and antisocial behavior. This study investigates the relationship between two measures of psychopathy (clinical assessment and self-report) and the ability to identify mental states from photographs of the eye region. The participants in the study were 92 male inmates at Bergen prison, Norway. The results showed some discrepancy depending on assessment method. For the self-report (SRP-III), we found an overall negative association between mental state discrimination and psychopathy, while for the clinical instrument (PCL-R) the results were more mixed. For Factor 1 psychopathic traits (interpersonal and affective), we found positive associations with discrimination of neutral mental states, but not with the positive or negative mental states. Factor 2 traits (antisocial lifestyle) were found to be negatively associated with discrimination of mental states. The results from this study demonstrate a heterogeneity in the psychopathic construct whereby psychopathic traits related to an antisocial and impulsive lifestyle are associated with lower ability to recognize others' mental states, while interpersonal and affective psychopathic traits are associated with a somewhat enhanced ability to recognize others' emotional states.

6.
The stereotypic portrayal of women as more emotional than men was evaluated in the present study. Equal numbers of female and male subjects were administered an interview consisting of questions designed to elicit two different levels of affect. Measures of subjects' facial expressions, speech, and visual behavior were analyzed for indications of emotionality, the affective state elicited in the interview, and emotional expression, a trait which is independent of question content. As hypothesized, it was found that women were more expressive of emotion, as reflected in their higher level of facial activity. However, measures of speech and visual behavior, which reflected the expected difference in the affective states elicited by the different types of interview questions, did not differentiate between men and women. Finally, ratings of the quality of the subjects' facial expressions provided some evidence of sex differences in reactions to the questions. It was concluded that a reconsideration of the question of sex differences in emotionality is needed. Previous generalizations based on indirect experimental data and on potentially unreliable subjective reports must be challenged by more direct and dependable investigations. The author is grateful to Galen Baril for his assistance in the collection of the data and to Max Zachau for coding the data. This research was supported by a Faculty Research Grant from the University of Maine at Orono.

7.
Data from therapists who were treating 26 patients when they committed suicide were utilized to identify signs that warned of a suicide crisis. Three factors were identified as markers of the suicide crisis: a precipitating event; one or more intense affective states other than depression; and at least one of three behavioral patterns: speech or actions suggesting suicide, deterioration in social or occupational functioning, and increased substance abuse. Problems in communication between patient and therapist were identified as factors interfering with crisis recognition. Evaluation of the identified affects and behaviors may help therapists recognize a suicide crisis.

8.
Adult humans recognize that even unfamiliar speech can communicate information between third parties, demonstrating an ability to separate communicative function from linguistic content. We examined whether 12-month-old infants understand that speech can communicate before they understand the meanings of specific words. Specifically, we test the understanding that speech permits the transfer of information about a Communicator's target object to a Recipient. Initially, the Communicator selectively grasped one of two objects. In test, the Communicator could no longer reach the objects. She then turned to the Recipient and produced speech (a nonsense word) or non-speech (coughing). Infants looked longer when the Recipient selected the non-target than the target object when the Communicator had produced speech but not coughing (Experiment 1). Looking time patterns differed from the speech condition when the Recipient rather than the Communicator produced the speech (Experiment 2), and when the Communicator produced a positive emotional vocalization (Experiment 3), but did not differ when the Recipient had previously received information about the target by watching the Communicator's selective grasping (Experiment 4). Thus infants understand the information-transferring properties of speech and recognize some of the conditions under which others' information states can be updated. These results suggest that infants possess an abstract understanding of the communicative function of speech, providing an important potential mechanism for language and knowledge acquisition.

9.
Many studies have shown that infants prefer infant-directed (ID) speech to adult-directed (AD) speech. ID speech functions to aid language learning, obtain and/or maintain an infant's attention, and create emotional communication between the infant and caregiver. We examined psychophysiological responses to ID speech that varied in affective content (i.e., love/comfort, surprise, fear) in a group of typically developing 9-month-old infants. Regional EEG and heart rate were collected continuously during stimulus presentation. We found that overall frontal EEG power was linearly related to the affective intensity of the ID speech, such that EEG power was greatest in response to fear, followed by surprise, and then love/comfort; this linear pattern was specific to the frontal region. We also noted that heart rate decelerated to ID speech independent of affective content. As well, infants who were reported by their mothers as temperamentally distressed tended to exhibit greater relative right frontal EEG activity during baseline and in response to affective ID speech, consistent with previous work with visual stimuli and extending it to the auditory modality. Findings are discussed in terms of how increases in frontal EEG power in response to different affective intensity may reflect the cognitive aspects of emotional processing across sensory domains in infancy.
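For readers unfamiliar with the dependent measure, the sketch below shows one common way to quantify band-limited frontal EEG power and a left/right asymmetry score of the kind reported here. It is a generic illustration, not this study's analysis pipeline: the channel names, sampling rate, frequency band, asymmetry index, and synthetic signals are all assumptions.

```python
# Generic sketch of frontal EEG band power and asymmetry (not this study's pipeline).
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid


def band_power(signal, fs, band=(6.0, 9.0)):
    """Absolute power in a frequency band, via Welch's power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)    # 0.5 Hz frequency resolution
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[in_band], freqs[in_band])


# Synthetic stand-ins for a left (F3) and right (F4) frontal channel: 30 s at 250 Hz.
fs = 250
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
f3 = 1.0 * np.sin(2 * np.pi * 8 * t) + rng.normal(scale=0.5, size=t.size)
f4 = 0.7 * np.sin(2 * np.pi * 8 * t) + rng.normal(scale=0.5, size=t.size)

power_left, power_right = band_power(f3, fs), band_power(f4, fs)
# A common asymmetry index is ln(right) - ln(left) band power; in the frontal
# asymmetry literature, lower band power is conventionally read as greater activity.
asymmetry = np.log(power_right) - np.log(power_left)
print(power_left, power_right, asymmetry)
```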

10.
Glucocorticoids have a key role in stress responses. There are, however, substantial differences in cortisol reactivity among individuals. We investigated if affective trait and mood induction influence the reactivity to psychological stress in a group of 63 young adults, male (n=27) and female (n=36), aged ca. 21 years. On the experimental day the participants viewed either a block of pleasant or unpleasant pictures for 5 min to induce positive or negative mood, respectively. Then, they had 5 min to prepare a speech to be delivered in front of a video-camera. Saliva samples were collected to measure cortisol, and questionnaire-based affective scales were used to estimate emotional states and traits. Compared to basal levels, a cortisol response to the acute speech stressor was only seen for those who had first viewed unpleasant pictures and scored above the average on the negative affect scale. There were no sex differences. In conclusion, high negative affect associated with exposure to an unpleasant context increased sensitivity to an acute stressor, and was critical to stimulation of cortisol release by the speech stressor.

11.
In recent years, how affective agents influence learning has received considerable attention from researchers. Affective agents are pedagogical agents that can shape learners' emotional experience through facial expressions, voice, body movements, and verbal messages. Previous research has focused on two types of affective agents: expressive and empathetic. An expressive affective agent influences learners' emotional experience solely through its own emotional display (e.g., a smiling face and an enthusiastic voice). An empathetic affective agent, in contrast, gives emotional feedback (e.g., nodding, encouragement, and empathy) contingent on the learner's performance or emotional state, with the aim of regulating the learner's emotions and motivating continued effort. Although researchers operationalize affective agents in different ways, both types are intended to increase learners' positive emotions, raise intrinsic motivation, and ultimately promote learning. Researchers have explained the potential effects of affective agents from different theoretical perspectives. Emotional contagion theory holds that one person's emotional state is readily influenced by another person's emotional expression, so the emotions of an on-screen agent directly affect learners' emotions and motivation. Emotional response theory holds that when a teacher's verbal and nonverbal cues elicit positive emotions, learners show approach behaviors toward learning (e.g., making study plans). The cognitive-affective theory of learning with media emphasizes the importance of emotion and motivation during learning; on this view, affective agents can arouse learners' positive emotions, increase learning motivation, and thereby improve achievement. Cognitive load theory and interference accounts, however, suggest that an agent's rich facial expressions and gestures may add extraneous cognitive load and draw attention away from key information, thereby impairing learning. Guided by these theories, researchers have tested the effects of affective agents and found that they reliably evoke learners' positive emotions (d = 0.45) and enhance intrinsic motivation (d = 0.52), but do not necessarily affect cognitive load (intrinsic load: d = -0.01; extraneous load: d = 0.09; germane load: d = 0.08), and their effects on learning outcomes are weak (retention: d = 0.18; comprehension: d = 0.32; transfer: d = 0.14; combined: d = 0.32). The instability of effects on learning outcomes may reflect potential moderators such as learner characteristics (e.g., working memory capacity and grade level), the type of affective agent, task characteristics, and the timing of the test. In sum, although current findings on affective agents are inconsistent, learners are on the whole happier and more motivated with a positive affective agent, so in educational practice instructional designers may consider presenting learners with a positive pedagogical agent to help them learn more happily. Future research should continue to refine how affective agents are manipulated and assessed, explore the boundary conditions of their effects, examine the neural mechanisms underlying their influence on learning, and improve the ecological validity of affective agent research.
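The effect sizes in this abstract are standardized mean differences. The abstract does not state the exact estimator used; the conventional Cohen's d for comparing an affective agent condition with a neutral agent condition is shown below as an assumed reference point.

```latex
% Conventional Cohen's d (assumed form; the meta-analytic estimator above is not stated)
d = \frac{\bar{X}_{\text{affective}} - \bar{X}_{\text{neutral}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}}
```

By Cohen's usual benchmarks, the reported motivational effect (d = 0.52) is moderate, while the transfer effect (d = 0.14) is small.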

12.
Research on the effects of expressive writing about emotional experiences and traumatic events has a long history in the affective and social sciences. However, very little is known about the incidence and impact of affective states when the writing activities are not explicitly emotional or are less emotionally charged. By integrating goal-appraisal and network theories of affect within cognitive process models of writing, we hypothesize that writing triggers a host of affective states, some of which are tied to the topic of the essays (topic affective states), while others are more closely related to the cognitive processes involved in writing (process affective states). We tested this hypothesis with two experiments involving fine-grained tracking of affect while participants wrote short essays on topics that varied in emotional intensity, ranging from topics used in standardized tests, to socially charged issues, to personal emotional experiences. The results indicated that (a) affect collectively accounted for a majority of the observations compared to neutral, (b) boredom, engagement/flow, anxiety, frustration, and happiness were the most frequent affective states, (c) there was evidence for a proposed, but not mutually exclusive, distinction between process and topic affective states, (d) certain topic affective states were predictive of the quality of the essays, irrespective of the valence of these states, and (e) individual differences in scholastic aptitude, writing apprehension, and exposure to print correlated with affect frequency in expected directions. Implications of our findings for research focused on monitoring affect during everyday writing activities are discussed.

13.
郑志伟  黄贤军  张钦 《心理学报》2013,45(4):427-437
Using a prosody/lexical interference paradigm and a delayed-matching task, two ERP experiments examined whether, and how, emotional prosody modulates the recognition of emotional words in spoken Mandarin. In Experiment 1, the different types of emotional prosody were presented in separate blocks; the ERP results showed that, compared with emotional words whose valence was congruent with the prosody, valence-incongruent emotional words elicited more negative-going P200, N300, and N400 components. In Experiment 2, the different types of emotional prosody were presented in random order, and the same valence-congruency effect was observed. The results indicate that emotional prosody can modulate the recognition of emotional words, chiefly through a dual facilitation of the phonological encoding and the semantic processing of those words.

14.
SPEECH EVENTS, LANGUAGE DEVELOPMENT, AND THE CLINICAL SITUATION
Psychoanalysis brings about psychic change by the mediation of speech. This paper reflects upon the significance of the structure and developmental organisation of the speech event as a verbal and non-verbal unit composed of semantically and prosodically encoded messages, interactions and emotional contact between partners. Spoken words communicate semantic meanings and the affects of a given speech event. Words carry personal emotional meanings which are inseparable from their referential significance. Such emotional meanings are very hard to articulate in words. They are conveyed by the ineffable but essential feelings present in their sound and pronunciation. Speech is an intentionally object-related and emotionally engaging social activity resulting from a child having been spoken to early in life by an adult wanting to establish affective verbal contact. The early organisation and later transformation of the structure of the speech event carry private meanings for each person's listening and speaking stance. A refined understanding of the structural and emotional complexities of verbal communicative exchanges during analysis may enhance the analyst's ability to understand the patient's manner of participation in the analytic process.

15.
Narratives are not only about events, but also about the emotions those events elicit. Understanding a narrative involves not just the affective valence of implied emotional states, but the formation of an explicit mental representation of those states. In turn, this representation provides a mechanism that particularizes emotion and modulates its display, which then allows emotional expression to be modified according to particular contexts. This includes understanding that a character may feel an emotion but inhibit its display or even express a deceptive emotion. We studied how 59 school-aged children with head injury and 87 normally-developing age-matched controls understand real and deceptive emotions in brief narratives. Children with head injury showed less sensitivity than controls to how emotions are expressed in narratives. While they understood the real emotions in the text, and could recall what provoked the emotion and the reason for concealing it, they were less able than controls to identify deceptive emotions. Within the head injury group, factors such as an earlier age at head injury and frontal lobe contusions were associated with poor understanding of deceptive emotions. The results are discussed in terms of the distinction between emotions as felt and emotions as a cognitive framework for understanding other people's actions and mental states. We conclude that children with head injury understand emotional communication, the spontaneous externalization of real affect, but not emotive communication, the conscious, strategic modification of affective signals to influence others through deceptive facial expressions.

16.
Affective individual differences and startle reflex modulation
Potentiation of startle has been demonstrated in experimentally produced aversive emotional states, and clinical reports suggest that potentiated startle may be associated with fear or anxiety. To test the generalizability of startle potentiation across a variety of emotional states as well as its sensitivity to individual differences in fearfulness, the acoustic startle response of 17 high- and 15 low-fear adult subjects was assessed during fear, anger, joy, sadness, pleasant relaxation, and neutral imagery. Startle responses were larger in all aversive affective states than during pleasant imagery. This effect was enhanced among high-fear subjects, although follow-up testing indicated that other affective individual differences (depression and anger) may also be related to increased potentiation of startle in negative affect. Startle latency was reduced during high- rather than low-arousal imagery but was unaffected by emotional valence.

17.
Singh L 《Cognition》2008,106(2):833-870
Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural speech is replete with variability, only some of which determines the meaning of a word, it remains unclear how infants might ever overcome the effects of surface variability without appealing to meaning. In the current set of experiments, consequences of high and low variability are examined in preverbal infants. The source of variability, vocal affect, is a common property of infant-directed speech with which young learners have to contend. Across a series of four experiments, infants' abilities to recognize repeated encounters of words, as well as to reject similar-sounding words, are investigated in the context of high and low affective variation. Results point to positive consequences of affective variation, both in creating generalizable memory representations for words and in establishing phonologically precise memories for words. Conversely, low variability appears to degrade word recognition on both fronts, compromising infants' abilities to generalize across different affective forms of a word and to detect similar-sounding items. Findings are discussed in the context of principles of categorization that may potentiate the early growth of a lexicon.

18.
In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.

19.
Trichotillomania (TTM), a repetitive hair-pulling disorder, is underrepresented in the clinical literature. The current project explores the relationship between affective regulation and disordered hair-pulling. Previous research suggests that cycles of emotional states are correlated with the disorder and may induce, reinforce, or otherwise contribute to hair-pulling behavior. We use anonymous internet survey responses from 1162 self-identified hair-pullers to address four questions about affective regulation in people with TTM: (1) Do hair-pullers experience greater difficulty “snapping out” of affective states than non-pullers? (2) Does difficulty with emotional control correlate with TTM severity? (3) Are subtypes identifiable based on the emotions that trigger hair-pulling behavior? (4) Does difficulty “snapping out” of an emotion predict whether that emotion triggers pulling behavior? The results showed a small-to-moderate relationship between affective regulation and problematic hair-pulling. In addition, individual patterns of emotion regulation were systematically related to emotional cues for hair-pulling as well as overall hair-pulling severity. These findings contribute to an understanding of the phenomenology of TTM and provide empirical support for treatments focused on affect regulation.

20.
"Mood contagion": the automatic transfer of mood between persons   总被引:1,自引:0,他引:1  
The current studies aimed to find out whether a nonintentional form of mood contagion exists and which mechanisms can account for it. In these experiments participants who expected to be tested for text comprehension listened to an affectively neutral speech that was spoken in a slightly sad or happy voice. The authors found that (a) the emotional expression induced a congruent mood state in the listeners, (b) inferential accounts of emotional sharing were not easily reconciled with the findings, (c) different affective experiences emerged from intentional and nonintentional forms of emotional sharing, and (d) a perception-behavior link (T. L. Chartrand & J. A. Bargh, 1999) can account for these findings, because participants who were required to repeat the philosophical speech spontaneously imitated the target person's vocal expression of emotion.
