Similar Articles
20 similar articles retrieved.
1.
Emotional cues contain important information about the intentions and feelings of others. Despite a wealth of research into children's understanding of facial signals of emotions, little research has investigated the developmental trajectory of interpreting affective cues in the voice. In this study, 48 children ranging between 5 and 10 years were tested using forced‐choice tasks with non‐verbal vocalizations and emotionally inflected speech expressing different positive, neutral and negative states. Children as young as 5 years were proficient in interpreting a range of emotional cues from vocal signals. Consistent with previous work, performance was found to improve with age. Furthermore, the two tasks, examining recognition of non‐verbal vocalizations and emotionally inflected speech, respectively, were sensitive to individual differences, with high correspondence of performance across the tasks. From this demonstration of children's ability to recognize emotions from vocal stimuli, we also conclude that this auditory emotion recognition task is suitable for a wide age range of children, providing a novel, empirical way to investigate children's affect recognition skills.

2.
王异芳  苏彦捷  何曲枝 《心理学报》2012,44(11):1472-1478
Starting from two cues in speech, prosody and semantics, this study explored the developmental characteristics of preschool children's emotion perception based on vocal cues. In Experiment 1, 124 children aged 3 to 5 judged the emotion category of semantically neutral sentences spoken by male and female speakers in five different emotional tones (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotion from vocal prosody improved steadily with age between 3 and 5 years, mainly for the angry, fearful, and neutral tones. The developmental trajectories differed across emotion categories: overall, happy prosody was the easiest to recognize, whereas fearful prosody was the hardest. When prosodic and semantic cues conflicted, preschool children relied more on prosody to judge the speaker's emotional state. Participants were more sensitive to emotions expressed in female voices.

3.
Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions, but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.

4.
Vocal Expression and Perception of Emotion
Speech is an acoustically rich signal that provides considerable personal information about talkers. The expression of emotions in speech sounds and corresponding abilities to perceive such emotions are both fundamental aspects of human communication. Findings from studies seeking to characterize the acoustic properties of emotional speech indicate that speech acoustics provide an external cue to the level of nonspecific arousal associated with emotional processes and, to a lesser extent, to the relative pleasantness of experienced emotions. Outcomes from perceptual tests show that listeners are able to accurately judge emotions from speech at rates far greater than expected by chance. More detailed characterizations of these production and perception aspects of vocal communication will necessarily involve knowledge about differences among talkers, such as those components of speech that provide comparatively stable cues to individual talkers' identities.

5.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.

6.
Vocal expressions of emotions taken from a recorded version of a play were content-masked by using electronic filtering, randomized splicing, and a combination of both techniques, in addition to a no-treatment condition, in a 2×2 design. Untrained listener-judges rated the voice samples in the four conditions on 20 semantic differential scales. Irrespective of the severe reduction in the number and types of vocal cues in the masking conditions, the mean ratings of the judges in all four groups agreed, at a level significantly beyond chance expectations, on the differential position of the emotional expressions in a multidimensional space of emotional meaning. The results suggest that a minimal set of vocal cues consisting of pitch level and variation, amplitude level and variation, and rate of articulation may be sufficient to communicate the evaluation, potency, and activity dimensions of emotional meaning. Each of these dimensions may be associated with a specific pattern of vocal cues or cue combinations. No differential effects of the type of content-masking for specific emotions were found. Systematic effects of the masking techniques consisted in a lowering of the perceived activity level of the emotions in the case of electronic filtering, and more positive ratings on the evaluative dimension in the case of randomized splicing. Electronic filtering tended to decrease, and randomized splicing tended to increase, inter-rater reliability. This research was supported by a research grant (GS-2654) from the Division of Social Sciences of the National Science Foundation to Robert Rosenthal.
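As a rough digital analogue of the two masking techniques named above, the sketch below low-pass filters a recording (removing intelligibility-carrying spectral detail while keeping pitch and loudness cues) and randomly splices it (destroying word order while keeping voice quality). The cutoff frequency, segment length, and file names are illustrative assumptions, not parameters from the original study, which used analog equipment.

```python
# Minimal sketch of digital content masking: low-pass ("electronic") filtering
# and randomized splicing. Assumes a mono waveform; all parameter values are
# illustrative, not taken from the original study.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

def lowpass_mask(signal, sr, cutoff_hz=400.0):
    """Remove spectral detail above the cutoff, leaving mainly pitch and intensity cues."""
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    return sosfiltfilt(sos, signal)

def random_splice_mask(signal, sr, segment_ms=250, seed=0):
    """Cut the waveform into short segments and reassemble them in random order,
    destroying intelligibility while preserving overall voice quality."""
    seg_len = int(sr * segment_ms / 1000)
    segments = [signal[i:i + seg_len] for i in range(0, len(signal), seg_len)]
    rng = np.random.default_rng(seed)
    rng.shuffle(segments)
    return np.concatenate(segments)

if __name__ == "__main__":
    y, sr = sf.read("emotion_sample.wav")   # hypothetical mono recording
    sf.write("filtered.wav", lowpass_mask(y, sr), sr)
    sf.write("spliced.wav", random_splice_mask(y, sr), sr)
    sf.write("both.wav", random_splice_mask(lowpass_mask(y, sr), sr), sr)
```

Applying both maskings to the same sample corresponds to the fourth cell of the 2×2 design described in the abstract.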

7.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.
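The correlational strategy described here translates directly into a few lines of analysis code. The sketch below assumes a table with one row per laughter bout containing acoustic measures and ratings on the four dimensions; the feature names and file name are placeholders (only four example features are shown, not the 43 parameters), and no multiple-comparison correction is applied.

```python
# Minimal sketch: correlate acoustic parameters of laughter bouts with ratings
# on four emotional dimensions. Column names and the input file are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("laughter_features.csv")   # one row per laughter bout (hypothetical)
acoustic_cols = ["f0_mean", "f0_range", "intensity_mean", "duration"]
dimension_cols = ["arousal", "dominance", "sender_valence", "receiver_valence"]

for dim in dimension_cols:
    for feat in acoustic_cols:
        r, p = pearsonr(df[feat], df[dim])
        if p < 0.05:                        # a full 43-parameter analysis would need correction
            print(f"{dim:>16} ~ {feat:<15} r = {r:+.2f} (p = {p:.3f})")
```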

8.
Affective computing research has advanced emotion recognition systems using facial expressions, voices, gaits, and physiological signals, yet these methods are often impractical. This study integrates mouse cursor motion analysis into affective computing and investigates the idea that movements of the computer cursor can provide information about the emotions of the computer user. We extracted 16–26 trajectory features during a choice‐reaching task and examined the link between emotion and cursor motions. Participants were induced into positive or negative emotional states by music, film clips, or emotional pictures, and they indicated their emotions with questionnaires. Our 10‐fold cross‐validation analysis shows that statistical models formed from “known” participants (training data) could predict nearly 10%–20% of the variance of positive affect and attentiveness ratings of “unknown” participants, suggesting that cursor movement patterns such as the area under the curve and direction changes help infer the emotions of computer users.
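To make the analysis pipeline concrete, the sketch below computes two of the trajectory features named in the abstract (area under the curve and number of direction changes) from synthetic cursor paths and evaluates a regression model with 10-fold cross-validation. The synthetic data, the two-feature set, and the choice of ridge regression are assumptions for illustration; the study used 16–26 features and its own modelling details.

```python
# Minimal sketch: trajectory features from choice-reaching cursor paths predicting
# affect ratings, scored with 10-fold cross-validation. Data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

def trajectory_features(xs, ys):
    """Two simple features of one cursor trajectory (pixel coordinates)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    straight = np.linspace(ys[0], ys[-1], len(ys))            # ideal straight path
    auc = float(np.sum(np.abs(ys - straight)))                # discrete area under the curve
    direction_changes = int(np.sum(np.diff(np.sign(np.diff(xs))) != 0))
    return [auc, direction_changes]

rng = np.random.default_rng(0)
X = np.array([trajectory_features(np.cumsum(rng.normal(1, 1, 50)),   # made-up trajectories
                                  rng.normal(0, 5, 50)) for _ in range(100)])
y = 0.01 * X[:, 0] + rng.normal(0, 1, 100)                    # made-up positive-affect ratings

scores = cross_val_score(Ridge(), X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0),
                         scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```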

9.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion category and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral), in order to analyze differences in voice-based emotion perception between individuals from Chinese and Polish cultural backgrounds. The results showed that (1) Chinese participants were more accurate in judging the emotion category of the voices and gave higher emotion-intensity ratings than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants identified emotion categories more accurately, and rated emotion intensity higher, for female voice materials than for male voice materials; (3) in emotion-category judgments, fear was recognized more accurately than happiness, sadness, and the neutral tone, with the neutral tone recognized least accurately; and (4) in intensity ratings, fear was rated as more intense than sadness, and happiness received the lowest intensity ratings.

10.
Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.

11.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.

12.
Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound, and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction, from vocal utterances to musical sounds, not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds in which emotion processing of vocal sounds encompasses musical sounds but not vice versa.

13.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.
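The abstract notes that recognition rates were controlled for response biases. One widely used correction in this literature is Wagner's (1993) unbiased hit rate, which discounts hits for an emotion by how often its response label is overused; whether this exact index was applied in the study is an assumption of the sketch below.

```python
# Sketch of Wagner's (1993) unbiased hit rate: squared correct identifications of an
# emotion divided by (number of stimuli of that emotion x number of uses of that label).
# The example confusion matrix is made up.
import numpy as np

def unbiased_hit_rates(confusion):
    """confusion[i, j] = times stimulus emotion i received response label j."""
    confusion = np.asarray(confusion, float)
    stimulus_totals = confusion.sum(axis=1)   # presentations per emotion
    response_totals = confusion.sum(axis=0)   # uses of each response label
    correct = np.diag(confusion)
    return correct ** 2 / (stimulus_totals * response_totals)

# Rows/columns: anger, fear, happiness, sadness (hypothetical counts).
conf = [[18, 4, 1, 2],
        [5, 15, 2, 3],
        [1, 2, 20, 2],
        [2, 4, 1, 18]]
print(np.round(unbiased_hit_rates(conf), 2))
```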

14.
This research examines the correspondence between theoretical predictions on vocal expression patterns in naturally occurring emotions (as based on the component process theory of emotion; Scherer, 1986) and empirical data on the acoustic characteristics of actors' portrayals. Two male and two female professional radio actors portrayed anger, sadness, joy, fear, and disgust based on realistic scenarios of emotion-eliciting events. A series of judgment studies was conducted to assess the degree to which judges are able to recognize the intended emotion expressions. Disgust was relatively poorly recognized; average recognition accuracy for the other emotions attained 62.8% across studies. A set of portrayals reaching a satisfactory level of recognition accuracy underwent digital acoustic analysis. The results for the acoustic parameters extracted from the speech signal show a number of significant differences between emotions, generally confirming the theoretical predictions. This research was supported by a grant from the Deutsche Forschungsgemeinschaft (Sche 156/8-5). The authors acknowledge the collaboration of Westdeutscher Rundfunk, Cologne, in producing professional versions of the actor emotion portrayals, and thank Kurt Balser, Theodor Gehm, Christoph Gierschner, Judy Hall, Ursula Hess, Alice Isen, and one anonymous reviewer for helpful comments and contributions.

15.
Emotion in Speech: The Acoustic Attributes of Fear, Anger, Sadness, and Joy
Decoders can detect emotion in voice with much greater accuracy than can be achieved by objective acoustic analysis. Studies that have established this advantage, however, used methods that may have favored decoders and disadvantaged acoustic analysis. In this study, we applied several methodologic modifications for the analysis of the acoustic differentiation of fear, anger, sadness, and joy. Thirty-one female subjects between the ages of 18 and 35 (encoders) were audio-recorded during an emotion-induction procedure and produced a total of 620 emotion-laden sentences. Twelve female judges (decoders), three for each of the four emotions, were assigned to rate the intensity of one emotion each. Their combined ratings were used to select 38 prototype samples per emotion. Past acoustic findings were replicated, and increased acoustic differentiation among the emotions was achieved. Multiple regression analysis suggested that some, although not all, of the acoustic variables were associated with decoders' ratings. Signal detection analysis gave some insight into this disparity. However, the analysis of the classic constellation of acoustic variables may not completely capture the acoustic features that influence decoders' ratings. Future analyses would likely benefit from the parallel assessment of respiration, phonation, and articulation.
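A minimal sketch of the multiple-regression step mentioned above: decoders' intensity ratings for the selected samples are regressed on a handful of classic acoustic variables. The column names and input file are hypothetical placeholders, not the variables analysed in the study.

```python
# Sketch: which acoustic variables are associated with decoders' intensity ratings?
# Column names and the CSV file are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("prototype_samples.csv")       # one row per selected sample (hypothetical)
predictors = ["f0_mean", "f0_sd", "intensity_mean", "speech_rate"]
X = sm.add_constant(df[predictors])
model = sm.OLS(df["decoder_rating"], X).fit()
print(model.summary())                          # coefficients show which cues track the ratings
```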

16.
Chickadees produce a multi-note chick-a-dee call in multiple socially relevant contexts. One component of this call is the D note, which is a low-frequency and acoustically complex note with a harmonic-like structure. In the current study, we tested black-capped chickadees on a between-category operant discrimination task using vocalizations with acoustic structures similar to black-capped chickadee D notes, but produced by various songbird species, in order to examine the role that phylogenetic distance plays in acoustic perception of vocal signals. We assessed the extent to which discrimination performance was influenced by the phylogenetic relatedness among the species producing the vocalizations and by the phylogenetic relatedness between the subjects’ species (black-capped chickadees) and the vocalizers’ species. We also conducted a bioacoustic analysis and discriminant function analysis in order to examine the acoustic similarities among the discrimination stimuli. A previous study has shown that neural activation in black-capped chickadee auditory and perceptual brain regions is similar following the presentation of these vocalization categories. However, we found that chickadees had difficulty discriminating between forward and reversed black-capped chickadee D notes, a result that directly corresponded to the bioacoustic analysis indicating that these stimulus categories were acoustically similar. In addition, our results suggest that the discrimination between vocalizations produced by two parid species (chestnut-backed chickadees and tufted titmice) is perceptually difficult for black-capped chickadees, a finding that is likely due in part to the acoustic similarities between these vocalizations. Overall, our results provide evidence that black-capped chickadees’ perceptual abilities are influenced by both phylogenetic relatedness and acoustic structure.
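As an illustration of the discriminant function analysis mentioned above, the sketch below asks how well acoustic measurements of the notes separate the stimulus categories. The feature names, category column, and input file are hypothetical, not the measurements reported in the study.

```python
# Sketch of a discriminant function analysis over acoustic note measurements.
# All column names and the input file are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

df = pd.read_csv("note_measurements.csv")       # one row per vocalization (hypothetical)
features = ["duration", "peak_freq", "f0_start", "f0_end", "entropy"]
X, y = df[features], df["stimulus_category"]

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2f}")
```

Categories whose notes the classifier separates poorly are good candidates for being acoustically similar, which is the logic used above to interpret the birds' discrimination difficulties.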

17.
Although the configurations of psychoacoustic cues signalling emotions in human vocalizations and instrumental music are very similar, cross‐domain links in recognition performance have yet to be studied developmentally. Two hundred and twenty 5‐ to 10‐year‐old children were asked to identify musical excerpts and vocalizations as happy, sad, or fearful. The results revealed age‐related increases in overall recognition performance with significant correlations across vocal and musical conditions at all developmental stages. Recognition scores were greater for musical than vocal stimuli and were superior in females compared with males. These results confirm that recognition of emotions in vocal and musical stimuli is linked by 5 years and that sensitivity to emotions in auditory stimuli is influenced by age and gender.

18.
Film clips are commonly used to elicit subjectively experienced emotional states for many research purposes, but film clips currently available in databases are out of date, include a limited set of emotions, and/or pertain to only one conceptualization of emotion. This work reports validation data from two studies aimed at eliciting basic and complex emotions (amusement, anger, anxiety, compassion, contentment, disgust, fear, happiness/joy, irritation, neutrality, pride, relief, sadness, surprise), equally distributed according to valence (positive, negative) and intensity (high, low). Participants rated film clips according to the degree of experienced emotion, and for valence and arousal. Our findings initiate an iterative archive of film clips shown here to discretely elicit 11 different emotions. Although further validation of these film clips is needed, the ratings provided here should assist researchers in selecting potential film clips to meet the aims of their work.

19.
Operant conditioning and multidimensional scaling procedures were used to study auditory perception of complex sounds in the budgerigar. In a same-different discrimination task, budgerigars learned to discriminate among natural vocal signals. Multidimensional scaling procedures were used to arrange these complex acoustic stimuli in a two-dimensional space reflecting perceptual organization. Results show that budgerigars group vocal stimuli according to functional and acoustical categories. Studies with only contact calls show that birds also make within-category discriminations. The acoustic cues in contact calls most salient to budgerigars appear to be quite complex. There is a suggestion that the sex of the signaler may also be encoded in these calls. The results from budgerigars were compared with the results from humans tested on some of the same sets of complex sounds.
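To illustrate the multidimensional scaling step described above, the sketch below embeds a small made-up dissimilarity matrix (derived, for example, from same-different response latencies or error rates) in two dimensions. The matrix values and the choice of non-metric MDS are assumptions for illustration.

```python
# Sketch: non-metric MDS of a (made-up) dissimilarity matrix among six vocalizations;
# nearby points in the 2-D solution are stimuli that are confused more often.
import numpy as np
from sklearn.manifold import MDS

D = np.array([[0, 2, 7, 8, 6, 7],
              [2, 0, 6, 7, 6, 8],
              [7, 6, 0, 2, 5, 5],
              [8, 7, 2, 0, 5, 6],
              [6, 6, 5, 5, 0, 3],
              [7, 8, 5, 6, 3, 0]], float)     # larger value = easier to discriminate

mds = MDS(n_components=2, dissimilarity="precomputed", metric=False, random_state=0)
coords = mds.fit_transform(D)
print(np.round(coords, 2))                     # coordinates of the 2-D perceptual map
```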

20.
Previous research has demonstrated that even brief exposures to facial expressions of emotions elicit facial mimicry in receivers in the form of corresponding facial muscle movements. Likewise, the vocal and verbal patterns of speakers converge in conversations, a type of vocal mimicry. There is also evidence of cross-modal mimicry in which emotional vocalizations elicit corresponding facial muscle activity. Further, empathic capacity has been associated with an enhanced tendency towards facial mimicry as well as verbal synchrony. We investigated a type of potential cross-modal mimicry in a simulated dyadic situation. Specifically, we examined the influence of facial expressions of happy, sad, and neutral emotions on the vocal pitch of receivers, and its potential association with empathy. Results indicated that whereas both mean pitch and variability of pitch varied somewhat in the predicted directions, empathy was correlated with the difference in the variability of pitch while speaking to the sad and neutral faces. Discussion of results considers the dimensional nature of emotional vocalizations and possible future directions.
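A minimal sketch of how the two pitch measures described above (mean F0 and its variability) could be extracted from recordings of a participant speaking to each face, using librosa's pyin pitch tracker. The file names and analysis settings are hypothetical, not those used in the study.

```python
# Sketch: mean pitch and pitch variability per condition, extracted with pyin.
# File names and pitch-range settings are hypothetical.
import numpy as np
import librosa

def pitch_stats(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                      fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced_flag & ~np.isnan(f0)]            # keep voiced frames only
    return float(np.mean(f0)), float(np.std(f0))    # mean F0, F0 variability

for label, path in [("happy", "speaking_to_happy_face.wav"),
                    ("sad", "speaking_to_sad_face.wav"),
                    ("neutral", "speaking_to_neutral_face.wav")]:
    m, sd = pitch_stats(path)                       # hypothetical recordings
    print(f"{label:>7}: mean F0 = {m:.1f} Hz, F0 SD = {sd:.1f} Hz")
```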
