Similar Literature
20 similar documents retrieved (search time: 15 ms).
1.
Young and old adults’ ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness; each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness; each played with both weak and strong emotion intensity). The listeners’ recognition of discrete emotions and emotion intensity was assessed and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy. Old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners’ ratings of emotion intensity. The results show the importance of considering individual emotions in studies on age-related differences in emotion recognition.

2.
Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptual and acoustic validation. It consists of 121 sounds expressing four positive emotions (achievement/triumph, amusement, sensual pleasure, and relief) and four negative ones (anger, disgust, fear, and sadness), produced by two female and two male speakers. For perceptual validation, a forced-choice task was used (n = 20), and ratings were collected for the eight emotions, valence, arousal, and authenticity (n = 20). We provide these data, detailed for each vocalization, for use by the research community. High recognition accuracy was found for all emotions (86%, on average), and the sounds were reliably rated as communicating the intended expressions. The vocalizations were measured for acoustic cues related to temporal aspects, intensity, fundamental frequency (f0), and voice quality. These cues alone provide sufficient information to discriminate between emotion categories, as indicated by statistical classification procedures; they are also predictors of listeners’ emotion ratings, as indicated by multiple regression analyses. This set of stimuli seems a valuable addition to currently available expression corpora for research on emotion processing. It is suitable for behavioral and neuroscience research and might also be used in clinical settings for the assessment of neurological and psychiatric patients. The corpus can be downloaded from Supplementary Materials.
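The kind of acoustic validation described above can be illustrated with a short sketch. The Python example below is not the authors' pipeline; it assumes a hypothetical folder of WAV files named by emotion (e.g., "anger_01.wav") and shows how cues such as duration, RMS intensity, f0 statistics, and a rough voice-quality proxy could be extracted with librosa and submitted to a cross-validated linear classifier.

```python
# Minimal sketch (not the authors' pipeline): extract basic acoustic cues from
# short vocalizations and test whether they discriminate emotion categories.
# Assumes hypothetical files such as "corpus/anger_01.wav".
import glob
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def acoustic_cues(path):
    y, sr = librosa.load(path, sr=None)
    duration = librosa.get_duration(y=y, sr=sr)           # temporal aspect
    rms = float(np.mean(librosa.feature.rms(y=y)))        # intensity proxy
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"), sr=sr)
    f0_mean = float(np.nanmean(f0))                       # fundamental frequency
    f0_sd = float(np.nanstd(f0))                          # f0 variability
    zcr = float(np.mean(librosa.feature.zero_crossing_rate(y)))  # crude voice-quality proxy
    return [duration, rms, f0_mean, f0_sd, zcr]

files = sorted(glob.glob("corpus/*.wav"))                 # hypothetical corpus layout
X = np.array([acoustic_cues(f) for f in files])
labels = [f.split("/")[-1].split("_")[0] for f in files]  # e.g. "anger" from "anger_01.wav"

# Cross-validated classification from the acoustic cues alone.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```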

3.
This research examines the correspondence between theoretical predictions on vocal expression patterns in naturally occurring emotions (as based on the component process theory of emotion; Scherer, 1986) and empirical data on the acoustic characteristics of actors' portrayals. Two male and two female professional radio actors portrayed anger, sadness, joy, fear, and disgust based on realistic scenarios of emotion-eliciting events. A series of judgment studies was conducted to assess the degree to which judges are able to recognize the intended emotion expressions. Disgust was relatively poorly recognized; average recognition accuracy for the other emotions attained 62.8% across studies. A set of portrayals reaching a satisfactory level of recognition accuracy underwent digital acoustic analysis. The results for the acoustic parameters extracted from the speech signal show a number of significant differences between emotions, generally confirming the theoretical predictions.

4.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were poorly identified in voice-only singing but accurately identified in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, as well as differences in perception and acoustic-motor production.

5.
Emotions in music are conveyed by a variety of acoustic cues. Notably, the positive association between sound intensity and arousal has particular biological relevance. However, although amplitude normalization is a common procedure used to control for intensity in music psychology research, direct comparisons between emotional ratings of original and amplitude-normalized musical excerpts are lacking.

In this study, 30 nonmusicians retrospectively rated the subjective arousal and pleasantness induced by 84 six-second classical music excerpts, and an additional 30 nonmusicians rated the same excerpts normalized for amplitude. Following the cue-redundancy and Brunswik lens models of acoustic communication, we hypothesized that arousal and pleasantness ratings would be similar for both versions of the excerpts, and that arousal could be predicted effectively by other acoustic cues besides intensity.

Although the difference in mean arousal and pleasantness ratings between original and amplitude-normalized excerpts correlated significantly with the amplitude adjustment, ratings for both sets of excerpts were highly correlated and shared a similar range of values, thus validating the use of amplitude normalization in music emotion research. Two acoustic parameters, spectral flux and spectral entropy, accounted for 65% of the variance in arousal ratings for both sets, indicating that spectral features can effectively predict arousal. Additionally, we confirmed that amplitude-normalized excerpts were adequately matched for loudness. Overall, the results corroborate our hypotheses and support the cue-redundancy and Brunswik lens models.
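For readers who want to reproduce this kind of preprocessing and prediction, the sketch below shows one common way to amplitude-normalize excerpts (equating RMS energy) and to compute frame-wise spectral flux and spectral entropy, then regress arousal ratings on the two features. It is an illustration with hypothetical file and rating names, not the study's actual code, and feature definitions vary somewhat across toolboxes.

```python
# Illustrative sketch only: RMS amplitude normalization plus two spectral
# predictors of arousal (spectral flux, spectral entropy). Hypothetical data.
import numpy as np
import librosa
from sklearn.linear_model import LinearRegression

def rms_normalize(y, target_rms=0.1):
    """Scale the waveform so its RMS energy matches target_rms."""
    return y * (target_rms / (np.sqrt(np.mean(y ** 2)) + 1e-12))

def spectral_features(y, sr):
    S = np.abs(librosa.stft(y)) ** 2                      # power spectrogram
    S_norm = S / (S.sum(axis=0, keepdims=True) + 1e-12)   # per-frame spectral distribution
    # Spectral flux: mean frame-to-frame change in the normalized spectrum.
    flux = float(np.mean(np.sqrt(np.sum(np.diff(S_norm, axis=1) ** 2, axis=0))))
    # Spectral entropy: Shannon entropy of the per-frame spectral distribution.
    entropy = float(np.mean(-np.sum(S_norm * np.log2(S_norm + 1e-12), axis=0)))
    return flux, entropy

# Hypothetical 6-s excerpts and mean arousal ratings (one value per excerpt).
paths = [f"excerpts/clip_{i:02d}.wav" for i in range(84)]
arousal = np.loadtxt("arousal_ratings.txt")               # assumed ratings file

X = []
for p in paths:
    y, sr = librosa.load(p, sr=None, duration=6.0)
    y = rms_normalize(y)                                  # amplitude normalization
    X.append(spectral_features(y, sr))

model = LinearRegression().fit(np.array(X), arousal)
print("R^2 of flux + entropy model:", model.score(np.array(X), arousal))
```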

6.
The current study assessed the processing of facial displays of emotion (Happy, Disgust, and Neutral) of varying emotional intensities in participants with high vs. low social anxiety. Use of facial expressions of varying intensities allowed for strong external validity and a fine-grained analysis of interpretation biases. Sensitivity to perceiving negative evaluation in faces (i.e., emotion detection) was assessed at both long (unlimited) and brief (60 ms) stimulus durations. In addition, ratings of perceived social cost were made indicating what participants judged it would be like to have a social interaction with a person exhibiting the stimulus emotion. Results suggest that high social anxiety participants did not demonstrate biases in their sensitivity to perceiving negative evaluation (i.e., disgust) in facial expressions. However, high social anxiety participants did estimate the perceived cost of interacting with someone showing disgust to be significantly greater than low social anxiety participants did, regardless of the intensity of the disgust expression. These results are consistent with a specific type of interpretation bias in which participants with social anxiety have elevated ratings of the social cost of interacting with individuals displaying negative evaluation.

7.
Two studies investigated the utility of indirect scaling methods, based on graded pair comparisons, for the testing of quantitative emotion theories. In Study 1, we measured the intensity of relief and disappointment caused by lottery outcomes, and in Study 2, the intensity of disgust evoked by pictures, using both direct intensity ratings and graded pair comparisons. The stimuli were systematically constructed to reflect variables expected to influence the intensity of the emotions according to theoretical models of relief/disappointment and disgust, respectively. Two probabilistic scaling methods were used to estimate scale values from the pair comparison judgements: Additive functional measurement (AFM) and maximum likelihood difference scaling (MLDS). The emotion models were fitted to the direct and indirect intensity measurements using nonlinear regression (Study 1) and analysis of variance (Study 2). Both studies found substantially improved fits of the emotion models for the indirectly determined emotion intensities, with their advantage being evident particularly at the level of individual participants. The results suggest that indirect scaling methods yield more precise measurements of emotion intensity than rating scales and thereby provide stronger tests of emotion theories in general and quantitative emotion theories in particular.

8.
Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M age = 12.7 years, SD = 2.6) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different listening strategy from that of their normal hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than their normal hearing peers do.

9.
There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and predicts the emotional expressiveness of speech and music in close agreement with the responses of human subjects. We show that a significant part of the listeners’ second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
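A stripped-down version of this kind of continuous prediction is sketched below. It is only an illustration under stated assumptions: hypothetical file names, one rating value per second aligned to the audio, and only a subset of the seven features named above (a loudness proxy, spectral centroid, and spectral flux); sharpness and roughness would require a dedicated psychoacoustics toolbox and are omitted.

```python
# Minimal sketch: predict a continuous (per-second) arousal trace from a few
# frame-level psychoacoustic features. Hypothetical inputs; not the study's model.
import numpy as np
import librosa
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

y, sr = librosa.load("film_music_excerpt.wav", sr=None)   # assumed stimulus
ratings = np.loadtxt("continuous_arousal.txt")            # assumed: one rating per second

n_sec = min(len(ratings), len(y) // sr)
windows = [y[i * sr:(i + 1) * sr] for i in range(n_sec)]  # 1-s analysis windows

def window_features(w, sr):
    S = np.abs(librosa.stft(w))
    loudness = float(np.mean(librosa.feature.rms(y=w)))               # crude loudness proxy
    centroid = float(np.mean(librosa.feature.spectral_centroid(S=S, sr=sr)))
    flux = float(np.mean(np.sqrt(np.sum(np.diff(S, axis=1) ** 2, axis=0))))
    return [loudness, centroid, flux]

X = np.array([window_features(w, sr) for w in windows])

# Cross-validated ridge regression against the second-by-second ratings.
scores = cross_val_score(Ridge(alpha=1.0), X, ratings[:n_sec], cv=5, scoring="r2")
print("Cross-validated R^2:", scores.mean())
```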

10.
Emotion in Speech: The Acoustic Attributes of Fear, Anger, Sadness, and Joy
Decoders can detect emotion in voice with much greater accuracy than can be achieved by objective acoustic analysis. Studies that have established this advantage, however, used methods that may have favored decoders and disadvantaged acoustic analysis. In this study, we applied several methodologic modifications for the analysis of the acoustic differentiation of fear, anger, sadness, and joy. Thirty-one female subjects between the ages of 18 and 35 (encoders) were audio-recorded during an emotion-induction procedure and produced a total of 620 emotion-laden sentences. Twelve female judges (decoders), three for each of the four emotions, were assigned to rate the intensity of one emotion each. Their combined ratings were used to select 38 prototype samples per emotion. Past acoustic findings were replicated, and increased acoustic differentiation among the emotions was achieved. Multiple regression analysis suggested that some, although not all, of the acoustic variables were associated with decoders' ratings. Signal detection analysis gave some insight into this disparity. However, the analysis of the classic constellation of acoustic variables may not completely capture the acoustic features that influence decoders' ratings. Future analyses would likely benefit from the parallel assessment of respiration, phonation, and articulation.
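As a reminder of what the signal detection analysis mentioned above computes, the short sketch below derives sensitivity (d') and criterion (c) from hit and false-alarm counts for one target emotion. The counts are hypothetical and the snippet is not taken from the study; it only illustrates the standard calculation, with a log-linear correction to avoid infinite z-scores.

```python
# Illustrative signal detection computation: d' and criterion c for detecting
# one target emotion. Counts below are hypothetical.
from scipy.stats import norm

def signal_detection(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

d, c = signal_detection(hits=30, misses=8, false_alarms=12, correct_rejections=64)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```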

11.
Researchers have begun to use response times (RTs) to emotion items as an indirect measure of emotional clarity. Our first aim was to scrutinise the properties of this RT measure in more detail than previously. To be able to provide recommendations as to whether (and how) emotional intensity – as a possible confound – should be controlled for, we investigated the specific form of the relation between emotional intensity and RTs to emotion items. In particular, we assumed an inverted U-shaped relation at the item level. Moreover, we analysed the RT measure’s convergent validity with respect to individuals’ confidence in their emotion ratings. As a second aim, we compared the predictive validity of emotional clarity measures (RT measure, self-report) with respect to daily emotion regulation. The results of three experience sampling studies showed that the association between emotional intensity and RT followed an inverted U shape. RT was in part related to confidence. Emotional clarity measures were unrelated to reappraisal. There was some evidence that lower emotional clarity was related to a greater use of suppression. The findings highlight that emotional intensity and squared emotional intensity should be controlled for when using the RT measure of emotional clarity in future research.
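The recommendation to control for intensity and squared intensity amounts to a quadratic regression at the item level. The sketch below, using simulated data rather than the study's, shows one way to fit such a model and to residualize RTs before using them as a clarity index; the variable names are illustrative.

```python
# Sketch of the recommended control: regress item-level response times on
# emotional intensity and squared intensity, so an inverted-U relation does
# not contaminate the clarity measure. Data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
intensity = rng.uniform(0, 6, n)                              # emotion-item intensity ratings
# Simulated inverted U: RTs peak at moderate intensity, plus noise.
rt = 800 + 120 * intensity - 20 * intensity ** 2 + rng.normal(0, 60, n)

df = pd.DataFrame({"rt": rt, "intensity": intensity})
model = smf.ols("rt ~ intensity + I(intensity ** 2)", data=df).fit()
print(model.params)        # a negative quadratic term indicates the inverted U

# Residualized RTs (intensity and intensity^2 partialled out) could then serve
# as the emotional clarity index in subsequent analyses.
df["rt_residual"] = model.resid
```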

12.
Many fMRI studies have examined the neural mechanisms supporting emotional memory for stimuli that generate emotion rather automatically (e.g., a picture of a dangerous animal or of appetizing food). However, far fewer studies have examined how memory is influenced by emotion related to social and political issues (e.g., a proposal for large changes in taxation policy), which clearly vary across individuals. In order to investigate the neural substrates of affective and mnemonic processes associated with personal opinions, we employed an fMRI task wherein participants rated the intensity of agreement/disagreement with sociopolitical belief statements paired with neutral face pictures. Following the rating phase, participants performed an associative recognition test in which they distinguished identical versus recombined face–statement pairs. The study yielded three main findings: behaviorally, the intensity of agreement ratings was linked to greater subjective emotional arousal as well as enhanced high-confidence subsequent memory. Neurally, statements that elicited strong (vs. weak) agreement or disagreement were associated with greater activation of the amygdala. Finally, a subsequent memory analysis showed that the behavioral memory advantage for statements generating stronger ratings was dependent on the medial prefrontal cortex (mPFC). Together, these results both underscore consistencies in neural systems supporting emotional arousal and suggest a modulation of arousal-related encoding mechanisms when emotion is contingent on referencing personal beliefs.

13.
This study investigated emotion during interpersonal conflicts between mates. It addressed questions about how clearly couples express emotion (encoding), how accurately they recognize each other's emotion (decoding), and how well they distinguish between types of negative emotion. It was theorized that couples express and perceive both: (a) event-specific emotions, which are unique to particular people on particular occasions, and (b) contextual-couple emotions, which reflect the additive effect of emotions across different events and across both partners. Eighty-three married couples engaged in a series of two conflict conversations. Self-report ratings, observer ratings, and partner ratings were used to assess two types of negative emotion: hard emotion (e.g., angry or annoyed) and soft emotion (e.g., sad or hurt). Couples were reasonably accurate in encoding, decoding, and in distinguishing between types of emotion. Emotion expression was strongly associated with general levels of contextual-couple emotion summed across two conversations, whereas emotion perception was more closely tied to specific events. Hard emotion was readily perceived when it was overtly expressed, and soft emotion could sometimes be recognized even when it was not expressed clearly.

14.
Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive for detecting ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences in EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs.

15.
The intensity and valence of 30 emotion terms, 30 events typical of those emotions, and 30 autobiographical memories cued by those emotions were each rated by different groups of 40 undergraduates. A vector model gave a consistently better account of the data than a circumplex model, both overall and in the absence of high-intensity, neutral valence stimuli. The Positive Activation – Negative Activation (PANA) model could be tested at high levels of activation, where it is identical to the vector model. The results replicated when ratings of arousal were used instead of ratings of intensity for the events and autobiographical memories. A reanalysis of word norms gave further support for the vector and PANA models by demonstrating that neutral valence, high-arousal ratings resulted from the averaging of individual positive and negative valence ratings. Thus, compared to a circumplex model, vector and PANA models provided overall better fits.

16.
Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that static full-light and even point-light displays can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.

17.
In a recent study (Gilead et al., 2016), perspective taking (PT) was found to have a significant effect on affect ratings of negative pictures compared to neutrals. The current study explores whether PT would be affected equally by distinct negative emotions. We used neutral pictures and pictures classified as provoking sadness or disgust, matched for their intensity and arousal. Participants were asked to rate the pictures (on a scale from 1—no emotional reaction, to 5—very strong reaction) from 3 different perspectives – tough, sensitive, or their own (“me”). In Experiment 1, all pictures were mixed in the same blocks. In Experiment 2, the sad and disgust pictures were separated into two different blocks (each including neutrals). Both experiments showed a significant interaction between PT and emotion. PT was found to be influenced by valence; however, distinct negative emotions were found to affect PT similarly.

18.
Multi-label tasks confound age differences in perceptual and cognitive processes. We examined age differences in emotion perception with a technique that did not require verbal labels. Participants matched the emotion expressed by a target to two comparison stimuli, one neutral and one emotional. Angry, disgusted, fearful, happy, and sad facial expressions of varying intensity were used. Although older adults took longer to respond than younger adults, younger adults outperformed older adults only for the lowest intensity disgust and fear expressions. Some participants also completed an identity matching task in which target stimuli were matched on personal identity instead of emotion. Although irrelevant to the judgment, expressed emotion still created interference. All participants were less accurate when the apparent difference in expressive intensity of the matched stimuli was large, suggesting that salient emotion cues increased difficulty of identity matching. Age differences in emotion perception were limited to very low intensity expressions.

19.
Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants’ judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2–4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed.

20.
Affective stimuli are increasingly used in emotion research. Typically, stimuli are selected from databases providing affective norms. The validity of these norms is a critical factor with regard to the applicability of the stimuli for emotion research. We therefore probed the validity of the Leipzig Affective Norms for German (LANG) by correlating valence and arousal ratings across different sensory modalities. A sample of 120 words was selected from the LANG database, and auditory recordings of these words were obtained from two professional actors. The auditory stimuli were then rated again for valence and arousal. This cross-modal validation approach yielded very high correlations between auditory and visual ratings (>.95). These data confirm the strong validity of the Leipzig Affective Norms for German and encourage their use in emotion research.
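The cross-modal check described above reduces to correlating per-word mean ratings across modalities. The sketch below is a minimal illustration, assuming hypothetical CSV files with per-word mean valence and arousal columns for the visual and auditory versions; it is not the authors' analysis script.

```python
# Minimal sketch of the cross-modal validation: correlate mean valence and
# arousal ratings of the same words across visual and auditory presentation.
# File and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

visual = pd.read_csv("lang_visual_ratings.csv")      # assumed columns: word, valence, arousal
auditory = pd.read_csv("lang_auditory_ratings.csv")

merged = visual.merge(auditory, on="word", suffixes=("_vis", "_aud"))
for dim in ("valence", "arousal"):
    r, p = pearsonr(merged[f"{dim}_vis"], merged[f"{dim}_aud"])
    print(f"{dim}: r = {r:.3f}, p = {p:.3g}")
```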
