Similar Articles
1.
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlating 43 acoustic parameters with the individual emotional dimensions revealed that each dimension was associated with a number of vocal cues. The cue patterns overlapped with those found for emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.
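As a rough illustration of the analysis style this abstract describes (correlating per-sound acoustic measurements with ratings on an emotional dimension), here is a minimal Python sketch; the feature names, scales, and data are placeholders, not values from the study:

```python
# Illustrative sketch: correlate acoustic parameters of laughter sounds
# with ratings on one emotional dimension (e.g., arousal). All values
# below are synthetic placeholders, not data from the study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_sounds = 40

# Hypothetical acoustic parameters measured per laughter sound.
features = {
    "mean_f0_hz": rng.normal(280, 60, n_sounds),
    "f0_range_hz": rng.normal(150, 40, n_sounds),
    "mean_intensity_db": rng.normal(65, 8, n_sounds),
    "call_duration_s": rng.normal(0.25, 0.08, n_sounds),
}
arousal_ratings = rng.normal(5.0, 1.5, n_sounds)  # e.g., a 1-9 rating scale

# Correlate each parameter with the rated dimension, as in the
# 43-parameter analysis the abstract describes (here with 4 toy cues).
for name, values in features.items():
    r, p = pearsonr(values, arousal_ratings)
    print(f"{name:18s} r = {r:+.2f}  p = {p:.3f}")
```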

2.
Under noisy "cocktail-party" listening conditions with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a voice through certain acoustical features. It was unclear, however, whether emotionally conditioning a target voice that has none of the typical acoustical features of emotion (i.e., an emotionally neutral voice) gives listeners a cue for enhancing target-speech recognition under speech-on-speech masking. In this study we examined recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female scream, a sound with a marked negative emotional valence. Recognition of the target speech (especially the first keyword in a target sentence) improved significantly after the target speaker's voice was emotionally conditioned. Moreover, this emotional unmasking effect was independent of the unmasking effect of perceived spatial separation between target speech and masker. Electrodermal (skin-conductance) responses also became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting increased listening effort when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice changes none of the acoustical parameters of the target-speech stimuli, yet the conditioned vocal features can serve as cues for unmasking the target speech.

3.
Nonverbal vocal expressions, such as laughter, sobbing, and screams, are an important source of emotional information in social interactions. However, the investigation of how we process these vocal cues entered the research agenda only recently. Here, we introduce a new corpus of nonverbal vocalizations, which we recorded and submitted to perceptual and acoustic validation. It consists of 121 sounds expressing four positive emotions (achievement/triumph, amusement, sensual pleasure, and relief) and four negative ones (anger, disgust, fear, and sadness), produced by two female and two male speakers. For perceptual validation, a forced-choice task was used (n = 20), and ratings were collected for the eight emotions, valence, arousal, and authenticity (n = 20). We provide these data, detailed for each vocalization, for use by the research community. High recognition accuracy was found for all emotions (86% on average), and the sounds were reliably rated as communicating the intended expressions. The vocalizations were measured for acoustic cues related to temporal aspects, intensity, fundamental frequency (f0), and voice quality. These cues alone provide sufficient information to discriminate between emotion categories, as indicated by statistical classification procedures; they are also predictors of listeners' emotion ratings, as indicated by multiple regression analyses. This set of stimuli is a valuable addition to currently available expression corpora for research on emotion processing. It is suitable for behavioral and neuroscience research and might also be used in clinical settings for the assessment of neurological and psychiatric patients. The corpus can be downloaded from the Supplementary Materials.
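A minimal sketch of the two validation analyses this abstract mentions: statistical classification of emotion category from acoustic cues, and regression predicting listeners' ratings from the same cues. The cue set, class structure, and data are synthetic assumptions, not the corpus itself:

```python
# Illustrative sketch of (1) classifying emotion category from acoustic
# cues and (2) predicting listeners' ratings from those cues.
# Features, classes, and data are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class = 15
classes = ["amusement", "anger", "fear", "relief"]  # assumed subset

# Synthetic acoustic cues per sound: duration, intensity, mean f0,
# and a voice-quality proxy (spectral slope), shifted by class.
X = np.vstack([
    rng.normal(loc=[1.0 + i, 60 + 3 * i, 200 + 40 * i, -6 - i],
               scale=[0.3, 4, 30, 1.5], size=(n_per_class, 4))
    for i, _ in enumerate(classes)
])
y = np.repeat(classes, n_per_class)

# (1) Can the cues alone discriminate the emotion categories?
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2f}")

# (2) Do the same cues predict listeners' ratings (multiple regression)?
ratings = X @ np.array([0.2, 0.05, 0.01, -0.3]) + rng.normal(0, 0.5, len(y))
r2 = LinearRegression().fit(X, ratings).score(X, ratings)
print(f"multiple regression R^2 on ratings: {r2:.2f}")
```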

4.
Judgments of the valence and arousal of emotional stimuli can differ with the perceiver's age. Because most of the existing literature on age-related changes in such ratings is based on visually presented pictures or words, less is known about how youth and adults perceive and rate the affective information in auditory emotional stimuli. The current study examined age-related differences in adolescent (n = 31; 45% female; aged 12–17, M = 14.35, SD = 1.68) and adult listeners' (n = 30; 53% female; aged 21–30, M = 26.20 years, SD = 2.98) ratings of the valence and arousal of spoken words conveying happiness, anger, and a neutral expression. We also fitted closed curves to the average ratings for each emotional expression to determine their relative position on the valence-arousal plane of an affective circumplex. Compared with adults, adolescents' ratings of emotional prosody were generally higher in valence but more constricted in range for both valence and arousal. This pattern suggests lesser differentiation among emotional categories' holistic properties, which may have implications for successfully recognizing and appropriately responding to vocal emotional cues in adolescents' social environments.
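The abstract does not specify the curve-fitting method; as one plausible sketch, a covariance ellipse can serve as a closed curve summarizing an emotion's rating cloud on the valence-arousal plane. The data and scales below are assumptions:

```python
# Illustrative sketch: fit a closed curve (a covariance ellipse) around
# one emotion's (valence, arousal) ratings on the affective circumplex
# plane. Data are synthetic placeholders.
import numpy as np

def covariance_ellipse(points, n_std=2.0, n_vertices=100):
    """Return vertices of an n_std-sigma ellipse around 2-D points."""
    center = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # principal axes
    t = np.linspace(0, 2 * np.pi, n_vertices)
    circle = np.stack([np.cos(t), np.sin(t)], axis=1)
    # Scale the unit circle by the axis lengths, rotate, and translate.
    return center + circle * (n_std * np.sqrt(eigvals)) @ eigvecs.T

rng = np.random.default_rng(2)
# Hypothetical (valence, arousal) ratings on 1-9 scales for "happy" words.
happy = rng.normal(loc=[7.0, 6.0], scale=[0.8, 1.0], size=(30, 2))
ellipse = covariance_ellipse(happy)
print("ellipse spans valence", ellipse[:, 0].min().round(2),
      "to", ellipse[:, 0].max().round(2))
```

Comparing the area and position of such curves between age groups would express the reported finding (higher valence, more constricted range in adolescents) geometrically.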

5.
The emotional organization of autobiographical memory was examined by determining whether emotional cues would influence autobiographical retrieval in younger and older adults. Unfamiliar musical cues that represented orthogonal combinations of positive and negative valence and high and low arousal were used. Whereas cue valence influenced the valence of the retrieved memories, cue arousal did not affect arousal ratings. However, high-arousal cues were associated with reduced response latencies. A significant bias to report positive memories was observed, especially for the older adults, but neither the distribution of memories across the life span nor response latencies varied across memories differing in valence or arousal. These data indicate that emotional information can serve as effective cues for autobiographical memories and that autobiographical memories are organized in terms of emotional valence but not emotional arousal. Thus, current theories of autobiographical memory must be expanded to include emotional valence as a primary dimension of organization.

6.
Remembering is impacted by several factors of retrieval, including the emotional content of a memory cue. Here we tested how musical retrieval cues that differed on two dimensions of emotion—valence (positive and negative) and arousal (high and low)—impacted the following aspects of autobiographical memory recall: the response time to access a past personal event, the experience of remembering (ratings of memory vividness), the emotional content of a cued memory (ratings of event arousal and valence), and the type of event recalled (ratings of event energy, socialness, and uniqueness). We further explored how cue presentation affected autobiographical memory retrieval by administering cues of similar arousal and valence levels in a blocked fashion to one half of the tested participants, and randomly to the other half. We report three main findings. First, memories were accessed most quickly in response to musical cues that were highly arousing and positive in emotion. Second, we observed a relation between a cue and the elicited memory's emotional valence but not arousal; however, both the cue valence and arousal related to the nature of the recalled event. Specifically, high cue arousal led to lower memory vividness and uniqueness ratings, but cues with both high arousal and positive valence were associated with memories rated as more social and energetic. Finally, cue presentation impacted both how quickly and specifically memories were accessed and how cue valence affected the memory vividness ratings. The implications of these findings for views of how emotion directs the access to memories and the experience of remembering are discussed.

7.
Vocal expressions of emotion taken from a recorded version of a play were content-masked by electronic filtering, randomized splicing, and a combination of both techniques, in addition to a no-treatment condition, in a 2×2 design. Untrained listener-judges rated the voice samples in the four conditions on 20 semantic differential scales. Despite the severe reduction in the number and types of vocal cues in the masking conditions, the mean ratings of the judges in all four groups agreed, at a level significantly beyond chance expectations, on the differential position of the emotional expressions in a multidimensional space of emotional meaning. The results suggest that a minimal set of vocal cues consisting of pitch level and variation, amplitude level and variation, and rate of articulation may be sufficient to communicate the evaluation, potency, and activity dimensions of emotional meaning. Each of these dimensions may be associated with a specific pattern of vocal cues or cue combinations. No differential effects of the type of content-masking on specific emotions were found. Systematic effects of the masking techniques consisted of a lowering of the perceived activity level of the emotions under electronic filtering and more positive ratings on the evaluative dimension under randomized splicing. Electronic filtering tended to decrease, and randomized splicing to increase, inter-rater reliability. This research was supported by a research grant (GS-2654) from the Division of Social Sciences of the National Science Foundation to Robert Rosenthal.
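A minimal sketch of the two content-masking techniques named here: low-pass ("electronic") filtering, which removes the spectral detail needed for word recognition while sparing pitch and loudness contours, and randomized splicing, which cuts the waveform into short segments and reorders them. The cutoff frequency and segment length are assumptions, and the input is a toy signal rather than recorded speech:

```python
# Illustrative sketch of two content-masking techniques for speech.
# Parameters (cutoff, segment length) are assumptions, not study values.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_mask(signal, sr, cutoff_hz=400):
    """Low-pass filter so verbal content becomes unintelligible."""
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    return sosfiltfilt(sos, signal)

def randomized_splice(signal, sr, segment_ms=200, seed=0):
    """Cut the waveform into short segments and shuffle their order."""
    seg_len = int(sr * segment_ms / 1000)
    n_segs = len(signal) // seg_len
    segments = signal[: n_segs * seg_len].reshape(n_segs, seg_len)
    rng = np.random.default_rng(seed)
    return segments[rng.permutation(n_segs)].ravel()

sr = 16_000
t = np.arange(sr) / sr                 # 1 s of a toy "voice" signal
voice = np.sin(2 * np.pi * 150 * t) * (1 + 0.3 * np.sin(2 * np.pi * 3 * t))
masked = lowpass_mask(voice, sr)
spliced = randomized_splice(voice, sr)
print(masked.shape, spliced.shape)
```

Note how the two manipulations degrade different cues: filtering distorts spectral (voice-quality) information, while splicing destroys temporal sequence but preserves local spectral detail, which is consistent with their different effects on perceived activity and evaluation reported above.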

8.
Verbal framing effects have been widely studied, but little is known about how people react to multiple framing cues in risk communication, where verbal messages are often accompanied by facial and vocal cues. We examined joint and differential effects of verbal, facial, and vocal framing on risk preference in hypothetical monetary and life-death situations. In the multiple-framing condition with a factorial design (2 verbal frames × 2 vocal tones × 4 basic facial expressions × 2 task domains), each scenario was presented auditorily, with a written message on a photo of the messenger's face. Compared with verbal framing effects, which produced preference reversal, multiple frames made risky choice more consistent and shifted risk preference without reversal. Moreover, a positive tone of voice increased risk-seeking preference in women. When the valence of facial and vocal cues was incongruent with the verbal frame, verbal framing effects were significant. In contrast, when the affect cues were congruent with the verbal frame, framing effects disappeared. These results suggest that verbal framing is given higher priority when other affect cues are incongruent. Further analysis revealed that participants were more risk-averse when positive affect cues (a positive tone or facial expression) were congruently paired with a positive verbal frame, whereas participants were more risk-seeking when positive affect cues were incongruent with the verbal frame. For negative affect cues, in contrast, congruency promoted risk-seeking whereas incongruency increased risk-aversion. Overall, the results show that facial and vocal cues interact with verbal framing and significantly affect risk communication.
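For concreteness, the factorial design can be enumerated as below; the specific frame labels and the set of four expressions are assumptions, since the abstract does not list them:

```python
# Illustrative enumeration of the 2 x 2 x 4 x 2 factorial design the
# abstract describes. Level labels are assumed for illustration.
from itertools import product

verbal_frames = ["gain", "loss"]                      # assumed labels
vocal_tones = ["positive", "negative"]
faces = ["happy", "sad", "angry", "fearful"]          # assumed set of 4
domains = ["monetary", "life-death"]

conditions = list(product(verbal_frames, vocal_tones, faces, domains))
print(len(conditions), "cells, e.g.:", conditions[0])  # 32 cells
```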

9.
Modest advances are being made in understanding the neurology and functions of laughter. The discovery of tickle-induced "laughter" in animals should facilitate the characterization of this basic emotional response of the mammalian brain. The existence of such vocal activities in species other than humans (e.g., rats) suggests that the fundamental brain processes for joyful affect may have emerged early in vertebrate brain evolution. Here, I summarize the little that we know about the evolutionary and brain sources of laughter, and how the accompanying positive emotions may solidify social bonds within the mammalian brain. Discovery of unique neurochemistries that specifically promote laughter and joy may provide clues for development of new classes of antidepressants.

10.
Vocal Expression of Emotion
Acoustic properties of speech likely provide external cues about internal emotional processes, a phenomenon called vocal expression of emotion. Testing this supposition, we examined fundamental frequency (F0) and two perturbation measures, jitter and shimmer, in short speech samples recorded from subjects performing a lexical decision task. Statistically significant differences were found between baseline and on-task values, and as interaction effects involving differences in trait levels of emotional intensity and the proportion of success versus failure feedback received. These results indicate that acoustic properties of speech can be used to index emotional processes and that characteristic differences in emotional intensity may mediate vocal expression of emotion.
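A minimal sketch of the three measures named here: F0 and the standard "local" jitter and shimmer perturbation measures, computed from per-cycle period and peak-amplitude estimates. Real pipelines (e.g., Praat) extract these values from the waveform; the cycle-level values below are synthetic:

```python
# Illustrative computation of F0, local jitter, and local shimmer from
# per-cycle period and amplitude estimates. Values are synthetic.
import numpy as np

def jitter_local(periods_s):
    """Mean absolute difference of consecutive periods / mean period."""
    periods = np.asarray(periods_s)
    return np.abs(np.diff(periods)).mean() / periods.mean()

def shimmer_local(amplitudes):
    """Mean absolute difference of consecutive peak amplitudes / mean."""
    amps = np.asarray(amplitudes)
    return np.abs(np.diff(amps)).mean() / amps.mean()

rng = np.random.default_rng(3)
# ~120 Hz voice with small cycle-to-cycle perturbation (placeholders).
periods = 1 / 120 + rng.normal(0, 2e-5, 200)
amps = 0.5 + rng.normal(0, 0.01, 200)

f0 = 1 / periods.mean()
print(f"F0 = {f0:.1f} Hz, jitter = {jitter_local(periods):.4f}, "
      f"shimmer = {shimmer_local(amps):.4f}")
```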

11.
Facial attributes such as race, sex, and age can interact with emotional expressions; however, only a couple of studies have investigated the nature of the interaction between facial age cues and emotional expressions, and these have produced inconsistent results. Additionally, these studies have not addressed the mechanism(s) driving the influence of facial age cues on emotional expression, or vice versa. In the current study, participants categorised young and older adult faces expressing happiness and anger (Experiment 1) or sadness (Experiment 2) by their age and by their emotional expression. Age cues moderated the categorisation of happiness versus anger and sadness, in the absence of any influence of emotional expression on age categorisation times. This asymmetrical interaction suggests that facial age cues are obligatorily processed prior to emotional expressions. The categorisation advantage for happiness expressed on young faces, relative to both anger and sadness (which are negative in valence but differ in their congruence with old-age stereotypes and in their structural overlap with age cues), suggests that the observed influence of facial age cues on emotion perception is due to the congruence between the relatively positive evaluation of young faces and happy expressions.

12.
Cue saliency is known to influence prospective memory performance, whereby perceptually or conceptually distinct cues facilitate remembering and attenuate adult age-related deficits. The present study investigated whether similar benefits for older adults are also seen for emotional valence. A total of 41 older and 41 younger adults performed a prospective memory task in which the emotional valence of the prospective memory cues was manipulated. Emotionally valenced cues increased prospective memory performance across both groups. Age deficits were only observed when neutral (but not positive or negative) prospective cues were presented. Findings are consistent with predictions that salient cues facilitate participants' prospective memory performance and reduce age-related differences, while extending the concept of saliency to include emotional valence.

13.
The dimensional account conceptualizes emotion along the dimensions of valence and arousal, but there has been little discussion of differences in discriminability across these dimensions. The present study hypothesized that a pair of emotional expressions differing in the polarity of both the valence and the arousal dimension would be easier to distinguish than a pair differing in only one dimension. The results indicate that the number of differing dimensions did not affect participants' reaction time: most pairs of emotional expressions, except those involving fear, were similarly discriminable. Reaction times to pairs containing a fearful expression were faster than to those without one. The fast reaction time to fearful facial expressions underscores the survival value of emotions.

14.
王敬欣, 贾丽萍, 张阔, 张赛. 心理科学 (Psychological Science), 2013, 36(2): 335-339
Pictures of emotional faces were presented either as cues (Experiment 1) or as targets (Experiment 2) in a cue-target paradigm, to examine how the processing of emotional faces affects inhibition of return (IOR) in a localization task. The results showed that when emotional faces served as cues, IOR emerged reliably and was dissociated from emotional processing; when emotional faces served as targets, however, the IOR they elicited was significantly smaller than that elicited by neutral faces. These findings indicate that in a localization task IOR is unaffected by the biological nature of the cue but is modulated by the nature of the target, reflecting the flexibility with which the organism allocates spatial attention and processes emotion under different conditions.

15.
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral and negative words that were spoken with a happy, neutral or sad tone of voice. In two separate tasks, participants judged word valence and ignored tone of voice or judged emotional tone of voice and ignored word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent as compared to when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as it does in one's native language.

16.
This study examined the relationships among nonverbal behaviors, dimensions of source credibility, and speaker persuasiveness in a public speaking context. Relevant nonverbal literature was organized according to a Brunswikian lens model. Nonverbal behavioral composites, grouped according to their likely proximal percepts, were hypothesized to significantly affect both credibility and persuasiveness. A sample of 60 speakers gave videotaped speeches that were judged on credibility and persuasiveness by classmates. Pairs of trained raters coded 22 vocalic, kinesic, and proxemic nonverbal behaviors evidenced in the tapes. Results confirmed numerous associations between nonverbal behaviors and attributions of credibility and persuasiveness. Greater perceived competence and composure were associated with greater vocal and facial pleasantness, with greater facial expressiveness contributing to competence perceptions. Greater sociability was associated with more kinesic/proxemic immediacy, dominance, and relaxation and with vocal pleasantness. Most of these same cues also enhanced character judgments. No cues were related to dynamism judgments. Greater perceived persuasiveness correlated with greater vocal pleasantness (especially fluency and pitch variety), kinesic/proxemic immediacy, facial expressiveness, and kinesic relaxation (especially high random movement but little tension). All five dimensions of credibility related to persuasiveness. Advantages of analyzing nonverbal cues according to proximal percepts are discussed.

17.
Fifteen-month-old infants detected a violation when an actor performed an action that did not match her preceding vocal cue: The infants looked reliably longer when the actor expressed a humorous vocal cue followed by a sweet action or expressed a sweet vocal cue followed by a humorous action, than when the vocal cue was followed by a matching action. The infants failed to detect the mismatch when one person expressed the vocal cue and another performed the action. The results suggest that by 15 months of age, infants are capable of distinguishing between two types of vocal cues and actions along the positive emotional spectrum: humor and sweetness. Furthermore, they match humorous vocal cues to humorous actions and sweet vocal cues to sweet actions only when the cues and actions are made by the same person.

18.
Adults are highly proficient in understanding emotional signals from both facial and vocal cues, including when communicating across cultural boundaries. However, the developmental origin of this ability is poorly understood, and in particular, little is known about the ontogeny of differentiation of signals with the same valence. The studies reported here employed a habituation paradigm to test whether preverbal infants discriminate between non-linguistic vocal expressions of relief and triumph. Infants as young as 6 months who had habituated to relief or triumph showed significant discrimination of relief and triumph tokens at test (i.e. greater recovery to the unhabituated stimulus type), when exposed to tokens from a single individual (Study 1). Infants habituated to expressions from multiple individuals showed less consistent discrimination in that consistent discrimination was only found when infants were habituated to relief tokens (Study 2). Further, infants tested with tokens from individuals from different cultures showed dishabituation only when habituated to relief tokens and only at 10–12 months (Study 3). These findings suggest that discrimination between positive emotional expressions develops early and is modulated by learning. Further, infants' categorical representations of emotional expressions, like those of speech sounds, are influenced by speaker-specific information.

19.
Recent research has shown that many prospective thoughts are organised in networks of related events, but the relational dimensions that contribute to the formation of such networks are not fully understood. Here, we investigated the organisational role of emotion by using cues of different valence for eliciting event networks. We found that manipulating the emotional valence of cues influenced the characteristics of events within networks, and that members of a network were more similar to each other on affective components than they were to members of other networks. Furthermore, a substantial proportion of events within networks were part of thematic clusters and cluster membership significantly modulated the impact of represented events on current well-being, in part through an intensification of the emotion felt when thinking about these events. These findings demonstrate that emotion contributes to the organisation of future thoughts in networks that can affect people's well-being.
