Similar Articles
A total of 20 similar articles were found.
1.
Previous research has demonstrated that even brief exposures to facial expressions of emotion elicit facial mimicry in receivers, in the form of corresponding facial muscle movements. Likewise, the vocal and verbal patterns of speakers converge in conversation, a type of vocal mimicry. There is also evidence of cross-modal mimicry, in which emotional vocalizations elicit corresponding facial muscle activity. Further, empathic capacity has been associated with an enhanced tendency towards facial mimicry as well as verbal synchrony. We investigated a type of potential cross-modal mimicry in a simulated dyadic situation. Specifically, we examined the influence of happy, sad, and neutral facial expressions on the vocal pitch of receivers, and its potential association with empathy. Results indicated that whereas both mean pitch and pitch variability varied somewhat in the predicted directions, empathy was correlated with the difference in pitch variability while speaking to the sad versus the neutral faces. The discussion considers the dimensional nature of emotional vocalizations and possible future directions.
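The analysis sketched below mirrors the measures named in this abstract: mean pitch and pitch variability per condition, and the correlation between empathy and the sad-minus-neutral difference in variability. It is a minimal Python illustration with simulated F0 values and empathy scores; the data, sample size, and variable names are assumptions, not the study's.

```python
# A minimal sketch of the pitch analysis described above, assuming
# hypothetical per-participant F0 tracks (in Hz) extracted elsewhere
# (e.g., with a pitch tracker); all data here are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants = 30

# Simulated F0 samples while speaking to sad vs. neutral faces.
f0_sad = [rng.normal(200, 25, 150) for _ in range(n_participants)]
f0_neutral = [rng.normal(200, 20, 150) for _ in range(n_participants)]
empathy = rng.normal(50, 10, n_participants)  # e.g., questionnaire scores

# Pitch variability per condition, then the sad-minus-neutral difference.
var_sad = np.array([np.std(f0) for f0 in f0_sad])
var_neutral = np.array([np.std(f0) for f0 in f0_neutral])
variability_diff = var_sad - var_neutral

r, p = pearsonr(empathy, variability_diff)
print(f"empathy vs. pitch-variability difference: r = {r:.2f}, p = {p:.3f}")
```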

2.
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm in which participants reproduced the duration of a facial emotion stimulus using an oval-shaped stimulus, or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces, as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend towards under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to a facial emotion produces the relativity of time perception.
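A common way to score temporal reproduction is the ratio of reproduced to target duration, with values above 1 indicating over-reproduction and below 1 under-reproduction. The sketch below illustrates that scoring on hypothetical trial records; it is not the authors' analysis code.

```python
# Scoring temporal-reproduction trials; data are illustrative only.
# Ratio > 1 indicates over-reproduction (as reported for happy and sad
# faces); ratio < 1 indicates under-reproduction (the angry-face trend).
import numpy as np

trials = [
    {"emotion": "happy", "target_ms": 2000, "reproduced_ms": 2180},
    {"emotion": "sad",   "target_ms": 2000, "reproduced_ms": 2120},
    {"emotion": "angry", "target_ms": 2000, "reproduced_ms": 1890},
]

for emotion in ("happy", "sad", "angry"):
    ratios = [t["reproduced_ms"] / t["target_ms"]
              for t in trials if t["emotion"] == emotion]
    print(f"{emotion}: mean reproduction ratio = {np.mean(ratios):.2f}")
```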

3.
The purpose of the present investigation was to assess whether interpersonal closeness facilitates earlier emotion detection as an emotional expression unfolds. Female undergraduate participants were paired with either a close friend or an acquaintance (n = 92 pairs). Participants viewed morphed movies of their partner and a stranger gradually shifting from a neutral to a sad, angry, or happy expression. As predicted, findings indicate a closeness advantage. Close friends detected the onset of their partners' angry and sad expressions earlier than acquaintances did. Additionally, close friends were more accurate than acquaintances in identifying angry and sad expressions at their onset, particularly in non-vignette conditions where these expressions were devoid of context. These findings suggest that closeness does indeed facilitate emotion perception, particularly in ambiguous situations involving negative emotions.

4.
Older adults have greater difficulty than younger adults perceiving vocal emotions. To better characterise this effect, we explored its relation to age differences in sensory, cognitive, and emotional functioning. Additionally, we examined the role of speaker age and listener sex. Participants (N = 163) aged 19–34 years and 60–85 years categorised neutral sentences spoken by ten younger and ten older speakers with a happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions from younger and older speakers denoted the intended emotion with similar accuracy. As expected, younger participants outperformed older participants, and this effect was statistically mediated by an age-related decline in both optimism and working memory. Additionally, age differences in emotion perception were larger for younger than for older speakers, and the perceptual advantage for younger over older speakers was greater in younger than in older participants. Lastly, a female perception benefit was less pervasive in the older group than in the younger group. Together, these findings suggest that the role of age in emotion perception is multi-faceted: it is linked to emotional and cognitive change, to processing biases that favour young and own-age expressions, and to the different aptitudes of women and men.
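The mediation claim here follows the standard product-of-coefficients logic: the age effect on accuracy operates through a mediator such as working memory. Below is a minimal bootstrap sketch of that test on simulated data; it omits covariate adjustment and illustrates the approach rather than reproducing the authors' model.

```python
# Bootstrap of the indirect effect a*b on simulated data. Paths are
# estimated with simple regressions and no covariate adjustment, which
# simplifies the mediation model reported in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n = 163
age = rng.uniform(19, 85, n)
working_memory = -0.02 * age + rng.normal(0, 0.5, n)   # declines with age
accuracy = 0.4 * working_memory + rng.normal(0, 0.5, n)

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                     # resample participants
    a = slope(age[idx], working_memory[idx])        # path a: age -> mediator
    b = slope(working_memory[idx], accuracy[idx])   # path b: mediator -> outcome
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.4f}, {hi:.4f}]")  # CI excluding 0 -> mediation
```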

5.
Valence-specific laterality effects have frequently been obtained in facial emotion perception but not in vocal emotion perception. We report a dichotic listening study further examining whether valence-specific laterality effects generalise to vocal emotions. Based on previous literature, we tested whether valence-specific laterality effects depend on blocked presentation of the emotion conditions, on the naturalness of the emotional stimuli, or on listener sex. We presented happy and sad sentences, paired with neutral counterparts, dichotically in an emotion localisation task, with vocal stimuli preceded by verbal labels indicating the target emotions. The measure was accuracy. When stimuli of the same emotion were presented as a block, a valence-specific laterality effect emerged, but only for original stimuli and not for morphed stimuli. There was a separate interaction with listener sex. We interpret our findings as suggesting that the valence-specific laterality hypothesis is supported only under certain circumstances. We discuss modulating factors and consider whether the mechanisms underlying them are attentional or experiential in nature.
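Dichotic-listening studies typically summarise ear differences with a laterality index such as (R − L)/(R + L). The helper below is a minimal sketch of that index on made-up accuracy counts; the index shown is a standard convention rather than the authors' reported measure.

```python
# A laterality index often used with dichotic-listening accuracy; data
# are illustrative, not the study's. Positive values indicate a
# right-ear (left-hemisphere) advantage, negative a left-ear advantage.
def laterality_index(right_ear_correct: int, left_ear_correct: int) -> float:
    """(R - L) / (R + L), ranging from -1.0 to +1.0."""
    total = right_ear_correct + left_ear_correct
    return (right_ear_correct - left_ear_correct) / total if total else 0.0

# e.g., a listener localising happy targets in a blocked condition
print(laterality_index(right_ear_correct=34, left_ear_correct=26))  # 0.133...
```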

6.
Emotion recognition is mediated by a complex network of cortical and subcortical areas, with the two hemispheres likely being differently involved in processing positive and negative emotions. As results on valence-dependent hemispheric specialisation are quite inconsistent, we carried out three experiments with emotional stimuli using a task sensitive to hemisphere-specific processing. Participants were required to bisect visual lines delimited by emotional face flankers, or to haptically bisect rods while concurrently listening to emotional vocal expressions. We found that prolonged (but not transient) exposure to concurrent happy stimuli significantly shifted the bisection bias to the right compared with both sad and neutral stimuli, indexing a greater involvement of the left hemisphere in processing positively connoted stimuli. No differences between sad and neutral stimuli were observed across the experiments. In sum, our data provide consistent evidence for a greater involvement of the left hemisphere in processing positive emotions and suggest that (prolonged) exposure to stimuli expressing happiness significantly affects the allocation of (spatial) attentional resources, regardless of the sensory modality (visual/auditory) in which the emotion is perceived and the modality (visual/haptic) in which space is explored.
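Bisection performance is usually scored as the signed deviation of the marked midpoint from the true midpoint, with positive values meaning a rightward shift. The sketch below illustrates that scoring on invented data; it is a convention, not the authors' code.

```python
# Scoring line-bisection trials; data are illustrative. Bias is the
# signed deviation of the marked midpoint from the true midpoint, with
# positive values meaning a rightward shift (here linked to happy flankers).
import numpy as np

def bisection_bias(marked_x: float, start_x: float, end_x: float) -> float:
    """Signed deviation (same units as x) from the objective midpoint."""
    true_mid = (start_x + end_x) / 2.0
    return marked_x - true_mid

# Marked positions on a 200-mm line (0 to 200 mm) under happy flankers.
marks = [103.0, 105.5, 101.2, 104.1]
biases = [bisection_bias(m, 0.0, 200.0) for m in marks]
print(f"mean bias = {np.mean(biases):+.1f} mm (positive = rightward)")
```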

7.
Infants' ability to discriminate emotional facial expressions and tones of voice is well-established, yet little is known about infant discrimination of emotional body movements. Here, we asked if 10–20-month-old infants rely on high-level emotional cues or low-level motion-related cues when discriminating between emotional point-light displays (PLDs). In Study 1, infants viewed 18 pairs of angry, happy, sad, or neutral PLDs. Infants looked more at angry vs. neutral, happy vs. neutral, and neutral vs. sad. Motion analyses revealed that infants preferred the PLD with more total body movement in each pairing. Study 2, in which infants viewed inverted versions of the same pairings, yielded similar findings except for sad-neutral. Study 3 directly paired all three emotional stimuli in both orientations. The angry and happy stimuli did not significantly differ in terms of total motion, but both had more motion than the sad stimuli. Infants looked more at angry vs. sad, more at happy vs. sad, and about equally at angry vs. happy in both orientations. Again, therefore, infants preferred PLDs with more total body movement. Overall, the results indicate that a low-level motion preference may drive infants' discrimination of emotional human walking motions.
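The "total body movement" measure can be operationalised as the summed frame-to-frame displacement of all point-lights. The sketch below assumes a (frames × points × 2D) coordinate array; the array layout and simulated walkers are illustrative assumptions, not the study's stimuli.

```python
# Total body movement of a point-light display: summed frame-to-frame
# displacement across all points. The (n_frames, n_points, 2) layout
# is an assumption for illustration.
import numpy as np

def total_motion(pld: np.ndarray) -> float:
    """pld has shape (n_frames, n_points, 2); returns summed displacement."""
    displacements = np.linalg.norm(np.diff(pld, axis=0), axis=2)
    return float(displacements.sum())

rng = np.random.default_rng(2)
angry_pld = np.cumsum(rng.normal(0, 1.5, (90, 13, 2)), axis=0)  # more movement
sad_pld = np.cumsum(rng.normal(0, 0.5, (90, 13, 2)), axis=0)    # less movement
print(total_motion(angry_pld) > total_motion(sad_pld))  # True: angry moves more
```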

8.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expression of emotions. Participants diagnosed with major depression and social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed.
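The dependent measure here is the lowest morph intensity at which a participant correctly labels the emotion. A minimal sketch of extracting that threshold from hypothetical trial records:

```python
# Extracting an identification threshold from a morph sequence ramping
# from neutral (0%) to full intensity (100%); trial records are invented.
# The threshold is the lowest intensity first labelled correctly.
def identification_threshold(responses: list[tuple[int, bool]]) -> int | None:
    """responses: (intensity_percent, correct) pairs."""
    for intensity, correct in sorted(responses):
        if correct:
            return intensity
    return None  # never identified

happy_trials = [(10, False), (20, False), (30, False), (40, True), (50, True)]
print(identification_threshold(happy_trials))  # 40 (% intensity)
```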

9.
We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions on a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotion word (happy/sad/angry). Proactive control effects were measured as the reduction in Stroop interference (the difference between incongruent and congruent trials) as a function of previous-trial emotion and previous-trial congruence. Reactive control effects were measured as the reduction in Stroop interference as a function of current-trial emotion and previous-trial congruence. Negative emotions on the previous trial exerted a greater influence on proactive control than the positive emotion did. Sad faces on the previous trial produced a greater reduction in Stroop interference for happy faces on the current trial. However, current-trial angry faces showed stronger adaptation effects than happy faces. Thus, both proactive and reactive control mechanisms depend on the emotional valence of task-relevant stimuli.
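Stroop interference is the incongruent-minus-congruent reaction-time difference, and sequence effects condition that difference on the previous trial's congruence. The sketch below computes it on simulated data; the simulation builds in a flat cost and no true adaptation, so it illustrates the scoring only, not the authors' results or pipeline.

```python
# Scoring congruence-sequence effects on simulated face-word Stroop data.
# A flat 60-ms incongruency cost is built in and no adaptation, so both
# interference values should land near 60 ms.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 400
trials = pd.DataFrame({
    "congruent": rng.integers(0, 2, n).astype(bool),
    "rt": rng.normal(650, 80, n),
})
trials.loc[~trials["congruent"], "rt"] += 60       # incongruent cost
trials["prev_congruent"] = trials["congruent"].shift(1)
trials = trials.dropna()                           # drop the first trial

for prev, group in trials.groupby("prev_congruent"):
    interference = (group.loc[~group["congruent"], "rt"].mean()
                    - group.loc[group["congruent"], "rt"].mean())
    label = "after congruent" if prev else "after incongruent"
    print(f"{label}: interference = {interference:.0f} ms")
```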

10.
Drivers' emotions significantly affect their driving performance and are therefore relevant to driving safety. The objective of this study was to examine how taxi drivers' on-duty emotional states are associated with their driving speed in real driving situations. An experiment was conducted with 15 taxi drivers in Hiroshima, Japan over 15 consecutive days in 2019. A biometric device tracked drivers' emotional states while on duty; the five examined states were happy, angry, relaxed, sad, and neutral. Random-effects panel regression results revealed that the negative emotions of taxi drivers (angry and sad) significantly increase driving speed. In contrast, a neutral emotional state is associated with decreased speed, while the happy and relaxed emotional states show no significant effect. Moreover, we found that factors such as driving with customers, driving long hours, and the number of break hours are significantly associated with driving speed. This study contributes to the literature by providing empirical evidence on the role emotional states play in explaining driving speed in real-life driving, in contrast to studies that use simulated driving or mood-induction procedures.
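As a rough illustration of this kind of analysis, the sketch below fits a random-intercept model of speed on emotional state with driver as the grouping factor, using statsmodels' MixedLM on simulated data. The specification and data are stand-ins for the authors' random-effects panel regression, not a reproduction of it.

```python
# Random-intercept model of speed on emotional state (reference: neutral),
# with driver as the grouping factor, fit on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for driver in range(15):                      # 15 drivers, as in the study
    driver_effect = rng.normal(0, 3)          # stable per-driver speed offset
    for _ in range(200):
        emotion = str(rng.choice(["neutral", "happy", "relaxed", "angry", "sad"]))
        speed = (40 + driver_effect
                 + {"angry": 5, "sad": 3}.get(emotion, 0)   # built-in effects
                 + rng.normal(0, 5))
        rows.append({"driver": driver, "emotion": emotion, "speed": speed})
df = pd.DataFrame(rows)

model = smf.mixedlm("speed ~ C(emotion, Treatment('neutral'))",
                    data=df, groups=df["driver"]).fit()
print(model.summary())                        # angry/sad coefficients near +5/+3
```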

11.
Previous binocular rivalry studies with younger adults have shown that emotional stimuli dominate perception over neutral stimuli. Here we investigated the effects of age on patterns of emotional dominance during binocular rivalry. Participants performed a face/house rivalry task in which the emotion of the face (happy, angry, neutral) and the orientation (upright, inverted) of the face and house stimuli were varied systematically. Age differences were found, with younger adults showing a general emotionality effect (happy and angry faces were more dominant than neutral faces) and older adults showing inhibition of anger (neutral faces were more dominant than angry faces) and positivity effects (happy faces were more dominant than both angry and neutral faces). Age differences in dominance patterns were reflected in slower rivalry rates for both happy and angry compared with neutral face/house pairs in younger adults, and slower rivalry rates for happy compared with both angry and neutral face/house pairs in older adults. Importantly, these patterns of emotional dominance and slower rivalry rates for emotional-face/house pairs disappeared when the stimuli were inverted. This suggests that emotional valence, and not low-level image features, was responsible for the emotional bias in both age groups. Given that binocular rivalry allows only limited voluntary control, the findings imply that anger suppression and positivity effects in older adults may extend to more automatic tasks.
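Two standard rivalry measures appear in this abstract: dominance (how much of the trial one percept is reported) and rivalry rate (perceptual switches per unit time). A minimal sketch with invented button-press data, as a convention rather than the authors' code:

```python
# Two standard binocular-rivalry measures on hypothetical button-press
# records: dominance proportion for the face percept, and rivalry rate.
def dominance_proportion(face_durations, house_durations):
    """Fraction of total reported time the face percept was dominant."""
    face, house = sum(face_durations), sum(house_durations)
    return face / (face + house)

def rivalry_rate(n_switches: int, trial_seconds: float) -> float:
    """Perceptual alternations per second over the trial."""
    return n_switches / trial_seconds

# e.g., a happy-face/house pair in a 60-s trial
print(dominance_proportion([4.1, 3.8, 5.0], [2.2, 2.5, 1.9]))  # face dominance
print(rivalry_rate(n_switches=18, trial_seconds=60.0))         # slower = fewer
```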

12.
Much research on emotional facial expression employs posed expressions and expressive subjects. To test the generalizability of this research to more spontaneous expressions of both expressive and nonexpressive posers, subjects engaged in happy, sad, angry, and neutral imagery, and voluntarily posed happy, sad, and angry facial expressions while facial muscle activity (brow, cheek, and mouth regions) and autonomic activity (skin resistance and heart period) were recorded. Subjects were classified as expressive or nonexpressive on the basis of the intensity of their posed expressions. The posed and imagery-induced expressions were similar, but not identical. Brow activity present in the imagery-induced sad expressions was weak or absent in the posed ones. Both nonexpressive and expressive subjects demonstrated similar heart rate acceleration during emotional imagery and demonstrated similar posed and imagery-induced happy expressions, but nonexpressive subjects showed little facial activity during both their posed and imagery-induced sad and angry expressions. The implications of these findings are discussed.

13.
Three experiments revealed that music lessons promote sensitivity to emotions conveyed by speech prosody. After hearing semantically neutral utterances spoken with emotional (i.e., happy, sad, fearful, or angry) prosody, or tone sequences that mimicked the utterances' prosody, participants identified the emotion conveyed. In Experiment 1 (n=20), musically trained adults performed better than untrained adults. In Experiment 2 (n=56), musically trained adults outperformed untrained adults at identifying sadness, fear, or neutral emotion. In Experiment 3 (n=43), 6-year-olds were tested after being randomly assigned to 1 year of keyboard, vocal, drama, or no lessons. The keyboard group performed equivalently to the drama group and better than the no-lessons group at identifying anger or fear.

14.
Despite significant advancements in the research of subjective well-being (SWB), little is known about its connection with basic cognitive processes. The present study explores the association between selective attention to emotional stimuli (i.e. emotional faces) and both the emotional and cognitive components of SWB (i.e. emotional well-being and life satisfaction, respectively). Participants (N = 83) were asked to freely watch a series of 84 pairs of emotional (happy, angry, or sad) and neutral faces from the Karolinska Directed Emotional Faces database. Eye-tracking methodology measured first fixations, the number of fixations, and the time spent looking at emotional faces. Results showed that both the emotional and cognitive components of SWB were related to a general bias to attend to happy faces and avoid sad faces. Yet, bootstrapping analyses showed that positive emotions, rather than life satisfaction, were responsible for the positive information-processing bias. We discuss the potential functionality of these biases and their implications for research on positive emotions.
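A typical dwell-time bias score is the proportion of looking time spent on the emotional member of each face pair, with values above 0.5 indicating a bias toward it. The sketch below computes that score on simulated dwell times; it is an illustration of the convention, not the study's analysis.

```python
# An attentional-bias score from dwell times, assuming hypothetical
# per-pair records for the emotional and the paired neutral face;
# values above 0.5 indicate a bias toward the emotional face.
import numpy as np

def dwell_bias(emotional_ms: np.ndarray, neutral_ms: np.ndarray) -> float:
    """Mean proportion of dwell time on the emotional face across pairs."""
    total = emotional_ms + neutral_ms
    return float(np.mean(emotional_ms / total))

rng = np.random.default_rng(5)
happy_dwell = rng.normal(1700, 200, 28)    # ms on happy face per pair
neutral_dwell = rng.normal(1300, 200, 28)  # ms on paired neutral face
print(f"bias toward happy faces: {dwell_bias(happy_dwell, neutral_dwell):.2f}")
```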

15.
Pell MD. Brain and Cognition, 2002, 48(2-3): 499-504.
This report describes some preliminary attributes of stimuli developed for future evaluation of nonverbal emotion in neurological populations with acquired communication impairments. Facial and vocal exemplars of six target emotions were elicited from four male and four female encoders and then prejudged by 10 young decoders to establish the category membership of each item at an acceptable consensus level. Representative stimuli were then presented to 16 additional decoders to gather indices of how category membership and encoder gender influenced recognition accuracy of emotional meanings in each nonverbal channel. Initial findings pointed to greater facility in recognizing target emotions from facial than vocal stimuli overall and revealed significant accuracy differences among the six emotions in both the vocal and facial channels. The gender of the encoder portraying emotional expressions was also a significant factor in how well decoders recognized specific emotions (disgust, neutral), but only in the facial condition.

16.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier, e.g. perceptual processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent compared to incongruent face–voice pairs in both the Attend Face and in the Attend Voice condition. Moreover, when attending to faces, emotionally congruent bimodal stimuli were more efficiently processed than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

17.
The tendency to express emotions non-verbally is positively related to perception of emotions in oneself. This study examined its relationship to perception of emotions in others. In 40 healthy adults, EEG theta synchronization was used to indicate emotion processing following presentation of happy, angry, and neutral faces. Both positive and negative expressiveness were associated with higher emotional sensitivity, as shown by cortical responses to facial expressions during the early, unconscious processing stage. At the late, conscious processing stage, positive expressiveness was associated with higher sensitivity to happy faces but lower sensitivity to angry faces. Thus, positive expressiveness predisposes people to allocate fewer attentional resources for conscious perception of angry faces. In contrast, negative expressiveness was consistently associated with higher sensitivity. The effects of positive expressiveness occurred in cortical areas that deal with emotions, but the effects of negative expressiveness occurred in areas engaged in self-referential processes in the context of social relationships.
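Theta synchronization is commonly quantified as the percent change in 4-8 Hz band power after stimulus onset relative to a pre-stimulus baseline. The sketch below demonstrates that computation on a simulated one-channel signal; the filter settings and epoch lengths are assumptions, not the study's pipeline.

```python
# Event-related theta synchronization: percent power change in the
# 4-8 Hz band after stimulus onset relative to a pre-stimulus baseline.
# The signal is simulated; real pipelines would use recorded EEG epochs.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # sampling rate in Hz
rng = np.random.default_rng(6)
eeg = rng.normal(0, 1, 2 * fs)             # 2 s: 1 s baseline + 1 s post-stimulus
t = np.arange(fs) / fs
eeg[fs:] += 2 * np.sin(2 * np.pi * 6 * t)  # inject theta after face onset

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
theta = filtfilt(b, a, eeg)
power = theta ** 2

baseline = power[:fs].mean()
post = power[fs:].mean()
ers = 100 * (post - baseline) / baseline
print(f"theta ERS = {ers:.0f}% (positive = synchronization)")
```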

18.
To clarify the emotional bias of individuals with high trait anxiety by probing how they process emotional stimuli at the pre-attentive stage, this study used a deviant-standard reverse Oddball paradigm to examine the influence of trait anxiety on the pre-attentive processing of facial expressions. Results showed that in the low trait-anxiety group, the early EMMN elicited by sad faces was significantly larger than that elicited by happy faces, whereas in the high trait-anxiety group, the early EMMNs elicited by happy and sad faces did not differ significantly. Moreover, the EMMN amplitude for happy faces was significantly larger in the high trait-anxiety group than in the low trait-anxiety group. These results indicate that personality traits are an important factor influencing the pre-attentive processing of facial expressions. Unlike typical participants, individuals with high trait anxiety process happy and sad faces similarly at the pre-attentive stage and may have difficulty effectively distinguishing between happy and sad emotional faces.

19.
This study examined how 42 undergraduates (21 Chinese, 21 Polish) judged the emotion type and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones of voice (happy, angry, fearful, sad, and neutral), in order to analyse differences in emotion perception from vocal cues between Chinese and Polish cultural backgrounds. Results showed that: (1) Chinese participants identified the emotion type more accurately and gave higher intensity ratings than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants identified the emotion type more accurately, and rated emotional intensity higher, for female voices than for male voices; (3) in judging emotion type, participants recognised fear more accurately than happiness, sadness, and neutrality, with neutral emotion recognised least accurately; (4) in intensity ratings, fear was rated as more intense than sadness, and happiness received the lowest intensity ratings.

20.
The goal of this research was to examine the effects of facial expressions on the speed of sex recognition. Prior research revealed that sex recognition of female angry faces was slower than that of male angry faces, and that female happy faces were recognized faster than male happy faces. We aimed to replicate and extend this work by using a different set of facial stimuli and a different methodological approach, and by examining the effects of some previously unexplored expressions (such as crying) on the speed of sex recognition. In the first experiment, we presented facial stimuli of men and women displaying anger, fear, happiness, sadness, and crying, plus three control conditions expressing no emotion. Results showed that sex recognition of angry female faces was significantly slower than sex recognition in any other condition, while sad, crying, happy, frightened, and neutral expressions did not affect the speed of sex recognition. In the second experiment, we presented angry, neutral, and crying expressions in blocks, and again only sex recognition of female angry expressions was slower than all other expressions. The results are discussed in the context of the perceptual features of male and female facial configurations, evolutionary theory, and social learning.
