Similar Documents
Found 20 similar documents (search time: 31 ms).
1.
Subjects' facial expressions were videotaped without their knowledge while they watched two pleasant and two unpleasant videotaped scenes (spontaneous facial encoding). Later, subjects' voices were audiotaped while describing their reactions to the scenes (vocal encoding). Finally, subjects were videotaped with their knowledge while they posed appropriate facial expressions to the scenes (posed facial encoding). The videotaped expressions were presented for decoding to the same subjects. The vocal material, both the original version and an electronically filtered version, was rated by judges other than the original senders. Results were as follows: (a) accuracy of vocal encoding (measured by ratings of both the filtered and unfiltered versions) was positively related to accuracy of facial encoding; (b) posing increased the accuracy of facial communication, particularly for more pleasant affects and less intense affects; (c) encoding of posed cues was correlated with encoding of spontaneous cues and decoding of posed cues was correlated with decoding of spontaneous cues; (d) correlations, within encoding and decoding, of similar scenes were positive while those among dissimilar scenes were low or negative; (e) while correlations between total encoding and total decoding were positive and low, correlations between encoding and decoding of the same scene were negative; (f) there were sex differences in decoding ability and in the relationships of personality variables with encoding and decoding of facial cues.

2.
王异芳  苏彦捷  何曲枝 《心理学报》(Acta Psychologica Sinica) 2012,44(11):1472-1478
Starting from two vocal cues, prosody and semantics, this study examined the development of preschool children's emotion perception from voice. In Experiment 1, 124 children aged 3 to 5 judged the emotion type of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral). Children's ability to perceive emotion from prosodic cues improved with age, mainly for the angry, fearful, and neutral tones. Developmental trajectories differed across emotion types: overall, happy prosody was the easiest to recognize and fearful prosody the hardest. When prosodic and semantic cues conflicted, preschoolers relied more on prosody to judge the speaker's emotional state. Participants were more sensitive to emotions conveyed by female voices.

3.
This study examined how 42 college students (21 Chinese, 21 Polish) judged the emotion type and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral), in order to analyze cross-cultural differences in voice-based emotion perception between Chinese and Polish listeners. Results showed that: (1) Chinese participants were more accurate in judging emotion type and gave higher intensity ratings than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants recognized emotion type more accurately, and rated intensity higher, for female than for male voice materials; (3) in emotion-type judgments, fear was recognized more accurately than happiness, sadness, and neutrality, with neutral recognized least accurately; (4) in intensity ratings, fear was rated as more intense than sadness, and happiness was rated least intense.

4.
The research examined whether subjects with hearing impairment would differ from normal hearing subjects in their ability to decode emotions from video stimuli (48 video takes in which two actors portrayed six different emotions). Studies in tactile and visual perception lead one to expect deficits, while there is also some evidence for compensation. Twenty-six subjects with hearing impairment and 26 matched normal hearing subjects participated (average age = 25.5 years; nine female, 17 male subjects in each group). Results indicate that in general subjects with hearing impairment were slightly less successful in decoding emotions from the visual stimuli than the normal hearing subjects. A comparison between highly (loss > 60–90 dBA) and moderately (loss about 30–60 dBA) impaired subjects on the other hand indicated poorer emotion decoding only for the moderately impaired group. Post-hoc analyses indicated that these effects were specific to males. Results are discussed with respect to compensation versus deficit, and with respect to issues of training.

5.
The aim of this study is to gain insight into the interpersonal process of emotional labor, and the role of positive emotions in the interaction between the sender and receiver, while taking both the perspective of the sender and the receiver into account. We tested the influence of the perceived display of positive emotions of Dutch trainee police officers (N = 80) during an interaction with offenders on perceived authenticity and perceived performance success, incorporating the senders’ emotion regulation technique (i.e., deep acting and surface acting). Consistent with hypotheses, results of structural equation modeling analyses showed that perceived authenticity mediates the relationship between the perceived display of positive emotions and perceived performance success, while the specific senders’ emotion regulation technique was not related to perceived performance success. Furthermore, results showed that perceived performance success mediated the relationship between the perceived display of positive emotions and senders’ felt positive emotions after the interaction, controlling for senders’ positive affect.

6.
Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to that of posed stimulus sets.

7.
Facial expressions of emotion are key cues to deceit (M. G. Frank & P. Ekman, 1997). Given that the literature on aging has shown an age-related decline in decoding emotions, we investigated (a) whether there are age differences in deceit detection and (b) if so, whether they are related to impairments in emotion recognition. Young and older adults (N = 364) were presented with 20 interviews (crime and opinion topics) and asked to decide whether each interview subject was lying or telling the truth. There were 3 presentation conditions: visual, audio, or audiovisual. In older adults, reduced emotion recognition was related to poor deceit detection in the visual condition for crime interviews only.

8.
The authors investigated young children's ability to decode the emotions of happiness and anger expressed by their parent and an adult stranger. Parents and adult strangers (encoders) were videotaped while describing events that had elicited happiness or anger. Children viewed brief clips edited from these videotapes and indicated the emotion that their parent or the stranger was expressing. With male encoders, only children's age predicted accuracy. With female encoders, mothers' expressive style and children's age interacted to predict children's decoding accuracy. Compared with older children of less positively expressive mothers, older children of more positively expressive mothers were more accurate overall, because they were better at recognizing happiness. In general, children were no more or less accurate in decoding their parent's emotions than they were in decoding an unknown adult's emotions.

9.
A laboratory study of early dating and married/cohabiting couples showed that perceived appropriateness of emotion expression was lowest for early daters' negative emotions. Partners in more developed relationships managed positive emotions less than negative emotions and less than early daters managed either negative or positive emotions. Biological sex moderated the effect of valence and relationship level on discrepancy scores, the greatest differences between stages being for males' positive emotions and females' negative emotions. A second study using partners across all stages of relationship development found evidence of a curvilinear pattern for relationship length on discrepancy scores. More management of negative emotions was reported by partners in early and later stages of relationship development. Perceived appropriateness of emotion expression was found to increase with relationship development. Females' expressions of emotion were considered least appropriate in early-stage relationships. Together the results provide evidence of display rule evolution as relationships develop.

10.
Tasks assessing theory of mind (ToM) and non-mental state control tasks were administered to young and older adults to examine previous contradictory findings about age differences in mental state decoding. Age differences were found on a verbal ToM task after controlling for vocabulary levels. Older adults achieved significantly lower scores than did younger adults on static and dynamic visual ToM tasks, and a similar pattern was found on non-ToM control tasks. Rather than a specific ToM deficit, older adults exhibited a more general impairment in the ability to decode cues from verbal and visual information about people.

11.
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person’s true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer’s left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer’s left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person’s left ear, which also avoids the social stigma of eye-to-eye contact, one’s ability to decode facial expressions should be enhanced.

12.
The authors investigated young children's ability to decode the emotions of happiness and anger expressed by their parent and an adult stranger. Parents and adult strangers (encoders) were videotaped while describing events that had elicited happiness or anger. Children viewed brief clips edited from these videotapes and indicated the emotion that their parent or the stranger was expressing. With male encoders, only children's age predicted accuracy. With female encoders, mothers' expressive style and children's age interacted to predict children's decoding accuracy. Compared with older children of less positively expressive mothers, older children of more positively expressive mothers were more accurate overall, because they were better at recognizing happiness. In general, children were no more or less accurate in decoding their parent's emotions than they were in decoding an unknown adult's emotions.

13.
《Brain and cognition》2014,84(3):252-261
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person’s true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer’s left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer’s left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person’s left ear, which also avoids the social stigma of eye-to-eye contact, one’s ability to decode facial expressions should be enhanced.

14.
Previous research has highlighted theoretical and empirical links between measures of both personality and trait emotional intelligence (EI), and the ability to decode facial expressions of emotion. Research has also found that the posed, static characteristics of the photographic stimuli used to explore these links affect the decoding process and differentiate them from the natural expressions they represent. This undermines the ecological validity of established trait-emotion decoding relationships. This study addresses these methodological shortcomings by testing relationships between the reliability of participant ratings of dynamic, spontaneously elicited expressions of emotion with personality and trait EI. Fifty participants completed personality and self-report EI questionnaires, and used a computer-logging program to continuously rate change in emotional intensity expressed in video clips. Each clip was rated twice to obtain an intra-rater reliability score. The results provide limited support for links between both trait EI and personality variables and how reliably we decode natural expressions of emotion. Limitations and future directions are discussed.

15.
This paper describes the development and preliminary studies of a test of the ability to decode affect in others. Items are videotaped sequences showing spontaneous unposed facial expressions and gestures of college student “senders” to emotionally-loaded color slides. Subjects make judgments about what kind of slide elicited the affect and how pleasant or unpleasant the sender's subjective response was. Satisfactory test-retest reliability was demonstrated for the former kind of judgment but not the latter. There was evidence that females are slightly better receivers than males, and business and fine arts majors are relatively good receivers while science majors are relatively poor.

16.
The present study examined the relation between children's abilities to decode the emotional meanings in facial expressions and tones of voice, and their popularity, locus of control or reinforcement orientation, and academic achievement. Four hundred fifty-six elementary school children were given tests that measured their abilities to decode emotions in facial expressions and tones of voice. Children who were better at decoding nonverbal emotional information in faces and tones of voice were more popular, more likely to be internally controlled, and more likely to have higher academic achievement scores. The results were interpreted as supporting the importance of nonverbal communication in the academic as well as the social realms.

17.
Emotion processing impairments are common in patients undergoing brain surgery for fronto-temporal tumour resection, with potential consequences on social interactions. However, evidence is controversial concerning side and site of lesions causing such deficits. This study investigates visual and auditory emotion recognition in brain tumour patients with the aim of clarifying which lesion sites are related to impairments in emotion processing from different modalities. Thirty-four patients were evaluated, before and after surgery, on facial expression and emotional prosody recognition; voxel-based lesion–symptom mapping (VLSM) analyses were performed on patients’ post-surgery MRI images. Results showed that patients’ performance decreased after surgery in both visual and auditory modalities, but, in general, recovered 3 months after surgery. In facial expression recognition, left brain-damaged patients showed greater post-surgery deterioration than right brain-damaged ones, whose performance specifically decreased for sadness and fear. VLSM analysis revealed two segregated areas in the left hemisphere accounting for post-surgery scores for happy (fronto-temporo-insular region) and surprised (middle frontal gyrus and inferior fronto-occipital fasciculus) facial expressions. Our findings demonstrate that surgical removal of tumours in the fronto-temporal region produces impairment in facial emotion recognition with an overall recovery at 3 months, suggesting a partially different representation of positive and negative emotions in the left and right hemispheres for visually – but not auditory – presented emotions; moreover, we show that deficits in specific expression recognition are associated with discrete lesion locations.

18.
Body dysmorphic disorder (BDD) is characterized by perceived appearance-related defects, often tied to aspects of the face or head (e.g., acne). Deficits in decoding emotional expressions have been examined in several psychological disorders including BDD. Previous research indicates that BDD is associated with impaired facial emotion recognition, particularly in situations that involve the BDD sufferer him/herself. The purpose of this study was to further evaluate the ability to read other people's emotions among 31 individuals with BDD, and 31 mentally healthy controls. We applied the Reading the Mind in the Eyes task, in which participants are presented with a series of pairs of eyes, one at a time, and are asked to identify the emotion that describes the stimulus best. The groups did not differ with respect to decoding other people's emotions by looking into their eyes. Findings are discussed in light of previous research examining emotion recognition in BDD.

19.
Due to mood-congruency effects, we expect the emotion perceived on a face to be biased towards one's own mood. But the findings in the scant literature on such mood effects in normal healthy populations have not consistently and adequately supported this expectation. Employing effective mood manipulation techniques that ensured that the intended mood was sustained throughout the perception task, we explored mood-congruent intensity and recognition accuracy biases in emotion perception. Using realistic face stimuli with expressive cues of happiness and sadness, we demonstrated that happy, neutral and ambiguous expressions were perceived more positively in the positive than in the negative mood. The mood-congruency effect decreased with the degree of perceived negativity in the expression. Also, males were more affected by the mood-congruency effect in intensity perception than females. We suggest that the greater salience and better processing of negative stimuli and the superior cognitive ability of females in emotion perception are responsible for these observations. We found no evidence for mood-congruency effect in the recognition accuracy of emotions and suggest with supporting evidence that past reports of this effect may be attributed to response bias driven by mood.

20.
Although the configurations of psychoacoustic cues signalling emotions in human vocalizations and instrumental music are very similar, cross‐domain links in recognition performance have yet to be studied developmentally. Two hundred and twenty 5‐ to 10‐year‐old children were asked to identify musical excerpts and vocalizations as happy, sad, or fearful. The results revealed age‐related increases in overall recognition performance with significant correlations across vocal and musical conditions at all developmental stages. Recognition scores were greater for musical than vocal stimuli and were superior in females compared with males. These results confirm that recognition of emotions in vocal and musical stimuli is linked by 5 years and that sensitivity to emotions in auditory stimuli is influenced by age and gender.
