Similar Literature
1.
Although China is a multi-ethnic country comprising diverse cultural systems, there has been little research on subcultural differences in emotional preferences within China, and little attention has been paid to how explicit and implicit attitudes towards emotions jointly influence those preferences. In this study, we manipulated explicit attitudes towards emotions among Han (N = 62) and Mongolian Chinese (N = 70) individuals and assessed participants' implicit attitudes towards emotions to explore their contributions to emotional preferences. We found that (a) Han Chinese showed lower preferences for pleasant emotions than Mongolian Chinese after contra-hedonic attitudes towards emotions were induced, and (b) after priming contra-hedonic attitudes, the more strongly Han Chinese participants implicitly evaluated pleasant emotions as negative, the less they preferred to engage in pleasant emotional activities. These findings contribute to the growing literature on subcultural differences and demonstrate that explicit and implicit attitudes towards emotions interactively influence emotional preferences across subcultural groups.

2.
Synthetic images of facial expression were used to assess whether judges can correctly recognize emotions exclusively on the basis of configurations of facial muscle movements. A first study showed that static, synthetic images modeled after a series of photographs that are widely used in facial expression research yielded recognition rates and confusion patterns comparable to posed photos. In a second study, animated synthetic images were used to examine whether schematic facial expressions consisting entirely of theoretically postulated facial muscle configurations can be correctly recognized. Recognition rates for the synthetic expressions were far above chance, and the confusion patterns were comparable to those obtained with posed photos. In addition, the effect of static versus dynamic presentation of the expressions was studied. Dynamic presentation increased overall recognition accuracy and reduced confusions between unrelated emotions.

3.
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.

4.
This study investigated how the cultural match or mismatch between observer and expresser can affect the accuracy of judgements of facial emotion, and how acculturation can affect cross-cultural recognition accuracy. The sample consisted of 51 Caucasian-Australians, 51 people of Chinese heritage living in Australia (PCHA) and 51 Mainland Chinese. Participants were required to identify the emotions of happiness, sadness, fear, anger, surprise and disgust displayed in photographs of Caucasian and Chinese faces. The PCHA group also responded to an acculturation measure that assessed their adoption of Australian cultural values and adherence to heritage (Chinese) cultural values. Counter to the hypotheses, the Caucasian-Australian and PCHA groups were significantly more accurate at identifying both the Chinese and Caucasian facial expressions than the Mainland Chinese group. Adoption of Australian culture predicted greater accuracy in recognising the emotions displayed on Caucasian faces for the PCHA group.

5.
Film clips are widely used in emotion research due to their relatively high ecological validity. Although researchers have established various film clip sets for different cultures, the few that exist related to Chinese culture do not adequately address positive emotions. The main purposes of the present study were to establish a standardised database of Chinese emotional film clips that could elicit more categories of reported positive emotions compared to the existing databases and to expand the available film clips that can be used as neutral materials. Two experiments were conducted to construct the database. In experiment 1, 111 film clips were selected from more than one thousand Chinese movies for preliminary screening. After 315 participants viewed and evaluated these film clips, 39 excerpts were selected for further validation. In experiment 2, 147 participants watched and rated these 39 film clips, as well as another 8 excerpts chosen from the existing databases, to compare their validity. Eventually, 22 film excerpts that successfully evoked three positive emotions (joy, amusement, and tenderness), four negative emotions (moral disgust, anger, fear, and sadness), and neutrality formed the standardised database of Chinese emotional film clips.

6.
Perceived relationships among nine facial expressions were represented in a two-dimensional configuration derived by multidimensional scaling. The two dimensions operative for Chinese judges were interpreted as positive versus negative emotions and open versus controlled styles of expression. While there was some consensus in identifying seven of the nine intended emotions, interest-excitement and disgust-revulsion were often not recognized. Implications of cross-cultural comparisons in identification rates are discussed.

7.
To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) were produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan.liu@mail.mcgill.ca.
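For readers who want to compute the kind of acoustic profile reported above (mean f0, f0 variation, amplitude variation), the following is a minimal sketch using the librosa library; the file name and pitch bounds are illustrative assumptions rather than details of the published database, and the harmonics-to-noise ratio would additionally require a Praat-style tool, which is omitted here.

```python
# Minimal sketch of per-file acoustic measures similar to those analysed above.
# Assumes a mono WAV file; pitch bounds are illustrative, not from the database.
import numpy as np
import librosa

def acoustic_profile(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    # Frame-wise fundamental frequency (f0) via probabilistic YIN
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]                 # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]      # frame-wise amplitude envelope
    return {
        "mean_f0_hz": float(np.mean(f0)) if f0.size else None,
        "f0_sd_hz": float(np.std(f0)) if f0.size else None,
        "amplitude_sd": float(np.std(rms)),
        "duration_s": len(y) / sr,         # combine with a syllable count for speech rate
    }

# Example (hypothetical file): acoustic_profile("anger_speaker1_item03.wav")
```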

8.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion category and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones (happiness, anger, fear, sadness, and neutrality), in order to analyse differences in voice-based emotion perception between Chinese and Polish cultural backgrounds. The results showed that: (1) Chinese participants identified the emotion categories more accurately and gave higher intensity ratings than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants identified emotion categories more accurately, and rated emotional intensity higher, for female than for male voice materials; (3) in category judgements, fear was recognised more accurately than happiness, sadness, and neutrality, with neutrality recognised least accurately; and (4) in intensity ratings, fear was rated as more intense than sadness, and happiness received the lowest intensity ratings.

9.
Perceptual advantages for own-race compared to other-race faces have been demonstrated for the recognition of facial identity and expression. However, these effects have not been investigated in the same study with measures that can determine the extent of cross-cultural agreement as well as differences. To address this issue, we used a photo sorting task in which Chinese and Caucasian participants were asked to sort photographs of Chinese or Caucasian faces by identity or by expression. This paradigm matched the task demands of identity and expression recognition and avoided constrained forced-choice or verbal labelling requirements. Other-race effects of comparable magnitude were found across the identity and expression tasks. Caucasian participants made more confusion errors for the identities and expressions of Chinese than Caucasian faces, while Chinese participants made more confusion errors for the identities and expressions of Caucasian than Chinese faces. However, analyses of the patterns of responses across groups of participants revealed a considerable amount of underlying cross-cultural agreement. These findings suggest that widely repeated claims that members of other cultures “all look the same” overstate the cultural differences.

10.
High- and low-trait socially anxious individuals classified the emotional expressions of photographic-quality continua of interpolated ("morphed") facial images, created by blending pairs of the 6 basic prototype expressions in varying proportions, with the 2 component emotions always adjacent in an emotion hexagon. When fear was one of the two component emotions, the high-trait group displayed enhanced sensitivity to fear. In a second experiment that incorporated a mood manipulation, the high-trait group again exhibited enhanced sensitivity to fear, whereas the low-trait group was sensitive to happiness in the control condition. The mood-manipulated group showed increased sensitivity to anger expressions, and trait anxiety did not moderate these effects. Interpretations of the results relating to the classification of fearful expressions are discussed.
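As a rough illustration of how such expression continua are built, the sketch below linearly blends two aligned prototype images in different proportions; published morphs also warp facial landmark geometry, which this simplified example (with hypothetical file names) does not attempt.

```python
# Simplified sketch of an expression continuum between two prototype images.
# Real morphing also interpolates facial landmark geometry; this only
# illustrates the mixing-proportion idea. File names are hypothetical.
import numpy as np
from PIL import Image

def blend_continuum(path_a, path_b, steps=5):
    a = np.asarray(Image.open(path_a).convert("L"), dtype=float)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=float)
    frames = []
    for w in np.linspace(0.0, 1.0, steps):            # w = proportion of expression B
        blend = (1.0 - w) * a + w * b
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames

# e.g. blend_continuum("fear_prototype.png", "surprise_prototype.png", steps=7)
```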

11.
Facial expressions play a crucial role in emotion recognition compared to other modalities. In this work, an integrated network is proposed that recognises emotion intensity levels from facial images in real time using deep learning. The cognitive study of facial expressions based on expression intensity levels is useful in applications such as healthcare, collaborative robotics, and Industry 4.0. The work augments emotion recognition with two additional parameters, valence and emotion intensity, which support better automated responses by a machine to an emotion. The valence model classifies emotions as positive or negative, and the discrete model classifies emotions as happy, anger, disgust, surprise, or neutral using a Convolutional Neural Network (CNN). Feature extraction and classification are carried out on the CMU Multi-PIE database. The proposed architecture achieves 99.1% and 99.11% accuracy for the valence model and the discrete model, respectively, on offline image data with 5-fold cross-validation; the average real-time accuracies are 95% and 95.6%, respectively. The work also contributes a new facial-landmark-based database with three intensity levels of facial expression, which supports classifying expressions as low, mild, or high intensity, and performance is also tested with different classifiers. The proposed integrated system is configured for real-time Human-Robot Interaction (HRI) applications on a test bed consisting of a Raspberry Pi and an RPA platform to assess its performance.
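The following is a minimal PyTorch sketch of the dual-output idea described above, with one head for valence (positive vs. negative) and one for the five discrete classes; it is not the authors' architecture, and the input size, layer widths, and class list are illustrative assumptions.

```python
# Minimal two-head CNN sketch (valence + discrete emotion); illustrative only.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, n_emotions=5):  # e.g. happy, anger, disgust, surprise, neutral
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat_dim = 32 * 12 * 12                      # for 48x48 grayscale input
        self.valence_head = nn.Linear(feat_dim, 2)   # positive vs. negative
        self.discrete_head = nn.Linear(feat_dim, n_emotions)

    def forward(self, x):
        h = self.backbone(x)
        return self.valence_head(h), self.discrete_head(h)

# valence_logits, emotion_logits = EmotionNet()(torch.randn(8, 1, 48, 48))
```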

12.
Many research fields concerned with the processing of information contained in human faces would benefit from face stimulus sets in which specific facial characteristics are systematically varied while other important picture characteristics are kept constant. Specifically, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design would be highly useful in many research domains. Furthermore, these stimuli should be standardised in several important, technical aspects. The present article presents the freely available Radboud Faces Database offering such a stimulus set, containing both Caucasian adult and children images. This face database is described both procedurally and in terms of content, and a validation study concerning its most important characteristics is presented. In the validation study, all frontal images were rated with respect to the shown facial expression, intensity of expression, clarity of expression, genuineness of expression, attractiveness, and valence. The results show very high recognition of the intended facial expressions.
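To make the phrase "complete factorial design" concrete, the short sketch below enumerates every combination of expression, gaze direction, and head orientation; the specific factor levels are illustrative assumptions rather than the database's exact specification.

```python
# Illustration of a complete factorial design over picture conditions.
# The factor levels below are assumptions for illustration only.
from itertools import product

expressions = ["neutral", "happy", "sad", "angry", "fearful", "disgusted", "surprised", "contemptuous"]
gaze_directions = ["left", "frontal", "right"]
head_orientations = ["left", "frontal", "right"]

conditions = list(product(expressions, gaze_directions, head_orientations))
print(len(conditions))   # 8 x 3 x 3 = 72 picture conditions per model in this toy example
```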

13.

Previous research has found that individuals vary greatly in emotion differentiation, that is, the extent to which they distinguish between different emotions when reporting on their own feelings. Building on work showing that emotion differentiation is associated with individual differences in intrapersonal functioning, the current study asks whether emotion differentiation is also related to interpersonal skills. Specifically, we examined whether individuals high in emotion differentiation are more accurate in recognising others' emotional expressions. We report two studies in which we used an established paradigm tapping negative emotion differentiation and several emotion recognition tasks. In Study 1 (N = 363), we found that individuals high in emotion differentiation were more accurate in recognising others' emotional facial expressions. Study 2 (N = 217) replicated this finding using emotion recognition tasks with varying amounts of emotional information. These findings suggest that the knowledge we use to understand our own emotional experience also helps us understand the emotions of others.
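As a hedged sketch of how a negative emotion differentiation index is often operationalised (not necessarily the exact index used in the two studies above), the code below treats lower average inter-correlation among a person's repeated ratings of different negative emotions as greater differentiation.

```python
# Sketch: one common operationalisation of emotion differentiation.
# Higher values = ratings of different negative emotions covary less,
# i.e. the person differentiates more. Illustrative, not the authors' index.
import numpy as np

def differentiation_index(ratings):
    """ratings: 2-D array-like, rows = occasions, columns = negative emotions (e.g. anger, sadness, fear)."""
    r = np.corrcoef(np.asarray(ratings, dtype=float), rowvar=False)
    pairwise = r[np.triu_indices_from(r, k=1)]      # inter-emotion correlations
    return 1.0 - float(np.mean(pairwise))

# Example with five rating occasions on a 1-7 scale:
# differentiation_index([[2, 3, 1], [5, 5, 4], [1, 2, 2], [4, 4, 5], [3, 3, 3]])
```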

14.
There is a common belief that wrinkles in the aging face reflect frequently experienced emotions and hence resemble these affective displays. This implies that the wrinkles and folds in elderly faces interfere with the perception of other emotions currently experienced by the elderly as well as with the inferences perceivers draw from these expressions. Whereas there is ample research on the impact of aging on emotion recognition, almost no research has focused on how emotions expressed by the elderly are perceived by others. The present research addresses this latter question. Young participants rated the emotion expressions and behavioral intentions of old and young faces displaying identical expressions. The findings suggest that emotions shown on older faces have reduced signal clarity and may consequently have less impact on inferences regarding behavioral intentions. Both effects can be expected to have negative consequences for rapport achieved in everyday interactions involving the elderly.

15.
杨晓莉, 闫红丽, 刘力 《心理科学》2015, (6): 1475-1481
Taking the perspective of bicultural individuals' subjective perception of their cultural identity, this study used a questionnaire survey of Tibetan university students from mixed Tibetan-Han regions and Tibetan students sojourning in Han cultural regions to examine the relationships among bicultural identity integration, the dialectical self, and psychological adaptation, as well as the mediating role of the dialectical self in that relationship. The results showed that: (1) for both groups, the harmony-conflict dimension of the bicultural identity integration questionnaire was significantly positively correlated with psychological adaptation and significantly negatively correlated with the dialectical self; for participants from the mixed Tibetan-Han regions, however, the blendedness-compartmentalization dimension of bicultural identity integration was not significantly correlated with either the dialectical self or psychological adaptation; and (2) the dialectical self mediated the relationship between bicultural identity integration (harmony-conflict dimension) and psychological adaptation. In practice, strengthening ethnic minority university students' bicultural identity integration, reducing their sense of conflict, and fostering dialectical thinking such as tolerance of contradiction should in turn improve their psychological adaptation.

16.
Cross-cultural recognition of vocal emotion refers to the recognition, across cultures, of the emotional information conveyed by non-semantic vocal sounds. Current research on cross-cultural vocal emotion recognition focuses mainly on five topics: cross-cultural consistency, the in-group advantage effect, gender and age effects, recognition of spontaneous vocal emotion, and the selection of language materials. Future research should further explore and refine three areas: the acquisition of vocal emotion materials, influencing factors, and bilingual studies.

17.
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants’ perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

18.
The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two alternative forced choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.

19.
This research investigated how self-presentation goals can influence public expressions of negative emotions. In Study 1, participants were asked how individuals would present their negative emotions if they were trying to create each of 5 different impressions, which corresponded to 5 self-presentation strategies identified by Jones and Pittman (1982). Results show that individuals were expected to systematically understate or exaggerate their negative emotions, depending on the impression/strategy. In Study 2, participants discussed with another person a course they were taking where they were not doing as well as they had hoped. They were instructed either to present their feelings honestly, to ingratiate, or to intimidate the other person. Compared to the honesty condition, ingratiation led to fewer negative emotions being expressed, whereas intimidation led to more negative emotions being expressed. Taken together, these studies provide initial evidence about when and how self-presentation motives can influence reports of negative emotions.

20.
Facial emotions are important for human communication. Unfortunately, traditional facial emotion recognition tasks do not inform about how respondents might behave towards others expressing certain emotions. Approach-avoidance tasks do measure behaviour, but only on one dimension. In this study 81 participants completed a novel Facial Emotion Response Task. Images displaying individuals with emotional expressions were presented in random order. Participants simultaneously indicated how communal (quarrelsome vs. agreeable) and how agentic (dominant vs. submissive) they would be in response to each expression. We found that participants responded differently to happy, angry, fearful, and sad expressions in terms of both dimensions of behaviour. Higher levels of negative affect were associated with less agreeable responses specifically towards happy and sad expressions. The Facial Emotion Response Task might complement existing facial emotion recognition and approach-avoidance tasks.

