Similar articles
A total of 20 similar articles were retrieved (search time: 93 ms).
1.
Affect bursts consist of spontaneous and short emotional expressions in which facial, vocal, and gestural components are highly synchronized. Although the vocal characteristics have been examined in several recent studies, the facial modality remains largely unexplored. This study investigated the facial correlates of affect bursts that expressed five different emotions: anger, fear, sadness, joy, and relief. Detailed analysis of 59 facial actions with the Facial Action Coding System revealed a reasonable degree of emotion differentiation for individual action units (AUs). However, less convergence was shown for specific AU combinations for a limited number of prototypes. Moreover, expression of facial actions peaked in a cumulative-sequential fashion with significant differences in their sequential appearance between emotions. When testing for the classification of facial expressions within a dimensional approach, facial actions differed significantly as a function of the valence and arousal level of the five emotions, thereby allowing further distinction between joy and relief. The findings cast doubt on the existence of fixed patterns of facial responses for each emotion, resulting in unique facial prototypes. Rather, the results suggest that each emotion can be portrayed by several different expressions that share multiple facial actions.

2.
3.
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion.

4.
Posed stimuli dominate the study of nonverbal communication of emotion, but concerns have been raised that the use of posed stimuli may inflate recognition accuracy relative to spontaneous expressions. Here, we compare recognition of emotions from spontaneous expressions with that of matched posed stimuli. Participants made forced-choice judgments about the expressed emotion and whether the expression was spontaneous, and rated expressions on intensity (Experiments 1 and 2) and prototypicality (Experiment 2). Listeners were able to accurately infer emotions from both posed and spontaneous expressions, from auditory, visual, and audiovisual cues. Furthermore, perceived intensity and prototypicality were found to play a role in the accurate recognition of emotion, particularly from spontaneous expressions. Our findings demonstrate that perceivers can reliably recognise emotions from spontaneous expressions, and that depending on the comparison set, recognition levels can even be equivalent to those of posed stimulus sets.

5.
It has been a matter of much debate whether perceivers are able to distinguish spontaneous vocal expressions of emotion from posed vocal expressions (e.g., emotion portrayals). In this experiment, we show that such discrimination can be manifested in the autonomic arousal of listeners during implicit processing of vocal emotions. Participants (N = 21, age: 20–55 years) listened to two consecutive blocks of brief voice clips and judged the gender of the speaker in each clip, while we recorded three measures of sympathetic arousal of the autonomic nervous system (skin conductance level, mean arterial blood pressure, pulse rate). Unbeknownst to the listeners, the blocks consisted of two types of emotional speech: spontaneous and posed clips. As predicted, spontaneous clips yielded higher arousal levels than posed clips, suggesting that listeners implicitly distinguished between the two kinds of expression, even in the absence of any requirement to retrieve emotional information from the voice. We discuss the results with regard to theories of emotional contagion and the use of posed stimuli in studies of emotions.

6.
Prior findings of emotional numbness (rather than distress) among socially excluded persons led the authors to investigate whether exclusion causes a far-reaching insensitivity to both physical and emotional pain. Experiments 1-4 showed that receiving an ostensibly diagnostic forecast of a lonesome future life reduced sensitivity to physical pain, as indicated by both (higher) thresholds and tolerance. Exclusion also caused emotional insensitivity, as indicated by reductions in affective forecasting of joy or woe over a future football outcome (Experiment 3), as well as lesser empathizing with another person's suffering from either romantic breakup (Experiment 4) or a broken leg (Experiment 5). The insensitivities to pain and emotion were highly intercorrelated.

7.
Several experimental studies have shown that there exists an association between emotion words and the vertical spatial axis. However, the specific conditions under which this conceptual–physical interaction emerges are still unknown, and no study has been devised to test whether longer linguistic units than words can lead to a mapping of emotions on vertical space. In Experiment 1, Spanish and Colombian participants performed a representative verbal emotional contexts production task (RVEC task) requiring participants to produce RVEC for the emotions of joy, sadness, surprise, anger, fear, and disgust. The results showed gender and cultural differences regarding the average number of RVEC produced. The most representative contexts of joy and sadness obtained in Experiment 1 were used in Experiment 2 in a novel spatial–emotional congruency verification task (SECV task). After reading a sentence, the participants had to judge whether a probe word, displayed in either a high or low position on the screen, was congruent or incongruent with the previous sentence. The question was whether the emotion induced by the sentence could modulate the responses to the probes as a function of their position in a vertical axis by means of a metaphorical conceptual–spatial association. Overall, the results indicate that a mapping of emotions on vertical space can occur for linguistic units larger than words, but only when the task demands an explicit affective evaluation of the target.

8.
Emotional facial expressions provide important insights into various valenced feelings of humans. Recent cross-species neuroscientific advances offer insights into molecular foundations of mammalian affects and hence, by inference, the related emotional/affective facial expressions in humans. This is premised on deep homologies based on affective neuroscience studies of valenced primary emotional systems across species. Thus, emerging theoretical perspectives suggest that ancient cross-species emotional systems are intimately linked not only to emotional action patterns evident in all mammals, but also, by inference, to distinct emotional facial expressions studied intensively in humans. Thus, the goal of the present theoretical work was to relate categories of human emotional facial expressions (especially anger, fear, joy, and sadness) to respective underlying primary cross-mammalian emotional circuits. This can potentially provide coherent theoretical frameworks for the eventual molecular study of emotional facial expressions in humans.

9.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of the attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or auditory (Experiment 2) channel and recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted to provide evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at the later response-selection stages.

10.
We examined what determines the typicality, or graded structure, of vocal emotion expressions. Separate groups of judges rated acted and spontaneous expressions of anger, fear, and joy with regard to their typicality and three main determinants of the graded structure of categories: category members' similarity to the central tendency of their category (CT); category members' frequency of instantiation, i.e., how often they are encountered as category members (FI); and category members' similarity to ideals associated with the goals served by their category, i.e., suitability to express particular emotions. Partial correlations and multiple regression analysis revealed that similarity to ideals, rather than CT or FI, explained most variance in judged typicality. Results thus suggest that vocal emotion expressions constitute ideal-based goal-derived categories, rather than taxonomic categories based on CT and FI. This could explain how prototypical expressions can be acoustically distinct and highly recognisable but occur relatively rarely in everyday speech.
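To make the type of analysis described in this abstract concrete, the following is a minimal, hypothetical sketch of a multiple regression predicting judged typicality from the three determinants (similarity to the central tendency, frequency of instantiation, and similarity to ideals). All variable names and values are illustrative placeholders, not data or code from the study.

```python
# Illustrative sketch only: regress hypothetical typicality ratings on three
# hypothetical determinants (CT similarity, FI, similarity to ideals).
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of rated expressions

ct_sim = rng.uniform(1, 7, n)     # similarity to the category's central tendency
fi = rng.uniform(1, 7, n)         # frequency of instantiation
ideal_sim = rng.uniform(1, 7, n)  # similarity to ideals

# Placeholder typicality ratings, here constructed so that ideal similarity
# carries most of the weight (mirroring the pattern the abstract reports).
typicality = 0.1 * ct_sim + 0.1 * fi + 0.7 * ideal_sim + rng.normal(0, 0.5, n)

# Ordinary least-squares multiple regression: typicality ~ CT + FI + ideals
X = np.column_stack([np.ones(n), ct_sim, fi, ideal_sim])
coefs, *_ = np.linalg.lstsq(X, typicality, rcond=None)
print(dict(zip(["intercept", "CT", "FI", "ideals"], coefs.round(2))))
```

In such an analysis, a markedly larger coefficient for the ideal-similarity predictor than for CT or FI would correspond to the ideal-based, goal-derived category structure the abstract describes.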

11.
The present research draws on cognitive dissonance theory [Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press] and social identity theory [Tajfel, H. (Ed.). (1978). Differentiation between social groups. London: Academic Press] to examine how group members respond to discrepancies between their personal values and the behavior of an ingroup. In two experiments we manipulated whether participants’ ingroup violated a personal value (providing basic healthcare in Experiment 1 and self-reliance in Experiment 2) and measured participants’ emotional responses and strategies for reducing discomfort. As expected, individuals experienced psychological discomfort (but not negative self-directed emotion), when an ingroup, but not when an outgroup, violated a personal value, and this discomfort mediated participants’ disidentification with their group (Experiment 1) and value-adherence activism (Experiment 2).

12.
Although it has long been recognized that stereotypes achieve much of their force from being shared by members of social groups, relatively little empirical work has examined the process by which such consensus is reached. This paper tests predictions derived from self-categorization theory that stereotype consensus will be enhanced (a) by factors which make the shared social identity of perceivers salient and (b) by group interaction that is premised upon that shared identity. In Experiment 1 (N=40) the consensus of ingroup stereotypes is enhanced where an ingroup is judged after (rather than before) an outgroup. In Experiment 2 (N=80) when only one group is judged, group interaction is shown to enhance the consensus of outgroup stereotypes more than those of the ingroup—an apparent ‘outgroup consensus effect’. In Experiment 3 (N=135) this asymmetry is extinguished and group interaction is found to produce equally high consensus in both ingroup and outgroup stereotypes when the ingroup is explicitly contrasted with an outgroup. Implications for alternative models of consensus development are discussed. © 1998 John Wiley & Sons, Ltd.

13.
Emotion expressions convey valuable information about others’ internal states and likely behaviours. Accurately identifying expressions is critical for social interactions, but so is perceiver confidence when decoding expressions. Even if a perceiver correctly labels an expression, uncertainty may impair appropriate behavioural responses and create uncomfortable interactions. Past research has found that perceivers report greater confidence when identifying emotions displayed by cultural ingroup members, an effect attributed to greater perceptual skill and familiarity with own-culture than other-culture faces. However, the current research presents novel evidence for an ingroup advantage in emotion decoding confidence across arbitrary group boundaries that hold culture constant. In two experiments using different stimulus sets, participants not only labeled minimal ingroup expressions more accurately, but did so with greater confidence. These results offer novel evidence that ingroup advantages in emotion decoding confidence stem partly from social-cognitive processes.

14.
Caricaturing facial expressions
The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for the intensity of each of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms: a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
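The caricaturing manipulation described above can be thought of, in landmark terms, as linear extrapolation of an expression away from its reference norm. Below is a minimal conceptual sketch under that assumption; the function name, the landmark arrays, and the exaggeration factor are illustrative and are not taken from the study's actual image-processing pipeline.

```python
# Conceptual sketch of expression caricaturing as landmark extrapolation
# (assumed formulation, not the study's exact method).
import numpy as np

def caricature(expression: np.ndarray, norm: np.ndarray, k: float) -> np.ndarray:
    """Exaggerate (k > 1) or reduce (0 < k < 1) the deviation of an
    expression's landmark positions from a reference norm's landmarks."""
    return norm + k * (expression - norm)

# Hypothetical landmark arrays of shape (n_points, 2)
neutral_norm = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
fear_expression = np.array([[0.1, 0.2], [0.9, 0.2], [0.5, 1.3]])

exaggerated = caricature(fear_expression, neutral_norm, k=1.5)     # +50% caricature
anticaricature = caricature(fear_expression, neutral_norm, k=0.5)  # reduced deviation
print(exaggerated)
```

With k = 1 the original expression is reproduced; k > 1 yields the exaggerated caricatures that were rated as more emotionally intense, and 0 < k < 1 yields anti-caricatures closer to the chosen norm.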

15.
Extrapolating from the broaden-and-build theory, we hypothesized that positive emotion may reduce the own-race bias in facial recognition. In Experiments 1 and 2, Caucasian participants (N = 89) viewed Black and White faces for a recognition task. They viewed videos eliciting joy, fear, or neutrality before the learning (Experiment 1) or testing (Experiment 2) stages of the task. Results reliably supported the hypothesis. Relative to fear or a neutral state, joy experienced before either stage improved recognition of Black faces and significantly reduced the own-race bias. Discussion centers on possible mechanisms for this reduction of the own-race bias, including improvements in holistic processing and promotion of a common in-group identity due to positive emotions.

16.
H. Tajfel's (1970) minimal group paradigm (MGP) research suggests that social categorization is a sufficient antecedent of ingroup-favoring discrimination. Two experiments examined whether discrimination in the MGP arises from categorization or processes of outcome dependence, that is, ingroup reciprocity and outgroup fear. Experiment 1 unconfounded categorization from outcome dependence. Categorized men discriminated only when dependent on others. Categorized women discriminated regardless of the structure of dependence. Experiment 2 examined dependence on the ingroup versus the outgroup as the locus of male-initiated discrimination. Consistent with an ingroup reciprocity effect, men discriminated when dependent on ingroup, but not outgroup, members. Sex differences are discussed in regard to women's heightened ingroup dependence produced by biological or environmental constraints.

17.
Perceivers cognitively individuate the ingroup more than the outgroup; that is, perceivers use person categories to process information about the ingroup, but use stereotypic attribute categories to process information about the outgroup. This phenomenon is labelled the differential processing effect (DPE). Is the DPE moderated by relative group status? In two experiments, either high- or low-status members of permeable-boundary groups (i.e. groups that encourage upward mobility) read through information about unfamiliar ingroup and outgroup members. Relative group status moderated the DPE. Clustering indices in recall and confusions in a name-matching task indicated that high-status members individuated the ingroup more than the outgroup, thus replicating the DPE. However, low-status members individuated the outgroup more than the ingroup, thus reversing the DPE. A third experiment suggested that these findings are predicated on the ingroup information being stereotype-consistent. © 1997 John Wiley & Sons, Ltd.

18.
If intergroup emotions are functional, successfully implementing an emotion-linked behavioral tendency should discharge the emotion, whereas impeding the behavioral tendency should intensify the emotion. We investigated the emotional consequences of satisfying or thwarting emotionally induced intergroup behavioral intentions. Study 1 showed that if an attack on the ingroup produced anger, retaliation increased satisfaction, but if an attack produced fear, retaliation increased fear and guilt. Study 2 showed that outgroup-directed anger instigated via group insult dissipated when the ingroup successfully responded, but was exacerbated by an unsuccessful response. Responding in an emotionally appropriate way was satisfying, but ingroup failure to respond elicited anger directed at the ingroup. Study 3 showed that intergroup guilt following aggression was diminished when the ingroup made reparations, but was exacerbated when the ingroup aggressed again. Satisfying behavioral intentions associated with intergroup emotions fulfills a regulatory function.

19.
The research in this article explores the structure and content of attributed intergroup beliefs: to what extent do perceivers think others of their ingroup and their outgroup display intergroup evaluative bias and outgroup homogeneity? We report studies that address this question in ethnicity, gender, and nationality intergroup contexts. In all of these, we show that perceivers attribute to others more biased intergroup beliefs than they themselves espouse. Even when perceivers themselves do not show intergroup bias or outgroup homogeneity, they attribute such biases to others, both others from their ingroup and others from their outgroup. We argue that such attributed intergroup beliefs are fundamentally important to expectations concerning intergroup interaction. Copyright © 2005 John Wiley & Sons, Ltd.

20.
Affective individual differences and startle reflex modulation
Potentiation of startle has been demonstrated in experimentally produced aversive emotional states, and clinical reports suggest that potentiated startle may be associated with fear or anxiety. To test the generalizability of startle potentiation across a variety of emotional states as well as its sensitivity to individual differences in fearfulness, the acoustic startle response of 17 high- and 15 low-fear adult subjects was assessed during fear, anger, joy, sadness, pleasant relaxation, and neutral imagery. Startle responses were larger in all aversive affective states than during pleasant imagery. This effect was enhanced among high-fear subjects, although follow-up testing indicated that other affective individual differences (depression and anger) may also be related to increased potentiation of startle in negative affect. Startle latency was reduced during high- rather than low-arousal imagery but was unaffected by emotional valence.
