Similar Articles
20 similar articles found.
1.
This article provides an overview of the special issue of Motivation and Emotion, which will appear in two parts. This special issue examines the enduring contributions of the research of John T. Lanzetta and his colleagues on facial expression and emotion. In its entirety, the special issue consists of five articles and an epilogue. Part 1 (this issue of Motivation and Emotion) consists of the first three articles, and Part 2 (to appear as part of the next issue of Motivation and Emotion) consists of the final two articles and the epilogue. The first article provides an in-depth review of the Lanzetta research program, and describes this program as developing along four distinct lines that, respectively, cover work on (a) the facial feedback hypothesis, (b) the power of facial expression as an emotionally evocative stimulus, (c) the role of facial expression in empathy and counter-empathy, and (d) the relations between facial displays of powerful political leaders and observers' attitudes toward those leaders. Each of the subsequent four articles considers, in turn, the current status and future promise of one of these research lines as it has continued to grow and develop outside of the Lanzetta research program. Part 2 of the special issue concludes with an epilogue that highlights the major themes and conclusions that course through the entire body of research considered in this special issue. We would like to express our appreciation to Basil Englis, Arvid Kappas, Bob Kleck, and Scott Orr, each of whom contributed to the development of this special issue in a variety of ways.

2.
This study reanalyzes American and Japanese multiscalar ratings of universal facial expressions originally collected by Matsumoto (1986), of which only single emotion scales were analyzed and reported by Matsumoto and Ekman (1989). The nonanalysis of the entire data set ignored basic and important questions about the nature of judgments of universal facial expressions of emotion. These were addressed in this study. We found that (1) observers in both cultures perceived multiple emotions in universal facial expressions, not just one; (2) cultural differences occurred on multiple emotion scales for each expression, not just the target scale; (3) the directions of those differences differed according to the rating scale used and the expression being observed; and (4) no underlying dimension was evidenced that would account for these differences. These findings raise new questions about the nature of the judgment process and the role of judgment studies in supporting the universality thesis, the bases of which need to be explored in future research and incorporated in future theories of emotion and universality.

3.
Provides a comprehensive review of John T. Lanzetta's research program on facial expression and emotion. After reviewing the study that initiated this research program (Lanzetta & Kleck, 1970), the program is described as developing along four distinct lines of research: (1) the role of facial expression in the modulation and self-regulation of emotion, (2) the evocative power of the face as an emotional stimulus, (3) the role of facial expression in empathy and counterempathy, and (4) the role of facial displays in human politics. Beyond reviewing the major studies and key findings to emerge from each of these lines, the progression of thought underlying the development of this research program as a whole and the interrelations among the individual research lines are also emphasized.

4.
The topic of language as power (LaP) in individual therapeutic encounters has thus far been overlooked, and as bilingual therapists have the ability to use more than one language in the therapy room, their experience of LaP is a compelling research area that this paper attempts to explore. This qualitative, inductive, phenomenological study used interviews and interpretative phenomenological analysis to explore five bilingual Arabic–English-speaking therapists' experiences of LaP in the therapeutic encounter. The study identifies two overarching themes: (a) the emergence of identity and power from language and (b) comparisons of power in the English and Arabic languages. Within these themes, the study finds that therapists experience LaP through multiple avenues: self-disclosure, intersectionality, being transported to different identities and expressions of power and power of expression in Arabic–English. These multiple avenues illustrate the complexity of LaP in the therapeutic encounter. The study sheds light on an underexplored area in psychotherapy, illuminating an important area for psychotherapists and training institutions to consider when working with clients.

5.
Recently, investigators have challenged long‐standing assumptions that facial expressions of emotion follow specific emotion‐eliciting events and relate to other emotion‐specific responses. We address these challenges by comparing spontaneous facial expressions of anger, sadness, laughter, and smiling with concurrent, "on‐line" appraisal themes from narrative data, and by examining whether coherence between facial and appraisal components was associated with increased experience of emotion. Consistent with claims that emotion systems are loosely coupled, facial expressions of anger and sadness co‐occurred to a moderate degree with the expected appraisal themes, and when this happened, the experience of emotion was stronger. The results for the positive emotions were more complex, but lend credence to the hypothesis that laughter and smiling are distinct. Smiling co‐occurred with appraisals of pride, but never occurred with appraisals of anger. In contrast, laughter occurred more often with appraisals of anger, a finding consistent with recent evidence linking laughter to the dissociation or undoing of negative emotion.

6.
7.
On the basis of a philological examination of the term 口实 (kou shi) in the Yi hexagram (《颐卦》) and of 养 (yang) in the Yizhuan (《易传》), together with an investigation of Shang and Zhou attitudes toward turtle-shell divination, this paper reinterprets the hexagram and line statements of the Yi hexagram. It argues that 口实 should be read as "pretext" or "grounds," and that 养 in the classic and its commentaries should be glossed as 象 (xiang, appearance) or 相 (xiang, to observe). The hexagram statement 观颐, 自求口实 thus means "observe the opinions conveyed by people's facial expressions and movements, and seek a pretext for one's own conduct," while line statements such as 朵颐, 颠颐, 拂经于丘颐, 拂颐, 拂经, and 虎视眈眈, 其欲逐逐 all describe facial expressions and bodily gestures that hint at people's differing opinions. The hexagram as a whole reflects the intellectual transition of the Shang–Zhou period and the complex, conflicted mindset of people weighing human opinion against the results of turtle-shell divination.

8.
Early experience likely plays an important role in the development of the ability to discriminate facial expressions of emotion. We posited that compared to children reared with their biological families (n=72), abandoned children being reared in institutions (n=39) should demonstrate impairments in this ability. The visual paired comparison procedure was utilized to assess the abilities of 13- to 30-month-old children to discriminate among multiple pairs of photographs of facial expressions. Both groups exhibited a normative profile of discrimination, with no group differences evident. Such findings suggest that early institutionalization does not affect the ability of 1- to 3-year-olds to discriminate facial expressions of emotion, at least as inferred by the visual paired comparison procedure.
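The visual paired comparison procedure infers discrimination from a looking preference for the novel stimulus. As a rough illustration of the underlying arithmetic (not the authors' analysis pipeline; the function name and the looking-time figures below are hypothetical), the standard novelty-preference score is simply the proportion of total looking time spent on the novel expression:

```python
def novelty_preference(novel_ms, familiar_ms):
    """Proportion of looking time directed at the novel expression.

    Scores reliably above 0.5 indicate discrimination of the two
    facial expressions; scores near 0.5 indicate no discrimination.
    """
    total = novel_ms + familiar_ms
    if total == 0:
        raise ValueError("no looking time recorded for this trial")
    return novel_ms / total

# Hypothetical trial: the infant looks 6.2 s at the novel expression
# and 3.8 s at the familiar one.
score = novelty_preference(6200, 3800)
print(round(score, 2))  # 0.62
```

A group whose mean score differs reliably from 0.5 is taken to discriminate the pair; comparing group means is then a conventional between-groups test.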

9.
This study examined whether facial expression, as an identity feature, produces a grouping effect in multiple identity tracking. Using the multiple identity tracking paradigm, eight objects were expression images (positive, negative, neutral), four of which were targets; 45 participants were tested, and the effect of the presence or absence of eyebrow cues on tracking performance was also assessed. The conditions were: a grouped condition in which target and distractor expressions were entirely different (four levels), a paired condition in which targets and distractors each comprised equal numbers of two expressions (three levels), and a baseline condition (all eight objects shared the same expression). Performance in the grouped condition was significantly better than baseline, performance in the paired condition was significantly worse than baseline, and a tracking advantage for negative expressions emerged; the presence or absence of eyebrow cues had no significant effect on tracking. These results indicate a grouping effect based on expression features in multiple identity tracking, with grouping by negative expressions perceived more readily than grouping by positive expressions.

10.
Previous research has demonstrated an interaction between eye gaze and selected facial emotional expressions, whereby the perception of anger and happiness is impaired when the eyes are horizontally averted within a face, but the perception of fear and sadness is enhanced under the same conditions. The current study reexamined these claims over six experiments. In the first three experiments, the categorization of happy and sad expressions (Experiments 1 and 2) and angry and fearful expressions (Experiment 3) was impaired when eye gaze was averted, in comparison to direct gaze conditions. Experiment 4 replicated these findings in a rating task, which combined all four expressions within the same design. Experiments 5 and 6 then showed that previous findings, that the perception of selected expressions is enhanced under averted gaze, are stimulus- and task-bound. The results are discussed in relation to research on facial expression processing and visual attention.

11.
The current study aimed to extend the understanding of the early development of spontaneous facial reactions toward observed facial expressions. Forty-six 9- to 10-month-old infants observed video clips of dynamic human facial expressions that were artificially created with morphing technology. The infants' facial responses were recorded, and the movements of facial action unit 12 (i.e., lip-corner raising, associated with happiness) and facial action unit 4 (i.e., brow-lowering, associated with anger) were visually evaluated by multiple naïve raters. Results showed that (1) infants make congruent, observable facial responses to facial expressions, and (2) these specific facial responses are enhanced during repeated observation of the same emotional expressions. These results suggest the presence of observable congruent facial responses in the first year of life, and that they appear to be influenced by contextual information, such as the repetition of presentation of the target emotional expressions.

12.
The present study investigated whether facial expressions of emotion are recognized holistically (i.e., all at once, as an entire unit), as faces are, or featurally, as other nonface stimuli are. Evidence for holistic processing of faces comes from a reliable decrement in recognition performance when faces are presented inverted rather than upright. If emotion is recognized holistically, then recognition of facial expressions of emotion should be impaired by inversion. To test this, participants were shown schematic drawings of faces showing one of six emotions (surprise, sadness, anger, happiness, disgust, and fear) in either an upright or inverted orientation and were asked to indicate the emotion depicted. Participants were more accurate in the upright than in the inverted orientation, providing evidence in support of holistic recognition of facial emotion. Because recognition of facial expressions of emotion is important in social relationships, this research may have implications for treatment of some social disorders.

13.
Facial expressions frequently involve multiple individual facial actions. How do facial actions combine to create emotionally meaningful expressions? Infants produce positive and negative facial expressions at a range of intensities. It may be that a given facial action can index the intensity of both positive (smiles) and negative (cry-face) expressions. Objective, automated measurements of facial action intensity were paired with continuous ratings of emotional valence to investigate this possibility. Degree of eye constriction (the Duchenne marker) and mouth opening were each uniquely associated with smile intensity and, independently, with cry-face intensity. In addition, degree of eye constriction and mouth opening were each unique predictors of emotion valence ratings. Eye constriction and mouth opening index the intensity of both positive and negative infant facial expressions, suggesting parsimony in the early communication of emotion.

14.
Brain and Cognition, 2014, 84(3), 252–261
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e., a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower facial emotions, more so than upper ones, are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced.

15.
Although maternal contingent responses to their infant's facial expressions of emotions are thought to play an important role in the socialization of emotions, available data are still scarce and often inconsistent. To further investigate how mothers' contingent facial expressions might influence infant emotional development, we undertook to study mother‐infant dyads in four episodes of face‐to‐face interaction during the first year. Mothers' facial expressions were strongly related to their infant's facial expressions of emotions, most of their contingent responses being produced within one second following infants' facial expressions. Specific patterns of responses were also found. The impact of maternal contingent responding on infants' expressive development was also examined.

16.
Much research on emotional facial expression employs posed expressions and expressive subjects. To test the generalizability of this research to more spontaneous expressions of both expressive and nonexpressive posers, subjects engaged in happy, sad, angry, and neutral imagery, and voluntarily posed happy, sad, and angry facial expressions while facial muscle activity (brow, cheek, and mouth regions) and autonomic activity (skin resistance and heart period) were recorded. Subjects were classified as expressive or nonexpressive on the basis of the intensity of their posed expressions. The posed and imagery-induced expressions were similar, but not identical. Brow activity present in the imagery-induced sad expressions was weak or absent in the posed ones. Both nonexpressive and expressive subjects demonstrated similar heart rate acceleration during emotional imagery and demonstrated similar posed and imagery-induced happy expressions, but nonexpressive subjects showed little facial activity during both their posed and imagery-induced sad and angry expressions. The implications of these findings are discussed.

17.
To examine how individuals with high trait anxiety process emotional stimuli at the preattentive stage and to clarify their emotional bias, this study used a deviant–standard reverse Oddball paradigm to investigate the effect of trait anxiety on the preattentive processing of facial expressions. Results showed that in the low-trait-anxiety group, the early expression-related mismatch negativity (EMMN) evoked by sad faces was significantly larger than that evoked by happy faces, whereas in the high-trait-anxiety group, the early EMMN evoked by happy and sad faces did not differ significantly. Moreover, the EMMN amplitude for happy faces was significantly larger in the high-trait-anxiety group than in the low-trait-anxiety group. These results indicate that personality traits are an important factor in the preattentive processing of facial expressions. Unlike typical participants, individuals with high trait anxiety show a similar processing pattern for happy and sad faces at the preattentive stage and may have difficulty effectively distinguishing happy from sad emotional faces.

18.
Perception of a facial expression can be altered or biased by a prolonged viewing of other facial expressions, known as the facial expression adaptation aftereffect (FEAA). Recent studies using antiexpressions have demonstrated a monotonic relation between the magnitude of the FEAA and adaptor extremity, suggesting that facial expressions are opponent coded and represented continuously from one expression to its antiexpression. However, it is unclear whether the opponent-coding scheme can account for the FEAA between two facial expressions. In the current study, we demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as a function of the intensity of adapting facial expressions, consistent with the predictions based on the opponent-coding model. Further, the monotonic increase in the FEAA occurred even when the intensity of an adapting face was too weak for its expression to be recognized. These results together suggest that multiple facial expressions are encoded and represented by balanced activity of neural populations tuned to different facial expressions.
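The opponent-coding account can be caricatured in a few lines. The sketch below is a toy illustration under stated assumptions, not the authors' model: the two-pool structure follows the abstract, but the tanh tuning function, the adaptation factors, and all numeric values are invented for illustration.

```python
import math

def pool_response(intensity, adapted=1.0):
    """Response of one expression-tuned neural pool.

    `adapted` < 1 models the sensitivity loss produced by prolonged
    viewing of that pool's preferred expression (a hypothetical
    stand-in for adaptation; stronger adaptors give smaller values).
    """
    return adapted * math.tanh(max(intensity, 0.0))

def perceived_balance(intensity_a, intensity_b, adapt_a=1.0, adapt_b=1.0):
    """Signed balance between the two pools.

    Zero means the test face looks neutral between expression A and
    expression B; the sign indicates which expression dominates.
    """
    return pool_response(intensity_a, adapt_a) - pool_response(intensity_b, adapt_b)

# Adapting to expression A (adapt_a < 1) shifts the balance for an
# ambiguous test face toward B, and a stronger adaptor produces a
# larger shift -- the monotonic FEAA-vs-intensity relation.
weak = perceived_balance(0.5, 0.5, adapt_a=0.9)
strong = perceived_balance(0.5, 0.5, adapt_a=0.6)
assert strong < weak < 0
```

The key qualitative prediction reproduced here is only the monotonicity: any monotone tuning function in place of tanh yields the same ordering of aftereffect magnitudes.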

19.
Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.

20.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether scan paths of observers vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor scanning behavior of healthy participants while looking at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. Especially in sad facial expressions, participants directed the initial fixation to the eyes more frequently than for all other expressions. In happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that both the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at regions that are most characteristic for each emotion.
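A dominance ratio of the kind described (eyes and mouth relative to the rest of the face) can be computed directly from region-wise fixation durations. The sketch below is a hypothetical illustration, not the authors' exact formula; the region names and durations are invented:

```python
def dominance_ratio(fix_durations):
    """Fixation time on eyes + mouth relative to the rest of the face.

    `fix_durations` maps face regions to summed fixation durations (ms).
    A ratio above 1 means the eyes and mouth together attracted more
    fixation time than all remaining face regions combined.
    """
    feature_time = fix_durations.get("eyes", 0) + fix_durations.get("mouth", 0)
    rest_time = sum(duration for region, duration in fix_durations.items()
                    if region not in ("eyes", "mouth"))
    if rest_time == 0:
        raise ValueError("no fixations outside the eye/mouth regions")
    return feature_time / rest_time

# Hypothetical fixation summary for one sad-face trial.
sad_face = {"eyes": 1800, "mouth": 600, "nose": 400,
            "forehead": 300, "cheeks": 500}
print(round(dominance_ratio(sad_face), 2))  # 2.0
```

Comparing this ratio, and the eyes-versus-mouth split within the numerator, across expression categories is one straightforward way to quantify the reported differences between, say, sad and happy faces.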
