Similar Literature
1.
Hosie, J. A., Gray, C. D., Russell, P. A., Scott, C., & Hunter, N. (1998). Motivation and Emotion, 22(4), 293-313.
This paper reports the results of three tasks comparing the development of the understanding of facial expressions of emotion in deaf and hearing children. Two groups of hearing and deaf children of elementary school age were tested for their ability to match photographs of facial expressions of emotion, and to produce and comprehend emotion labels for the expressions of happiness, sadness, anger, fear, disgust, and surprise. Accuracy data showed comparable levels of performance for deaf and hearing children of the same age. Happiness and sadness were the most accurately matched expressions and the most accurately produced and comprehended labels. Anger was the least accurately matched expression and the most poorly comprehended emotion label. Disgust was the least accurately labeled expression; however, deaf children were more accurate at labeling this expression, and also at labeling fear, than hearing children. Error data revealed that children confused anger with disgust, and fear with surprise. However, the younger groups of deaf and hearing children also showed a tendency to confuse the negative expressions of anger, disgust, and fear with sadness. The results suggest that, despite possible differences in the early socialisation of emotion, deaf and hearing children share a common understanding of the emotions conveyed by distinctive facial expressions.

2.
Anticipatory anxiety is an anxious state arising from negative projection onto outcomes that have not yet occurred. This study compared participants' behavioral and EEG data while they viewed happy, neutral, and fearful faces under different levels of anticipatory anxiety. Under high anticipatory anxiety, fearful faces evoked significantly larger N170 amplitudes than under low anticipatory anxiety, whereas the amplitudes evoked by happy and neutral faces did not differ significantly between the two levels; moreover, the strength with which anticipatory anxiety modulated the N170 amplitude evoked by fearful faces was significantly correlated with individuals' trait anxiety scores. These results indicate that the level of anticipatory anxiety influences facial expression processing and that faces expressing different emotions are affected differently. We speculate that anticipatory anxiety modulates the processing of fearful faces in the superior temporal gyrus by influencing the activation of brain regions such as the amygdala.

3.
Perceiving distress cues appears to be associated with prosocial responding. This being the case, it was hypothesised that the fear facial expression would elicit prosocial responding in perceivers. In Study 1, participants indicated that fear and sadness expressions would be associated with greater sympathy and willingness to help the expresser than would neutral expressions. In Study 2, participants were primed with fear or neutral expressions before reading vignettes featuring protagonists in mild distress. Fear-primed participants reported more sympathy and desire to help the protagonists than neutral-primed participants. Moreover, participants who recognised fear most accurately, as measured by a standard facial expression recognition task, showed the greatest increases in prosocial responding following fear expression primes. This corroborates the notion, supported by research as disparate as behavioural research on bystander intervention and clinical research on psychopaths, that exposure to and correct interpretation of certain distress cues may predict an individual's likelihood of behaving prosocially.

4.
白鹭, 毛伟宾, 王蕊, 张文海 (2017). 心理学报, (9), 1172-1183.
Using disgusted and fearful facial expressions—two negative emotions with relatively low perceptual similarity—as materials, and providing five emotional verbal labels to reduce the facilitating effect of verbal context on face recognition, this study examined in two experiments how natural scenes and body actions influence the recognition of facial expressions. The aim was to investigate the effect of emotional congruence between facial expressions and natural scenes on emotional face recognition and on the processing of natural scenes, as well as the possible effect on facial expression recognition of adding body actions whose emotion conflicts with that of the natural scene. The results showed that: (1) even with an increased number of emotional label options, the emotion of the natural scene still had a significant effect on facial expression recognition; (2) when the emotions of the face and the natural scene were incongruent, face recognition had to rely more on processing of the natural scene, so the scene was processed more deeply; (3) body actions interfered to some extent with the influence of natural scenes on facial expression recognition, but natural scenes still played an important role in the recognition of emotional facial expressions.

5.
Two studies aimed to examine whether high socially anxious individuals are more likely to negatively interpret ambiguous social scenarios and facial expressions compared to low socially anxious individuals. We also examined whether interpretation bias serves as a mediator of the relationship between trait social anxiety and state anxiety responses, in particular current state anxiety, bodily sensations, and perceived probability and cost of negative evaluation pertaining to a speech task. Study 1 used ambiguous social scenarios and Study 2 used ambiguous facial expressions as stimuli to objectively assess interpretation bias. Undergraduate students with high and low social anxiety completed measures of state anxiety responses at three time points: baseline, after the interpretation bias task, and after the preparation for an impromptu speech. Results showed that high socially anxious individuals were more likely to endorse threat interpretations for ambiguous social scenarios and to interpret ambiguous faces as negative than low socially anxious individuals. Furthermore, negative interpretations mediated the relationship between trait social anxiety and perceived probability of negative evaluation pertaining to the speech task in Study 1 but not Study 2. The present studies provide new insight into the role of interpretation bias in social anxiety.

6.
People high in social anxiety experience fear of social situations due to the likelihood of social evaluation. Whereas happy faces are generally processed very quickly, this speed advantage is reduced in people high in social anxiety. Mouth regions are implicated during emotional face processing; therefore, differences in mouth salience might affect how social anxiety relates to emotional face discrimination. We designed an emotional facial expression recognition task to reveal how varying levels of sub-clinical social anxiety (measured by questionnaire) related to the discrimination of happy and fearful faces, and of happy and angry faces. We also categorised the facial expressions by the salience of the mouth region (i.e. high [open mouth] vs. low [closed mouth]). In a sample of 90 participants, higher social anxiety (relative to lower social anxiety) was associated with a reduced happy face reaction time advantage. However, this effect was mainly driven by the faces with less salient closed mouths. Our results are consistent with theories of anxiety that incorporate an oversensitive valence evaluation system.

7.
Very few large-scale studies have focused on emotional facial expression recognition (FER) in 3-year-olds, an age of rapid social and language development. We studied FER in 808 healthy 3-year-olds using verbal and nonverbal computerized tasks for four basic emotions (happiness, sadness, anger, and fear). Three-year-olds showed differential performance on the verbal and nonverbal FER tasks, especially with respect to fear. That is to say, fear was one of the most accurately recognized facial expressions as matched nonverbally and the least accurately recognized facial expression as labeled verbally. Sex influenced neither emotion-matching nor emotion-labeling performance after adjusting for basic matching or labeling ability. Three-year-olds made systematic errors in emotion-labeling. Namely, happy expressions were often confused with fearful expressions, whereas negative expressions were often confused with other negative expressions. Together, these findings suggest that 3-year-olds' FER skills strongly depend on task specifications. Importantly, fear was the most sensitive facial expression in this regard. Finally, in line with previous studies, we found that recognized emotion categories are initially broad, including emotions of the same valence, as reflected in the nonrandom errors of 3-year-olds.

8.
The facial expressions of fear and anger are universal social signals in humans. Both expressions have been frequently presumed to signify threat to perceivers and therefore are often used in studies investigating responses to threatening stimuli. Here the authors show that the anger expression facilitates avoidance-related behavior in participants, which supports the notion of this expression being a threatening stimulus. The fear expression, on the other hand, facilitates approach behaviors in perceivers. This contradicts the notion of the fear expression as predominantly threatening or aversive and suggests it may represent an affiliative stimulus. Although the fear expression may signal that a threat is present in the environment, the effect of the expression on conspecifics may be in part to elicit approach.

9.
We move our eyes not only to get information, but also to supply information to our fellows. The latter eye movements can be considered as goal-directed actions to elicit changes in our counterparts. In two eye-tracking experiments, participants looked at neutral faces that changed facial expression 100 ms after the gaze fell upon them. We show that participants anticipate a change in facial expression and direct their first saccade more often to the mouth region of a neutral face about to change into a happy one and to the eyebrows region of a neutral face about to change into an angry expression. Moreover, saccades in response to facial expressions are initiated more quickly to the position where the expression was previously triggered. Saccade–effect associations are easily acquired and are used to guide the eyes if participants freely select where to look next (Experiment 1), but not if saccades are triggered by external stimuli (Experiment 2).

10.
It is well-known that patients having sustained frontal-lobe traumatic brain injury (TBI) are severely impaired on tests of emotion recognition. Indeed, these patients have significant difficulty recognizing facial expressions of emotion, and such deficits are often associated with decreased social functioning and poor quality of life. As of yet, no studies have examined the response patterns which underlie facial emotion recognition impairment in TBI and which may lend clarity to the interpretation of deficits. Therefore, the present study aimed to characterize response patterns in facial emotion recognition in 14 patients with frontal TBI compared to 22 matched control subjects, using a task which required participants to rate the intensity of each emotion (happiness, sadness, anger, disgust, surprise and fear) of a series of photographs of emotional and neutral faces. Results first confirmed the presence of facial emotion recognition impairment in TBI, and further revealed that patients displayed a liberal bias when rating facial expressions, leading them to associate intense ratings of incorrect emotional labels to sad, disgusted, surprised and fearful facial expressions. These findings are generally in line with prior studies which also report important facial affect recognition deficits in TBI patients, particularly for negative emotions.

11.
Multi-label tasks confound age differences in perceptual and cognitive processes. We examined age differences in emotion perception with a technique that did not require verbal labels. Participants matched the emotion expressed by a target to two comparison stimuli, one neutral and one emotional. Angry, disgusted, fearful, happy, and sad facial expressions of varying intensity were used. Although older adults took longer to respond than younger adults, younger adults only outmatched older adults for the lowest intensity disgust and fear expressions. Some participants also completed an identity matching task in which target stimuli were matched on personal identity instead of emotion. Although irrelevant to the judgment, expressed emotion still created interference. All participants were less accurate when the apparent difference in expressive intensity of the matched stimuli was large, suggesting that salient emotion cues increased difficulty of identity matching. Age differences in emotion perception were limited to very low intensity expressions.

12.
Facial expressions of anger and fear have been seen to elicit avoidance behavior in the perceiver due to their negative valence. However, recent research uncovered discrepancies regarding these immediate motivational implications of fear and anger, suggesting that not all negative emotions trigger avoidance to a comparable extent. To clarify those discrepancies, we considered recent theoretical and methodological advances, and investigated the role of social preferences and processing focus on approach-avoidance tendencies (AAT) to negative facial expressions. We exposed participants to dynamic facial expressions of anger, disgust, fear, or sadness, while they processed either the emotional expression or the gender of the faces. AATs were assessed by reaction times of lever movements, and by posture changes via head-tracking. We found that, relative to angry faces, fearful and sad faces triggered more approach, with a larger difference between fear and anger in prosocial compared to individualistic participants. Interestingly, these findings are in line with a recently developed concern hypothesis, suggesting that, relative to other negative expressions, expressions of distress may facilitate approach, especially in participants with prosocial preferences.

13.
Emotion theorists assume certain facial displays to convey information about the expresser's emotional state. In contrast, behavioral ecologists assume them to indicate behavioral intentions or action requests. To test these contrasting positions, over 2,000 online participants were presented with facial expressions and asked what they revealed: feeling states, behavioral intentions, or action requests. The majority of the observers chose feeling states as the message of facial expressions of disgust, fear, sadness, happiness, and surprise, supporting the emotions view. Only the anger display tended to elicit more choices of behavioral intention or action request, partially supporting the behavioral ecology view. The results support the view that facial expressions communicate emotions, with emotions being multicomponential phenomena that comprise feelings, intentions, and wishes.

14.
In 3 experiments, we investigate how anxiety influences interpretation of ambiguous facial expressions of emotion. Specifically, we examine whether anxiety modulates the effect of contextual cues on interpretation. Participants saw ambiguous facial expressions. Simultaneously, positive or negative contextual information appeared on the screen. Participants judged whether each expression was positive or negative. We examined the impact of verbal and visual contextual cues on participants' judgements. We used 3 different anxiety induction procedures and measured levels of trait anxiety (Experiment 2). Results showed that high state anxiety resulted in greater use of contextual information in the interpretation of the facial expressions. Trait anxiety was associated with mood-congruent effects on interpretation, but not greater use of contextual information.

16.
Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

17.
We investigated whether moral violations involving harm selectively elicit anger while purity violations selectively elicit disgust, as predicted by Moral Foundations Theory (MFT). We analysed participants' spontaneous facial expressions as they listened to scenarios depicting moral violations of harm and purity. As predicted by MFT, anger reactions were elicited more frequently by harmful than by impure actions. However, violations of purity elicited more smiling reactions and expressions of anger than of disgust. This effect was found both in a classic set of scenarios and in a new set in which the different kinds of violations were matched on weirdness. Overall, these findings are at odds with predictions derived from MFT and provide support for "monist" accounts that posit harm at the basis of all moral violations. However, we found that smiles were differentially linked to purity violations, which leaves open the possibility of distinct moral modules.

18.
谷莉, 白学军 (2014). 心理科学, 37(1), 101-105.
Forty-five children aged 3-5 years and 39 undergraduate students participated in this study. The stimuli were pictures of five facial expressions: fear, anger, sadness, surprise, and happiness. A Tobii eye tracker recorded participants' eye movements while they viewed the expression pictures. The results showed that: (1) adults preferred happy expressions, and their fixation durations and fixation counts on happy expressions were significantly greater than those of the children; (2) adults preferred to fixate on the eye region, whereas children preferred to fixate on the mouth region. These results suggest that the development of attentional preference for facial expressions is socially dependent and trends toward a preference for positive emotions, and that this developmental change is related to attentional preferences for particular facial regions.

19.
Faces with expressions (happy, surprise, anger, fear) were presented at study. Memory for facial expressions was tested by presenting the same faces with neutral expressions and asking participants to determine the expression that had been displayed at study. In three experiments, happy expressions were remembered better than other expressions. The advantage of a happy face was observed even when faces were inverted (upside down) and even when the salient perceptual feature (broad grin) was controlled across conditions. These findings are couched in terms of source monitoring, in which memory for facial expressions reflects encoding of the dispositional context of a prior event.

20.

This paper describes a method to measure an individual's sensitivity to different facial expressions. It shows that individual participants are more sensitive to happy than to fearful expressions and that the differences are statistically significant using the model-comparison approach. Sensitivity is measured by asking participants to discriminate between an emotional facial expression and a neutral expression of the same face. The expression was diluted to different degrees by combining it in different proportions with the neutral expression using morphing software. Sensitivity is defined as the proportion of neutral expression in a stimulus at which participants can discriminate the emotional expression on 75% of presentations. Individuals could reliably discriminate happy expressions diluted with a greater proportion of the neutral expression than was required for discrimination of fearful expressions, indicating that individual participants are more sensitive to happy than to fearful expressions. Sensitivity is equivalent when measured in two different testing sessions, and greater sensitivity to happy expressions is maintained with short stimulus durations and with stimuli generated using different morphing software. For some participants, the sensitivity advantage for happy over fearful expressions was affected at smaller image sizes. Application of the approach to clinical populations, as well as to understanding the relative contributions of perceptual and affective processing in facial expression recognition, is discussed.

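As an illustration of the kind of threshold estimate described in the last abstract, the sketch below fits a logistic psychometric function to discrimination accuracy at several morph levels and reads off the level at which an observer is correct on 75% of presentations. This is not the authors' code: the data values, parameter names, and the use of SciPy's curve_fit are assumptions made purely for illustration.

```python
# Minimal sketch (assumed, not the authors' code): estimate the 75%-correct
# discrimination threshold from hypothetical morph-level data.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x50, slope):
    # Logistic function rising from 0.5 (chance in a two-alternative
    # discrimination) to 1.0; x50 is the morph level giving 75% correct.
    return 0.5 + 0.5 / (1.0 + np.exp(-(x - x50) / slope))

# Hypothetical data: proportion of the emotional expression in each morph
# and the observer's proportion of correct emotional-vs-neutral judgements.
morph_level = np.array([0.05, 0.10, 0.20, 0.30, 0.50, 0.70])
p_correct = np.array([0.52, 0.58, 0.70, 0.82, 0.95, 0.99])

(x50, slope), _ = curve_fit(psychometric, morph_level, p_correct, p0=[0.2, 0.1])

# With this parameterisation the 75% point is x50 itself; the paper's
# sensitivity measure would then be the complementary proportion of
# neutral expression tolerated at threshold.
print(f"75% threshold (proportion of emotional expression): {x50:.2f}")
print(f"Proportion of neutral expression at threshold: {1 - x50:.2f}")
```

The paper itself compares such thresholds across expressions with a model-comparison approach; the sketch only makes the 75% criterion concrete under the stated assumptions.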
