Similar Articles
1.
It is well established that categorising the emotional content of facial expressions may differ depending on contextual information. Whether this malleability is observed in the auditory domain and in genuine emotion expressions is poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad facial expressions. Participants rated the vocalisations on separate unipolar scales of happiness and sadness and on arousal. Although they were instructed to focus exclusively on the vocalisations, consistent context effects were found: For both laughter and crying, emotion judgements were shifted towards the information expressed by the face. These modulations were independent of response latencies and were larger for more emotionally ambiguous vocalisations. No effects of context were found for arousal ratings. These findings suggest that the automatic encoding of contextual information during emotion perception generalises across modalities, to purely non-verbal vocalisations, and is not confined to acted expressions.

2.
Older adults perceive less intense negative emotion in facial expressions than younger adults do. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults' perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence, and measures of mood were administered. Both younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. Whereas younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is used to alter mood in younger and older participants.

3.
It is vital that new mothers quickly and accurately recognize their child's facial expressions. There is evidence that during pregnancy women develop enhanced processing of facial features associated with infancy and distress, as these cues signal vulnerability and are therefore biologically salient. In this study, 51 pregnant women at 17–36 weeks gestation watched neutral infant and adult faces gradually morph into either happy or sad expressions. We measured the speed and accuracy with which participants recognized facial affect (happy vs. sad) across facial ages (infant vs. adult). Participants were faster and more accurate at recognizing happy than sad faces and adult than infant faces. We discuss how prior exposure to a certain face type may explain faster recognition, and consider these results in the context of evidence that positive affect is recognized more quickly but is associated with slower attention and detection.

4.
To clarify the emotional bias of individuals high in trait anxiety by probing how they process emotional stimuli at the pre-attentive stage, this study used a deviant-standard reverse oddball paradigm to examine the influence of trait anxiety on the pre-attentive processing of facial expressions. The results showed that in the low trait-anxiety group, sad faces elicited a significantly larger early EMMN than happy faces, whereas in the high trait-anxiety group the early EMMN did not differ between happy and sad faces. Moreover, the EMMN amplitude elicited by happy faces was significantly larger in the high trait-anxiety group than in the low trait-anxiety group. These results indicate that personality traits are an important factor in the pre-attentive processing of facial expressions. Unlike typical participants, individuals high in trait anxiety show a similar pre-attentive processing pattern for happy and sad faces and may have difficulty effectively distinguishing between them.
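As an illustration of the paradigm described above, here is a minimal sketch of how a deviant-standard reverse oddball trial list might be generated, with happy and sad faces swapping standard/deviant roles across blocks. The trial count, deviant probability, and spacing constraint are invented for illustration, not the study's actual parameters.

```python
import random

def oddball_sequence(standard, deviant, n_trials=400, p_deviant=0.2, min_gap=2):
    """Generate one oddball block: rare deviants embedded among standards,
    with at least `min_gap` standards between successive deviants (so the
    effective deviant rate is roughly, not exactly, p_deviant)."""
    seq = []
    since_deviant = min_gap  # allow a deviant from the first trial onwards
    for _ in range(n_trials):
        if since_deviant >= min_gap and random.random() < p_deviant:
            seq.append(deviant)
            since_deviant = 0
        else:
            seq.append(standard)
            since_deviant += 1
    return seq

# Deviant-standard *reversal*: happy and sad swap roles across blocks,
# so each expression serves once as standard and once as deviant.
block_a = oddball_sequence(standard="sad", deviant="happy")
block_b = oddball_sequence(standard="happy", deviant="sad")
print(block_a[:20])
```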

5.
We systematically examined the impact of emotional stimuli on time perception in a temporal reproduction paradigm in which participants reproduced the duration of a facial emotion stimulus using an oval-shaped stimulus, or vice versa. Experiment 1 asked participants to reproduce the duration of an angry face (or the oval) presented for 2,000 ms. Experiment 2 included a range of emotional expressions (happy, sad, angry, and neutral faces as well as the oval stimulus) presented for different durations (500, 1,500, and 2,000 ms). We found that participants over-reproduced the durations of happy and sad faces using the oval stimulus. By contrast, there was a trend towards under-reproduction when the duration of the oval stimulus was reproduced using the angry face. We suggest that increased attention to facial emotion underlies this relativity of time perception.
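The attention account suggested above is often formalised as a pacemaker-accumulator model in which the internal clock effectively runs faster while an arousing emotional face is on screen. The toy sketch below reproduces the direction of both findings (over-reproduction of face durations, under-reproduction with an angry-face probe); the rate values are arbitrary assumptions, not parameters fitted by the authors.

```python
def reproduced_duration(encode_ms, rate_encode, rate_reproduce):
    """Pacemaker-accumulator sketch: pulses accrued while encoding the
    target stimulus are matched during reproduction, possibly at a
    different clock rate."""
    pulses = encode_ms * rate_encode
    return pulses / rate_reproduce

BASE = 1.0      # pulses/ms for a neutral oval (arbitrary assumption)
EMOTION = 1.1   # faster clock while viewing an emotional face (assumption)

# Face encoded, oval reproduced -> over-reproduction (2200 ms)
print(reproduced_duration(2000, rate_encode=EMOTION, rate_reproduce=BASE))
# Oval encoded, angry face used to reproduce -> under-reproduction (~1818 ms)
print(reproduced_duration(2000, rate_encode=BASE, rate_reproduce=EMOTION))
```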

6.
The goal of this research was to examine the effects of facial expressions on the speed of sex recognition. Prior research revealed that sex recognition of female angry faces was slower than that of male angry faces, and that female happy faces are recognized faster than male happy faces. We aimed to replicate and extend the previous research by using a different set of facial stimuli and a different methodological approach, and by examining the effects of previously unexplored expressions (such as crying) on the speed of sex recognition. In the first experiment, we presented facial stimuli of men and women displaying anger, fear, happiness, sadness, and crying, as well as three control conditions expressing no emotion. Results showed that sex recognition of angry females was significantly slower than sex recognition in any other condition, while sad, crying, happy, frightened and neutral expressions did not affect the speed of sex recognition. In the second experiment, we presented angry, neutral and crying expressions in blocks, and again only sex recognition of female angry expressions was slower than for all other expressions. The results are discussed in the context of perceptual features of male and female facial configurations, evolutionary theory and social learning.

7.
Recent studies of the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, a mechanism allowing invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because the diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgusted and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that facial expression categorization is largely viewpoint-invariant, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.

8.
The aim of the current study was to examine how emotional expressions displayed by the face and body influence the decision to approach or avoid another individual. In Experiment 1, we examined approachability judgments provided to faces and bodies presented in isolation that were displaying angry, happy, and neutral expressions. Results revealed that angry expressions were associated with the most negative approachability ratings, for both faces and bodies. The effect of happy expressions was shown to differ for faces and bodies, with happy faces judged more approachable than neutral faces, whereas neutral bodies were considered more approachable than happy bodies. In Experiment 2, we sought to examine how we integrate emotional expressions depicted in the face and body when judging the approachability of face-body composite images. Our results revealed that approachability judgments given to face-body composites were driven largely by the facial expression. In Experiment 3, we then aimed to determine how the categorization of body expression is affected by facial expressions. This experiment revealed that body expressions were less accurately recognized when the accompanying facial expression was incongruent than when neutral. These findings suggest that the meaning extracted from a body expression is critically dependent on the valence of the associated facial expression.

9.
We examined the effect of induced mood, varying in valence and longevity, on local processing of emotional faces. It was found that negative facial expression conveyed by the global level of the face interferes with efficient processing of the local features. The results also showed that the duration of involvement with a mood influenced the local processing. We observed that attending to the local level of faces is not different in short-lived happy and sad mood states. However, as the mood state is experienced for a longer period, local processing was impaired in happy mood compared to sad mood. Taken together, we concluded that both facial expressions and affective states influence processing of the local parts of faces. Moreover, we suggest that mediating factors like the duration of involvement with the mood play a role in the interrelation between mood, attention, and perception.

10.
The present study was designed to examine the operation of depression-specific biases in the identification or labeling of facial expression of emotions. Participants diagnosed with major depression and social phobia and control participants were presented with faces that expressed increasing degrees of emotional intensity, slowly changing from a neutral to a full-intensity happy, sad, or angry expression. The authors assessed individual differences in the intensity of facial expression of emotion that was required for the participants to accurately identify the emotion being expressed. The depressed participants required significantly greater intensity of emotion than did the social phobic and the control participants to correctly identify happy expressions and less intensity to identify sad than angry expressions. In contrast, social phobic participants needed less intensity to correctly identify the angry expressions than did the depressed and control participants and less intensity to identify angry than sad expressions. Implications of these results for interpersonal functioning in depression and social phobia are discussed.
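The intensity-at-identification measure used in such morph paradigms can be pictured as a psychometric-function fit: identification accuracy as a function of morph intensity, with the threshold read off from the fitted curve. The sketch below uses made-up example values, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: accuracy rises from 0 to 1 around threshold x0."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Morph intensity (% of full expression) vs. identification accuracy.
# These are invented illustration values, not the study's results.
intensity = np.array([10, 20, 30, 40, 50, 60, 70, 80])
accuracy  = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.90, 0.97, 0.99])

(x0, k), _ = curve_fit(logistic, intensity, accuracy, p0=[40, 0.1])
print(f"estimated identification threshold: {x0:.1f}% intensity")
```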

11.
The aim of the present study was to investigate the time course of the positive advantage in the expression classification of faces by recording event-related potentials (ERPs). Although neutral faces were classified more quickly than either happy or sad faces, a significant positive classification advantage (PCA), that is, faster classification of happy than of sad faces, was found. In the ERP data, happy faces elicited a smaller N170 and a larger posterior N2 component than sad faces. The P3 was modulated by facial expression, with higher amplitudes and shorter latencies for happy and neutral stimuli than for sad stimuli, and reaction times were significantly correlated with the amplitude and latency of the P3. Overall, these data show a robust PCA in expression classification, beginning as soon as the stimulus is recognized as a face, as indexed by the N170 component.
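The reported brain-behaviour link is, at its core, a correlation between per-participant P3 measures and reaction times. A minimal sketch with invented per-participant averages, purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

# Per-participant mean P3 latency (ms) and reaction time (ms);
# made-up numbers for illustration only.
p3_latency = np.array([420, 450, 430, 480, 465, 440, 455, 470])
rt         = np.array([610, 660, 630, 720, 690, 640, 665, 700])

r, p = pearsonr(p3_latency, rt)
print(f"r = {r:.2f}, p = {p:.3f}")
```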

12.
Emotion influences memory in many ways. For example, when a mood-dependent processing shift is operative, happy moods promote global processing and sad moods direct attention to local features of complex visual stimuli. We hypothesized that an emotional context associated with to-be-learned facial stimuli could preferentially promote global or local processing. At learning, faces with neutral expressions were paired with a narrative providing either a happy or a sad context. At test, faces were presented in an upright or inverted orientation, emphasizing configural or analytical processing, respectively. A recognition advantage was found for upright faces learned in happy contexts relative to those in sad contexts, whereas recognition was better for inverted faces learned in sad contexts than for those in happy contexts. We thus infer that a positive emotional context prompted more effective storage of holistic, configural, or global facial information, whereas a negative emotional context prompted relatively more effective storage of local or feature-based facial information.

13.
Attractive individuals are perceived as having various positive personality qualities, and positive personality qualities can in turn increase perceived attractiveness. However, the developmental origins of the link between attractiveness and personality are not understood. This is important because infant attractiveness ('cuteness') elicits caregiving from adults, and infant personality ('temperament') shapes caregiving behaviour. While research suggests that adults have more positive attitudes towards cuter infants, it is not known whether positive infant temperament can increase the perception of infant cuteness. We investigated the impact of experimentally established infant temperament on adults' perception of cuteness and desire to view individual faces. At baseline, adults rated the cuteness of, and key-pressed to view, images of unfamiliar infants with neutral facial expressions. Training required adults to learn about an infant's 'temperament' through repeated pairing of the neutral infant face with positive or negative facial expressions and vocalizations. Adults then re-rated the original neutral infant faces. Post-training, there were significant changes from baseline: infants who were mostly happy were perceived as cuter, and adults expended greater effort to view them; infants who were mostly sad were not perceived as cuter, and adults expended less effort to view them. Our results suggest that temperament has clear consequences for how adults perceive 'bonnie' babies. Perception of infant cuteness is not based on physical facial features alone and is modifiable through experience.

14.
Using a visual search paradigm, we investigated how age affects attentional bias to emotional facial expressions. In Experiments 1 and 2, participants searched for a discrepant facial expression in a matrix of otherwise homogeneous faces. Both younger and older adults searched more efficiently when the discrepant face was angry rather than happy or neutral. However, when angry faces served as non-target distractors, younger adults' search was less efficient than with happy or neutral distractors. In contrast, older adults searched more efficiently with angry distractors than with happy or neutral distractors, indicating that older adults were better able to inhibit angry facial expressions. In Experiment 3, we found that even a top-down search goal could not override the angry-face superiority effect in guiding attention. In addition, RT distribution analyses indicated that both younger and older adults performed the top-down angry-face search qualitatively differently from the top-down happy-face search. The current research indicates that threat-face processing involves both an automatic attentional shift and a controlled attentional process, and the results suggest that age influenced only the controlled attentional process.
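RT distribution analyses of the kind mentioned here typically compare conditions at matched quantiles rather than only at the mean, to check whether an effect holds across the whole distribution. A minimal sketch with simulated RTs; the distributions and parameters are invented for illustration.

```python
import numpy as np

def rt_quantiles(rts, qs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Summarise an RT distribution at a few quantiles, a simple
    alternative to comparing condition means alone."""
    return np.quantile(np.asarray(rts), qs)

rng = np.random.default_rng(0)
# Invented example data: right-skewed RTs (ms) for two search conditions.
angry_search = 400 + rng.lognormal(mean=5.0, sigma=0.4, size=200)
happy_search = 450 + rng.lognormal(mean=5.2, sigma=0.4, size=200)

print("angry target:", rt_quantiles(angry_search).round(0))
print("happy target:", rt_quantiles(happy_search).round(0))
```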

15.
In this functional magnetic resonance imaging (fMRI) study we examined neural processing of infant faces associated with a happy or a sad temperament in nulliparous women. We experimentally manipulated adult perception of infant temperament in a probabilistic learning task, in which participants learned about an infant's temperament through repeated pairing of the infant face with positive or negative facial expressions and vocalizations. At the end of the task, participants were able to differentiate between "mostly sad" infants who cried often and "mostly happy" infants who laughed often. Afterwards, brain responses to neutral faces of infants with a happy or a sad temperament were measured with fMRI and compared to responses to neutral faces of infants with no temperament association. Our findings show that a brief experimental manipulation of temperament can change brain responses to infant signals. We found increased amygdala connectivity with frontal regions and the visual cortex, including the occipital fusiform gyrus, during the perception of infants with a happy temperament. In addition, amygdala connectivity was positively related to post-manipulation ratings of infant temperament, indicating that amygdala connectivity is involved in encoding the rewarding value of an infant with a happy temperament.
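Task-dependent connectivity of the kind reported here is commonly estimated with a psychophysiological interaction (PPI) regression: a target region's signal is modelled from the seed timecourse, the task regressor, and their product. The sketch below runs such a regression on simulated single-voxel signals; it is an illustrative toy, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vols = 200

# Simulated signals: amygdala seed timecourse, a task boxcar
# (1 = happy-temperament infant on screen), and a target-region voxel.
seed = rng.standard_normal(n_vols)
task = (np.arange(n_vols) % 40 < 20).astype(float)   # illustration only
ppi = seed * (task - task.mean())                     # the interaction term
target = 0.5 * seed + 0.8 * ppi + rng.standard_normal(n_vols)

# GLM: target ~ intercept + seed + task + PPI; the PPI beta indexes
# how much seed-target coupling changes with the task condition.
X = np.column_stack([np.ones(n_vols), seed, task, ppi])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (task-modulated connectivity): {betas[3]:.2f}")
```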

16.
Two experiments competitively test 3 potential mechanisms (negativity inhibiting responses, feature-based accounts, and evaluative context) for the response latency advantage for recognizing happy expressions by investigating how the race of a target can moderate the strength of the effect. Both experiments indicate that target race modulates the happy face advantage, such that European American participants displayed the happy face advantage for White target faces, but displayed a response latency advantage for angry (Experiments 1 and 2) and sad (Experiment 2) Black target faces. This pattern of findings is consistent with an evaluative context mechanism and inconsistent with negativity inhibition and feature-based accounts of the happy face advantage. Thus, the race of a target face provides an evaluative context in which facial expressions are categorized.

17.
Emotional expression and how it is lateralized across the two sides of the face may influence how we detect audiovisual speech. To investigate how these components interact we conducted experiments comparing the perception of sentences expressed with happy, sad, and neutral emotions. In addition we isolated the facial asymmetries for affective and speech processing by independently testing the two sides of a talker's face. These asymmetrical differences were exaggerated using dynamic facial chimeras in which left- or right-face halves were paired with their mirror image during speech production. Results suggest that there are facial asymmetries in audiovisual speech such that the right side of the face and right-facial chimeras supported better speech perception than their left-face counterparts. Affective information was also found to be critical in that happy expressions tended to improve speech performance on both sides of the face relative to all other emotions, whereas sad emotions generally inhibited visual speech information, particularly from the left side of the face. The results suggest that approach information may facilitate visual and auditory speech detection.

18.
Expression influences the recognition of familiar faces
Face recognition has been assumed to be independent of facial expression. We used familiar and unfamiliar faces that were morphed from a happy to an angry expression within a given identity. Participants performed speeded two-choice decisions according to whether or not a face was familiar. Consistent with earlier findings, reaction times for classifying unfamiliar faces were independent of facial expression. In contrast, expression clearly influenced the recognition of familiar faces, with the fastest recognition for moderately happy expressions. This suggests that the representations of familiar faces used for recognition preserve some information about typical emotional expressions.

19.
We investigated the source of the visual search advantage of some emotional facial expressions. An emotional face target (happy, surprised, disgusted, fearful, angry, or sad) was presented in an array of neutral faces. Happy targets were detected faster, with angry and, especially, sad targets being detected more poorly. Physical image properties (e.g., luminance) were ruled out as a potential source of these differences in visual search. The search advantage is instead partly due to facilitated processing of affective content, as shown by an emotion identification task: happy expressions were identified faster than the other expressions and were less likely to be confused with neutral faces, whereas misjudgements occurred more often for angry and sad expressions. Nevertheless, the distinctiveness of some local features (e.g., teeth) that are consistently associated with emotional expressions plays the strongest role in the search advantage pattern. When the contribution of these features to visual search was factored out statistically, the advantage disappeared.
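The final "factored out statistically" step can be pictured as regressing the search advantage on rated feature distinctiveness and inspecting what remains. A minimal sketch with invented per-expression numbers, not the study's data:

```python
import numpy as np

# Invented values: mean search RT advantage (ms, relative to a neutral
# baseline) and rated distinctiveness of local features (0-1 scale).
expressions     = ["happy", "surprised", "disgusted", "fearful", "angry", "sad"]
rt_advantage    = np.array([120.0, 80.0, 60.0, 30.0, -20.0, -50.0])
distinctiveness = np.array([0.90, 0.70, 0.55, 0.40, 0.20, 0.10])

# Regress the advantage on feature distinctiveness ...
X = np.column_stack([np.ones(len(expressions)), distinctiveness])
beta, *_ = np.linalg.lstsq(X, rt_advantage, rcond=None)

# ... and inspect what is left once features are accounted for.
residual = rt_advantage - X @ beta
for name, res in zip(expressions, residual):
    print(f"{name:>9}: residual advantage {res:+.1f} ms")
```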
