Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and testing preschoolers’ discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum “felt the same” or “felt different.” In the identification task, images were presented individually and participants were asked to label the emotion displayed on the face (e.g., “Does she look happy or sad?”). Results suggest that 3.5-year-olds have the same category boundary as adults: they were more likely to report that an image pair felt “different” when that pair crossed the category boundary. These results suggest that 3.5-year-olds perceive happy and sad emotional facial expressions categorically, as adults do. Categorizing emotional expressions is advantageous for children if it allows them to use social information faster and more efficiently.

2.
Correctly perceiving emotions in others is a crucial part of social interactions. We constructed a set of dynamic stimuli to determine the relative contributions of the face and body to the accurate perception of basic emotions. We also manipulated the length of these dynamic stimuli in order to explore how much information is needed to identify emotions. The findings suggest that even a short exposure time of 250 milliseconds provided enough information to correctly identify an emotion above the chance level. Furthermore, we found that recognition patterns from the face alone and the body alone differed as a function of emotion. These findings highlight the role of the body in emotion perception and suggest an advantage for angry bodies: in contrast to all other emotions, recognition rates for angry bodies alone were comparable to those from the face, which may be advantageous for perceiving imminent threat from a distance.

3.
A total of 64 children, aged 7 and 10, watched a clown performing three sketches rated as very funny by the children. Two experimental conditions were created by asking half of the participants to suppress their laughter. Facial expressions were videotaped and analysed with FACS. For both ages, the results show a significantly shorter duration (but not a lower frequency) of episodes of laughter and Duchenne smiles, and a greater frequency of facial control movements, in the suppression group compared to the free-expression group. The detailed results on the individual facial action units used to control amusement expressions suggest hypotheses about the nature of the underlying processes. The participants' explicit knowledge of their control strategies was assessed through standardised interviews. Although behavioural control strategies were reported equally frequently by the two age groups, 10-year-olds verbalised more mental control strategies than 7-year-olds. This theoretically expected difference was not related to the actual ability to control facial expression. This result challenges the commonly held assumption that explicit knowledge of control strategies results in a greater ability to execute such control in ongoing social interactions.

4.
We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles: restrained, standard, and exaggerated intention. Participants used a 5-point Likert scale to rate each performance on 19 different emotional qualities. The data analysis revealed that variations in expressive intention had their greatest impact when the performances could be seen; the ratings from participants who could only hear the performances were the same across the three expressive styles. Evidence was also found for an interaction effect leading to an emergent property, intensity of positive emotion, when participants both heard and saw the musical performances. An exploratory factor analysis revealed orthogonal dimensions for positive and negative emotions, which may account for the subjective experience that many listeners report of having multi-valent or complex reactions to music, such as “bittersweet.”

5.
Emotional facial expressions are perceived categorically. Little is known about individual differences in the position of the category boundary, or about whether category boundaries differ across stimulus continua. Similarly, little is known about whether individuals’ category boundaries are stable over time. We investigated these topics in a series of experiments designed to locate category boundaries using converging evidence from identification and discrimination tasks. We made comparisons both across individuals and within individuals across two sessions spanning a week. Results show differences between individuals in the location of category boundaries, and suggest that these differences are stable over time. We also found differences in boundary location when we compared images depicting different models.
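
For readers who want to see how such a boundary can be located from identification data, here is a minimal sketch (not the authors' code): it fits a logistic psychometric function to invented identification proportions and reads the boundary off the 50% point. The morph levels and response proportions are illustrative assumptions, not data from the study.

```python
# A minimal sketch (not the authors' code) of locating a category boundary
# from identification data: fit a logistic psychometric function and read
# the boundary off the 50% point. All numbers below are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Probability of choosing category B at morph level x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

morph_levels = np.arange(11)   # 0 = expression A ... 10 = expression B
p_category_b = np.array([0.02, 0.03, 0.05, 0.10, 0.25, 0.55,
                         0.80, 0.92, 0.96, 0.98, 0.99])

(boundary, slope), _ = curve_fit(logistic, morph_levels, p_category_b, p0=[5.0, 1.0])
print(f"Estimated category boundary at morph level {boundary:.2f} (slope {slope:.2f})")
```

Fitting the same function per participant and per session would support the kind of between-individual and test–retest comparisons described above.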

6.
Recognising a facial expression is more difficult when the expresser's body conveys incongruent affect. Existing research has documented such interference for universally recognisable bodily expressions. However, it remains unknown whether learned, conventional gestures can interfere with facial expression processing. Study 1 participants (N = 62) viewed videos of people simultaneously producing facial expressions and hand gestures and reported the valence of either the face or hand. Responses were slower and less accurate when the face-hand pairing was incongruent compared to congruent. We hypothesised that hand gestures might exert an even stronger influence on facial expression processing when other routes to understanding the meaning of a facial expression, such as with sensorimotor simulation, are disrupted. Participants in Study 2 (N = 127) completed the same task, but the facial mobility of some participants was restricted, which disrupted face processing in prior work. The hand-face congruency effect from Study 1 was replicated. The facial mobility manipulation affected males only, and it did not moderate the congruency effect. The present work suggests the affective meaning of conventional gestures is processed automatically and can interfere with face perception, but does not suggest that perceivers rely more on gestures when sensorimotor face processing is disrupted.

7.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
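
As an illustration of how an 11-image continuum in equal steps can be generated, here is a minimal sketch. Real morphing software also warps facial landmarks; this pixel-level cross-fade between two placeholder arrays (not the photo exemplars used in the study) only illustrates the linear, equal-step weighting.

```python
# A minimal sketch of generating an 11-image continuum in equal steps by
# linear interpolation. The two end-point arrays are random placeholders,
# not real photographs, and true morphing would also warp facial landmarks.
import numpy as np

rng = np.random.default_rng(0)
expression_a = rng.random((256, 256))    # stands in for the first end-point photo
expression_b = rng.random((256, 256))    # stands in for the second end-point photo

n_images = 11
continuum = [(1.0 - w) * expression_a + w * expression_b    # weight rises linearly
             for w in np.linspace(0.0, 1.0, n_images)]      # 0.0, 0.1, ..., 1.0

print(f"{len(continuum)} images, step size {1.0 / (n_images - 1):.1f}")
```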

8.
There remains conflict in the literature about the lateralisation of affective face perception. Some studies have reported a right hemisphere advantage irrespective of valence, whereas others have found a left hemisphere advantage for positive, and a right hemisphere advantage for negative, emotion. Differences in injury aetiology and chronicity, proportion of male participants, participant age, and the number of emotions used within a perception task may contribute to these contradictory findings. The present study therefore controlled and/or directly examined the influence of these possible moderators. Right brain-damaged (RBD; n = 17), left brain-damaged (LBD; n = 17), and healthy control (HC; n = 34) participants completed two face perception tasks (identification and discrimination). No group differences in facial expression perception according to valence were found. Across emotions, the RBD group was less accurate than the HC group; however, RBD and LBD group performance did not differ. The lack of difference between the RBD and LBD groups indicates that both hemispheres are involved in the perception of positive and negative expressions. The inclusion of older adults and the well-defined chronicity range of the brain-damaged participants may have moderated these findings. Participant sex and general face perception ability did not influence performance. Furthermore, while the RBD group was less accurate than the LBD group when the identification task tested two emotions, performance of the two groups was indistinguishable when the number of emotions increased (four or six). This suggests that task demand moderates a study’s ability to find hemispheric differences in the perception of facial emotion.

9.
A rapid response to a threatening face in a crowd is important for successful interaction in social environments. Visual search tasks have been employed to determine whether there is a processing advantage for detecting an angry face in a crowd, compared to a happy face. The empirical findings supporting the “anger superiority effect” (ASE), however, have been criticized on the basis of possible low-level visual confounds and because of the limited ecological validity of the stimuli. Moreover, a “happiness superiority effect” is usually found with more realistic stimuli. In the present study, we tested the ASE by using dynamic (and static) images of realistic human faces, with validated emotional expressions of similar intensities, after controlling for bottom-up visual saliency and the amount of image motion. In five experiments, we found strong evidence for an ASE when using dynamic displays of facial expressions, but not when the emotions were expressed by static face images.

10.
We examined how the perceived age of adult faces is affected by adaptation to younger or older adult faces. Observers viewed images of a synthetic male face simulating ageing over a modelled range from 15 to 65 years. Age was varied by changing shape cues or textural cues. Age level was varied in a staircase procedure to find the observer's subjective category boundary between “old” and “young”. These boundaries were strongly biased by adaptation to the young or old face, with significant aftereffects induced by either shape or textural cues. A further experiment demonstrated comparable aftereffects for photorealistic images of average older or younger adult faces, and found that the aftereffects showed some selectivity for a change in gender but also transferred strongly across gender. This transfer shows that adaptation can adjust to the attribute of age somewhat independently of other facial attributes. These findings suggest that perceived age, like many other natural facial dimensions, is highly susceptible to adaptation, and that this adaptation can be carried by both the structural and textural changes that normally accompany facial ageing.
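
A staircase of the kind mentioned above can be sketched as follows. The simulated observer, its assumed true boundary of 40 years, the step sizes, and the trial count are illustrative assumptions, not values taken from the study.

```python
# A minimal sketch of a 1-up/1-down staircase homing in on an "old"/"young"
# category boundary. The simulated observer and all numeric settings are
# invented for illustration only.
import random

def observer_says_old(age, true_boundary=40.0, noise_sd=3.0):
    """Simulated response: 'old' becomes more likely as age exceeds the boundary."""
    return random.gauss(age, noise_sd) > true_boundary

age, step = 15.0, 8.0            # start at the young end of the 15-65 range
tested_ages = []
for _ in range(40):
    tested_ages.append(age)
    if observer_says_old(age):
        age -= step              # observer said "old": present a younger face next
    else:
        age += step              # observer said "young": present an older face next
    age = min(65.0, max(15.0, age))
    step = max(1.0, step * 0.9)  # shrink the step to converge on the boundary

estimate = sum(tested_ages[-10:]) / 10.0   # average of the last tested ages
print(f"Estimated old/young boundary: about {estimate:.1f} years")
```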

11.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

12.
Previous research has shown that automatic evaluations can be highly context dependent. Expanding on past research demonstrating context effects for existing attitudes toward familiar objects, the present research examined basic principles that guide the formation of context-dependent versus context-independent automatic attitudes. Results from four experiments showed that: (a) newly formed attitudes generalised to novel contexts when prior experiences with the attitude object were evaluatively homogeneous; (b) when prior experiences were evaluatively heterogeneous, automatic evaluations became context sensitive, such that they reflected the contingency between the valence of prior experiences and the context in which these experiences occurred; and (c) when prior experiences were evaluatively heterogeneous across different contexts, novel contexts elicited automatic evaluations that reflected the valence of first experiences with the attitude object. Implications for research on automatic evaluation and attitude change are discussed.

13.
Numerous studies have shown an exacerbation of attentional bias towards threat in anxiety states. However, the cognitive mechanisms responsible for these attentional biases remain largely unknown, and there is a need to consider the nature of the attentional processes in operation (hypervigilance, avoidance, or disengagement). We adapted a dot-probe paradigm to record behavioral and electrophysiological responses in 26 participants reporting high or low fear of evaluation, a major component of social anxiety. Pairs of faces including a neutral and an emotional face (displaying anger, fear, disgust, or happiness) were presented for 200 ms and then replaced by a neutral target to be discriminated. Results show that anxious participants were characterized by an increased P1 in response to pairs of faces, irrespective of the emotional expression included in the pair. They also showed an increased P2 in response to angry–neutral pairs selectively. Finally, in anxious participants, the P1 response to targets was enhanced when the target replaced an emotional face, whereas non-anxious participants showed no difference between the two conditions. These results indicate an early hypervigilance to face stimuli in social anxiety, coupled with difficulty in disengaging from threat and sustained attention to emotional stimuli. They are discussed within the framework of current models of anxiety and psychopathology.

14.
Negotiators often fail to reach integrative (“win-win”) agreements because they think that their own and the other party’s preferences are diametrically opposed (the so-called fixed-pie perception). We examined how verbal (Experiment 1) and nonverbal (Experiment 2) emotional expressions may reduce fixed-pie perception and promote integrative behavior. In a two-issue computer-simulated negotiation, participants negotiated with a counterpart displaying one of the following emotional response patterns: (1) anger on both issues, (2) anger on the participant’s high-priority issue and happiness on the participant’s low-priority issue, (3) happiness on the high-priority issue and anger on the low-priority issue, or (4) happiness on both issues. In both studies, the third pattern reduced fixed-pie perception and increased integrative behavior, whereas the second pattern amplified the bias and reduced integrative behavior. Implications for how emotions shape social exchange are discussed.

15.
Previous research has highlighted theoretical and empirical links between measures of both personality and trait emotional intelligence (EI) and the ability to decode facial expressions of emotion. Research has also found that the posed, static characteristics of the photographic stimuli used to explore these links affect the decoding process and differentiate such stimuli from the natural expressions they represent. This undermines the ecological validity of established trait-emotion decoding relationships. This study addresses these methodological shortcomings by testing relationships between personality and trait EI and the reliability of participant ratings of dynamic, spontaneously elicited expressions of emotion. Fifty participants completed personality and self-report EI questionnaires, and used a computer-logging program to continuously rate change in emotional intensity expressed in video clips. Each clip was rated twice to obtain an intra-rater reliability score. The results provide limited support for links between both trait EI and personality variables and how reliably we decode natural expressions of emotion. Limitations and future directions are discussed.

16.
The overestimation of the duration of fear-inducing stimuli relative to neutral stimuli is a robust finding within the temporal perception literature. Whilst this effect is consistently reported with auditory and visual stimuli, there has been little examination of whether it can be replicated using painful stimulation. The aim of the current study was, therefore, to explore how pain and the anticipation of pain affected perceived duration of time. A modified verbal estimation paradigm was developed in which participants estimated the duration of shapes previously conditioned to be associated with pain, compared to those not associated with pain. Duration estimates were significantly longer on trials in which pain was received or anticipated than on control trials. Slope and intercept analysis revealed that the anticipation of pain resulted in steeper slopes and greater intercept values than for control trials. The results suggest that increased arousal and attention, when anticipating and experiencing pain, result in longer perceived durations. The results are discussed in relation to internal clock theory and neurocognitive models of time perception.
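
The slope and intercept analysis referred to above is, at heart, a linear regression of estimated duration on actual duration for each condition. The sketch below uses invented durations and estimates purely to show the computation; the condition labels are assumptions, not the study's design.

```python
# A minimal sketch of a slope-and-intercept analysis: regress estimated
# duration on actual duration separately for each condition. All numbers
# below are invented purely to show the computation.
import numpy as np

actual_ms = np.array([400, 600, 800, 1000, 1200])
estimates_ms = {
    "control":          np.array([380, 590, 770, 980, 1150]),
    "pain anticipated": np.array([460, 710, 950, 1190, 1430]),
}

for condition, est in estimates_ms.items():
    slope, intercept = np.polyfit(actual_ms, est, 1)   # first-degree (linear) fit
    print(f"{condition}: slope = {slope:.2f}, intercept = {intercept:.0f} ms")
```

On internal clock accounts, a steeper slope is usually read as a faster pacemaker (for example, through arousal), whereas a larger intercept is attributed to attention- or switch-related stages that add a constant amount to the estimate.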

17.
Event-related brain potentials were measured in 7- and 12-month-old infants to examine the development of processing happy and angry facial expressions. In 7-month-olds a larger negativity to happy faces was observed at frontal, central, temporal and parietal sites (Experiment 1), whereas 12-month-olds showed a larger negativity to angry faces at occipital sites (Experiment 2). These data suggest that processing of these facial expressions undergoes development between 7 and 12 months: while 7-month-olds exhibit heightened sensitivity to happy faces, 12-month-olds resemble adults in their heightened sensitivity to angry faces. In Experiment 3 infants' visual preference was assessed behaviorally, revealing that the differences in ERPs observed at 7 and 12 months do not simply reflect differences in visual preference.

18.
It is often assumed that intimacy and familiarity will lead to better and more effective emotional communication between two individuals. However, research has failed to unequivocally support this claim. The present study proposes that close dyads show a greater advantage in decoding subdued facial cues than in decoding highly intense expressions. A total of 43 close-friend dyads and 49 casual-acquaintance dyads (all women) were compared on their recognition of their partner's and a stranger's subdued facial expressions. Dyadic analyses indicate that close friends were more accurate, and also improved more rapidly, than casual acquaintances in decoding one another's subdued expressions of sadness, anger, and happiness, especially the two negative emotions, but not in detecting the stranger's subdued expressions. The results strongly suggest that intimacy fosters more accurate decoding of subdued facial expressions.

19.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and across speech and song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly from voice-only singing, yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent to the voice in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, while also revealing differences in perception and acoustic-motor production.

20.
Two studies examined the association between attachment avoidance and empathic accuracy when perceiving strangers. In Study 1, participants with high attachment avoidance revealed lower accuracy in identifying the thoughts and feelings of their interaction partner compared with participants with low attachment avoidance. High-avoidance participants also tended to mentally distance themselves from the other and thought less often about him or her. Study 2 replicated the pattern of lower empathic accuracy for high-attachment-avoidance participants, this time when respondents did not actually interact with the target of perception. We discuss reasons why people with high attachment avoidance might show impaired empathic accuracy while interacting with strangers. We also consider more general influences of attachment avoidance on perception processes and, consequently, on social success.
