Similar Articles
1.
The valence hypothesis suggests that the right hemisphere is specialised for negative emotions and the left hemisphere for positive emotions (Silberman & Weingartner, 1986). It is unclear to what extent valence-specific effects in facial emotion perception depend upon the gender of the perceiver. To explore this question, 46 participants completed a free-viewing lateralised emotion perception task that involved judging which of two faces expressed a particular emotion. Eye fixations of 24 of the participants were recorded with an eye tracker. A significant valence-specific laterality effect was obtained, with positive emotions identified more accurately when presented to the right of centre and negative emotions identified more accurately when presented to the left of centre. The valence-specific laterality effect did not depend on the gender of the perceiver. Analysis of the eye-tracking data showed that males made more fixations while recognising the emotions and that the left eye was fixated substantially more often than the right eye during emotion perception. Finally, in a control condition where both faces were identical but expressed a faint emotion, participants were significantly more likely to select the right-hand face when the emotion label was positive. This finding adds to evidence that valence effects in facial emotion perception are caused not only by the perception of the emotion itself but also by other processes, such as response bias.

2.
The right hemisphere has often been viewed as dominant in the processing of emotional information. Other evidence indicates that both hemispheres process emotional information but that their involvement is valence specific, with the right hemisphere dealing with negative emotions and the left hemisphere preferentially processing positive emotions. This has been found under both restricted (Reuter-Lorenz & Davidson, 1981) and free viewing conditions (Jansari, Tranel, & Adolphs, 2000). It remains unclear whether the valence-specific laterality effect is also sex specific or is influenced by the handedness of participants. To explore this issue we repeated Jansari et al.'s free-viewing laterality task with 78 participants. We found a valence-specific laterality effect in women but not men, with women discriminating negative emotional expressions more accurately when the face was presented on the left-hand side and positive emotions more accurately when the face was presented on the right-hand side. These results indicate that under free viewing conditions women are more lateralised for the processing of facial emotion than are men. Handedness did not affect the lateralised processing of facial emotion. Finally, participants demonstrated a response bias on control trials, where facial emotion did not differ between the faces: they selected the left-hand side more frequently when they believed the expression was negative and the right-hand side more frequently when they believed the expression was positive. This response bias can produce a spurious valence-specific laterality effect, which might have contributed to the conflicting findings in the literature.

3.
The majority of studies have demonstrated a right hemisphere (RH) advantage for the perception of emotions. Other studies have found that the involvement of each hemisphere is valence specific, with the RH better at perceiving negative emotions and the left hemisphere (LH) better at perceiving positive emotions [Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the two cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609-613]. To account for valence laterality effects in emotion perception, we propose an 'expectancy' hypothesis, which suggests that valence effects are obtained when the top-down expectancy to perceive an emotion outweighs the strength of the bottom-up perceptual information enabling the discrimination of that emotion. A dichotic listening task was used to examine alternative explanations of valence effects in emotion perception. Emotional sentences (spoken in a happy or sad tone of voice) and morphed-happy and morphed-sad sentences (which blended a neutral version of the sentence with the pitch of the emotional sentence) were paired with neutral versions of each sentence and presented dichotically. A control condition was also used, consisting of two identical neutral sentences presented dichotically, with one channel arriving 7 ms before the other. In support of the RH hypothesis, there was a left ear advantage for the perception of both sad and happy emotional sentences. However, the morphed sentences showed no ear advantage, suggesting that the RH is specialised for the perception of genuine emotions and that a laterality effect may be a useful tool for the detection of fake emotion. Finally, in the control condition we obtained an interaction between the expected emotion and the effect of ear lead: participants tended to select the ear that received the sentence first when they expected a 'sad' sentence, but not when they expected a 'happy' sentence. The results are discussed in relation to the different theoretical explanations of valence laterality effects in emotion perception.

4.
Findings from subjects with unilateral brain damage, as well as from normal subjects studied with tachistoscopic paradigms, argue that emotion is processed differently by each brain hemisphere. An open question concerns the extent to which such lateralised processing might occur under natural, free-viewing conditions. To explore this issue, we asked 28 normal subjects to discriminate emotions expressed by pairs of faces shown side by side, with no time or viewing constraints. Images of neutral expressions were paired with morphed images of very faint emotional expressions (happiness, surprise, disgust, fear, anger, or sadness). We found a surprising and robust laterality effect: when discriminating negative emotional expressions, subjects performed significantly better when the emotional face was to the left of the neutral face; conversely, when discriminating positive expressions, subjects performed better when the emotional face was to the right. We interpret this valence-specific laterality effect as consistent with the idea that the right hemisphere is specialised to process negative emotions, whereas the left is specialised to process positive emotions. The findings have important implications for how humans perceive facial emotion under natural conditions.

5.
Older adults have greater difficulty than younger adults perceiving vocal emotions. To better characterise this effect, we explored its relation to age differences in sensory, cognitive, and emotional functioning. Additionally, we examined the role of speaker age and listener sex. Participants (N = 163) aged 19–34 years and 60–85 years categorised neutral sentences spoken by ten younger and ten older speakers with a happy, neutral, sad, or angry voice. Acoustic analyses indicated that expressions from younger and older speakers denoted the intended emotion with similar accuracy. As expected, younger participants outperformed older participants, and this effect was statistically mediated by age-related declines in both optimism and working memory. Additionally, age differences in emotion perception were larger for younger than for older speakers, and the advantage for perceiving younger over older speakers was greater in younger than in older participants. Finally, a female perception benefit was less pervasive in the older than in the younger group. Together, these findings suggest that the role of age in emotion perception is multi-faceted: it is linked to emotional and cognitive change, to processing biases that favour young and own-age expressions, and to the different aptitudes of women and men.

6.
《Brain and cognition》2011,75(3):324-331
Recent research has examined whether the expectancy of an emotion can account for subsequent valence-specific laterality effects for prosodic emotion, but no research has examined this effect for facial emotion. In the present study (n = 58), we investigated this issue using two tasks: an emotional face perception task and a novel word task that involved categorising positive and negative words. In the face perception task, a valence-specific laterality effect was found for surprised (positive) and angry (negative) faces in the control but not the expectancy condition. Interestingly, lateralisation differed with face gender, revealing a left-hemisphere advantage for male faces and a right-hemisphere advantage for female faces. In the word task, an affective priming effect was found, with higher accuracy when the valence of the picture prime and the word target were congruent. Target words were also responded to faster when presented to the left visual field (LVF) than to the right visual field (RVF) in the expectancy but not the control condition. These findings suggest that expecting an emotion influences lateralised processing, but that this differs with the perceptual/experiential dimension of the task, and that hemispheric processing of emotional expressions appears to differ with the gender of the face.

7.
Facial emotion-recognition difficulties have been reported in school-aged children with behavior problems; little is known, however, about this association in preschool children or about vocal emotion recognition. The current study explored the association between facial and vocal emotion recognition and behavior problems in a sample of 3- to 6-year-old children. Fifty-seven children at enriched risk of behavior problems (41 recruited from the general population and 16 referred to local clinics for behavior problems) were each presented with a series of vocal and facial stimuli expressing different emotions (angry, happy, and sad) at low and high intensity. Parents rated children's externalizing and internalizing behavior problems. Vocal and facial emotion-recognition accuracy was negatively correlated with externalizing, but not internalizing, behavior problems, independent of emotion type. Within the externalizing domain, the effects were independently associated with hyperactivity rather than with conduct problems. The results highlight the importance of using vocal as well as facial stimuli when studying the relationship between emotion recognition and behavior problems. Future studies should test the hypothesis that the difficulties in responding to adult instructions and commands seen in children with attention deficit/hyperactivity disorder (ADHD) may be due to deficits in the processing of vocal emotions.

8.
Pell MD 《Brain and cognition》2002,48(2-3):499-504
This report describes some preliminary attributes of stimuli developed for future evaluation of nonverbal emotion in neurological populations with acquired communication impairments. Facial and vocal exemplars of six target emotions were elicited from four male and four female encoders and then prejudged by 10 young decoders to establish the category membership of each item at an acceptable consensus level. Representative stimuli were then presented to 16 additional decoders to gather indices of how category membership and encoder gender influenced recognition accuracy of emotional meanings in each nonverbal channel. Initial findings pointed to greater facility in recognizing target emotions from facial than from vocal stimuli overall and revealed significant accuracy differences among the six emotions in both the vocal and facial channels. The gender of the encoder portraying the emotional expressions was also a significant factor in how well decoders recognized specific emotions (disgust, neutral), but only in the facial condition.

9.
Mood-congruency effects lead us to expect that the emotion perceived in a face will be biased towards the perceiver's own mood, yet the findings in the scant literature on such mood effects in healthy populations have not consistently supported this expectation. Employing effective mood-manipulation techniques that ensured the intended mood was sustained throughout the perception task, we explored mood-congruent biases in the perceived intensity and recognition accuracy of emotions. Using realistic face stimuli with expressive cues of happiness and sadness, we demonstrated that happy, neutral, and ambiguous expressions were perceived more positively in a positive than in a negative mood. The mood-congruency effect decreased with the degree of perceived negativity in the expression. Also, males were more affected by the mood-congruency effect in intensity perception than were females. We suggest that the greater salience and better processing of negative stimuli, and the superior cognitive ability of females in emotion perception, are responsible for these observations. We found no evidence for a mood-congruency effect in the recognition accuracy of emotions and suggest, with supporting evidence, that past reports of this effect may be attributable to a mood-driven response bias.

10.
Empirical tests of the "right hemisphere dominance" versus "valence" theories of emotion processing are confounded by known sex differences in lateralization. Moreover, information about the sex of the person posing an emotion might be processed differently by men and women because of an adaptive male bias to notice expressions of threat and vigilance in other male faces. The purpose of this study was to investigate whether the sex of the poser and the emotion displayed influenced lateralization in men and women, by analyzing "laterality quotient" scores on a test that depicts vertically split chimeric faces, formed with one half showing a neutral expression and the other half showing an emotional expression. We found that men (N = 50) were significantly more lateralized than women (N = 44) for emotions indicative of vigilance and threat (happy, sad, angry, and surprised) in male faces relative to female faces. These data indicate that sex differences in functional cerebral lateralization for facial emotion may be specific to the emotion presented and to the sex of the face presenting it.
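The chimeric-faces paradigm above is scored with a "laterality quotient" per participant. As a minimal sketch, here is one common way such a score is computed from left/right choices; the exact formula used in the study is not given in the abstract, so this function is an assumption:

```python
def laterality_quotient(left_choices: int, right_choices: int) -> float:
    """Hypothetical chimeric-faces score: (R - L) / (R + L), in [-1, 1].

    left_choices  -- trials where the chimera with the emotional half in
                     the left visual field was judged more emotional
    right_choices -- trials where the right-visual-field chimera won

    Negative values indicate a left-visual-field (right-hemisphere) bias;
    positive values indicate a right-visual-field (left-hemisphere) bias.
    """
    total = left_choices + right_choices
    if total == 0:
        raise ValueError("at least one choice trial is required")
    return (right_choices - left_choices) / total

# Example: 26 of 36 trials favour the left visual field, so LQ < 0.
print(laterality_quotient(left_choices=26, right_choices=10))  # -0.444...
```

A per-participant score like this is what makes group comparisons (men versus women, male versus female faces) straightforward to run with standard tests.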

11.
Research on vocal expressions of emotion indicates that people can identify emotions from the voice with relatively high accuracy. In addition, fairly consistent vocal profiles for specific emotions have been identified. However, important methodological issues remain to be addressed. In this paper, we address whether there are individual differences in the manner in which particular emotions are expressed vocally, and whether trained speakers' portrayals of emotion are in some sense superior to untrained speakers' portrayals. Consistent support was found for differences across speakers in the manner in which they expressed the same emotions. No accompanying relationship was found between these differences in expression and the accuracy with which the expressions were identified. Little evidence for the superiority of trained speakers was found. Implications of these findings for future studies of vocal expressions of emotion, as well as for our understanding of emotions in general, are discussed.

12.
The present study examined acoustic cue utilisation in the perception of vocal emotions. Two sets of vocal-emotional stimuli were presented to 35 German and 30 American listeners: (1) sentences in German spoken with five different vocal emotions, and (2) systematically rate- or pitch-altered versions of the original emotional stimuli. In addition to response frequencies for the emotional categories, activity ratings were obtained. For the systematically altered stimuli, slow rate was reliably associated with the "sad" label, whereas fast rate was classified as angry, frightened, or neutral. Manipulation of pitch variation was less potent than rate manipulation in influencing emotional category choices: reduced pitch variation was associated with perception as sad or neutral, while greater pitch variation increased frightened, angry, and happy responses. Performance was highly similar for the two samples, although across tasks German subjects perceived greater variability of activity in the emotional stimuli than did American participants.
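Rate- and pitch-altered stimuli of the kind used here can be approximated with off-the-shelf audio tools. A minimal sketch using the librosa library follows; the file name, stretch factor, and semitone shift are illustrative assumptions, and a global pitch shift is only a crude stand-in for the study's manipulation of pitch variation:

```python
import librosa
import soundfile as sf

# Load an emotional utterance (placeholder path).
y, sr = librosa.load("emotional_sentence.wav", sr=None)

# Slow the speaking rate by ~25% without changing pitch
# (rate < 1.0 stretches the signal; the factor is illustrative).
slowed = librosa.effects.time_stretch(y, rate=0.75)

# Shift the pitch down two semitones without changing rate
# (a crude proxy; the study manipulated pitch variation, not mean pitch).
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2.0)

sf.write("sentence_slow.wav", slowed, sr)
sf.write("sentence_shifted.wav", shifted, sr)
```

Decoupling rate and pitch in this way is what allows listener responses to be attributed to one acoustic cue at a time.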

13.
Young and old adults' ability to recognize emotions from vocal expressions and music performances was compared. The stimuli consisted of (a) acted speech (anger, disgust, fear, happiness, and sadness, each posed with both weak and strong emotion intensity), (b) synthesized speech (anger, fear, happiness, and sadness), and (c) short melodies played on the electric guitar (anger, fear, happiness, and sadness, each played with both weak and strong emotion intensity). The listeners' recognition of discrete emotions and emotion intensity was assessed, and the recognition rates were controlled for various response biases. Results showed emotion-specific age-related differences in recognition accuracy: old adults consistently received significantly lower recognition rates for negative, but not for positive, emotions for both speech and music stimuli. Some age-related differences were also evident in the listeners' ratings of emotion intensity. The results show the importance of considering individual emotions in studies of age-related differences in emotion recognition.
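Controlling recognition rates for response bias, as this study reports doing, is commonly handled with Wagner's (1993) unbiased hit rate, which discounts indiscriminate overuse of a response label. A minimal sketch follows, under the assumption that this or a similar correction applies; the confusion matrix is a toy illustration, not the study's data:

```python
import numpy as np

def unbiased_hit_rates(confusions: np.ndarray) -> np.ndarray:
    """Wagner's (1993) unbiased hit rate per emotion category.

    confusions[i, j] = number of times stimulus emotion i received
    response j. Hu_i = hits_i**2 / (stimulus_total_i * response_total_i).
    """
    hits = np.diag(confusions).astype(float)
    stim_totals = confusions.sum(axis=1)   # stimuli presented per emotion
    resp_totals = confusions.sum(axis=0)   # times each response was used
    denom = stim_totals * resp_totals
    return np.divide(hits ** 2, denom, out=np.zeros_like(hits),
                     where=denom > 0)

# Toy data: rows/columns = anger, fear, happiness, sadness.
m = np.array([[8, 1, 0, 1],
              [2, 6, 1, 1],
              [0, 0, 10, 0],
              [1, 2, 0, 7]])
print(unbiased_hit_rates(m))
```

Because Hu penalizes a listener who labels everything "sad", comparisons of young and old groups are not distorted by age differences in labeling strategy.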

14.
Recognition of facial affect in Borderline Personality Disorder
Patients with Borderline Personality Disorder (BPD) have been described as emotionally hyperresponsive, especially to anger and fear in social contexts. The aim was to investigate whether BPD patients are more sensitive but less accurate in basic emotion recognition, and whether they show a bias towards perceiving anger and fear when evaluating ambiguous facial expressions. Twenty-five women with BPD were compared with healthy controls on two facial emotion recognition tasks. The first task assessed the subjective detection threshold as well as the number of evaluation errors for six basic emotions. The second task assessed response bias to blends of basic emotions. BPD patients showed no general deficit on the affect recognition task, but did show enhanced learning over the course of the experiment. For ambiguous emotional stimuli, we found a bias towards the perception of anger in the BPD patients, but not towards fear. BPD patients are thus accurate in perceiving facial emotions, and are probably more sensitive to familiar facial expressions; they show a bias towards perceiving anger when socio-affective cues are ambiguous. Interpersonal training should focus on the differentiation of ambiguous emotions in order to reduce biased appraisal of others.

15.
This experiment examines how emotion is perceived from the facial and vocal cues of a speaker. Three levels of facial affect were presented using a computer-generated face. Three levels of vocal affect were obtained by recording the voice of a male amateur actor who spoke a semantically neutral word in different simulated emotional states. These two independent variables were presented to subjects in all possible permutations (visual cues alone, vocal cues alone, and visual and vocal cues together), giving a total set of 15 stimuli. Subjects were asked to judge the emotion of each stimulus in a two-alternative forced-choice task (HAPPY or ANGRY). The results indicate that subjects evaluate and integrate information from both modalities to perceive emotion, and that the influence of one modality is greater to the extent that the other is ambiguous (neutral). The fuzzy logical model of perception (FLMP) fit the judgments significantly better than an additive model, which weakens theories based on an additive combination of modalities, categorical perception, or influence from only a single modality.
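The model comparison at the centre of this study reduces to two integration rules. A minimal sketch of the standard FLMP (multiplicative) rule against one common additive formalization follows; the support values are illustrative:

```python
def flmp(face: float, voice: float) -> float:
    """FLMP: multiplicative integration of the support (0-1) each
    modality gives one response (e.g. HAPPY), normalised over the
    two response alternatives."""
    num = face * voice
    return num / (num + (1.0 - face) * (1.0 - voice))

def additive(face: float, voice: float) -> float:
    """A simple additive rule: average the two supports."""
    return 0.5 * (face + voice)

# An unambiguous voice (0.9) paired with a neutral face (0.5):
# FLMP lets the informative modality dominate (0.9), while the
# additive rule dilutes it (0.7) -- the qualitative signature
# that separates the two models in the judgment data.
print(flmp(0.5, 0.9), additive(0.5, 0.9))
```

This captures the reported pattern that one modality's influence grows as the other becomes ambiguous, which is exactly where the multiplicative rule outperforms the additive one.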

16.
Research suggests that infants progress from discrimination to recognition of emotions in faces during the first half year of life. It is unknown whether the perception of emotions from bodies develops in a similar manner. In the current study, when presented with happy and angry body videos and voices, 5-month-olds looked longer at the matching video when the videos were presented upright but not when they were inverted. In contrast, 3.5-month-olds failed to match even with upright videos. Thus, 5-month-olds but not 3.5-month-olds exhibited evidence of recognition of emotions from bodies by demonstrating intermodal matching. In a subsequent experiment, the younger infants did discriminate between body emotion videos but failed to exhibit an inversion effect, suggesting that their discrimination may be based on low-level stimulus features. These results document a developmental change from discrimination based on non-emotional information at 3.5 months to recognition of body emotions at 5 months. This pattern of development is similar to that of face emotion knowledge and suggests that both the face and body emotion perception systems develop rapidly during the first half year of life.

17.
Correctly perceiving emotions in others is a crucial part of social interactions. We constructed a set of dynamic stimuli to determine the relative contributions of the face and body to the accurate perception of basic emotions. We also manipulated the length of these dynamic stimuli to explore how much information is needed to identify emotions. The findings suggest that even an exposure time as short as 250 milliseconds provided enough information to identify an emotion above chance level. Furthermore, we found that recognition patterns from the face alone and the body alone differed as a function of emotion. These findings highlight the role of the body in emotion perception and suggest an advantage for angry bodies: in contrast to all other emotions, recognition rates for angry bodies were comparable to those from the face, which may be advantageous for perceiving imminent threat from a distance.

18.
The present study examined the reliability of a dichotic listening task using nonverbal stimuli. Twenty undergraduate students (all right-handed native English speakers) had to report whether they had heard a target emotion. The task used English words (bower, dower, power, tower) pronounced in an angry, happy, neutral, or sad emotional tone. Results showed a relatively high level of test-retest reliability for the laterality effect. In addition, a significant gender-by-ear-of-presentation interaction was obtained, reflecting the fact that a strong left ear advantage was found in females but not in males. The findings indicate that the task used here can be considered a reliable means of assessing the lateralization of emotions. Issues concerning the relation between gender and laterality are addressed in the discussion.
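Test-retest reliability of a laterality effect like this one is typically quantified as a correlation between each listener's ear-advantage scores in the two sessions. A minimal sketch assuming Pearson's r over right-minus-left accuracy differences; the score vectors are illustrative placeholders, not data from the study:

```python
import numpy as np
from scipy import stats

# Per-listener ear advantage (right-ear minus left-ear accuracy)
# at test and retest; placeholder values for illustration only.
session1 = np.array([-0.20, -0.15, 0.05, -0.30, -0.10, 0.00])
session2 = np.array([-0.18, -0.10, 0.02, -0.25, -0.12, -0.03])

r, p = stats.pearsonr(session1, session2)
print(f"test-retest reliability: r = {r:.2f}, p = {p:.3f}")
```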

19.
It has been a matter of much debate whether perceivers are able to distinguish spontaneous vocal expressions of emotion from posed vocal expressions (e.g., emotion portrayals). In this experiment, we show that such discrimination can manifest in the autonomic arousal of listeners during implicit processing of vocal emotions. Participants (N = 21, age: 20–55 years) listened to two consecutive blocks of brief voice clips and judged the gender of the speaker in each clip, while we recorded three measures of sympathetic arousal of the autonomic nervous system (skin conductance level, mean arterial blood pressure, pulse rate). Unbeknownst to the listeners, the blocks consisted of two types of emotional speech: spontaneous and posed clips. As predicted, spontaneous clips yielded higher arousal levels than posed clips, suggesting that listeners implicitly distinguished between the two kinds of expression, even in the absence of any requirement to retrieve emotional information from the voice. We discuss the results with regard to theories of emotional contagion and the use of posed stimuli in studies of emotion.
