Similar Documents
1.
The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

2.
In this study we used an affective priming task to address the issue of whether the processing of emotional facial expressions occurs automatically, independent of attention or attentional resources. Participants had to attend to the emotion expression of the prime face, or to a nonemotional feature of the prime face, the glasses. When participants attended to glasses (emotion unattended), they had to report whether the face wore glasses or not (the glasses easy condition) or whether the glasses were rounded or squared (the shape difficult condition). Affective priming, measured in valence decisions on target words, was mainly defined as interference from incongruent rather than facilitation from congruent trials. Significant priming effects were observed only in the emotion and glasses tasks but not in the shape task. When the key–response mapping increased in complexity, taxing working memory load, affective priming effects were reduced equally for the three types of tasks. Thus, attentional load and working memory load contributed additively to the observed reduction in affective priming. These results cast some doubts on the automaticity of processing emotional facial expressions.

3.
This study examined the relationships between trait emotional intelligence (EI) and tasks involving the recognition of facial expressions of emotion. Two facial expression recognition tasks using the inspection time (IT) paradigm assessed speed of emotional information processing. An unspeeded emotion recognition task was also included, and a symbol IT task was used to assess speed of processing of non-emotional information. It was found that scores on all three emotion-related tasks were strongly intercorrelated, as were scores on the three IT tasks. The two emotional IT scores remained significantly correlated when symbol IT performance was partialled out. This finding, together with the associations between the speeded (IT) and unspeeded face tasks, suggests that the association between the emotional IT tasks is not entirely accounted for by general processing speed, and that a general emotion-processing ability also contributes to performance on these tasks. An EI subscale assessing Appraisal of Emotions was significantly correlated with performance on the emotional IT tasks, suggesting that self-reports of emotional perception ability do relate to performance measures.

4.
Previous research with speeded-response interference tasks modeled on the Garner paradigm has demonstrated that task-irrelevant variations in either emotional expression or facial speech do not interfere with identity judgments, but irrelevant variations in identity do interfere with expression and facial speech judgments. Sex, like identity, is a relatively invariant aspect of faces. Drawing on a recent model of face processing according to which invariant and changeable aspects of faces are represented in separate neurological systems, we predicted asymmetric interference between sex and emotion classification. The results of Experiment 1, in which the Garner paradigm was employed, confirmed this prediction: Emotion classifications were influenced by the sex of the faces, but sex classifications remained relatively unaffected by facial expression. A second experiment, in which the difficulty of the tasks was equated, corroborated these findings, indicating that differences in processing speed cannot account for the asymmetric relationship between facial emotion and sex processing. A third experiment revealed the same pattern of asymmetric interference through the use of a variant of the Simon paradigm. To the extent that Garner interference and Simon interference indicate interactions at perceptual and response-selection stages of processing, respectively, a challenge for face processing models is to show how the same asymmetric pattern of interference could occur at these different stages. The implications of these findings for the functional independence of the different components of face processing are discussed.
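Several of the abstracts in this listing rely on the Garner paradigm, in which interference is quantified as the slowing of classification when the irrelevant dimension varies orthogonally rather than being held constant. A minimal illustrative sketch (the reaction times and block labels below are hypothetical, not data from any of the studies above):

```python
from statistics import mean

def garner_interference(baseline_rts, orthogonal_rts):
    """Garner interference: mean RT slowing in a filtering block, where the
    irrelevant dimension varies orthogonally, relative to a baseline block
    in which that dimension is held constant."""
    return mean(orthogonal_rts) - mean(baseline_rts)

# Hypothetical RTs (ms) for classifying facial emotion
baseline_block = [520, 540, 530]    # irrelevant dimension (sex) held constant
filtering_block = [580, 600, 590]   # irrelevant dimension (sex) varies

interference = garner_interference(baseline_block, filtering_block)  # 60 ms
```

Asymmetric interference, as reported above, would then appear as a large value when classifying emotion while sex varies, but a near-zero value when classifying sex while emotion varies.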

5.
This investigation examined whether impairment in configural processing could explain deficits in face emotion recognition in people with Parkinson’s disease (PD). Stimuli from the Radboud Faces Database were used to compare recognition of four negative emotion expressions by older adults with PD (n = 16) and matched controls (n = 17). Participants were tasked with categorizing emotional expressions from upright and inverted whole faces and facial composites; it is difficult to derive configural information from these two types of stimuli, so featural processing should play a larger than usual role in accurate recognition of emotional expressions. We found that the PD group were impaired relative to controls in recognizing anger, disgust and fearful expressions in upright faces. Consistent with a configural processing deficit, participants with PD showed no composite effect when attempting to identify facial expressions of anger, disgust and fear. A face inversion effect, however, was observed in the performance of all participants in both the whole faces and facial composites tasks. These findings can be explained in terms of a configural processing deficit if it is assumed that the disruption caused by facial composites was specific to configural processing, whereas inversion reduced performance by making it difficult to derive both featural and configural information from faces.

6.
Children 3, 6, 9, and 12 years of age were assessed on their ability to recognize and identify facial expressions of emotion. In an emotion recognition (ER) task, children were presented with three facial photographs and were asked to select the photo representing a particular emotion (e.g., happiness, disgust, sadness). Another task, emotion labeling (EL), required that the child name the emotional state expressed in a facial photograph. An age trend was found for both tasks: Accuracy in judging emotional states increased with age. Results showed that scores on the ER task were significantly better than scores on the EL task, suggesting that recognition of an emotional state was less difficult than verbally identifying an emotional state. The order in which the two tasks were given significantly affected results on the EL task: When the ER task preceded the EL task, scores on EL were higher than when EL was given first. Accuracy in judging the individual emotion categories also varied with the order of tasks.

7.
There is evidence that some emotional expressions are characterized by diagnostic cues from individual face features. For example, an upturned mouth is indicative of happiness, whereas a furrowed brow is associated with anger. The current investigation explored whether motivating people to perceive stimuli in a local (i.e., feature-based) rather than global (i.e., holistic) processing orientation was advantageous for recognizing emotional facial expressions. Participants classified emotional faces while primed with local and global processing orientations via a Navon letter task. Contrary to previous findings for identity recognition, the current findings indicate a modest advantage for facial emotion recognition under conditions of local processing orientation. When primed with a local processing orientation, participants performed both significantly faster and more accurately on an emotion recognition task than when they were primed with a global processing orientation. The implications of this finding for theories of emotion recognition and face processing are considered.

8.
Three age groups of participants (6–8 years, 9–11 years, adults) performed two tasks: a face recognition task and a Garner task. In the face recognition task, participants were presented with 20 faces and then had to recognize them among 20 new faces. In the Garner task, participants had to sort, as fast as possible, photographs of two persons expressing two emotions by taking into account only one of the two dimensions (identity or emotion). When the sorting task was on one dimension, the other dimension was varied in either a correlated, a constant, or an orthogonal way in distinct subsessions. The results indicated an increase in face recognition ability with age. They also showed an interference of identity in the emotion-sorting task that was similar across the three age groups. An interference of emotion in the identity-sorting task, however, was significant only for the children and was largest in the youngest group. These observations suggest that the development of face recognition ability rests on the development of the ability to attend selectively to identity, without paying attention to emotional facial expression.

9.
Sensorimotor models suggest that understanding the emotional content of a face recruits a simulation process in which a viewer partially reproduces the facial expression in their own sensorimotor system. An important prediction of these models is that disrupting simulation should make emotion recognition more difficult. Here we used electroencephalography (EEG) and facial electromyography (EMG) to investigate how interfering with sensorimotor signals from the face influences the real-time processing of emotional faces. EEG and EMG were recorded as healthy adults viewed emotional faces and rated their valence. During control blocks, participants held a conjoined pair of chopsticks loosely between their lips. During interference blocks, participants held the chopsticks horizontally between their teeth and lips to generate motor noise on the lower part of the face; this noise was confirmed by EMG at the zygomaticus. Analysis of the EEG indicated that faces expressing happiness or disgust (lower-face expressions) elicited a larger-amplitude N400 when presented during interference blocks than during control blocks, suggesting that the interference led to greater semantic retrieval demands. The selective impact of facial motor interference on the brain response to lower-face expressions supports sensorimotor models of emotion understanding.

10.
Faces provide a complex source of information via invariant (e.g., race, sex and age) and variant (e.g., emotional expressions) cues. At present, it is not clear whether these different cues are processed separately or whether they interact. Using the Garner Paradigm, Experiment 1 confirmed that race, sex, and age cues affected the categorization of faces according to emotional expression, whereas emotional expression had no effect on the categorization of faces by sex, age, or race. Experiment 2 used inverted faces and replicated this pattern of asymmetrical interference for race and age cues, but not for sex cues, for which no interference on emotional expression categorization was observed. Experiment 3 confirmed this finding with a more stringently matched set of facial stimuli. Overall, this study shows that invariant cues interfere with the processing of emotional expressions. It indicates that the processing of invariant cues, but not of emotional expressions, is obligatory and that it precedes that of emotional expressions.

11.
Looking at another person’s facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals’ facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

12.
The right hemisphere has often been viewed as having a dominant role in the processing of emotional information. Other evidence indicates that both hemispheres process emotional information but their involvement is valence specific, with the right hemisphere dealing with negative emotions and the left hemisphere preferentially processing positive emotions. This has been found under both restricted (Reuter-Lorenz & Davidson, 1981) and free viewing conditions (Jansari, Tranel, & Adolphs, 2000). It remains unclear whether the valence-specific laterality effect is also sex specific or is influenced by the handedness of participants. To explore this issue we repeated Jansari et al.'s free-viewing laterality task with 78 participants. We found a valence-specific laterality effect in women but not men, with women discriminating negative emotional expressions more accurately when the face was presented on the left-hand side and discriminating positive emotions more accurately when those faces were presented on the right-hand side. These results indicate that under free viewing conditions women are more lateralised for the processing of facial emotion than are men. Handedness did not affect the lateralised processing of facial emotion. Finally, participants demonstrated a response bias on control trials, where facial emotion did not differ between the faces. Participants selected the left-hand side more frequently when they believed the expression was negative and the right-hand side more frequently when they believed the expression was positive. This response bias can cause a spurious valence-specific laterality effect, which might have contributed to the conflicting findings within the literature.

13.
Recent research has looked at whether the expectancy of an emotion can account for subsequent valence-specific laterality effects for prosodic emotion, though no research has examined this effect for facial emotion. In the present study (n = 58), we investigated this issue using two tasks: an emotional face perception task and a novel word task that involved categorising positive and negative words. In the face perception task, a valence-specific laterality effect was found for surprise (positive) and anger (negative) faces in the control but not the expectancy condition. Interestingly, lateralisation differed with face gender, revealing a left hemisphere advantage for male faces and a right hemisphere advantage for female faces. In the word task, an affective priming effect was found, with higher accuracy when the valence of the picture prime and the word target were congruent. Target words were also responded to faster when presented to the LVF than the RVF in the expectancy but not the control condition. These findings suggest that expecting an emotion influences lateralised processing, but that this differs with the perceptual/experiential dimension of the task. Furthermore, hemispheric processing of emotional expressions appears to differ with the gender of the face.

14.
The present study examined whether information processing bias against emotional facial expressions is present among individuals with social anxiety. College students with high (high social anxiety group; n = 26) and low social anxiety (low social anxiety group; n = 26) performed three different types of working memory tasks: (a) ordering positive and negative facial expressions according to the intensity of emotion; (b) ordering pictures of faces according to age; and (c) ordering geometric shapes according to size. The high social anxiety group performed significantly more poorly than the low social anxiety group on the facial expression task, but not on the other two tasks with the nonemotional stimuli. These results suggest that high social anxiety interferes with processing of emotionally charged facial expressions.

15.
Age effects on social cognition: faces tell a different story
The authors administered social cognition tasks to younger and older adults to investigate age-related differences in social and emotional processing. Although slower, older adults were as accurate as younger adults in identifying the emotional valence (i.e., positive, negative, or neutral) of facial expressions. However, the age difference in reaction time was largest for negative faces. Older adults were significantly less accurate at identifying specific facial expressions of fear and sadness. No age differences specific to social function were found on tasks of self-reference, identifying emotional words, or theory of mind. Performance on the social tasks in older adults was independent of performance on general cognitive tasks (e.g., working memory) but was related to personality traits and emotional awareness. Older adults also showed more intercorrelations among the social tasks than did the younger adults. These findings suggest that age differences in social cognition are limited to the processing of facial emotion. Nevertheless, with age there appears to be increasing reliance on a common resource to perform social tasks, but one that is not shared with other cognitive domains.

16.
Brain and Cognition, 2011, 75(3), 324–331
Recent research has looked at whether the expectancy of an emotion can account for subsequent valence-specific laterality effects for prosodic emotion, though no research has examined this effect for facial emotion. In the present study (n = 58), we investigated this issue using two tasks: an emotional face perception task and a novel word task that involved categorising positive and negative words. In the face perception task, a valence-specific laterality effect was found for surprise (positive) and anger (negative) faces in the control but not the expectancy condition. Interestingly, lateralisation differed with face gender, revealing a left hemisphere advantage for male faces and a right hemisphere advantage for female faces. In the word task, an affective priming effect was found, with higher accuracy when the valence of the picture prime and the word target were congruent. Target words were also responded to faster when presented to the LVF than the RVF in the expectancy but not the control condition. These findings suggest that expecting an emotion influences lateralised processing, but that this differs with the perceptual/experiential dimension of the task. Furthermore, hemispheric processing of emotional expressions appears to differ with the gender of the face.

17.
Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). FTLD patients also show impairments in emotion processing; specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which may thus have been a confounding factor in previous studies. Moreover, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at recognizing the emotion anger. The patients also performed worse than controls on the recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

18.
To examine the effect of crowding priming on the recognition of threatening facial expressions, 28 undergraduates performed angry–neutral and fearful–neutral expression recognition tasks under crowded and non-crowded priming conditions. Signal detection analysis showed that crowding priming reduced discriminability for angry expressions without affecting the response criterion, and affected neither discriminability nor criterion for fearful expressions. Subjectively reported intensity of angry expressions was significantly higher under crowding priming than under the non-crowded condition, whereas the reported intensity of fearful and neutral expressions was unaffected by crowding priming. These results indicate that crowding priming lowers perceptual sensitivity for discriminating angry expressions.
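The signal detection analysis described above separates perceptual sensitivity (d′) from response bias (criterion c). Under the standard equal-variance Gaussian model, both follow directly from the hit and false-alarm rates; a minimal sketch (the rates below are illustrative, not the study's data):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal-detection measures:
    d' = z(H) - z(FA)        sensitivity (discriminability)
    c  = -(z(H) + z(FA))/2   response criterion (0 = unbiased)"""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative: crowding lowers discriminability of "angry" without shifting bias
d_control, c_control = sdt_measures(0.85, 0.15)
d_crowded, c_crowded = sdt_measures(0.75, 0.25)
```

A drop in d′ with an unchanged c, as reported for angry expressions, corresponds to reduced perceptual discriminability under a stable decision criterion.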

19.
Recent studies have shown that cueing eye gaze can affect the processing of visual information, a phenomenon called the gaze-orienting effect (visual-GOE). Emerging evidence has shown that cueing eye gaze also affects the processing of auditory information (auditory-GOE). However, it is unclear whether the auditory-GOE is modulated by emotion. We conducted three behavioural experiments to investigate whether cueing eye gaze influenced orientation judgements of a sound, and whether the effect was modulated by facial expression. The study used four facial expressions (angry, fearful, happy, and neutral), manipulated the display type of the facial expressions, and varied the sequence of gaze and emotional expression. Participants were required to judge the orientation of a sound after the facial expression and gaze cues. In all three experiments, the orientation judgement of the sound was influenced by gaze direction: judgements were faster when the face was oriented towards the target location (congruent trials) than when it was oriented away from the target location (incongruent trials). Emotional modulation of the auditory-GOE was observed only when the gaze shift was followed by the facial expression (Experiment 3); there, the auditory-GOE was significantly greater for angry than for neutral faces. These findings indicate that the auditory-GOE is a widespread social phenomenon and that it is modulated by facial expression. A gaze shift preceding the presentation of the emotion was the key factor enabling emotional modulation in the auditory target gaze-orienting task. Our findings suggest that the integration of facial expressions and eye gaze is context-dependent.

20.
Hu Zhiguo, Liu Hongyan. Journal of Psychological Science, 2015, (5), 1087–1094
Accurately recognizing facial expressions is essential for successful social interaction, and facial expression recognition is influenced by emotional context. This review first describes the facilitating effects of emotional context on facial expression recognition, chiefly the within-modality emotion-congruency effect and the cross-modal emotion-integration effect. It then describes the impeding effects of emotional context, chiefly the emotional-conflict effect and the semantic-interference effect. Next, it covers the influence of emotional context on the recognition of neutral and ambiguous faces, chiefly context-induced emotion effects and subliminal affective priming. Finally, it summarizes and evaluates the existing research and offers suggestions for future studies.
