20 similar documents found. Search time: 15 ms.
1.
《Quarterly journal of experimental psychology (2006)》2013,66(12):2426-2442
Several studies have investigated the role of featural and configural information in the processing of facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information; inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information: while the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
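For reference, A′ is a non-parametric sensitivity index ranging from 0.5 (chance) to 1 (perfect discrimination). As a minimal sketch, assuming the standard Pollack and Norman (1964) formulation (the abstract does not specify which variant was used), it is computed from the hit rate H and false-alarm rate F as:

```latex
% Non-parametric sensitivity A' (Pollack & Norman, 1964), for H >= F;
% when H < F, swap the roles of H and F and subtract the result from 1.
A' = \frac{1}{2} + \frac{(H - F)(1 + H - F)}{4H(1 - F)}
```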
2.
There is evidence that facial expressions are perceived both holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite-task studies have used static images, even though movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness, and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust, and joy). Thus, any advantage in recognising dynamic over static expressions is unlikely to stem from enhanced holistic processing; rather, motion may emphasise or disambiguate diagnostic featural information.
3.
The differential effects of thalamus and basal ganglia on facial emotion recognition
This study examined whether subcortical stroke was associated with impaired facial emotion recognition, whether the impairment was lateralized, and whether localized thalamic or basal ganglia damage produced distinct profiles of facial emotion recognition deficits. Thirty-eight patients with subcortical strokes and 19 matched normal controls volunteered to participate. The participants were individually presented with morphed photographs of facial emotion expressions over multiple trials and were asked to classify each morph according to Ekman's six basic emotion categories. The findings indicated that the clinical participants had impaired facial emotion recognition, though no clear lateralization pattern of impairment was observed. The patients with localized thalamic damage performed significantly worse than the controls in recognizing sadness. Longitudinal studies of patients with subcortical brain damage should be conducted to examine how post-stroke cognitive reorganization affects emotion recognition.
4.
《Brain and cognition》2014,84(3):252-261
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right–left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper–lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e., a different emotion on the upper versus the lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention when facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower facial emotions, more so than upper facial emotions, are best perceived when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are best perceived when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, should enhance one's ability to decode facial expressions.
5.
Jean-Yves Baudouin, Mathieu Gallay, Fabrice Robichon 《Journal of experimental child psychology》2010,107(3):195-206
This study investigated children's perceptual ability to process second-order facial relations. In total, 78 children in three age groups (7, 9, and 11 years) and 28 adults were asked to say whether the eyes were the same distance apart in two side-by-side faces. The two faces were identical except for the space between the eyes, which was either the same or different, with varying degrees of difference. The results showed that the smallest eye spacing children were able to discriminate decreased with age. This ability was sensitive to face orientation (upright or upside-down), and the inversion effect increased with age. It is concluded that, despite early sensitivity to configural/holistic information, the perceptual ability to process second-order relations in faces improves with age and constrains the development of face recognition ability.
6.
The relative dominance of component and configural properties in face processing is a controversial issue. We examined this issue by testing whether the discriminability of components predicts the discrimination of faces with similar versus dissimilar configurations. Discrimination of faces with similar configurations was determined by component discriminability, indicating independent processing of facial components. The presence of configural variation had no effect on discriminating faces with highly discriminable components, suggesting that discrimination was based on the components. The presence of configural variation did, however, facilitate the discrimination of faces with more difficult-to-discriminate components, above and beyond what would be predicted by the configural or componential discriminability alone, indicating interactive processing. No effect of configural variation was observed in discriminating inverted faces. These results suggest that both component and configural properties contribute to the processing of upright faces and that neither property necessarily dominates the other. Upright face discrimination can rely on components, configural properties, or interactive processing of the two, depending on the information available and the discriminability of the properties. Inverted face processing is dominated by componential processing. The finding that interactive processing of component and configural properties surfaced when the properties were of similar, but not very high, discriminability suggests that such interactive processing may be the dominant form of face processing in everyday life.
7.
Patricia M. Rodriguez Mosquera, Agneta H. Fischer, Antony S. R. Manstead, Ruud Zaalberg 《Cognition & emotion》2013,27(8):1471-1498
Insults elicit intense emotion. This study tests the hypothesis that one's social image, which is especially salient in honour cultures, influences the way in which one reacts to an insult. Seventy-seven honour-oriented and 72 non-honour-oriented participants answered questions about a recent insult episode. Participants experienced both anger and shame in reaction to the insult. However, these emotions resulted in different behaviours. Anger led to verbal attack (i.e., criticising, insulting in return) among all participants; this relationship was explained by participants' motivation to punish the wrongdoer. The effect of shame, on the other hand, was moderated by honour. Shame led to verbal disapproval of the wrongdoer's behaviour, but only among the honour-oriented participants; this relationship was explained by these participants' motivation to protect their social image. By contrast, shame led to withdrawal among non-honour-oriented participants.
8.
Takuma Takehara, Fumio Ochiai, Naoto Suzuki 《Quarterly journal of experimental psychology (2006)》2016,69(8):1508-1529
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in a comprehensive cognitive system can be assumed, specific patterns of relationships between facial emotions should emerge. In this study, we demonstrate that the framework of complex networks can effectively capture those patterns. We generated 81 facial emotion images (6 prototypes and 75 morphs) and asked participants to rate the degree of similarity of 3240 facial emotion pairs in a paired-comparison task. A facial emotion network constructed on the basis of these similarities clearly forms a small-world network, featuring an extremely short average network distance and close connectivity. Further, even facial emotions with opposing valences are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive system underlying facial emotions and explain why people can recognize facial emotions efficiently, in terms of information transmission and propagation. For comparison, we constructed three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that the small-world connectivity of the facial emotion network clearly differs from all three, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
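To make the small-world analysis concrete, here is a minimal sketch of how a similarity-thresholded emotion network and its two standard small-world diagnostics (short average path length, clustering relative to a size-matched random graph) could be computed with networkx. The 81-node count follows the abstract; the random similarity matrix and the 0.6 edge threshold are illustrative assumptions, not the study's data or procedure.

```python
# Sketch: build a similarity-thresholded facial-emotion network and
# compute small-world diagnostics. The similarity matrix here is random
# noise standing in for the participants' paired-comparison ratings.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 81                              # 6 prototypes + 75 morphs, as in the study
sim = rng.random((n, n))
sim = (sim + sim.T) / 2             # similarity is symmetric
np.fill_diagonal(sim, 0.0)          # no self-similarity edges

# Connect two images when their rated similarity exceeds a (hypothetical) cutoff.
graph = nx.from_numpy_array((sim > 0.6).astype(int))

if nx.is_connected(graph):
    avg_path = nx.average_shortest_path_length(graph)
    clustering = nx.average_clustering(graph)
    # Size-matched random reference: a small-world network has path lengths
    # comparable to it but much higher clustering.
    reference = nx.gnm_random_graph(n, graph.number_of_edges(), seed=0)
    print(f"average path length: {avg_path:.2f}")
    print(f"clustering: {clustering:.2f} (random reference: "
          f"{nx.average_clustering(reference):.2f})")
```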
9.
Nils S. van den Berg, Edward H. F. de Haan, Rients B. Huitema, Jacoba M. Spikman, the Visual Brain Group 《Journal of Neuropsychology》2021,15(3):516-532
Deficits in facial emotion recognition occur frequently after stroke, with adverse social and behavioural consequences. The aim of this study was to investigate the neural underpinnings of the recognition of emotional expressions, in particular of the distinct basic emotions (anger, disgust, fear, happiness, sadness, and surprise). A group of 110 ischaemic stroke patients with lesions in (sub)cortical areas of the cerebrum was included. Emotion recognition was assessed with the Ekman 60 Faces Test of the FEEST. Patient data were compared to data from 162 matched healthy controls (HCs). For the patients, whole-brain voxel-based lesion–symptom mapping (VLSM) was performed on 3-Tesla MRI images. Results showed that patients performed significantly worse than HCs both on overall recognition of emotions and specifically on disgust, fear, sadness, and surprise. VLSM showed significant lesion–symptom associations for the FEEST total score in the right fronto-temporal region. Additionally, VLSM for the distinct emotions revealed, apart from overlapping brain regions (insula, putamen, and Rolandic operculum), regions related to specific emotions: the middle and superior temporal gyri (anger); the caudate nucleus (disgust); the superior corona radiata white matter tract, superior longitudinal fasciculus, and middle frontal gyrus (happiness); and the inferior frontal gyrus (sadness). Our findings help in understanding how lesions in specific brain regions can selectively affect the recognition of the basic emotions.
10.
Hayley C. Leonard, Annette Karmiloff-Smith, Mark H. Johnson 《Journal of experimental child psychology》2010,106(4):193-207
Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular spatial frequencies blocked out by noise masks. A mid-band bias was found in adults and 9- and 10-year-olds for upright faces but not for inverted faces, suggesting a face-sensitive effect. However, 7- and 8-year-olds did not demonstrate the mid-band bias for upright faces but rather processed upright and inverted faces similarly. This suggests that specialization toward the mid-band for upright face recognition develops gradually during childhood and may relate to an advanced level of face expertise.
11.
Helmut Leder, Vicki Bruce 《The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology》2000,53(2):513-536
The identification of upright faces seems to involve a special sensitivity to "configural" information, the processing of which is less effective when the face is inverted. However, the precise meaning of "configural" remains unclear. Five experiments are presented which show that disruption of the processing of relational, rather than holistic, information largely determines both the occurrence and the size of the face-inversion effect. In Experiment 1, faces could be identified either by unique combinations of local information (e.g., a specific eye colour plus hair colour) or by unique relational information (e.g., nose–mouth distance). The former showed no inversion effect, whereas the latter did. A combination of local and relational information (Experiment 2) again produced an inversion effect, although this effect was smaller than that found when only relational information was used. The results were replicated in Experiment 3 when differences in the brightness of local features were used instead of specific colour combinations. Experiment 4 used different retrieval conditions to distinguish relational from holistic processing and demonstrated again that spatial relations between single features provide crucial information for face recognition. In Experiment 5, the importance of relational information was confirmed using faces that also varied in the shapes of local features.
12.
Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reasoned that this should be an easier task for familiar than for unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliar faces, and in two experiments there was no difference. These findings were not due to general task difficulty: participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria, based on tolerance to within-person variation rather than on highly specific measurement.
13.
Although empathy deficits are thought to be associated with callous-unemotional (CU) traits, findings remain equivocal and little is known about what specific abilities may underlie these purported deficits. Affective perspective-taking (APT) and facial emotion recognition may be implicated, given their independent associations with both empathy and CU traits. The current study examined how CU traits relate to cognitive and affective empathy and whether APT and facial emotion recognition mediate these relations. Participants were 103 adolescents (70 males) aged 16–18 attending a residential programme. CU traits were negatively associated with cognitive and affective empathy to a similar degree. The association between CU traits and affective empathy was partially mediated by APT. Results suggest that assessing mechanisms that may underlie empathic deficits, such as perspective-taking, may be important for youth with CU traits and may inform targets of intervention.
14.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays, using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face moved naturally. Findings showed that dynamic information in the eyes and the mouth was important, and that the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.
15.
《Quarterly journal of experimental psychology (2006)》2013,66(6):1159-1181
A smile is visually highly salient and grabs attention automatically. We investigated how extrafoveally seen smiles influence the viewers' perception of non-happy eyes in a face. A smiling mouth appeared in composite faces with incongruent non-happy (fearful, neutral, etc.) eyes, thus producing blended expressions, or it appeared in intact faces with genuine expressions. Attention to the eye region was spatially cued while foveal vision of the mouth was blocked by gaze-contingent masking. Participants judged whether the eyes were happy or not. Results indicated that the smile biased the evaluation of the eye expression: The same non-happy eyes were more likely to be judged as happy and categorized more slowly as not happy in a face with a smiling mouth than in a face with a non-smiling mouth or with no mouth. This bias occurred when the mouth and the eyes appeared simultaneously and aligned, but also to some extent when they were misaligned and when the mouth appeared after the eyes. We conclude that the highly salient smile projects to other facial regions, thus influencing the perception of the eye expression. Projection serves spatial and temporal integration of face parts and changes.
16.
Gijsbert Bijlstra, Rob W. Holland, Daniël H.J. Wigboldus 《Journal of experimental social psychology》2010,46(4):657-663
The goal of the present paper was to demonstrate the influence of general evaluations and stereotype associations on emotion recognition. Earlier research has shown that evaluative connotations between social category members and emotional expressions predict whether recognition of positive or negative emotional expressions will be facilitated (e.g., Hugenberg, 2005). In the current paper we tested the hypothesis that stereotype associations influence emotion recognition processes, especially when differences in valence between emotional expressions do not come into play. In line with this notion, when participants in the present two studies were asked to classify positive versus negative emotional expressions (i.e., happiness versus anger, or happiness versus sadness), valence congruency effects were found. Importantly, however, in a comparative context without differences in valence, in which participants were asked to classify two distinct negative emotions (i.e., anger versus sadness), we found that recognition facilitation occurred for stereotypically associated discrete emotional expressions. These results indicate that a distinction between general evaluative and cognitive routes can be made in emotion recognition processes.
17.
Kevin D. Cassidy, Kimberly A. Quinn, Glyn W. Humphreys 《Journal of experimental social psychology》2011,47(4):811-817
We investigated the impact of ingroup/outgroup categorization on the encoding of same-race and other-race faces presented in inter-racial and intra-racial contexts (Experiments 1 and 2, respectively). White participants performed a same/different matching task on pairs of upright and inverted faces that were either same-race (White) or other-race (Black), and labeled as being from the same university or a different university. In Experiment 1, the same- and other-race faces were intermixed. For other-race faces, participants demonstrated greater configural processing following same- than other-university labeling. Same-race faces showed strong configural coding irrespective of the university labeling. In Experiment 2, faces were blocked by race. Participants demonstrated greater configural processing of same- than other-university faces, but now for both same- and other-race faces. These results demonstrate that other-race face processing is sensitive to non-racial ingroup/outgroup status regardless of racial context, but that the sensitivity of same-race face processing to the same cues depends on the racial context in which targets are encountered.
18.
Wolf K, Mass R, Ingenbleek T, Kiefer F, Naber D, Wiedemann K 《Scandinavian journal of psychology》2005,46(5):403-409
The purpose of the study was to investigate the facial muscle pattern of disgust in comparison to appetence and joy, using an improved facial EMG method. We analyzed the activity of nine facial muscles in 40 healthy subjects, randomly divided into two groups (oversaturated vs. hungry) of ten women and ten men each. Four different emotions (disgust, appetence, excited joy, and relaxed joy) were induced by showing pictures from the IAPS. Pre-visible facial muscle activity was measured with a new facial EMG, and a Visual Analog Scale (VAS) was also administered. Disgust is represented by a specific facial muscle pattern involving M. corrugator and M. orbicularis oculi, clearly distinguishing it from the facial patterns of appetence and joy. The intensity of disgust is stronger in a state of hunger than under oversaturation, and is altogether stronger in females than in males. Our findings indicate that the emotion system can be explored successfully with a state-of-the-art psychophysiological method such as our EMG device.
19.
《Quarterly journal of experimental psychology (2006)》2013,66(5):952-970
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but equivalently in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, while revealing differences in perception and in acoustic-motor production.
20.
Denise Soria Bauser, Boris Suchan 《British journal of psychology (London, England : 1953)》2018,109(3):564-582
The present study aimed to further explore the role of the head in configural body processing by comparing complete bodies with headless bodies and faceless heads (Experiment 1). A second aim was to further explore the role of the eye region in configural face processing; to this end, we conducted a second experiment with complete faces, eyeless faces, and isolated eyes (Experiment 2). In addition, we used two manipulations of configural processing: stimulus inversion and scrambling. The current data clearly show an inversion effect for intact bodies presented with a head and for faces that include the eye region. Thus, the head and the eye region seem to be central to the configural processes that are disrupted by stimulus inversion. Furthermore, the behavioural and electrophysiological body inversion effect depends on an intact body configuration and is associated with the N170, just as the face inversion effect depends on an intact face configuration. Hence, configural body processing depends not only on the presence of the head but on a complete representation of the human body that includes both body and head. Likewise, configural face processing relies on intact and complete face representations that include both face and eyes.