Similar Documents
20 similar documents found.
1.
Three experiments investigated the perception of facial displays of emotions. Using a morphing technique, Experiment 1 (identification task) and Experiment 2 (ABX discrimination task) evaluated the merits of categorical and dimensional models of the representation of these stimuli. We argue that basic emotions—as they are usually defined verbally—do not correspond to primary perceptual categories emerging from the visual analysis of facial expressions. Instead, the results are compatible with the hypothesis that facial expressions are coded in a continuous anisotropic space structured by valence axes. Experiment 3 (identification task) introduces a new technique for generating chimeras to address the debate between feature-based and holistic models of the processing of facial expressions. Contrary to the pure holistic hypothesis, the results suggest that an independent assessment of discrimination features is possible, and may be sufficient for identifying expressions even when the global facial configuration is ambiguous. However, they also suggest that top-down processing may improve identification accuracy by assessing the coherence of local features.
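The morphing technique behind such continua can be sketched as pixelwise linear interpolation between two pre-aligned face images (a simplification: published morphing methods also warp facial geometry via landmarks; the function name and toy arrays here are illustrative, not from the study):

```python
import numpy as np

def morph_continuum(face_a, face_b, n_steps):
    """Pixelwise linear interpolation between two pre-aligned face
    images, yielding an expression continuum from A to B."""
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - w) * face_a + w * face_b for w in weights]

# Toy 2x2 "images": the middle frame of a 5-step continuum is the
# pixelwise average of the two endpoint expressions.
a = np.zeros((2, 2))
b = np.ones((2, 2))
continuum = morph_continuum(a, b, 5)
print(continuum[2][0, 0])  # 0.5
```

In an identification or ABX task, observers would then be shown frames drawn from such a continuum at varying morph levels.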

2.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between congruent and the baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted for the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.
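Composite stimuli of this kind can be sketched as stacking the top half of one expression image on the bottom half of another, with a sideways shift producing the misaligned control that disrupts holistic fusion of the halves (a minimal sketch with toy arrays standing in for face images; the function name and shift parameter are assumptions, not the authors' code):

```python
import numpy as np

def make_composite(top_expr, bottom_expr, misaligned=False, shift=2):
    """Stack the top half of one expression image on the bottom half
    of another; shifting the bottom half sideways yields the
    misaligned control condition."""
    h = top_expr.shape[0] // 2
    top, bottom = top_expr[:h], bottom_expr[h:]
    if misaligned:
        bottom = np.roll(bottom, shift, axis=1)
    return np.vstack([top, bottom])

angry = np.full((4, 4), 2.0)   # stand-ins for real face images
happy = np.full((4, 4), 1.0)
incongruent = make_composite(angry, happy)   # angry top + happy bottom
print(incongruent[0, 0], incongruent[3, 0])  # 2.0 1.0
```

Congruent composites simply pass the same expression for both halves.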

3.
In previous research we used the composite paradigm (Young, Hellawell, & Hay, 1987) to demonstrate that configural cues are important for interpreting facial expressions. However, different configural cues in face perception have been identified, including holistic processing (i.e., perception of facial features as a single gestalt) and second-order spatial relations (i.e., the spatial relationship between individual features). Previous research has suggested that the composite effect for facial identity operates at the level of holistic encoding. Here we show that the composite effect for facial expression has a similar perceptual basis by using different graphic manipulations (stimulus inversion and photographic negative) in conjunction with the composite paradigm. In relation to Bruce and Young's (1986) functional model of face recognition, a suitable level for the composite effect is a stage of front-end processing referred to as structural encoding, which is common to both facial identity and facial expression perception.

5.
Sato, W., & Yoshikawa, S. (2007). Cognition, 104(1), 1-18.
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing (Experiment 1) and videos (Experiment 2). The subjects' facial actions were unobtrusively videotaped and blindly coded using the Facial Action Coding System [FACS; Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press]. In the dynamic presentations common to both experiments, brow lowering, a prototypical action in angry expressions, occurred more frequently in response to angry expressions than to happy expressions. The pulling of lip corners, a prototypical action in happy expressions, occurred more frequently in response to happy expressions than to angry expressions in dynamic presentations. Additionally, the mean latency of these actions was less than 900 ms after the onset of dynamic changes in facial expression. Naive raters recognized the subjects' facial reactions as emotional expressions, with the valence corresponding to the dynamic facial expressions that the subjects were viewing. These results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

6.
Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.

7.
As a high-level visual stimulus, the face plays an irreplaceable role in interpersonal interaction. Facial attractiveness, in particular, shapes important everyday social decisions such as mate choice, friendship, hiring, and social exchange. Researchers have long explored how people perceive the attractiveness of static faces from the perspectives of facial features, social information, and observer factors, mostly offering evolutionary explanations. However, how people represent facial attractiveness, and the mechanism by which facial motion enhances it, remain unknown. Through two studies, this project addresses these questions from two angles: the holistic representation of facial attractiveness, and how facial motion enhances attractiveness by affecting holistic processing, attention to holistic versus featural information, and social information. Study 1 explored the cognitive representation of facial attractiveness from the perspective of holistic processing. Study 1.1 used rating tasks and an adaptation paradigm to examine the effects of high spatial frequencies (carrying more local featural information) and low spatial frequencies (carrying more holistic information) on facial attractiveness, probing its holistic representation via spatial frequency. Study 1.2 manipulated facial symmetry and facial normality to test whether normality mediates the relationship between symmetry and attractiveness, probing a normality-based configural representation. Study 1.3 introduced the traditional Chinese facial-aesthetics theory of "three sections and five eye-widths" (san ting wu yan) and used rating tasks and an adaptation paradigm to test whether this configuration matches Chinese observers' representation of highly attractive Chinese faces, again probing holistic representation. Study 1.4 used rating tasks and an adaptation paradigm to examine whether partial occlusion of the face increases overall attractiveness, and whether any such boost arises because observers mentally complete the full face from the visible features. Study 2 examined the dynamic enhancement of facial attractiveness from the perspectives of holistic processing, attention, and vitality. Study 2.1 used the composite-effect paradigm to measure holistic processing of dynamic facial attractiveness and to test whether the attractiveness difference between dynamic and static faces stems from different degrees of holistic processing. Study 2.2 used a distraction paradigm combined with eye tracking to test whether gaze patterns differ between dynamic and static faces, and whether any such difference explains the enhanced attractiveness of dynamic faces. Study 2.3 combined questionnaires, experiments, and structural equation modeling to examine the influence of vitality, a social factor, on the attractiveness of dynamic and static faces. By clarifying the cognitive representation of facial attractiveness and its dynamic enhancement mechanism, this project deepens our understanding of how people process facial attractiveness and appreciate beauty, a high-level cognitive ability. The results also have potential applications in everyday interpersonal interaction and in optimizing facial-attractiveness algorithms.

8.
A rapid response to a threatening face in a crowd is important to successfully interact in social environments. Visual search tasks have been employed to determine whether there is a processing advantage for detecting an angry face in a crowd, compared to a happy face. The empirical findings supporting the "anger superiority effect" (ASE), however, have been criticized on the basis of possible low-level visual confounds and because of the limited ecological validity of the stimuli. Moreover, a "happiness superiority effect" is usually found with more realistic stimuli. In the present study, we tested the ASE by using dynamic (and static) images of realistic human faces, with validated emotional expressions having similar intensities, after controlling for bottom-up visual saliency and the amount of image motion. In five experiments, we found strong evidence for an ASE when using dynamic displays of facial expressions, but not when the emotions were expressed by static face images.
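The face-in-the-crowd design behind such studies can be sketched as building a search display of identical-expression distractors with, on target-present trials, one discrepant face at a random position (an illustrative sketch; the function name and string stand-ins for face images are assumptions, not the authors' materials):

```python
import random

def make_search_display(target, distractor, set_size, target_present=True):
    """Crowd of identical-expression distractor faces; on
    target-present trials, one distractor is replaced by the
    discrepant target expression at a random position."""
    display = [distractor] * set_size
    if target_present:
        display[random.randrange(set_size)] = target
    return display

random.seed(0)
display = make_search_display("angry", "happy", 9)
print(display.count("angry"), display.count("happy"))  # 1 8
```

Comparing detection speed for an angry target among happy crowds versus a happy target among angry crowds then indexes any anger (or happiness) superiority effect.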

9.
Dynamic properties influence the perception of facial expressions
Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression in different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, was better from faster sequences, while anger was most accurately detected from the sequences of medium pace. In experiment 2 we used an intensity-rating task and static images as well as dynamic ones to examine whether effects were due to total time of the displays or to the speed of sequence. Accuracies of expression judgments were derived from the rated intensities and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.

10.
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

11.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were mainly based on the facial expressions, but were affected by manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results conform with the proposal that perception of facial expressions of emotions can be affected by the expressive qualities of hand movements.

12.
Three studies investigated the importance of movement for the recognition of subtle and intense expressions of emotion. In the first experiment, 36 facial emotion displays were duplicated in three conditions either upright or inverted in orientation. A dynamic condition addressed the perception of motion by using four still frames run together to encapsulate a moving sequence to show the expression emerging from neutral to the subtle emotion. The multi-static condition contained the same four stills presented in succession, but with a visual noise mask (200 ms) between each frame to disrupt the apparent motion, whilst in the single-static condition, only the last still image (subtle expression) was presented. Results showed a significant advantage for the dynamic condition, over the single- and multi-static conditions, suggesting that motion signals provide a more accurate and robust mental representation of the expression. A second experiment demonstrated that the advantage of movement was reduced with expressions of a higher intensity, and the results of the third experiment showed that the advantage for the dynamic condition for recognizing subtle emotions was due to the motion signal rather than additional static information contained in the sequence. It is concluded that motion signals associated with the emergence of facial expressions can be a useful cue in the recognition process, especially when the expressions are subtle.

13.
Abbas, Z. A., & Duchaine, B. (2008). Perception, 37(8), 1187-1196.
Previous work has demonstrated that facial identity recognition, expression recognition, gender categorisation, and race categorisation rely on a holistic representation. Here we examine whether a holistic representation is also used for judgments of facial attractiveness. Like past studies, we used the composite paradigm to assess holistic processing (Young et al 1987, Perception 16 747-759). Experiment 1 showed that top halves of upright faces are judged to be more attractive when aligned with an attractive bottom half than when aligned with an unattractive bottom half. To assess whether this effect resulted from holistic processing or more general effects, we examined the impact of the attractive and unattractive bottom halves when upright halves were misaligned and when aligned and misaligned halves were presented upside-down. The bottom halves had no effect in either condition. These results demonstrate that the perceptual processes underlying upright facial-attractiveness judgments represent the face holistically. Our findings with attractiveness judgments and previous demonstrations involving other aspects of face processing suggest that a common holistic representation is used for most types of face processing.

14.
Three studies examined the nature of the contributions of each hemisphere to the processing of facial expressions and facial identity. A pair of faces, the members of which differed in either expression or identity, were presented to the right or left field. Subjects were required to compare the members of the pair to each other (experiments 1 and 2) or to a previously presented sample (experiment 3). The results revealed that both face and expression perception show an LVF superiority although the two tasks could be differentiated in terms of overall processing time and the interaction of laterality differences with sex. No clear-cut differences in laterality emerged for processing of positive and negative expressions.

15.
We examined dysfunctional memory processing of facial expressions in relation to alexithymia. Individuals with high and low alexithymia, as measured by the Toronto Alexithymia Scale (TAS-20), participated in a visual search task (Experiment 1A) and a change-detection task (Experiments 1B and 2), to assess differences in their visual short-term memory (VSTM). In the visual search task, the participants were asked to judge whether all facial expressions (angry and happy faces) in the search display were the same or different. In the change-detection task, they had to decide whether all facial expressions changed between two successive displays. We found individual differences only in the change-detection task. Individuals with high alexithymia showed lower sensitivity for the happy faces compared to the angry faces, while individuals with low alexithymia showed sufficient recognition for both facial expressions. Experiment 2 examined whether individual differences were observed during the early storage or later retrieval stage of the VSTM process using a single-probe paradigm. We found no effect of the single probe, indicating that individual differences occurred at the storage stage. The present results provide new evidence that individuals with high alexithymia show specific impairment in VSTM processes (especially the storage stage) related to happy but not to angry faces.
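Sensitivity in a change-detection task of this kind is conventionally quantified with the signal-detection measure d', the difference between the z-transformed hit and false-alarm rates (a standard formula, not code from the study; the example rates are illustrative):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Change-detection sensitivity: d' = z(hits) - z(false alarms),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Symmetric performance, e.g. 84% hits and 16% false alarms:
print(round(d_prime(0.84, 0.16), 2))  # 1.99
```

Lower d' for happy-face changes than angry-face changes would correspond to the reduced sensitivity reported for high-alexithymia participants.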

16.
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual—and not just conceptual—processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

17.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last images perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last images of dynamic stimuli that an observer perceived were of a facial configuration showing stronger emotional intensity than the image actually presented. The more the velocity increased, the more the perceptual image of facial expression intensified. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.

18.
The current study conceptualizes facial attractiveness as a dual-process judgment, combining sexual and aesthetic value. We hypothesized that holistic face processing is more integral to perceiving aesthetic preference and feature-based processing is more integral to sexual preference. In order to manipulate holistic versus feature-based processing, we used a variation of the composite face paradigm. Previous work indicates that slightly shifting the top from the bottom half of a face disrupts holistic processing and enhances feature-based processing. In the present study, while nonsexual judgments best explained facial attraction in whole-face images, a reversal occurred for split-face images such that sexual judgments best explained facial attraction, but only for mate-relevant faces (i.e., other-sex). These findings indicate that disrupting holistic processing can decouple sexual from nonsexual judgments of facial attraction, thereby establishing the presence of a dual-process.

19.
Background: Neuroanatomical evidence suggests that the human brain has dedicated pathways to rapidly process threatening stimuli. This processing bias for threat was examined using the repetition blindness (RB) paradigm. RB (i.e., failure to report the second instance of an identical stimulus rapidly following the first) has been established for words, objects and faces but not, to date, facial expressions. Methods: 78 (Study 1) and 62 (Study 2) participants identified repeated and different, threatening and non-threatening emotional facial expressions in rapid serial visual presentation (RSVP) streams. Results: In Study 1, repeated facial expressions produced more RB than different expressions. RB was attenuated for threatening expressions. In Study 2, attenuation of RB for threatening expressions was replicated. Additionally, semantically related but non-identical threatening expressions reduced RB relative to non-threatening stimuli. Conclusions: These findings suggest that the threat bias is apparent in the temporal processing of facial expressions, and expands the RB paradigm by demonstrating that identical facial expressions are susceptible to the effect.
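An RSVP trial for repetition blindness can be sketched as two critical expression images (C1, C2) embedded in a stream of filler faces, separated by a fixed lag, with each item onsetting one stimulus-onset asynchrony (SOA) after the previous one (an illustrative sketch; the function name, the 100 ms SOA default, and the string stand-ins for face images are assumptions, not the studies' parameters):

```python
def build_rsvp_stream(critical, fillers, soa_ms=100, lag_items=1):
    """Embed two critical items C1 and C2 in a filler stream,
    separated by lag_items fillers; return (item, onset_ms) pairs
    with one SOA between successive onsets."""
    c1, c2 = critical
    items = (fillers[:2] + [c1] + fillers[2:2 + lag_items] + [c2]
             + fillers[2 + lag_items:])
    return [(item, i * soa_ms) for i, item in enumerate(items)]

# Repeated-expression trial: the same angry face appears twice.
stream = build_rsvp_stream(("angry", "angry"), ["f1", "f2", "f3", "f4"])
print(stream[2], stream[4])  # ('angry', 200) ('angry', 400)
```

RB is then measured as the drop in report accuracy for C2 on repeated trials relative to trials where C1 and C2 differ.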

20.
In this study, we presented computer-morphing animations of the facial expressions of six emotions to 43 subjects and asked them to evaluate the naturalness of the rate of change of each expression. The results showed that the naturalness of the expressions depended on the velocity of change, and the patterns for the four velocities differed with the emotions. Principal component analysis of the data extracted the structures that underlie the evaluation of dynamic facial expressions, which differed from previously reported structures for static expressions in some aspects. These results suggest that the representations of facial expressions include not only static but also dynamic properties.
