Similar Literature
20 similar records found
1.
Three studies investigated the importance of movement for the recognition of subtle and intense expressions of emotion. In the first experiment, 36 facial emotion displays were presented in each of three conditions, in either upright or inverted orientation. A dynamic condition addressed the perception of motion by using four still frames run together to encapsulate a moving sequence, showing the expression emerging from neutral to the subtle emotion. The multi-static condition contained the same four stills presented in succession, but with a visual noise mask (200 ms) between each frame to disrupt the apparent motion, whilst in the single-static condition, only the last still image (subtle expression) was presented. Results showed a significant advantage for the dynamic condition over the single- and multi-static conditions, suggesting that motion signals provide a more accurate and robust mental representation of the expression. A second experiment demonstrated that the advantage of movement was reduced for expressions of higher intensity, and the results of the third experiment showed that the advantage of the dynamic condition for recognizing subtle emotions was due to the motion signal rather than to additional static information contained in the sequence. It is concluded that motion signals associated with the emergence of facial expressions can be a useful cue in the recognition process, especially when the expressions are subtle.

2.
Two studies investigated the importance of dynamic temporal characteristic information in facilitating the recognition of subtle expressions of emotion. In Experiment 1 there were three conditions: dynamic moving sequences that showed the expression emerging from neutral to a subtle emotion, a Dynamic-9 presentation containing nine static stills from the dynamic moving sequences (run together to encapsulate a moving sequence), and a First–Last condition containing only the first (neutral) and last (subtle emotion) stills. The results showed recognition was significantly better for the dynamic moving sequences than for both the Dynamic-9 and First–Last conditions. Experiments 2a and 2b then changed the dynamics of the moving sequences by speeding up, slowing down or disrupting the rhythm of the motion sequences. These manipulations significantly reduced recognition, and it was concluded that, in addition to the perception of change, recognition is facilitated by the characteristic muscular movements associated with the portrayal of each emotion.

3.
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions, which differ in both the number and the intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): when perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process, one through which we can swiftly and involuntarily detect other people's pain.

4.
Three experiments investigated the perception of facial displays of emotions. Using a morphing technique, Experiment 1 (identification task) and Experiment 2 (ABX discrimination task) evaluated the merits of categorical and dimensional models of the representation of these stimuli. We argue that basic emotions—as they are usually defined verbally—do not correspond to primary perceptual categories emerging from the visual analysis of facial expressions. Instead, the results are compatible with the hypothesis that facial expressions are coded in a continuous anisotropic space structured by valence axes. Experiment 3 (identification task) introduces a new technique for generating chimeras to address the debate between feature-based and holistic models of the processing of facial expressions. Contrary to the pure holistic hypothesis, the results suggest that an independent assessment of discrimination features is possible, and may be sufficient for identifying expressions even when the global facial configuration is ambiguous. However, they also suggest that top-down processing may improve identification accuracy by assessing the coherence of local features.

5.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important, and that the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.

6.
Two experiments were conducted to explore whether representational momentum (RM) emerges in the perception of dynamic facial expression and whether the velocity of change affects the size of the effect. Participants observed short morphing animations of facial expressions from neutral to one of the six basic emotions. Immediately afterward, they were asked to select the last images perceived. The results of the experiments revealed that the RM effect emerged for dynamic facial expressions of emotion: The last images of dynamic stimuli that an observer perceived were of a facial configuration showing stronger emotional intensity than the image actually presented. The more the velocity increased, the more the perceptual image of facial expression intensified. This perceptual enhancement suggests that dynamic information facilitates shape processing in facial expression, which leads to the efficient detection of other people's emotional changes from their faces.
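The RM effect described in this abstract is conventionally quantified as the displacement between the image a participant selects and the final image actually shown. The Python sketch below illustrates that scoring step under an assumed data layout (responses recorded as frame indices into the morph sequence); the numbers are invented, not the study's data:

```python
# Illustrative scoring of a representational-momentum (RM) task. Responses
# are assumed to be recorded as indices into the morph sequence; a positive
# mean displacement means the remembered last image showed stronger emotional
# intensity than the image actually presented (a forward memory bias).
from statistics import mean

actual_last_frame = 50                  # index of the final image shown
selected_frames = [53, 55, 52, 51, 56]  # hypothetical "last image" choices

displacements = [sel - actual_last_frame for sel in selected_frames]
print(f"Mean RM displacement: {mean(displacements):+.1f} frames")  # +3.4
```

A larger positive displacement at higher morph velocities would correspond to the velocity effect the abstract reports.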

7.
Sato, W., & Yoshikawa, S. (2007). Cognition, 104(1), 1-18.
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing (Experiment 1) and videos (Experiment 2). The subjects' facial actions were unobtrusively videotaped and blindly coded using the Facial Action Coding System [FACS; Ekman, P., & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA: Consulting Psychologists Press]. In the dynamic presentations common to both experiments, brow lowering, a prototypical action in angry expressions, occurred more frequently in response to angry expressions than to happy expressions. The pulling of lip corners, a prototypical action in happy expressions, occurred more frequently in response to happy expressions than to angry expressions in dynamic presentations. Additionally, the mean latency of these actions was less than 900 ms after the onset of dynamic changes in facial expression. Naive raters recognized the subjects' facial reactions as emotional expressions, with the valence corresponding to the dynamic facial expressions that the subjects were viewing. These results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

8.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers' response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
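The 11-step continuum construction lends itself to a short sketch. The fragment below is a minimal illustration assuming a pixel-wise cross-dissolve between two end-point images; published facial morphs also warp feature geometry, which this sketch omits, and the array shapes are placeholders:

```python
# Minimal sketch of an 11-step morph continuum: images change linearly
# from one end-point expression to the other in equal steps.
import numpy as np

def make_continuum(endpoint_a: np.ndarray, endpoint_b: np.ndarray,
                   n_steps: int = 11) -> list:
    """Return n_steps images blending linearly from endpoint_a to endpoint_b."""
    weights = np.linspace(0.0, 1.0, n_steps)  # 0.0, 0.1, ..., 1.0
    return [(1.0 - w) * endpoint_a + w * endpoint_b for w in weights]

# Usage with dummy grayscale arrays standing in for the photo exemplars.
rng = np.random.default_rng(0)
face_a = rng.random((128, 128))  # e.g., affective expression end point
face_b = rng.random((128, 128))  # e.g., linguistic expression end point
continuum = make_continuum(face_a, face_b)
assert len(continuum) == 11 and np.allclose(continuum[0], face_a)
```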

9.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were mainly based on the facial expressions, but were affected by manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results conform with the proposal that perception of facial expressions of emotions can be affected by the expressive qualities of hand movements.

10.
The interdependent motives of cooperation and competition are integral to adaptive social functioning. In three experiments, we provide novel evidence that both cooperation and competition goals enhance perceptual acuity for both angry and happy faces. Experiment 1 found that both cooperative and competitive motives improve perceivers' ability to discriminate between genuine and deceptive smiles. Experiment 2 revealed that both cooperative and competitive motives improve perceivers' perceptual sensitivity to subtle differences among happy and angry facial expressions. Finally, Experiment 3 found that the motivated increase in perceptual acuity for happy and angry expressions allows perceivers to overcome the effects of visual noise, relative to unmotivated control participants. Collectively, these results provide novel evidence that the interdependent motives of cooperation and competition can attune visual perception, accentuating the subjectively experienced signal strength of anger and happiness.

11.
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions, which included two pairs of confusable expressions (fear and surprise; anger and disgust). Participants were slightly better at identifying facial expressions posed by own-race members (mainly for anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing that holistic perception of the aligned composite images interfered with recognition of the expression parts. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

12.
There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing; rather, motion may emphasise or disambiguate diagnostic featural information.
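The composite-effect magnitude compared here is conventionally scored as the accuracy cost for aligned relative to misaligned composites. A small hedged sketch, with accuracy values invented purely for illustration:

```python
# Hypothetical scoring of the composite-effect magnitude: the drop in
# identification accuracy for aligned versus misaligned composite faces.
acc_misaligned = {"static": 0.82, "dynamic": 0.84}  # invented values
acc_aligned = {"static": 0.66, "dynamic": 0.78}

for condition in acc_misaligned:
    effect = acc_misaligned[condition] - acc_aligned[condition]
    print(f"{condition}: composite effect = {effect:.2f}")
# A smaller dynamic effect would mirror the reduced interference reported
# for expressions identified from the bottom half of the face.
```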

13.
Ecological Psychology, 2013, 25(4), 259-278.
We report on two experiments that investigated the role of facial motion in the recognition of degraded famous face images. The results of these experiments suggest that seeing a face move is advantageous for the correct recognition of identity. This effect is not solely due to the extra static-based information contained in a moving sequence but is also due to additional dynamic information available from a moving face. Furthermore, famous faces were recognized more accurately when the original dynamic characteristics of the motion were maintained (Experiment 1) than when either the tempo or the direction of motion was altered (Experiment 2). It is suggested that there may be a general benefit to viewing naturally moving faces rather than a benefit specific to any particular face identity. Alternatively, individual faces may have associated characteristic motion signatures.

14.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each conveying one of five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, as well as differences in perception and acoustic-motor production.

15.
We investigated whether categorical perception and dimensional perception can co-occur while decoding emotional facial expressions. In Experiment 1, facial continua with endpoints consisting of four basic emotions (i.e., happiness-fear and anger-disgust) were created by a morphing technique. Participants rated each facial stimulus using a categorical strategy and a dimensional strategy. The results show that the happiness-fear continuum was divided into two clusters based on valence, even when using the dimensional strategy. Moreover, the faces were arrayed in order of the physical changes within each cluster. In Experiment 2, we found a category boundary within other continua (i.e., surprise-sadness and excitement-disgust) with regard to the arousal and valence dimensions. These findings indicate that categorical perception and dimensional perception co-occurred when emotional facial expressions were rated using a dimensional strategy, suggesting a hybrid theory of categorical and dimensional accounts.

16.
Abrupt discontinuities in recognizing categories of emotion are found for the labelling of consciously perceived facial expressions. This has been taken to imply that, at a conscious level, we perceive facial expressions categorically. We investigated whether the abrupt discontinuities found in categorization for conscious recognition would be replaced by a graded transition for subthreshold stimuli. Fifteen volunteers participated in two experiments, in which they viewed faces morphed from 100% fear to 100% disgust along seven increments. In Experiment A, target faces were presented for 30 ms; in Experiment B, for 170 ms. Participants made two-alternative forced-choice decisions between fear and disgust. Results for the 30 ms presentation time indicated a significant linear trend between degree of morphing and classification of the images. Results for the 170 ms presentation time followed the higher-order function found in studies of categorical perception. These results provide preliminary evidence for separate processes underlying conscious and nonconscious perception of facial expressions of emotion.
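One conventional way to contrast the graded (linear) profile at 30 ms with the categorical (step-like) profile at 170 ms is to fit a psychometric function to the identification proportions along the continuum: a steep logistic slope indicates a category boundary, a shallow one a graded transition. The sketch below uses invented proportions, not the study's data:

```python
# Fit a logistic psychometric function to hypothetical two-alternative
# forced-choice proportions along a seven-step fear-to-disgust continuum.
# A large slope k suggests categorical perception; a small k, a graded one.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Probability of responding 'disgust' at morph step x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)  # 1 = 100% fear ... 7 = 100% disgust
p_170ms = np.array([0.05, 0.08, 0.15, 0.50, 0.85, 0.93, 0.97])  # step-like
p_30ms = np.array([0.10, 0.23, 0.37, 0.50, 0.63, 0.77, 0.90])   # graded

for label, p in (("170 ms", p_170ms), ("30 ms", p_30ms)):
    (x0, k), _ = curve_fit(logistic, steps, p, p0=[4.0, 1.0])
    print(f"{label}: boundary at step {x0:.2f}, slope k = {k:.2f}")
```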

17.
We investigated whether categorical perception and dimensional perception can co-occur while decoding emotional facial expressions. In Experiment 1, facial continua with endpoints consisting of four basic emotions (i.e., happiness–fear and anger–disgust) were created by a morphing technique. Participants rated each facial stimulus using a categorical strategy and a dimensional strategy. The results show that the happiness–fear continuum was divided into two clusters based on valence, even when using the dimensional strategy. Moreover, the faces were arrayed in order of the physical changes within each cluster. In Experiment 2, we found a category boundary within other continua (i.e., surprise–sadness and excitement–disgust) with regard to the arousal and valence dimensions. These findings indicate that categorical perception and dimensional perception co-occurred when emotional facial expressions were rated using a dimensional strategy, suggesting a hybrid theory of categorical and dimensional accounts.

18.
Facial expression and gaze perception are thought to share brain mechanisms, but behavioural interactions, especially from gaze-cueing paradigms, are inconsistent. We conducted a series of gaze-cueing studies using dynamic facial cues to examine orienting across different emotional expression and task conditions, including face inversion. Across experiments, at a short stimulus-onset asynchrony (SOA) we observed both an expression effect (i.e., faster responses when the face was emotional versus neutral) and a cue validity effect (i.e., faster responses when the target was gazed-at), but no interaction between validity and emotion. Results from face inversion suggest that the emotion effect may have been due to both facial expression and stimulus motion. At longer SOAs, validity and emotion interacted such that cueing by emotional faces, fearful faces in particular, was enhanced relative to neutral faces. These results converge with a growing body of evidence that suggests that gaze and expression are initially processed independently and interact at later stages to direct attentional orienting.

19.
Doi, H., Kato, A., Hashimoto, A., & Masataka, N. (2008). Perception, 37(9), 1399-1411.
Data on the development of the perception of facial biological motion during preschool years are disproportionately scarce. We investigated the ability of preschoolers to recognise happy, angry, and surprised expressions, and eye-closing facial movements, on the basis of facial biological motion. Children aged 4 years (n = 18) and 5-6 years (n = 19), and adults (n = 17) participated in a matching task, in which they were required to match the point-light displays of facial expressions to prototypic schematic images of facial expressions and facial movement. The results revealed that the ability to recognise facial expressions from biological motion emerges as early as the age of 4 years. This ability was evident for happy expressions at the age of 4 years; 5-6-year-olds reliably recognised surprised as well as happy expressions. The theoretical significance of these findings is discussed.

20.
Recent studies have shown that the perception of facial expressions of emotion fits the criteria of categorical perception (CP). The present paper tests whether a pattern of categories emerges when facial expressions are examined within the framework of multidimensional scaling. Blends of five "pure" expressions (Angry, Sad, Surprised, Happy, Neutral) were created using computerised "morphing", providing the stimuli for four experiments. Instead of attempting to identify these stimuli, subjects described the proximities between them, using two quite different forms of data: similarity comparisons and sorting partitions. Multidimensional scaling techniques were applied to integrate the resulting ordinal-level data into models which represent the interstimulus similarities at ratio level. All four experiments yielded strong evidence that the expressions were perceived in distinct categories. Adjacent pairs in the models were not spaced at equal intervals, but were clustered together as if drawn towards a "perceptual magnet" within each category. We argue that spatial representations are compatible with CP effects, and indeed are a useful tool for investigating them.
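The multidimensional scaling step described above can be sketched in a few lines. The dissimilarity matrix below is invented for illustration; in the study it would be derived from the similarity comparisons or sorting partitions:

```python
# Minimal MDS sketch: embed five expression stimuli in two dimensions
# from a precomputed interstimulus dissimilarity matrix, then inspect
# the coordinates for category-like clustering.
import numpy as np
from sklearn.manifold import MDS

labels = ["Angry", "Sad", "Surprised", "Happy", "Neutral"]
dissimilarity = np.array([  # symmetric, zero diagonal; invented values
    [0.0, 0.4, 0.7, 0.9, 0.6],
    [0.4, 0.0, 0.8, 0.9, 0.5],
    [0.7, 0.8, 0.0, 0.5, 0.6],
    [0.9, 0.9, 0.5, 0.0, 0.6],
    [0.6, 0.5, 0.6, 0.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for name, (x, y) in zip(labels, coords):
    print(f"{name:>9}: ({x:+.2f}, {y:+.2f})")
```

Points that cluster unevenly, with compressed within-category distances, would be the spatial signature of the "perceptual magnet" effect described above.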
