Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Two studies investigated the importance of dynamic temporal information in facilitating the recognition of subtle expressions of emotion. In Experiment 1 there were three conditions: dynamic moving sequences that showed the expression emerging from neutral to a subtle emotion; a Dynamic-9 condition containing nine static stills taken from the moving sequences (run together to approximate a moving sequence); and a First–Last condition containing only the first (neutral) and last (subtle emotion) stills. The results showed that recognition was significantly better for the dynamic moving sequences than for both the Dynamic-9 and First–Last conditions. Experiments 2a and 2b then altered the dynamics of the moving sequences by speeding up, slowing down, or disrupting the rhythm of the motion. These manipulations significantly reduced recognition, and it was concluded that, in addition to the perception of change, recognition is facilitated by the characteristic muscular movements associated with the portrayal of each emotion.

2.
Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.

3.
We examined how the recognition of facial emotion was influenced by manipulation of both the spatial and the temporal properties of 3-D point-light displays of facial motion. We started by measuring the 3-D positions of multiple locations on the face during posed expressions of anger, happiness, sadness, and surprise, and then manipulated the spatial and temporal properties of these measurements to obtain new versions of the movements. In two experiments, we examined recognition of the original and modified facial expressions: in experiment 1, we manipulated the spatial properties of the facial movement, and in experiment 2, the temporal properties. The results of experiment 1 showed that exaggeration of facial expressions relative to a fixed neutral expression enhanced ratings of the intensity of that emotion. The results of experiment 2 showed that changing the duration of an expression had a small effect on ratings of emotional intensity, with a trend for expressions with shorter durations to receive lower intensity ratings. The results are discussed within the context of theories of encoding as related to caricature and emotion.

4.
There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing, rather motion may emphasise or disambiguate diagnostic featural information.

5.
There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which have been attributed to delays in language acquisition and reduced opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two factors: the role of motion and the role of intensity. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of 6 emotions. Eighteen of the deaf children and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same 6 emotions at varying intensities. Study 1 showed that deaf children’s emotion recognition was better in the dynamic than in the static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing levels of intensity. With the exception of disgust, no differences for individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

6.
The role of movement in the recognition of famous faces   (Total citations: 6; self-citations: 0; cited by others: 6)
The effects of movement on the recognition of famous faces shown in difficult conditions were investigated. Images were presented as negatives, upside down (inverted), and thresholded. Results indicate that, under all these conditions, moving faces were recognized significantly better than static ones. One possible explanation of this effect could be that a moving sequence contains more static information about the different views and expressions of the face than does a single static image. However, even when the amount of static information was equated (Experiments 3 and 4), there was still an advantage for moving sequences that contained their original dynamic properties. The results suggest that the dynamics of the motion provide additional information, helping to access an established familiar face representation. Both the theoretical and the practical implications of these findings are discussed.

7.
Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that static full-light and even point-light images can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed across individual emotions, corroborating earlier results about the importance of differences in movement characteristics for distinguishing different emotional expressions. The patterns of misclassification were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, though notably not for sadness, and (b) produced higher emotional-intensity ratings regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than on static form information.

8.
Some researchers have interpreted findings of in-group advantage in emotion judgements as ethnic bias by perceivers. This study is the first linking in-group advantage to subtle differences in emotional expressions, using composites created with left and right facial hemispheres. Participants from the USA, India, and Japan judged facial expressions from all three cultures. As predicted, in-group advantage was greater for left than right hemifacial composites. Left composites were not universally more recognisable, but relatively more recognisable to in-group members only. There was greater pancultural agreement about the recognition levels of right hemifacial composites. This suggests the left facial hemisphere uses an expressive style less universal and more culturally specific than the right, and that bias alone does not cause the in-group advantage.

9.
Dynamic properties influence the perception of facial expressions   (Total citations: 8; self-citations: 0; cited by others: 8)
Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences that changed the face from neutral to a peak expression through different numbers of intermediate stages, creating fast (6-frame), medium (26-frame), and slow (101-frame) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was identified more accurately from the slow sequences. Happiness, and to some extent surprise, was identified better from the faster sequences, while anger was detected most accurately from the medium-paced sequences. In experiment 2 we used an intensity-rating task and static as well as dynamic images to examine whether the effects were due to the total display time or to the speed of the sequence. Accuracies of expression judgments were derived from the rated intensities, and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, that was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties.
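For illustration, the simplest way to approximate such a morph sequence is to cross-fade linearly between a neutral image and a peak-expression image. The Python sketch below rests on that assumption only; published morphing studies typically also warp facial landmarks, and the random arrays here are placeholders, not the study's stimuli.

```python
import numpy as np

def morph_sequence(neutral, peak, n_frames):
    """Cross-fade from a neutral face to a peak expression.

    neutral, peak: float arrays of identical shape, values in [0, 1].
    Returns n_frames images; the first is neutral, the last is the peak.
    """
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - a) * neutral + a * peak for a in alphas]

# Sequence lengths from the study: fast (6), medium (26), slow (101) frames.
rng = np.random.default_rng(0)
neutral = rng.random((128, 128))  # placeholder for a real neutral-face image
peak = rng.random((128, 128))     # placeholder for a real peak-expression image
for label, n in [("fast", 6), ("medium", 26), ("slow", 101)]:
    print(label, len(morph_sequence(neutral, peak, n)))
```

Played at a fixed frame rate, the three lengths yield the fast, medium, and slow display speeds that the experiments compare.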

10.
Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of children (N = 64, aged 5–10 years old) who freely labeled the emotion conveyed by static and dynamic facial expressions found no advantage of dynamic over static expressions; in fact, reliable differences favored static expressions. An alternative explanation of gradual improvement with age is that children's emotional categories change during development from a small number of broad emotion categories to a larger number of narrower categories—a pattern found here with both static and dynamic expressions.

11.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face moved naturally. Findings showed that dynamic information in the eyes and the mouth was important, and that the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.

12.
Recognition of emotional facial expressions is a central area in the psychology of emotion. This study presents two experiments. The first experiment analyzed recognition accuracy for the basic emotions of happiness, anger, fear, sadness, surprise, and disgust. Thirty pictures (five for each emotion) were displayed to 96 participants to assess recognition accuracy. The results showed that recognition accuracy varied significantly across emotions. The second experiment analyzed the effects of contextual information on recognition accuracy. Information either congruent or incongruent with a facial expression was displayed before the pictures of facial expressions were presented. The results of the second experiment showed that congruent information improved facial expression recognition, whereas incongruent information impaired it.

13.
The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body-expression matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two-alternative forced-choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of the emotional tone of a voice was similarly influenced by task-irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication, whether viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.

14.
隋雪, 任延涛. 《心理学报》 (Acta Psychologica Sinica), 2007, 39(1): 64-70
Eye-movement recording was used to investigate the online processing involved in facial expression recognition. Basic facial expressions can be divided into three valence categories: positive, neutral, and negative. Experiment 1 examined the basic pattern of university students' online processing when recognizing these three types of facial expressions; Experiment 2 used an occlusion technique to examine how much information from different facial regions contributes to facial expression recognition. The results showed that: (1) participants' online processing of facial expressions of different valence shared a common pattern, with eye-movement trajectories following an "inverted-V" shape; (2) recognition of facial expressions of different valence differed significantly, as reflected in both behavioral and eye-movement measures; (3) occluding different facial regions fundamentally changed eye-movement patterns, and occlusion affected the reaction time and accuracy of facial expression recognition; (4) facial expression recognition depended to different degrees on information from different facial regions, with information from the eye region playing the larger role. These results suggest that the online processing of facial expressions of different valence shares a common pattern, but that the mental resources consumed differ across expression types; moreover, expression recognition relies unequally on different facial regions, depending most heavily on information from the eyes.

15.
A matching advantage for dynamic human faces   (Total citations: 3; self-citations: 0; cited by others: 3)
Thornton IM, Kourtzi Z. Perception, 2002, 31(1): 113-132
In a series of three experiments, we used a sequential matching task to explore the impact of non-rigid facial motion on the perception of human faces. Dynamic prime images, in the form of short video sequences, facilitated matching responses relative to a single static prime image. This advantage was observed whenever the prime and target showed the same face but an identity match was required across expression (experiment 1) or view (experiment 2). No facilitation was observed for identical dynamic prime sequences when the matching dimension was shifted from identity to expression (experiment 3). We suggest that the observed dynamic advantage, the first reported for non-degraded facial images, arises because the matching task places more emphasis on visual working memory than typical face recognition tasks. More specifically, we believe that representational mechanisms optimised for the processing of motion and/or change-over-time are established and maintained in working memory and that such 'dynamic representations' (Freyd, 1987, Psychological Review, 94, 427-438) capitalise on the increased information content of the dynamic primes to enhance performance.

16.
Affective computing research has advanced emotion recognition systems using facial expressions, voices, gaits, and physiological signals, yet these methods are often impractical. This study integrates mouse cursor motion analysis into affective computing and investigates the idea that movements of the computer cursor can provide information about the emotion of the computer user. We extracted 16–26 trajectory features during a choice-reaching task and examined the link between emotion and cursor motions. Positive or negative emotions were induced in participants by music, film clips, or emotional pictures, and participants reported their emotions with questionnaires. Our 10-fold cross-validation analysis shows that statistical models formed from "known" participants (training data) could predict nearly 10%–20% of the variance in the positive affect and attentiveness ratings of "unknown" participants, suggesting that cursor movement patterns such as the area under the curve and direction changes help infer the emotions of computer users.
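As a rough sketch of the kind of trajectory features described, the Python fragment below computes two of them from a single recorded cursor path: an "area under curve" (here defined as the path's deviation from the straight start-to-end line, integrated over normalized time) and a count of horizontal direction changes. These definitions are plausible reconstructions for illustration, not the authors' exact formulas.

```python
import numpy as np

def cursor_features(xs, ys):
    """Two illustrative trajectory features for one choice-reaching trial.

    xs, ys: 1-D arrays of cursor coordinates sampled over time.
    Returns (area under curve, number of horizontal direction changes).
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    t = np.linspace(0.0, 1.0, len(xs))

    # Deviation of the actual path from the straight start-to-end line,
    # integrated with the trapezoid rule -- one common "area under curve".
    line_x = xs[0] + t * (xs[-1] - xs[0])
    line_y = ys[0] + t * (ys[-1] - ys[0])
    deviation = np.hypot(xs - line_x, ys - line_y)
    auc = float(np.sum(0.5 * (deviation[1:] + deviation[:-1]) * np.diff(t)))

    # Count sign flips in horizontal velocity as direction changes.
    dx = np.diff(xs)
    signs = np.sign(dx[dx != 0])
    dir_changes = int(np.sum(np.diff(signs) != 0))
    return auc, dir_changes

# Example: a wavy reach from (0, 0) toward (100, 100).
t = np.linspace(0.0, 1.0, 50)
xs = 100 * t + 30 * np.sin(3 * np.pi * t)
ys = 100 * t
print(cursor_features(xs, ys))
```

In the study, 16–26 such per-trial features were fed into statistical models whose predictions were evaluated with 10-fold cross-validation across participants.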

17.
The Emotion Recognition Task is a computerized paradigm for measuring the recognition of six basic facial emotional expressions: anger, disgust, fear, happiness, sadness, and surprise. Video clips of increasing length were presented, starting with a neutral face that changed into a facial expression at different intensities (20%-100%). The present study describes methodological aspects of the paradigm and its applicability in healthy participants (N=58; 34 men; ages between 22 and 75), focusing specifically on differences in recognition performance between the six emotion types and on age-related change. The results showed that happiness was the easiest emotion to recognize, while fear was the most difficult. Moreover, older adults performed worse than young adults on anger, sadness, fear, and happiness, but not on disgust and surprise. These findings indicate that this paradigm is probably more sensitive than emotion perception tasks using static images, suggesting it is a useful tool in the assessment of subtle impairments in emotion perception.

18.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

19.
Two experiments were conducted to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle (50%) expressiveness. The second experiment applied more finely grained analyses to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli.

20.
Looking at another person’s facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals’ facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual—and not just conceptual—processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
