Similar Articles (20 results)
1.
This study explored how rapidly emotion-specific facial muscle reactions were elicited when subjects were exposed to pictures of angry and happy facial expressions. In three separate experiments, it was found that distinctive facial electromyographic reactions, i.e., greater Zygomaticus major muscle activity in response to happy than to angry stimuli and greater Corrugator supercilii muscle activity in response to angry than to happy stimuli, were detectable after only 300–400 ms of exposure. These findings demonstrate that facial reactions are quickly elicited, indicating that expressive emotional reactions can be manifested very rapidly and are perhaps controlled by fast-operating facial affect programs.

2.
This study investigated whether subjects high and low in public-speaking fear react with different facial electromyographic (EMG) activity when exposed to negative and positive social stimuli. A High-fear and a Low-fear group were selected with the help of a questionnaire and were exposed to slides of angry and happy faces while facial EMG from the corrugator and zygomatic muscle regions was measured. The subjects also rated the stimuli on different emotional dimensions. Consistent with earlier research, Low-fear subjects reacted with increased corrugator activity to angry faces and increased zygomatic activity to happy faces. The High-fear group, on the other hand, did not distinguish between angry and happy faces. Rating data indicated that the High-fear group perceived angry faces as emotionally more negative. The present results are consistent with earlier studies, indicating that the facial-EMG technique is sensitive enough to detect differential responding in clinically interesting groups, such as people suffering from social fears.

3.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole-face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or a congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated-expression and neutral-face conditions. In contrast, no differences were found between the congruent and baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100, and 120 ms. Interference effects for the incongruent faces appeared at the earliest (20 ms) interval and persisted at the 60, 100, and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, spatial alignment was found to impair the recognition of incongruent expressions but to have no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

4.
The facial expressions of fear and anger are universal social signals in humans. Both expressions have been frequently presumed to signify threat to perceivers and therefore are often used in studies investigating responses to threatening stimuli. Here the authors show that the anger expression facilitates avoidance-related behavior in participants, which supports the notion of this expression being a threatening stimulus. The fear expression, on the other hand, facilitates approach behaviors in perceivers. This contradicts the notion of the fear expression as predominantly threatening or aversive and suggests it may represent an affiliative stimulus. Although the fear expression may signal that a threat is present in the environment, the effect of the expression on conspecifics may be in part to elicit approach.

5.
Functional magnetic resonance imaging (fMRI) of the human brain was used to compare changes in amygdala activity associated with viewing facial expressions of fear and anger. Pictures of human faces bearing expressions of fear or anger, as well as faces with neutral expressions, were presented to 8 healthy participants. The blood-oxygen-level-dependent (BOLD) fMRI signal within the dorsal amygdala was significantly greater for Fear than for Anger in a direct contrast. Significant BOLD signal changes in the ventral amygdala were observed in contrasts of Fear versus Neutral expressions and, in a more spatially circumscribed region, of Anger versus Neutral expressions. Thus, activity in the amygdala is greater in response to fearful facial expressions when contrasted with either neutral or angry faces. Furthermore, directly contrasting fearful with angry faces highlighted involvement of the dorsal amygdaloid region.

6.
Facial expression and gaze perception are thought to share brain mechanisms, but behavioural interactions, especially in gaze-cueing paradigms, are inconsistent. We conducted a series of gaze-cueing studies using dynamic facial cues to examine orienting across different emotional-expression and task conditions, including face inversion. Across experiments, at a short stimulus-onset asynchrony (SOA) we observed both an expression effect (i.e., faster responses when the face was emotional versus neutral) and a cue-validity effect (i.e., faster responses when the target was gazed at), but no interaction between validity and emotion. Results from face inversion suggest that the emotion effect may have been due to both facial expression and stimulus motion. At longer SOAs, validity and emotion interacted such that cueing by emotional faces, fearful faces in particular, was enhanced relative to neutral faces. These results converge with a growing body of evidence suggesting that gaze and expression are initially processed independently and interact at later stages to direct attentional orienting.

7.
8.
Automated assessment of facial expressions with machine vision software opens up new opportunities for assessing facial expression in an efficient and economical way in psychological and applied research. We investigated the assessment quality of one machine vision algorithm (FACET) in a study using standardized databases of dynamic facial expressions under different conditions (angle, distance, lighting, and resolution). We found high reliability in terms of rating concordance across conditions for facial expressions (intraclass correlation, ICC = 0.96) and action units (ICC = 0.78). Signal detection analyses showed good classification for both facial expressions (area under the curve, AUC > 0.99) and action unit scores (AUC = 0.91). In a second study, we investigated the convergent validity of machine vision assessment and electromyography (EMG) with regard to reaction times measured during the production of smiles (action unit 12) and frowns (action unit 4). To this end, we simultaneously measured EMG and expression classification with machine vision software in a response-priming task with validly and invalidly primed responses. Both EMG and machine vision data revealed similar performance costs in reaction times when inhibiting the falsely prepared expression and reprogramming the correct one. These results support machine vision as a suitable tool for assessing experimental effects on facial reaction times.
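The reliability (ICC) and classification (AUC) statistics reported in this abstract are standard measures. As a rough illustration only, the Python sketch below shows how such values can be computed, assuming the ICC(2,1) variant of Shrout and Fleiss and entirely hypothetical rating data; it is not the study's actual analysis pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score  # AUC, as in a signal detection analysis

def icc_2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1) (Shrout & Fleiss, 1979).
    ratings: (n_targets, k_conditions) array of scores for the same stimuli
    under k recording conditions (e.g., angle, distance, lighting)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between stimuli
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Hypothetical data: classifier scores for 200 stimuli under 4 conditions,
# plus binary ground truth (e.g., smile present/absent) for the AUC.
rng = np.random.default_rng(0)
signal = rng.normal(size=200)
scores = signal[:, None] + rng.normal(scale=0.3, size=(200, 4))
labels = (signal > 0).astype(int)

print(f"ICC(2,1) across conditions: {icc_2_1(scores):.2f}")
print(f"AUC for classification:     {roc_auc_score(labels, scores[:, 0]):.2f}")
```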

9.
The aim was to explore whether people high, as opposed to low, in speech anxiety react with a more pronounced differential facial response when exposed to angry and happy facial stimuli. High- and low-fear participants were selected based on their scores on a fear-of-public-speaking questionnaire. All participants were exposed to pictures of angry and happy faces while facial electromyographic (EMG) activity from the Corrugator supercilii and Zygomaticus major muscle regions was recorded. Skin conductance responses (SCR), heart rate (HR), and ratings were also collected. Participants high, as opposed to low, in speech anxiety displayed larger differential corrugator responding between angry and happy faces, indicating a larger negative emotional reaction. They also reacted with larger differential zygomatic responding between happy and angry faces, indicating a larger positive emotional reaction. Consistent with the facial reaction patterns, the high-fear group rated angry faces as more unpleasant and as expressing more disgust, and rated happy faces as more pleasant. There were no differences in SCR or HR responding between the high and low speech-anxiety groups. The present results support the hypothesis that people high in speech anxiety are disposed to show exaggerated sensitivity and facial responsiveness to social stimuli.

10.
The ability to decode facial expressions is an important component of social interaction and functioning. This ability is even more fundamental early in life, prior to the development of verbal communication. However, it is still unclear whether newborns can detect, discriminate, and process facial expressions and, if so, what mechanisms underlie this ability. In this study, we extend the investigation of perceived emotional expression by manipulating gaze direction together with different facial expressions. Specifically, newborns were presented with faces displaying neutral, fearful, or happy facial expressions accompanied by direct or averted gaze, and were tested in a visual preference paradigm. Four experiments were conducted in which different combinations of expression and gaze were used. However, only in the fourth experiment did newborns show a visual preference for a specific emotional display; they looked significantly longer at a happy face than at a neutral one only when both were accompanied by direct gaze. These results support an advantage for happy facial expressions in the development of the face-processing system and suggest that this preference reflects experience acquired during the first few days after birth.

11.
What expressive facial features and processing mechanisms make a person look trustworthy, as opposed to happy? Participants judged the un/happiness or un/trustworthiness of people with dynamic expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. Faces with an unfolding smile looked more trustworthy and happier than faces with a neutral mouth, regardless of the eye expression. Unfolding happy eyes increased both trustworthiness and happiness only in the presence of a congruent unfolding smiling mouth. Nevertheless, the contribution of the mouth was greater for happiness than for trustworthiness, and the mouth was especially visually salient for expressions favouring happiness more than trustworthiness. We conclude that the categorisation of facial happiness is driven more automatically by the visual saliency of a single feature, the smiling mouth, whereas the perception of trustworthiness is more strategic, with the eyes necessarily being incorporated into a configural face representation.

12.
The common within-subjects design of studies on the recognition of emotion from facial expressions allows the judgement of one face to be influenced by previous faces, thus introducing the potential for artefacts. The present study (N=344) showed that the canonical “disgust face” was judged as disgusted, provided that the preceding set of faces included “anger expressions”, but was judged as angry when the preceding set of faces excluded anger but instead included persons who looked sad or about to be sick. Chinese observers showed lower recognition of the “disgust face” than did American observers. Chinese observers also showed lower recognition of the “fear face” when responding in Chinese than in English.

13.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental-state recognition have used only static faces, so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental-state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental-state displays. This involved a new technique for freezing motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face moved naturally. Findings showed that dynamic information in the eyes and the mouth was important, and that the region of influence depended on the mental state. Processes involved in mental-state recognition are discussed.

14.
Typical adults mimic facial expressions within 1000 ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study evaluated mechanisms underlying RFRs during childhood and examined possible impairment in children with ASD. Experiment 1 found RFRs to happy and angry faces (not fear faces) in 15 typically developing children from 7 to 12 years of age. RFRs of fear (not anger) in response to angry faces indicated an emotional mechanism. In 11 children (8-13 years of age) with ASD, Experiment 2 found undifferentiated RFRs to fear expressions and no consistent RFRs to happy or angry faces. However, as children with ASD aged, matching RFRs to happy faces increased significantly, suggesting the development of processes underlying matching RFRs during this period in ASD.

15.
Using 20 levels of intensity, we measured children’s thresholds for discriminating the six basic emotional expressions from neutral, along with their misidentification rates. Combined with the results of a previous study using the same method (Journal of Experimental Child Psychology, 102 (2009), 503–521), the results indicate that by 5 years of age children are adult-like, or nearly adult-like, for happy expressions on all measures. Children’s sensitivity to other expressions continues to improve between 5 and 10 years of age (e.g., surprise, disgust, fear) or even after 10 years of age (e.g., anger, sadness). The results indicate a slow development of sensitivity to the expression of all basic emotions except happiness. This slow development may affect children’s social and cognitive development by limiting their sensitivity to subtle expressions of disapproval or disappointment.

16.
This study examined whether facial electromyographic (EMG) reactions differentiate between identical tone stimuli that subjects perceive as differing in unpleasantness. Subjects were repeatedly exposed to a 1000 Hz, 75 dB tone while facial EMG from the corrugator and zygomatic muscle regions was measured. Skin conductance and heart rate responses were also recorded. The subjects rated the unpleasantness of the stimulus and, based on these ratings, were divided into two groups, High and Low in perceived unpleasantness. As predicted, facial EMG activity reflected the perceived unpleasantness: the High group, but not the Low group, reacted with an increased corrugator response. The autonomic data, on the other hand, did not differ between groups. The results are consistent with the proposition that the facial muscles function as a readout system for emotional reactions and that facial muscle activity is intimately related to the experiential level of the emotional response system.

17.
Detection of angry and happy faces is generally found to be easier and faster than detection of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts, which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features facilitate the detection of both angry and happy expressions, but that the detection of happy faces benefits more from prominent features than does the detection of angry faces. Results confirmed the hypotheses: participants reacted faster to emotional expressions with prominent features (Study 1), and the detection of happy faces was facilitated more by the prominent feature than was the detection of angry faces (Study 2). The findings are compatible with evolutionary speculation that the angry expression is an alarming signal of potential threats to survival. Compared with angry faces, happy faces need more salient physical features to attain a similar level of processing efficiency.

18.
Rachael E. Jack, Visual Cognition, 2013, 21(9–10), 1248–1286
With over a century of theoretical development and empirical investigation across broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have deepened knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.

19.
The present study investigated whether dysphoric individuals have difficulty disengaging attention from negative stimuli and/or show reduced attention to positive information. Sad, neutral, and happy facial stimuli were presented in an attention-shifting task to 18 dysphoric and 18 control participants. Reaction times to neutral shapes (squares and diamonds) and event-related potentials to emotional faces were recorded. Dysphoric individuals did not show impaired attentional disengagement from sad faces or facilitated disengagement from happy faces. Right occipital lateralisation of the P100 was absent in dysphoric individuals, possibly indicating reduced attention-related sensory facilitation for faces. The frontal P200 was largest for sad faces among dysphoric individuals, whereas controls showed larger amplitudes to both sad and happy than to neutral expressions, suggesting that dysphoric individuals deployed early attention to sad, but not happy, expressions. Importantly, the results were obtained while controlling for participants' trait anxiety. We conclude that, at least under some circumstances, the presence of depressive symptoms can modulate early, automatic stages of emotional processing.

20.
The diffusion model (Ratcliff, 1978) and the leaky competing accumulator model (LCA; Usher & McClelland, 2001) were tested against two-choice data collected from the same subjects with the standard response-time procedure and the response-signal procedure. In the response-signal procedure, a stimulus is presented and then, at one of a number of experimenter-determined times, a signal to respond is presented. The models were fit to the data from the two procedures simultaneously under the assumption that responses in the response-signal procedure were based on a mixture of decision processes that had already terminated at response boundaries before the signal and decision processes that had not yet terminated. In the latter case, decisions were based on partial information in one variant of each model or on guessing in a second variant. Both variants of the diffusion model fit the data well, and both fit better than either variant of the LCA model, although the differences in numerical goodness-of-fit measures were not large enough to allow decisive selection between the models.
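For context, the diffusion model assumes that noisy evidence accumulates from a starting point toward one of two response boundaries; the boundary reached determines the choice, and the crossing time determines the decision time. The following is a minimal, hypothetical Python simulation of the response-signal logic described in the abstract (report the terminated decision if a boundary was reached before the signal, otherwise respond from partial information); all parameter values are illustrative, not the fitted values from the study.

```python
import numpy as np

def response_signal_trial(drift, boundary, signal_time,
                          dt=0.001, noise=1.0, rng=None):
    """One response-signal trial of a diffusion process (Ratcliff, 1978).
    Evidence x drifts from 0 toward +boundary (correct) or -boundary (error).
    If a boundary is reached before the signal, the terminated decision is
    reported; otherwise the response uses partial information, here the
    sign of the accumulated evidence."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < signal_time:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x > 0)  # 1 = correct response, 0 = error

# Accuracy grows with signal lag, tracing out a speed-accuracy trade-off curve.
rng = np.random.default_rng(1)
for lag in (0.1, 0.3, 0.9):
    acc = np.mean([response_signal_trial(0.8, 1.0, lag, rng=rng)
                   for _ in range(2000)])
    print(f"signal at {lag:.1f} s: P(correct) = {acc:.2f}")
```

In the guessing variant the authors also tested, non-terminated processes would instead respond randomly; replacing `int(x > 0)` with a coin flip whenever `abs(x) < boundary` would implement that alternative.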
