Similar Articles
1.
We examined the level of muscular tension in the mentalis muscle of 36 graphic design students at rest and during the presentation of three slides reproducing facial expressions. Analysis showed an increase in the myographic level of the mentalis muscle from the third second of measurement onward after presentation of the slide that involved contraction of the chin. We interpret this result by hypothesizing that the decoding of some facial expressions is accomplished through a micro-reproduction of the stimulus by the decoding subject.

2.
In an attempt to determine whether hypnotically induced affect could be reliably discriminated from simulations, three hypnotically trained female undergraduate subjects were presented with posthypnotic cues to either experience or simulate varying degrees of anxiety and pleasure. Facial expressions generated by the subjects in response to these cues were recorded on videotape and coded with the Facial Action Coding System (FACS). It was hypothesized that simulated emotional expressions, requiring greater cortical processing, would be marked by longer onset latencies and greater irregularity or fluctuation in muscular contraction than the presumably automatic changes in facial behavior accompanying posthypnotic emotions. Statistical analyses confirmed both expectations. The results were interpreted as supporting the validity of posthypnotically cued affect.

3.
Ten males and ten females served as both senders and receivers of nonverbal expressions in an experiment designed to examine various kinds of sending-receiving relationships. While the overall sending-receiving relationship for the five types of expressions combined was positive and nearly statistically significant (.10 > p > .05), the category-specific sending-receiving relationships were near zero in magnitude or slightly negative. Category-specific sending-receiving relationships that involved same-sex communication attempts only were more negative, with some reaching statistical significance. Females were found to be significantly better receivers, but not significantly better senders, than males. The results were discussed in terms of recent theoretical notions concerning sending and receiving processes.

4.
Rule learning (RL) is an implicit learning mechanism that allows infants to detect and generalize rule-like, repetition-based patterns (such as ABB and ABA) from a sequence of elements. Increasing evidence shows that RL operates in both the auditory and the visual domain and is modulated by perceptual expertise with the to-be-learned stimuli. Yet whether infants' ability to detect a higher-order rule from a sequence of stimuli is affected by affective information remains a largely unexplored issue. Using a visual habituation paradigm, we investigated whether the presence of emotional expressions with a positive or a negative value (i.e., happiness and anger) modulates 7- to 8-month-old infants' ability to learn a rule-like pattern from a sequence of faces of different identities. Results demonstrate that emotional facial expressions (either positive or negative) modulate infants' visual RL mechanism, although positive and negative facial expressions affect infants' RL in different ways: whereas anger disrupts infants' ability to learn the rule-like pattern from a face sequence, in the presence of a happy face infants show a familiarity preference, thus maintaining their learning ability. These findings show that emotional expressions exert an influence on infants' RL abilities, contributing to the investigation of how emotion and cognition interact in face processing during infancy.

5.
We examined the relationship between experienced positive/negative affect and cardiac reactivity and facial muscle movements during laboratory tasks with different demands. Heart rate, respiratory sinus arrhythmia, pre-ejection period, and facial electromyography were measured during a startle probe, mental arithmetic, a reaction time task, and a speech task. The results revealed that individuals experiencing high levels of positive affect exhibited more pronounced parasympathetic, heart rate, and orbicularis oculi reactivity than others. Individuals who experienced high levels of negative affect during the tasks showed larger corrugator supercilii responses. Men and women showed slightly different response patterns. In conclusion, cardiac reactivity may be associated with positive involvement and enthusiasm in some situations, and not all reactivity should automatically be considered potentially pathological.

6.
Three experiments investigated the perception of facial displays of emotions. Using a morphing technique, Experiment 1 (identification task) and Experiment 2 (ABX discrimination task) evaluated the merits of categorical and dimensional models of the representation of these stimuli. We argue that basic emotions (as they are usually defined verbally) do not correspond to primary perceptual categories emerging from the visual analysis of facial expressions. Instead, the results are compatible with the hypothesis that facial expressions are coded in a continuous anisotropic space structured by valence axes. Experiment 3 (identification task) introduces a new technique for generating chimeras to address the debate between feature-based and holistic models of the processing of facial expressions. Contrary to the pure holistic hypothesis, the results suggest that an independent assessment of discrimination features is possible, and may be sufficient for identifying expressions even when the global facial configuration is ambiguous. However, they also suggest that top-down processing may improve identification accuracy by assessing the coherence of local features.
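
The morphing technique mentioned in this abstract can be illustrated with a minimal sketch (not code from the cited study): two pre-aligned expression images are cross-faded in fixed steps to produce a continuum of stimuli for identification or ABX tasks. The file names, number of steps, and use of grayscale blending are illustrative assumptions; published morphing pipelines typically also warp facial landmarks rather than simply blending pixels.

```python
# Illustrative sketch: a simple expression continuum by cross-fading two
# pre-aligned face images. All file names and parameters are hypothetical.
import numpy as np
from PIL import Image

def expression_continuum(img_a_path, img_b_path, steps=7):
    """Return a list of blended images running from expression A to expression B."""
    a = np.asarray(Image.open(img_a_path).convert("L"), dtype=np.float32)
    b = np.asarray(Image.open(img_b_path).convert("L"), dtype=np.float32)
    frames = []
    for i in range(steps):
        alpha = i / (steps - 1)               # 0.0 = pure A, 1.0 = pure B
        blend = (1.0 - alpha) * a + alpha * b  # pixel-wise cross-fade
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames

# Example usage: a 7-step happy-to-fear continuum for an identification task
# morphs = expression_continuum("happy.png", "fear.png", steps=7)
```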

7.
Caricaturing facial expressions   (total citations: 1; self-citations: 0; citations by others: 1)
The physical differences between facial expressions (e.g., fear) and a reference norm (e.g., a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness, and sadness for the intensity of each of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration, the caricatures were rated as more emotionally intense but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms: a neutral expression, an average expression, or a different facial expression (e.g., anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
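
The caricaturing idea described in this abstract can be sketched as follows (a sketch of the general landmark-exaggeration approach, not the authors' implementation): landmark positions of an expression are pushed away from a reference norm by a fixed proportion. The landmark arrays and exaggeration levels below are hypothetical.

```python
# Illustrative sketch of landmark-based expression caricaturing: differences
# between an expression and a reference norm are exaggerated proportionally.
import numpy as np

def caricature(expr_landmarks, norm_landmarks, exaggeration=0.5):
    """Shift each landmark away from the norm by the given proportion.

    expr_landmarks, norm_landmarks: arrays of shape (n_points, 2).
    exaggeration: 0.0 reproduces the original expression; 0.5 yields a +50%
    caricature; negative values move landmarks toward the norm (anti-caricature).
    """
    expr = np.asarray(expr_landmarks, dtype=float)
    norm = np.asarray(norm_landmarks, dtype=float)
    return norm + (1.0 + exaggeration) * (expr - norm)

# Example usage: caricature a fear expression at +25% and +50% relative to neutral
# fear_25 = caricature(fear_points, neutral_points, exaggeration=0.25)
# fear_50 = caricature(fear_points, neutral_points, exaggeration=0.50)
```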

8.
Three studies examined the nature of the contributions of each hemisphere to the processing of facial expressions and facial identity. Pairs of faces whose members differed in either expression or identity were presented to the right or left visual field. Subjects were required to compare the members of the pair to each other (Experiments 1 and 2) or to a previously presented sample (Experiment 3). The results revealed that both face and expression perception show a left visual field (LVF) superiority, although the two tasks could be differentiated in terms of overall processing time and the interaction of laterality differences with sex. No clear-cut differences in laterality emerged for the processing of positive and negative expressions.

9.
Unconscious facial reactions to emotional facial expressions   (total citations: 22; self-citations: 0; citations by others: 22)
Studies reveal that when people are exposed to emotional facial expressions, they spontaneously react with distinct facial electromyographic (EMG) reactions in emotion-relevant facial muscles. These reactions reflect, in part, a tendency to mimic the facial stimuli. We investigated whether corresponding facial reactions can be elicited when people are unconsciously exposed to happy and angry facial expressions. Through use of the backward-masking technique, the subjects were prevented from consciously perceiving 30-ms exposures of happy, neutral, and angry target faces, which immediately were followed and masked by neutral faces. Despite the fact that exposure to happy and angry faces was unconscious, the subjects reacted with distinct facial muscle reactions that corresponded to the happy and angry stimulus faces. Our results show that both positive and negative emotional reactions can be unconsciously evoked, and particularly that important aspects of emotional face-to-face communication can occur on an unconscious level.
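
The backward-masking procedure described in this abstract can be sketched as a single trial in which a target face is displayed for roughly 30 ms and immediately replaced by a neutral mask. This is an illustrative sketch assuming the PsychoPy library; the image file names and durations are hypothetical, and precise timing in real experiments is normally locked to the monitor's refresh cycle rather than to wall-clock waits.

```python
# Illustrative sketch of one backward-masking trial (assumes PsychoPy is installed).
from psychopy import visual, core

win = visual.Window(size=(800, 600), color="gray", units="pix")
target = visual.ImageStim(win, image="happy_face.png")    # brief emotional target
mask = visual.ImageStim(win, image="neutral_face.png")    # neutral backward mask

def masked_trial(target_duration=0.030, mask_duration=0.500):
    target.draw()
    win.flip()                  # target onset
    core.wait(target_duration)  # ~30 ms, about 2 frames at a 60-Hz refresh rate
    mask.draw()
    win.flip()                  # mask immediately replaces the target
    core.wait(mask_duration)
    win.flip()                  # clear the screen

masked_trial()
win.close()
core.quit()
```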

10.
This study explored how rapidly emotion-specific facial muscle reactions were elicited when subjects were exposed to pictures of angry and happy facial expressions. In three separate experiments, it was found that distinctive facial electromyographic reactions, i.e., greater zygomaticus major muscle activity in response to happy than to angry stimuli and greater corrugator supercilii muscle activity in response to angry than to happy stimuli, were detectable after only 300–400 ms of exposure. These findings demonstrate that facial reactions are quickly elicited, indicating that expressive emotional reactions can be manifested very rapidly and are perhaps controlled by fast-operating facial affect programs.

11.
Sato, W., & Yoshikawa, S. (2007). Cognition, 104(1), 1-18.
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer morphing (Experiment 1) and videos (Experiment 2). The subjects' facial actions were unobtrusively videotaped and blindly coded using the Facial Action Coding System [FACS; Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press]. In the dynamic presentations common to both experiments, brow lowering, a prototypical action in angry expressions, occurred more frequently in response to angry expressions than to happy expressions. The pulling of lip corners, a prototypical action in happy expressions, occurred more frequently in response to happy expressions than to angry expressions in dynamic presentations. Additionally, the mean latency of these actions was less than 900 ms after the onset of dynamic changes in facial expression. Naive raters recognized the subjects' facial reactions as emotional expressions, with the valence corresponding to the dynamic facial expressions that the subjects were viewing. These results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

12.
This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

13.
14.
Facial images can be enhanced by application of an algorithm, the caricature algorithm, that systematically manipulates their distinctiveness (Benson & Perrett, 1991c; Brennan, 1985). In this study, we first produced a composite facial image from natural images of the six facial expressions of fear, sadness, surprise, happiness, disgust, and anger shown on a number of different individual faces (Ekman & Friesen, 1975). We then caricatured the composite images with respect to a neutral (resting) expression. Experiment 1 showed that rated strength of the target expression was directly related to the degree of enhancement for all the expressions. Experiment 2, which used a free rating procedure, found that, although caricature enhanced the strength of the target expression (more extreme ratings), it did not necessarily enhance its purity, inasmuch as the attributes of nontarget expressions were also enhanced. Naming of prototypes, of original exemplar images, and of caricatures was explored in Experiment 3 and followed the pattern suggested by the free rating conditions of Experiment 2, with no overall naming advantage to caricatures under these conditions. Overall, the experiments suggested that computational methods of compositing and caricature can be usefully applied to facial images of expression. Their utility in enhancing the distinctiveness of the expression depends on the purity of expression in the source image.

15.
Infant attention to facial expressions and facial motion   (total citations: 1; self-citations: 0; citations by others: 1)
Three-month-old infants were shown moving faces and still faces on videotape in a paired-comparison situation. Motion type was clearly specified, and facial expression and motion were varied separately. Infants saw a still face, internal motion on the face (i.e., motion of the internal features), and whole-object (i.e., side-to-side) motion, each with happy and neutral expressions. Infants showed a preference for expressions when the face was still and when it showed internal motion. Facial expression and facial motion were equally preferred, and both appeared to be salient dimensions of the face for three-month-old infants.

16.
Recognition of facial affect in depression   (total citations: 1; self-citations: 0; citations by others: 1)
Twenty-five depressed patients made more errors than 25 normal controls in recognizing a sad face, and they labeled other expressions as sadness when the affective content was not recognized. Correct recognition of six affects was related to the portion of the face depicted. The responses of 25 patients diagnosed with anxiety neurosis differed from those of both the depressed patients and the normal controls.

17.
18.
This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.

19.
Individuals spontaneously categorise other people on the basis of their gender, ethnicity and age. But what about the emotions they express? In two studies we tested the hypothesis that facial expressions are similar to other social categories in that they can function as contextual cues to control attention. In Experiment 1 we associated expressions of anger and happiness with specific proportions of congruent/incongruent flanker trials. We also created consistent and inconsistent category members within each of these two general contexts. The results demonstrated that participants exhibited a larger congruency effect when presented with faces in the emotional group associated with a high proportion of congruent trials. Notably, this effect transferred to inconsistent members of the group. In Experiment 2 we replicated the effects with faces depicting true and false smiles. Together these findings provide consistent evidence that individuals spontaneously utilise emotions to categorise others and that such categories determine the allocation of attentional control.
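
The proportion-congruency manipulation described in this abstract (associating each emotional context with a different share of congruent flanker trials) can be sketched as a simple trial-list generator. The proportions and trial counts below are hypothetical, not taken from the cited study.

```python
# Illustrative sketch: build a shuffled flanker trial list in which one
# emotion context is mostly congruent and the other mostly incongruent.
import random

def build_trials(n_per_context=80, p_congruent=None):
    """Return a shuffled list of trial dicts with context-specific congruency rates."""
    if p_congruent is None:
        # Hypothetical values: happy context 75% congruent, angry context 25% congruent.
        p_congruent = {"happy": 0.75, "angry": 0.25}
    trials = []
    for emotion, p in p_congruent.items():
        n_congruent = round(n_per_context * p)
        for i in range(n_per_context):
            trials.append({"context": emotion, "congruent": i < n_congruent})
    random.shuffle(trials)
    return trials

# Example usage: 160 trials spread over the two emotion contexts
# trials = build_trials()
```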

20.
Inversion interferes with the encoding of configural and holistic information more than it does with the encoding of explicitly represented and isolated parts. Accordingly, if facial expressions are explicitly represented in the face representation, their recognition should not be greatly affected by face orientation. In the present experiment, response times to detect a difference in hair color in line-drawn faces were unaffected by face orientation, but response times to detect the presence of brows and mouth were longer with inverted than with upright faces, independent of the emergent expression (neutral, happy, sad, and angry). Expressions are not explicitly represented; rather, they and the face configuration are represented as undecomposed wholes.

