Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced the visual perception of facial expressions. We presented a sound clip of laughter simultaneously with a happy, a neutral, or a sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of the happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced an opposite effect, increasing the perceived intensity of a sad expression in a crowd. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distractor faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a reexamination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may be similarly context dependent.

2.
Event-related brain potentials were measured in 7- and 12-month-old infants to examine the development of processing happy and angry facial expressions. In 7-month-olds a larger negativity to happy faces was observed at frontal, central, temporal and parietal sites (Experiment 1), whereas 12-month-olds showed a larger negativity to angry faces at occipital sites (Experiment 2). These data suggest that processing of these facial expressions undergoes development between 7 and 12 months: while 7-month-olds exhibit heightened sensitivity to happy faces, 12-month-olds resemble adults in their heightened sensitivity to angry faces. In Experiment 3 infants' visual preference was assessed behaviorally, revealing that the differences in ERPs observed at 7 and 12 months do not simply reflect differences in visual preference.

3.
There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether observers' scan paths vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor the scanning behavior of healthy participants while they looked at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. Especially in sad facial expressions, participants more frequently directed the initial fixation to the eyes compared with all other expressions. In happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that both the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at regions that are most characteristic for each emotion.
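As a rough illustration of the measure described in the abstract above, a dominance ratio (fixation time on the eyes and mouth relative to the rest of the face) could be computed as follows. This is a hypothetical sketch: the region names, durations, and the `dominance_ratio` function are illustrative assumptions, not taken from the study.

```python
def dominance_ratio(fixations):
    """Compute total fixation duration on the eyes and mouth
    relative to the rest of the face.

    fixations: list of (region, duration_ms) tuples.
    """
    core = sum(d for r, d in fixations if r in ("eyes", "mouth"))
    rest = sum(d for r, d in fixations if r not in ("eyes", "mouth"))
    # A ratio above 1 means the eyes and mouth together received
    # more fixation time than all other facial regions combined.
    return core / rest if rest else float("inf")

# Illustrative fixation record for a single trial (durations in ms):
sample = [("eyes", 420), ("mouth", 310), ("nose", 150), ("forehead", 95)]
print(round(dominance_ratio(sample), 2))  # (420 + 310) / (150 + 95) ≈ 2.98
```

A ratio computed this way can then be compared across expression conditions (e.g., sad vs. happy faces), which is the kind of contrast the study reports.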

4.
Most previous studies investigating children’s ability to recognize facial expressions used only intense exemplars. Here we compared the sensitivity of 5-, 7-, and 10-year-olds with that of adults (n = 24 per age group) for less intense expressions of happiness, sadness, and fear. The developmental patterns differed across expressions. For happiness, by 5 years of age, children were as sensitive as adults even to low intensities. For sadness, by 5 years of age, children were as accurate as adults in judging that the face was expressive (i.e., not neutral), but even at 10 years of age, children were more likely to misjudge it as fearful. For fear, children’s thresholds were not adult-like until 10 years of age, and children often confused it with sadness at 5 years of age. For all expressions, including even happy expressions, 5- and 7-year-olds were less accurate than adults in judging which of two expressions was more intense. Together, the results indicate that there is slow development of accurate decoding of subtle facial expressions.

5.
The aim was to explore whether people high as opposed to low in speech anxiety react with a more pronounced differential facial response when exposed to angry and happy facial stimuli. High and low fear participants were selected based on their scores on a fear of public speaking questionnaire. All participants were exposed to pictures of angry and happy faces while facial electromyographic (EMG) activity from the Corrugator supercilii and the Zygomaticus major muscle regions was recorded. Skin conductance responses (SCR), heart rate (HR) and ratings were also collected. Participants high as opposed to low in speech anxiety displayed a larger differential corrugator responding, indicating a larger negative emotional reaction, between angry and happy faces. They also reacted with a larger differential zygomatic responding, indicating a larger positive emotional reaction, between happy and angry faces. Consistent with the facial reaction patterns, the high fear group rated angry faces as more unpleasant and as expressing more disgust, and further rated happy faces as more pleasant. There were no differences in SCR or HR responding between high and low speech anxiety groups. The present results support the hypothesis that people high in speech anxiety are disposed to show an exaggerated sensitivity and facial responsiveness to social stimuli.

6.
Motivation and Emotion - In two experiments using a Rapid Serial Visual Presentation (RSVP) we investigated how emotional and neutral faces (T1) modulate temporal attention for a following neutral...

7.
We investigated the influence of happy and angry expressions on memory for new faces. Participants were presented with happy and angry faces in an intentional or incidental learning condition and were later asked to recognise the same faces displaying a neutral expression. They also had to remember what the initial expressions of the faces had been. Remember/know/guess judgements were made both for identity and expression memory. Results showed that faces were better recognised when presented with a happy rather than an angry expression, but only when learning was intentional. This was mainly due to an increase of the "remember" responses for happy faces when encoding was intentional rather than incidental. In contrast, memory for emotional expressions was not different for happy and angry faces whatever the encoding conditions. We interpret these findings according to the social meaning of emotional expressions for the self.

8.
Although facial information is distributed over spatial as well as temporal domains, thus far research on selective attention to disapproving faces has concentrated predominantly on the spatial domain. This study examined the temporal characteristics of visual attention towards facial expressions by presenting a Rapid Serial Visual Presentation (RSVP) paradigm to high (n = 33) and low (n = 34) socially anxious women. Neutral letter stimuli (p, q, d, b) were presented as the first target (T1), and emotional faces (neutral, happy, angry) as the second target (T2). Irrespective of social anxiety, the attentional blink was attenuated for emotional faces. Emotional faces as T2 did not influence identification accuracy of a preceding (neutral) target. The relatively low threshold for the (explicit) identification of emotional expressions is consistent with the view that emotional facial expressions are processed relatively efficiently.

9.
The current study tested whether the perception of angry faces is cross-culturally privileged over that of happy faces, by comparing perception of the offset of emotion in a dynamic flow of expressions. Thirty Chinese and 30 European-American participants saw movies that morphed an anger expression into a happy expression of the same stimulus person, or vice versa. Participants were asked to stop the movie at the point where they ceased seeing the initial emotion. As expected, participants cross-culturally continued to perceive anger longer than happiness. Moreover, anger was perceived longer in in-group than in out-group faces. The effects were driven by female rather than male targets. Results are discussed with reference to the important role of context in emotion perception.

11.
It is commonly assumed that threatening expressions are perceptually prioritised, possessing the ability to automatically capture and hold attention. Recent evidence suggests that this prioritisation depends on the task relevance of emotion in the case of attention holding and for fearful expressions. Using a hybrid attentional blink (AB) and repetition blindness (RB) paradigm we investigated whether task relevance also impacts on prioritisation through attention capture and perceptual salience, and whether these effects generalise to angry expressions. Participants judged either the emotion (relevant condition) or gender (irrelevant condition) of two target facial stimuli (fearful, angry or neutral) embedded in a stream of distractors. Attention holding and capturing was operationalised as modulation of AB deficits by first target (T1) and second target (T2) expression. Perceptual salience was operationalised as RB modulation. When emotion was task-relevant (Experiment 1; N = 29) fearful expressions captured and held attention, and were more perceptually salient than neutral expressions. Angry expressions captured attention, but were less perceptually salient and less capable of holding attention than fearful and neutral expressions. When emotion was task-irrelevant (Experiment 2; N = 30), only fearful attention capture and perceptual salience effects remained significant. Our findings highlight the importance for threat-prioritisation research of heeding both the type of threat and the type of prioritisation investigated.

12.
Emotion researchers often categorize angry and fearful face stimuli as "negative" or "threatening". Perception of fear and anger, however, appears to be mediated by dissociable neural circuitries, and the two expressions often elicit distinguishable behavioral responses. The authors sought to elucidate whether viewing anger and fear expressions produces dissociable psychophysiological responses (i.e., the startle reflex). The results of two experiments using different facial stimulus sets (representing anger, fear, neutral, and happy) indicated that viewing anger was associated with a significantly heightened startle response (p < .05) relative to viewing fear, happy, and neutral. This finding suggests that while anger and fear faces both convey messages of "threat", their priming effect on startle circuitry differs. Thus, angry expressions, representing viewer-directed threat with an unambiguous source (i.e., the expresser), may more effectively induce a motivational propensity to withdraw or escape. The source of threat is comparatively less clear for fearful faces. The differential effects of these two facial threat signals on the defensive motivational system add to a growing literature highlighting the importance of distinguishing between emotional stimuli of similar valence, along lines of meaning and functional impact.

13.
As evidence for a hypothesis that pupil size plays an important role in nonverbal communication, Hess (1975) reported that adults draw in appropriately sized pupils on his happy and angry faces task. However, he did not report a statistical test of his data. In this study, we replicated Hess' research and found, consistent with his hypothesis, that college students (n = 223) drew in significantly larger pupils on the happy face.

14.
Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Öhman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of the literature found inconsistent evidence for the effect and suggested that low-level visual confounds could not be ruled out as the driving force behind the anger superiority effect. We then conducted 7 experiments, carefully designed to eliminate many of the confounding variables present in past demonstrations. These experiments showed no evidence that angry faces popped out of crowds or even that they were efficiently detected. These experiments instead revealed a search asymmetry favoring happy faces. Moreover, in contrast to most previous studies, the happiness superiority effect was shown to be robust even when obvious perceptual confounds (such as the contrast of the white exposed teeth typically displayed in smiling faces) were eliminated in the happy targets. Rather than attribute this effect to the existence of innate happiness detectors, we speculate that the human expression of happiness has evolved to be more visually discriminable because its communicative intent is less ambiguous than that of other facial expressions.

15.
Several experiments have shown that anxious individuals have an attentional bias towards threat cues. It is also known, however, that exposure to a subjectively threatening but relatively harmless stimulus tends to lead to a reduction in fear. Accordingly, some authors have hypothesised that high trait anxious individuals have a vigilant-avoidant pattern of visual attention to threatening stimuli. In the present study, 52 high trait anxious and 48 low trait anxious subjects were shown pairs of emotional faces, while their direction of gaze was continuously monitored. For 0-1000 ms, both groups were found to view angry faces more than happy faces. For 2000-3000 ms, however, only high trait anxious subjects averted their gaze from angry faces more than they did from happy faces.

16.
From birth, infants are exposed to a wealth of emotional information in their interactions. Much research has been done to investigate the development of emotion perception, and factors influencing that development. The current study investigates the role of familiarity on 3.5-month-old infants' generalization of emotional expressions. Infants were assigned to one of two habituation sequences: in one sequence, infants were visually habituated to parental expressions of happy or sad. At test, infants viewed either a continuation of the habituation sequence, their mother depicting a novel expression, an unfamiliar female depicting the habituated expression, or an unfamiliar female depicting a novel expression. In the second sequence, a new sample of infants was matched to the infants in the first sequence. These infants viewed the same habituation and test sequences, but the actors were unfamiliar to them. Only those infants who viewed their own mothers and fathers during the habituation sequence increased looking. They dishabituated looking to maternal novel expressions, the unfamiliar female's novel expression, and the unfamiliar female depicting the habituated expression, especially when sad parental expressions were followed by an expression change to happy or to a change in person. Infants are guided in their recognition of emotional expressions by the familiarity of their parents, before generalizing to others.

17.
Increasing evidence indicates that evaluation of affective stimuli facilitates the execution of affect-congruent approach and avoidance responses, and vice versa. These effects are proposed to be mediated by increases or decreases in the relative distance to the stimulus, due to the participant's action. In a series of experiments we investigated whether stimulus categorisation is similarly influenced when changes in this relative distance are due to movement of the stimulus instead of movements by the participant. Participants responded to happy and angry faces that appeared to approach (move towards) or withdraw (move away) from them. In line with previous findings, affective categorisation was facilitated when the movement was congruent with stimulus valence, resulting in faster and more correct responses to approaching happy and withdrawing angry faces. These findings suggest that relative distance indeed plays a crucial role in approach–avoidance congruency effects, and that these effects do not depend on the execution of movements by the participant.

18.
Thiessen ED. Cognitive Science, 2010, 34(6): 1093-1106
Infant and adult learners are able to identify word boundaries in fluent speech using statistical information. Similarly, learners are able to use statistical information to identify word-object associations. Successful language learning requires both feats. In this series of experiments, we presented adults and infants with audio-visual input from which it was possible to identify both word boundaries and word-object relations. Adult learners were able to identify both kinds of statistical relations from the same input. Moreover, their learning was actually facilitated by the presence of two simultaneously present relations. Eight-month-old infants, however, do not appear to benefit from the presence of regular relations between words and objects. Adults, like 8-month-olds, did not benefit from regular audio-visual correspondences when they were tested with tones, rather than linguistic input. These differences in learning outcomes across age and input suggest that both developmental and stimulus-based constraints affect statistical learning.

19.
When a briefly presented and then masked visual object is identified, identification of a second target is impaired for several hundred milliseconds. This phenomenon is known as attentional blink or attentional dwell time. The present study is an attempt to investigate the role of salient emotional information in shifts of covert visual attention over time. Two experiments were conducted using the dwell time paradigm, in which two successive targets are presented at different locations with a variable stimulus onset asynchrony (SOA). In the first experiment, real emotional faces (happy/sad) were presented as the first target, and letters (L/T) were presented as the second target. The order of stimulus presentation was reversed in the second experiment. In the first experiment, letters preceded by happy faces were identified better than letters preceded by sad faces at SOAs of less than 200 ms. Similarly, happy faces were identified better than sad faces at short SOAs in Experiment 2. The results show that the time course of visual attention depends on the emotional content of the stimuli. The findings indicate that happy faces are associated with distributed attention or a broad scope of attention and require fewer attentional resources than do sad faces.

20.
The important ability to discriminate facial expressions of emotion develops early in human ontogeny. In the present study, 7-month-old infants’ event-related potentials (ERPs) in response to angry and fearful emotional expressions were measured. The angry face evoked a larger negative component (Nc) at fronto-central leads between 300 and 600 ms after stimulus onset when compared to the amplitude of the Nc to the fearful face. Furthermore, over posterior channels, the angry expression elicited a N290 that was larger in amplitude and a P400 that was smaller in amplitude than for the fearful expression. This is the first study to show that infants' ability to discriminate angry and fearful facial expressions can be measured at the electrophysiological level. These data suggest that 7-month-olds allocated more attentional resources to the angry face, as indexed by the Nc. One implication of this result is that the social signal values of the two expressions were perceived differentially, not merely as “negative”. Furthermore, it is possible that the angry expression might have been more arousing and discomforting for the infant compared with the fearful expression.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号