Similar Articles
1.
Facial expressions are critical for effective social communication, and as such may be processed by the visual system even when it might be advantageous to ignore them. Previous research has shown that categorising emotional words was impaired when faces of a conflicting valence were simultaneously presented. In the present study, we examined whether emotional word categorisation would also be impaired when faces of the same (negative) valence but different emotional category (either angry, sad or fearful) were simultaneously presented. Behavioural results provided evidence for involuntary processing of basic emotional facial expression category, with slower word categorisation when the face and word categories were incongruent (e.g., angry word and sad face) than congruent (e.g., angry word and angry face). Event-related potentials (ERPs) time-locked to the presentation of the word–face pairs also revealed that emotional category congruency effects were evident from approximately 170 ms after stimulus onset.

2.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

3.
In the face literature, it is debated whether the identification of facial expressions requires holistic (i.e., whole face) or analytic (i.e., parts-based) information. In this study, happy and angry composite expressions were created in which the top and bottom face halves formed either an incongruent (e.g., angry top + happy bottom) or congruent composite expression (e.g., happy top + happy bottom). Participants reported the expression in the target top or bottom half of the face. In Experiment 1, the target half in the incongruent condition was identified less accurately and more slowly relative to the baseline isolated expression or neutral face conditions. In contrast, no differences were found between the congruent and baseline conditions. In Experiment 2, the effects of exposure duration were tested by presenting faces for 20, 60, 100 and 120 ms. Interference effects for the incongruent faces appeared at the earliest 20 ms interval and persisted across the 60, 100 and 120 ms intervals. In contrast, no differences were found between the congruent and baseline face conditions at any exposure interval. In Experiment 3, it was found that spatial alignment impaired the recognition of incongruent expressions, but had no effect on congruent expressions. These results are discussed in terms of holistic and analytic processing of facial expressions.

4.
We examined proactive and reactive control effects in the context of task-relevant happy, sad, and angry facial expressions on a face-word Stroop task. Participants identified the emotion expressed by a face that contained a congruent or incongruent emotional word (happy/sad/angry). Proactive control effects were measured as the reduction in Stroop interference (the difference between incongruent and congruent trials) as a function of previous-trial emotion and previous-trial congruence. Reactive control effects were measured as the reduction in Stroop interference as a function of current-trial emotion and previous-trial congruence. Negative emotions on the previous trial exerted greater influence on proactive control than the positive emotion did. Sad faces on the previous trial produced a greater reduction in Stroop interference for happy faces on the current trial. However, angry faces on the current trial showed stronger adaptation effects than happy faces. Thus, both proactive and reactive control mechanisms depend on the emotional valence of task-relevant stimuli.
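As a rough illustration of how such interference scores are derived, the following Python sketch (pandas assumed; the column names and reaction times are made up for illustration, not the study's data) computes Stroop interference conditioned on previous-trial congruence:

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, with reaction time (ms),
# current-trial congruence, and the emotion of the face on that trial.
trials = pd.DataFrame({
    "rt":        [612, 655, 598, 701, 640, 689, 605, 668],
    "congruent": [True, False, True, False, True, False, True, False],
    "emotion":   ["happy", "happy", "sad", "sad", "angry", "angry", "happy", "sad"],
})

# Tag each trial with the congruence of the preceding trial; sequential
# analyses of this kind underlie the proactive-control measure.
trials["prev_congruent"] = trials["congruent"].shift(1)
trials = trials.dropna(subset=["prev_congruent"])

def stroop_interference(df: pd.DataFrame) -> float:
    """Mean RT on incongruent trials minus mean RT on congruent trials."""
    return df.loc[~df["congruent"], "rt"].mean() - df.loc[df["congruent"], "rt"].mean()

# Interference conditioned on previous-trial congruence: a smaller value after
# incongruent trials is the classic conflict-adaptation (Gratton) signature.
for prev, group in trials.groupby("prev_congruent"):
    print(f"previous trial congruent={prev}: interference = {stroop_interference(group):.1f} ms")
```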

5.
The present study used event-related potentials (ERPs) to explore the nature of space-valence congruency effects. We presented participants with up or down arrows at the centre of the screen and then asked them to judge whether the following target words were emotional or neutral. The target words included positive emotional words (e.g., “happy” and “delight”), negative emotional words (e.g., “sad” and “depressive”) and neutral words (e.g., “history” and “country”). Behavioural data showed that positive targets were identified faster when primed by up arrows than by down arrows, whereas negative targets were identified faster when primed by down arrows than by up arrows. The ERP analysis showed larger P2 amplitudes in the congruent condition (i.e., positive targets following up arrows or negative targets following down arrows) than in the incongruent condition (i.e., positive targets following down arrows or negative targets following up arrows). Furthermore, larger N400 amplitudes were found in the incongruent condition than in the congruent condition, and larger LPC amplitudes in the congruent condition than in the incongruent condition. Therefore, in addition to replicating space-valence congruency effects in a neutral/emotional judgement task, our study extends previous work by showing that spatial information modulates the processing of emotional words at multiple stages.
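For readers unfamiliar with component analyses, the sketch below shows the standard mean-amplitude-in-a-window computation on which claims like "larger N400 amplitudes in the incongruent condition" rest. It is a minimal NumPy illustration: random noise stands in for real condition averages, and the window boundaries are conventional assumptions, not the paper's exact values.

```python
import numpy as np

# Hypothetical single-channel ERP condition averages: 500 Hz sampling,
# -200 ms to +800 ms around target onset. Random noise stands in for data.
sfreq = 500
times = np.arange(-0.2, 0.8, 1 / sfreq)            # seconds
rng = np.random.default_rng(0)
erp_congruent = rng.normal(0.0, 1.0, times.size)   # microvolts
erp_incongruent = rng.normal(0.0, 1.0, times.size)

def mean_amplitude(erp: np.ndarray, t_start: float, t_end: float) -> float:
    """Average voltage within a latency window -- the usual component measure."""
    window = (times >= t_start) & (times < t_end)
    return float(erp[window].mean())

# Conventional windows (assumptions, not the paper's exact values):
# P2 ~ 150-250 ms, N400 ~ 300-500 ms, LPC ~ 500-800 ms.
for name, (t0, t1) in {"P2": (0.15, 0.25), "N400": (0.30, 0.50), "LPC": (0.50, 0.80)}.items():
    diff = mean_amplitude(erp_incongruent, t0, t1) - mean_amplitude(erp_congruent, t0, t1)
    print(f"{name}: incongruent minus congruent = {diff:+.2f} uV")
```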

6.
We investigated whether emotional information from facial expression and hand movement quality was integrated when identifying the expression of a compound stimulus showing a static facial expression combined with emotionally expressive dynamic manual actions. The emotions (happiness, neutrality, and anger) expressed by the face and hands were either congruent or incongruent. In Experiment 1, the participants judged whether the stimulus person was happy, neutral, or angry. Judgments were mainly based on the facial expressions, but were affected by manual expressions to some extent. In Experiment 2, the participants were instructed to base their judgment on the facial expression only. An effect of hand movement expressive quality was observed for happy facial expressions. The results are consistent with the proposal that perception of facial expressions of emotions can be affected by the expressive qualities of hand movements.

7.
Human faces are among the most important visual stimuli that we encounter at all ages. This importance partly stems from the face as a conveyer of information on the emotional state of other individuals. Previous research has demonstrated specific scanning patterns in response to threat-related compared to non-threat-related emotional expressions. This study investigated how visual scanning patterns toward faces which display different emotional expressions develop during infancy. The visual scanning patterns of 4-month-old and 7-month-old infants and adults when looking at threat-related (i.e., angry and fearful) versus non-threat-related (i.e., happy, sad, and neutral) emotional faces were examined. We found that infants as well as adults displayed an avoidant looking pattern in response to threat-related emotional expressions, with reduced dwell times and relatively fewer fixations on the inner features of the face. In addition, adults showed a pattern of eye contact avoidance when looking at threat-related emotional expressions that was not yet present in infants. Thus, whereas a general avoidant reaction to threat-related facial expressions appears to be present from very early in life, the avoidance of eye contact might be a learned response toward others' anger and fear that emerges later during development.

8.
We examined 7-month-old infants' processing of emotionally congruent and incongruent face-voice pairs using ERP measures. Infants watched facial expressions (happy or angry) and, after a delay of 400 ms, heard a word spoken with a prosody that was either emotionally congruent or incongruent with the face being presented. The ERP data revealed that the amplitude of a negative component and a subsequent positive component in infants' ERPs varied as a function of crossmodal emotional congruity. An emotionally incongruent prosody elicited a larger negative component in infants' ERPs than did an emotionally congruent prosody. Conversely, the amplitude of infants' positive component was larger to emotionally congruent than to incongruent prosody. Previous work has shown that an attenuation of the negative component and an enhancement of the later positive component in infants' ERPs reflect the recognition of an item. Thus, the current findings suggest that 7-month-olds integrate emotional information across modalities and recognize common affect in the face and voice.

9.
We tested the response dynamics of the evaluative priming effect (i.e. facilitation of target responses following evaluatively congruent compared with evaluatively incongruent primes) using a mouse tracking procedure that records hand movements during the execution of categorisation tasks. In Experiment 1, when participants performed the evaluative categorisation task but not the non-evaluative semantic categorisation task, their mouse trajectories for evaluatively incongruent trials curved more toward the opposite response than those for evaluatively congruent trials, indicating the emergence of evaluative priming effects based on response competition. In Experiment 2, implementing a task-switching procedure in which evaluative and non-evaluative categorisation tasks were intermixed, we obtained reliable evaluative priming effects in the non-evaluative semantic categorisation task as well as in the evaluative categorisation task when participants assigned attention to the evaluative stimulus dimension. Analyses of hand movements revealed that the evaluative priming effects in the evaluative categorisation task were reflected in the mouse trajectories, while evaluative priming effects in the non-evaluative categorisation tasks were reflected in initiation times (i.e. the time elapsed between target onset and first mouse movement). Based on these findings, we discuss the methodological benefits of the mouse tracking procedure and the underlying processes of evaluative priming effects.
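A common way to quantify how strongly a trajectory "curves toward the opposite response" is its maximum deviation from the idealized straight path between start and end points. The following minimal NumPy sketch (with a made-up trajectory) illustrates this general measure; it is not the authors' analysis code.

```python
import numpy as np

def max_deviation(xy: np.ndarray) -> float:
    """Maximum perpendicular distance of a trajectory from the straight line
    joining its start and end points -- a standard curvature index in
    mouse-tracking research."""
    start, end = xy[0], xy[-1]
    direction = (end - start) / np.linalg.norm(end - start)
    offsets = xy - start
    along = offsets @ direction                     # progress along the direct path
    perp = offsets - np.outer(along, direction)     # perpendicular component
    return float(np.linalg.norm(perp, axis=1).max())

# Hypothetical trajectory (pixels): starts at the bottom centre, drifts toward
# the opposite response before settling on the chosen one, as on an
# incongruent trial.
trajectory = np.array([[0, 0], [-15, 40], [-25, 90], [-10, 140], [60, 190], [100, 200]], dtype=float)
print(f"maximum deviation = {max_deviation(trajectory):.1f} px")
```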

10.
The human body is an important source of information for inferring a person’s emotional state. Research with adult observers indicates that the posture of the torso, arms and hands provides important perceptual cues for recognising angry, fearful and happy expressions. Much less is known about whether infants process body regions differently for different body expressions. To address this issue, we used eye tracking to investigate whether infants’ visual exploration patterns differed when viewing body expressions. Forty-eight 7-month-old infants were randomly presented with static images of adult female bodies expressing anger, fear and happiness, as well as an emotionally neutral posture. Facial cues to emotional state were removed by masking the faces. We measured the proportion of looking time, the proportion and number of fixations, and the duration of fixations on the head, upper-body and lower-body regions for the different expressions. We showed that infants explored the upper body more than the lower body. Importantly, infants at this age fixated differently on different body regions depending on the expression of the body posture. In particular, infants spent a larger proportion of their looking time, and had longer fixation durations, on the upper body for fear relative to the other expressions. These results replicate and extend findings on infant processing of emotional expressions displayed by human bodies, and they support the hypothesis that infants’ visual exploration of human bodies is driven by the upper body.
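The looking-time measures mentioned above reduce to simple aggregations over fixation records. A minimal pandas sketch follows (the fixations, durations and AOI labels are hypothetical, not the study's data):

```python
import pandas as pd

# Hypothetical fixation-level eye-tracking data: one row per fixation, with
# its duration (ms) and the area of interest (AOI) it landed in.
fixations = pd.DataFrame({
    "emotion":  ["fear", "fear", "fear", "happy", "happy", "happy"],
    "aoi":      ["head", "upper_body", "upper_body", "lower_body", "upper_body", "head"],
    "duration": [180, 240, 310, 150, 420, 200],
})

# Per expression and body region: total and mean fixation duration plus
# fixation count -- the raw ingredients of the measures described above.
summary = fixations.groupby(["emotion", "aoi"])["duration"].agg(
    total="sum", n_fixations="count", mean_duration="mean"
)

# Proportion of looking time within each expression.
summary["prop_looking_time"] = (
    summary["total"] / summary.groupby(level="emotion")["total"].transform("sum")
)
print(summary)
```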

11.
The current study examined whether the interaction between emotion and executive control (EC) is modulated by the type of processing the emotional information receives, namely whether it is explicitly processed, implicitly processed or passively viewed. In each trial, a negative or neutral picture preceded a congruent or incongruent arrow-flanker stimulus. Incongruent stimuli are known to recruit EC. Explicit processing of the pictures (Experiment 1a), which required responding to their emotional content, resulted in emotional interference for congruent but not for incongruent stimuli. Similar effects were shown for the passive viewing condition (Experiment 2). In contrast, implicit processing (Experiment 1b), which required responding to non-emotional content, resulted in emotional interference for both congruent and incongruent stimuli. Thus, our findings indicate that implicit emotional processing affects performance independently of EC recruitment. In contrast, explicit emotional processing and passive viewing of emotional pictures lead to reduced emotional interference when EC is recruited.

12.
The ability to quickly perceive threatening facial expressions allows one to detect emotional states and respond appropriately. The anger superiority hypothesis predicts that angry faces capture attention faster than happy faces. Previous studies have used photographic (Hansen & Hansen, 1988) and schematic face images (e.g., Eastwood, Smilek, & Merikle, 2001; Öhman, Lundqvist, & Esteves, 2001) to study the anger superiority effect, but confounds in stimulus construction have led to conflicting findings. In the current study, participants performed a visual search for either angry or happy target faces among crowds of novel, perceptually intermediate morph distractors. A threat-detection advantage was evident, with participants showing faster reaction times and greater accuracy in detecting angry than happy faces. Search slopes, however, did not differ significantly. The results suggest a threat-detection advantage mediated by serial rather than preattentive processing.
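Search slopes are the slope of the regression line relating reaction time to display set size. The sketch below (illustrative numbers, NumPy assumed) shows the computation and why steep slopes point to serial search:

```python
import numpy as np

# Hypothetical mean RTs (ms) at three display set sizes for each target type;
# the values are illustrative, not the study's data.
set_sizes = np.array([4, 8, 12])
rt_angry_target = np.array([620, 740, 855])
rt_happy_target = np.array([655, 790, 920])

# Slope of the least-squares RT-by-set-size line, in ms per item.
slope_angry = np.polyfit(set_sizes, rt_angry_target, 1)[0]
slope_happy = np.polyfit(set_sizes, rt_happy_target, 1)[0]

# Roughly flat slopes (a few ms/item) are usually read as preattentive
# "pop-out"; steep slopes like these imply item-by-item serial scanning.
print(f"angry target: {slope_angry:.1f} ms/item; happy target: {slope_happy:.1f} ms/item")
```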

13.
The aim of the current study was to examine how emotional expressions displayed by the face and body influence the decision to approach or avoid another individual. In Experiment 1, we examined approachability judgments provided to faces and bodies presented in isolation that were displaying angry, happy, and neutral expressions. Results revealed that angry expressions were associated with the most negative approachability ratings, for both faces and bodies. The effect of happy expressions was shown to differ for faces and bodies, with happy faces judged more approachable than neutral faces, whereas neutral bodies were considered more approachable than happy bodies. In Experiment 2, we sought to examine how we integrate emotional expressions depicted in the face and body when judging the approachability of face-body composite images. Our results revealed that approachability judgments given to face-body composites were driven largely by the facial expression. In Experiment 3, we then aimed to determine how the categorization of body expression is affected by facial expressions. This experiment revealed that body expressions were less accurately recognized when the accompanying facial expression was incongruent than when neutral. These findings suggest that the meaning extracted from a body expression is critically dependent on the valence of the associated facial expression.

14.
In Study 1, we examined the moderating impact of alexithymia (i.e., a difficulty identifying and describing feelings to other people and an externally oriented cognitive style) on the automatic processing of affective information. The affective priming paradigm was used, and lower priming effects were observed for high alexithymia scorers when congruent and incongruent pairs involving nonverbal primes (angry faces) and verbal targets were presented. The results held after controlling for participants' negative affectivity. The same effects were replicated in Studies 2 and 3, with trait anxiety and depression entered as additional covariates. In Study 3, no moderating impact of alexithymia was found for verbal-facial pairs, suggesting that the results cannot be explained merely in terms of transcoding limitations for high alexithymia scorers. Overall, the present results suggest that alexithymia could be related to a difficulty in processing, and automatically using, high-arousal emotional information to respond to concomitant behavioural demands.

15.
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and testing preschoolers’ discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum “felt the same” or “felt different.” In the identification task, images were presented individually and participants were asked to label the emotion displayed on the face (e.g., “Does she look happy or sad?”). Results suggest that 3.5-year-olds have the same category boundary as adults: they were more likely to report that an image pair felt “different” when the pair crossed the category boundary. These results suggest that 3.5-year-olds perceive happy and sad emotional facial expressions categorically, as adults do. Categorizing emotional expressions is advantageous for children if it allows them to use social information faster and more efficiently.

16.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information, even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than to interference at earlier (e.g., perceptual) processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or the vocal expression and to ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent than incongruent face–voice pairs in both the Attend Face and the Attend Voice conditions. Moreover, when attending to faces, emotionally congruent bimodal stimuli were processed more efficiently than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

17.
Twenty-eight 4-month-olds' and twenty-two 20-year-olds' attention to object-context relations was investigated using a common eye-movement paradigm. Infants and adults scanned both objects and contexts. Infants showed equivalent preferences for animals and vehicles and for congruent and incongruent object-context relations overall, more fixations of objects in congruent object-context relations, more fixations of contexts in incongruent object-context relations, more fixations of objects than contexts in vehicle scenes, and more fixation shifts in incongruent than congruent vehicle scenes. Adults showed more fixations of congruent than incongruent scenes, vehicles than animals, and objects than contexts; equal fixations of animals and their contexts but more fixations of vehicles than their contexts; and more shifts of fixation when inspecting animals in context than vehicles in context. These findings for location, number, and order of eye movements indicate that object-context relations play a dynamic role in the development and allocation of attention.
