Similar Articles
20 similar articles found.
1.
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants’ emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants’ facial displays and eye-movement tracking to examine infants’ looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match the emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model’s negative expressions, and they looked more at the areas of the face recruiting the facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period in which the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.

2.
Human faces are among the most important visual stimuli that we encounter at all ages. This importance partly stems from the face as a conveyer of information on the emotional state of other individuals. Previous research has demonstrated specific scanning patterns in response to threat-related compared to non-threat-related emotional expressions. This study investigated how visual scanning patterns toward faces which display different emotional expressions develop during infancy. The visual scanning patterns of 4-month-old and 7-month-old infants and adults when looking at threat-related (i.e., angry and fearful) versus non-threat-related (i.e., happy, sad, and neutral) emotional faces were examined. We found that infants as well as adults displayed an avoidant looking pattern in response to threat-related emotional expressions, with reduced dwell times and relatively fewer fixations to the inner features of the face. In addition, adults showed a pattern of eye contact avoidance when looking at threat-related emotional expressions that was not yet present in infants. Thus, whereas a general avoidant reaction to threat-related facial expressions appears to be present from very early in life, the avoidance of eye contact might be a learned response toward others' anger and fear that emerges later during development.

3.
谷莉 (Gu Li) & 白学军 (Bai Xuejun). 心理科学 (Psychological Science), 2014, 37(1): 101-105
This study recruited 45 children aged 3 to 5 years and 39 undergraduates as participants. The stimuli were photographs of five facial expressions: fear, anger, sadness, surprise, and happiness. A Tobii eye tracker recorded participants' eye movements while they viewed the expression images. Results showed that (1) adults preferred happy expressions, with significantly longer fixation times and more fixations on happy expressions than children, and (2) adults preferentially fixated the eyes, whereas children preferentially fixated the mouth. These results indicate that the development of attentional preference for facial expressions is socially dependent, trending toward a preference for positive emotions, and that this developmental change is related to attentional preferences for particular regions of the face.

4.
Detection of emotional facial expressions has been shown to be more efficient than detection of neutral expressions. However, it remains unclear whether this effect is attributable to visual or emotional factors. To investigate this issue, we conducted two experiments using the visual search paradigm with photographic stimuli. We included a single target facial expression of anger or happiness in presentations of crowds of neutral facial expressions. The anti-expressions of anger and happiness were also presented. Although anti-expressions produced changes in visual features comparable to those of the emotional facial expressions, they expressed relatively neutral emotions. The results consistently showed that reaction times (RTs) for detecting emotional facial expressions (both anger and happiness) were shorter than those for detecting anti-expressions. The RTs for detecting the expressions were negatively related to experienced emotional arousal. These results suggest that efficient detection of emotional facial expressions is not attributable to their visual characteristics but rather to their emotional significance.

5.
Attentional capture refers to the phenomenon in which task-irrelevant stimuli involuntarily attract attention. Experiment 1 used a visual search task to examine the level and mechanism of attentional capture by emotional faces irrelevant to the main task; Experiment 2 further explored the influence of temporal task demands on attentional capture by irrelevant emotional faces. The results showed that, compared with other emotional faces, angry faces captured more attention, an effect influenced by holistic emotional processing. Temporal task demands affected attentional selection of the target stimuli, but the anger-superiority effect was unaffected by temporal task demands and may therefore reflect a relatively automatic process.

6.
The processing of several important aspects of a human face was investigated in a single patient (LZ), who had a large infarct of the right hemisphere involving the parietal and temporal lobes, with extensions into the frontal region. LZ showed selective problems with recognizing emotional expressions, whereas she was flawless in recognizing gender, familiarity, and identity. She was very poor in recognizing negative facial expressions (fear, disgust, anger, sadness), but scored as well as the controls on the positive facial expression of happiness. However, in two experiments using both static and dynamic face stimuli, we showed that LZ also did not have a proper notion of what a facial expression of happiness looks like, and could not adequately apply this label. We conclude that the proper recognition of both negative and positive facial expressions relies on the right hemisphere, and that the left hemisphere produces a default state resulting in a bias towards evaluating expressions as happy. We discuss the implications of the current findings for the main models that aim to explain hemispheric specializations for processing of positive and negative emotions.

7.
The study investigates cross-modal simultaneous processing of emotional tone of voice and emotional facial expression by event-related potentials (ERPs), using a wide range of different emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual patterns (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated-measures ANOVAs revealed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition: it showed a larger peak in response to congruous stimuli than to incongruous ones. It is suggested that the P2 can be a cognitive marker of multisensory processing, independent of the emotional content.

8.
This investigation examined whether impairment in configural processing could explain deficits in face emotion recognition in people with Parkinson’s disease (PD). Stimuli from the Radboud Faces Database were used to compare recognition of four negative emotion expressions by older adults with PD (n = 16) and matched controls (n = 17). Participants were tasked with categorizing emotional expressions from upright and inverted whole faces and facial composites; it is difficult to derive configural information from these two types of stimuli, so featural processing should play a larger than usual role in accurate recognition of emotional expressions. We found that the PD group was impaired relative to controls in recognizing angry, disgusted, and fearful expressions in upright faces. Consistent with a configural processing deficit, participants with PD showed no composite effect when attempting to identify facial expressions of anger, disgust, and fear. A face inversion effect, however, was observed in the performance of all participants in both the whole-faces and facial-composites tasks. These findings can be explained in terms of a configural processing deficit if it is assumed that the disruption caused by facial composites was specific to configural processing, whereas inversion reduced performance by making it difficult to derive both featural and configural information from faces.

9.
The ability to quickly perceive threatening facial expressions allows one to detect emotional states and respond appropriately. The anger superiority hypothesis predicts that angry faces capture attention faster than happy faces. Previous studies have used photographic (Hansen & Hansen, 1988) and schematic face images (e.g., Eastwood, Smilek, & Merikle, 2001; Öhman, Lundqvist, & Esteves, 2001) in studying the anger superiority effect, but specific confounds due to the construction of stimuli have led to conflicting findings. In the current study, participants performed a visual search for either angry or happy target faces among crowds of novel, perceptually intermediate morph distractors. A threat-detection advantage was evident, with participants showing faster reaction times and greater accuracy in detecting angry over happy faces. Search slopes, however, did not significantly differ. Results suggest a threat-detection advantage mediated by serial rather than preattentive processing.

10.
Brain and Cognition, 2007, 63(3): 261-266
Right hemispheric dominance in unconscious emotional processing has been suggested, but remains controversial. This issue was investigated using the subliminal affective priming paradigm combined with unilateral visual presentation in 40 normal subjects. In either the left or the right visual field, angry facial expressions, happy facial expressions, or plain gray images were briefly presented as negative, positive, and control primes, followed by a mosaic mask. Then nonsense target ideographs were presented, and the subjects evaluated their partiality toward the targets. When the stimuli were presented in the left, but not the right, visual field, the negative primes reduced the subjects’ liking for the targets, relative to the case of the positive or control primes. These results provided behavioral evidence supporting the hypothesis that the right hemisphere is dominant for unconscious negative emotional processing.

11.
Participants (N = 216) were administered a differential implicit learning task during which they were trained and tested on 3 maximally distinct 2nd-order visuomotor sequences, with sequence color serving as discriminative stimulus. During training, 1 sequence each was followed by an emotional face, a neutral face, and no face, using backward masking. Emotion (joy, surprise, anger), face gender, and exposure duration (12 ms, 209 ms) were varied between participants; implicit motives were assessed with a picture-story exercise. For power-motivated individuals, low-dominance facial expressions enhanced and high-dominance expressions impaired learning. For affiliation-motivated individuals, learning was impaired in the context of hostile faces. These findings did not depend on explicit learning of fixed sequences or on awareness of sequence-face contingencies.

12.
Although positive and negative images enhance the visual processing of young adults, recent work suggests that a life-span shift in emotion processing goals may lead older adults to avoid negative images. To examine this tendency for older adults to regulate their intake of negative emotional information, the current study investigated age-related differences in the perceptual boost received by probes appearing over facial expressions of emotion. Visually-evoked event-related potentials were recorded from the scalp over cortical regions associated with visual processing as a probe appeared over facial expressions depicting anger, sadness, happiness, or no emotion. The activity of the visual system in response to each probe was operationalized in terms of the P1 component of the event-related potentials evoked by the probe. For young adults, the visual system was more active (i.e., greater P1 amplitude) when the probes appeared over any of the emotional facial expressions. However, for older adults, the visual system displayed reduced activity when the probe appeared over angry facial expressions.

13.
The present study investigated whether facial expressions modulate visual attention in 7-month-old infants. First, infants' looking duration to individually presented fearful, happy, and novel facial expressions was compared to looking duration to a control stimulus (scrambled face). The face with a novel expression was included to examine the hypothesis that the earlier findings of greater allocation of attention to fearful as compared to happy faces could be due to the novelty of fearful faces in infants' rearing environment. The infants looked longer at the fearful face than at the control stimulus, whereas no such difference was found between the other expressions and the control stimulus. Second, a gap/overlap paradigm was used to determine whether facial expressions affect the infants' ability to disengage their fixation from a centrally presented face and shift attention to a peripheral target. It was found that infants disengaged their fixation significantly less frequently from fearful faces than from control stimuli and happy faces. Novel facial expressions did not have a similar effect on attention disengagement. Thus, it seems that adult-like modulation of the disengagement of attention by threat-related stimuli can be observed early in life, and that the influence of emotionally salient (fearful) faces on visual attention is not simply attributable to the novelty of these expressions in infants' rearing environment.

14.
This study examined the priming effects of different types of cue expressions during the perception and imagery of emotional faces, with attention to differences in how easy different expression types are to imagine. Happy, angry, and neutral expressions of 20 actors from the NimStim database served as priming stimuli. Each prime was followed either by a picture of a different expression from the same actor, or by a color cue prompting participants to imagine a given expression; in both cases, participants simultaneously judged the expression type. Priming effects were found in both the perception and the imagery tasks: a preceding cue face primed subsequently presented faces of the same valence and inhibited faces of the opposite valence and neutral faces. After balancing for the priming effects that may exist across face types, positive, negative, and neutral expressions proved equally easy to imagine.

15.
A rapid response to a threatening face in a crowd is important to successfully interact in social environments. Visual search tasks have been employed to determine whether there is a processing advantage for detecting an angry face in a crowd, compared to a happy face. The empirical findings supporting the “anger superiority effect” (ASE), however, have been criticized on the basis of possible low-level visual confounds and because of the limited ecological validity of the stimuli. Moreover, a “happiness superiority effect” is usually found with more realistic stimuli. In the present study, we tested the ASE by using dynamic (and static) images of realistic human faces, with validated emotional expressions having similar intensities, after controlling for bottom-up visual saliency and the amount of image motion. In five experiments, we found strong evidence for an ASE when using dynamic displays of facial expressions, but not when the emotions were expressed by static face images.

16.
Functional neuroimaging and lesion-based neuropsychological experiments have demonstrated the human amygdala's role in recognition of certain emotions signaled by sensory stimuli, notably, fear and anger in facial expressions. We examined recognition of two emotional dimensions, arousal and valence, in a rare subject with complete, bilateral damage restricted to the amygdala. Recognition of emotional arousal was impaired for facial expressions, words, and sentences that depicted unpleasant emotions, especially in regard to fear and anger. However, recognition of emotional valence was normal. The findings suggest that the amygdala plays a critical role in knowledge concerning the arousal of negative emotions, a function that may explain the impaired recognition of fear and anger in patients with bilateral amygdala damage, and one that is consistent with the amygdala's role in processing stimuli related to threat and danger.

17.
According to cognitive and neural theories of emotion, attentional processing of innate threat stimuli, such as angry facial expressions, is prioritised over neutral stimuli. To test this hypothesis, the present study used a modified version of the rapid serial visual presentation (RSVP) paradigm to investigate the effect of emotional face stimuli on the attentional blink (AB). The target stimuli were schematic faces which depicted threatening (angry), positive or neutral facial expressions. Results showed that performance accuracy was enhanced (i.e., the AB was reduced) on trials in which the second target was an angry face, rather than a neutral face. Results extend previous research by demonstrating that angry faces reduce the AB, and that this effect is found for schematic facial expressions. These findings further support the proposal that, when there is competition for attentional resources, threat stimuli are given higher priority in processing compared with non-threatening stimuli.

18.
Maternal postpartum depression (PPD) is a risk for disruption of mother–infant interaction. Infants of depressed mothers have been found to display less positive, more negative, and neutral affect. Other studies have found that infants of mothers with PPD inhibit both positive and negative affect. In a sample of 28 infants of mothers with PPD and 52 infants of nonclinical mothers, we examined the role of PPD diagnosis and symptoms for infants’ emotional variability, measured as facial expressions, vocal protest, and gaze using microanalysis, during a mother–infant face-to-face interaction. PPD symptoms and diagnosis were associated with (a) infants displaying fewer high-negative, but more neutral/interest, facial affect events, and (b) fewer gaze-off events. PPD diagnosis, but not symptoms, was associated with less infant vocal protest. The total duration (in seconds) of infant facial affective displays and gaze-off was not related to PPD diagnosis or symptoms, suggesting that when infants of depressed mothers display high-negative facial affect or gaze off, these expressions are more sustained, indicating lower infant ability to calm down and re-engage, interpreted as a disturbance in self-regulation. The findings highlight the importance of examining not only durations but also frequencies, as the latter may inform infant emotional variability.

19.
The important ability to discriminate facial expressions of emotion develops early in human ontogeny. In the present study, 7-month-old infants’ event-related potentials (ERPs) in response to angry and fearful emotional expressions were measured. The angry face evoked a larger negative component (Nc) at fronto-central leads between 300 and 600 ms after stimulus onset when compared to the amplitude of the Nc to the fearful face. Furthermore, over posterior channels, the angry expression elicited a N290 that was larger in amplitude and a P400 that was smaller in amplitude than for the fearful expression. This is the first study to show that infants’ ability to discriminate angry and fearful facial expressions can be measured at the electrophysiological level. These data suggest that 7-month-olds allocated more attentional resources to the angry face, as indexed by the Nc. One implication of this result is that the social signal values of the two expressions were perceived differentially, not merely as “negative”. Furthermore, it is possible that the angry expression might have been more arousing and discomforting for the infant compared with the fearful expression.

20.
Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search, even for face stimuli that avoid obvious expression-related perceptual confounds and are drawn from a single database.
