Similar articles
20 similar articles retrieved (search time: 31 ms)
1.
This study used two experiments to examine whether emotional stimuli differing in valence and arousal show dissociable motivational tendencies. Experiment 1 asked participants to rate the motivational tendency (approach/avoidance) evoked by facial expression pictures; participants showed a stronger approach tendency toward low-arousal or positive expressions. Experiment 2 used motion cues to prime a motivational tendency (approach/avoidance) while participants performed an emotion-word valence judgment task, probing the dissociation implicitly. Participants responded faster to emotional stimuli whose motivational tendency matched the primed one. Approaching negative emotion words elicited larger N2 amplitudes than approaching positive emotion words, and receding low-arousal emotion words elicited larger LPC amplitudes than approaching low-arousal emotion words. The results indicate that positive or low-arousal emotional stimuli evoke an approach tendency, whereas negative or high-arousal emotional stimuli evoke an avoidance tendency.

2.
Several studies have investigated the role of featural and configural information when processing facial identity. A lot less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
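The discriminability measure A′ used in this abstract is the standard non-parametric signal-detection index. As a hedged sketch (the abstract does not spell out the formula; this follows Grier's 1971 definition, computed from hit and false-alarm rates):

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Non-parametric discriminability A' (Grier, 1971).

    hit_rate: proportion of targets correctly recognised.
    fa_rate:  proportion of non-targets incorrectly endorsed.
    Returns a value in [0, 1]; 0.5 corresponds to chance performance.
    """
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Below-chance case: mirror the formula around 0.5.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, `a_prime(0.9, 0.1)` is about 0.94, while `a_prime(0.5, 0.5)` returns exactly 0.5 (chance).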

3.
To inform how emotions in speech are implicitly processed and registered in memory, we compared how emotional prosody, emotional semantics, and both cues in tandem prime decisions about conjoined emotional faces. Fifty-two participants rendered facial affect decisions (Pell, 2005a), indicating whether a target face represented an emotion (happiness or sadness) or not (a facial grimace), after passively listening to happy, sad, or neutral prime utterances. Emotional information from primes was conveyed by: (1) prosody only; (2) semantic cues only; or (3) combined prosody and semantic cues. Results indicated that prosody, semantics, and combined prosody–semantic cues facilitate emotional decisions about target faces in an emotion-congruent manner. However, the magnitude of priming did not vary across tasks. Our findings highlight that emotional meanings of prosody and semantic cues are systematically registered during speech processing, but with similar effects on associative knowledge about emotions, which is presumably shared by prosody, semantics, and faces.

4.
Bodies and faces are sensitive cues for recognizing others' emotions. Similar to the early visual processing of facial expressions, the P1 component is more sensitive to negative body expressions such as fear and anger, reflecting fast, unconscious processing of bodily threat information. Emotional bodies and faces also undergo similar configural processing: both evoke comparable N170 components over visual cortex at temporo-occipital sites, although the underlying neural substrates are not identical. During configural encoding, the N170 and the vertex positive potential (VPP) are more pronounced for facial expressions than for body expressions. At later processing stages of facial and body expressions, the early posterior negativity (EPN) reflects attention-directed processing of the visual encoding of faces and bodies, and the subsequent P3 and late positive component (LPC) reflect higher-level cognitive processing of complex emotional information in parieto-frontal cortex. Body expressions additionally elicit an N190 component associated with the extrastriate body area, which is sensitive to both the emotional and the action information of the body. Future research should further explore how action influences emotion perception and the processing mechanisms of dynamic face-body emotion.

6.
Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

7.
Measuring the speed of recognising facially expressed emotions
Faces provide identity- and emotion-related information: basic cues for mastering social interactions. Traditional models of face recognition suggest that, after an initial stage, the processing streams for facial identity and expression diverge. In the present study we extended our previous multivariate investigations of face identity processing abilities to the speed of recognising facially expressed emotions. Analyses are based on a sample of N=151 young adults. First, we established a measurement model with a higher-order factor for the speed of recognising facially expressed emotions (SRE). This model has acceptable fit without specifying emotion-specific relations between indicators. Next, we assessed whether SRE can be reliably distinguished from the speed of recognising facial identity (SRI) and found latent factors for SRE and SRI to be perfectly correlated. In contrast, SRE and SRI were both only moderately related to a latent factor for the speed of recognising non-face stimuli (SRNF). We conclude that the processing of facial stimuli, and not the processing of facially expressed basic emotions, is the critical component of SRE. These findings are at variance with suggestions of separate routes for processing facial identity and emotional facial expressions, and suggest much more commonality between these streams as far as processing speed is concerned.

9.
Within the Appraisal Tendency Framework, it has been established that the (un)certainty appraisals associated with incidental emotions trigger the kind of information processing needed to cope with the situation. We tested the impact of (un)certainty-associated emotions on a sequential task, the Iowa Gambling Task. In this task, intuitive processing is necessary to lead participants to rely on emotional cues arising from previous decisions and to make advantageous decisions. We predicted that certainty-associated emotions would engage participants in intuitive processing, whereas uncertainty-associated emotions would engage them in deliberative processing and lead them to make disadvantageous decisions. As expected, we observed in two distinct experiments that participants induced to feel uncertainty (fear, sadness) decided less advantageously than participants induced to feel certainty (anger, happiness, disgust).

10.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or auditory (Experiment 2) channel and recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals, as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the nonattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted as evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at later response-selection stages.
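The congruence advantage reported in studies like this one is typically quantified as the incongruent-minus-congruent difference in mean response time over correct trials. A minimal sketch (the condition labels and trial format are illustrative assumptions, not the study's actual data structure):

```python
from statistics import mean

def congruency_effect(trials):
    """trials: iterable of (condition, rt_ms, correct) tuples,
    where condition is 'congruent' or 'incongruent'.
    Returns the incongruent-minus-congruent difference in mean RT
    over correct trials (positive = congruence advantage)."""
    rts = {"congruent": [], "incongruent": []}
    for condition, rt_ms, correct in trials:
        if correct:  # error trials are excluded, as is conventional
            rts[condition].append(rt_ms)
    return mean(rts["incongruent"]) - mean(rts["congruent"])
```

For instance, with correct congruent RTs of 500 and 520 ms and correct incongruent RTs of 580 and 600 ms, the function returns an 80 ms congruence advantage.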

11.
Faces provide a complex source of information via invariant (e.g., race, sex and age) and variant (e.g., emotional expressions) cues. At present, it is not clear whether these different cues are processed separately or whether they interact. Using the Garner Paradigm, Experiment 1 confirmed that race, sex, and age cues affected the categorization of faces according to emotional expression whereas emotional expression had no effect on the categorization of faces by sex, age, or race. Experiment 2 used inverted faces and replicated this pattern of asymmetrical interference for race and age cues, but not for sex cues for which no interference on emotional expression categorization was observed. Experiment 3 confirmed this finding with a more stringently matched set of facial stimuli. Overall, this study shows that invariant cues interfere with the processing of emotional expressions. It indicates that the processing of invariant cues, but not of emotional expressions, is obligatory and that it precedes that of emotional expressions.

12.
A number of studies have demonstrated stable individual differences in the cues that generate emotions and other feeling states. These differences are assumed to arise from the cues parents use to identify their children's emotional states: as children learn about their own emotional states, they come to rely on these same cues. To test one implication of this view, the facial expressions of children (N=41) were manipulated and their feelings assessed. Some children reported emotions consistent with their expressions, while others reported emotions appropriate to the situation. In a separate procedure, their mothers were asked to identify the emotional states of children whose expressions were inconsistent with an account of their circumstances. Mothers who paid more attention to their children's expressive behavior had children who were more responsive to their own expressive behavior. In contrast, mothers who were more responsive to situational cues had children whose emotions arose from situational cues as well. The authors would like to thank numerous teachers and administrators of the Worcester Public School system in Worcester, Massachusetts, for their assistance.

13.
The study investigates cross-modal simultaneous processing of emotional tone of voice and emotional facial expression by means of event-related potentials (ERPs), using a wide range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual stimuli (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated-measures ANOVAs showed a positive ERP deflection (P2) with a more posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of the congruous/incongruous condition: it shows a larger peak in response to congruous than to incongruous stimuli. It is suggested that P2 may be a cognitive marker of multisensory processing, independent of the emotional content.
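Peak amplitude and latency measures such as the P2 effect reported here are conventionally extracted as the largest positive deflection of the averaged waveform within a post-stimulus time window. A minimal sketch (the window bounds and sampling rate in the example are illustrative assumptions, not parameters from the study):

```python
import numpy as np

def peak_in_window(erp, srate_hz, t_start_s, t_end_s):
    """Return (amplitude, latency_s) of the maximum of a single-channel
    ERP waveform within [t_start_s, t_end_s], with time 0 at stimulus
    onset (sample index 0)."""
    i0 = int(t_start_s * srate_hz)
    i1 = int(t_end_s * srate_hz)
    segment = erp[i0:i1]
    k = int(np.argmax(segment))  # index of the peak within the window
    return float(segment[k]), (i0 + k) / srate_hz
```

With a waveform sampled at 1000 Hz that peaks at 200 ms, `peak_in_window(erp, 1000, 0.15, 0.30)` returns the amplitude at that peak and a latency of 0.2 s.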

14.
Research has given little attention to the influence of incidental emotions on the Iowa Gambling Task (IGT), in which processing of the emotional cues associated with each decision is necessary to make advantageous decisions. Drawing on cognitive theories of emotions, we tested whether an uncertainty-associated emotion can cancel the positive effect of the hunch period by preventing participants from developing a tendency towards advantageous decisions. Our explanation is that uncertainty appraisals initiate deliberative processing, which, unlike intuitive processing, is ill-suited to processing emotional cues (Kahneman, 2003; Tiedens & Linton, 2001). As expected, uncertainty-associated emotion cancelled the positive effect of the hunch period in the IGT compared to certainty-associated emotion: disgusted participants (a certainty-associated emotion) and sad participants induced to feel certainty developed a stronger tendency towards advantageous decisions than sad participants induced to feel uncertainty. We discuss the importance of the core components that trigger incidental emotions for predicting decision making.

15.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion category and intensity of semantically neutral sentences spoken by male and female speakers in five emotional tones (happy, angry, fearful, sad, and neutral), in order to analyze cross-cultural differences in voice-based emotion perception between Chinese and Polish listeners. Results showed that: (1) Chinese participants exceeded Polish participants in both categorization accuracy and rated intensity, indicating an in-group advantage in vocal emotion perception; (2) all participants recognized emotions in female voices more accurately, and rated them as more intense, than in male voices; (3) in category judgments, fear was recognized more accurately than happiness, sadness, and neutrality, with neutral recognized least accurately; (4) in intensity ratings, fear was rated as more intense than sadness, and happiness received the lowest intensity ratings.

16.
Using event-related potentials (ERPs), this study examined how reward anticipation affects facial emotion recognition. A cue-target paradigm was used, and ERPs were recorded while participants performed an emotion discrimination task on positive, neutral, and negative faces under reward-anticipation and no-reward conditions. Behaviorally, responses were faster under reward anticipation than without it, and faster to emotional than to neutral faces. ERP data showed that reward cues elicited more positive P1, P2, and P300 components than no-reward cues. Target-elicited P1 and N170 amplitudes and the N300 were modulated by reward anticipation, with targets eliciting more positive ERPs under reward anticipation. The P1, N170, and VPP components were not affected by facial emotion, whereas the fronto-central N300 amplitude differentiated emotional (positive and negative) from neutral faces. Importantly, N300 amplitude showed an interaction between reward anticipation and emotion: the positive- and negative-emotion processing effects and the negativity bias were differentially affected by reward anticipation. The positive-emotion effect was unaffected by reward anticipation, whereas the negative-emotion effect and the negativity bias were significantly larger under reward anticipation than without it. These results indicate that reward anticipation modulates facial emotion processing, and that its modulatory effect differs across processing stages: motivational information regulates the allocation of attentional resources and promotes the negativity bias when individuals process facial emotion.

17.
During reading comprehension, readers automatically infer emotions in discourse. Using self-paced reading, this study examined how topic structure affects the accumulation of emotion in discourse under explicit and implicit emotion-processing tasks. In Experiment 1, with an explicit emotion judgment task, topic structure showed no clear effect on emotional accumulation. In Experiment 2, with an implicit emotion comprehension task, when the topic continued, reading times for discourses containing two emotion cues were shorter than for those with only one, indicating that emotional accumulation facilitated processing of the current sentence; when the topic shifted, there was no significant difference, suggesting that readers built the emotional representation of the current sentence within the new structure rather than accumulating it on top of the previous emotion.

18.
In social situations, skillful regulation of emotion and behavior depends on efficiently discerning others' emotions. Identifying factors that promote timely and accurate discernment of facial expressions can therefore advance understanding of social emotion regulation and behavior. The present research examined whether trait mindfulness predicts neural and behavioral markers of early top-down attention to, and efficient discrimination of, socioemotional stimuli. Attention-based event-related potentials (ERPs) and behavioral responses were recorded while participants (N = 62; White; 67% female; Mage = 19.09 years, SD = 2.14 years) completed an emotional go/no-go task involving happy, neutral, and fearful facial expressions. Mindfulness predicted larger (more negative) N100 and N200 ERP amplitudes to both go and no-go stimuli. Mindfulness also predicted faster response time that was not attributable to a speed-accuracy trade-off. Significant relations held after accounting for attentional control or social anxiety. This study adds neurophysiological support for foundational accounts that mindfulness entails moment-to-moment attention with lower tendencies toward habitual patterns of responding. Mindfulness may enhance the quality of social behavior in socioemotional contexts by promoting efficient top-down attention to and discrimination of others' emotions, alongside greater monitoring and inhibition of automatic response tendencies.

19.
The goal-directed control of behaviour critically depends on emotional regulation and constitutes the basis of mental well-being and social interactions. Within a socioemotional setting, it is necessary to prioritize the relevant emotional information effectively over interfering irrelevant emotional information in order to orchestrate cognitive resources and achieve appropriate behaviour. Currently, it remains unclear whether and how different socioemotional stimulus dimensions modulate cognitive control and conflict resolution. Theoretical considerations suggest that interference effects are less detrimental when conflicting emotional information is presented within a "positive socioemotional setting" than within a "negative socioemotional setting." Using event-related potentials (ERPs) and source localization methods, we examined the system-level neurophysiological mechanisms and functional neuroanatomical structures associated with interactive effects of different interfering facial, socioemotional stimulus dimensions on conflict resolution; that is, we show how socioemotional valence modulates cognitive control (conflict processing). The data show that conflicts are stronger and more difficult to resolve in a negative emotional task-relevant setting than in a positive emotional task-relevant setting, where incongruent information barely induced conflicts. The degree of emotional conflict critically depends on the contextual emotional valence (positive or negative) in which the conflict occurs. The neurophysiological data show that these modulations were reflected only in late-stage conflict resolution processes associated with the middle frontal gyrus (MFG) and superior frontal gyrus (SFG). Attentional selection processes and early-stage conflict monitoring do not seem to be modulated by these interactive effects.

20.
The purpose of the present study was to examine the time course of race and expression processing to determine how these cues influence early perceptual as well as explicit categorization judgments. Despite their importance in social perception, little research has examined how social category information and emotional expression are processed over time. Moreover, although models of face processing suggest that the two cues should be processed independently, this has rarely been directly examined. Event-related brain potentials were recorded as participants made race and emotion categorization judgments of Black and White men posing happy, angry, or neutral expressions. Our findings indicate that race and emotion cues are processed independently and in parallel, relatively early in processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号