Similar Literature
1.
Emotional inferences from speech require the integration of verbal and vocal emotional expressions. We asked whether this integration is comparable when listeners are exposed to their native language and when they listen to a language learned later in life. To this end, we presented native and non-native listeners with positive, neutral and negative words that were spoken with a happy, neutral or sad tone of voice. In two separate tasks, participants judged word valence and ignored tone of voice or judged emotional tone of voice and ignored word valence. While native listeners outperformed non-native listeners in the word valence task, performance was comparable in the voice task. More importantly, both native and non-native listeners responded faster and more accurately when verbal and vocal emotional expressions were congruent as compared to when they were incongruent. Given that the size of the latter effect did not differ as a function of language proficiency, one can conclude that the integration of verbal and vocal emotional expressions occurs as readily in one's second language as it does in one's native language.

2.
In order to recognize banter or sarcasm in social interactions, listeners must integrate verbal and vocal emotional expressions. Here, we investigated event-related potential correlates of this integration in Asian listeners. We presented emotional words spoken with congruous or incongruous emotional prosody. When listeners classified word meaning as positive or negative and ignored prosody, incongruous trials elicited a larger late positivity than congruous trials in women but not in men. Sex differences were absent when listeners evaluated the congruence between word meaning and emotional prosody. The similarity of these results to those obtained in Western listeners suggests that sex differences in emotional speech processing depend on attentional focus and may reflect culturally independent mechanisms.

3.
Past research has identified an event-related potential (ERP) marker for vocal emotional encoding and has highlighted vocal-processing differences between male and female listeners. We further investigated this ERP vocal-encoding effect in order to determine whether it predicts voice-related changes in listeners’ memory for verbal interaction content. Additionally, we explored whether sex differences in vocal processing would affect such changes. To these ends, we presented participants with a series of neutral words spoken with a neutral or a sad voice. The participants subsequently encountered these words, together with new words, in a visual word recognition test. In addition to making old/new decisions, the participants rated the emotional valence of each test word. During the encoding of spoken words, sad voices elicited a greater P200 in the ERP than did neutral voices. While the P200 effect was unrelated to a subsequent recognition advantage for test words previously heard with a neutral as compared to a sad voice, the P200 did significantly predict differences between these words in a concurrent late positive ERP component. Additionally, the P200 effect predicted voice-related changes in word valence. As compared to words studied with a neutral voice, words studied with a sad voice were rated more negatively, and this rating difference was larger, the larger the P200 encoding effect was. While some of these results were comparable in male and female participants, the latter group showed a stronger P200 encoding effect and qualitatively different ERP responses during word retrieval. Estrogen measurements suggested the possibility that these sex differences have a genetic basis.
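The brain-behaviour link reported above can be illustrated as a per-participant correlation between the P200 encoding effect (sad minus neutral voice amplitude) and the voice-related shift in valence ratings. A minimal sketch with simulated numbers; the sample size, effect sizes, and pipeline are assumptions, not the authors' data or code:

```python
# Hypothetical sketch: correlate a P200 encoding effect (sad - neutral voice)
# with a voice-related change in word-valence ratings, per participant.
# All values are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24                                 # assumed sample size
p200_effect = rng.normal(1.5, 0.5, n)  # microvolts: P200(sad) - P200(neutral)
# Assumed relation: larger P200 effects go with larger rating differences
rating_diff = 0.4 * p200_effect + rng.normal(0, 0.2, n)

r, p = stats.pearsonr(p200_effect, rating_diff)
print(f"P200 effect vs. rating difference: r = {r:.2f}, p = {p:.3f}")
```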

4.
5.
Verbal framing effects have been widely studied, but little is known about how people react to multiple framing cues in risk communication, where verbal messages are often accompanied by facial and vocal cues. We examined joint and differential effects of verbal, facial, and vocal framing on risk preference in hypothetical monetary and life–death situations. In the multiple framing condition with the factorial design (2 verbal frames × 2 vocal tones × 4 basic facial expressions × 2 task domains), each scenario was presented auditorily with a written message on a photo of the messenger's face. Compared with verbal framing effects resulting in preference reversal, multiple frames made risky choice more consistent and shifted risk preference without reversal. Moreover, a positive tone of voice increased risk‐seeking preference in women. When the valence of facial and vocal cues was incongruent with verbal frame, verbal framing effects were significant. In contrast, when the affect cues were congruent with verbal frame, framing effects disappeared. These results suggest that verbal framing is given higher priority when other affect cues are incongruent. Further analysis revealed that participants were more risk‐averse when positive affect cues (positive tone or facial expressions) were congruently paired with a positive verbal frame whereas participants were more risk‐seeking when positive affect cues were incongruent with the verbal frame. In contrast, for negative affect cues, congruency promoted risk‐seeking tendency whereas incongruency increased risk‐aversion. Overall, the results show that facial and vocal cues interact with verbal framing and significantly affect risk communication.
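For readers who want the design made concrete, the factorial structure described above (2 verbal frames × 2 vocal tones × 4 facial expressions × 2 task domains) yields 32 cells. A small sketch enumerating them; the factor labels are illustrative guesses, not the original stimulus codes:

```python
# Enumerate the 2 x 2 x 4 x 2 factorial design from the abstract.
# Level labels are assumed for illustration.
from itertools import product

verbal_frames = ["gain", "loss"]
vocal_tones = ["positive", "negative"]
faces = ["happy", "sad", "angry", "fearful"]  # assumed four basic expressions
domains = ["monetary", "life-death"]

cells = list(product(verbal_frames, vocal_tones, faces, domains))
print(len(cells))   # 32 conditions
print(cells[0])     # ('gain', 'positive', 'happy', 'monetary')
```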

6.
Two experiments using identical stimuli were run to determine whether the vocal expression of emotion affects the speed with which listeners can identify emotion words. Sentences were spoken in an emotional tone of voice (Happy, Disgusted, or Petrified), or in a Neutral tone of voice. Participants made speeded lexical decisions about the word or pseudoword in sentence-final position. Critical stimuli were emotion words that were either semantically congruent or incongruent with the tone of voice of the sentence. Experiment 1, with randomised presentation of tone of voice, showed no effect of congruence or incongruence. Experiment 2, with blocked presentation of tone of voice, did show such effects: Reaction times for congruent trials were faster than those for baseline trials and incongruent trials. Results are discussed in terms of expectation (e.g., Kitayama, 1990, 1991, 1996) and emotional connotation, and implications for models of word recognition are considered.

7.
It has been the matter of much debate whether perceivers are able to distinguish spontaneous vocal expressions of emotion from posed vocal expressions (e.g., emotion portrayals). In this experiment, we show that such discrimination can be manifested in the autonomic arousal of listeners during implicit processing of vocal emotions. Participants (N = 21, age: 20–55 years) listened to two consecutive blocks of brief voice clips and judged the gender of the speaker in each clip, while we recorded three measures of sympathetic arousal of the autonomic nervous system (skin conductance level, mean arterial blood pressure, pulse rate). Unbeknownst to the listeners, the blocks consisted of two types of emotional speech: spontaneous and posed clips. As predicted, spontaneous clips yielded higher arousal levels than posed clips, suggesting that listeners implicitly distinguished between the two kinds of expression, even in the absence of any requirement to retrieve emotional information from the voice. We discuss the results with regard to theories of emotional contagion and the use of posed stimuli in studies of emotions.
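The critical contrast here is within-subject: arousal during spontaneous versus posed blocks. A hedged sketch of such a paired comparison for skin conductance level, with simulated values (the authors' preprocessing and statistics are not described in the abstract):

```python
# Hypothetical paired comparison of skin conductance level (SCL) between
# spontaneous and posed blocks; simulated data, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 21                                # sample size from the abstract
scl_posed = rng.normal(4.0, 0.8, n)   # microsiemens, mean SCL per listener
scl_spontaneous = scl_posed + rng.normal(0.3, 0.3, n)  # predicted elevation

t, p = stats.ttest_rel(scl_spontaneous, scl_posed)
print(f"spontaneous vs. posed SCL: t({n - 1}) = {t:.2f}, p = {p:.3f}")
```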

8.
A Stroop interference task was used to test the hypothesis that people in different cultures are differentially attuned to verbal content vis-à-vis vocal tone in comprehending emotional words. In Study 1, Americans showed greater difficulty ignoring verbal content than ignoring vocal tone (which reveals an attentional bias for verbal content); but Japanese showed greater difficulty ignoring vocal tone than ignoring verbal content (which reveals a bias for vocal tone). In Study 2, Tagalog-English bilinguals in the Philippines showed an attentional bias for vocal tone regardless of the language used, suggesting that the effect is largely cultural rather than linguistic. Implications for culture-and-cognition research are discussed.
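In this kind of Stroop paradigm, the attentional bias is typically quantified as an interference score: the reaction-time cost of a conflicting to-be-ignored channel, compared across the two instruction conditions. A minimal sketch with invented reaction times:

```python
# Hypothetical interference scores for the emotional Stroop task:
# cost of ignoring verbal content vs. cost of ignoring vocal tone.
import numpy as np

rng = np.random.default_rng(2)
n = 30
# RT cost (incongruent - congruent), in ms, per participant and condition;
# the numbers mimic an attentional bias for verbal content.
cost_ignore_content = rng.normal(60, 20, n)
cost_ignore_tone = rng.normal(25, 20, n)

bias = cost_ignore_content.mean() - cost_ignore_tone.mean()
print(f"attentional bias toward verbal content: {bias:.1f} ms")
```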

9.
Using a cross-modal priming paradigm, this study examined the mutual influence between verbal and facial emotion processing and the role of language differences (native language: Chinese; non-native language: English). In Experiment 1, verbal emotion words served as primes and facial emotions as targets. Facial emotion judgments were better under the Chinese-prime condition than under the English-prime condition, and under positive-emotion priming, verbal emotional stimuli primed facial emotional stimuli. In Experiment 2, facial emotions served as primes and verbal emotion words as targets. Verbal emotion judgments were better under the Chinese-target condition than under the English-target condition, and under positive-emotion priming, facial emotional stimuli primed verbal emotional stimuli. The results indicate that the processing of verbal and facial emotion is mutually influential, but only under positive-emotion priming. In addition, native and non-native languages differ in their emotional functions.

10.
Learning through repetition is a fundamental form and also an effective method of language learning, critical for achieving proficient and automatic language use. Massive repetition priming, a common research paradigm, taps into the dynamic processes involved in repetition learning. Research with this paradigm has so far used only emotionally neutral materials and ignored emotional factors, which seems inappropriate given the well-documented impact of emotion on cognitive processing. The present study used massive repetition priming to investigate whether the emotional valence of learning materials affects implicit language learning. Participants read a list of Chinese words and made speeded perceptual judgments about the spatial configuration of the two characters in a word. Each word was repeated 15 times over the whole learning session. The words were negative, positive, or neutral in emotional valence and were presented in separate blocks. Although similar levels of asymptotic performance were reached across valence conditions, showing comparable total effects of learning, learning of positive words was associated with fewer and shorter plateaus and reached saturation earlier than learning of neutral and negative words. The results show for the first time that the emotional valence of learning materials significantly affects the time course of learning, such that positive materials are learned faster and more efficiently than negative and neutral materials. The study indicates the importance of explicitly considering the role of emotional factors in implicit language learning research.
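The time-course claim above (earlier saturation for positive words) can be operationalized by asking when a learning curve stops improving by more than some threshold. A hedged sketch with invented curves; the functional form, learning rates, and threshold are assumptions:

```python
# Hypothetical sketch: when does a repetition-priming curve saturate?
# Saturation = first repetition after which the per-repetition RT speed-up
# stays below a threshold. Curve shapes and threshold are invented.
import numpy as np

def saturation_point(rt_by_repetition, threshold_ms=5.0):
    """First index from which the remaining speed-ups all stay below threshold."""
    gains = -np.diff(rt_by_repetition)   # positive values = speed-up
    below = gains < threshold_ms
    for i in range(len(below)):
        if below[i:].all():
            return i
    return None

reps = np.arange(15)                             # 15 repetitions, as in the study
rt_positive = 700 * np.exp(-0.45 * reps) + 450   # assumed faster learning
rt_neutral = 700 * np.exp(-0.28 * reps) + 450

print("positive saturates at repetition", saturation_point(rt_positive))  # earlier
print("neutral saturates at repetition", saturation_point(rt_neutral))
```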

11.
Under a noisy “cocktail-party” listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotional conditioning of a target voice that has none of the typical acoustical features of emotion (i.e., an emotionally neutral voice) can be used by listeners to enhance target-speech recognition under speech-on-speech masking. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound with a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker’s voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, electrodermal (skin conductance) responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase in listening effort when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker’s voice does not change the acoustical parameters of the target-speech stimuli, but that the emotionally conditioned vocal features can be used as cues for unmasking target speech.

12.
The study investigated the simultaneous cross-modal processing of emotional tone of voice and emotional facial expression with event-related potentials (ERPs), using a wide range of emotions (happiness, sadness, fear, anger, surprise, and disgust). Auditory emotional stimuli (a neutral word pronounced in an affective tone) and visual stimuli (emotional facial expressions) were matched in congruous (the same emotion in face and voice) and incongruous (different emotions) pairs. Subjects (N=31) were required to watch and listen to the stimuli in order to comprehend them. Repeated-measures ANOVAs revealed a positive ERP deflection (P2) with a posterior distribution. This P2 effect may represent a marker of cross-modal integration, modulated as a function of congruence: the peak was larger in response to congruous stimuli than to incongruous ones. It is suggested that the P2 can serve as a cognitive marker of multisensory processing, independent of the emotional content.

13.
The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to its affective meaning, and naming latencies were collected. Across experiments, tone of voice was either blocked or mixed with respect to emotional meaning. The results suggest that emotional tone of voice facilitated linguistic processing of emotional words in an emotion-congruent fashion. These findings suggest that information about emotional tone is used during the processing of linguistic content, influencing the recognition and naming of spoken words in an emotion-congruent manner.

14.
Large and reliable laterality effects were found with a dichotic target detection task in a recent experiment using word stimuli pronounced with an emotional component. The present study tested the hypothesis that the magnitude and reliability of the laterality effects would increase with the removal of the emotional component and variations in word frequency. Thirty-two participants completed both a dichotic syllable detection task and a dichotic word detection task. In both tasks, stimuli were pronounced in a neutral tone of voice. Each task was completed twice to allow the estimation of test-retest reliability. Results failed to confirm the hypothesis, since they were generally similar to those obtained in the previous study. A significant right ear advantage (REA) was found only with the word task. Although no ear advantage was found for the syllable task, it showed somewhat better reliability than the word task. The present findings suggest that including an emotional component does not reduce the reliability or magnitude of auditory laterality effects. In fact, the emotional component may have forced participants to focus on the language aspects of the stimuli; its removal might partly account for the reduced reliability for words and the absence of an REA for syllables. Motivational factors inherent to the within-subject design used here are also discussed.
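Ear advantages in dichotic tasks are commonly summarized with a laterality index, where positive values indicate a right ear advantage. A short sketch of the standard (R - L)/(R + L) index (hit counts invented; not necessarily the exact measure used in this study):

```python
# Standard laterality index for dichotic detection scores:
# LI = 100 * (right - left) / (right + left); positive = right ear advantage.
def laterality_index(right_hits: int, left_hits: int) -> float:
    return 100.0 * (right_hits - left_hits) / (right_hits + left_hits)

print(laterality_index(34, 26))  # e.g., word task: LI = +13.3, a clear REA
print(laterality_index(30, 29))  # e.g., syllable task: LI near zero
```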

15.
Lateralization of verbal and affective processes was investigated in P-dyslexic, L-dyslexic and normal children with the aid of a dichotic listening task. The children were asked to detect either the presence of a specific target word or of words spoken in a specific emotional tone of voice. The number of correct responses and reaction time were recorded. For monitoring words, an overall right ear advantage was obtained. However, further tests showed no significant ear advantage for P-types, and a right ear advantage for L-types and controls. For emotions, an overall left ear advantage was obtained that was less robust than the word-effect. The results of the word task are in support of previous findings concerning differences between P- and L-dyslexics in verbal processing according to the balance model of dyslexia. However, dyslexic children do not differ from controls on processing of emotional prosody although certain task variables may have affected this result.

16.
Memory is better for emotional words than for neutral words, but the conditions contributing to emotional memory improvement are not entirely understood. Elsewhere, it has been observed that retrieval of a word is easier when its attributes are congruent with a property assessed during an earlier judgment task. The present study examined whether the affective assessment of a word matters for its remembrance. Two experiments were run: one in which only valence assessment was performed, and another in which valence assessment was combined with a running recognition task for list words. In both experiments, some participants judged whether each word in a randomized list was negative (negative monitoring), and others judged whether each was positive (positive monitoring). We then tested their explicit memory for the words via both free recall and delayed recognition. Both experiments revealed an affective congruence effect, such that negative words were more likely to be recalled and recognized after negative monitoring, whereas positive words likewise benefited from positive monitoring. Memory for neutral words was better after negative monitoring than after positive monitoring. Thus, memory for both emotional and neutral words is contingent on one's affective orientation during encoding.

17.
We examined whether facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of sung intervals. In the dual-task condition, participants judged the emotional connotation of intervals while performing a secondary task. Judgements were influenced by melodic cues and facial expressions and the effects were undiminished by the secondary task. Experiment 2 involved identical conditions but participants were instructed to base judgements on auditory information alone. Again, facial expressions influenced judgements and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are automatically and preattentively registered and integrated with auditory cues.

18.
Using part-of-speech judgment and priming tasks, this study examined the effects of the degree of embodied simulation and of word concreteness on the embodied representation of word valence. The results showed that in spatial representations involving a low degree of embodied simulation, only sensitive negative words showed a compatibility effect between valence and near/far distance; in approach-avoidance representations involving a high degree of simulation, both positive and negative words showed compatibility effects with approach-avoidance actions, and the link between valence and approach-avoidance representation was stronger for concrete words than for abstract words.

19.
郭晶晶  吕锦程 《心理科学》2014,37(6):1296-1301
This study examined, in two experiments, how verbal labeling regulates individuals' emotional experience. Experiment 1 examined the effect of neutral labels on different types of emotional experience: compared with Korean-character labels, neutral two-character word labels significantly reduced the intensity of participants' negative emotional experience of negative pictures. Experiment 2 further examined the labeling effect of words with different emotional connotations: compared with neutral labels, negative or positive labels led to weaker negative emotional experience of negative pictures. The results indicate that verbal labeling significantly down-regulates negative emotional experience and that the labeling effect depends on access to lexical-semantic information.
