Similar Articles
20 similar articles were retrieved.
1.
In noisy settings where several people are talking at once, how do listeners use perceptual cues to enhance recognition of target speech? To address this "cocktail-party problem," researchers have applied brain-imaging methods to examine the underlying brain-network mechanisms. Studies show that listeners' use of unmasking cues associated with a particular feature of the target speech not only facilitates short-latency responses of the auditory cortex to the target-speech signal, but also strengthens the activity of, and functional connectivity among, four classes of cue-specific and cue-nonspecific brain regions serving attention, speech expression, inhibitory function, and speech-motor processing. Together these constitute the brain-network basis on which perceptual cues promote the formation of an intact perceptual object for target speech under informational masking.

2.
Deception is a common social phenomenon, and detecting deception by observing others' behavior is an important human ability. Research shows that people's deception-detection accuracy is only slightly above chance. This paper focuses on research on deception detection based on behavioral cues. It first reviews detection accuracy rates; it then draws on Brunswik's lens model to analyze the factors affecting accuracy from two aspects, the validity of deception cues and the utilization of those cues; on this basis it discusses ways to improve detection accuracy. Finally, possible directions for future research are outlined.

3.
Spatiotemporal Characteristics of Exogenous Visual Selective Attention
杨华海, 赵晨, 张侃. 《心理学报》, 1998, 31(2): 136-142.
Using a peripheral abrupt-onset stimulus, uninformative about the target's location, as a cue, this study examined its effect on discrimination reaction times for visual targets at different eccentricities under different stimulus onset asynchronies (SOAs), in order to characterize the processing properties of exogenous visual selective attention and to test the differing predictions of three models of the spatial distribution of attention regarding its spatiotemporal characteristics. The results showed that: (1) peripheral abrupt-onset stimuli attract attention involuntarily, consistent with automatic processing; (2) attention shifts proceed as a continuous movement of the attentional focus, supporting the spotlight model rather than the zoom-lens model or a static spatial-gradient model; the speed of attentional movement was also estimated.

4.
According to referentialist semantics, the semantic contribution of a proper name consists solely in the individual object it refers to. On the referentialist premise, and according to the four-element model of language understanding, a hearer's understanding of a singular sentence containing a proper name consists in grasping the singular proposition, expressed by the speaker, that has the individual referred to by the name as a constituent. This paper argues that whenever the speaker and the hearer disagree about whether a proper name refers at all, a general puzzle about understanding singular sentences arises. To resolve the puzzle, the paper draws on the notion of pretense to propose a three-element model of language understanding, defends the model by answering possible objections, and points out directions for future work.

5.
To test whether the effect of positive emotion on interpersonal trust is better described by the heuristic-reliance model than by the mood-congruence model, three experiments were conducted with 102 undergraduate participants, manipulating trust cues and using trust games. The results showed: (1) positive emotion does not simply increase trust; its effect on trust decisions is moderated by trust cues: compared with neutral emotion, participants in a positive emotional state showed more trust when accessible schemas and cues promoted trust, and less trust when accessible schemas and cues promoted distrust. (2) Experiment 3 confirmed that the effect of positive emotion on interpersonal trust also depends on the interaction context: as the heuristic-reliance model predicts, the prior schema that out-groups are untrustworthy led positive-emotion participants to trust out-group members less than neutral-emotion participants did.

6.
吴婷, 郑涌. 《心理科学进展》, 2019, 27(3): 533-543.
The lens model emphasizes that cue validity is a key condition for accurate personality judgment. Existing research shows that textual information, voice content, face images, video clips depicting different situations, and the verbal and nonverbal information involved in face-to-face interaction all play important roles in personality judgment. In online contexts, conventional text and video information can likewise validly reflect individuals' personality traits, and the validity of special cues closely related to personality, such as the use of internet slang and emoticons, status updates, and "likes," also merits deeper investigation. Future research on personality-judgment cues should strengthen real-life settings to improve ecological validity, compare different cues against one another to identify the conditions under which each is valid, and further examine the validity of individuals' behavioral cues in online contexts.

7.
Mechanistic Models of Music-Induced Emotion
The study of the relationship between music and emotion is moving from behavioral description to early exploration of cognitive and neural mechanisms. The cue-consistency model, the musical-expectancy model, entrainment theory, and the multiple-mechanisms model explain the process of music-induced emotion from the perspectives of musical cues, listener cognition, music-listener interaction, and multi-mechanism integration, respectively. Current disagreements center on three questions: (1) whether music-induced emotion must be cognitively mediated; (2) whether the induction process is domain-general or domain-specific; and (3) whether the mechanism is multiple or unitary. The article proposes solutions such as settling on homogeneous concepts, using multi-indicator measurement, and considering the relations among mechanisms, and outlines research trends in this field.

8.
Using three types of emotional faces as materials, this study applied the event-related potential (ERP) method to examine how expression cues interact with gaze shifts in attentional orienting and how the two jointly influence observers' responses. The results showed: (1) a cueing effect appeared for all three facial expressions at both SOAs; (2) at the longer SOA, significant differences in the magnitude of the gaze-cueing effect emerged between neutral and fearful faces and between happy and fearful faces; (3) an effect of facial expression appeared on the P1 component evoked by the expression cue; and (4) the target-evoked P1 and N1 indicated the existence of the gaze-cueing effect and its interaction with expression. Conclusion: expression cues first influence observers' orienting responses, followed by gaze cues and their interaction with expression cues.

9.
Perception of the Distribution Pattern of Sentence Stress
杨玉芳. 《心理学报》, 1996, 29(3): 225-231.
With semantic and syntactic-structure information excluded, statistical methods were used to study listeners' ability to perceive the distribution pattern of sentence stress from local prosodic cues; based on the perceptual patterns, the paper discusses the classification of stress in Chinese, changes of word stress in connected speech, and the role of stress in sentence organization and semantic expression.

10.
"Audience Design" in Referential Communication
"Audience design" has long been a focal topic in research on referential communication. In referential communication, interlocutors typically adjust their behavior based on assessments of the information they share with their partners, but when and how these adjustments occur remains debated. This review surveys existing research perspectives and progress on audience design, summarizing views from the referential-convention perspective, the memory-and-attention perspective, and the communicative-context perspective. Future research should extend existing designs to probe in depth how audience design is formed, acquired, and develops over time, and how it interacts with other constraints on referential communication; behavioral evidence should be combined with eye-tracking and brain-imaging evidence to reveal the behavioral characteristics and cognitive mechanisms of the audience-design process.

11.
This study describes the utilization of acoustic cues in communication of emotions in music performance. Three professional guitarists were asked to perform 3 short melodies to communicate anger, sadness, happiness, and fear to listeners. The resulting performances were analyzed with respect to 5 acoustic cues and judged by 30 listeners on adjective scales. Multiple regression analysis was applied to the relationships between (a) the performer's intention and the cues and (b) the listeners' judgments and the cues. The analyses of performers and listeners were related using C. J. Hursch, K. R. Hammond, and J. L. Hursch's (1964) lens model equation. The results indicated that (a) performers were successful at communicating emotions to listeners, (b) performers' cue utilization was well matched to listeners' cue utilization, and (c) cue utilization was more consistent across different melodies than across different performers. Because of the redundancy of the cues, 2 performers could communicate equally well despite differences in cue utilization.
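For reference, the lens model equation of Hursch, Hammond, and Hursch (1964) cited above decomposes achievement (here, the correlation r_a between the performer's intended emotion and listeners' judgments) in terms of how well each side uses the cues. This is the standard textbook form of the equation, not a formula reproduced from the abstract itself:

\[ r_a = G\,R_e R_s + C\sqrt{(1 - R_e^2)(1 - R_s^2)} \]

where R_e is the multiple correlation predicting the performer's intention from the acoustic cues (ecological validity), R_s is the multiple correlation predicting listeners' judgments from the same cues (consistency of cue utilization), G is the correlation between the predictions of the two linear models (matching), and C is the correlation between their residuals.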

12.
This study examined how 42 university students (21 Chinese, 21 Polish) judged the emotion category and intensity of semantically neutral sentences spoken by male and female voices in five emotional tones (happy, angry, fearful, sad, and neutral), in order to analyze differences in voice-based emotion perception across Chinese and Polish cultural backgrounds. The results showed: (1) Chinese participants were more accurate in judging vocal emotion categories and gave higher emotional-intensity ratings than Polish participants, indicating an in-group advantage in vocal emotion perception; (2) all participants identified emotion categories more accurately, and rated intensity higher, for female voices than for male voices; (3) in category judgments, fear was recognized more accurately than happiness, sadness, and neutrality, with neutral emotion recognized least accurately; and (4) in intensity ratings, fear was rated as more intense than sadness, and happiness was rated least intense.

13.
Remembering is impacted by several factors of retrieval, including the emotional content of a memory cue. Here we tested how musical retrieval cues that differed on two dimensions of emotion—valence (positive and negative) and arousal (high and low)—impacted the following aspects of autobiographical memory recall: the response time to access a past personal event, the experience of remembering (ratings of memory vividness), the emotional content of a cued memory (ratings of event arousal and valence), and the type of event recalled (ratings of event energy, socialness, and uniqueness). We further explored how cue presentation affected autobiographical memory retrieval by administering cues of similar arousal and valence levels in a blocked fashion to one half of the tested participants, and randomly to the other half. We report three main findings. First, memories were accessed most quickly in response to musical cues that were highly arousing and positive in emotion. Second, we observed a relation between a cue and the elicited memory’s emotional valence but not arousal; however, both the cue valence and arousal related to the nature of the recalled event. Specifically, high cue arousal led to lower memory vividness and uniqueness ratings, but cues with both high arousal and positive valence were associated with memories rated as more social and energetic. Finally, cue presentation impacted both how quickly and specifically memories were accessed and how cue valence affected the memory vividness ratings. The implications of these findings for views of how emotion directs the access to memories and the experience of remembering are discussed.

14.
Vocal Expression and Perception of Emotion
Speech is an acoustically rich signal that provides considerable personal information about talkers. The expression of emotions in speech sounds and corresponding abilities to perceive such emotions are both fundamental aspects of human communication. Findings from studies seeking to characterize the acoustic properties of emotional speech indicate that speech acoustics provide an external cue to the level of nonspecific arousal associated with emotional processes and, to a lesser extent, the relative pleasantness of experienced emotions. Outcomes from perceptual tests show that listeners are able to accurately judge emotions from speech at rates far greater than expected by chance. More detailed characterizations of these production and perception aspects of vocal communication will necessarily involve knowledge about differences among talkers, such as those components of speech that provide comparatively stable cues to individual talkers' identities.

15.
Under a noisy “cocktail-party” listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker’s voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker’s voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

16.
Entrainment of walking to rhythmic auditory cues (e.g., metronome and/or music) improves gait in people with Parkinson's disease (PD). Studies on healthy individuals indicate that entrainment to pleasant musical rhythm can be more beneficial for gait facilitation than entrainment to isochronous rhythm, potentially as a function of emotional/motivational responses to music and their associated influence on motor function. Here, we sought to investigate how emotional attributes of music and isochronous cues influence stride and arm swing amplitude in people with PD. A within-subjects experimental trial was completed with persons with PD serving as their own controls. Twenty-three individuals with PD walked to the cue of self-chosen pleasant music, pitch-distorted unpleasant music, and an emotionally neutral isochronous drumbeat. All music cues were tempo-matched to individual walking pace at baseline. Greater gait velocity, stride length, arm swing peak velocity and arm swing range of motion (RoM) were found when patients walked to pleasant music cues compared to baseline, walking to unpleasant music, and walking to isochronous cues. Cued walking in general marginally increased variability of stride-to-stride time and length compared with uncued walking. Enhanced stride and arm swing amplitude were most strongly associated with increases in perceived enjoyment and pleasant musical emotions such as power, tenderness, and joyful activation. Musical pleasure contributes to improvement of stride and arm swing amplitude in people with PD, independent of perceived familiarity with music, cognitive demands of music listening, and beat salience. Our findings aid in understanding the role of musical pleasure in invigorating gait in PD, and inform novel approaches for restoring or compensating impaired motor circuits.

17.
Can listeners distinguish unfamiliar performers playing the same piece on the same instrument? Professional performers recorded two expressive and two inexpressive interpretations of a short organ piece. Nonmusicians and musicians listened to these recordings and grouped together excerpts they thought had been played by the same performer. Both musicians and nonmusicians performed significantly above chance. Expressive interpretations were sorted more accurately than inexpressive ones, indicating that musical individuality is communicated more efficiently through expressive performances. Furthermore, individual performers' consistency and distinctiveness with respect to expressive patterns were shown to be excellent predictors of categorisation accuracy. Categorisation accuracy was superior for prize-winning performers compared to non-winners, suggesting a link between performer competence and the communication of musical individuality. Finally, results indicate that temporal information is sufficient to enable performer recognition, a finding that has broader implications for research on the detection of identity cues.

18.
The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease (PD), with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical ‘pseudo-utterances’ were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

19.
Previous research has shown that listeners follow speaker gaze to mentioned objects in a shared environment to ground referring expressions, both for human and robot speakers. What is less clear is whether the benefit of speaker gaze is due to the inference of referential intentions (Staudte and Crocker, 2011) or simply the (reflexive) shifts in visual attention. That is, is gaze special in how it affects simultaneous utterance comprehension? In four eye-tracking studies we directly contrast speech-aligned speaker gaze of a virtual agent with a non-gaze visual cue (arrow). Our findings show that both cues similarly direct listeners’ attention and that listeners can benefit in utterance comprehension from both cues. Only when they are similarly precise, however, does this equality extend to incongruent cueing sequences: that is, even when the cue sequence does not match the concurrent sequence of spoken referents can listeners benefit from gaze as well as arrows. The results suggest that listeners are able to learn a counter-predictive mapping of both cues to the sequence of referents. Thus, gaze and arrows can in principle be applied with equal flexibility and efficiency during language comprehension.

20.
During speech perception, listeners make judgments about the phonological category of sounds by taking advantage of multiple acoustic cues for each phonological contrast. Perceptual experiments have shown that listeners weight these cues differently. How do listeners weight and combine acoustic cues to arrive at an overall estimate of the category for a speech sound? Here, we present several simulations using mixture-of-Gaussians models that learn cue weights and combine cues on the basis of their distributional statistics. We show that a cue-weighting metric in which cues receive weight as a function of their reliability at distinguishing phonological categories provides a good fit to the perceptual data obtained from human listeners, but only when these weights emerge through the dynamics of learning. These results suggest that cue weights can be readily extracted from the speech signal through unsupervised learning processes.
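As a concrete illustration of the idea in this abstract, the sketch below fits an unsupervised two-component Gaussian mixture to each cue dimension and weights each cue by how well its inferred categories separate. It is a minimal sketch under our own assumptions (synthetic two-category data, a d'-like separation metric, scikit-learn's GaussianMixture), not the authors' actual model or code:

```python
# Minimal sketch: reliability-based cue weighting via unsupervised
# mixture-of-Gaussians learning. Illustrative only -- the synthetic data,
# the two-category assumption, and the d'-like reliability metric are
# our assumptions, not the paper's implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic tokens of a two-way phonological contrast, with two cues:
# cue 0 separates the categories well; cue 1 is noisy and overlapping.
n = 500
cat = rng.integers(0, 2, n)
cue0 = np.where(cat == 0, 10.0, 50.0) + rng.normal(0.0, 6.0, n)     # reliable cue
cue1 = np.where(cat == 0, 200.0, 210.0) + rng.normal(0.0, 20.0, n)  # noisy cue
X = np.column_stack([cue0, cue1])

def reliability(x: np.ndarray) -> float:
    """Fit a 2-component GMM to one cue (no category labels) and score its
    reliability as the d'-like separation of the two inferred components."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    mu = gmm.means_.ravel()
    var = gmm.covariances_.ravel()
    return float(abs(mu[0] - mu[1]) / np.sqrt(0.5 * (var[0] + var[1])))

rel = np.array([reliability(X[:, j]) for j in range(X.shape[1])])
weights = rel / rel.sum()  # cues weighted by distributional reliability
print("reliability per cue:", rel.round(2))
print("normalized weights :", weights.round(2))
```

Run on this synthetic data, the well-separated cue receives most of the weight, mirroring the reliability-based weighting that the abstract says emerges from the distributional statistics of the speech signal.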
