Similar Documents
20 similar documents found (search time: 125 ms)
1.
When people receive information from different sensory modalities, it is typically first processed separately in distinct brain regions and then integrated in multisensory areas. Previous neuroimaging studies of audiovisual integration in speech perception suggest that visual and auditory information can influence each other; the key region for their integration is the left posterior superior temporal sulcus, and the integration effect is constrained by temporal and spatial factors. Future research should develop more appropriate experimental paradigms and data-analysis methods to probe the brain mechanisms of integrative processing, and extend multisensory integration research to more complex domains.

2.
Virtual reality (VR) technology creates immersive perceptual experiences by delivering visual, auditory, and haptic information, but haptic feedback faces numerous technical bottlenecks that limit natural interaction in VR. Pseudo-haptic techniques based on multisensory illusions can strengthen and enrich tactile experience using information from other modalities, and are currently an effective way to optimize haptic experience in VR environments. This article focuses on roughness, one of the most important dimensions of touch, aiming to offer a new approach to the problem of constrained haptic feedback in VR. It discusses how the visual, auditory, and tactile modalities are integrated in roughness perception, analyzes how visual cues (surface texture density, surface lighting and shading, control-display ratio) and auditory cues (pitch/frequency, loudness) affect tactile roughness perception, and summarizes current methods of manipulating these factors to alter perceived roughness. Finally, it discusses how visual, auditory, and tactile information in VR may differ from the real world in presentation and perceptual integration when pseudo-haptic feedback is used, and proposes applicable methods for improving haptic experience as well as directions for future research.
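A note on the control-display (C/D) ratio cue mentioned above, as a minimal sketch based on the general pseudo-haptics literature rather than on this article's own specification: the displayed motion of the hand avatar or cursor is scaled relative to the user's physical motion,

x_{\mathrm{display}} = k \cdot x_{\mathrm{hand}},

where k is the C/D ratio. Locally lowering k over a virtual texture makes the visible hand lag the real one, a discrepancy users tend to reinterpret as added resistance or roughness; the symbol k and the direction of the effect here are illustrative assumptions, not values given in the abstract.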

3.
Most previous research on facial attractiveness judgments has focused on visual information and neglected the role of non-visual information, yet existing studies have confirmed that different sensory inputs interact in facial attractiveness judgments, i.e., the judgment is cross-modally integrated. Building on prior work and combining the face-space model with the Bayesian causal inference model, we propose that during cross-modal integration in facial attractiveness judgments, when an individual infers from the sensory stimuli and an internal standard face that different sensory signals originate from the same target face, those signals are naturally integrated in the brain into a unified representation of the target face, on which the attractiveness judgment is based. Future work could embed faces in broader contexts to examine cross-modal integration of multiple sensory signals, probe the boundary conditions of such integration and its operation in social interaction, and thereby build a more systematic cross-modal integration model of facial attractiveness.
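For readers unfamiliar with the Bayesian causal inference model invoked above, a minimal sketch of its standard formulation (the notation is ours, not this article's): the observer infers whether two sensory signals x_V and x_A arise from one common source (C = 1) or from two independent sources (C = 2),

p(C=1 \mid x_V, x_A) = \frac{p(x_V, x_A \mid C=1)\, p_c}{p(x_V, x_A \mid C=1)\, p_c + p(x_V, x_A \mid C=2)\,(1 - p_c)},

where p_c is the prior probability of a common cause. Signals are fused (e.g., by reliability-weighted averaging) to the extent that this posterior favors a common cause, which matches the proposal that integration occurs when the cues are attributed to the same target face.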

4.
According to statistical optimality theory, during multisensory integration the brain combines information from multiple sensory channels into a unified estimate by weighted averaging, with each channel's weight determined by the reliability of its information. Several recent behavioral studies have shown that prior knowledge about the reliability of channel estimates can likewise affect the weights assigned during integration. However, those results could not determine whether this prior knowledge affects multisensory integration at the perceptual stage or at the decision stage of cognitive processing. The present study addressed this question. In the experiment, letters of two colors were assigned different probabilities of audiovisual congruence (high vs. low), and participants' reaction times to audiovisually congruent stimuli were measured at each probability level. The data showed that the probability of audiovisual congruence modulated reaction times to congruent stimuli, indicating that prior knowledge about channel reliability affects multisensory integration at the early perceptual stage.
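As a concrete statement of the weighted averaging this abstract refers to, the standard maximum-likelihood formulation (a textbook sketch, not a formula quoted from the paper): given unimodal estimates \hat{s}_V and \hat{s}_A with noise variances \sigma_V^2 and \sigma_A^2, the integrated estimate is

\hat{s}_{VA} = w_V \hat{s}_V + w_A \hat{s}_A, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \quad w_A = 1 - w_V,

so the more reliable (lower-variance) channel receives the larger weight, and the fused variance \sigma_{VA}^2 = \sigma_V^2 \sigma_A^2 / (\sigma_V^2 + \sigma_A^2) is lower than either unimodal variance alone.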

5.
Wang Jing, Xue Chengbo & Liu Qiang. Acta Psychologica Sinica, 2018, 50(2): 176-185
Object-based theory holds that the storage unit of visual working memory is the object: people can integrate all the features composing an object into a single unit for memory, whether those features come from different dimensions or from the same dimension. However, the finding that multiple features from the same dimension can be integrated into an object in memory has been confirmed by only a few studies; many studies have instead found that multiple same-dimension features cannot be integrated, which forms the core claim of the weak object-based theory. To resolve the dispute between the two theories, the present study examined whether same-dimension features can be stored in an integrated fashion. From an analysis of previous studies, we reasoned that two factors, the experimental paradigm and the meaningfulness of the objects, might explain why most earlier work failed to find integration of within-dimension features. Experiment 1 therefore replaced the commonly used change-detection paradigm with a recall report paradigm; the results showed that multiple features from the same dimension were still difficult to integrate in memory. Experiment 2 used memory for the features of meaningless objects as a baseline to test whether the features of meaningful objects could be integrated. Memory performance for the features of meaningful objects was not significantly better than for meaningless objects, indicating that even with meaningful objects providing strong integration cues, multiple same-dimension features are difficult to integrate in memory. Both experiments further support the weak object-based theory.

6.
Using an endogenous cue-target paradigm, we manipulated two variables, cue type (valid vs. invalid) and target modality (visual, auditory, audiovisual), and set endogenous spatial cue validity to 50% and 80% across two experiments to examine how endogenous spatial attention affects audiovisual integration under different cue-validity conditions. The results showed that with 50% cue validity (Experiment 1), audiovisual integration did not differ between validly and invalidly cued locations; with 80% cue validity (Experiment 2), audiovisual integration was significantly stronger at validly cued locations than at invalidly cued locations. Thus endogenous spatial attention affects audiovisual integration differently depending on cue validity, and under high cue validity it can enhance the audiovisual integration effect.
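The abstract does not state how the audiovisual integration effect was computed; in reaction-time studies of this kind it is commonly assessed against Miller's race-model inequality (a common-practice sketch, not a detail taken from this paper):

P(RT_{AV} \le t) \le P(RT_A \le t) + P(RT_V \le t),

where a violation of the bound at some latency t indicates that bimodal responses are faster than any race between independent unimodal processes, and the size of the violation serves as the integration measure.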

7.
Audiovisual integration is the perceptual process of combining visual and auditory information into a unified, coherent, and stable percept. Using an endogenous cue-target paradigm, this study examined how endogenous spatial cue validity affects audiovisual integration in older adults, and how older and younger adults differ across validity conditions. The results showed that (1) regardless of cue validity, audiovisual integration was weaker in older adults than in younger adults; (2) at low cue validity (50%), neither group showed a difference in integration between valid- and invalid-cue conditions; (3) at medium cue validity (70%), older adults showed no difference between valid and invalid cues, whereas younger adults showed significantly stronger integration under valid cues; (4) at high cue validity (90%), both groups showed significantly stronger integration under valid cues than under invalid cues. These results support the spatial uncertainty hypothesis, further reveal the interaction between endogenous attention and audiovisual integration, and indicate that differences in the benefit of endogenous spatial orienting across cue-validity conditions are one reason for the difference in audiovisual integration between older and younger adults.

8.
Eating and drinking behavior comprises a series of psychological and behavioral processes, including flavor perception, texture evaluation, emotional experience, personal food preferences, and overt actions of eating and drinking. Studies have shown that sound affects this behavior mainly by influencing people's sensory sensitivity to, and liking of, food and drink. Auditory information in eating and drinking includes interoceptive cues, i.e., sounds from the individual's interaction with food and drink (such as chewing, swallowing, and the sounds of preparing food and beverages), and exteroceptive cues, i.e., ambient sound (mainly noise) and background music. Behavioral findings generally emphasize the role of cognitive factors in linking sound and eating, such as distraction and shifts of attention, cross-modal associations (matching effects), and expectation and avoidance (potential sound-image effects). Neuroscience, taking "audition-olfaction-gustation" as its entry point, seeks clearer evidence and underlying mechanisms for these theoretical disputes from the perspective of multisensory integration; meanwhile, emotional arousal, somatic markers (implicit associations), and embodied cognition are promising new points of theoretical integration.

9.
Most existing paradigms for measuring multisensory integration randomly intermix different unimodal and bimodal stimuli. Such paradigms are contaminated by modality-switch effects, which may render the measured integration inaccurate. Clarifying what drives the modality-switch effect within these paradigms, and designing sound integration-measurement paradigms accordingly, is therefore a necessary precondition for multisensory integration research. Experiment 1 verified how the modality-switch effect operates in the classic integration-measurement paradigm; Experiment 2 then characterized the effect by controlling the consistency of signal intensity between successive stimuli. Together the results show that the modality-switch effect arises from changes in alertness and in the allocation of attentional resources to the current stimulus's modality caused by the preceding stimulus. This indicates that in behavioral measurements of multisensory integration, trials should first be classified by the modality of the preceding stimulus before analysis.

10.
Tang Xiaoyu, Tong Jiageng, Yu Hong & Wang Aijun. Acta Psychologica Sinica, 2021, 53(11): 1173-1188
Using a combined endogenous-exogenous spatial cue-target paradigm, this study manipulated three variables: endogenous cue validity (valid vs. invalid), exogenous cue validity (valid vs. invalid), and target modality (visual, auditory, audiovisual). Two experiments of different task difficulty (Experiment 1: simple localization; Experiment 2: complex discrimination) examined how endogenous and exogenous spatial attention affect multisensory integration. Both experiments found that exogenous spatial attention significantly weakened multisensory integration, whereas endogenous spatial attention did not significantly enhance it; Experiment 2 additionally found that endogenous spatial attention modulated the weakening of multisensory integration by exogenous spatial attention. These results indicate that, unlike endogenous spatial attention, the influence of exogenous spatial attention on multisensory integration is largely unaffected by task difficulty, and that under a difficult task endogenous attention affects the process by which exogenous attention weakens integration. We therefore infer that endogenous and exogenous spatial attention do not modulate multisensory integration independently but interact with each other.

11.
This experiment examines how emotion is perceived by using facial and vocal cues of a speaker. Three levels of facial affect were presented using a computer-generated face. Three levels of vocal affect were obtained by recording the voice of a male amateur actor who spoke a semantically neutral word in different simulated emotional states. These two independent variables were presented to subjects in all possible permutations—visual cues alone, vocal cues alone, and visual and vocal cues together—which gave a total set of 15 stimuli. The subjects were asked to judge the emotion of the stimuli in a two-alternative forced choice task (either HAPPY or ANGRY). The results indicate that subjects evaluate and integrate information from both modalities to perceive emotion. The influence of one modality was greater to the extent that the other was ambiguous (neutral). The fuzzy logical model of perception (FLMP) fit the judgments significantly better than an additive model, which weakens theories based on an additive combination of modalities, categorical perception, and influence from only a single modality.
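As context for the model comparison reported above, the FLMP's core combination rule in a two-alternative task (a textbook sketch; the symbols are ours): each modality yields a degree of support v, a \in [0,1] for one category, say HAPPY, and the judgment follows the relative goodness of the multiplicative match,

P(\mathrm{HAPPY} \mid v, a) = \frac{v\,a}{v\,a + (1 - v)(1 - a)},

whereas an additive model predicts a weighted sum such as w v + (1 - w) a. The multiplicative form captures the reported result that one modality's influence grows as the other approaches ambiguity (v or a near 0.5).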

12.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

13.
The ability of 4 olive baboons (Papio anubis) to use human gaze cues during a competitive food task was investigated. Three baboons used head orientation as a cue, and 1 individual also used eye direction alone. As the baboons did not receive prior training with gestural cues, their performance suggests that the competitive paradigm may be more appropriate for testing nonhuman primates than the standard object-choice paradigm. However, the baboons were insensitive to whether the experimenter could actually perceive the food item, and therefore the use of visual orientation cues may not be indicative of visual perspective-taking abilities. Performance was disrupted by the introduction of a screen and objects to conceal food items and by the absence of movement in cues presented.

14.
Speech perception requires listeners to integrate multiple cues that each contribute to judgments about a phonetic category. Classic studies of trading relations assessed the weights attached to each cue but did not explore the time course of cue integration. Here, we provide the first direct evidence that asynchronous cues to voicing (/b/ vs. /p/) and manner (/b/ vs. /w/) contrasts become available to the listener at different times during spoken word recognition. Using the visual world paradigm, we show that the probability of eye movements to pictures of target and of competitor objects diverge at different points in time after the onset of the target word. These points of divergence correspond to the availability of early (voice onset time or formant transition slope) and late (vowel length) cues to voicing and manner contrasts. These results support a model of cue integration in which phonetic cues are used for lexical access as soon as they are available.

15.
In two experiments, we investigated whether reference frames acquired through touch could influence memories for locations learned through vision. Participants learned two objects through touch, and haptic egocentric (Experiment 1) and environmental (Experiment 2) cues encouraged selection of a specific reference frame. Participants later learned eight new objects through vision. Haptic cues were manipulated, whereas visual learning was held constant in order to observe any potential influence of the haptically experienced reference frame on memories for visually learned locations. When the haptically experienced reference frame was defined primarily by egocentric cues, cue manipulation had no effect on memories for objects learned through vision. Instead, visually learned locations were remembered using a reference frame selected from the visual study perspective. When the haptically experienced reference frame was defined by both egocentric and environmental cues, visually learned objects were remembered in the context of the haptically experienced reference frame. These findings support the common reference frame hypothesis, which proposes that locations learned through different sensory modalities are represented within a common reference frame.

16.
To interpret our environment, we integrate information from all our senses. For moving objects, auditory and visual motion signals are correlated and provide information about the speed and the direction of the moving object. We investigated at what level the auditory and the visual modalities interact and whether the human brain integrates only motion signals that are ecologically valid. We found that the sensitivity for identifying motion was improved when motion signals were provided in both modalities. This improvement in sensitivity can be explained by probability summation. That is, auditory and visual stimuli are combined at a decision level, after the stimuli have been processed independently in the auditory and the visual pathways. Furthermore, this integration is direction blind and is not restricted to ecologically valid motion signals.
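A brief unpacking of the probability-summation account invoked above (standard independent-detector reasoning, not a formula quoted from the paper): if the auditory and visual pathways detect the motion independently with probabilities p_A and p_V, a decision rule that responds when either detector succeeds yields

p_{AV} = 1 - (1 - p_A)(1 - p_V) = p_A + p_V - p_A\, p_V,

so bimodal sensitivity exceeds either unimodal sensitivity even with no interaction between the pathways, which is why the observed gain supports decision-level combination rather than early sensory fusion.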

17.
To interact successfully with the environment and to compensate for environmental challenges, the human brain must integrate information originating in different sensory modalities. Such integration occurs in non-primary associative regions of the human brain. Additionally, recent investigations have documented the involvement of the primary visual cortex in processing tactile information in blind humans to a larger extent than in sighted controls. This form of cross-modal plasticity highlights the capacity of the human central nervous system to reorganize after chronic visual deprivation.

18.
Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the 2 most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time course of this interaction and discussing how different models of cue integration could be adapted to account for our results.

19.
Infants have been demonstrated to be able to perceive illusory contours in Kanizsa figures. This study tested whether they also perceive these illusory figures as having the properties of real objects, such as depth and capability of occluding other objects. Eight- and five-month-old infants were presented with scenes that included a Kanizsa square and further depth cues provided by the deletion and accretion pattern of a moving duck. The 8-month-old infants looked significantly longer at the scene when the two types of occlusion cues were inconsistent than when they were consistent with each other, which provides evidence that they interpreted the Kanizsa square as a depth cue. In contrast, 5-month-olds did not show this difference. This finding demonstrates that 8-month-olds perceive the figure formed by the illusory contours as having properties of a real object that can act as an occluder.

20.
In probabilistic inferences concerning which of two objects has the larger criterion value (e.g., which of two cities has more inhabitants), participants may recognize both objects, only one, or neither. According to the mental-toolbox approach, different decision strategies exist for each of these cases, utilizing different probabilistic cues. Possibly, however, participants use these cues to build a subjective rank order that involves all objects, irrespective of their recognition status. The decision process then simply utilizes the distance between two objects in one's subjective order. We tested the role of such linear orders in reanalyses of existing data and in a new experiment. Participants' choices and decision times were determined both by subjective rank-order distances and by the recognition status of the compared objects. To integrate these theoretically inconsistent findings, we discuss the role of the evidential difference (or the degree of conflict) between two objects.
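The subjective-rank-order account above can be made concrete with an illustrative choice rule (our formalization; the ranks r_i, the sensitivity parameter \beta, and the logistic form are assumptions, not the authors' stated model): if r_i and r_j are the objects' positions in the subjective order (a higher rank meaning a larger inferred criterion value), the probability of choosing object i might increase with the rank distance,

P(\mathrm{choose}\ i) = \frac{1}{1 + e^{-\beta\,(r_i - r_j)}},

with recognition status entering only as one of the cues used to construct the ranks; the reported data, however, show independent effects of both rank distance and recognition status.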

