Similar Literature
19 similar records found
1.
Zheng Xiaodan, Yue Zhenzhu. 《心理科学》 (Psychological Science), 2022, 45(6): 1329-1336
Using real objects from everyday life, we examined the influence of crossmodal semantic relatedness on visual attention and the time course of crossmodal facilitation. Combining a priming paradigm with a dot-probe paradigm, Experiment 1 found that 600 ms after an auditory prime, participants responded faster to highly related visual stimuli than to weakly related ones, whereas no priming effect was found with visual primes. Experiment 2 found that the crossmodal priming effect disappeared 900 ms after prime onset. Our study demonstrates that audiovisual semantic relatedness grounded in prior experience can facilitate visual selective attention.

2.
This study compared categorical organization and modality organization in short-term memory under different cue conditions. The experiment used three cue conditions (a modality-cue group, a category-cue group, and a mixed-cue group) and five word-list ratios. In the modality-cue group, each list contained words from a single semantic category, half presented visually and half auditorily; in the category-cue group, the words came from two different semantic categories but were presented in the same modality; in the mixed-cue group, half of the words from each of two categories were presented visually and half auditorily. The results showed that ARC scores differed significantly across the three cue conditions, manifested as…

3.
The pseudo-simultaneous-stimulation delayed-response paradigm and its role in ERP research
This paper describes the "dual-channel pseudo-simultaneous-stimulation delayed-response" experimental paradigm recently developed in our laboratory and its role in contemporary ERP research. Its key features are that, within a single experiment, visual and auditory stimuli are presented in random order, so that the ERPs from the two channels are effectively simultaneous, and that stimuli from the unattended modality are delivered while the participant performs the task in the attended modality. The paradigm combines the strengths of the paradigms used by both sides of the debate over the psychological mechanism of the MMN: long inter-stimulus intervals and high purity of inattention, together with better-matched visual and auditory backgrounds, which makes the results more convincing.

4.
Both inhibition of return (IOR) and emotional stimuli guide attentional biases and improve search efficiency, but whether the two interact has remained unclear. Using a cue-target paradigm with emotional stimuli presented in the visual and auditory modalities, this study examined the interaction between the processing of emotional stimuli and IOR. In Experiment 1, emotional stimuli were presented either as unimodal visual faces or as emotionally congruent audiovisual pairs; Experiment 2 presented emotionally incongruent audiovisual stimuli to test whether the effect of congruent audiovisual emotional stimuli on IOR was driven by processing of the emotionally congruent auditory stimulus. The results showed that emotionally congruent audiovisual stimuli weakened IOR, that incongruent stimuli did not interact with IOR, and that IOR did not differ significantly between unimodal and bimodal presentation. These findings indicate that audiovisual emotional stimuli affect IOR at the same processing stage only when their emotions are congruent, further supporting the perceptual-inhibition account of IOR.

5.
Chen Hong, Wang Suyan. 《心理科学进展》 (Advances in Psychological Science), 2012, 20(12): 1926-1939
The visual attentional blink refers to the marked drop in correct report of the second of two targets presented in rapid succession (within about 500 ms). Attentional blink experiments have in recent years become a focus of selective-attention research; their paradigms fall into two broad classes, single-stream RSVP paradigms and multi-stream RSVP paradigms. This review analyzes and evaluates the many variants of both classes, identifies four experimental factors that have shaped the development of attentional blink paradigms, and outlines five trends in their future development.
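For readers unfamiliar with how the blink is quantified: the standard dependent measure is the rate of correct T2 report conditional on correct T1 report, plotted against the T1-T2 lag. A minimal sketch with hypothetical trial records (the field layout and values are ours, not from the review):

```python
# The attentional blink is usually scored as P(T2 correct | T1 correct) per lag.
from collections import defaultdict

trials = [
    # each trial: (T1-T2 lag in items, T1 reported correctly, T2 reported correctly)
    (1, True, True), (2, True, False), (3, True, False),
    (3, False, True), (5, True, True), (8, True, True),
]

hits = defaultdict(int)    # T1-correct trials on which T2 was also correct, per lag
counts = defaultdict(int)  # T1-correct trials per lag

for lag, t1_ok, t2_ok in trials:
    if t1_ok:              # condition on correct T1 report
        counts[lag] += 1
        hits[lag] += t2_ok

for lag in sorted(counts):
    print(f"lag {lag}: P(T2|T1) = {hits[lag] / counts[lag]:.2f}")
```

A blink shows up as a dip in this conditional accuracy at lags of roughly 200-500 ms, typically sparing lag 1.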

6.
Tang Xiaoyu, Sun Jiaying, Peng Xing. 《心理学报》 (Acta Psychologica Sinica), 2020, 52(3): 257-268
Based on the cue-target paradigm, this study manipulated two independent variables, target type (visual, auditory, audiovisual) and cue validity (valid cue, neutral condition, invalid cue), across three experiments to examine how divided attention across modalities affects audiovisual inhibition of return (IOR). Experiment 1 (auditory stimuli presented on the left/right) found that under divided attention across modalities, visual targets produced a significant IOR effect whereas audiovisual targets did not; Experiment 2 (auditory stimuli on the left/right) and Experiment 3 (auditory stimuli at the center) found that under visual selective attention, both visual and audiovisual targets produced significant IOR effects, with no significant difference between them. The results indicate that divided attention across modalities weakens the audiovisual IOR effect.
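As a reading aid (not the authors' analysis code): with long cue-target intervals, IOR shows up as slower responses at the cued location, so the effect is commonly scored as mean valid-cue RT minus mean invalid-cue RT per target type. A minimal sketch with illustrative numbers only:

```python
# Hypothetical mean RTs (ms); IOR effect = RT(valid/cued) - RT(invalid/uncued).
# Positive values indicate inhibition of return.
mean_rt = {
    # (target type, cue validity): mean RT in ms -- made-up values
    ("visual", "valid"): 412.0, ("visual", "invalid"): 385.0,
    ("audiovisual", "valid"): 367.0, ("audiovisual", "invalid"): 363.0,
}

for target in ("visual", "audiovisual"):
    ior = mean_rt[(target, "valid")] - mean_rt[(target, "invalid")]
    print(f"{target}: IOR effect = {ior:+.1f} ms")
```

Under this convention, the pattern reported for Experiment 1 would appear as a reliably positive difference for visual targets and a near-zero difference for audiovisual targets.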

7.
Using a 2×3 within-subject design with attention condition and target type as variables, this study examined how attention directed to different sensory modalities differentially affects audiovisual semantic integration. The results showed that only when attending to both visual and auditory stimuli did participants respond fastest to semantically congruent audiovisual stimuli, i.e., a redundant-signals effect. Under selective attention to a single modality, semantically congruent audiovisual stimuli showed no processing advantage. Further analysis showed that the advantage for semantically congruent audiovisual stimuli under divided audiovisual attention arose from integration of their visual and auditory components. That is, semantically congruent audiovisual stimuli were integrated only when both vision and audition were attended, and semantically incongruent stimuli were not integrated; under selective attention to one modality, audiovisual stimuli were not integrated regardless of semantic congruency.
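The abstract attributes the redundant-signals effect to genuine integration rather than statistical facilitation. A standard way to test that claim (not necessarily the analysis used in this study) is Miller's (1982) race-model inequality, F_AV(t) ≤ F_A(t) + F_V(t): if the audiovisual RT distribution violates the bound, a parallel race between unimodal processes cannot explain the speed-up. A minimal sketch with simulated RT samples:

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of an RT sample evaluated at times t."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, t, side="right") / sample.size

# Simulated RT samples (ms) for the three target types -- illustrative only.
rt_a  = np.random.default_rng(0).normal(480, 50, 200)   # auditory-only
rt_v  = np.random.default_rng(1).normal(460, 50, 200)   # visual-only
rt_av = np.random.default_rng(2).normal(400, 45, 200)   # audiovisual

t = np.linspace(250, 650, 41)
violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))
print("race model violated:", bool((violation > 0).any()))
```

Any region where the audiovisual CDF exceeds the summed unimodal CDFs is evidence for coactivation, i.e., true multisensory integration.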

8.
The perceptual span in reading usually refers to the region from which a reader can extract useful visual information during each fixation. Previous studies of the perceptual span in Chinese reading generally used the single character as the basic perceptual presentation unit, but under some presentation conditions this disrupts the integrity of semantic processing during reading. Using eye tracking with the moving-window and foveal-mask paradigms, with sentences composed of two-character words and the two-character word as the basic presentation unit, this study examined the perceptual span of Chinese reading in university students while preserving the semantic integrity of the presentation unit. Experiment 1, using the moving-window paradigm, found a perceptual span extending from one two-character word to the left of the fixated word to one or two two-character words to its right. Experiment 2, using the foveal-mask paradigm, confirmed this result. These findings indicate that taking the two-character word as the basic visual presentation unit preserves semantic integrity during Chinese reading better than the single-character presentation of previous studies, yielding a larger measured perceptual span. Grounded in the semantic integrity of the visual presentation unit, the results refine and extend existing theory on the perceptual span in Chinese reading.

9.
Using a dual-task paradigm, this study examined whether the temporal-expectation effect induced by a slowly presented rhythmic auditory sequence is affected by a concurrent visual working-memory task. The results showed that, whether the target appeared in the auditory or the visual modality, responses to targets following a regular auditory sequence were faster than to targets following an irregular sequence under both single- and dual-task conditions; that is, the temporal-expectation effect induced by rhythmic sequences was unaffected by the working-memory task. This result indicates that rhythm-based temporal expectation does not depend on attentional control.

10.
1 Introduction. Experiments long ago established that visual reaction time varies across the visual field. Poffenberger (1921) reported that reaction times to light stimuli falling 45° from the fovea are roughly 18-26 ms longer than at the fovea; Kobrick (1965), testing 16 participants, found that reaction times to flashes changed little within 38° of eccentricity, lengthened gradually beyond 38°, and lengthened markedly beyond 64°. No studies of reaction-time differences within the central 30° of the visual field had been reported, most likely because of the limited experimental techniques of the time. Advances in computer technology now provide effective means of mapping the visual-field distribution of reaction times and analyzing their differences. This experiment used a custom program to measure visual reaction times in different populations, with the aim of…

11.
Following up on studies of the "attentional blink," we studied interference between successive target stimuli in visual and auditory modalities. In each experiment, stimuli were two tones and four dots, simultaneously presented for 1,800 msec. Targets were brief intensity changes in either a tone or a dot. Subjects gave unspeeded responses. In four experiments, our results showed interference between targets in the same modality, but not across modalities. We conclude that, under our experimental conditions, restrictions in concurrent target identification are largely modality specific.

12.
Possible causes of the decline in original-order report performance in visual and auditory dual memory
In this study, the two alternating-report conditions involved the greatest amount of modality switching, yet neither produced a large drop in performance.

13.
Four experiments examined judgements of the duration of auditory and visual stimuli. Two used a bisection method, and two used verbal estimation. Auditory/visual differences were found when durations of auditory and visual stimuli were explicitly compared and when durations from both modalities were mixed in partition bisection. Differences in verbal estimation were also found both when people received a single modality and when they received both. In all cases, the auditory stimuli appeared longer than the visual stimuli, and the effect was greater at longer stimulus durations, consistent with a “pacemaker speed” interpretation of the effect. Results suggested that Penney, Gibbon, and Meck's (2000) “memory mixing” account of auditory/visual differences in duration judgements, while correct in some circumstances, was incomplete, and that in some cases people were basing their judgements on some preexisting temporal standard.
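A minimal formalization of the pacemaker-speed account (our notation, not the authors'): if a pacemaker emits pulses at a modality-dependent rate r and the accumulated pulse count is read out as subjective duration, then for a physical duration t,

```latex
\hat{t}_{\mathrm{mod}} \propto r_{\mathrm{mod}}\, t, \qquad
\hat{t}_{\mathrm{aud}} - \hat{t}_{\mathrm{vis}} \propto \left(r_{\mathrm{aud}} - r_{\mathrm{vis}}\right) t .
```

With r_aud > r_vis, auditory stimuli seem longer and the auditory-visual gap grows linearly with t, matching the reported finding that the effect was larger at longer stimulus durations.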

14.
Similarities have been observed in the localization of the final position of moving visual and moving auditory stimuli: Perceived endpoints that are judged to be farther in the direction of motion in both modalities likely reflect extrapolation of the trajectory, mediated by predictive mechanisms at higher cognitive levels. However, actual comparisons of the magnitudes of displacement between visual tasks and auditory tasks using the same experimental setup are rare. As such, the purpose of the present free-field study was to investigate the influences of the spatial location of motion offset, stimulus velocity, and motion direction on the localization of the final positions of moving auditory stimuli (Experiments 1 and 2) and moving visual stimuli (Experiment 3). To assess whether auditory performance is affected by dynamically changing binaural cues that are used for the localization of moving auditory stimuli (interaural time differences for low-frequency sounds and interaural intensity differences for high-frequency sounds), two distinct noise bands were employed in Experiments 1 and 2. In all three experiments, less precise encoding of spatial coordinates in paralateral space resulted in larger forward displacements, but this effect was drowned out by the underestimation of target eccentricity in the extreme periphery. Furthermore, our results revealed clear differences between visual and auditory tasks. Displacements in the visual task were dependent on velocity and the spatial location of the final position, but an additional influence of motion direction was observed in the auditory tasks. Together, these findings indicate that the modality-specific processing of motion parameters affects the extrapolation of the trajectory.

15.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

16.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.
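One textbook way to rationalize the kappa effect (not necessarily the model these authors adopt) is an expectation of constant-velocity motion: for markers separated by spatial distance d, the expected interval scales with d, pulling the judged interval toward d/v for some implied velocity v. A weighted-average sketch, with w a free weighting parameter:

```latex
\hat{t} \;=\; (1-w)\, t \;+\; w\,\frac{d}{v}, \qquad 0 \le w \le 1 .
```

Larger separations d thus lengthen the judged interval; in the reported experiments, part of the effective d was carried by the task-irrelevant visual sources, which is why the visuospatial layout biased an auditory duration judgment.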

17.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.

18.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.

19.
The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes.
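For concreteness (hypothetical data, not this study's): the point of subjective equality (PSE) and Weber fraction are typically read off a psychometric function fitted to the rate of "comparison judged more numerous" responses. With a cumulative-Gaussian fit, the PSE is the fitted mean μ and the Weber fraction is commonly taken as w = σ/PSE. A minimal sketch:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(n_comparison, mu, sigma):
    """P('comparison more numerous') modeled as a cumulative Gaussian."""
    return norm.cdf(n_comparison, loc=mu, scale=sigma)

# Hypothetical data: comparison numerosities judged against a 16-item reference.
n_comp = np.array([8, 10, 12, 14, 16, 18, 20, 24, 28])
p_more = np.array([0.02, 0.08, 0.20, 0.36, 0.55, 0.70, 0.82, 0.95, 0.99])

(mu, sigma), _ = curve_fit(psychometric, n_comp, p_more, p0=[16.0, 4.0])
print(f"PSE = {mu:.1f} items, Weber fraction = {sigma / mu:.2f}")
```

A PSE shifted away from the reference captures the reported visual-auditory overestimation bias, while a smaller Weber fraction captures the greater precision found for auditory sequences.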
