Similar Documents
20 similar documents found (search time: 31 ms)
1.
Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.

2.
Attentional capture in serial audiovisual search tasks
The phenomenon of attentional capture has typically been studied in spatial search tasks. Dalton and Lavie recently demonstrated that auditory attention can also be captured by a singleton item in a rapidly presented tone sequence. In the experiments reported here, we investigated whether these findings extend cross-modally to sequential search tasks using audiovisual stimuli. Participants searched a stream of centrally presented audiovisual stimuli for targets defined on a particular dimension (e.g., duration) in a particular modality. Task performance was compared in the presence versus absence of a unique singleton distractor. Irrelevant auditory singletons captured attention during visual search tasks, leading to interference when they coincided with distractors but to facilitation when they coincided with targets. These results demonstrate attentional capture by auditory singletons during nonspatial visual search.

3.
Audiotactile temporal order judgments
We report a series of three experiments in which participants made unspeeded 'Which modality came first?' temporal order judgments (TOJs) to pairs of auditory and tactile stimuli presented at varying stimulus onset asynchronies (SOAs) using the method of constant stimuli. The stimuli were presented from either the same or different locations in order to explore the potential effect of redundant spatial information on audiotactile temporal perception. In Experiment 1, the auditory and tactile stimuli had to be separated by nearly 80 ms for inexperienced participants to be able to judge their temporal order accurately (i.e., for the just noticeable difference (JND) to be achieved), regardless of whether the stimuli were presented from the same or different spatial positions. More experienced psychophysical observers (Experiment 2) also failed to show any effect of relative spatial position on audiotactile TOJ performance, despite having much lower JNDs (40 ms) overall. A similar pattern of results was found in Experiment 3 when silent electrocutaneous stimulation was used rather than vibrotactile stimulation. Thus, relative spatial position seems to be a less important factor in determining performance for audiotactile TOJ than for other modality pairings (e.g., audiovisual and visuotactile).
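Several of the abstracts in this list estimate a just noticeable difference (JND) and a point of subjective simultaneity (PSS) from temporal order judgments collected with the method of constant stimuli. As a minimal illustration of how such estimates are typically derived, the sketch below fits a cumulative-Gaussian psychometric function to TOJ data via a simple probit regression; the SOA values and response proportions are invented for illustration and are not taken from any of the studies listed.

```python
from statistics import NormalDist

# Hypothetical TOJ data: SOA in ms (negative = auditory first),
# and the proportion of "visual first" responses at each SOA.
soas = [-120, -80, -40, 0, 40, 80, 120]
p_visual_first = [0.05, 0.12, 0.30, 0.50, 0.71, 0.89, 0.95]

# Probit transform: if p = Phi((SOA - PSS) / sigma), then
# z = inv_cdf(p) is linear in SOA with slope 1/sigma.
z = [NormalDist().inv_cdf(p) for p in p_visual_first]

# Ordinary least-squares fit of z on SOA (slope b, intercept a).
n = len(soas)
mx = sum(soas) / n
mz = sum(z) / n
b = sum((x - mx) * (y - mz) for x, y in zip(soas, z)) / \
    sum((x - mx) ** 2 for x in soas)
a = mz - b * mx

pss = -a / b           # SOA yielding 50% "visual first" responses
sigma = 1 / b          # spread of the fitted cumulative Gaussian
jnd = 0.6745 * sigma   # half the 25%-75% interval, a common JND convention

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

With these made-up proportions the fit yields a PSS near 0 ms and a JND of roughly 50 ms, on the order of the audiotactile thresholds reported above; published studies typically fit per participant and often use maximum-likelihood rather than probit-regression estimation.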

4.
唐晓雨, 孙佳影, 彭姓. Acta Psychologica Sinica (心理学报), 2020, 52(3): 257-268
Using a cue-target paradigm, this study manipulated two independent variables, target stimulus type (visual, auditory, audiovisual) and cue validity (valid cue, neutral condition, invalid cue), across three experiments to examine the influence of divided attention across modalities on audiovisual inhibition of return (IOR). Experiment 1 (auditory stimuli presented on the left/right) found that, under divided attention across the two modalities, visual targets produced a significant IOR effect whereas audiovisual targets did not. Experiments 2 (auditory stimuli on the left/right) and 3 (auditory stimuli presented centrally) found that, under visual selective attention, both visual and audiovisual targets produced significant IOR effects that did not differ from each other. These results indicate that divided attention across the auditory and visual modalities attenuates the audiovisual IOR effect.

5.
Vatakis A, Spence C. Perception, 2008, 37(1): 143-160
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright-visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (i.e., by changes in orientation).

6.
We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measured in terms of the just noticeable difference, participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating mismatched stimuli than when evaluating the matched-speech stimuli. These results therefore provide the first empirical support for the "unity assumption" in the domain of the multisensory temporal integration of audiovisual speech stimuli.

7.
Using an exogenous cue-target paradigm, a within-participants design crossed 2 cue-target intervals (stimulus onset asynchrony, SOA: 400-600 ms, 1000-1200 ms) × 3 target types (visual, auditory, audiovisual) × 2 cue validities (valid, invalid). Participants performed a detection task on the target stimuli, in order to examine how visually induced inhibition of return (IOR) modulates audiovisual integration and thereby to provide evidence bearing on the perceptual-sensitivity, spatial-uncertainty, and cross-modal signal-strength-difference hypotheses. The results showed that (1) as the SOA lengthened, the visual IOR effect decreased significantly while the audiovisual integration effect increased significantly; and (2) at the short SOA (400-600 ms), the audiovisual integration effect at the validly cued location was significantly smaller than at the invalidly cued location, whereas at the long SOA (1000-1200 ms) the integration effects at valid and invalid locations did not differ significantly. These results indicate that the modulation of audiovisual integration by visual IOR changes across SOAs, a pattern that supports the hypothesis of signal-strength differences between sensory modalities.

8.
Using an endogenous cue-target paradigm, two independent variables, cue type (valid, invalid) and target modality (visual, auditory, audiovisual), were manipulated in two experiments with endogenous spatial cue validities of 50% and 80%, respectively, to examine how endogenous spatial attention influences audiovisual integration under different levels of cue validity. When cue validity was 50% (Experiment 1), the audiovisual integration effect did not differ between validly and invalidly cued locations; when cue validity was 80% (Experiment 2), the integration effect at validly cued locations was significantly larger than at invalidly cued locations. These results indicate that endogenous spatial attention affects audiovisual integration differently depending on cue validity, facilitating the audiovisual integration effect when cue validity is high.

9.
In the attentional boost effect, memory for images presented at the same time as unrelated targets (e.g., an orange square) is enhanced relative to images presented at the same time as distractors (e.g., a blue square). One difficulty in understanding the nature of this enhancement is that, in most experiments demonstrating the attentional boost effect, targets have been less common than distractors. As a result, the memory enhancement associated with target detection may have been driven by differences in the relative frequencies of targets and distractors. In four experiments, participants encoded images into memory at the same time that they monitored a second, unrelated stimulus stream for targets. In some conditions, targets were as common as distractors (1:1 ratio); in others, targets were rare (1:6 ratio). The attentional boost effect was present when the target and distractor frequencies were equated, ruling out oddball and distinctiveness effects as explanations. These effects were observed when targets required a buttonpress and when they were covertly counted. Memory enhancements were not observed for images presented at the same time as rare distractor stimuli. We concluded that selectively attending to events that require an overt or covert response enhances the processing of concurrent information.

10.
In Experiment 1, participants were presented with pairs of stimuli (one visual and the other tactile) from the left and/or right of fixation at varying stimulus onset asynchronies and were required to make unspeeded temporal order judgments (TOJs) regarding which modality was presented first. When the participants adopted an uncrossed-hands posture, just noticeable differences (JNDs) were lower (i.e., multisensory TOJs were more precise) when stimuli were presented from different positions, rather than from the same position. This spatial redundancy benefit was reduced when the participants adopted a crossed-hands posture, suggesting a failure to remap visuotactile space appropriately. In Experiment 2, JNDs were also lower when pairs of auditory and visual stimuli were presented from different positions, rather than from the same position. Taken together, these results demonstrate that people can use redundant spatial cues to facilitate their performance on multisensory TOJ tasks and suggest that previous studies may have systematically overestimated the precision with which people can make such judgments. These results highlight the intimate link between spatial and temporal factors in determining our perception of the multimodal objects and events in the world around us.

11.
To investigate the effect of semantic congruity on audiovisual target responses, participants detected a semantic concept that was embedded in a series of rapidly presented stimuli. The target concept appeared as a picture, an environmental sound, or both; and in bimodal trials, the audiovisual events were either consistent or inconsistent in their representation of a semantic concept. The results showed faster detection latencies to bimodal than to unimodal targets and a higher rate of missed targets when visual distractors were presented together with auditory targets, in comparison to auditory targets presented alone. The findings of Experiment 2 showed a cross-modal asymmetry, such that visual distractors were found to interfere with the accuracy of auditory target detection, but auditory distractors had no effect on either the speed or the accuracy of visual target detection. The biased-competition theory of attention (Desimone & Duncan, Annual Review of Neuroscience, 18, 1995; Duncan, Humphreys, & Ward, Current Opinion in Neurobiology, 7, 255–261, 1997) was used to explain the findings because, when the saliency of the visual stimuli was reduced by the addition of a noise filter in Experiment 4, visual interference on auditory target detection was diminished. Additionally, the results showed faster and more accurate target detection when semantic concepts were represented in a visual rather than an auditory format.

12.
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when they were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these crossmodal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant's two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.

13.
Identification of the second of two targets is impaired when presented less than about 500 ms after the first. The magnitude of this attentional blink (AB) is known to be modulated by tonic factors (e.g., the observer's state of relaxation). The present work examined the effects of a phasic change in the observer's state brought about by an alerting stimulus (an aggregate of faint rings) presented in temporal proximity to either letter target inserted in a rapid serial visual presentation (RSVP) stream of digit distractors. In four experiments, identification accuracy for each target was substantially improved by presenting the alerting stimulus either in the target's frame or in the preceding RSVP frame. However, alerting did not modulate the magnitude of the AB. The appearance of an alerting effect on the AB in Experiment 1 was ascribed to a ceiling effect in Experiment 2. Experiment 3 ruled out endogenous temporal cueing effects; Experiment 4 examined the temporal gradient of alerting. Independence of the alerting and AB effects suggests that the alerting stimuli and the letter targets may be processed along distinct visual pathways.

14.
The attentional blink refers to the finding that the 2nd of 2 targets embedded in a stream of rapidly presented distractors is often missed. Whereas most theories of the attentional blink focus on limited-capacity processes that occur after target selection, the present work investigates the selection process itself. Identifying a target letter caused an attentional blink for the enumeration of subsequent dot patterns, but this blink was reduced when the dots shared their color with the target letter. In contrast, performance worsened when the color of the dots matched that of the remaining distractors in the stream. Similarity between the targets also affected competition between different sets of dots presented simultaneously within a single display. The authors conclude that the selection of targets from a rapid serial visual presentation stream is mediated by both excitatory and inhibitory attentional control mechanisms.

15.
王润洲, 毕鸿燕. Advances in Psychological Science (心理科学进展), 2022, 30(12): 2764-2776
The nature of developmental dyslexia has long been a focus of debate. Many studies have found that individuals with dyslexia show deficits in audiovisual temporal integration. However, these studies have examined only the overall, average level of audiovisual temporal integration, without probing how the integration process unfolds. Audiovisual temporal recalibration reflects the dynamic processing underlying audiovisual temporal integration: difficulty recalibrating the discrepancy between internal temporal representations and sensory input impairs multisensory integration, and recalibration-related abilities are deficient in individuals with dyslexia. Impaired audiovisual temporal recalibration may therefore be the root cause of the audiovisual temporal integration deficit in developmental dyslexia. Future research should examine the specific manifestations of audiovisual temporal recalibration in individuals with developmental dyslexia, as well as the cognitive and neural mechanisms underlying them.

16.
Identification accuracy for the second of two targets (T2) is impaired when it is presented shortly after the first (T1). Does this attentional blink (AB) also impair the perception of the order of presentation? In four experiments, three letter targets (T1, T2, T3) were inserted in a stream of digit distractors displayed in rapid serial visual presentation (RSVP), with T3 always presented directly after T2. The T1-T2 lag was varied to assess the perception of T2-T3 temporal order throughout the period of the AB. Factorial manipulation of the presence or absence of distractors before T1 and between T1 and T2 had similar effects on accuracy and on perception of temporal order. It is important to note that perception of temporal order suffered even when accuracy was unimpaired. This pattern of results is consistent with prior-entry theories of the perception of temporal order but not with episodic-integration theories. Simulations based on the Episodic Simultaneous Type, Serial Token (eSTST) model (Wyble, Bowman, & Nieuwenstein, 2009) provided excellent fits to the data except for the condition in which no distractors were presented in the RSVP stream.

17.
Recent reports have shown that saccades can deviate either toward or away from distractors. However, the specific conditions responsible for the change in initial saccade direction are not known. One possibility, examined here, is that the direction of curvature (toward or away from distractors) reflects preparatory tuning of the oculomotor system when the location of the target and distractor are known in advance. This was investigated by examining saccade trajectories under predictable and unpredictable target conditions. In Experiment 1, the targets and the distractors appeared unpredictably, whereas in Experiment 2 an arrow cue presented at fixation indicated the location of the forthcoming target prior to stimulus onset. Saccades were made to targets on the horizontal, vertical, and principal oblique axes, and distractors appeared simultaneously at an adjacent location (a separation of ±45° of visual angle). On average, saccade trajectories curved toward distractors when target locations were unpredictable and curved away from distractors when target locations were known in advance. There was no overall difference in mean saccade latencies between the two experiments. The magnitude of the distractor modulation of saccade trajectory (either toward or away from) was comparable across the different saccade directions (horizontal, vertical, and oblique). These results are interpreted in terms of the time course of competitive interactions operating in the neural structures involved in the suppression of distractors and the selection of a saccade target. A relatively slow mechanism that inhibits movements to distractors produces curvature away from the distractor. This mechanism has more time to operate when target location is predictable, increasing the likelihood that the saccade trajectory will deviate away from the distractor.

18.
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which visual speaker identity could be corresponding or noncorresponding to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented near synchrony or with slight auditory lag. Moreover, when faces of different familiarity were presented with a voice, recognition accuracy suffered at near synchrony to slight auditory lag only. These results provide the first evidence for a temporal window for AVI in person recognition between approximately 100 ms auditory lead and 300 ms auditory lag.

19.
The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from rhythm perception. In this method, participants had to align the temporal position of a target in a rhythmic sequence of four markers. In the first experiment, target and markers consisted of a visual flash or an auditory noise burst, and all four combinations of target and marker modalities were tested. In the same-modality conditions, no temporal biases and a high precision of the adjusted temporal position of the target were observed. In the different-modality conditions, we found a systematic temporal bias of 25–30 ms. In the second part of the first experiment and in a second experiment, we tested conditions in which audiovisual markers with different stimulus onset asynchronies (SOAs) between the two components and a visual target were used to quantify temporal ventriloquism. The adjusted target positions varied by up to about 50 ms and depended in a systematic way on the SOA and its proximity to the point of subjective synchrony. These data allowed testing different quantitative models. The most satisfying model, based on work by Maij, Brenner, and Smeets (Journal of Neurophysiology, 102, 490–495, 2009), linked temporal ventriloquism and the percept of synchrony and was capable of adequately describing the results from the present study, as well as those of some earlier experiments.

20.
Previous research reported ambiguous findings regarding the relationship of visuospatial attention and (stereoscopic) depth information. Some studies indicate that attention can be focused on a distinct depth plane, while other investigations revealed attentional capture from irrelevant items located in other, unattended depth planes. To evaluate whether task relevance of depth information modulates the deployment of attentional resources across depth planes, the additional singleton paradigm was adapted: Singletons defined by depth (i.e., displayed behind or in front of a central depth plane) or color (green against gray) were presented among neutral items and served as targets or (irrelevant) distractors. When participants were instructed to search for a color target, no attentional capture from irrelevant depth distractors was observed. In contrast, it took substantially longer to search for depth targets when an irrelevant distractor was presented simultaneously. Color distractors as well as depth distractors caused attentional capture, independent of the distractors' relative depth position (i.e., in front of or behind the target). However, slight differences in task performance were obtained depending on whether or not participants fixated within the target depth plane. Thus, the current findings indicate that attentional resources in general are uniformly distributed across different depth planes. Although task relevant depth singletons clearly affect the attentional system, this information might be processed subsequent to other stimulus features.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号