Similar Documents (20 results)
1.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants’ visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target’s color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants’ visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted to determine whether cue location (close to versus far from the visual display) would influence participants’ visual search performance. Auditory cues presented close to the visual search display produced significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.

2.
When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented “meow” sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target’s characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor’s sound or with no sound. These results suggest that object-based auditory-visual interactions rapidly increase the target object’s salience in visual search.

3.
We investigated the extent to which auditory and visual motion signals are combined when observers are asked to predict the location of a virtually moving target. In Condition 1, the unimodal and bimodal signals were noisy, but the target object was continuously visible and audible; in Condition 2, the virtually moving object was hidden (invisible and inaudible) for a short period prior to its arrival at the target location. Our main finding was that the facilitation due to simultaneous visual and auditory input is very different for the two conditions. When the target is continuously visible and audible (Condition 1), bimodal performance is twice as good as unimodal performance, suggesting a very effective integration mechanism. On the other hand, if the object is hidden for a short period (Condition 2), so that the task requires extrapolating motion speed across a temporal and spatial gap, the facilitation due to combined sensory input is almost absent, and bimodal performance is limited by visual performance.
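The Condition 1 result invites comparison with the standard maximum-likelihood (reliability-weighted) cue-combination benchmark, under which bimodal precision is the sum of the unimodal precisions. The sketch below states only that textbook benchmark; it is not the authors' analysis, and all numbers are hypothetical.

    import math

    def mle_bimodal_sigma(sigma_aud, sigma_vis):
        # Maximum-likelihood integration: bimodal variance equals the product
        # of the unimodal variances divided by their sum.
        var = (sigma_aud**2 * sigma_vis**2) / (sigma_aud**2 + sigma_vis**2)
        return math.sqrt(var)

    # With two equally noisy signals, the benchmark predicts noise reduced by
    # a factor of sqrt(2), i.e., doubled precision (1/sigma^2).
    print(mle_bimodal_sigma(10.0, 10.0))  # ~7.07

On this benchmark, "twice as good" bimodal performance corresponds to doubled precision when the two unimodal inputs are equally reliable.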

4.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (stimulus onset asynchrony; SOA) was varied. Sounds were presented out of visual range, at 135° azimuth to the left or right. In Experiment 1, localization responses were made more quickly at the 100-ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at the 700-ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them while the auditory cue and target were presented behind them. These two experiments differed in that accurate performance of the visual task in Experiment 3 required eye movements to the visual stimuli. In both experiments, a transition from facilitation at the brief SOA to inhibition at the longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) depends on neither eye movements nor saccade programming to sound locations.
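For readers unfamiliar with the convention, the facilitation-to-inhibition transition is usually summarized as a cueing effect: RT on different-location trials minus RT on same-location trials, computed per SOA. A minimal sketch with made-up RTs (the paper's actual values are not given here):

    # Hypothetical mean localization RTs (ms) by SOA and cue-target relation.
    mean_rt = {
        (100, "same"): 480, (100, "different"): 510,  # same-location advantage
        (700, "same"): 530, (700, "different"): 500,  # same-location cost
    }

    for soa in (100, 700):
        effect = mean_rt[(soa, "different")] - mean_rt[(soa, "same")]
        label = "facilitation" if effect > 0 else "inhibition of return"
        print(f"SOA {soa} ms: cueing effect = {effect:+d} ms ({label})")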

5.
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.

6.
The present experiments were designed to test whether or not processing in visual information channels defined by spatial position is independent in the visual search paradigm. In Experiment 1, subjects were asked to judge whether or not a red square was present in a display of two colored geometric figures. Their mean reaction time (RT) for responding “no” to a “divided target” display, in which one figure was red and the other was a square, was about 100 msec longer than to control displays containing either two red circles or two green squares. This result is inconsistent with a spatially serial independent-channel model and with many spatially parallel independent-channel models. The relatively slow responding to divided target displays was replicated in Experiments 2 and 3, in which subjects judged whether or not an “A” was present in a display of two alphanumeric characters, and a divided target display was one that contained two features of “A.” Experiments 4 and 5 demonstrated that the dependence observed in the first three experiments was probably the result of two mechanisms: crosstalk integration, whereby the target features are integrated across the two spatial channels, and repetition facilitation, whereby processing is facilitated (in some cases) when the two figures in the display are physically identical. Experiment 6 suggested that subjects organized the display in terms of spatial channels even when the task allowed them to ignore spatial location.

7.
This study explored whether hand location affects spatial attention. The authors used a visual covert-orienting paradigm to examine whether two spatial attention mechanisms, location prioritization and attention shifting, are supported by bimodal, hand-centered representations of space. With one hand placed next to a potential target location, participants detected visual targets following highly predictive visual cues. There was no a priori reason for the hand to influence task performance unless hand presence influenced attention. Results showed that target detection near the hand was facilitated relative to detection away from the hand, regardless of cue validity. Similar facilitation was found with only proprioceptive or only visual information about hand location, but not with arbitrary visual anchors or distant targets. Hand presence thus affected the attentional prioritization of space, not the shifting of attention.

8.
Königs, K., Knöll, J., & Bremmer, F. (2007). Perception, 36(10), 1507-1512.
Previous studies have shown that the perceived location of visual stimuli briefly flashed during smooth pursuit, saccades, or optokinetic nystagmus (OKN) is not veridical. We investigated whether these mislocalisations can also be observed for brief auditory stimuli presented during OKN. Experiments were carried out in a lightproof, sound-attenuated chamber. Participants performed eye movements elicited by visual stimuli while an auditory target (white noise) was presented for 5 ms. Our data clearly indicate that auditory targets are mislocalised during reflexive eye movements: OKN induces a shift of perceived location in the direction of the slow eye movement, and this shift is modulated in the temporal vicinity of the fast phase. The mislocalisation is stronger for look-nystagmus than for stare-nystagmus. The size and temporal pattern of the observed mislocalisation differ from those found for visual targets, suggesting that different neural mechanisms are at play in integrating oculomotor signals with information about the spatial locations of visual and auditory stimuli.

9.
Recent studies have shown that cueing eye gaze can affect the processing of visual information, a phenomenon called the gaze-orienting effect (visual-GOE). Emerging evidence has shown that cueing eye gaze also affects the processing of auditory information (auditory-GOE). However, it is unclear whether the auditory-GOE is modulated by emotion. We conducted three behavioural experiments to investigate whether cueing eye gaze influences the orientation judgement of a sound, and whether this effect is modulated by facial expressions. The current study used four facial expressions (angry, fearful, happy, and neutral), manipulated how the expressions were displayed, and varied the temporal order of the gaze and expression cues. Participants were required to judge the sound's orientation after viewing the facial expression and gaze cues. The results showed that the orientation judgement of the sound was influenced by gaze direction in all three experiments: judgements were faster when the face was oriented toward the target location (congruent trials) than when it was oriented away from it (incongruent trials). The modulation of the auditory-GOE by emotion was observed only when the gaze shift was followed by the facial expression (Experiment 3), where the auditory-GOE was significantly greater for angry than for neutral faces. These findings indicate that the auditory-GOE is a widespread social phenomenon that is modulated by facial expression, and that a gaze shift preceding the presentation of emotion was the key factor enabling emotional modulation in an auditory target gaze-orienting task. Our findings suggest that the integration of facial expressions and eye gaze is context-dependent.

10.
Three experiments examined visual orienting in response to spatial precues. In Experiments 1 and 2, the attentional effects of central letters were stimulus-driven: Orienting depended on the spatial layout of the cue display. When there were no correspondences between spatial features of the cue display and the target location, attentional effects were absent, despite a conscious intention to orient in response to the symbolic information carried by the cue letters. In Experiment 3, clear orienting effects were observed when the target location corresponded with spatial features of the cue display, but the magnitude of these effects was unaffected by whether participants were aware or unaware of the cue–target relationship. These findings are consistent with the view that (1) spatial correspondences between cues and targets are a critical factor driving visual orienting in cueing paradigms, and (2) the attentional effects of spatial precues are largely independent of participants’ conscious awareness of the cue–target relationship.

11.
Two experiments are reported with identical auditory stimulation in three-dimensional space but different instructions: Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity–location mismatches between the prime distractor and the probe target; that is, when either the sound was repeated but not the location, or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed when the prime distractor sound was repeated as the probe target but at another location, whereas identity changes at the same location produced no impairment. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or to locate a sound; the instructions determine which kind of identity–location mismatch is detected. Identity information predominates over location information in auditory memory.
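The underlying design crosses identity and location transitions between the prime distractor and the probe target, yielding four cells; the two "partial" cells (one feature repeated, the other changed) are where the costs arise. A schematic enumeration (labels are ours, not the authors'):

    from itertools import product

    # Prime-distractor -> probe-target transitions. In the localization task,
    # both partial-mismatch cells showed costs; in the identification task,
    # only the identity-repeated/location-changed cell did.
    for identity, location in product(("repeated", "changed"), repeat=2):
        partial = identity != location
        note = "partial repetition (cost expected)" if partial else "complete repetition or change"
        print(f"identity {identity:8} | location {location:8} | {note}")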

12.
Two experiments examined cross-trial positional priming (V. Maljkovic & K. Nakayama, 1994, 1996, 2000) in visual pop-out search. Experiment 1 used regularly arranged target and distractor displays, as in previous studies. Reaction times were expedited when the target appeared at a previous target location (facilitation relative to a neutral baseline) and slowed when the target appeared at a previous distractor location (inhibition). In contrast to facilitation, inhibition emerged only after extended practice. Experiment 2 revealed reduced facilitatory and no inhibitory priming when the elements' spatial arrangement was made irregular, indicating that positional priming, in particular its inhibitory component, critically depends on the configuration of the display elements across sequences of trials. These results are discussed with respect to the role of context in cross-trial priming in visual pop-out search.

13.
We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results demonstrate that the crossmodal facilitation of participants' visual identification performance elicited by the presentation of a simultaneous sound occurs over a very narrow range of ISIs. This critical time window lies just beyond the interval needed for participants to differentiate the target and mask as two distinct perceptual events (Experiment 1) and can be dissociated from any facilitation elicited by making the visual target physically brighter (Experiment 2). When the sound is presented at the same time as the mask, a facilitatory, rather than inhibitory, effect on visual target identification performance is still observed (Experiment 3). We further demonstrate that the crossmodal facilitation of the visual target by the sound depends on the establishment of a reliable temporally coincident relationship between the two stimuli (Experiment 4), whereas spatial coincidence is not necessary (Experiment 5). We suggest that when visual and auditory stimuli are always presented synchronously, a better-consolidated object representation is likely to be constructed than that resulting from unimodal visual stimulation.

14.
Previously, we showed that the visual bias of auditory sound location, or ventriloquism, does not depend on the direction of deliberate, or endogenous, attention (Bertelson, Vroomen, de Gelder, & Driver, 2000). In the present study, a similar question concerning automatic, or exogenous, attention was examined. The experimental manipulation was based on the fact that exogenous visual attention can be attracted toward a singleton, that is, an item that differs on some dimension from all other items presented simultaneously. The display consisted of a row of four bright squares with one square, in either the left- or the rightmost position, smaller than the others, serving as the singleton. In Experiment 1, subjects made dichotomous left–right judgments concerning sound bursts whose successive locations were controlled by a psychophysical staircase procedure and which were presented in synchrony with a display containing the singleton on either the left or the right. Results showed that the apparent location of the sound was attracted not toward the singleton, but instead toward the big squares at the opposite end of the display. Experiment 2 was run to check that the singleton effectively attracted exogenous attention. The task was to discriminate target letters presented either on the singleton or on the opposite big square. Performance deteriorated when the target was on the big square opposite the singleton, in comparison with control trials with no singleton, showing that the singleton attracted attention away from the target location. In Experiment 3, localization and discrimination trials were mixed randomly so as to control for potential differences in subjects' strategies across the two preceding experiments. Results were as before: The singleton attracted attention, whereas sound localization was shifted away from it. Ventriloquism can thus be dissociated from exogenous visual attention and appears to reflect sensory interactions that leave little role for the direction of visual spatial attention.
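Experiment 1's sound locations were controlled by a psychophysical staircase; the paper's exact rule is not specified here, but a generic 1-up/1-down staircase for left–right judgments looks like the following sketch (all parameters hypothetical):

    import random

    def staircase(judge, start=10.0, step=1.0, n_trials=40):
        # Generic 1-up/1-down adaptive staircase over lateral sound position
        # (arbitrary units). judge(x) -> True if the listener reports "right".
        # Stepping against each response converges on the 50% point, i.e.,
        # the subjectively centered location.
        xs, x = [], start
        for _ in range(n_trials):
            xs.append(x)
            x += -step if judge(x) else step
        return xs

    # Simulated listener whose subjective center is shifted to +2 units.
    track = staircase(lambda x: x + random.gauss(0.0, 1.0) > 2.0)
    print(track[-5:])  # hovers near +2 once converged

A ventriloquizing display would shift this convergence point, which is how attraction toward or away from the singleton can be quantified.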

15.
Multisensory-mediated auditory localization
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein & Meredith, 1993, The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound while a neutral visual stimulus was presented either above threshold (suprathreshold) or at threshold. In both cases, the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias, and hence perceptual translocation of the auditory stimulus, appeared when the visual stimulus was suprathreshold, regardless of its location. This was not the case when the visual stimulus was at threshold: There, the influence of the visual cue was apparent only when the two cues were spatially coincident, and it resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
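The inverse effectiveness rule referenced above is conventionally quantified with the multisensory enhancement index of Stein and Meredith (1993): the crossmodal response expressed as a percentage gain over the best unimodal response. A small illustrative sketch (values hypothetical):

    def enhancement_index(crossmodal, best_unimodal):
        # Percent multisensory enhancement relative to the most effective
        # unimodal response (after Stein & Meredith, 1993).
        return 100.0 * (crossmodal - best_unimodal) / best_unimodal

    # Inverse effectiveness: weaker unimodal responses gain proportionally more.
    print(enhancement_index(crossmodal=12.0, best_unimodal=10.0))  # +20.0
    print(enhancement_index(crossmodal=4.0, best_unimodal=2.0))    # +100.0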

16.
Despite previous failures to identify visual-upon-auditory spatial-cuing effects, recent studies have demonstrated that the abrupt onset of a lateralized visual stimulus triggers a shift of spatial attention that affects auditory judgments. Nevertheless, it remained unclear whether a centrally presented visual stimulus orients auditory attention. The present study investigated whether centrally presented gaze cues trigger a reflexive shift of attention that affects auditory judgments. Participants fixated on a schematic face in which the eyes looked left or right (the cue). A target sound was then presented to the left or right of the cue. Participants judged the direction of the target as quickly as possible. Even though participants were told that the gaze direction did not predict the direction of the target, response times were significantly faster when the gaze was in the target direction than when it was in the non-target direction. These findings provide initial evidence for visual-upon-auditory spatial-cuing effects produced by centrally presented cues, suggesting that a reflexive crossmodal shift of attention does occur with a centrally presented visual stimulus.

17.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or at a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this bias was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.
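The automatic interaction described here is commonly modeled as a reliability-weighted average, in which each modality's estimate is weighted by its inverse variance, so the typically more precise visual signal pulls the perceived sound location toward the flash. This is a standard account rather than the authors' own model; the sketch and its numbers are illustrative:

    def perceived_sound_location(x_aud, x_vis, sigma_aud, sigma_vis):
        # Reliability-weighted average: weights are inverse variances, so the
        # more precise (here, visual) cue attracts the combined estimate.
        w_vis = sigma_aud**2 / (sigma_aud**2 + sigma_vis**2)
        return w_vis * x_vis + (1.0 - w_vis) * x_aud

    # Sound at 0 deg, synchronous flash at +10 deg, vision far more precise.
    print(perceived_sound_location(0.0, 10.0, sigma_aud=8.0, sigma_vis=2.0))  # ~9.4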

18.
The categorization and identification of previously ignored visual or auditory stimuli is typically slowed down, a phenomenon that has been called the negative priming effect and can be explained by the episodic retrieval of response-inadequate prime information and/or by an inhibitory model. A similar after-effect has been found in visuospatial tasks: Participants are slower to localize a visual stimulus that appears at a previously ignored location. In the auditory modality, however, such an after-effect of ignoring a sound at a specific location has never been reported. Instead, participants are impaired in their localization performance when the sound at the previously ignored location changes identity, a finding compatible with the so-called feature-mismatch hypothesis. Here, we describe the properties of auditory spatial negative priming in contrast to its visuospatial counterpart and report two experiments that specify the nature of this auditory after-effect. Experiment 1 shows that the detection of identity–location mismatches is a genuinely auditory phenomenon that can be replicated even when the sound sources are invisible. Experiment 2 reveals that the detection of sound-identity mismatches in the probe depends on the processing demands in the prime. This finding implies that the localization of irrelevant sound sources is not the inevitable consequence of processing the auditory prime scenario but depends on the difficulty of the target search among distractor sounds.

19.
Novel stimuli reliably attract attention, suggesting that novelty may disrupt performance when it is task-irrelevant. However, under certain circumstances novel stimuli can also elicit a general alerting response with beneficial effects on performance. In a series of experiments, we investigated whether different aspects of novelty (stimulus novelty, contextual novelty, surprise, deviance, and relative complexity) lead to distraction or facilitation. We used a version of the visual oddball paradigm in which participants responded to an occasional auditory target. Participants responded faster to this auditory target when it occurred during the presentation of novel visual stimuli than of standard stimuli, especially at SOAs of 0 and 200 ms (Experiment 1). Facilitation was absent for both infrequent simple deviants and frequent complex images (Experiment 2). However, repeated complex deviant images did facilitate responses to the auditory target at the 200-ms SOA (Experiment 3). These findings suggest that task-irrelevant deviant visual stimuli can facilitate responses to an unrelated auditory target within a short 0–200 ms window after presentation, but only when the deviant stimuli are complex relative to the standard stimuli. We link our findings to the novelty P3, which is generated under the same circumstances, and to the adaptive gain theory of the locus coeruleus–norepinephrine system (Aston-Jones & Cohen, 2005), which may explain the timing of the effects.

20.
A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the ‘ventriloquist aftereffect’, reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their physical spatial discordance. Such dynamic changes to sensory representations are likely to underlie the brain’s ability to accommodate inter-sensory discordance produced by sensory errors (particularly in sound localization) and variability in sensory transduction. It is currently unknown, however, whether the plastic changes induced by adaptation to spatially disparate inputs occur automatically or whether they depend on selectively attending to the visual or auditory stimuli. Here, we demonstrate that robust auditory spatial aftereffects can be induced even in the presence of a competing visual stimulus. Importantly, we found that when attention is directed to the competing stimuli, the pattern of aftereffects is altered. These results indicate that attention can modulate the ventriloquist aftereffect.
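Operationally, the aftereffect is the post-adaptation shift in unimodal sound localization toward the side of the previously offset visual stimulus. A minimal pre/post computation with made-up pointing data:

    # Hypothetical pointing responses (deg) to identical auditory targets
    # before vs. after adapting to audio-visual pairs offset by +8 deg.
    pre = [-0.5, 0.3, -0.2, 0.1, 0.4]
    post = [1.8, 2.4, 1.6, 2.1, 2.2]

    aftereffect = sum(post) / len(post) - sum(pre) / len(pre)
    print(f"aftereffect: {aftereffect:+.1f} deg toward the adapted visual offset")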
