Similar Documents
20 similar documents found (search time: 187 ms)
1.
The "pip-and-pop effect" refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch-brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

2.
In the present study, participants identified the location of a visual target presented in a rapidly masked, changing sequence of visual distractors. In Experiment 1, we examined performance when a high tone, embedded in a sequence of low tones, was presented in synchrony with the visual target and observed that the high tone improved visual target identification, relative to a condition in which a low tone was synchronized with the visual target, thus replicating Vroomen and de Gelder's (2000, Experiment 1) findings. In subsequent experiments, we presented a single visual, auditory, vibrotactile, or combined audiotactile cue with the visual target and found similar improvements in participants' performance regardless of cue type. These results suggest that crossmodal perceptual organization may account for only a part of the improvement in participants' visual target identification performance reported in Vroomen and de Gelder's original study. Moreover, in contrast with many previous crossmodal cuing studies, our results also suggest that visual cues can enhance visual target identification performance. Alternative accounts for these results are discussed in terms of enhanced saliency, the presence of a temporal marker, and attentional capture by oddball stimuli as potential explanations for the observed performance benefits.

3.
Previous research has shown that sounds facilitate perception of visual patterns appearing immediately after the sound but impair perception of patterns appearing after some delay. Here we examined the spatial gradient of the fast crossmodal facilitation effect and the slow inhibition effect in order to test whether they reflect separate mechanisms. We found that crossmodal facilitation is only observed at visual field locations overlapping with the sound, whereas crossmodal inhibition affects the whole hemifield. Furthermore, we tested whether multisensory perceptual learning with misaligned audio-visual stimuli reshapes crossmodal facilitation and inhibition. We found that training shifts crossmodal facilitation towards the trained location without changing its range. By contrast, training narrows the range of inhibition without shifting its position. Our results suggest that crossmodal facilitation and inhibition reflect separate mechanisms that can both be reshaped by multisensory experience even in adult humans. Multisensory links seem to be more plastic than previously thought.

4.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.

5.
Using a dual-target identification task during rapid serial visual presentation (RSVP), we examined facilitation and interference effects exerted by emotional stimuli. Emotionally arousing first targets (T1s) were encoded with higher accuracy than neutral T1s. At the same time, identification of a second neutral target (T2) was impaired, reflecting a failure to disengage attention from arousing T1s. Similar interference was triggered by arousing filler stimuli that were not voluntarily searched for in the RSVP stream (Experiment 2). In Experiment 3, we showed that interference is reduced (though facilitation for arousing T1s is maintained) when the second task itself involves variations in emotional arousal. Conversely, when arousal associated with the T2 stimulus was predictable, interference recurred (Experiment 4). Our findings indicate that the perceived emotional intensity of a stimulus is a determinant of successful identification during RSVP: Encoding of arousing stimuli is reliably facilitated. Interference effects with subsequent processing arise independently and are strongly modulated by the overall task context and specific processing strategies.

6.
Spence C, Walton M. Acta Psychologica, 2005, 118(1-2): 47-70
We investigated the extent to which people can selectively ignore distracting vibrotactile information when performing a visual task. In Experiment 1, participants made speeded elevation discrimination responses (up vs. down) to a series of visual targets presented from one of two eccentricities on either side of central fixation, while simultaneously trying to ignore task-irrelevant vibrotactile distractors presented independently to the finger (up) vs. thumb (down) of either hand. Participants responded significantly more slowly, and somewhat less accurately, when the elevation of the vibrotactile distractor was incongruent with that of the visual target than when they were presented from the same (i.e., congruent) elevation. This crossmodal congruency effect was significantly larger when the visual and tactile stimuli appeared on the same side of space than when they appeared on different sides, although the relative eccentricity of the two stimuli within the hemifield (i.e., same vs. different) had little effect on performance. In Experiment 2, participants who crossed their hands over the midline showed a very different pattern of crossmodal congruency effects to participants who adopted an uncrossed hands posture. Our results suggest that both the relative external location and the initial hemispheric projection of the target and distractor stimuli contribute jointly to determining the magnitude of the crossmodal congruency effect when participants have to respond to vision and ignore touch.

7.
Novel stimuli reliably attract attention, suggesting that novelty may disrupt performance when it is task-irrelevant. However, under certain circumstances novel stimuli can also elicit a general alerting response having beneficial effects on performance. In a series of experiments we investigated whether different aspects of novelty – stimulus novelty, contextual novelty, surprise, deviance, and relative complexity – lead to distraction or facilitation. We used a version of the visual oddball paradigm in which participants responded to an occasional auditory target. Participants responded faster to this auditory target when it occurred during the presentation of novel visual stimuli than of standard stimuli, especially at SOAs of 0 and 200 ms (Experiment 1). Facilitation was absent for both infrequent simple deviants and frequent complex images (Experiment 2). However, repeated complex deviant images did facilitate responses to the auditory target at the 200 ms SOA (Experiment 3). These findings suggest that task-irrelevant deviant visual stimuli can facilitate responses to an unrelated auditory target in a short 0–200 millisecond time-window after presentation. This only occurs when the deviant stimuli are complex relative to standard stimuli. We link our findings to the novelty P3, which is generated under the same circumstances, and to the adaptive gain theory of the locus coeruleus–norepinephrine system (Aston-Jones and Cohen, 2005), which may explain the timing of the effects.

8.
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants' picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter's (1993) notion of conceptual short-term memory.

9.
Motor responses can be facilitated by congruent visual stimuli and prolonged by incongruent visual stimuli that are made invisible by masking (direct motor priming). Recent studies on direct motor priming showed a reversal of these priming effects when a three-stimulus paradigm was used in which a prime was followed by a mask and a target stimulus was presented after a delay. A similar three-stimulus paradigm on nonmotor priming, however, showed no reversal of priming effects when the mask was used as a cue for processing of the following target stimulus (cue priming). Experiment 1 showed that the time interval between mask and target is crucial for the reversal of priming. Therefore, the time interval between mask and target was varied in three experiments to see whether cue priming is also subject to inhibition at a certain time interval. Cues indicated (1) the stimulus modality of the target stimulus, (2) the task to be performed on a multidimensional auditory stimulus, or (3) part of the motor response. Whereas direct motor priming showed the reversal of priming about 100 msec after mask presentation, cue priming effects simply decayed during the 300 msec after mask presentation. These findings provide boundary conditions for accounts of inverse priming effects.

10.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N=56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect, with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result indicates that congruent visual and auditory stimulus pairs were perceived as the same object, and it provides a first validation of the multimodal stimulus set.

11.
Object substitution masking (OSM) occurs when an initial display of a target and mask continues with the mask alone, creating a mismatch between the reentrant hypothesis, triggered by the initial display, and the ongoing low-level activity. We tested the proposition that the critical factor in OSM is not whether the mask remains in view after target offset, but whether the representation of the mask is sufficiently stronger than that of the target when the reentrant signal arrives. In Experiment 1, a variable interstimulus interval (ISI) was inserted between the initial display and the mask alone. The trailing mask was presumed to selectively boost the strength of the mask representation relative to that of the target. As predicted, OSM occurred at intermediate ISIs, at which the mask was presented before the arrival of the reentrant signal, creating a mismatch, but not at long ISIs, at which a comparison between the reentrant signal and the low-level activity had already been made. Experiment 2, conducted in dark-adapted viewing, ruled out the possibility that low-level inhibitory contour interactions (metacontrast masking) had played a significant role in Experiment 1. Metacontrast masking was further ruled out in Experiment 3, in which the masking contours were reduced to four small dots. We concluded that OSM does not depend on extended presentation of the mask alone, but on a mismatch between the reentrant signals and the ongoing activity at the lower level. The present results place constraints on estimates of the timing of reentrant signals involved in OSM.

12.
When a target is enclosed by a 4-dot mask that persists after the target disappears, target identification is worse than it is when the mask terminates with the target. This masking effect is attributed to object substitution masking (OSM). Previewing the mask, however, attenuates OSM. This study investigated specific conditions under which mask preview was, or was not, effective in attenuating masking. In Experiment 1, the interstimulus interval (ISI) between previewed mask offset and target presentation was manipulated. The basic preview effect was replicated; neither ISI nor preview duration influenced target identification performance. In Experiment 2, mask configurations were manipulated. When the mask configuration at preview matched that at target presentation, the preview effect was replicated. New evidence of ineffective mask preview was found: When the two configurations did not match, performance declined. Yet, when the ISI between previewed mask offset and target presentation was removed such that the mask underwent apparent motion, preview was effective despite the configuration mismatch. An interpretation based on object representations provides an excellent account of these data.

13.
Three experiments were conducted using a repetition priming paradigm: Auditory word or environmental sound stimuli were identified by subjects in a pre-test phase, which was followed by a perceptual identification task using either sounds or words in the test phase. Identification of an environmental sound was facilitated by prior presentation of the same sound, but not by prior presentation of a spoken label (Experiments 1 and 2). Similarly, spoken word identification was facilitated by previous presentation of the same word, but not when the word had been used to label an environmental sound (Experiment 1). A degree of abstraction was demonstrated in Experiment 3, which revealed a facilitation effect between similar sounds produced by the same type of source. These results are discussed in terms of the Transfer Appropriate Processing, activation, and systems approaches.

14.
Extracting general rules from specific examples is important, because the same underlying problem must often be solved when it appears in different formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning depends not only on better statistical probability and redundant sensory information, but also on the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ

15.
Change blindness is the name given to people's inability to detect changes introduced between two consecutively-presented scenes when they are separated by a distractor that masks the transients that are typically associated with change. Change blindness has been reported within vision, audition, and touch, but has never before been investigated when successive patterns are presented to different sensory modalities. In the study reported here, we investigated change detection performance when the two to-be-compared stimulus patterns were presented in the same sensory modality (i.e., both visual or both tactile) and when one stimulus pattern was tactile while the other was presented visually or vice versa. The two to-be-compared patterns were presented consecutively, separated by an empty interval, or else separated by a masked interval. In the latter case, the masked interval could either be tactile or visual. The first experiment investigated visual-tactile and tactile-visual change detection performance. The results showed that in the absence of masking, participants detected changes in position accurately, despite the fact that the two to-be-compared displays were presented in different sensory modalities. Furthermore, when a mask was presented between the two to-be-compared displays, crossmodal change blindness was elicited no matter whether the mask was visual or tactile. The results of two further experiments showed that performance was better overall in the unimodal (visual or tactile) conditions than in the crossmodal conditions. These results suggest that certain of the processes underlying change blindness are multisensory in nature. We discuss these findings in relation to recent claims regarding the crossmodal nature of spatial attention.

16.
The temporal occurrence of a flash can be shifted towards a slightly offset sound (temporal ventriloquism). Here we examined whether four-dot masking is affected by this phenomenon. In Experiment 1, we demonstrate that there is release from four-dot masking if two sounds (one before the target and one after the mask) are presented at ∼100 ms intervals rather than at ∼0 ms intervals or in a silent condition. In Experiment 2, we show that the release from masking originates from an alerting effect of the first sound, and a temporal ventriloquist effect from the first and second sounds that lengthened the perceived interval between target and mask, thereby leaving more time for the target to consolidate. Results thus show that sounds penetrate the visual system at more than one level.

17.
Doyle MC, Snowden RJ. Perception, 2001, 30(7): 795-810
Can auditory signals influence the processing of visual information? The present study examined the effects of simple auditory signals (clicks and noise bursts) whose onset was simultaneous with that of the visual target, but which provided no information about the target. It was found that such a signal enhances performance in the visual task: the accessory sound reduced response times for target identification with no cost to accuracy. The spatial location of the sound (whether central to the display or at the target location) did not modify this facilitation. Furthermore, the same pattern of facilitation was evident whether the observer fixated centrally or moved their eyes to the target. The results were not altered by changes in the contrast (and therefore visibility) of the visual stimulus or by the perceived utility of the spatial location of the sound. We speculate that the auditory signal may promote attentional 'disengagement' and that, as a result, observers are able to process the visual target sooner when sound accompanies the display relative to when visual information is presented alone.

18.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations, but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of the scalar expectancy theory of time perception, our third and fourth experiments imply that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).
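For context, a brief gloss that is not part of the original abstract: in the standard pacemaker-accumulator formulation of scalar expectancy theory, a pacemaker emits pulses at rate $\lambda$ while the timed interval is open, an accumulator counts them, and judged duration tracks the count, with relative variability roughly constant (the scalar property):

$$N(t) = \lambda\, t, \qquad \hat{t} \propto N(t), \qquad \frac{\sigma(\hat{t})}{\mathbb{E}[\hat{t}]} \approx \text{const.}$$

On this reading, the clock-rate changes reported for Experiments 3 and 4 would act on $\lambda$, whereas the temporal ventriloquism-like effects would instead shift the perceived on- and offsets that open and close the accumulation interval.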

19.
Three experiments investigated the nature of the information required for the lexical access of visual words. A four-field masking procedure was used, in which the presentation of consecutive prime and target letter strings was preceded and followed by presentations of a pattern mask. This procedure prevented subjects from identifying, and thus intentionally using, prime information. Experiment I established the existence of a semantic priming effect on target identification, demonstrating the lexical access of primes under these conditions. It also showed a word repetition effect independent of letter case. Experiment II tested whether this repetition effect was due to the activation of graphemic or phonemic information. The graphemic and phonemic similarity of primes and targets was varied. No evidence for phonemic priming was found, although a graphemic priming effect, independent of the physical similarity of the stimuli, was obtained. Finally, Experiment III demonstrated that, irrespective of whether the prime was a word or a nonword, graphemic priming was equally effective. In both Experiments II and III, however, the word repetition effect was stronger than the graphemic priming effect. It is argued that facilitation from graphemic priming was due to the prime activating a target representation coded for abstract (non-visual) graphemic features, such as letter identities. The extra facilitation from same-identity priming was attributed to semantic as well as graphemic activation. The implications of these results for models of word recognition are discussed.

20.
Presenting two targets in a rapid visual stream will frequently result in the second target (T2) being missed when it is presented shortly after the first target (T1). This so-called attentional blink (AB) phenomenon can be reduced by various experimental manipulations. This study investigated the effect of combining T2 with a non-specific sound, played either simultaneously with T2 or preceding T2 by a fixed latency. The reliability of the observed effects and their correlation with potential predictors were studied. The tone significantly improved T2 identification rates regardless of tone condition and of the delay between targets, suggesting that the crossmodal facilitation of T2 identification is not limited to visual-perceptual enhancement. For the simultaneous condition, an additional time-on-task effect was observed in the form of a reduction of the AB within an experimental session. Thus, audition-driven enhancement of visual perception may need some time for its full potential to unfold. Split-half and test-retest reliability were found consistently only for a condition without additional sound. AB magnitude obtained in this condition was related to AB magnitudes obtained in both sound conditions. Self-reported distractibility and performance in tests of divided attention and of cognitive flexibility correlated with the AB magnitudes of a subset, but never all, of the conditions under study. Reliability and correlation results suggest that not only dispositional abilities but also state factors exert an influence on AB magnitude. These findings extend earlier work on audition-driven enhancement of target identification in the AB and on the reliability and behavioural correlates of the AB.
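As a reading aid, not part of the original abstract: AB magnitude is conventionally quantified as the drop in T2|T1 report accuracy at short lags relative to long lags, and split-half reliability as the Spearman-Brown-corrected correlation between estimates from odd and even trials. A minimal sketch under those assumptions, with hypothetical numbers:

import numpy as np

def ab_magnitude(t2_given_t1, short_lags, long_lags):
    # Mean T2|T1 accuracy at long lags minus that at short lags;
    # larger values indicate a deeper attentional blink.
    short = np.mean([t2_given_t1[lag] for lag in short_lags])
    long = np.mean([t2_given_t1[lag] for lag in long_lags])
    return long - short

def split_half_reliability(odd_half, even_half):
    # Pearson correlation between AB magnitudes computed from odd and
    # even trials, stepped up with the Spearman-Brown prophecy formula.
    r = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r / (1 + r)

# Hypothetical per-lag T2|T1 accuracies (lag in ms) for one participant:
acc = {200: 0.55, 300: 0.60, 700: 0.85, 800: 0.90}
print(ab_magnitude(acc, short_lags=[200, 300], long_lags=[700, 800]))  # ≈ 0.30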
