Similar Documents
20 similar documents found.
1.
Cross-modal effects on visual and auditory object perception

2.
Tagliabue, Zorzi, Umiltà, and Bassignani (2000) showed that practice with a spatially incompatible task influences performance in a Simon task even when the interval between the two tasks is as long as 1 week. In the present study, three experiments were conducted to investigate whether such an effect could be found in a cross-modal paradigm, in which the stimuli of the two tasks were presented in different modalities. Subjects performed either compatible or incompatible mappings in an acoustic spatial compatibility task and, after an interval of 5 min, 24 h, or 7 days, performed a visual Simon task. Results show that the spatially incompatible mapping task affected performance in the Simon task: The Simon effect was absent at all three intervals. This pattern mirrors the results of the Tagliabue et al. study, in which both tasks were performed in the same (visual) modality. Our findings rule out explanations based on episodic/contextual effects and support the hypothesis of a long-lasting spatial remapping that is not modality specific.

3.
Responses are typically faster and more accurate when both auditory and visual modalities are stimulated than when only one is. This bimodal advantage is generally attributed to a speeding of responses on bimodal trials relative to unimodal trials, but it remains possible that the effect instead reflects a performance decrement on unimodal trials. To investigate this, two levels of auditory and visual signal intensity were combined in a double-factorial paradigm. Responses to the onset of the imperative signal were measured under go/no-go conditions. Mean reaction times to the four types of bimodal stimuli exhibited a superadditive interaction, which is evidence for parallel self-terminating processing of the two signal components. Violations of the race model inequality also occurred, and measures of processing capacity showed that efficiency was greater on bimodal than on unimodal trials. These data are discussed in terms of a possible underlying neural substrate.
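For readers unfamiliar with these diagnostics, two standard formulations may help (a sketch, assuming the conventional definitions of Townsend and Nozawa's double-factorial paradigm and Miller's race model inequality, which this abstract appears to invoke). The mean interaction contrast over the four bimodal intensity conditions is

MIC = \overline{RT}_{LL} - \overline{RT}_{LH} - \overline{RT}_{HL} + \overline{RT}_{HH},

where L and H denote low- and high-intensity auditory and visual components; a positive (superadditive) MIC is the signature of parallel self-terminating processing. The race model inequality states that, for every time t,

P(RT \le t \mid AV) \le P(RT \le t \mid A) + P(RT \le t \mid V);

observed violations mean that bimodal responses are faster than any race between independent unimodal detections could produce.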

4.
Three experiments with musicians and nonmusicians (N = 338) explored variations of Deutsch's musical scale illusion. Conditions under which the illusion occurs were elucidated, and the data supported Bregman's suggestion that auditory streaming results from a competition among alternative perceptual organizations. In Experiment 1, a series of studies showed that the scale illusion is more difficult to induce than would be expected if the illusion were present for most observers despite minor changes in stimuli and experimental conditions; the stimulus sequence seems better described as an ambiguous figure. Having discovered conditions under which the scale illusion could be reliably induced, Experiments 2 and 3 manipulated additional properties of the stimulus (timbre, loudness, and tune) to provide cues to streaming other than pitch and location. The data showed that streaming of this sequence can be altered by these properties, supporting the notion of a general parsing mechanism that follows general Gestalt principles and allows streaming along many stimulus dimensions. Finally, suggestions are made as to how this mechanism might operate.

5.
Dupoux E, de Gardelle V, Kouider S. Cognition, 2008, 109(2): 267-273
Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and stimulus strength (energy, duration). Here, we used a masked speech priming method in conjunction with a submillisecond interaural delay manipulation to contrast subliminal and supraliminal processing at constant prime, mask and target strength. This delay induced a perceptual streaming effect, with the prime popping out in the supraliminal condition. By manipulating the prime-target interval (ISI), we show a qualitatively distinct profile of priming longevity as a function of prime awareness. While subliminal priming disappeared after half a second, supraliminal priming was independent of ISI. This shows that the distinction between conscious and unconscious processing depends on high-level perceptual streaming factors rather than low-level features (energy, duration).

6.
When a formant transition and the remainder of a syllable are presented to subjects' opposite ears, most subjects perceive two simultaneous sounds: a syllable and a nonspeech chirp. It has been demonstrated that, when the remainder of the syllable (base) is kept unchanged, the identity of the perceived syllable will depend on the kind of transition presented at the opposite ear. This phenomenon, called duplex perception, has been interpreted as the result of the independent operation of two perceptual systems or modes, the phonetic and the auditory mode. In the present experiments, listeners were required to identify and discriminate such duplex syllables. In some conditions, the isolated transition was embedded in a temporal sequence of capturing transitions sent to the same ear. This streaming procedure significantly weakened the contribution of the transition to the perceived phonetic identity of the syllable. It is likely that the sequential integration of the isolated transition into a sequence of capturing transitions affected its fusion with the contralateral base. This finding contrasts with the idea that the auditory and phonetic processes are operating independently of each other. The capturing effect seems to be more consistent with the hypothesis that duplex perception occurs in the presence of conflicting cues for the segregation and the integration of the isolated transition with the base.

7.
Adults and infants were tested for the capacity to detect correspondences between nonspeech sounds and real vowels. The /i/ and /a/ vowels were presented in 3 different ways: auditory speech, silent visual faces articulating the vowels, or mentally imagined vowels. The nonspeech sounds were either pure tones or 3-tone complexes that isolated a single feature of the vowel without allowing the vowel to be identified. Adults perceived an orderly relation between the nonspeech sounds and vowels. They matched high-pitched nonspeech sounds to /i/ vowels and low-pitched nonspeech sounds to /a/ vowels. In contrast, infants could not match nonspeech sounds to the visually presented vowels. Infants' detection of correspondence between auditory and visual speech appears to require the whole speech signal; with development, an isolated feature of the vowel is sufficient for detection of the cross-modal correspondence.

8.
In the McGurk effect, visual information specifying a speaker's articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli: the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects' auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words pluck and bow were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information bearing an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.

9.
Three priming experiments were conducted to determine how information about the self from different sensory modalities/cognitive domains affects self-face recognition. Being exposed to your own body odor, seeing your own name, and hearing your own name all facilitated self-face recognition in a reaction time task. No similar cross-modal facilitation was found among stimuli from familiar or novel individuals. The finding of a left-hand advantage for self-face recognition was replicated when no primes were presented. These data, along with other recent results, suggest that the brain processes and represents information about the self in highly integrated ways.

10.
Two experiments were conducted to investigate whether auditory and visual language laterality tasks test the same brain processes for verbal functions. In the first experiment, 48 undergraduate students (24 males, 24 females) completed both an auditory monitoring task and a visual monitoring task, with the Waterloo Handedness Questionnaire administered between the two tasks. The visual task was an analogue of the dichotic listening task used. It was hypothesized that a significant cross-modal correlation would be found, indicating that the dichotic listening task and its visual analogue do, in fact, test the same brain processes for verbal functions. Results revealed a right-ear advantage in the auditory task, a left visual field advantage (LVFA) in the visual task, and a cross-modal correlation of asymmetries of -.09. The LVFA observed in the visual task was replicated in Experiment 2, thus establishing it as a genuine effect. Results are discussed in relation to the type of processing that might produce such an unexpected finding on the visual task.

11.
Adult humans were studied for improvements in their ability to segregate natural whole speech from background noise across six test sessions whose inter-session intervals (ISIs) ranged from minutes to weeks, in order to examine the effect of this parameter on the initial (early) and late components of perceptual learning. Improvements were found even with spacings of 3 weeks between the punctate task sessions. All subjects showed similar total amounts of learning, but there were sex- and ISI-dependent differences in learning patterns, which we indexed by dividing the overall exponentially decreasing learning pattern into an early phase between the first two sessions and a later phase between the second and sixth sessions. Males tested at all ISIs, and females tested at short (2, 5, and 15 min) and long (1–21 days) ISIs, showed small amounts of early-phase learning and large amounts of late-phase learning. However, females tested at intermediate (30 min and 1 h) ISIs showed only early learning, i.e., faster learning given that the total learning was the same. This sex- and ISI-specific deviant pattern could be changed to the standard pattern by interposing an overnight interruption that included sleep among the test sessions. Thus, improvement in this complex auditory streaming and identification task can occur even with very brief and widely spaced exposure, generally through a standard pattern of slower overall learning, but also through a sex- and ISI-specific deviant pattern of very rapid early learning which, unlike the standard pattern, can be modulated by an interposed delay.
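As a rough illustration only (a generic form, not the authors' fitted model), an exponentially decreasing learning pattern of the kind described is often summarized as

P(n) = P_\infty + (P_0 - P_\infty)\, e^{-(n-1)/\tau},

where P(n) is performance on session n, P_0 is initial performance, P_\infty is asymptotic performance, and \tau sets the learning rate. Under such a description, early-phase learning corresponds to the improvement between sessions 1 and 2, and late-phase learning to the residual improvement between sessions 2 and 6.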

12.
Three experiments were conducted to examine the effects of extraneous speech warnings (i.e., low-priority warnings initiated during high-priority tasks) on cognitive performance and whether organizing the auditory warnings into streams can attenuate any disruption. Experiment 1 demonstrated that a variety of speech warnings can be separated into perceptually distinct streams by allocating them to discrete spatial locations. Experiment 2 showed that increasing the rate of presentation of the warnings to promote streaming decreased clarity ratings but increased perceived urgency ratings. Experiment 3 demonstrated that the disruption to serial memory for navigational information by extraneous speech warnings could be attenuated by streaming. Results are interpreted in light of previous research, and practical implications for auditory warning design are discussed.

13.
When a component of a complex tone is captured into a stream by other events that precede and follow it, it does not fuse with the other components of the complex tone but tends to be heard as a separate event. The current study examined the ability of elements of a stream to resist becoming fused with other synchronous events, heard either in the same ear or at the opposite ear. The general finding was that events in one ear fuse strongly with elements of an auditory stream in the other ear only when they are spectrally very similar. In this case, the fusion of simultaneous components at opposite ears is stronger than that of simultaneous components heard in the same ear. However, when the spectra of the synchronous events are mismatched even slightly, components in the same ear fuse more strongly than components at opposite ears. These results are accounted for by a theory that assumes that the decisions that perceptually integrate sequential events, synchronous events, and events at opposite ears are interdependent.

14.
In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.

15.
Cusack R, Roberts B. Perception, 1999, 28(10): 1281-1289
We investigated the perceptual grouping of sequentially presented sounds: auditory stream segregation. It is well established that sounds heard as more similar in quality, or timbre, are more likely to be grouped into the same auditory stream. However, it is often unclear exactly what acoustic factors determine timbre. In this study, we presented various sequences of simple sounds, each comprising two frequency components (two-tone complexes), and measured their perceptual grouping. We varied only one parameter between trials, the intercomponent separation for some of the complexes, and examined the effects on stream segregation. Four hypotheses are presented that might predict the extent of streaming. Specifically, least streaming might be expected when the sounds were most similar in (1) the frequency regions in which they have energy (maximum spectral overlap), (2) their auditory bandwidths, (3) their relative bandwidths, or (4) the rate at which the two components beat together (intermodulation rate). It was found that least streaming occurred when sounds were most similar in either their auditory or their relative bandwidths. Although these two hypotheses could not be distinguished, the results were clearly different from those predicted by hypotheses (1) and (4). The implications for models of stream segregation are discussed.
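The four candidate metrics can be made concrete for a two-tone complex with component frequencies f_1 < f_2 (a sketch using standard acoustic definitions, which may differ in detail from the authors' exact measures): spectral overlap depends on how much the occupied regions [f_1, f_2] of two complexes coincide; the absolute separation is \Delta f = f_2 - f_1, which hypothesis (2) would express in auditory units such as ERBs; the relative bandwidth is \Delta f / f_c, with f_c a center frequency such as (f_1 + f_2)/2; and the intermodulation (beat) rate of the two components is simply \Delta f Hz, since two tones beat at their frequency difference.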

16.
Bilinguals have been shown to outperform monolinguals at suppressing task-irrelevant information. The present study aimed to identify how processing linguistic ambiguity during auditory comprehension may be associated with inhibitory control. Monolinguals and bilinguals listened to words in their native language (English) and identified them among four pictures while their eye-movements were tracked. Each target picture (e.g., hamper) appeared together with a similar-sounding within-language competitor picture (e.g., hammer) and two neutral pictures. Following each eye-tracking trial, priming probe trials indexed residual activation of target words, and residual inhibition of competitor words. Eye-tracking showed similar within-language competition across groups; priming showed stronger competitor inhibition in monolinguals than in bilinguals, suggesting differences in how inhibitory control was used to resolve within-language competition. Notably, correlation analyses revealed that inhibition performance on a nonlinguistic Stroop task was related to linguistic competition resolution in bilinguals but not in monolinguals. Together, monolingual-bilingual comparisons suggest that cognitive control mechanisms can be shaped by linguistic experience.

17.
A harmonic that begins before the other harmonics contributes less than they do to vowel quality. This reduction can be partly reversed by accompanying the leading portion with a captor tone. This effect is usually interpreted as reflecting perceptual grouping of the captor with the leading portion. Instead, it has recently been proposed that the captor effect depends on broadband inhibition within the central auditory system. A test of psychophysical predictions based on this proposal showed that captor efficacy is (a) maintained for noise-band captors, (b) absent when a captor accompanies a harmonic that continues after the vowel, and (c) maintained for 80 ms or more over a gap between captor offset and vowel onset. These findings support and refine the inhibitory account.

18.
Representational momentum refers to the phenomenon that observers tend to incorrectly remember an event undergoing real or implied motion as shifted beyond its actual final position. This has been demonstrated in both the visual and the auditory domain. In 5 pitch discrimination experiments, listeners heard tone sequences that implied linear, periodic, or null motions in pitch space. Their task was to judge whether the pitch of a probe tone following each sequence was the same as or different from that of the final sequence tone. Results suggested that listeners made errors consistent with extrapolation of coherent pitch patterns (linear, periodic) but not of incoherent (null) ones. Hypotheses associated with internalized physical principles and pattern-based expectations are discussed.

19.
In this study, we show that the contingent auditory motion aftereffect is strongly influenced by visual motion information. During an induction phase, participants listened to rightward-moving sounds with falling pitch alternated with leftward-moving sounds with rising pitch (or vice versa). Auditory aftereffects (i.e., a shift in the psychometric function for unimodal auditory motion perception) were bigger when a visual stimulus moved in the same direction as the sound than when no visual stimulus was presented. When the visual stimulus moved in the opposite direction, aftereffects were reversed and thus became contingent upon visual motion. When visual motion was combined with a stationary sound, no aftereffect was observed. These findings indicate that there are strong perceptual links between the visual and auditory motion-processing systems.
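A common way to quantify such an aftereffect (a sketch of the standard approach; the authors' exact analysis may differ) is to fit a psychometric function, for example a cumulative Gaussian

P(\text{rightward} \mid x) = \Phi\!\left(\frac{x - \mu}{\sigma}\right),

with \Phi the standard normal cumulative distribution function, to the proportion of rightward-motion responses as a function of the auditory motion signal x, and to index the aftereffect as the shift in the point of subjective equality \mu measured after versus before the induction phase.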

20.
To interact successfully with the environment and to compensate for environmental challenges, the human brain must integrate information originating in different sensory modalities. Such integration occurs in non-primary associative regions of the human brain. Additionally, recent investigations have documented the involvement of the primary visual cortex in processing tactile information in blind humans to a larger extent than in sighted controls. This form of cross-modal plasticity highlights the capacity of the human central nervous system to reorganize after chronic visual deprivation.
