Similar Documents
20 similar documents found (search time: 31 ms)
1.
How humans perform duration judgments with multisensory stimuli is an ongoing debate. Here, we investigated how sub-second duration judgments are achieved by asking participants to compare the duration of a continuous sound to the duration of an empty interval in which onset and offset were marked by signals of different modalities, using all combinations of visual, auditory and tactile stimuli. The pattern of perceived durations across five stimulus durations (ranging from 100 ms to 900 ms) follows Vierordt's Law. Furthermore, intervals with a sound as onset (audio-visual, audio-tactile) are perceived as longer than intervals with a sound as offset. No modality-ordering effect is found for visual-tactile intervals. To infer whether a single modality-independent or multiple modality-dependent time-keeping mechanisms exist, we tested whether perceived duration follows a summative or a multiplicative distortion pattern by fitting a model to all modality combinations and durations. The results confirm that perceived duration depends on sensory latency (summative distortion); in contrast, we found no evidence for multiplicative distortions. The results of the model and the behavioural data support the concept of a single time-keeping mechanism that allows for judgments of durations marked by multisensory stimuli.
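The summative-versus-multiplicative comparison described above can be illustrated with a small model fit. This is a sketch, not the authors' analysis: the perceived-duration data, the 0.8 slope, and the 30 ms latency offset between modality combinations are invented here, and both models are fit by ordinary least squares with the same number of free parameters.

```python
import numpy as np

durations = np.array([100, 300, 500, 700, 900], dtype=float)  # ms

# Hypothetical mean perceived durations for two marker combinations,
# generated with a shared slope and an additive 30 ms latency offset.
rng = np.random.default_rng(0)
perceived = {
    "audio-visual":   0.8 * durations + 120 + rng.normal(0, 5, 5),
    "visual-tactile": 0.8 * durations + 90 + rng.normal(0, 5, 5),
}

def fit(data, summative):
    """Least-squares fit of perceived duration. Summative model:
    shared slope, per-modality offset. Multiplicative model:
    per-modality slope, shared offset. Returns the residual sum
    of squares (RSS)."""
    X, y = [], []
    n_mod = len(data)
    for i, vals in enumerate(data.values()):
        for d, v in zip(durations, vals):
            if summative:
                row = [d] + [1.0 if j == i else 0.0 for j in range(n_mod)]
            else:
                row = [d if j == i else 0.0 for j in range(n_mod)] + [1.0]
            X.append(row)
            y.append(v)
    _, res, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return float(res[0])

rss_sum = fit(perceived, summative=True)
rss_mul = fit(perceived, summative=False)
print(f"summative RSS: {rss_sum:.1f}, multiplicative RSS: {rss_mul:.1f}")
```

Because the synthetic data were built with an additive latency difference, the summative model yields the lower residual, mirroring the pattern the abstract reports.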

2.
High-span individuals (as measured by the operation span [OSPAN] technique) are less likely than low-span individuals to notice their own names in an unattended auditory stream (A. R. A. Conway, N. Cowan, & M. F. Bunting, 2001). The possibility that OSPAN accounts for individual differences in auditory distraction on an immediate recall test was examined. There was no evidence that high-OSPAN participants were more resistant to the disruption caused by irrelevant speech in serial or in free recall. Low-OSPAN participants did, however, make more semantically related intrusion errors from the irrelevant sound stream in a free recall test (Experiment 4). Results suggest that OSPAN mediates semantic components of auditory distraction dissociable from other aspects of the irrelevant sound effect.

3.
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding, that when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sound as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.

4.
We describe a set of pictorial and auditory stimuli that we have developed for use in word learning tasks in which the participant learns pairings of novel auditory sound patterns (names) with pictorial depictions of novel objects (referents). The pictorial referents are drawings of “space aliens,” consisting of images that are variants of 144 different aliens. The auditory names are possible nonwords of English; the stimulus set consists of over 2,500 nonword stimuli recorded in a single voice, with controlled onsets, varying from one to seven syllables in length. The pictorial and nonword stimuli can also serve as independent stimulus sets for purposes other than word learning. The full set of these stimuli may be downloaded from www.psychonomic.org/archive/.

5.
It is suggested that aftereffects caused by swept change of sound level (Reinhardt-Rutland & Anstis, Note 1) may contribute to auditory motion aftereffects (Grantham & Wightman, 1979). The latter appear to show selectivity for frequency; they are substantial if the frequency is 0.5 kHz but not if it is 2 kHz. An experiment was carried out to show to what extent there is such selectivity for aftereffects from swept sound-level change; this showed that they are substantial over a much wider range of frequencies than auditory motion aftereffects. It is concluded that the frequency selectivity of auditory motion aftereffects might be explained by frequency selectivity for inter-aural phase differences.

6.
Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target, that is, when either the sound was repeated but not the location or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed down when the prime distractor sound was repeated as the probe target, but at another location; identity changes at the same location were not impaired. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

7.
The visuospatial negative priming effect—that is, the slowed-down responding to a previously ignored location—is partly due to response inhibition associated with the previously ignored location (Buckolz, Goldfarb, & Khan, 2004, Perception & Psychophysics, 66, 837-845). We tested whether response inhibition underlies spatial negative priming in the auditory modality as well. Eighty participants localized a target sound while ignoring a simultaneous distractor sound at another location. Eight possible sound locations were arranged in a semicircle around the participant. Pairs of adjacent locations were associated with the same response. On ignored repetition trials, the probe target sound was played from the same location as the previously ignored prime sound. On response control trials, prime distractor and probe target were played from different locations but were associated with the same response. On control trials, prime distractor and probe target shared neither location nor response. A response inhibition account predicts slowed-down responding when the response associated with the prime distractor has to be executed in the probe. There was no evidence of response inhibition in audition. Instead, the negative priming effect depended on whether the sound at the repeatedly occupied location changed identity between prime and probe. The latter result replicates earlier findings and supports the feature mismatching hypothesis, while the former is compatible with the assumption that response inhibition is irrelevant in auditory spatial attention.

8.
Two experiments examine the effect on an immediate recall test of simulating a reverberant auditory environment in which auditory distracters in the form of speech are played to the participants (the ‘irrelevant sound effect’). An echo‐intensive environment simulated by the addition of reverberation to the speech reduced the extent of ‘changes in state’ in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo‐free environment. Results are interpreted in terms of the changing-state hypothesis, which states that the acoustic content of irrelevant sound, rather than its phonology or semantics, determines the extent of the irrelevant sound effect (ISE). Copyright © 2007 John Wiley & Sons, Ltd.

9.
Bioacousticians (M. S. Ficken, S. R. Ficken, & S. R. Witken, 1978) classified black-capped chickadee call notes from the chick-a-dee call complex into 4 note types (A, B, C, and D) identified from sound spectrograms. In Experiment 1, chickadees (Poecile atricapillus) learned operant auditory discriminations both within and between the 4 note types but learned the between note-type discrimination significantly faster. In Experiment 2, when the original, unrewarded between-category exemplars were replaced with novel, rewarded exemplars of these same categories, chickadees showed transfer of inhibitory stimulus control to the novel exemplars. In Experiment 3, when novel exemplars were replaced by the original exemplars, chickadees showed propagation of positive stimulus control back to the original exemplars. This evidence suggests that chickadees and bioacousticians accurately sort conspecific call notes into the same open-ended categories (R. J. Herrnstein, 1990).

10.
Perceptual completion of a sound with a short silent gap (Cited by: 1; self-citations: 0; by others: 1)
Remijn GB, Nakajima Y, Tanaka S. Perception, 2007, 36(6), 898-917
Listeners reported the perceptual completion of a sound in stimuli consisting of two crossing frequency glides of unequal duration that shared a short silent gap (40 ms or less) at their crossing point. Even though both glides shared the gap, it was consistently perceived only in the shorter glide, whereas the longer glide could be perceptually completed. Studies on perceptual completion in the auditory domain reported so far have shown that completion of a sound with a gap occurs only if the gap is filled with energy from another sound. In the stimuli used here, however, no such substitute energy was present in the gap, showing evidence for perceptual completion of a sound without local stimulation ('modal' completion). Perceptual completion of the long glide occurred under both monaural and dichotic presentation of the gap-sharing glides; it therefore involves central stages of auditory processing. The inclusion of the gap in the short glide, rather than in both the long and the short glide, is explained in terms of auditory event and stream formation.
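Gap-sharing stimuli of the kind described can be approximated in a few lines of synthesis. This is an illustrative sketch only: the sample rate, glide frequencies, and durations are assumptions, not the study's exact stimuli.

```python
import numpy as np

sr = 44100                       # sample rate (Hz), assumed
long_dur, short_dur = 1.0, 0.4   # glide durations (s), assumed
gap = 0.04                       # shared silent gap (s)

def glide(f_start, f_end, dur):
    """Linear frequency glide synthesized by phase integration."""
    freq = np.linspace(f_start, f_end, int(sr * dur))
    return np.sin(2 * np.pi * np.cumsum(freq) / sr)

long_glide = glide(400, 1600, long_dur)    # long ascending glide
short_glide = glide(1600, 400, short_dur)  # short descending glide

# Centre the short glide on the long one so both pass 1000 Hz at the
# temporal midpoint, then silence the shared gap at that crossing.
mix = long_glide.copy()
offset = (long_glide.size - short_glide.size) // 2
mix[offset:offset + short_glide.size] += short_glide

mid = mix.size // 2
half_gap = int(sr * gap / 2)
mix[mid - half_gap:mid + half_gap] = 0.0   # the shared silent gap
mix /= np.abs(mix).max()                   # normalize for playback
print(f"stimulus: {mix.size / sr:.2f} s, gap: {2 * half_gap / sr:.3f} s")
```

Writing `mix` to a WAV file and listening should reproduce the qualitative effect: the gap is heard in the short glide while the long glide sounds continuous.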

11.
Cortical representations of sound can be modified by repeatedly pairing presentation of a pure tone with electrical stimulation of neuromodulatory neurons located in the basal forebrain (Bakin & Weinberger, 1996; Kilgard & Merzenich, 1998a). We developed a computational model to investigate the possible effects of basal forebrain modulation on map reorganization in the auditory cortex. The model is a self-organizing map with acoustic response characteristics mimicking those observed in the mammalian auditory cortex. We simulated the effects of basal forebrain modulation, using parameters intrinsic to the self-organizing map, such as the learning rate (controlling the adaptability of map nodes) and the neighborhood function (controlling the excitability of map nodes). Previous research has suggested that both parameters can be useful for characterizing the effects of neuromodulation on plasticity (Kohonen, 1993; Myers et al., 1996; Myers, Ermita, Hasselmo, & Gluck, 1998). The model successfully accounts for experimentally observed effects of pairing basal forebrain stimulation with the presentation of a single tone, but not of two tones, suggesting that auditory cortical plasticity is constrained in ways not accounted for by current theories. Despite this limitation, the model provides a useful framework for describing experience-induced changes in auditory representations and for relating such changes to variations in the excitability and adaptability of cortical neurons produced by neuromodulation.
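The modelling idea can be sketched with a minimal one-dimensional self-organizing map. All parameters here (node count, learning rates, neighbourhood widths, tone values) are illustrative assumptions, not the paper's values; "basal forebrain pairing" is simulated by raising the learning rate (adaptability) and the neighbourhood width (excitability) for one tone.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 50
init = np.linspace(1.0, 8.0, n_nodes)  # nodes' preferred frequency (kHz)

def som_step(w, tone, lr, sigma):
    """One self-organizing-map update: the winner and its neighbours
    (Gaussian neighbourhood of width sigma) move toward the input."""
    winner = np.argmin(np.abs(w - tone))
    dist = np.abs(np.arange(w.size) - winner)
    nb = np.exp(-dist ** 2 / (2 * sigma ** 2))
    return w + lr * nb * (tone - w)

paired = 4.0  # tone paired with simulated basal forebrain stimulation
normal, modulated = init.copy(), init.copy()
for _ in range(200):
    tone = rng.uniform(1.0, 8.0)  # ordinary acoustic input
    normal = som_step(normal, tone, lr=0.01, sigma=1.0)
    modulated = som_step(modulated, tone, lr=0.01, sigma=1.0)
    # Pairing trial: modulation raises adaptability (lr) and
    # excitability (neighbourhood width) for the paired tone.
    normal = som_step(normal, paired, lr=0.01, sigma=1.0)
    modulated = som_step(modulated, paired, lr=0.1, sigma=3.0)

def map_area(w, tone, tol=0.25):
    """How many nodes end up tuned within tol kHz of the tone."""
    return int(np.sum(np.abs(w - tone) < tol))

print(map_area(normal, paired), map_area(modulated, paired))
```

As in the reported single-tone result, the modulated map devotes a larger region to the paired frequency than the unmodulated map.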

12.
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants’ picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter’s (1993) notion of conceptual short-term memory.

14.
The current study presents a multi-dimensional scale measuring the learning potential of the workplace (LPW), which is applicable across various occupational settings. Based on a comprehensive literature review, we establish four theoretically relevant dimensions of work-based learning, which together constitute the learning potential of the workplace. The psychometric characteristics of our instrument were examined among a sample of Dutch employees working in different organizations (N = 1013). In this study, we tested the factorial structure and validity of the LPW-scale by conducting Confirmatory Factor Analyses, testing for measurement invariance and determining the scale's reliability. Subsequently, the LPW-instrument was cross-validated using SEM (AMOS 20.0). Furthermore, convergent, divergent, and construct validity were investigated. The results empirically supported the theory-based four-factor structure of the LPW-scale and provided solid evidence for the sound psychometric properties of the study's instrument.
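Scale reliability of the kind reported above is commonly summarized by Cronbach's alpha. The following is a minimal sketch with synthetic item responses, not the study's data or a substitute for its full CFA/SEM analysis:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic responses: four items driven by one latent factor plus noise.
rng = np.random.default_rng(2)
latent = rng.normal(0.0, 1.0, size=500)
items = latent[:, None] + rng.normal(0.0, 0.5, size=(500, 4))
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

Because all four synthetic items load on the same latent factor, the resulting alpha is high, which is the pattern a sound unidimensional subscale would show.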

15.
It is common to judge the duration of an audiovisual event, and yet it remains controversial how the judgment of duration is affected by signals from other modalities. We used an oddball paradigm to examine the effect of sound on the judgment of visual duration and that of a visual object on the judgment of an auditory duration. In a series of standards and oddballs, the participants compared the duration of the oddballs to that of the standards. Results showed asymmetric cross-modal effects, supporting the auditory dominance hypothesis: a sound extends the perceived visual duration, whereas a visual object has no effect on perceived auditory duration. The possible mechanisms (pacemaker or mode switch) proposed in the Scalar Expectancy Theory [Gibbon, J., Church, R. M., & Meck, W. H. (1984). Scalar timing in memory. In J. Gibbon & L. Allan (Eds.), Annals of the New York Academy of Sciences: Vol. 423. Timing and time perception (pp. 52–77). New York: New York Academy of Sciences] were examined using different standard durations. We conclude that sound increases the perceived visual duration by accelerating the pulse rate in the visual pacemaker.
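The pacemaker account can be sketched as a Poisson pulse counter whose rate is raised by a concurrent sound: more pulses accumulate in the same physical interval, so the interval is judged longer. The rates and duration below are assumptions for illustration, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_pulse_count(duration_ms, rate_hz, trials=1000):
    """Mean accumulated pulses of a Poisson pacemaker over the interval."""
    return rng.poisson(rate_hz * duration_ms / 1000.0, size=trials).mean()

base_rate = 50.0     # visual pacemaker rate (Hz), assumed
boosted_rate = 55.0  # pacemaker rate with a concurrent sound, assumed
duration = 600       # standard duration (ms)

visual_alone = mean_pulse_count(duration, base_rate)
visual_with_sound = mean_pulse_count(duration, boosted_rate)
print(visual_alone, visual_with_sound)
```

The accumulated count is higher with the sound, which under this account is read out as a longer perceived duration.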

16.
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound to facilitate motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first or after the second light or simultaneously with the lights did not affect motion impression. The facilitation effect also occurred with sound presented far from the visual display, as well as with continuous-sound that was started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.

17.
Auditory perception of the depth of space is based mainly on spectral and amplitude changes of sound waves originating from the sound source and reaching the listener. The perceptual illusion of movement of an auditory image caused by changes in amplitude and/or frequency of the signal tone emanating from an immobile loudspeaker was studied. Analysis of data obtained from the participants revealed the range of combinations of amplitude and frequency changes for which the movement direction was perceived similarly by all participants, despite significantly different movement assessment criteria. Additional auditory and visual information about the conditions of radial movement (near or far fields) determined listeners' interpretation of changes in the signal parameters. The data obtained from the approach and withdrawal models are evidence that the principal cues for the perception of the distance of immobile sound sources manifest similarly to those for an auditory image moving along a radial axis.

18.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.
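Separating the signed localization bias from overall precision, as distinguished above, amounts to computing the mean versus the spread of the pointing errors. A sketch with synthetic responses, where the 2° contralateral bias and 5° scatter are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
targets = np.repeat(np.arange(-80, 81, 8), 10).astype(float)  # degrees

# Simulated right-handed listener: a 2-degree leftward (contralateral)
# bias plus 5-degree response scatter; both values are assumptions.
responses = targets - 2.0 + rng.normal(0.0, 5.0, size=targets.size)

errors = responses - targets
bias = errors.mean()             # signed error: negative = leftward
precision = errors.std(ddof=1)   # spread of errors, independent of bias
print(f"bias = {bias:.2f} deg, precision (SD) = {precision:.2f} deg")
```

Because bias is the mean error and precision is its standard deviation, the two measures are statistically independent here, matching the report that the contralateral bias was unrelated to overall precision.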

19.
While “recalibration by pairing” is now generally held to be the main process responsible for adaptation to intermodal discordance, the conditions under which pairing of heteromodal data occurs in spite of a discordance have not been studied systematically. The question has been explored in the case of auditory-visual discordance. Subjects pointed at auditory targets before and after exposure to auditory and visual data from sources 20° apart in azimuth, in conditions varying by (a) the degree of realism of the context, and (b) the synchronization between auditory and visual data. In Experiment 1, the exposure conditions combined the sound of a percussion instrument (bongos) with either the image on a video monitor of the hands of the player (semirealistic situation) or diffuse light modulated by the sound (nonrealistic situation). Experiment 2 featured a voice and either the image of the face of the speaker or light modulated by the voice, and in both situations either sound and image were exactly synchronous or the sound was made to lag by 0.35 sec. Desynchronization was found to reduce adaptation significantly, while degree of realism failed to produce an effect. Answers to a question asked at the end of the testing regarding the location of the sound source suggested that the apparent fusion of the auditory and visual data—the phenomenon called “ventriloquism”—was not affected by the conditions in the same way as adaptation. In Experiment 3, subjects were exposed to the experimental conditions of Experiment 2 and were asked to report their impressions of fusion by pressing a key. The results contribute to the suggestion that pairing of registered auditory and visual locations, the hypothetical process at the basis of recalibration, may be a different phenomenon from conscious fusion.

20.
Nonspatial attentional shifts between audition and vision (Cited by: 2; self-citations: 0; by others: 2)
This study investigated nonspatial shifts of attention between visual and auditory modalities. The authors provide evidence that the modality of a stimulus (S1) affected the processing of a subsequent stimulus (S2) depending on whether they shared the same modality. For both vision and audition, the onset of S1 summoned attention exogenously to its modality, causing a delay in processing S2 in a different modality. That undermines the notion that auditory stimuli have a stronger and more automatic alerting effect than visual stimuli (M. I. Posner, M. J. Nissen, & R. M. Klein, 1976). The results are consistent with other recent studies showing cross-modal attentional limitation. The authors suggest that such cross-modal limitation can be produced by simply presenting S1 and S2 in different modalities and that central processing mechanisms are also, at least partially, modality dependent.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号