Similar articles
20 similar articles found (search time: 578 ms)
1.
To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8° and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
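The sensitivity (d') and criterion (β) measures reported in this abstract are standard signal detection quantities. A minimal sketch of how they are computed from hit and false-alarm counts; the counts and the log-linear correction are illustrative assumptions, not values from the study:

```python
import math
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal detection measures from a 2x2 outcome table.

    Counts here are hypothetical. A log-linear correction (add 0.5 to
    each cell) keeps the z-transform finite when a rate would be 0 or 1.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)                              # hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)  # false-alarm rate
    z = NormalDist().inv_cdf           # probit (z-score) transform
    d_prime = z(h) - z(f)              # sensitivity d'
    c = -(z(h) + z(f)) / 2.0           # criterion c
    beta = math.exp(d_prime * c)       # likelihood-ratio criterion beta
    return d_prime, c, beta
```

With an unbiased observer (hit rate equal to the correct-rejection rate), c is 0 and β is 1; a conservative observer shifts c positive and β above 1, which is the kind of criterion change the abstract reports at long SOAs.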

2.
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound to facilitate motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first light, after the second light, or simultaneously with the lights did not affect the motion impression. The facilitation effect also occurred with the sound presented far from the visual display, as well as with a continuous sound that started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.

3.
The temporal occurrence of a flash can be shifted towards a slightly offset sound (temporal ventriloquism). Here we examined whether four-dot masking is affected by this phenomenon. In Experiment 1, we demonstrate that there is release from four-dot masking if two sounds (one before the target and one after the mask) are presented at ∼100 ms intervals rather than at ∼0 ms intervals or in a silent condition. In Experiment 2, we show that the release from masking originates from an alerting effect of the first sound, and a temporal ventriloquist effect from the first and second sounds that lengthened the perceived interval between target and mask, thereby leaving more time for the target to consolidate. Results thus show that sounds penetrate the visual system at more than one level.

4.
The effect of brief auditory stimuli on visual apparent motion. Cited by 1 (0 self, 1 other)
Getzmann S. Perception, 2007, 36(7): 1089-1103
When two discrete stimuli are presented in rapid succession, observers typically report a movement of the lead stimulus toward the lag stimulus. The object of this study was to investigate crossmodal effects of irrelevant sounds on this illusion of visual apparent motion. Observers were presented with two visual stimuli that were temporally separated by interstimulus onset intervals from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. The presentation of short sounds intervening between the visual stimuli facilitated the impression of apparent motion relative to baseline (visual stimuli without sounds), whereas sounds presented before the first and after the second visual stimulus as well as simultaneously presented sounds reduced the motion impression. The results demonstrate an effect of the temporal structure of irrelevant sounds on visual apparent motion that is discussed in light of a related multisensory phenomenon, 'temporal ventriloquism', on the assumption that sounds can attract lights in the temporal dimension.

5.
This article discusses two experiments on the discrimination of time intervals presented in sequences marked by brief auditory signals. Participants had to indicate whether the last interval in a series of three intervals marked by four auditory signals was shorter or longer than the previous intervals. Three base durations were under investigation: 75, 150, and 225 ms. In Experiment 1, sounds were presented through headphones, from a single speaker in front of the participants, or by four equally spaced speakers. In all three presentation modes, the highest difference threshold was obtained in the lowest base duration condition (75 ms), thus indicating an impairment of temporal processing when sounds are presented too rapidly. The results also indicate the presence, in each presentation mode, of a 'time-shrinking effect' (i.e., the last interval being perceived as briefer than the preceding ones) at 75 ms, but not at 225 ms. Lastly, using different sound sources to mark time did not significantly impair discrimination. In Experiment 2, three signals were presented from the same source, and the last signal was presented at one of two locations, either close or far. The perceived duration was not influenced by the location of the fourth signal when the participant knew before each trial where the sounds would be delivered. However, when the participant was uncertain as to its location, more space between markers resulted in longer perceived duration, a finding that applied only at 150 and 225 ms. Moreover, the perceived duration was affected by the direction of the sequences (left-right vs. right-left).

6.
Harrar V, Harris LR. Perception, 2007, 36(10): 1455-1464
Gestalt rules that describe how visual stimuli are grouped also apply to sounds, but it is unknown if the Gestalt rules also apply to tactile or uniquely multimodal stimuli. To investigate these rules, we used lights, touches, and a combination of lights and touches, arranged in a classic Ternus configuration. Three stimuli (A, B, C) were arranged in a row across three fingers. A and B were presented for 50 ms and, after a delay, B and C were presented for 50 ms. Subjects were asked whether they perceived AB moving to BC (group motion) or A moving to C (element motion). For all three types of stimuli, at short delays, A to C dominated, while at longer delays AB to BC dominated. The critical delay, where perception changed from group to element motion, was significantly different for the visual Ternus (3 lights, 162 ms) and the tactile Ternus (3 touches, 195 ms). The critical delay for the multimodal Ternus (3 light-touch pairs, 161 ms) was not different from the visual or tactile Ternus effects. In a second experiment, subjects were exposed to 2.5 min of visual group motion (stimulus onset asynchrony = 300 ms). The exposure caused a shift in the critical delay of the visual Ternus, a trend in the same direction for the multimodal Ternus, but no shift in the tactile Ternus. These results suggest separate but similar grouping rules for visual, tactile, and multimodal stimuli.

7.
Cross-modal temporal recalibration describes a shift in the point of subjective simultaneity (PSS) between 2 events following repeated exposure to asynchronous cross-modal inputs (the adaptors). Previous research suggested that audiovisual recalibration is insensitive to the spatial relationship between the adaptors. Here we show that audiovisual recalibration can be driven by cross-modal spatial grouping. Twelve participants adapted to alternating trains of lights and tones. Spatial position was manipulated, with alternating sequences of a light then a tone, or a tone then a light, presented on either side of fixation (e.g., left tone, left light, right tone, right light, etc.). As the events were evenly spaced in time, in the absence of spatial-based grouping it would be unclear whether tones were leading or lagging lights. However, any grouping of spatially colocalized cross-modal events would result in an unambiguous sense of temporal order. We found that adapting to these stimuli caused the PSS between subsequent lights and tones to shift toward the temporal relationship implied by spatial-based grouping. These data therefore show that temporal recalibration is facilitated by spatial grouping.

8.
Audio-visual simultaneity judgments. Cited by 3 (0 self, 3 other)
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.

9.
Audiotactile temporal order judgments. Cited by 3 (0 self, 3 other)
We report a series of three experiments in which participants made unspeeded 'Which modality came first?' temporal order judgments (TOJs) to pairs of auditory and tactile stimuli presented at varying stimulus onset asynchronies (SOAs) using the method of constant stimuli. The stimuli were presented from either the same or different locations in order to explore the potential effect of redundant spatial information on audiotactile temporal perception. In Experiment 1, the auditory and tactile stimuli had to be separated by nearly 80 ms for inexperienced participants to be able to judge their temporal order accurately (i.e., for the just noticeable difference, JND, to be reached), regardless of whether the stimuli were presented from the same or different spatial positions. More experienced psychophysical observers (Experiment 2) also failed to show any effect of relative spatial position on audiotactile TOJ performance, despite having much lower JNDs (40 ms) overall. A similar pattern of results was found in Experiment 3 when silent electrocutaneous stimulation was used rather than vibrotactile stimulation. Thus, relative spatial position seems to be a less important factor in determining performance for audiotactile TOJs than for other modality pairings (e.g., audiovisual and visuotactile).
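JNDs and points of subjective simultaneity like those reported in these temporal order studies are typically estimated by fitting a psychometric function to response proportions collected with the method of constant stimuli. A minimal probit-regression sketch, assuming a cumulative Gaussian model; the SOA values and proportions are illustrative, not data from any study above:

```python
from statistics import NormalDist

def toj_fit(soas, p_judged_later):
    """Probit (z-transformed linear) fit of a temporal order judgment
    psychometric function of the form p = Phi((SOA - PSS) / sigma).

    soas           : stimulus onset asynchronies in ms
    p_judged_later : proportion of one response category at each SOA,
                     strictly between 0 and 1
    Returns (PSS, JND), with the JND taken as the distance from the
    50% point to the 75% point of the fitted function.
    """
    z = NormalDist().inv_cdf
    zs = [z(p) for p in p_judged_later]   # z-transform makes the model linear in SOA
    n = len(soas)
    mean_x = sum(soas) / n
    mean_z = sum(zs) / n
    # least-squares slope and intercept of z(p) against SOA
    slope = (sum((x - mean_x) * (y - mean_z) for x, y in zip(soas, zs))
             / sum((x - mean_x) ** 2 for x in soas))
    intercept = mean_z - slope * mean_x
    pss = -intercept / slope              # SOA at which p = 0.5
    jnd = z(0.75) / slope                 # 75%-point distance from the PSS
    return pss, jnd
```

For example, fitting proportions generated from Φ((SOA − 10)/40) recovers a PSS near 10 ms and a JND near 27 ms, since z(0.75) ≈ 0.674 of the underlying 40 ms spread.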

10.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (stimulus onset asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that, in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

11.
Participants judged whether two sequential visual events were presented for the same length of time or for different lengths of time, while ignoring two irrelevant sequential sounds. Sounds could be either the same or different in terms of their duration or their pitch. When the visual stimuli were in conflict with the sound stimuli (e.g., the visual events were the same, but the sounds were different), performance declined. This was true whether the sounds varied in duration or in pitch. The influence of sounds was eliminated when the visual duration discriminations were made easier. Together, these results demonstrate that resolutions to crossmodal conflicts are flexible across neural and cognitive architectures. More importantly, they suggest that interactions between modalities can extend to abstract levels of same/different representation.

12.
Binaural and monaural localization of sound in two-dimensional space. Cited by 2 (0 self, 2 other)
Two experiments were conducted. In Experiment 1, part 1, binaural and monaural localization of sounds originating in the left hemifield was investigated. 104 loudspeakers were arranged in a 13 × 8 matrix with 15 degrees separating adjacent loudspeakers in each column and in each row. In the horizontal plane (HP), the loudspeakers extended from 0 degrees to 180 degrees; in the vertical plane (VP), they extended from -45 degrees to 60 degrees with respect to the interaural axis. Findings of special interest were: (i) binaural listeners identified the VP coordinate of the sound source more accurately than did monaural listeners, and (ii) monaural listeners identified the VP coordinate of the sound source more accurately than its HP coordinate. In part 2, it was found that foreknowledge of the HP coordinate of the sound source aided monaural listeners in identifying its VP coordinate, but the converse did not hold. In Experiment 2, part 1, localization performances were evaluated when the sound originated from consecutive 45-degree segments of the HP, with the VP segments extending from -22.5 degrees to 22.5 degrees. Part 2 consisted of measuring, on the same subjects, head-related transfer functions by means of a miniature microphone placed at the entrance of their external ear canal. From these data, the 'covert' peaks (defined and illustrated in the text) of the sound spectrum were extracted. This spectral cue was advanced to explain why monaural listeners in this study, as well as in other studies, performed better when locating VP-positioned sounds than when locating HP-positioned sounds. It is not claimed that there is an inherent advantage for localizing sound in the VP; rather, monaural localization proficiency, whether in the VP or HP, depends on the availability of covert peaks which, in turn, rests on the spatial arrangement of the sound sources.

13.
Both the imagery literature and grounded models of language comprehension emphasize the tight coupling of high-level cognitive processes, such as forming a mental image of something or language understanding, and low-level sensorimotor processes in the brain. In an electrophysiological study, imagery and language processes were directly compared and the sensory associations of processing linguistically implied sounds or imagined sounds were investigated. Participants read sentences describing auditory events (e.g., “The dog barks”), heard a physical (environmental) sound, or had to imagine such a sound. We examined the influence of the 3 sound conditions (linguistic, physical, imagery) on subsequent physical sound processing. Event-related potential (ERP) difference waveforms indicated that in all 3 conditions, prime compatibility influenced physical sound processing. The earliest compatibility effect was observed in the physical condition, starting in the 80–110 ms time interval with a negative maximum over occipital electrode sites. In contrast, the linguistic and the imagery condition elicited compatibility effects starting in the 180–220 ms time window with a maximum over central electrode sites. In line with the ERPs, the analysis of the oscillatory activity showed that compatibility influenced early theta and alpha band power changes in the physical, but not in the linguistic and imagery, condition. These dissociations were further confirmed by dipole localization results showing a clear separation between the source of the compatibility effect in the physical sound condition (superior temporal area) and the source of the compatibility effect triggered by the linguistically implied sounds or the imagined sounds (inferior temporal area). Implications for grounded models of language understanding are discussed.

14.
Auditory apparent motion under binaural and monaural listening conditions. Cited by 1 (0 self, 1 other)
This investigation examined the ability of listeners to perceive apparent motion under binaural and monaural listening conditions. Fifty-millisecond broadband noise sources were presented through two speakers separated in space by either 10 degrees, 40 degrees, or 160 degrees, centered about the subject's midline. On each trial, the sources were temporally separated by 1 of 12 interstimulus onset intervals (ISOIs). Six listeners were asked to place their experience of these sounds into one of five categories (single sound, simultaneous sounds, continuous motion, broken motion, or successive sounds), and to indicate either the proper temporal sequence of presentation or the direction of motion, depending on whether or not motion was perceived. Each listener was tested at all spatial separations under binaural and monaural listening conditions. Motion was perceived in the binaural listening condition at all spatial separations tested for ISOIs between 20 and 130 msec. In the monaural listening condition, motion was reliably heard by all subjects at 10 degrees and 40 degrees for the same range of ISOIs. At 160 degrees, only 3 of the 6 subjects consistently reported motion. However, when motion was perceived in the monaural condition, the direction of motion could not be determined.

15.
Playback experiments have been a useful tool for studying the function of sounds and the relevance of different sound characteristics in signal recognition in many different species of vertebrates. However, successful playback experiments in sound-producing fish remain rare, and few studies have investigated the role of particular sound features in the encoding of information. In this study, we set up an apparatus to test the relevance of acoustic signals in males of the cichlid Metriaclima zebra. We found that territorial males responded more to playbacks by increasing their territorial activity and approaching the loudspeaker during and after playbacks. As sounds may be used to indicate the presence of a competitor, we modified two sound characteristics, that is, the pulse period and the number of pulses, in order to investigate whether the observed behavioural response was modulated by the temporal structure of sounds recorded during aggressive interactions. Modified sounds yielded little or no effect on the behavioural response they elicited in territorial males, suggesting a high tolerance for variations in pulse period and number of pulses. The biological function of sounds in M. zebra and the lack of responsiveness to our temporal modifications are discussed.

16.
Three experiments asked whether subjects could retrieve information from a 2nd stimulus while they retrieved information from a 1st stimulus. Subjects performed recognition judgments on each of 2 words that followed each other by 0, 250, and 1,000 ms (Experiment 1) or 0 and 300 ms (Experiments 2 and 3). In each experiment, reaction time to both stimuli was faster when the 2 stimuli were both targets (on the study list) or both lures (not on the study list) than when 1 was a target and the other was a lure. Each experiment found priming from the 2nd stimulus to the 1st when both stimuli were targets. Reaction time to the 1st stimulus was faster when the 2 targets came from the same memory structure at study (columns in Experiment 1; pairs in Experiment 2; sentences in Experiment 3) than when they came from different structures. This priming is inconsistent with discrete serial retrieval and consistent with parallel retrieval.

17.
The present study investigated whether the quality of a frequency change within a sound (i.e., smooth vs. abrupt) would influence perception of its duration. In three experiments, participants were presented with two consecutive sounds on each of a series of trials, and their task was to judge whether the second sound was longer or shorter in duration than the first. In Experiment 1, participants were more likely to judge sounds consisting of a smooth and continuous change in frequency as longer in duration than sounds that maintained a constant frequency. In Experiment 2, the same bias was observed for sounds incorporating an abrupt change in frequency, but only when the frequency change was relatively small. The results of Experiment 3 suggested that the application of a change heuristic when generating duration judgments depends on the perception of change as originating from a single, integrated perceptual object.

18.
The phonological deficit theory of dyslexia assumes that degraded speech sound representations might hamper the acquisition of stable letter-speech sound associations necessary for learning to read. However, there is only scarce and mainly indirect evidence for this assumed letter-speech sound association problem. The present study aimed at clarifying the nature and the role of letter-speech sound association problems in dyslexia by analysing event-related potentials (ERPs) of 11-year-old dyslexic children to speech sounds in isolation or combined with letters, which were presented either simultaneously with or 200 ms before the speech sounds. Recent studies with normal readers revealed that letters systematically modulated speech sound processing in an early (mismatch negativity, or MMN) and a late (late discriminatory negativity, or LDN) time window. The amplitude of the MMN and LDN to speech sounds was enhanced when speech sounds were presented with letters. The dyslexic readers in the present study, however, did not exhibit any early influences of letters on speech sounds even after 4 years of reading instruction, indicating no automatic integration of letters and speech sounds. Interestingly, they revealed a systematic late effect of letters on speech sound processing, probably reflecting the mere association of letters and speech sounds. This pattern is strongly divergent from that observed in age-matched normal readers, who showed both early and late effects, but reminiscent of that observed in beginner normal readers in a previous study (Froyen, Bonte, van Atteveldt & Blomert, 2009). The finding that the quality of letter-speech sound processing is directly related to reading fluency urges further research into the role of audiovisual integration in the development of reading failure in dyslexia.

19.
In the present investigation, the effects of spatial separation on the interstimulus onset intervals (ISOIs) that produce auditory and visual apparent motion were compared. In Experiment 1, subjects were tested on auditory apparent motion. They listened to 50-msec broadband noise pulses that were presented through two speakers separated by one of six different values between 0 degrees and 160 degrees. On each trial, the sounds were temporally separated by 1 of 12 ISOIs from 0 to 500 msec. The subjects were instructed to categorize their perception of the sounds as "single," "simultaneous," "continuous motion," "broken motion," or "succession." They also indicated the proper temporal sequence of each sound pair. In Experiments 2 and 3, subjects were tested on visual apparent motion. Experiment 2 included a range of spatial separations from 6 degrees to 80 degrees; Experiment 3 included separations from 0.5 degrees to 10 degrees. The same ISOIs were used as in Experiment 1. When the separations were equal, the ISOIs at which auditory apparent motion was perceived were smaller than the values that produced the same experience in vision. Spatial separation affected only visual apparent motion. For separations less than 2 degrees, the ISOIs that produced visual continuous motion were nearly equal to those that produced auditory continuous motion. For larger separations, the ISOIs that produced visual apparent motion increased.

20.
Ninety-six infants of 3 1/2 months were tested in an infant-control habituation procedure to determine whether they could detect three types of audio-visual relations in the same events. The events portrayed two amodal invariant relations, temporal synchrony and temporal microstructure specifying the composition of the objects, and one modality-specific relation, that between the pitch of the sound and the color/shape of the objects. Subjects were habituated to two events accompanied by their natural, synchronous, and appropriate sounds and then received test trials in which the relation between the visual and the acoustic information was changed. Consistent with Gibson's increasing specificity hypothesis, it was expected that infants would differentiate amodal invariant relations prior to detecting arbitrary, modality-specific relations. Results were consistent with this prediction, demonstrating significant visual recovery to a change in temporal synchrony and temporal microstructure, but not to a change in the pitch-color/shape relations. Two subsequent discrimination studies demonstrated that infants' failure to detect the changes in pitch-color/shape relations could not be attributed to an inability to discriminate the pitch or the color/shape changes used in Experiment 1. Infants showed robust discrimination of the contrasts used.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号