Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Rapid control of responding by sound location is obtained in squirrel monkeys when sound stimuli are presented from one of two loudspeakers, each one adjacent to a response key. With this arrangement of loudspeakers and response keys, squirrel monkeys quickly learn to respond on the key near the source of the sound stimulus, and this pattern is the same whether or not responses near the sound source are differentially reinforced. This result may depend on a pre-experimental tendency in squirrel monkeys to orient head and eyes toward a sound, which would lead the animal to look at the response key in front of the loudspeaker producing the sound. The present experiment sought to determine whether visual stimuli are necessary for rapid control of responding by sound location. Two monkeys were trained in darkness in a sound-localization task similar to that described above. Results were similar to those obtained from animals trained in light, indicating that visual stimuli are not required for rapid acquisition of sound-localization behavior in monkeys.

2.
The effect of a background sound on the auditory localization of a single sound source was examined. Nine loudspeakers were arranged crosswise in the horizontal and the median vertical plane. They ranged from -20 degrees to +20 degrees, with the center loudspeaker at 0 degrees azimuth and elevation. Using vertical and horizontal centimeter scales, listeners verbally estimated the position of a 500-ms broadband noise stimulus presented at the same time as a 2-s background sound emitted by one of the four outer loudspeakers. When the background sound consisted of continuous broadband noise, listeners consistently shifted the apparent target positions away from the background sound locations. This auditory contrast effect, which is consistent with earlier findings, occurred equally in both planes. But when the background sound was changed to a pulse train of noise bursts, the contrast effect decreased in the horizontal plane and increased in the vertical plane. This discrepancy might be due to general differences in the processing of interaural and spectral localization information.

3.
Binaural and monaural localization of sound in two-dimensional space   (Total citations: 2; self-citations: 0; citations by others: 2)
Two experiments were conducted. In experiment 1, part 1, binaural and monaural localization of sounds originating in the left hemifield was investigated. 104 loudspeakers were arranged in a 13 x 8 matrix with 15 degrees separating adjacent loudspeakers in each column and in each row. In the horizontal plane (HP), the loudspeakers extended from 0 degrees to 180 degrees; in the vertical plane (VP), they extended from -45 degrees to 60 degrees with respect to the interaural axis. Findings of special interest were: (i) binaural listeners identified the VP coordinate of the sound source more accurately than did monaural listeners, and (ii) monaural listeners identified the VP coordinate of the sound source more accurately than its HP coordinate. In part 2, it was found that foreknowledge of the HP coordinate of the sound source aided monaural listeners in identifying its VP coordinate, but the converse did not hold. In experiment 2, part 1, localization performances were evaluated when the sound originated from consecutive 45-degree segments of the HP, with the VP segments extending from -22.5 degrees to 22.5 degrees. Part 2 consisted of measuring, on the same subjects, head-related transfer functions by means of a miniature microphone placed at the entrance of their external ear canal. From these data, the 'covert' peaks (defined and illustrated in text) of the sound spectrum were extracted. This spectral cue was advanced to explain why monaural listeners in this study as well as in other studies performed better when locating VP-positioned sounds than when locating HP-positioned sounds. It is not claimed that there is an inherent advantage for localizing sound in the VP; rather, monaural localization proficiency, whether in the VP or HP, depends on the availability of covert peaks which, in turn, rests on the spatial arrangement of the sound sources.

4.
Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants but sacrifice control over sound presentation and therefore are not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining whether online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing.
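The antiphase manipulation at the heart of this screening test is straightforward to sketch. Below is a minimal Python/NumPy example of generating one three-tone trial in which a single tone is phase-inverted across the stereo channels; the tone frequency, duration, and sample rate are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def make_trial(freq=200.0, sr=44100, dur=0.5, antiphase_index=1):
    """Generate three equal-amplitude stereo tones, one of which is
    phase-inverted in the right channel. Over headphones all three sound
    equally loud; over loudspeakers the antiphase tone partially cancels
    in the air and should sound quietest."""
    t = np.arange(int(sr * dur)) / sr
    tone = np.sin(2 * np.pi * freq * t)
    tones = []
    for i in range(3):
        # Flip the right channel's polarity for the designated tone.
        right = -tone if i == antiphase_index else tone
        tones.append(np.stack([tone, right], axis=1))  # shape: (samples, 2)
    return tones

trial = make_trial()
```

A listener "passes" the screen by reliably picking the antiphase tone as quietest, which is expected only when the two channels are delivered to separate ears.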

5.
In 1890, William James hypothesized that emotions are our perception of physiological changes. Many different theories of emotion have emerged since then, but it has been demonstrated that a specifically induced physiological state can influence an individual's emotional responses to stimuli. In the present study, auditory and/or vibrotactile heartbeat stimuli were presented to participants (N = 24), and the stimuli's effect on participants' physiological state and subsequent emotional attitude to affective pictures was measured. In particular, we aimed to investigate the effect of the perceived distance to stimuli on emotional experience. Distant versus close sound reproduction conditions (loudspeakers vs. headphones) were used to identify whether an "embodied" experience can occur in which participants would associate the external heartbeat sound with their own. Vibrotactile stimulation of an experimental chair and footrest was added to magnify the experience. Participants' peripheral heartbeat signals, self-reported valence (pleasantness) and arousal (activation) ratings for the pictures, and memory performance scores were collected. Heartbeat sounds significantly affected participants' heartbeat, the emotional judgments of pictures, and their recall. The effect of distance to stimuli was observed in the significant interaction between the spatial location of the heartbeat sound and the vibrotactile stimulation, which was mainly caused by the auditory-vibrotactile interaction in the loudspeakers condition. This interaction might suggest that vibrations transform the far-sound condition (sound via loudspeakers) into a close-stimulation condition and support the hypothesis that close sounds are more affective than distant ones. These findings have implications for the design and evaluation of mediated environments.

6.
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via 21 loudspeakers mounted horizontally (from 80° on the left to 80° on the right). Participants had to localize the target either by using a swivel hand-pointer or by head-pointing. Individual lateral preferences of eye, ear, hand, and foot were obtained using a questionnaire. With both pointing methods, participants showed a bias in sound localization that was to the side contralateral to the preferred hand, an effect that was unrelated to their overall precision. This partially parallels findings in the visual modality as left-handers typically have a more rightward bias in visual line bisection compared with right-handers. Despite the differences in neural processing of auditory and visual spatial information these findings show similar effects of lateral preference on auditory and visual spatial perception. This suggests that supramodal neural processes are involved in the mechanisms generating laterality in space perception.

7.
The precedence effect is a phenomenon that may occur when a sound from one direction (the lead) is followed within a few milliseconds by the same or a similar sound from another direction (the lag, or the echo). Typically, the lag sound is not heard as a separate event, and changes in the lag sound’s direction cannot be discriminated. The hypothesis is proposed in this study that these two aspects of precedence (echo suppression and discrimination suppression) are at least partially independent phenomena. Two experiments were conducted in which pairs of noise bursts were presented to subjects from two loudspeakers in the horizontal plane to simulate a lead sound and a lag sound (the echo). Echo suppression threshold was measured as the minimum echo delay at which subjects reported hearing two sounds rather than one sound; discrimination suppression threshold was measured as the minimum echo delay at which subjects could reliably discriminate between two positions of the echo. In Experiment 1, it was found that echo suppression threshold was the same as discrimination suppression threshold when measured with a single burst pair (average 5.4 msec). However, when measured after presentation of a train of burst pairs (a condition that may produce “buildup of suppression”), discrimination suppression threshold increased to 10.4 msec, while echo suppression threshold increased to 26.4 msec. The greater buildup of echo suppression than of discrimination suppression indicates that the two phenomena are distinct under buildup conditions and may be the reflection of different underlying mechanisms. Experiment 2 investigated the effect of the directional properties of the lead and lag sounds on discrimination suppression and echo suppression. 
There was no consistent effect of the spatial separation between lead and lag sources on discrimination suppression or echo suppression, nor was there any consistent difference between the two types of thresholds (overall average threshold was 5.9 msec). The negative result in Experiment 2 may have been due to the measurements being obtained only for single-stimulus conditions and not for buildup conditions that may involve more central processing by the auditory system.
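The lead-lag stimulus used in these precedence-effect experiments can be approximated digitally. The sketch below builds a two-channel version in Python/NumPy, with one channel per (simulated) loudspeaker; the burst duration, the default delay, and the two-channel simplification are illustrative assumptions, since the actual experiments used spatially separated loudspeakers rather than separate playback channels.

```python
import numpy as np

def lead_lag_pair(delay_ms=5.4, sr=44100, burst_ms=4.0, lag_gain=1.0):
    """Build a two-channel signal approximating one precedence-effect trial:
    a noise burst on the 'lead' channel, followed by the same burst on the
    'lag' (echo) channel after delay_ms milliseconds."""
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(int(sr * burst_ms / 1000))
    delay = int(round(sr * delay_ms / 1000))
    n = delay + burst.size
    lead = np.zeros(n)
    lag = np.zeros(n)
    lead[:burst.size] = burst          # lead sound starts at t = 0
    lag[delay:] = lag_gain * burst     # identical echo, delayed copy
    return np.stack([lead, lag], axis=1)

sig = lead_lag_pair()
```

Sweeping `delay_ms` across trials would trace out the threshold measurements described above: short delays fuse into a single perceived event, longer ones break apart into two.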

8.
José Morais, Cognition, 1975, 3(2), 127-139
Subjects were asked to recall one of two simultaneous messages coming from hidden loudspeakers situated either at 90° or at 45° from the median plane to the left and to the right. They were told that the messages were coming from two visible dummy loudspeakers which were also situated either at 90° or at 45°. Pre-stimulus cueing of the side to be recalled was given. A significant right-side advantage was obtained in the 90° real-fictitious condition, but not in the other conditions. These results show that a right-side advantage can be obtained with presentation over loudspeakers and unilateral recall, and dismiss a purely structural or purely cognitive view of lateral asymmetries in audition. The role of structural and cognitive factors is discussed.

9.
Head turning and manual pointing to auditory targets have been studied in normal subjects and in subjects with right parietal damage. Important differences were found between these two types of movement. (1) In brain-damaged subjects, audiospatial manual pointing deficit patterns and audiospatial head turning deficit patterns were dissociated. Moreover, head turning deficits tended to appear peripherally in both auditory hemifields, while manual pointing deficits tended to appear unilaterally in the left hemifield. (2) In normal subjects, at all tested eccentricities in both hemifields, head turning performances showed a characteristic undershooting of auditory targets when compared to manual pointing. Results are discussed in terms of differences between the processes underlying audio-motor tasks that involve the head and tasks that involve the hands.

10.
The nature of the connection between musical and spatial processing is controversial. While pitch may be described in spatial terms such as “high” or “low”, it is unclear whether pitch and space are associated but separate dimensions or whether they share representational and processing resources. In the present study, we asked participants to judge whether a target vocal note was the same as (or different from) a preceding cue note. Importantly, target trials were presented as video clips where a singer sometimes gestured upward or downward while singing that target note, thus providing an alternative, concurrent source of spatial information. Our results show that pitch discrimination was significantly biased by the spatial movement in gesture, such that downward gestures made notes seem lower in pitch than they really were, and upward gestures made notes seem higher in pitch. These effects were eliminated by spatial memory load but preserved under verbal memory load conditions. Together, our findings suggest that pitch and space have a shared representation such that the mental representation of pitch is audiospatial in nature.

11.
Acta Psychologica, 2013, 142(3), 410-418
We investigated the representation of azimuthal directions of sound sources under two different conditions. In the first experiment, we examined the participants' mental representation of sound source directions via similarity judgments. Auditory stimuli originating from sixteen loudspeakers positioned equidistantly around the participant were presented in pairs, with the first stimulus serving as the anchor, and thereby providing the context for the second stimulus. For each pair of stimuli, participants had to rate the sound source directions as either similar or dissimilar. In the second experiment, the same participants categorized single sound source directions using verbal direction labels (front, back, left, right, and combinations of any two of these). In both experiments, the directions within the front and back regions were more distinctively categorized than those on the sides, and the sides' categories included more directions than those of the front or back. Furthermore, we found evidence that the left-right decision comprises the basic differentiation of the surrounding regions. These findings illustrate what seem to be central features of the representation of directions in auditory space.

12.
Two experiments test whether the shape of objects that obstruct sound can be perceived by human listeners. Three foam-core shapes of equal area—disk, square, and triangle—were positioned in front of a set of loudspeakers, which emitted broadband noise. On each trial, blindfolded listeners were asked to identify which shape obstructed the noise. Both experiments revealed that under most conditions, listeners could identify the shapes at better-than-chance levels. Experiment 2 also showed that the addition of a second intensity level of broadband noise randomized across trials actually improved performance. This finding suggests that listeners were likely basing their judgments on an acoustic dimension that was invariant—and was perhaps made more salient—over multiple intensities. These results add to the growing literature showing that human listeners are sensitive to sound-structuring surfaces that themselves do not produce sound.

13.
Objectives: We compared the mental representation of sound directions in blind football players, blind non-athletes and sighted individuals. Design: Standing blindfolded in the middle of a circle with 16 loudspeakers, participants judged whether the directions of two subsequently presented sounds were similar or not. Method: Structure dimensional analysis (SDA) was applied to reveal mean cluster solutions for the groups. Results: Hierarchical cluster analysis via SDA resulted in distinct representation structures of sound directions. The blind football players' mean cluster solution consisted of pairs of neighboring directions. The blind non-athletes also clustered the directions in pairs, but included non-adjacent directions. In the sighted participants' structure, frontal directions were clustered pairwise, the absolute back was singled out, and the side regions accounted for more directions. Conclusions: Our results suggest that the mental representation of egocentric auditory space is influenced by sight and by the level of expertise in auditory-based orientation and navigation.

14.
Acquisition of a sound localization discrimination by rats was investigated. Two loudspeakers were located outside an experimental enclosure containing two levers and a dipper feeder. In the same-side condition, responses on the lever nearest the sound-producing speaker were reinforced. Animals in this condition acquired the discrimination rapidly, generally within the first session. In the opposite-side condition, responses on the lever furthest from the sound-producing speaker were reinforced. Acquisition for animals in this condition began below the chance level (50% correct responses) and took on the order of 10 sessions to approach the final, high level. The course of acquisition in both cases appeared to depend upon an initial tendency of rats to respond on the lever nearest the source of sound in this situation. The rise-decay time of the 4-kHz tone burst signal clearly affected the performance level reached. It did not, however, affect the rate at which the discrimination was acquired.

15.
Listeners, whose right ears were blocked, located low-intensity sounds originating from loudspeakers placed 15 deg apart along the horizontal plane on the side of the open, or functioning, ear. In Experiment 1, the stimuli consisted of noise bursts, 1.0 kHz wide and centered at 4.0 through 14.0 kHz in steps of 0.5 kHz. We found that the apparent location of the noise bursts was governed by their frequency composition. Specifically, as the center frequency was increased from 4.0 to about 8.0 kHz, the sound appeared to move away from the frontal sector and toward the side. This migration pattern of the apparent sound source was observed again when the center frequency was increased from 8.0 to about 12.0 kHz. Then, with center frequencies of 13.0 and 14.0 kHz, the sound appeared once more in front. We referred to this relation between frequency composition and apparent location in terms of spatial referent maps. In Experiment 2, we showed that localization was more proficient if the frequency content of the stimulus served to connect adjacent spatial referent maps rather than falling within a single map. By these means, we have further elucidated the spectral cues utilized in monaural localization of sound in the horizontal plane.
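Narrow-band noise bursts of the kind described here (1.0 kHz wide, variable center frequency) can be synthesized by masking the spectrum of white noise. A minimal Python/NumPy sketch follows; the sample rate, duration, and the brick-wall FFT filter are illustrative assumptions rather than the paper's actual stimulus-generation method.

```python
import numpy as np

def bandpass_noise(center_hz, bw_hz=1000.0, sr=44100, dur=0.3, seed=0):
    """Generate a noise burst band-limited to bw_hz around center_hz by
    zeroing all FFT bins outside the band (brick-wall filtering)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(sr * dur))
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / sr)
    mask = np.abs(freqs - center_hz) <= bw_hz / 2  # keep only in-band bins
    return np.fft.irfft(spec * mask, n=x.size)

# e.g. a burst centered at 8.0 kHz, as in the middle of the tested range
burst = bandpass_noise(8000.0)
```

Stepping `center_hz` from 4.0 to 14.0 kHz in 0.5-kHz increments would reproduce the stimulus series whose apparent locations the experiment tracked.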

16.
Listeners identified spoken words, letters, and numbers and the spatial location of these utterances in three listening conditions as a function of the number of simultaneously presented utterances. The three listening conditions were a normal listening condition, in which the sounds were presented over seven possible loudspeakers to a listener seated in a sound-deadened listening room; a one-headphone listening condition, in which a single microphone that was placed in the listening room delivered the sounds to a single headphone worn by the listener in a remote room; and a stationary KEMAR listening condition, in which binaural recordings from an acoustic manikin placed in the listening room were delivered to a listener in the remote room. The listeners were presented one, two, or three simultaneous utterances. The results show that utterance identification was better in the normal listening condition than in the one-headphone condition, with the KEMAR listening condition yielding intermediate levels of performance. However, the differences between listening in the normal and in the one-headphone conditions were much smaller when two, rather than three, utterances were presented at a time. Localization performance was good for both the normal and the KEMAR listening conditions and at chance for the one-headphone condition. The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.

17.
Neurologically normal observers misperceive the midpoint of horizontal lines as systematically leftward of veridical center, a phenomenon known as pseudoneglect. Pseudoneglect is attributed to a tonic asymmetry of visuospatial attention favoring left hemispace. Whereas visuospatial attention is biased toward left hemispace, some evidence suggests that audiospatial attention may possess a right hemispatial bias. If spatial attention is supramodal, then the leftward bias observed in visual line bisection should also be expressed in auditory bisection tasks. If spatial attention is modality specific then bisection errors in visual and auditory spatial judgments are potentially dissociable. Subjects performed a bisection task for spatial intervals defined by auditory stimuli, as well as a tachistoscopic visual line bisection task. Subjects showed a significant leftward bias in the visual line bisection task and a significant rightward bias in the auditory interval bisection task. Performance across both tasks was, however, significantly positively correlated. These results imply the existence of both modality specific and supramodal attentional mechanisms where visuospatial attention has a prepotent leftward vector and audiospatial attention has a prepotent rightward vector of attention. In addition, the biases of both visuospatial and audiospatial attention are correlated.

18.
Auditory saltation is a spatiotemporal illusion in which the judged positions of sound stimuli are shifted toward subsequent stimuli that follow closely in time and space. In this study, the "reduced-rabbit" paradigm and a direct-location method were employed to investigate the effect of spectral sound content on the saltation illusion. Eighteen listeners were presented with sound sequences consisting of three high-pass or low-pass filtered noise bursts. Noise bursts within a sequence were either the same or differed in frequency. Listeners judged the position of the second sound using a hand pointer. When the time interval between the second and third sound was short, the target was shifted toward the location of the subsequent stimulus. This displacement effect did not depend on the spectral content of the first sound, but decreased substantially when the second and third sounds were different. The results indicated an effect of spectral difference on saltation that is discussed with regard to a recently proposed stimulus integration approach in which saltation was attributed to an interaction between perceptual processing of temporally proximate stimuli.

19.
20.
Sound was presented to monkeys through one of two loudspeakers, each adjacent to a response key. A response on the key adjacent to the sound source was reinforced (correct response). A response on the other key produced a timeout (incorrect response). Under these conditions, over 90% of responses were correct within one or two sessions. When the procedure was changed so that a response on either key was reinforced independently of which speaker was sounding, similar control by location developed within one or two sessions. When conditions were modified by moving the keys away from the immediate vicinity of the speakers, the animals required about 20 sessions to reach a stable level of greater than 90% correct responses under differential reinforcement conditions. No control by location developed under nondifferential reinforcement conditions.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号