Similar Documents
20 similar documents found (search time: 31 ms)
1.
Although our subjective experience of the world is one of discrete sound sources, the individual frequency components that make up these separate sources are spread across the frequency spectrum. Listeners use various simple cues, including common onset time and harmonicity, to help them achieve this perceptual separation. Our ability to use harmonicity to segregate two simultaneous sound sources is constrained by the frequency resolution of the auditory system, and is much more effective for low-numbered, resolved harmonics than for higher-numbered, unresolved ones. Our ability to use interaural time differences (ITDs) in perceptual segregation poses a paradox. Although ITDs are the dominant cue for the localization of complex sounds, listeners cannot use ITDs alone to segregate the speech of a single talker from similar simultaneous sounds. Listeners are, however, very good at using ITD to track a particular sound source across time. This difference might reflect two different levels of auditory processing, indicating that listeners attend to grouped auditory objects rather than to those frequencies that share a common ITD.

2.
The relative importance of voice pitch and interaural difference cues in facilitating the recognition of both of two concurrently presented synthetic vowels was measured. The interaural difference cues used were an interaural time difference (400 μsec ITD), two magnitudes of interaural level difference (15 dB and infinite ILD), and a combination of ITD and ILD (400 μsec plus 15 dB). The results are analysed separately for those cases where both vowels are identical and those where they are different. When the two vowels are different, a voice pitch difference of one semitone is found to improve the percentage of correct reports of both vowels by 35.8% on average. However, the use of interaural difference cues results in an improvement of 11.5% on average when there is a voice pitch difference of one semitone, but only a non-significant 0.1% when there is no voice pitch difference. When the two vowels are identical, imposition of either a voice pitch difference or binaural difference reduces performance in a subtractive manner. It is argued that the smaller size of the interaural difference effect is not due to a "ceiling effect" but is characteristic of the relative importance of the two kinds of cues in this type of experiment. The possibility that the improvement due to interaural difference cues may in fact be due to monaural processing is discussed. A control experiment is reported for the ITD condition, which suggests binaural processing does occur for this condition. However, it is not certain whether the improvement in the ILD condition is due to binaural processing or use of the improvement in signal-to-noise ratio for a single vowel at each ear.

3.
The position and image-width of the simultaneous images produced by very short tone pulses were measured as a function of interaural time difference (ITD) at both low- (250 and 800 Hz) and high- (2500 and 8000 Hz) frequencies using a direct-estimation technique.

Primary images are lateralized towards the ear receiving the leading stimulus. At low frequencies image position is proportional to interaural phase-difference (IPD) below 90° and remains at the lead-ear for larger values. At high frequencies image position is proportional to ITD up to 500-1000 μsec. Secondary images are reported on the opposite side of the head for IPDs greater than 180° at low frequencies, and at ITDs greater than 500 μsec at high frequencies. Image width is approximately constant for all ITDs and both images at a given frequency, but becomes more compact as frequency increases.

The data are discussed in terms of onset cues and stimulus fine-structure cues. The best explanation is in terms of an onset mechanism, but one that is calibrated in terms of IPD at low frequencies. The existence of double images is explained in terms of a breakdown in the mechanism determining fusion.
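The ITD-to-IPD relation underlying these results can be illustrated with a short sketch. The conversion IPD = 360° × f × ITD is standard acoustics; the code, function name, and example values are our illustration, not the paper's.

```python
def ipd_degrees(freq_hz: float, itd_sec: float) -> float:
    """Interaural phase difference (degrees) produced by an ITD at a given frequency."""
    return 360.0 * freq_hz * itd_sec

# At 250 Hz, a 500-microsecond ITD gives an IPD of about 45 degrees:
# still in the region where image position grows with IPD.
low = ipd_degrees(250, 500e-6)

# At 800 Hz, an 800-microsecond ITD gives about 230 degrees: past the
# ~90-degree point (primary image pinned at the lead ear) and past 180
# degrees, where a secondary image appears on the opposite side.
high = ipd_degrees(800, 800e-6)

print(low, high)
```

This makes concrete why the same ITD can produce a fused, lateralized image at one frequency and a double image at another.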

4.
The spatial-temporal association indicates that time is represented spatially along a left-to-right line. It is unclear whether the spatial-temporal association reflects mainly a perceptual or a motor component. In addition, the spatial-temporal association is not consistently found using a time reproduction task. A possible explanation is that such tasks have classically used a single, non-lateralized response button. Using two lateralized response buttons, the aim of this study was to find a spatial-temporal association in a time reproduction task. To account for the perceptual component, reference and target stimuli were presented in different spaces across four experiments. In all experiments, a Spatial-Temporal Association of Response Codes (STEARC) effect was found, and this effect was not modulated by the spatial position of either the reference or the target stimuli. The results suggest that the spatial-temporal association derived mainly from the spatial information provided by the response buttons, reflecting a motor rather than a visuospatial influence.

5.
The tuning of auditory spatial attention with respect to interaural level and time difference cues (ILDs and ITDs) was explored using a rhythmic masking release (RMR) procedure. Listeners heard tone sequences defining one of two simple target rhythms, interleaved with arhythmic masking tones, presented over headphones. There were two conditions, which differed only in the ILD of the tones defining the target rhythm: For one condition, ILD was 0 dB and the perceived lateral position was central, and for the other, ILD was 4 dB and the perceived lateral position was to the right; target tone ITD was always zero. For the masking tones, ILD was fixed at 0 dB and ITDs were varied, giving rise to a range of lateral positions determined by ITD. The listeners' task was to attend to and identify the target rhythm. The data showed that target rhythm identification accuracy was low, indicating that maskers were effective, when target and masker shared spatial position, but not when they shared only ITD. A clear implication is that at least within the constraints of the RMR paradigm, overall spatial position, and not ITD, is the substrate for auditory spatial attention.

6.
Even when the speaker, context, and speaking style are held fixed, the physical properties of naturally spoken utterances of the same speech sound vary considerably. This variability imposes limits on our ability to distinguish between different speech sounds. We present a conceptual framework for relating the ability to distinguish between speech sounds in single-token experiments (in which each speech sound is represented by a single waveform) to resolution in multiple-token experiments. Experimental results indicate that this ability is substantially reduced by an increase in the number of tokens from 1 to 4, but that there is little further reduction when the number of tokens increases to 16. Furthermore, although there is little relation between the ability to distinguish between a given pair of tokens in the multiple-token and the single-token experiments, there is a modest correlation between the ability to distinguish specific vowel tokens in the 4- and 16-token experiments. These results suggest that while listeners use a multiplicity of cues to distinguish between single tokens of a pair of vowel sounds, so that performance is highly variable both across tokens and listeners, they use a smaller set when distinguishing between populations of naturally produced vowel tokens, so that variability is reduced. The effectiveness of the cues used in the latter case is limited more by internal noise than by the variability of the cues themselves.
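The closing claim, that internal noise limits performance more than cue variability does, can be framed in standard signal-detection terms. This is our sketch with hypothetical numbers, not the authors' model: discriminability d′ falls as external (token) variability adds to internal noise, but only modestly when internal noise dominates.

```python
import math

def d_prime(delta_mean: float, sigma_internal: float, sigma_external: float) -> float:
    """Discriminability when internal noise and stimulus (token) variability add in quadrature."""
    return delta_mean / math.sqrt(sigma_internal ** 2 + sigma_external ** 2)

# Single-token case: only internal noise limits performance.
single_token = d_prime(1.0, 0.5, 0.0)

# Multi-token case: token variability adds, but because internal noise
# dominates (0.5 >> 0.2), d' drops only modestly.
multi_token = d_prime(1.0, 0.5, 0.2)

print(single_token, multi_token)
```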

7.
Frequency- and amplitude-modulated (FM and AM) sounds are the building blocks of complex sounds. In the present study, we investigated the ability of human observers to process spatial information in an important class of FM sounds: broadband directional sweeps common in natural communication signals such as speech. The stimuli consisted of linear or logarithmic unidirectional FM pulses that swept either up or down in frequency at various rates. Spatial localization thresholds monotonically improved as sweep duration decreased and as sweep rate increased, but no difference in performance was observed between logarithmic and linear or between up- and down-frequency sweeps. Counterintuitive reversals in localization were observed which suggested that the localization of high-frequency sweeps may be strongly dominated by amplitude information even in situations in which one might consider timing cues to be critical. Implications of these findings for the localization of complex sounds are discussed.

8.
The role of interaural time difference (ITD) in perceptual grouping and selective attention was explored in 3 experiments. Experiment 1 showed that listeners can use small differences in ITD between 2 sentences to say which of 2 short, constant target words was part of the attended sentence, in the absence of talker or fundamental frequency differences. Experiments 2 and 3 showed that listeners do not explicitly track components that share a common ITD. Their inability to segregate a harmonic from a target vowel by a difference in ITD was not substantially changed by the vowel being placed in a sentence context, where the sentence shared the same ITD as the rest of the vowel. The results indicate that in following a particular auditory sound source over time, listeners attend to perceived auditory objects at particular azimuthal positions rather than attend explicitly to those frequency components that share a common ITD.

9.
Several studies have demonstrated that mammals, birds and fish use comparable spatial learning strategies. Unfortunately, except in insects, few studies have investigated spatial learning mechanisms in invertebrates. Our study aimed to identify the strategies used by cuttlefish (Sepia officinalis) to solve a spatial task commonly used with vertebrates. A new spatial learning procedure using a T-maze was designed. In this maze, the cuttlefish learned how to enter a dark and sandy compartment. A preliminary test confirmed that individual cuttlefish showed an untrained side-turning preference (preference for turning right or left) in the T-maze. This preference could be reliably detected in a single probe trial. In the following two experiments, each individual was trained to enter the compartment opposite to its side-turning preference. In Experiment 1, distal visual cues were provided around the maze. In Experiment 2, the T-maze was surrounded by curtains and two proximal visual cues were provided above the apparatus. In both experiments, after acquisition, strategies used by cuttlefish to orient in the T-maze were tested by creating a conflict between the formerly rewarded algorithmic behaviour (turn, response learning) and the visual cues identifying the goal (place learning). Most cuttlefish relied on response learning in Experiment 1; the two strategies were used equally often in Experiment 2. In these experiments, the salience of cues provided during the experiment determined whether cuttlefish used response or place learning to solve this spatial task. Our study demonstrates for the first time the presence of multiple spatial strategies in cuttlefish that appear to closely parallel those described in vertebrates.

10.
The role of visual and body movement information in infant search
Three experiments investigated the use of visual input and the body movement input arising from movement through the world in spatial orientation. Infants between 9 1/2 and 18 months participated in a search task in which they searched for a toy hidden in 1 of 2 containers. Prior to beginning search, either the infants or the containers were rotated 180 degrees; these rotations occurred in a lit or dark environment. The experiments were distinguished by the environmental cues for object location: Experiment 1 used a position cue, Experiment 2 a color cue, and Experiment 3 both position and color cues. Accuracy was better in Experiments 2 and 3 than in Experiment 1. All studies found that search was best after infant movement in the light; all other conditions led to equivalently worse performance. These results are discussed relative to a theoretical characterization of spatial coding focusing on the uses of spatial information.

11.
Understanding animals’ spatial perception is a critical step toward discerning their cognitive processes. The spatial sense is multimodal and based on both the external world and mental representations of that world. Navigation in each species depends upon its evolutionary history, physiology, and ecological niche. We carried out foraging experiments on wild vervet monkeys (Chlorocebus pygerythrus) at Lake Nabugabo, Uganda, to determine the types of cues used to detect food and whether associative cues could be used to find hidden food. Our first and second set of experiments differentiated between vervets’ use of global spatial cues (including the arrangement of feeding platforms within the surrounding vegetation) and/or local layout cues (the position of platforms relative to one another), relative to the use of goal-object cues on each platform. Our third experiment provided an associative cue to the presence of food with global spatial, local layout, and goal-object cues disguised. Vervets located food above chance levels when goal-object cues and associative cues were present, and visual signals were the predominant goal-object cues that they attended to. With similar sample sizes and methods as previous studies on New World monkeys, vervets were not able to locate food using only global spatial cues and local layout cues, unlike all five species of platyrrhines thus far tested. Relative to these platyrrhines, the spatial location of food may need to stay the same for a longer time period before vervets encode this information, and goal-object cues may be more salient for them in small-scale space.

12.
This project investigated the ways in which visual cues and bodily cues from self-motion are combined in spatial navigation. Participants completed a homing task in an immersive virtual environment. In Experiments 1A and 1B, the reliability of visual cues and self-motion cues was manipulated independently and within-participants. Results showed that participants weighted visual cues and self-motion cues based on their relative reliability and integrated these two cue types optimally or near-optimally according to Bayesian principles under most conditions. In Experiment 2, the stability of visual cues was manipulated across trials. Results indicated that cue instability affected cue weights indirectly by influencing cue reliability. Experiment 3 was designed to mislead participants about cue reliability by providing distorted feedback on the accuracy of their performance. Participants received feedback that their performance with visual cues was better and that their performance with self-motion cues was worse than it actually was or received the inverse feedback. Positive feedback on the accuracy of performance with a given cue improved the relative precision of performance with that cue. Bayesian principles still held for the most part. Experiment 4 examined the relations among the variability of performance, rated confidence in performance, cue weights, and spatial abilities. Participants took part in the homing task over two days and rated confidence in their performance after every trial. Cue relative confidence and cue relative reliability had unique contributions to observed cue weights. The variability of performance was less stable than rated confidence over time. Participants with higher mental rotation scores performed relatively better with self-motion cues than visual cues. 
Across all four experiments, consistent correlations were found between observed weights assigned to cues and relative reliability of cues, demonstrating that the cue-weighting process followed Bayesian principles. Results also pointed to the important role of subjective evaluation of performance in the cue-weighting process and led to a new conceptualization of cue reliability in human spatial navigation.
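The Bayesian cue-weighting scheme these experiments test can be sketched as reliability-weighted averaging of two independent Gaussian cues. This is a minimal illustration with made-up variances; the function and variable names are ours, not the authors'.

```python
def combine_cues(mu_visual: float, sigma_visual: float,
                 mu_motion: float, sigma_motion: float):
    """Optimal (Bayesian) combination of two independent Gaussian cue estimates."""
    r_v = 1.0 / sigma_visual ** 2   # reliability = inverse variance
    r_m = 1.0 / sigma_motion ** 2
    w_v = r_v / (r_v + r_m)         # each cue's weight tracks its relative reliability
    mu = w_v * mu_visual + (1.0 - w_v) * mu_motion
    sigma = (1.0 / (r_v + r_m)) ** 0.5  # combined estimate is more precise than either cue
    return mu, sigma, w_v

# The more reliable visual cue (smaller sigma) gets the larger weight:
mu, sigma, w_v = combine_cues(mu_visual=10.0, sigma_visual=2.0,
                              mu_motion=14.0, sigma_motion=4.0)
print(w_v, mu, sigma)
```

Manipulating a cue's reliability shifts its weight, which is the signature behavior the homing experiments look for.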

13.
Three experiments examined repetition priming for meaningful environmental sounds (e.g., clock ticking, tooth brushing, toilet flushing, etc.) in a sound stem identification paradigm using brief sound cues. Prior encoding of target sounds together with their associated names facilitated subsequent identification of sound stems relative to nonstudied controls. In contrast, prior exposure to names alone in the absence of the environmental sounds did not prime subsequent sound stem identification performance at all (Experiments 1 and 3). Explicit and implicit memory were dissociated such that sound stem cued recall was higher following semantic than nonsemantic encoding, whereas sound priming was insensitive to manipulations of encoding depth (Experiments 2 and 3). These results extend the findings of long-term repetition priming into the auditory nonverbal domain and suggest that priming for environmental sounds is mediated primarily by perceptual processes.

14.
In 4 experiments, the authors investigated the influence of situational familiarity with the judgmental context on the process of lie detection. They predicted that high familiarity with a situation leads to a more pronounced use of content cues when making judgments of veracity. Therefore, they expected higher classification accuracy of truths and lies under high familiarity. Under low situational familiarity, they expected that people achieve lower accuracy rates because they use more nonverbal cues for their veracity judgments. In all 4 experiments, participants with high situational familiarity achieved higher accuracy rates in classifying both truthful and deceptive messages than participants with low situational familiarity. Moreover, mediational analyses demonstrated that higher classification accuracy in the high-familiarity condition was associated with more use of verbal content cues and less use of nonverbal cues.

15.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

16.
Two experiments are reported in which the possibility that auditory attention may be controlled in a stimulus-driven manner by duration, intensity, and timbre cues was examined. In both experiments, listeners were presented with a cue followed, after a variable time period of a 150-, 450-, or 750-msec stimulus onset asynchrony (SOA), by a target. In three different conditions for each experiment, the duration, intensity, or timbre relation between the cue and the target was varied so that, on 50% of the trials, the two sounds were identical and, on 50% of the trials, the two sounds were different in the manipulated feature. The two experiments differed only in the judgment required, with listeners in Experiment 1 identifying the duration, intensity, or timbre of the target and listeners in Experiment 2 indicating whether the target incorporated a brief silent gap. In both experiments, performance was observed to depend on both the similarity of and the time between the cue and the target. Specifically, whereas at the 150-msec SOA performance was best when the target was identical to the preceding cue, at the 750-msec SOA performance was best when the cue and the target differed. This pattern establishes the existence of duration-, intensity-, and timbre-based auditory inhibition of return. The theoretical implications of these results are considered.

18.
The present experiments tested whether endogenous and exogenous cues produce separate effects on target processing. In Experiment 1, participants discriminated whether an arrow presented left or right of fixation pointed to the left or right. For 1 group, the arrow was preceded by a peripheral noninformative cue. For the other group, the arrow was preceded by a central, symbolic, informative cue. The 2 types of cues modulated the spatial Stroop effect in opposite ways, with endogenous cues producing larger spatial Stroop effects for valid trials and exogenous cues producing smaller spatial Stroop effects for valid trials. In Experiments 2A and 2B, the influence of peripheral noninformative and peripheral informative cues on the spatial Stroop effect was directly compared. The spatial Stroop effect was smaller for valid than for invalid trials for both types of cues. These results point to a distinction between the influence of central and peripheral attentional cues on performance and are not consistent with a unitary view of endogenous and exogenous attention.

19.
The Barnes maze is a spatial memory task that requires subjects to learn the position of a hole that can be used to escape the brightly lit, open surface of the maze. Two experiments assessed the relative importance of spatial (extra-maze) versus proximal visible cues in solving the maze. In Experiment 1, four groups of mice were trained either with or without a discrete visible cue marking the location of the escape hole, which was either in a fixed or variable location across trials. In Experiment 2, all mice were trained with the discrete visible cue marking the target hole location. Two groups were identical to the cued-target groups from Experiment 1, with either fixed or variable escape locations. For these mice, the discrete cue either was the sole predictor of the target location or was perfectly confounded with the spatial extra-maze cues. The third group also used a cued variable target, but a curtain was drawn around the maze to prevent the use of spatial cues to guide navigation. Probe trials with all escape holes blocked were conducted to dissociate the use of spatial and discrete proximal cues. We conclude that the Barnes maze can be solved efficiently using spatial, visual cue, or serial-search strategies. However, mice showed a strong preference for using the distal room cues, even when a discrete visible cue clearly marked the escape location. Importantly, these data show that the cued-target control version of the Barnes maze as typically conducted does not dissociate spatial from nonspatial abilities.

20.
Similarities have been observed in the localization of the final position of moving visual and moving auditory stimuli: Perceived endpoints that are judged to be farther in the direction of motion in both modalities likely reflect extrapolation of the trajectory, mediated by predictive mechanisms at higher cognitive levels. However, actual comparisons of the magnitudes of displacement between visual tasks and auditory tasks using the same experimental setup are rare. As such, the purpose of the present free-field study was to investigate the influences of the spatial location of motion offset, stimulus velocity, and motion direction on the localization of the final positions of moving auditory stimuli (Experiments 1 and 2) and moving visual stimuli (Experiment 3). To assess whether auditory performance is affected by dynamically changing binaural cues that are used for the localization of moving auditory stimuli (interaural time differences for low-frequency sounds and interaural intensity differences for high-frequency sounds), two distinct noise bands were employed in Experiments 1 and 2. In all three experiments, less precise encoding of spatial coordinates in paralateral space resulted in larger forward displacements, but this effect was drowned out by the underestimation of target eccentricity in the extreme periphery. Furthermore, our results revealed clear differences between visual and auditory tasks. Displacements in the visual task were dependent on velocity and the spatial location of the final position, but an additional influence of motion direction was observed in the auditory tasks. Together, these findings indicate that the modality-specific processing of motion parameters affects the extrapolation of the trajectory.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号