Similar Documents
20 similar documents found (search time: 15 ms)
1.
Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found evidence for spatially distributed processing of speech and environmental sounds across a substantial extent of the temporal cortices. Most importantly, regions previously reported as selective for speech over environmental sounds also contained distributed information. The results indicate that temporal cortices supporting complex auditory processing, including regions previously described as speech-selective, are in fact highly heterogeneous.

2.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

3.
We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.

4.
Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.

5.
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding, that when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sound as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.

6.
In this study, we investigated the interactions between temporal and spatial information in auditory working memory. In two experiments, participants were presented with sequences of sounds originating from different locations in space and were then asked to recall either their position or their serial order. In Experiment 1, attention during encoding was manipulated by contrasting 'pure' blocks (i.e., location-only or serial-order-only trials) with 'mixed' blocks (i.e., different percentages of spatial and serial-order trials). In Experiment 2, 'pure' blocks were contrasted with blocks in which spatial and serial-order trials were intermixed with a third task requiring a semantic categorization of sounds. Results from both experiments showed that, whereas serial-order recall is linearly affected by the simultaneous encoding of a concurrent feature, recall of position is mostly unaffected by concurrent feature encoding. In contrast, overall performance was lower for spatial recall than for serial recall. We concluded that serial order and location of items appear to be encoded independently in auditory working memory. Serial order is easier to recall but strongly affected by the processing of concurrent item dimensions, whereas item location is more difficult to recall but encoded relatively automatically, as shown by its strong resistance to interfering dimensions during encoding.

7.
We examined the role of Pavlovian and operant relations in behavioral momentum by arranging response-contingent alternative reinforcement in one component of a three-component multiple concurrent schedule with rats. This permitted the simultaneous arranging of different response-reinforcer (operant) and stimulus-reinforcer (Pavlovian) contingencies during three baseline conditions. Auditory or visual stimuli were used as discriminative stimuli within the multiple concurrent schedules. Resistance to change of a target response was assessed during a single session of extinction following each baseline condition. The rate of the target response during baseline varied inversely with the rate of response-contingent reinforcement derived from a concurrent source, regardless of whether the discriminative stimuli were auditory or visual. Resistance to change of the target response, however, did depend on the discriminative-stimulus modality. Resistance to change in the presence of visual stimuli was a positive function of the Pavlovian contingencies, whereas resistance to change was unrelated to either the operant or Pavlovian contingencies when the discriminative stimuli were auditory. Stimulus salience may be a factor in determining the differences in resistance to change across sensory modalities.

8.
Two visual world experiments investigated the activation of semantically related concepts during the processing of environmental sounds and spoken words. Participants heard environmental sounds such as barking or spoken words such as “puppy” while viewing visual arrays with objects such as a bone (a semantically related competitor) and a candle (an unrelated distractor). In Experiment 1, a puppy (target) was also included in the visual array; in Experiment 2, it was not. During both types of auditory stimuli, competitors were fixated significantly more than distractors, supporting the coactivation of semantically related concepts in both cases; comparisons of the two types of auditory stimuli also revealed significantly larger effects with environmental sounds than spoken words. We discuss implications of these results for theories of semantic knowledge.

9.
Accuracy rates for auditory and tactile recognition of naturalistic stimuli over a 7-day period were compared. Forty subjects listened to 50, 107, or 194 naturalistic sounds and were tested immediately or after delays of 2 or 7 days. Thirty other subjects handled but did not visually inspect 150 common objects and were tested over the same three delay intervals. Recognition accuracy for sounds was 87.5%, 82.5%, and 80.4%, while common objects were recognized at accuracy rates of 96.0%, 93.8%, and 88.5%. Tactile recognition memory was superior to auditory recognition memory, and the recognition accuracy of both modalities was affected by the delay interval. The number of items inspected had no effect on recognition memory for sounds. Following a delay of 1 week, recognition accuracy relative to the original level was 92% for both modalities.
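A quick check of the final figure (a reader's arithmetic sketch, using only the accuracies reported in the abstract): dividing each modality's 7-day accuracy by its immediate accuracy gives

\[ \frac{80.4}{87.5} \approx 0.92 \quad \text{(sounds)}, \qquad \frac{88.5}{96.0} \approx 0.92 \quad \text{(objects)}, \]

consistent with the stated 92% relative accuracy for both modalities.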

10.
The importance of selecting between a target and a distractor in producing auditory negative priming was examined in three experiments. In Experiment 1, participants were presented with a prime pair of sounds, followed by a probe pair of sounds. For each pair, listeners were to identify the sound presented to the left ear. Under these conditions, participants were especially slow to identify a sound in the probe pair if it had been ignored in the preceding prime pair. Evidence of auditory negative priming was also apparent when the prime sound was presented in isolation to only one ear (Experiment 2) and when the probe target was presented in isolation to one ear (Experiment 3). In addition, the magnitude of the negative priming effect was increased substantially when only a single prime sound was presented. These results suggest that the emergence of auditory negative priming does not depend on selection between simultaneous target and distractor sounds.

11.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed processing conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that resources for processing conjunctions of complex features within vision and audition are modality specific.

12.
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g., deciding whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as the inferior frontal gyrus (IFG) and motor cortices, even in the absence of an explicit task. To investigate this, we applied spectral mixes of a flute sound and either vowels or specific musical instrument sounds (e.g., a trumpet) in an fMRI study, in combination with three different instructions. The instructions revealed no information about stimulus features, information about the musical-instrument features, or information about the vowel features. The results demonstrated that, besides an involvement of posterior temporal areas, stimulus expectancy modulated in particular a network comprising the IFG and premotor cortices during this passive listening task.

13.
Recent work on non-visual modalities aims to translate, extend, revise, or unify claims about perception beyond vision. This paper presents central lessons drawn from attention to hearing, sounds, and multimodality. It focuses on auditory awareness and its objects, and it advances more general lessons for perceptual theorizing that emerge from thinking about sounds and audition. The paper argues that sounds and audition no better support the privacy of perception’s objects than does vision; that perceptual objects are more diverse than an exclusively visual perspective suggests; and that multimodality is rampant. In doing so, it presents an account according to which audition affords awareness as of not just sounds, but also environmental happenings beyond sounds.

14.
The purpose of the present study was to examine the nature of auditory representations by manipulating the semantic and physical relationships between auditory objects. On each trial, listeners heard a group of four simultaneous sounds for 1 sec, followed by 350 msec of noise, and then either the same sounds or three of the same plus a new one. Listeners completed a change-detection task and an object-encoding task. For change detection, listeners made a same-different judgment for the two groups of sounds. Object encoding was measured by presenting probe sounds that either were or were not present in the two groups. In Experiments 1 and 3, changing the target to an object that was acoustically different from but semantically the same as the original target resulted in more errors on both tasks than when the target changed to an acoustically and semantically different object. In Experiment 2, comparison of semantic and acoustic effects demonstrated that acoustics provide a weaker cue than semantics for both change detection and object encoding. The results suggest that listeners rely more on semantic information than on physical detail.

15.
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that "popped out" as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may be initially shared among concurrent sound objects, thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis.

16.
In three experiments, we addressed the issue of attention effects on unattended sound processing when one auditory stream is selected from three potential streams, creating a simple model of the cocktail party situation. We recorded event-related brain potentials (ERPs) to determine the way in which unattended, task-irrelevant sounds were stored in auditory memory (i.e., as one integrated stream or as two distinct streams). Subjects were instructed to ignore all the sounds and attend to a visual task or to selectively attend to a subset of the sounds and perform a task with the sounds (Experiments 1 and 2). A third (behavioral) experiment was conducted to test whether global pattern violations (used in Experiments 1 and 2) were perceptible when the sounds were segregated. We found that the mismatch negativity ERP component, an index of auditory change detection, was evoked by infrequent pattern violations occurring in the unattended sounds when all the sounds were ignored, but not when attention was focused on a subset of the sounds. The results demonstrate that multiple unattended sound streams can segregate by frequency range but that selectively attending to a subset of the sounds can modify the extent to which the unattended sounds are processed. These results are consistent with models in animal and human studies showing that attentional control can limit the processing of unattended input in favor of attended sensory inputs, thereby facilitating the ability to achieve behavioral goals.

17.
A cross-modal dual-attention experiment was completed by 198 undergraduates in three blocks that each consisted of an orientation task and a concurrent listening task. For the orientation task, participants located regions on an LCD that were cued by speech or one of four types of symbolic auditory cues (i.e., earcons); the concurrent task required participants to listen to and answer questions about GRE sample test passages. Results indicated that the orientation task had no effect on comprehension of the passages compared to a passage-only control for four of the five auditory cue types. All auditory cues resulted in high performance on the orientation task, with speech and complex sounds exhibiting the highest performance. Implications for auditory display design and for assistive technologies for visually impaired persons are discussed.

18.
Change blindness, or the failure to detect (often large) changes to visual scenes, has been demonstrated in a variety of different situations. Failures to detect auditory changes are far less studied, and thus little is known about the nature of change deafness. Five experiments were conducted to explore the processes involved in change deafness by measuring explicit change detection as well as auditory object encoding. The experiments revealed that considerable change deafness occurs, even though auditory objects are encoded quite well. Familiarity with the objects did not affect detection or recognition performance. Whereas spatial location was not an effective cue, fundamental frequency and the periodicity/aperiodicity of the sounds provided important cues for the change-detection task. Implications for the mechanisms responsible for change deafness and auditory sound organization are discussed.

19.
The present study investigated whether memory for a room-sized spatial layout learned through auditory localization of sounds exhibits orientation dependence similar to that observed for spatial memory acquired from stationary viewing of the environment. Participants learned spatial layouts by viewing objects or localizing sounds and then performed judgments of relative direction among remembered locations. The results showed that direction judgments following auditory learning were performed most accurately at a particular orientation in the same way as were those following visual learning, indicating that auditorily encoded spatial memory is orientation dependent. In combination with previous findings that spatial memories derived from haptic and proprioceptive experiences are also orientation dependent, the present finding suggests that orientation dependence is a general functional property of human spatial memory independent of learning modality.

20.
Behavioral studies have suggested a heightened impact of emotionally laden perceptual input in schizophrenia spectrum disorders, in particular in patients with prominent positive symptoms. Decoupling of prefrontal and posterior cortices during stimulus processing, reflecting a loosening of prefrontal control over incoming affectively laden information, may underlie this abnormality. Preselected groups of individuals with low versus high positive schizotypy (lower and upper quartiles of a large screening sample) were tested. During exposure to auditory displays of strong emotions (anger, sadness, cheerfulness), individuals with elevated levels of positive schizotypal symptoms showed weaker right-hemisphere prefrontal–posterior coupling (EEG coherence) than their symptom-free counterparts. This applied to negative emotions in particular and was most pronounced during confrontation with anger. The findings indicate a link between positive symptoms and a heightened impact of threatening emotionally laden stimuli in particular, which might lead to exacerbation of positive symptoms and inappropriate behavior in interpersonal situations.
