Similar Documents
20 similar documents retrieved.
1.
The sharing of processing resources between the senses was investigated by examining the effects of visual task load on auditory event-related brain potentials (ERPs). In Experiment 1, participants completed both a zero-back and a one-back visual task while a tone pattern or a harmonic series was presented. N1 and P2 waves were modulated by visual task difficulty, but neither the mismatch negativity (MMN) elicited by deviant stimuli from the tone pattern nor the object-related negativity (ORN) elicited by mistuning from the harmonic series was affected. In Experiment 2, participants responded to identity (what) or location (where) in vision, while ignoring sounds alternating in either pitch (what) or location (where). Auditory ERP modulations were consistent with task difficulty rather than with task specificity. In Experiment 3, we investigated auditory ERP generation under conditions of no visual task. The results are discussed with respect to a distinction between process-general (N1 and P2) and process-specific (MMN and ORN) auditory ERPs.
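As context for how components such as the MMN are typically quantified, the sketch below computes a deviant-minus-standard difference wave and picks the most negative deflection in a typical MMN latency window. The array shapes, simulated data, and window bounds are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch: an MMN-style difference wave from epoched single-electrode EEG.
# All data here are simulated noise; shapes and windows are assumptions.
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """Average ERPs per condition and subtract: deviant minus standard."""
    # epochs: (n_trials, n_samples) arrays for one electrode (e.g., Fz)
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def peak_in_window(wave, times, t_min, t_max):
    """Most negative deflection (amplitude, latency) inside a latency window."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.flatnonzero(mask)[np.argmin(wave[mask])]
    return wave[idx], times[idx]

rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.5, 600)            # -100..500 ms around stimulus onset
standard = rng.normal(0.0, 1.0, (180, 600))    # frequent standards
deviant = rng.normal(0.0, 1.0, (20, 600))      # rare deviants

mmn = difference_wave(standard, deviant)
amp, lat = peak_in_window(mmn, times, 0.10, 0.25)  # typical MMN window
print(f"MMN-like peak: {amp:.2f} uV at {lat * 1000:.0f} ms")
```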

2.
Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener’s attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.

3.
The physiological processes underlying the segregation of concurrent sounds were investigated through the use of event-related brain potentials. The stimuli were complex sounds containing multiple harmonics, one of which could be mistuned so that it was no longer an integer multiple of the fundamental. Perception of concurrent auditory objects increased with degree of mistuning and was accompanied by negative and positive waves that peaked at 180 and 400 ms poststimulus, respectively. The negative wave, referred to as object-related negativity, was present during passive listening, but the positive wave was not. These findings indicate bottom-up and top-down influences during auditory scene analysis. Brain electrical source analyses showed that distinguishing simultaneous auditory objects involved a widely distributed neural network that included auditory cortices, the medial temporal lobe, and posterior association cortices.
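The mistuned-harmonic stimulus described here is straightforward to synthesize. The sketch below builds a complex of equal-amplitude harmonics and shifts one partial off its integer multiple; all parameter values (fundamental, number of harmonics, degree of mistuning) are assumptions for illustration, not the study's settings.

```python
# Minimal sketch: a harmonic complex with one mistuned partial.
# Parameter values are illustrative assumptions, not the study's stimuli.
import numpy as np

def harmonic_complex(f0=200.0, n_harmonics=10, mistuned=3, mistune_pct=8.0,
                     dur=0.4, sr=44100):
    """Sum of equal-amplitude harmonics; one partial shifted off its integer multiple."""
    t = np.arange(int(dur * sr)) / sr
    signal = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned:
            f *= 1.0 + mistune_pct / 100.0  # e.g., 3rd harmonic raised by 8%
        signal += np.sin(2 * np.pi * f * t)
    return signal / n_harmonics  # normalize peak amplitude

tone = harmonic_complex()  # with sufficient mistuning, the shifted partial "pops out"
```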

5.
Two tone bursts separated by a silent interval and embedded in a background white noise were presented to elderly subjects (M age = 71.3 years) and young subjects (M age = 22.2 years). Subjects were required to judge when the two tone bursts fused perceptually by adjusting the duration of tone-one. The bursts were separated by six discrete interstimulus intervals of 4, 8, 16, 24, 32, or 40 ms. The tone-two burst was held constant at 100 ms. Fusion point was defined as that critical tone-one duration at which the two tones fused (were perceived as one). Elderly subjects reached fusion point at longer critical tone-one durations than young subjects at each interval tested. The function relating tone-one duration and interval conformed to an exponential curve and is discussed with respect to an exponential decay model of the inhibitory interactions of neural systems responding to onsets and offsets of sensory events.
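The exponential relation mentioned at the end can be fit directly. Below is a minimal sketch using scipy's curve_fit with a three-parameter decay; only the six interstimulus intervals come from the abstract, while the threshold values are invented placeholders.

```python
# Minimal sketch: fitting an exponential decay to fusion thresholds.
# The fusion_ms values are placeholders, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(isi, a, tau, c):
    """Critical tone-one duration as exponential decay over interstimulus interval."""
    return a * np.exp(-isi / tau) + c

isi_ms = np.array([4.0, 8.0, 16.0, 24.0, 32.0, 40.0])         # intervals from the study
fusion_ms = np.array([160.0, 120.0, 80.0, 60.0, 50.0, 47.0])  # placeholder thresholds

params, _ = curve_fit(exp_decay, isi_ms, fusion_ms, p0=(150.0, 10.0, 45.0))
a, tau, c = params
print(f"a={a:.1f} ms, tau={tau:.1f} ms, asymptote={c:.1f} ms")
```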

6.
Autism is characterized by varying degrees of disorders in language, communication and imagination. What are the prospects for making sense of this heterogeneous condition? Advances in identifying phenotypes in relation to subgroups within autism, based on disproportionate language impairment, have been recently reported by Tager-Flusberg and Joseph. The symptom severity of these subgroups requires investigation for underlying deficits, such as in auditory processing. Other recent reports support the view that a deficit in auditory processing might be a key factor in autism.

7.
Several lines of evidence suggest that the human brain contains special-purpose machinery for processing information about visual scenes. In particular, a region in medial occipitotemporal cortex, the “parahippocampal place area” (PPA), represents the geometric structure of scenes as defined primarily by their background elements. Neuroimaging studies have demonstrated that the PPA responds preferentially to scenes but not to the objects within them, while neuropsychological studies have shown that damage to this region leads to an impaired ability to learn new scenes. More recent evidence suggests that the PPA encodes novel scenes in a viewpoint-specific manner and that these codes are more reliable in good navigators than in bad navigators. The PPA may be part of a larger network of regions involved in processing navigationally relevant spatial information. The role of this region in place recognition and gist comprehension is also discussed.

8.
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that were language-familiar (lexical and meaningful) in Hungarian but language-unfamiliar (non-lexical and meaningless) in German, and of words with the opposite characteristics. The roles of frequently presented stimuli (Standards) and infrequently presented ones (Deviants) were fully crossed. Both language-familiar and language-unfamiliar Deviants elicited the Mismatch Negativity component of the event-related brain potential. Change-detection processes differed depending on whether the Standard was language-familiar, whereas the lexicality of the Deviant had no effect; language-familiar Standards were also processed differently than language-unfamiliar ones. We suggest that pre-attentive (default) tuning to meaningful words sets up language-specific preparatory processes that affect change detection in speech sequences.
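A fully crossed Standard/Deviant design like this one can be expressed compactly. The sketch below generates oddball label sequences and swaps the Standard and Deviant roles across blocks; the deviant probability, trial count, and word tokens are placeholder assumptions, not the study's materials.

```python
# Minimal sketch: oddball sequences with Standard and Deviant roles fully crossed.
# Probabilities, trial counts, and word tokens are illustrative assumptions.
import numpy as np

def oddball_sequence(standard, deviant, n_trials=400, p_deviant=0.1, seed=0):
    """Random sequence of stimulus labels with rare Deviants among Standards."""
    rng = np.random.default_rng(seed)
    return np.where(rng.random(n_trials) < p_deviant, deviant, standard).tolist()

words = {"familiar": "szó", "unfamiliar": "Wort"}  # placeholder tokens
# Fully crossed: each word serves as Standard in one block and Deviant in another.
blocks = [oddball_sequence(words[s], words[d], seed=i)
          for i, (s, d) in enumerate([("familiar", "unfamiliar"),
                                      ("unfamiliar", "familiar")])]
```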

9.
Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are comparatively much poorer at recognizing pitch-shifted tone sequences. This apparent disparity may reflect fundamental differences in the neural mechanisms underlying the representation of sound in songbirds. Alternatively, because non-human studies have used sine-tone stimuli almost exclusively, tolerance to pitch height changes in the context of natural signals may be underestimated. Here, we show that European starlings, a species of songbird, can maintain accurate recognition of the songs of other starlings when the pitch of those songs is shifted by as much as ±40%. We observed accurate recognition even for songs pitch-shifted well outside the range of frequencies used during training, and even though much smaller pitch shifts in conspecific songs are easily detected. With similar training using human piano melodies, recognition of the pitch-shifted melodies is very limited. These results demonstrate that non-human pitch processing is more flexible than previously thought and that the flexibility in pitch processing strategy is stimulus dependent.  相似文献   
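Pitch shifts of the kind described (e.g., ±40%) are commonly produced with a phase vocoder. A minimal sketch using librosa follows, converting the frequency ratio to semitones; the audio file name is a placeholder, and this is not necessarily the procedure used in the study.

```python
# Minimal sketch: +/-40% pitch shift via librosa's phase-vocoder pitch shifter.
# The file name is a placeholder; this is an illustration, not the study's method.
import numpy as np
import librosa

y, sr = librosa.load("starling_song.wav", sr=None)  # placeholder file

ratio = 1.40                      # +40% shift in fundamental frequency
n_steps = 12 * np.log2(ratio)     # ~5.8 semitones
up = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-n_steps)
```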

12.
Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent evidence suggests it is affected by attention. In Experiments 1 and 2, it is shown that the effect of attention is not a general suppression of streaming on an unattended side of the ascending auditory pathway or in unattended frequency regions. Experiments 3 and 4 investigate the effect on streaming of physical gaps in the sequence and of brief switches in attention away from a sequence. The results demonstrate that after even short gaps or brief switches in attention, streaming is reset. The implications are discussed, and a hierarchical decomposition model is proposed.
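For readers unfamiliar with streaming paradigms, the sketch below generates a classic ABA- triplet sequence of the kind often used in this literature (a larger A/B frequency separation promotes hearing two streams). The exact stimuli of these experiments may differ, and all parameter values are assumptions.

```python
# Minimal sketch: a van Noorden-style ABA- triplet sequence for stream segregation.
# Frequencies, durations, and triplet count are illustrative assumptions.
import numpy as np

def tone(freq, dur=0.1, sr=44100):
    """Hann-windowed sinusoid (windowing avoids onset/offset clicks)."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def aba_sequence(f_a=500.0, semitone_sep=6, n_triplets=20, sr=44100):
    """A-B-A-silence triplets; larger A/B separation promotes segregation."""
    f_b = f_a * 2 ** (semitone_sep / 12)
    silence = np.zeros(int(0.1 * sr))
    triplet = np.concatenate([tone(f_a, sr=sr), tone(f_b, sr=sr),
                              tone(f_a, sr=sr), silence])
    return np.tile(triplet, n_triplets)

seq = aba_sequence()
```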

13.
Recent evidence suggests that spatial frequency (SF) processing of simple and complex visual patterns is flexible. The use of spatial scale in scene perception seems to be influenced by people's expectations. However, as yet there is no direct evidence for top-down attentional effects on flexible scale use in scene perception. In two experiments we provide such evidence. We presented participants with low- and high-pass SF-filtered scenes and cued their attention to the relevant scale. In Experiment 1 we subsequently presented them with hybrid scenes (both low- and high-pass scenes present). We observed that participants reported detecting the cued component of the hybrids. To explore whether this might be due to decision biases, in Experiment 2 we replaced hybrids with images containing meaningful scenes at uncued SFs and noise at the cued SFs (invalid cueing). We found that participants performed poorly on invalid cueing trials. These findings are consistent with top-down attentional modulation of early spatial frequency processing in scene perception.
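Hybrid scenes of the sort used in Experiment 1 combine the low spatial frequencies of one image with the high spatial frequencies of another. A minimal sketch follows, using Gaussian filtering; the cutoff (sigma) and the placeholder images are assumptions, not the study's stimuli.

```python
# Minimal sketch: a hybrid image from low-pass and high-pass filtered scenes.
# Sigma and the random placeholder "scenes" are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid(scene_low, scene_high, sigma=6.0):
    """Low spatial frequencies of one scene plus high frequencies of another."""
    low = gaussian_filter(scene_low, sigma)                   # low-pass component
    high = scene_high - gaussian_filter(scene_high, sigma)    # high-pass residual
    return low + high

rng = np.random.default_rng(0)
scene_a = rng.random((256, 256))  # placeholder grayscale scenes
scene_b = rng.random((256, 256))
img = hybrid(scene_a, scene_b)
```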

14.
A simple demonstration of auditory top-down processing is described in which one speech stream embedded in several others becomes much clearer when participants read the text of the target speech at the same time as they hear it.

16.
Subjects were required to perform perceptual tasks when stimuli were presented simultaneously in the auditory and tactile modalities and when they were presented in one of the modalities alone. The results indicated that when the demands on cognitive processes are small, auditory and tactile stimuli presented simultaneously can be processed as well as when stimuli are presented in only one modality. In a task which required a large amount of cognitive processing, it became difficult for subjects to maintain high levels of performance in both modalities and the distribution of attention became an important determinant of performance. The data were consistent with a theory that cognitive, but not perceptual, processing is disrupted when subjects have difficulty performing two perceptual tasks simultaneously.

17.
Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and greater sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory-visual facilitation is specific to properties of natural, dynamic speech gestures was partially supported.

18.
A listener presented with two speech signals must at times sacrifice the processing of one signal in order to understand the other. This study was designed to distinguish costs related to interference from a second signal (selective attention) from costs related to performing two tasks simultaneously (divided attention). Listeners presented with two processed speech-in-noise stimuli, one to each ear, either (1) identified keywords in both or (2) identified keywords in one and detected the presence of speech in the other. Listeners either knew which ear to report in advance (single task) or were cued afterward (partial-report dual task). When the dual task required two identification judgments, performance suffered relative to the single-task condition (as measured by percent correct judgments). Two different tasks (identification for one stimulus and detection for the other) resulted in much smaller reductions in performance when the cue came afterward. We concluded that the degree to which listeners can simultaneously process dichotic speech stimuli seems to depend not only on the amount of interference between the two stimuli, but also on whether there is competition for limited processing resources. We suggest several specific hypotheses as to the structural mechanisms that could constitute these limited resources.

19.
In five experiments, participants made speeded target/nontarget classification responses to singly presented auditory stimuli. Stimuli were defined via vocal identity and location in Experiments 1 and 2 and frequency and location in the remaining experiments. Performance was examined in two conditions inspired by visual search: In the feature condition, responses were based on the detection of unique stimulus features; in the conjunction condition, unique combinations of features were critical. Experiment 1 showed a conjunction benefit, since classifications were faster in the conjunction condition than in the feature condition. Potential confounds were eliminated in Experiments 2 and 3, which resulted in the observation of conjunction costs. In Experiments 4 and 5, we examined, respectively, whether the cost could be explained in terms of differences in interstimulus similarity and target template complexity across the main conditions. Both accounts were refuted. It seems that when the identification of particular feature combinations is necessary, conjunction processing in audition becomes an effortful process.

20.
Participants made speeded target-nontarget responses to singly presented auditory stimuli in 2 tasks. In within-dimension conditions, participants listened for either of 2 target features taken from the same dimension; in between-dimensions conditions, the target features were taken from different dimensions. Judgments were based on the presence or absence of either target feature. Speech sounds, defined relative to sound identity and locale, were used in Experiment 1, whereas tones, comprising pitch and locale components, were used in Experiments 2 and 3. In all cases, participants performed better when the target features were taken from the same dimension than when they were taken from different dimensions. Data suggest that the auditory and visual systems exhibit the same higher level processing constraints.
