Similar Literature
20 similar records found.
1.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams and provide new information about the way that the auditory world is parsed.

2.
The role of attention in the formation of auditory streams
There is controversy over whether stream segregation is an attention-dependent process. Part of the argument is related to the initial formation of auditory streams. It has been suggested that attention is needed only to form the streams, but not to maintain them once they have been segregated. The question of whether covert attention at the beginning of a to-be-ignored set of sounds will be enough to initiate the segregation process remains open. Here, we investigate this question by (1) using a methodology that does not require the participant to make an overt response to assess how the unattended sounds are organized and (2) structuring the test sound sequence to account for the covert attention explanation. The results of four experiments provide evidence to support the view that attention is not always required for the formation of auditory streams.

3.
In vision, it is well established that the perceptual load of a relevant task determines the extent to which irrelevant distractors are processed. Much less research has addressed the effects of perceptual load within hearing. Here, we provide an extensive test using two different perceptual load manipulations, measuring distractor processing through response competition and awareness report. Across four experiments, we consistently failed to find support for the role of perceptual load in auditory selective attention. We therefore propose that the auditory system, although able to selectively focus processing on a relevant stream of sounds, is likely to have surplus capacity to process auditory information from other streams, regardless of the perceptual load in the attended stream. This accords well with the notion of the auditory modality acting as an ‘early-warning’ system, as detection of changes in the auditory scene is crucial even when the perceptual demands of the relevant task are high.

4.
Temporal-cuing studies show faster responding to stimuli at an attended versus unattended time point. Whether the mechanisms involved in this temporal orienting of attention are located early or late in the processing stream has not been answered unequivocally. To address this question, we measured event-related potentials in two versions of an auditory temporal cuing task: Stimuli at the uncued time point either required a response (Experiment 1) or did not (Experiment 2). In both tasks, attention was oriented to the cued time point, but attention could be selectively focused on the cued time point only in Experiment 2. In both experiments, temporal orienting was associated with a late positivity in the time range of the P3. An early enhancement in the time range of the auditory N1 was observed only in Experiment 2. Thus, temporal attention improves auditory processing at early sensory levels only when it can be focused selectively.

5.
The ability to selectively attend to an auditory stimulus appears to decline with age and may result from losses in the ability to inhibit the processing of irrelevant stimuli (i.e., the inhibitory deficit hypothesis; L. Hasher & R. T. Zacks, 1988). It is also possible that declines in the ability to selectively attend are a result of age-related hearing losses. Three experiments examined whether older and younger adults differed in their ability to inhibit the processing of distracting stimuli when the listening situation was adjusted to correct for individual differences in hearing. In all 3 experiments, younger and older adults were equally affected by irrelevant stimuli, unattended stimuli, or both. The implications for auditory attention research and for possible differences between auditory and visual processing are discussed.

6.
The notion of automatic syntactic analysis received support from some event-related potential (ERP) studies. However, none of these studies tested syntax processing in the presence of a concurrent speech stream. Here we present two concurrent continuous speech streams, manipulating two variables potentially affecting speech processing in a fully crossed design: attention (focused vs. divided) and task (lexical: detecting numerals vs. syntactic: detecting syntactic violations). ERPs elicited by syntactic violations and numerals as targets were compared with those for distractors (task-relevant events in the unattended speech stream) and attended and unattended task-irrelevant events. As was expected, only target numerals elicited the N2b and P3 components. The amplitudes of these components did not significantly differ between focused and divided attention. Both task-relevant and task-irrelevant syntactic violations elicited the N400 ERP component within the attended but not in the unattended speech stream. P600 was only elicited by target syntactic violations. These results provide no support for the notion of automatic syntactic analysis. Rather, it appears that task-relevance is a prerequisite of P600 elicitation, implying that in-depth syntactic analysis occurs only for attended speech under everyday listening situations.

7.
Toward a neurophysiological theory of auditory stream segregation
Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming. On the basis of these studies, segregation of sounds is likely to occur beginning in the auditory periphery and continuing at least to primary auditory cortex for simple cues such as pure-tone frequency but at stages as high as secondary auditory cortex for more complex cues such as periodicity pitch. Attention-dependent and perception-dependent processes are likely to take place in primary or secondary auditory cortex and may also involve higher level areas outside of auditory cortex. Topographic maps of acoustic attributes, stimulus-specific suppression, and competition between representations are among the neurophysiological mechanisms that likely contribute to streaming. A framework for future research is proposed.
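To make the streaming phenomenon reviewed above concrete, here is a minimal Python sketch (assuming numpy is available; all tone parameters are illustrative, not taken from the reviewed studies) that synthesizes the classic ABA- triplet sequence used in many streaming experiments. With a small A-B frequency separation the triplets tend to be heard as a single galloping stream, while a large separation favors two segregated streams.

```python
import numpy as np

def tone(freq_hz, dur_s, sr=44100, amp=0.5):
    """Pure tone with 5 ms raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    x = amp * np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(0.005 * sr)
    env = np.ones_like(x)
    env[:n_ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[-n_ramp:] = env[:n_ramp][::-1]
    return x * env

def aba_sequence(f_a=500.0, gap_semitones=7, n_triplets=20,
                 tone_dur=0.1, sr=44100):
    """ABA- triplets: A tone, B tone, A tone, silence. A larger
    gap_semitones makes splitting into two perceived streams more likely."""
    f_b = f_a * 2 ** (gap_semitones / 12)
    silence = np.zeros(int(tone_dur * sr))
    triplet = np.concatenate([tone(f_a, tone_dur, sr),
                              tone(f_b, tone_dur, sr),
                              tone(f_a, tone_dur, sr),
                              silence])
    return np.tile(triplet, n_triplets)

sequence = aba_sequence()  # e.g., write to a WAV file at sr=44100 to listen
```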

8.
Two pairs of experiments studied the effects of attention and of unilateral neglect on auditory streaming. The first pair showed that the buildup of auditory streaming in normal participants is greatly reduced or absent when they attend to a competing task in the contralateral ear. It was concluded that the effective buildup of streaming depends on attention. The second pair showed that patients with an attentional deficit toward the left side of space (unilateral neglect) show less stream segregation of tone sequences presented to their left than to their right ears. Streaming in their right ears was similar to that for stimuli presented to either ear of healthy and of brain-damaged controls, who showed no across-ear asymmetry. This result is consistent with an effect of attention on streaming, constrains the neural sites involved, and reveals a qualitative difference between the perception of left- and right-sided sounds by neglect patients.

9.
Unexpected auditory stimuli are potent distractors, able to break through selective attention and disrupt performance in an unrelated visual task. This study examined the processing fate of novel sounds by examining the extent to which their semantic content is analyzed and whether the outcome of this processing can impact on subsequent behavior. This issue was investigated across five laboratory experiments in which participants categorized visual left and right arrows while instructed to ignore irrelevant sounds. The results showed that auditory novels that were incongruent with the visual target (e.g., the word "left" presented before a right arrow) disrupted performance over and above congruent novels (semantic effect), while both types of novels delayed responses in the visual task compared to a standard sound (novelty effect). No semantic effect was observed for congruent and incongruent standards, suggesting that novelty detection is necessary for involuntary semantic processing to unravel. While the novelty effect grew as the difference between novels and the standard increased, the semantic effect was immune to this variation. Furthermore, the novelty effect decreased across the task while the semantic effect did not. A general cognitive framework is proposed encompassing these new findings and previous work in an attempt to account for the behavioral impact of irrelevant auditory novels on primary task performance.

10.
Often, the sound arriving at the ears is a mixture from many different sources, but only 1 is of interest. To assist with selection, the auditory system structures the incoming input into streams, each of which ideally corresponds to a single source. Some authors have argued that this process of streaming is automatic and invariant, but recent evidence suggests it is affected by attention. In Experiments 1 and 2, it is shown that the effect of attention is not a general suppression of streaming on an unattended side of the ascending auditory pathway or in unattended frequency regions. Experiments 3 and 4 investigate the effect on streaming of physical gaps in the sequence and of brief switches in attention away from a sequence. The results demonstrate that after even short gaps or brief switches in attention, streaming is reset. The implications are discussed, and a hierarchical decomposition model is proposed.

11.
When auditory material segregates into "streams," is the unattended stream actually organized as an entity? An affirmative answer is suggested by the observation that the organizational structure of the unattended material interacts with the structure of material to which the subject is trying to attend. Specifically, a to-be-rejected stream can, because of its structure, capture from a to-be-judged stream elements that would otherwise be acceptable members of the to-be-judged stream.

12.
In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task at the fixated location than when they merely fixated it, but no such increase was found for a less demanding visual task. The implications for cross-modal links in spatial attention are discussed.

13.
Three experiments attempted to clarify the effect of altering the spatial presentation of irrelevant auditory information. Previous research using serial recall tasks demonstrated a left-ear disadvantage for the presentation of irrelevant sounds. Experiments 1 and 2 examined the effects of manipulating the location of irrelevant sound on either a mental arithmetic task or a missing-item task. Experiment 3 altered the amount of change in the irrelevant stream to assess how this affected the level of interference elicited. Two prerequisites appear necessary to produce the left-ear disadvantage: the presence of ordered structural changes in the irrelevant sound and the requirement for serial order processing of the attended information. The existence of a left-ear disadvantage highlights the role of the right hemisphere in the obligatory processing of auditory information.

14.
Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore, the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception, can be enhanced through an implicit task-irrelevant learning procedure that has been shown to produce visual perceptual learning. The single-formant sounds were paired at subthreshold levels with the attended targets in an auditory identification task. Results showed that task-irrelevant learning occurred for the unattended stimuli. Surprisingly, the magnitude of this learning effect was similar to that following explicit training on auditory formant transition detection using discriminable stimuli in an adaptive procedure, whereas explicit training on the subthreshold stimuli produced no learning. These results suggest that in adults, learning of speech parts can occur at least partially through implicit mechanisms.
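As a rough illustration of the kind of stimulus this abstract refers to, the sketch below synthesizes a simple linear frequency glide as a stand-in for a single formant transition. This is only an approximation: the start and end frequencies, duration, and amplitude are assumptions chosen for demonstration, not the parameters used in the study.

```python
import numpy as np

def formant_transition(f_start=1000.0, f_end=2000.0, dur_s=0.05, sr=44100):
    """Linear frequency glide: instantaneous frequency moves from f_start
    to f_end over dur_s seconds; phase is the running integral of frequency."""
    t = np.arange(int(dur_s * sr)) / sr
    f_inst = f_start + (f_end - f_start) * (t / dur_s)
    phase = 2 * np.pi * np.cumsum(f_inst) / sr
    return 0.5 * np.sin(phase)

glide = formant_transition()  # attenuate toward threshold for subthreshold pairing
```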

15.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (Stimulus Onset Asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that, in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

16.
Buchan JN, Munhall KG. Perception, 2011, 40(10): 1164-1182
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (Experiment 1), and the spatial frequency information present in the video (Experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

17.
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect of naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɒg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
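For readers less familiar with the signal detection theory measures mentioned here, this minimal Python sketch shows the standard computation of sensitivity (d') and response criterion (c) from detection counts. The log-linear correction and the example numbers are illustrative assumptions, not values from the study.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate); c = -(z_hit + z_fa) / 2.
    A log-linear correction keeps rates of 0 or 1 off the z-score asymptotes."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical counts: 40 hits, 10 misses, 12 false alarms, 38 correct rejections
d_prime, criterion = sdt_measures(40, 10, 12, 38)
```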

18.
Unexpected events often distract us. In the laboratory, novel auditory stimuli have been shown to capture attention away from a focal visual task and yield specific electrophysiological responses as well as a behavioral cost to performance. Distraction is thought to follow ineluctably from the sound’s low probability of occurrence or, put more simply, its unexpected occurrence. Our study challenges this view with respect to behavioral distraction and argues that past research failed to identify the informational value of sound as a mediator of novelty distraction. We report an experiment showing that (1) behavioral novelty distraction is observed only when the sound announces the occurrence and timing of an upcoming visual target (as is the case in all past research); (2) no such distraction is observed for deviant sounds conveying no such information; and (3) deviant sounds can actually facilitate performance when these, but not the standards, convey information. We conclude that behavioral novelty distraction, as observed in oddball tasks, occurs in the presence of novel sounds but only when the cognitive system can take advantage of the auditory distracters to optimize performance.

19.
Young adult subjects attended selectively to brief noise bursts delivered in free field via a horizontal array of seven loudspeakers spaced apart by 9° of angle. Frequent “standard” stimuli (90%) and infrequent “target/deviant” stimuli (10%) of increased bandwidth were delivered at a fast rate in a random sequence equiprobably from each speaker. In separate runs, the subjects’ task was to selectively attend to the leftmost, center, or rightmost speaker and to press a button to the infrequent “target” stimuli occurring at the designated spatial location. Behavioral detection rates and concurrently recorded event-related potentials (ERPs) indicated that auditory attention was deployed as a finely tuned gradient around the attended sound source, thus providing support for gradient models of auditory spatial attention. Furthermore, the ERP data suggested that the spatial focusing of attention was achieved in two distinct stages, with an early more broadly tuned filtering of inputs occurring over the first 80–200 msec after stimulus onset, followed by a more narrowly focused selection of attended-location deviants that began at around 250 msec and closely resembled the behavioral gradient of target detections.
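The trial structure described above (seven equiprobable loudspeakers, 90% standards, 10% broader-band deviants) can be sketched as a simple generator. This Python snippet is a minimal illustration under those stated proportions; the trial count and random seed are arbitrary assumptions.

```python
import random

def oddball_trials(n_trials=700, n_speakers=7, p_deviant=0.10, seed=1):
    """Each trial: a speaker index drawn equiprobably (0 = leftmost,
    n_speakers - 1 = rightmost) and a stimulus type drawn 90/10."""
    rng = random.Random(seed)
    return [(rng.randrange(n_speakers),
             "deviant" if rng.random() < p_deviant else "standard")
            for _ in range(n_trials)]

trials = oddball_trials()
# In an attend-leftmost block, targets are deviants at speaker 0:
targets = [t for t in trials if t == (0, "deviant")]
```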

20.
The effects of auditory context on the preattentive and perceptual organization of tone sequences were investigated. Two sets of experiments were conducted in which the pitch of contextual tones was varied, bringing about two different contextual manipulations. Preattentive auditory organization was indexed by the mismatch negativity event-related potential, which is elicited by violations of auditory regularities even when participants ignore the sounds (e.g., by reading a book). The perceptual effects of the contextual manipulations on auditory grouping were assessed using target-detection and order-judgment tasks. The close correspondence found between the effects of auditory context on the perceptual and preattentive measures of auditory grouping suggests that a large part of contextual processing is preattentive.
