Similar Documents
20 similar documents found.
1.
Apparent changes in auditory scenes are often unnoticed. This change deafness phenomenon was examined in auditory scenes comprising human voices. In two experiments, listeners were required to detect changes between two auditory scenes comprising two, three, and four talkers who voiced four-syllable words. In change trials, one of the voices in the first scene was randomly selected and replaced with a new word. The rationale was that the higher stimulus familiarity conferred by human voices compared with other everyday sounds, together with encoding and memory advantages for verbal stimuli and the modular processing of speech in the auditory system, should improve change-detection efficiency, so that change deafness should not be observed when listeners are explicitly required to detect such obvious changes. Contrary to this prediction, significant change deafness was observed in the three- and four-talker conditions. This indicates that change deafness occurs even for highly familiar stimuli and suggests a limited ability to perceptually organize auditory scenes comprising even a relatively small number of voices (three or four).

2.
Ecological Psychology, 2013, 25(2), 87-110
Rising acoustic intensity can indicate movement of a sound source toward a listener. Perceptual overestimation of intensity change could provide a selective advantage by indicating that the source is closer than it actually is, providing a better opportunity for the listener to prepare for the source's arrival. In Experiment 1, listeners heard equivalent rising- and falling-level sounds and indicated whether one demonstrated a greater change in loudness than the other. In 2 subsequent experiments, listeners heard equivalent approaching and receding sounds and indicated the perceived starting and stopping points of the auditory motion. Results indicate that rising intensity changed in loudness more than equivalent falling intensity, and approaching sounds were perceived as starting and stopping closer than equidistant receding sounds. Both effects were greater for tones than for noise. Evidence is presented suggesting that an asymmetry in the neural coding of egocentric auditory motion is an adaptation that provides advance warning of looming acoustic sources.
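As a point of reference for the intensity changes described above, a point source in a free field loses about 6 dB per doubling of distance, so an approaching source produces a rising level that follows 20·log10 of the distance ratio. The sketch below is illustrative only; the distances, reference level, and linear-approach trajectory are assumptions, not stimulus parameters from the study.

```python
import numpy as np

def looming_level_db(d_start, d_end, duration, fs=1000, ref_level_db=60.0, ref_dist=1.0):
    """Sound level over time for a point source approaching at constant speed.

    Assumes free-field inverse-square spreading: the level drops by
    20*log10(distance ratio), i.e. 6 dB per doubling of distance.
    ref_level_db is the level at ref_dist (all values are placeholders).
    """
    t = np.arange(0, duration, 1.0 / fs)
    d = d_start + (d_end - d_start) * t / duration   # linear approach
    return ref_level_db - 20.0 * np.log10(d / ref_dist)

# A source approaching from 8 m to 1 m rises by roughly 18 dB;
# the matched receding sound is the time-reversed trajectory.
levels = looming_level_db(8.0, 1.0, 1.0)
print(levels[0], levels[-1])   # ~41.9 dB rising to ~60 dB
```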

3.
Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early cues and then updated when more information arises. These studies have uniformly shown evidence for immediate integration for a variety of phonetic distinctions. We attempted to extend this to fricatives, a class of speech sounds that requires not only temporal integration of asynchronous cues (the frication, followed by the formant transitions 150–350 ms later), but also integration across different frequency bands and compensation for contextual factors like coarticulation. Eye movements in the visual world paradigm showed clear evidence for a memory buffer. Results were replicated in five experiments, ruling out methodological factors and tying the release of the buffer to the onset of the vowel. These findings support a general auditory account for speech by suggesting that the acoustic nature of particular speech sounds may have large effects on how they are processed. They also have major implications for theories of auditory and speech perception by raising the possibility of an encapsulated memory buffer in early auditory processing.

4.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (Stimulus Onset Asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth, left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory IOR is not dependent on either eye movements or saccade programming to sound locations.
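For readers unfamiliar with how cueing effects of this kind are quantified, the sketch below computes a cueing effect at each SOA as the mean RT difference between different-location and same-location trials; positive values correspond to facilitation and negative values to inhibition of return. The reaction times are entirely hypothetical and are not data from the study.

```python
import numpy as np

# Hypothetical localization RTs (ms); the structure mirrors the design:
# cue and target at the same vs. different locations, at two SOAs.
rt = {
    (100, "same"): [412, 398, 405],
    (100, "different"): [431, 440, 428],
    (700, "same"): [455, 462, 449],
    (700, "different"): [430, 437, 426],
}

for soa in (100, 700):
    effect = np.mean(rt[(soa, "different")]) - np.mean(rt[(soa, "same")])
    # Positive values: facilitation (faster at the cued location);
    # negative values: inhibition of return (slower at the cued location).
    print(f"SOA {soa} ms: cueing effect = {effect:+.1f} ms")
```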

5.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

6.
Reflected sounds are often treated as an acoustic problem because they produce false localization cues and decrease speech intelligibility. However, their properties are shaped by the acoustic properties of the environment and therefore are a potential source of information about that environment. The objective of this study was to determine whether information carried by reflected sounds can be used by listeners to enhance their awareness of their auditory environment. Twelve listeners participated in two auditory training tasks in which they learned to identify three environments based on a limited subset of sounds and then were tested to determine whether they could transfer that learning to new, unfamiliar sounds. Results showed that significant learning occurred despite the task difficulty. An analysis of stimulus attributes suggests that it is easiest to learn to identify reflected sound when it occurs in sounds with longer decay times and broadly distributed dominant spectral components.
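The "decay time" attribute mentioned above is the property classically summarized by a room's reverberation time, which is set by the environment itself; that is what makes reflected sound informative about the space. As a rough illustration (the rooms and numbers below are hypothetical and not from the study), Sabine's formula relates decay time to the volume and total absorption of an environment:

```python
def sabine_rt60(volume_m3, absorption_area_m2):
    """Sabine's reverberation time: RT60 = 0.161 * V / A (seconds).

    V is the room volume (m^3) and A the total absorption (m^2 sabins).
    Illustrative only: it shows how the decay of reflected sound is
    determined by properties of the environment.
    """
    return 0.161 * volume_m3 / absorption_area_m2

# A large, hard-walled hall decays far more slowly than a small furnished room.
print(sabine_rt60(4000, 400))   # ~1.6 s
print(sabine_rt60(60, 30))      # ~0.3 s
```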

7.
Left-hemisphere (LH) superiority for speech perception is a fundamental neurocognitive aspect of language, and is particularly strong for consonant perception. Two key theoretical aspects of the LH advantage for consonants remain controversial, however: the processing mode (auditory vs. linguistic) and the developmental basis of the specialization (innate vs. experience dependent). Click consonants offer a unique opportunity to evaluate these theoretical issues. Brief and spectrally complex, oral clicks exemplify the acoustic properties that have been proposed for an auditorily based LH specialization, yet they retain linguistic significance only for listeners whose languages employ them as consonants (e.g., Zulu). Speakers of other languages (e.g., English) perceive these clicks as nonspeech sounds. We assessed Zulu versus English listeners' hemispheric asymmetries for clicks, in and out of syllable context, in a dichotic-listening task. Performance was good for both groups, but only Zulus showed an LH advantage. Thus, linguistic processing and experience both appear to be crucial.

8.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

9.
Five experiments on the identifiability of synthetic vowels masked by wideband sounds are reported. In each experiment, identification thresholds (signal/masker ratios, in decibels) were measured for two versions of four vowels: a vibrated version, in which F0 varied sinusoidally around 100 Hz; and a steady version, in which F0 was fixed at 100 Hz. The first three experiments were performed on naive subjects. Experiment 1 showed that for maskers consisting of bursts of pink noise, vibrato had no effect on thresholds. In Experiment 2, where the maskers were periodic pulse trains with an F0 randomly varied between 120 and 140 Hz from trial to trial, vibrato slightly improved thresholds when the sound pressure level of the maskers was 40 dB, but had no effect for 65-dB maskers. In Experiment 3, vibrated rather than steady pulse trains were used as maskers; when these maskers were at 40 dB, the vibrated versions of the vowels were slightly less identifiable than their steady versions; but, as in Experiment 2, vibrato had no effect when the maskers were at 65 dB. Experiment 4 showed that the unmasking effect of vibrato found in Experiment 2 disappeared in subjects trained in the identification task. Finally, Experiment 5 indicated that in trained listeners, vibrato had no influence on identification performance even when the maskers and the vowels had synchronous onsets and offsets. We conclude that vibrating a vowel masked by a wideband sound can affect its identification threshold, but only for tonal maskers and in untrained listeners. This effect of vibrato should probably be considered as a Gestalt phenomenon originating from central auditory mechanisms.
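As an illustration of the vibrated stimuli described above, the sketch below generates a sinusoidal F0 contour centred on 100 Hz. The vibrato depth and rate are placeholder values: the abstract specifies only that F0 varied sinusoidally around 100 Hz.

```python
import numpy as np

def vibrated_f0(duration, fs=1000, f0=100.0, depth_hz=5.0, rate_hz=6.0):
    """F0 contour that varies sinusoidally around a 100 Hz centre frequency.

    depth_hz and rate_hz are illustrative assumptions, not parameters
    reported in the study.
    """
    t = np.arange(0, duration, 1.0 / fs)
    return f0 + depth_hz * np.sin(2 * np.pi * rate_hz * t)

contour = vibrated_f0(1.0)            # contour for a 1-s vowel
print(contour.min(), contour.max())   # ~95 Hz to ~105 Hz
```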

10.
To make the electroencephalogram (EEG) recording procedure more tolerable, listeners have been allowed in some experiments to watch an audible video while their auditory P1, N1, P2, and mismatch negativity (MMN) event-related potentials (ERPs) to experimental sounds have been measured. However, video sounds may degrade auditory ERPs to experimental sounds. This concern was tested with 19 adults who were instructed to ignore standard and deviant tones presented through headphones while they watched a video with the soundtrack audible in one condition and silent in the other. Video sound impaired the size, latency, and split-half reliability of the MMN, and it decreased the size of the P2. However, it had little effect on the P1 or N1 or on the split-half reliability of the P1-N1-P2 waveform, which was significantly more reliable than the MMN waveform regardless of whether the video sound was on or off. The impressive reliability of the P1 and N1 components allows video sound to be used during EEG recording, and these components may prove useful for assessing auditory processing in listeners who cannot tolerate long testing sessions.
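For readers unfamiliar with the reliability metric used above, split-half reliability of an averaged ERP is commonly computed by averaging odd and even trials separately, correlating the two half-averages, and applying the Spearman-Brown correction. The sketch below shows this standard procedure on simulated epochs; the exact computation used in the study may differ.

```python
import numpy as np

def split_half_reliability(trials):
    """Split-half reliability of an averaged waveform, Spearman-Brown corrected.

    trials: 2-D array (n_trials x n_timepoints) of single-trial epochs.
    Odd and even trials are averaged separately, the two averages are
    correlated, and the correlation is stepped up with r' = 2r / (1 + r).
    """
    odd = trials[0::2].mean(axis=0)
    even = trials[1::2].mean(axis=0)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Example with simulated epochs: a common ERP-like signal plus noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 200))
epochs = signal + rng.normal(0, 1.0, size=(100, 200))
print(split_half_reliability(epochs))
```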

11.
Previous probe-signal studies of auditory spatial attention have shown faster responses to sounds at an expected versus an unexpected location, making no distinction between the use of interaural time difference (ITD) cues and interaural level difference cues. In 5 experiments, performance on a same-different spatial discrimination task was used in place of the reaction time metric, and sounds, presented over headphones, were lateralized only by an ITD. In all experiments, performance was better for signals lateralized on the expected side of the head, supporting the conclusion that ITDs can be used as a basis for covert orienting. The performance advantage generalized to all sounds within the spatial focus and was not dissipated by a trial-by-trial rove in frequency or by a rove in spectral profile. Successful use by the listeners of a cross-modal, centrally positioned visual cue provided evidence for top-down attentional control.
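As background on the ITD cue manipulated above, a common spherical-head (Woodworth) approximation relates source azimuth to the interaural time difference. The sketch below is illustrative only; the head radius and speed of sound are typical textbook values, not parameters from the study.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference for a distant source (Woodworth model).

    ITD = (a / c) * (theta + sin(theta)), with theta the azimuth in radians
    (frontal quadrant). Head radius and speed of sound are assumed values.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# An ITD imposed over headphones lateralizes the sound toward the leading ear;
# 90 degrees azimuth corresponds to roughly 0.65 ms with these values.
print(woodworth_itd(90) * 1000)   # ~0.66 ms
```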

12.
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

13.
Localization of low-pass sounds was tested in relation to aspects of Wallach's (1939, 1940) hypotheses about the role of head movement in front/back and elevation discrimination. With a 3-sec signal, free movement of the head offered only a small advantage over a single rotation through 45° for detecting elevation differences. Very slight rotation, as observed using a 0.5-sec signal, seemed sufficient to prevent front/back confusion. Cluster analysis showed that, in detecting elevation, some listeners benefited from rotation, some benefited from natural movement, and some from both. Evidence was found indicating that a moving auditory system generates information for the whereabouts of sounds, even when the movement does not result in the listener facing the source. Results offer significant if partial support for Wallach's hypotheses.

14.
The purpose of the present study was to examine the nature of auditory representations by manipulating the semantic and physical relationships between auditory objects. On each trial, listeners heard a group of four simultaneous sounds for 1 sec, followed by 350 msec of noise, and then either the same sounds or three of the same plus a new one. Listeners completed a change-detection task and an object-encoding task. For change detection, listeners made a same-different judgment for the two groups of sounds. Object encoding was measured by presenting probe sounds that either were or were not present in the two groups. In Experiments 1 and 3, changing the target to an object that was acoustically different from but semantically the same as the original target resulted in more errors on both tasks than when the target changed to an acoustically and semantically different object. In Experiment 2, comparison of semantic and acoustic effects demonstrated that acoustics provide a weaker cue than semantics for both change detection and object encoding. The results suggest that listeners rely more on semantic information than on physical detail.

15.
Phonological working memory is known to be (a) inversely related to the duration of the items to be learned (word-length effect), and (b) impaired by the presence of irrelevant speech-like sounds (irrelevant-speech effect). Because it remains controversial whether these memory disruptions are subject to attentional control, both effects were studied in sighted participants and in a sample of early blind individuals, who are expected to be superior in selectively attending to auditory stimuli. Results show that, while performance depended on word length in both groups, irrelevant speech interfered with recall only in the sighted group, but not in blind participants. This suggests that blind listeners may be able to effectively prevent irrelevant sound from being encoded in the phonological store, presumably due to superior auditory processing. The occurrence of a word-length effect, however, implies that blind and sighted listeners utilize the same phonological rehearsal mechanism in order to maintain information in the phonological store.

16.
The importance of selecting between a target and a distractor in producing auditory negative priming was examined in three experiments. In Experiment 1, participants were presented with a prime pair of sounds, followed by a probe pair of sounds. For each pair, listeners were to identify the sound presented to the left ear. Under these conditions, participants were especially slow to identify a sound in the probe pair if it had been ignored in the preceding prime pair. Evidence of auditory negative priming was also apparent when the prime sound was presented in isolation to only one ear (Experiment 2) and when the probe target was presented in isolation to one ear (Experiment 3). In addition, the magnitude of the negative priming effect was increased substantially when only a single prime sound was presented. These results suggest that the emergence of auditory negative priming does not depend on selection between simultaneous target and distractor sounds.

17.
Three experiments investigated the nature of visuo-auditory crossmodal cueing in a triadic setting: participants had to detect an auditory signal while observing another agent's head facing one of the two laterally positioned auditory sources. Experiment 1 showed that when the agent's eyes were open, sounds originating on the side of the agent's gaze were detected faster than sounds originating on the side of the agent's visible ear; when the agent's eyes were closed this pattern of responses was reversed. Two additional experiments showed that the results were sensitive to whether participants could infer a hearing function on the part of the agent. When no ear was depicted on the agent, only a gaze-side advantage was observed (Experiment 2), but when the agent's ear was covered (Experiment 3), an ear-side advantage was observed only when hearing could still be inferred (i.e., wearing the hat) but not when hearing was inferred to be diminished (i.e., wearing a helmet). The findings are discussed in the context of inferential and simulation processes and joint attention mechanisms.

18.
Previous research has found that pictures (e.g., a picture of an elephant) are remembered better than words (e.g., the word "elephant"), an empirical finding called the picture superiority effect (Paivio & Csapo, 1973, Cognitive Psychology, 5(2), 176-206). However, very little research has investigated such memory differences for other types of sensory stimuli (e.g., sounds or odors) and their verbal labels. Four experiments compared recall of environmental sounds (e.g., ringing) and spoken verbal labels of those sounds (e.g., "ringing"). In contrast to earlier studies that have shown no difference in recall of sounds and spoken verbal labels (Philipchalk & Rowe, 1971, Journal of Experimental Psychology, 91(2), 341-343; Paivio, Philipchalk, & Rowe, 1975, Memory & Cognition, 3(6), 586-590), the experiments reported here yielded clear evidence for an auditory analog of the picture superiority effect. Experiments 1 and 2 showed that sounds were recalled better than the verbal labels of those sounds. Experiment 2 also showed that verbal labels are recalled as well as sounds when participants imagine the sound that the word labels. Experiments 3 and 4 extended these findings to incidental-processing task paradigms and showed that the advantage of sounds over words is enhanced when participants are induced to label the sounds.

19.
The irrelevant sound effect (ISE) typically refers to a disruptive effect of a to-be-ignored sound in serial recall tasks, where lists of visually presented items (digits and letters) must be recalled in serial order. Although extensively studied in adults, studies on developmental aspects of the ISE are scarce. The present study aims to increase our understanding of developmental changes of auditory distraction in children beyond serial recall. Two tasks (i.e., word categorization and evaluation of simple mathematical equations) were designed to test retrieval from semantic memory. Proportion correct and reaction times (adjusted for speed–accuracy tradeoff) were measured in 8–9 and 12–13-year-olds. Results revealed a developmental change in the susceptibility to auditory distraction. Whereas older children were not affected by background sounds, younger children showed impairment in both proportion correct and adjusted reaction times. Overall, results suggest that attention distraction and immature attention control mechanisms contribute to ISEs in young children.
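The abstract does not state which speed-accuracy adjustment was applied. One common choice is the inverse efficiency score (mean RT divided by proportion correct), shown below purely as an illustration of how such adjusted reaction times can be derived; it is not necessarily the measure used in this study, and the numbers are hypothetical.

```python
def inverse_efficiency(mean_rt_ms, proportion_correct):
    """Inverse efficiency score (IES): mean RT divided by proportion correct.

    IES is one common way to fold accuracy into a reaction-time measure;
    higher values indicate poorer, less efficient performance.
    """
    return mean_rt_ms / proportion_correct

# A child who is fast but error-prone and one who is slower but accurate
# can end up with similar adjusted scores.
print(inverse_efficiency(850, 0.80))   # 1062.5
print(inverse_efficiency(1000, 0.95))  # ~1052.6
```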

20.
An important question is the extent to which declines in memory over time are due to passive loss or active interference from other stimuli. The purpose of the present study was to determine the extent to which implicit memory effects in the perceptual organization of sound sequences are subject to loss and interference. Toward this aim, we took advantage of two recently discovered context effects in the perceptual judgments of sound patterns, one that depends on stimulus features of previous sounds and one that depends on the previous perceptual organization of these sounds. The experiments measured how listeners' perceptual organization of a tone sequence (test) was influenced by the frequency separation, or the perceptual organization, of the two preceding sequences (context1 and context2). The results demonstrated clear evidence for loss of context effects over time but little evidence for interference. However, they also revealed that context effects can be surprisingly persistent. The robust effects of loss, followed by persistence, were similar for the two types of context effects. We discuss whether the same auditory memories might contain information about basic stimulus features of sounds (i.e., frequency separation), as well as the perceptual organization of these sounds.
