Similar Literature
20 similar documents found.
1.
Through hearing we learn about source events: events in which objects move or interact so that they vibrate and produce sound waves, such as when they roll, collide, or scrape together. It is often claimed that we do not simply hear sounds and infer what event caused them, but hear source events themselves, through hearing sounds. Here I investigate how the idea that we hear source events should be understood, with a focus on how hearing an event relates to hearing the objects involved in that event. I argue that whereas we see events such as rollings and collisions by seeing objects move through space, this cannot be how we hear them, and go on to examine two other possible models. On the first, we hear events but not their participant objects. On the second, to hear an event is to hear the appearance of an object to change. I argue that neither is satisfactory and endorse a third option: to hear a source event is to hear an object as extending through time.

2.
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage, of about five percentage points better accuracy, for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naive (untrained) listeners showed that this incongruency advantage (IA) is level-dependent: there is no advantage for incongruent sounds presented at Sound/Scene ratios (So/Sc) below -7.5 dB, but about five percentage points better accuracy for sounds at greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features nor semantic assessments of sound-scene congruency can account for this difference, indicating that the IA is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events under particular listening conditions.
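For reference, the Sound/Scene ratio can be computed from the RMS levels of the target sound and the background scene. A minimal Python sketch, using hypothetical synthetic signals rather than the study's materials:

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def sound_scene_ratio_db(sound, scene):
    """Sound/Scene (So/Sc) ratio in decibels."""
    return 20.0 * np.log10(rms(sound) / rms(scene))

# Hypothetical example: a target sound roughly 7.5 dB below the scene.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.10, 44100)   # 1 s of background noise at 44.1 kHz
sound = rng.normal(0.0, 0.042, 44100)  # quieter target sound
print(f"So/Sc = {sound_scene_ratio_db(sound, scene):.1f} dB")
```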

3.
Playback experiments have been a useful tool for studying the function of sounds and the relevance of different sound characteristics in signal recognition in many different species of vertebrates. However, successful playback experiments in sound-producing fish remain rare, and few studies have investigated the role of particular sound features in the encoding of information. In this study, we set up an apparatus to test the relevance of acoustic signals in males of the cichlid Metriaclima zebra. We found that territorial males responded to playbacks by increasing their territorial activity and approaching the loudspeaker during and after playback. Because sounds may be used to indicate the presence of a competitor, we modified two sound characteristics, the pulse period and the number of pulses, to investigate whether the observed behavioural response was modulated by the temporal structure of sounds recorded during aggressive interactions. Modified sounds had little or no effect on the behavioural response they elicited in territorial males, suggesting a high tolerance for variations in pulse period and number of pulses. The biological function of sounds in M. zebra and the lack of responsiveness to our temporal modifications are discussed.

4.
Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects was investigated for sounds with spectral-temporal compositions distinct from speech. Listeners identified nonspeech sounds (extensively edited samples produced by a French horn and a tenor saxophone) following either resynthesized speech or a short passage of music. Preceding contexts were “colored” by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to the relatively predictable spectral characteristics of the filtered context, differentially affecting perception of subsequent nonspeech target sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
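One plausible way to construct such a spectral envelope difference filter is to take the dB difference between the two instruments' long-term spectra and impose it as an FIR filter's frequency response. The sketch below is illustrative only and assumes this construction; it is not the authors' actual processing chain:

```python
import numpy as np
from scipy import signal

def difference_filter(horn, sax, fs, n_fft=2048, numtaps=513):
    """Design an FIR filter whose frequency response follows the
    horn-minus-saxophone long-term spectral difference (in dB), so that
    filtered audio is 'colored' toward the French horn's spectrum."""
    f, horn_psd = signal.welch(horn, fs, nperseg=n_fft)
    _, sax_psd = signal.welch(sax, fs, nperseg=n_fft)
    diff_db = 10 * np.log10(horn_psd + 1e-12) - 10 * np.log10(sax_psd + 1e-12)
    gain = 10.0 ** (diff_db / 20.0)  # dB difference -> linear amplitude gain
    return signal.firwin2(numtaps, f, gain, fs=fs)

# Hypothetical usage: color a context passage toward the horn's spectrum.
# fir = difference_filter(horn_sample, sax_sample, fs=44100)
# colored_context = signal.lfilter(fir, 1.0, context_audio)
```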

5.
Numerous music cultures use nonsense syllables to represent percussive sounds. Covert reciting of these syllable sequences along with percussion music aids active listeners in keeping track of the music. Owing to the acoustic dissimilarity between the representative syllables and the referent percussive sounds, associative learning is necessary for the oral representation of percussion music. We used functional magnetic resonance imaging (fMRI) to explore the neural processes underlying oral rehearsals of music. There were four music conditions in the experiment: (1) passive listening to unlearned percussion music, (2) active listening to learned percussion music, (3) active listening to the syllable representation of (2), and (4) active listening to learned melodic music. Our results identified two neural substrates of the association mechanisms involved in the oral representation of percussion music. First, information integration of heard sounds and the auditory consequences of subvocal rehearsals may engage the right planum temporale during active listening to percussion music. Second, mapping heard sounds to articulatory and laryngeal gestures may engage the left middle premotor cortex.

6.
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre‐babbling infants (at 4–6 months) prefer listening to vowel sounds with infant vocal properties over vowel sounds with adult vocal properties. A listening preference favoring infant vowels may derive from their higher voice pitch, which has been shown to attract infant attention in infant‐directed speech (IDS). In addition, infants' nascent articulatory abilities may induce a bias favoring infant speech given that 4‐ to 6‐month‐olds are beginning to produce vowel sounds. We created infant and adult /i/ (‘ee’) vowels using a production‐based synthesizer that simulates the act of speaking in talkers at different ages and then tested infants across four experiments using a sequential preferential listening task. The findings provide the first evidence that infants preferentially attend to vowel sounds with infant voice pitch and/or formants over vowel sounds with no infant‐like vocal properties, supporting the view that infants' production abilities influence how they process infant speech. The findings with respect to voice pitch also reveal parallels between IDS and infant speech, raising new questions about the role of this speech register in infant development. Research exploring the underpinnings and impact of this perceptual bias can expand our understanding of infant language development.

7.
Reflected sounds are often treated as an acoustic problem because they produce false localization cues and decrease speech intelligibility. However, their properties are shaped by the acoustic properties of the environment and therefore are a potential source of information about that environment. The objective of this study was to determine whether information carried by reflected sounds can be used by listeners to enhance their awareness of their auditory environment. Twelve listeners participated in two auditory training tasks in which they learned to identify three environments based on a limited subset of sounds and then were tested to determine whether they could transfer that learning to new, unfamiliar sounds. Results showed that significant learning occurred despite the task difficulty. An analysis of stimulus attributes suggests that it is easiest to learn to identify reflected sound when it occurs in sounds with longer decay times and broadly distributed dominant spectral components.
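As an illustration of one of these attributes, decay time is commonly estimated by backward (Schroeder) integration of a sound's energy; the sketch below assumes that standard method, which may differ from the analysis used in this study:

```python
import numpy as np

def decay_time(x, fs, drop_db=60.0):
    """Estimate decay time via backward (Schroeder) integration:
    the time for the integrated energy curve to fall by `drop_db` dB."""
    energy = np.cumsum(x[::-1] ** 2)[::-1]  # remaining energy at each sample
    curve_db = 10.0 * np.log10(energy / energy[0] + 1e-12)
    below = np.nonzero(curve_db <= -drop_db)[0]
    return below[0] / fs if below.size else len(x) / fs

# Hypothetical usage on a recorded sound sampled at 44.1 kHz:
# t60 = decay_time(recorded_sound, fs=44100)
```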

8.
Listeners identified spoken words, letters, and numbers and the spatial location of these utterances in three listening conditions as a function of the number of simultaneously presented utterances. The three listening conditions were a normal listening condition, in which the sounds were presented over seven possible loudspeakers to a listener seated in a sound-deadened listening room; a one-headphone listening condition, in which a single microphone that was placed in the listening room delivered the sounds to a single headphone worn by the listener in a remote room; and a stationary KEMAR listening condition, in which binaural recordings from an acoustic manikin placed in the listening room were delivered to a listener in the remote room. The listeners were presented one, two, or three simultaneous utterances. The results show that utterance identification was better in the normal listening condition than in the one-headphone condition, with the KEMAR listening condition yielding intermediate levels of performance. However, the differences between listening in the normal and in the one-headphone conditions were much smaller when two, rather than three, utterances were presented at a time. Localization performance was good for both the normal and the KEMAR listening conditions and at chance for the one-headphone condition. The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.

9.
It is commonly observed that a speaker vocally imitates a sound that she or he intends to communicate to an interlocutor. We report on an experiment that examined the assumption that vocal imitations can effectively communicate a referent sound and that they do so by conveying the features necessary for the identification of the referent sound event. Participants were required to sort a set of vocal imitations of everyday sounds. The resulting clusters corresponded in most cases to the categories of the referent sound events, indicating that the imitations enabled the listeners to recover what was imitated. Furthermore, a binary decision tree analysis showed that a few characteristic acoustic features predicted the clusters. These features also predicted the classification of the referent sounds but did not generalize to the categorization of other sounds. This showed that, for the speaker, vocally imitating a sound consists of conveying the acoustic features important for recognition, within the constraints of human vocal production. As such, vocal imitations prove to be a phenomenon potentially useful for studying sound identification.
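A binary decision tree of this kind can be sketched with a standard classifier over acoustic features. The feature names and data below are hypothetical placeholders, not the study's measurements:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-imitation acoustic features: duration (s),
# spectral centroid (Hz), and a pitched/noisy flag.
X = np.array([[0.8, 1200.0, 1],
              [0.3, 3400.0, 0],
              [1.1,  900.0, 1],
              [0.2, 2900.0, 0]])
y = ["machine", "impact", "machine", "impact"]  # referent sound categories

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["duration", "centroid", "pitched"]))
```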

10.
Do young infants treat speech as a special signal, compared with structurally similar non‐speech sounds? We presented 2‐ to 7‐month‐old infants with nonsense speech sounds and complex non‐speech analogues. The non‐speech analogues retain many of the spectral and temporal properties of the speech signal, including the pitch contour information which is known to be salient to young listeners, and thus provide a stringent test for a potential listening bias for speech. Our results show that infants as young as 2 months of age listened longer to speech sounds. This listening selectivity indicates that early‐functioning biases direct infants’ attention to speech, granting speech a special status in relation to other sounds.

11.
Decoding the function and meaning of a foreign culture's sounds and gestures is a notoriously difficult problem. It is even more challenging when we think about the sounds and gestures of nonhuman animals. This essay reviews what is currently known about the informational content and function of primate vocalizations, emphasizing the problems underlying the construction of a primate “dictionary.” In contrast to the Oxford English Dictionary, this dictionary provides entries for emotional expressions as well as potentially referential expressions. It therefore represents a guide to what animals do with their vocalizations, as well as how those vocalizations are represented by signalers and perceivers. I begin with a discussion of the unit problem: how an acoustic space is carved up into functionally significant components, leading to a species‐specific repertoire or lexicon of sorts. This section shows how little we know about the units of organization within animal vocal repertoires, and how such lack of information constrains our ability to tackle the problem of syntactic structure. In Section III, I review research on the production and perception of vocal signals that appear to be functionally referential. This work shows that several nonhuman primates produce vocalizations that share some of the key properties of reference, but certainly not all; the components that are missing raise questions about their role as precursors to human words. In Section IV, I explore the social uses of vocalizations, assessing whether the signal contains sufficient information for listeners to judge a caller's credibility; ultimately, caller credibility determines how receivers select an appropriate response. Results show that individuals can use calls to assess whether someone is reliable or unreliable, and that such attributes are associated with individuals and particular contexts. I conclude by synthesizing the issues presented and raising some directions for future conceptual and methodological progress.

12.
In the present article, I show that sounds are properties that are not physical in a narrow sense. First, I argue that sounds are properties using Moorean-style arguments, and I defend this property view from various arguments against it that make use of salient disanalogies between sounds and colors. The first disanalogy is that we talk of objects making sounds but not of objects making colors. The second is that we count and quantify over sounds but not colors. The third is that sounds can survive qualitative change in their auditory properties, but colors cannot survive change in their chromatic properties. Next, I provide a taxonomy of property views of sound. As the property view of sound has been so rarely discussed, many of the views available have never been articulated. My taxonomy articulates these views and how they are related to one another. I taxonomize sounds according to three characteristics: dispositional/non-dispositional, relational/non-relational, and reductive/non-reductive. Finally, mirroring a popular argument in the color literature, I argue that physical views in the narrow sense are unable to accommodate the similarity and difference relations in which sounds essentially stand. I end by replying to three objections.

13.
14.
15.
The human central auditory system has a remarkable ability to establish memory traces for invariant features in the acoustic environment despite continual acoustic variations in the sounds heard. By recording the memory-related mismatch negativity (MMN) component of the auditory electric and magnetic brain responses as well as behavioral performance, we investigated how subjects learn to discriminate changes in a melodic pattern presented at several frequency levels. In addition, we explored whether musical expertise facilitates this learning. Our data show that musicians who perform music primarily without a score, in particular, easily learn to detect contour changes in a melodic pattern presented at variable frequency levels. After learning, their auditory cortex detects these changes even when their attention is directed away from the sounds. The present results thus show that, after perceptual learning during attentive listening has taken place, changes in a highly complex auditory pattern can be detected automatically by the human auditory cortex and, further, that this process is facilitated by musical expertise.

16.
Pilfering corvids use observational spatial memory to accurately locate caches that they have seen another individual make. Accordingly, many corvid cache-protection strategies limit the transfer of visual information to potential thieves. Eurasian jays (Garrulus glandarius) employ strategies that reduce the amount of visual and auditory information that is available to competitors. Here, we test whether or not the jays recall and use both visual and auditory information when pilfering other birds’ caches. When jays had no visual or acoustic information about cache locations, the proportion of available caches that they found did not differ from the proportion expected if jays were searching at random. By contrast, after observing and listening to a conspecific caching in gravel or sand, jays located a greater proportion of caches, searched more frequently in the correct substrate type and searched in fewer empty locations to find the first cache than expected. After only listening to caching in gravel and sand, jays also found a larger proportion of caches and searched in the substrate type where they had heard caching take place more frequently than expected. These experiments demonstrate that Eurasian jays possess observational spatial memory and indicate that pilfering jays may gain information about cache location merely by listening to caching. This is the first evidence that a corvid may use recalled acoustic information to locate and pilfer caches.

17.
People across the world seek out beautiful sounds in nature, such as a babbling brook or a nightingale song, for positive human experiences. However, it is unclear whether this positive aesthetic response is driven by a preference for the perceptual features typical of nature sounds versus a higher‐order association of nature with beauty. To test these hypotheses, participants provided aesthetic judgments for nature and urban soundscapes that varied on ease of recognition. Results demonstrated that the aesthetic preference for nature soundscapes was eliminated for the sounds hardest to recognize, and moreover the relationship between aesthetic ratings and several measured acoustic features significantly changed as a function of recognition. In a follow‐up experiment, requiring participants to classify these difficult‐to‐identify sounds into nature or urban categories resulted in a robust preference for nature sounds and a relationship between aesthetic ratings and our measured acoustic features that was more typical of easy‐to‐identify sounds. This pattern of results was replicated with computer‐generated artificial noises, which acoustically shared properties with the nature and urban soundscapes but by definition did not come from these environments. Taken together, these results support the conclusion that the recognition of a sound as either natural or urban dynamically organizes the relationship between aesthetic preference and perceptual features and that these preferences are not inherent to the acoustic features. Implications for nature's role in cognitive and affective restoration are discussed.

18.
Jaegwon Kim has argued that unless mental events are reducible to subvening physical events, they are at best overdeterminers of their effects. Recently, nonreductive physicalists have endorsed this consequence, claiming that the relationship between mental events and their physical bases is tight enough to render any such overdetermination nonredundant, and hence benign. I focus on instances of this strategy that appeal to the notion of constitution. Ultimately, I argue that there is no way to understand the relationship between irreducible mental events and their physical bases so as both to eliminate causal redundancy and to preserve the efficacy of mental events.

19.
Two experiments determined the influence of the range and number of auditory sensory consequences associated with a rapid timing task on the development of motor recognition. Experiment 1 observed no beneficial effect on subsequent movement-transfer performance from experience with the criterion-movement-time sound compared with experience with either a narrow or a wide range of sounds that bracketed the criterion sound (not including the criterion); 60 sound trials prior to transfer did not produce better transfer than did six sounds. The second experiment examined transfer outside the range of previous listening experience by having subjects transfer to one of two possible criterion movement times after having received either constant listening experience or one of two types of variable listening experience. Transfer performance was influenced by the amount of variability in listening experience. These results were seen as support for a schematic representation for motor recognition memory (Schmidt, 1975).

20.
In this article I consider the relationship between natural sounds and music. I evaluate two prominent accounts of this relationship. These accounts satisfy an important condition, the difference condition: musical sounds are different from natural sounds. However, they fail to meet an equally important condition, the interaction condition: musical sounds and natural sounds can interact in aesthetically important ways to create unified aesthetic objects. I then propose an alternative account of the relationship between natural sounds and music that meets both conditions. I argue that natural sounds are distinct from music in that they express a kind of alterity or “otherness,” which occurs in two ways. It occurs referentially, because the sources of natural sounds are natural objects rather than artifactual objects, such as instruments; it also occurs acoustically, because natural sounds tend to contain more microtones than macrotones. On my account, the distinction between music and natural sounds is both conventional and vague; it therefore allows music and natural sounds to come together.
