Similar Articles
20 similar articles found (search time: 31 ms)
1.
Playback experiments have been a useful tool for studying the function of sounds and the relevance of different sound characteristics in signal recognition in many different species of vertebrates. However, successful playback experiments in sound-producing fish remain rare, and few studies have investigated the role of particular sound features in the encoding of information. In this study, we set up an apparatus to test the relevance of acoustic signals in males of the cichlid Metriaclima zebra. We found that territorial males responded to playbacks by increasing their territorial activity and approaching the loudspeaker during and after playbacks. Since sounds may be used to indicate the presence of a competitor, we modified two sound characteristics, the pulse period and the number of pulses, in order to investigate whether the observed behavioural response was modulated by the temporal structure of sounds recorded during aggressive interactions. Modified sounds had little or no effect on the behavioural response they elicited in territorial males, suggesting a high tolerance for variations in pulse period and number of pulses. The biological function of sounds in M. zebra and the lack of responsiveness to our temporal modifications are discussed.
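The stimulus manipulation described in this abstract (varying pulse period and pulse number while keeping the pulse itself constant) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the carrier frequency, pulse duration, and parameter values are assumptions for demonstration only.

```python
import numpy as np

def pulse_train(n_pulses, pulse_period_s, pulse_dur_s=0.01,
                carrier_hz=500.0, fs=44100):
    """Synthesize a pulsed sound: n_pulses Hanning-windowed tone bursts
    with onset-to-onset interval pulse_period_s."""
    total = int(fs * (pulse_period_s * (n_pulses - 1) + pulse_dur_s))
    sig = np.zeros(total)
    pulse_n = int(fs * pulse_dur_s)
    t = np.arange(pulse_n) / fs
    burst = np.sin(2 * np.pi * carrier_hz * t) * np.hanning(pulse_n)
    for k in range(n_pulses):
        start = int(k * pulse_period_s * fs)
        sig[start:start + pulse_n] += burst
    return sig

# A "natural-like" call versus the two manipulations: longer pulse
# period, and fewer pulses (parameter values are illustrative).
natural = pulse_train(n_pulses=6, pulse_period_s=0.05)
slowed = pulse_train(n_pulses=6, pulse_period_s=0.10)
shortened = pulse_train(n_pulses=3, pulse_period_s=0.05)
```

Playing back the natural and modified versions to the same territorial males, as the study does, then reveals whether the behavioural response tracks these temporal parameters.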

2.
In the order Crocodylia, all species are known for their ability to produce sounds in several communication contexts. Although recent experimental studies have provided evidence of the important biological role of young crocodilian calls, especially at hatching time, the juvenile vocal repertoire still needs to be clarified in order to describe the crocodilian acoustic communication channel thoroughly. The goal of this study is to investigate the acoustic features (structure and information coding) of the contact call of juveniles from three different species (Nile crocodile, Crocodylus niloticus; black caiman, Melanosuchus niger; and spectacled caiman, Caiman crocodilus). We show that even though substantial structural differences exist between the calls of the different species, these differences do not seem behaviourally relevant to crocodilians. Indeed, juveniles and adults of the species studied use a similar, non-species-specific way of encoding information, which relies on frequency modulation parameters. Interestingly, using conditioning experiments, we demonstrated that this tolerance in responses to signals of different acoustic structures was unlikely to be related to a lack of discriminatory abilities. This result reinforces the idea that crocodilians have developed adaptations to use sounds efficiently for their communication needs.

3.
Discriminating temporal relationships in speech is crucial for speech and language development. However, temporal variation of vowels is difficult to perceive for young infants when it is determined by surrounding speech sounds. Using a familiarization-discrimination paradigm, we show that English-learning 6- to 9-month-olds are capable of discriminating non-native acoustic vowel duration differences that systematically vary with subsequent consonantal durations. Furthermore, temporal regularity of stimulus presentation potentially makes the task easier for infants. These findings show that young infants can process fine-grained temporal aspects of speech sounds, a capacity that lays the foundation for building a phonological system of their ambient language(s).

4.
A characteristic feature of hearing systems is their ability to resolve both fast and subtle amplitude modulations of acoustic signals. This applies also to grasshoppers, which for mate identification rely mainly on the characteristic temporal patterns of their communication signals. Usually the signals arriving at a receiver are contaminated by various kinds of noise. In addition to extrinsic noise, intrinsic noise caused by stochastic processes within the nervous system contributes to making signal recognition a difficult task. The authors asked to what degree intrinsic noise affects temporal resolution and, particularly, the discrimination of similar acoustic signals. This study aims at exploring the neuronal basis for sexual selection, which depends on exploiting subtle differences between basically similar signals. Applying a metric, by which the similarities of spike trains can be assessed, the authors investigated how well the communication signals of different individuals of the same species could be discriminated and correctly classified based on the responses of auditory neurons. This spike train metric yields clues to the optimal temporal resolution with which spike trains should be evaluated.
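The abstract does not name the metric used; one widely used choice for this kind of analysis is the van Rossum (2001) spike-train distance, in which the kernel time constant tau plays exactly the role of the "temporal resolution" being optimized. A minimal sketch, with all parameter values assumed:

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=0.01, fs=10000, t_max=1.0):
    """van Rossum spike-train distance: convolve each spike train
    (a list of spike times in seconds) with a causal exponential kernel
    of time constant tau, then take an L2 distance between the traces.
    Small tau emphasizes precise spike timing; large tau, firing rate."""
    n = int(t_max * fs)
    kernel = np.exp(-np.arange(n) / (tau * fs))

    def filtered(spikes):
        trace = np.zeros(n)
        for s in spikes:
            trace[int(s * fs)] += 1.0
        return np.convolve(trace, kernel)[:n]

    diff = filtered(train_a) - filtered(train_b)
    return float(np.sqrt(np.sum(diff ** 2) / (tau * fs)))

# Identical trains are at distance 0; moving one spike increases it.
a = [0.10, 0.20, 0.30]
b = [0.10, 0.20, 0.35]
d_same = van_rossum_distance(a, a)
d_diff = van_rossum_distance(a, b)
```

Classifying a response to the individual whose template spike train is nearest under this distance, while sweeping tau, is one way to estimate the optimal temporal resolution the abstract refers to.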

5.
Recent evidence from acoustic analysis and playback experiments indicates that adult female rhesus monkey (Macaca mulatta) coos are individually distinctive but their screams are not. In this study, the authors compared discrimination of individual identity in these sounds by naive human listeners who judged whether 2 sounds had been produced by the same monkey or by 2 different monkeys. Each of 3 experiments using this same-different design showed significantly better discrimination of vocalizer identity from coos than from screams. Experiment 1 demonstrated the basic finding. Experiment 2 also tested the effect of non-identity-related scream variation, and Experiment 3 added a comparison with human vowel sounds. The outcomes suggest that acoustic structural differences between coos and screams influence the salience of caller-identity cues, with significant implications for understanding the functions of these calls.

6.
We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.

7.
Ear advantage for the processing of dichotic speech sounds can be separated into two components. One is an ear advantage for those phonetic features that are based on spectral acoustic cues; this advantage follows the direction of a given individual's ear dominance for the processing of spectral information in dichotic sounds, whether speech or nonspeech. The other is a right-ear advantage for the processing of temporal information in dichotic sounds, whether speech or nonspeech. The present experiments were successful in dissociating these two factors. Since the results clearly show that ear advantage for speech is influenced by ear dominance for spectral information, a full understanding of the asymmetry in the perceptual salience of speech sounds in any individual will not be possible without knowing that individual's ear dominance.

8.
Speech sounds are highly variable, yet listeners readily extract information from them and transform continuous acoustic signals into meaningful categories during language comprehension. A central question is whether perceptual encoding captures acoustic detail in a one-to-one fashion or whether it is affected by phonological categories. We addressed this question in an event-related potential (ERP) experiment in which listeners categorized spoken words that varied along a continuous acoustic dimension (voice-onset time, or VOT) in an auditory oddball task. We found that VOT effects were present through a late stage of perceptual processing (N1 component, ~100 ms poststimulus) and were independent of categorization. In addition, effects of within-category differences in VOT were present at a postperceptual categorization stage (P3 component, ~450 ms poststimulus). Thus, at perceptual levels, acoustic information is encoded continuously, independently of phonological information. Further, at phonological levels, fine-grained acoustic differences are preserved along with category information.

9.
Many animal species that rely mainly on calls to communicate produce individual acoustic structures, but we wondered whether individuals of species better known as visual communicants, with small vocal repertoires, would also exhibit individual distinctiveness in calls. Moreover, theoretical advances concerning the evolution of social intelligence are usually based on primate species data, but relatively little is known about the social cognitive capacities of non-primate mammals. However, some non-primate species demonstrate auditory recognition of social categories and possess mental representation of their social network. Horses (Equus caballus) form stable social networks and although they display a large range of visual signals, they also use long-distance whinny calls to maintain contact. Here, we investigated the potential existence of individual acoustic signatures in whinny calls and the ability of horses to discriminate by ear individuals varying in their degree of familiarity. Our analysis of the acoustic structure of whinnies of 30 adult domestic horses (ten stallions, ten geldings, ten mares) revealed that some of the frequency and temporal parameters carried reliable information about the caller’s sex, body size and identity. However, no correlations with age were found. Playback experiments evaluated the behavioural significance of this variability. Twelve horses heard either control white noise or whinnies emitted by group members, familiar neighbours or unfamiliar horses. While control sounds did not induce any particular response, horses discriminated the social category of the callers and reacted with a sound-specific behaviour (vigilance and attraction varied with familiarity). Our results support the existence of social knowledge in horses and suggest a process of vocal coding/decoding of information.

10.
Studies of acoustic interactions in animal groups, such as chorusing insects, anurans, and birds, have been invaluable in showing how cooperation and competition shape signal structure and use. The begging calls of nestling birds are ideal for such studies, because they function both as cooperative signals of the brood's needs and as competitive signals for parental allocation within the brood. Nonetheless, studies of acoustic interactions among nestlings are rare. Here we review our work on acoustic interactions in nestling tree swallows (Tachycineta bicolor), especially how calls are used in competition for parental feedings. Nestlings attracted parental attention and responded to acoustic interference mainly by increasing call output. However, nestlings also gave more similar calls when they called together and decreased their call bandwidth when exposed to elevated noise. We suggest that these competitive uses of calls might intensify the cooperative brood signal, affecting both parental provisioning and vocal development. Given their tremendous variation across species, begging calls offer promising opportunities for developmental and comparative studies of acoustic signaling.

11.
12.
Budgerigars (Melopsittacus undulatus), canaries (Serinus canaria), and zebra finches (Poephila guttata castanotis) were tested for their ability to discriminate among distance calls of each species. For comparison, starlings (Sturnus vulgaris) were tested on the same sounds. Response latencies to detect a change in a repeating background of sound were taken as a measure of the perceptual similarity among calls. All 4 species showed clear evidence of 3 perceptual categories corresponding to the calls of the 3 species. Also, budgerigars, canaries, and zebra finches showed an enhanced ability to discriminate among calls of their own species over the calls of the others. Starlings discriminated more efficiently among canary calls than among budgerigar or zebra finch calls. The results show species differences in discrimination of species-specific acoustic communication signals and provide insight into the nature of specialized perceptual processes.

13.
Operant conditioning and multidimensional scaling procedures were used to study auditory perception of complex sounds in the budgerigar. In a same-different discrimination task, budgerigars learned to discriminate among natural vocal signals. Multidimensional scaling procedures were used to arrange these complex acoustic stimuli in a two-dimensional space reflecting perceptual organization. Results show that budgerigars group vocal stimuli according to functional and acoustical categories. Studies with only contact calls show that birds also make within-category discriminations. The acoustic cues in contact calls most salient to budgerigars appear to be quite complex. There is a suggestion that the sex of the signaler may also be encoded in these calls. The results from budgerigars were compared with the results from humans tested on some of the same sets of complex sounds.
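Multidimensional scaling of this sort takes a matrix of pairwise dissimilarities (here, derived from discrimination performance) and embeds the stimuli in a low-dimensional space. A minimal sketch of classical (Torgerson) MDS, with a hypothetical 4-call dissimilarity matrix standing in for the budgerigar data:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed items in `dims` dimensions
    from a symmetric matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dims]     # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Hypothetical dissimilarities (e.g. scaled from discrimination
# latencies): calls 0-1 form one category, calls 2-3 another.
D = np.array([[0.0, 1.0, 4.0, 4.0],
              [1.0, 0.0, 4.0, 4.0],
              [4.0, 4.0, 0.0, 1.0],
              [4.0, 4.0, 1.0, 0.0]])
X = classical_mds(D)
# In the embedding, within-category pairs lie closer together than
# between-category pairs, mirroring the perceptual grouping.
```

The study itself does not specify the MDS variant; classical MDS is shown here because it is the simplest fully deterministic one.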

14.
Outside of the laboratory, listening conditions are often less than ideal, and when attending to sounds from a particular source, portions are often obliterated by extraneous noises. However, listeners possess rather elegant reconstructive mechanisms. Restoration can be complete, so that missing segments are indistinguishable from those actually present and the listener is unaware that the signal is fragmented. This phenomenon, called temporal induction (TI), has been studied extensively with nonverbal signals and to a lesser extent with speech. Earlier studies have demonstrated that TI can produce illusory continuity spanning gaps of a few hundred milliseconds when portions of a signal are replaced by a louder sound capable of masking the signal were it actually present. The present study employed various types of speech signals with periodic gaps and measured the effects upon intelligibility produced by filling these gaps with noises. Enhancement of intelligibility through multiple phonemic restoration occurred when the acoustic requirements for TI were met and when sufficient contextual information was available in the remaining speech fragments. It appears that phonemic restoration is a specialized form of TI that uses linguistic skills for the reconstruction of obliterated speech.
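The core temporal-induction stimulus, a signal with periodic gaps that are filled with a louder noise, can be sketched as follows. The gap timing, noise level, and the pure-tone stand-in for speech are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def gate_with_noise(signal, fs, on_s=0.2, off_s=0.2, noise_gain=2.0, seed=0):
    """Alternate on_s seconds of intact signal with off_s seconds in
    which the signal is deleted and replaced by louder broadband noise,
    the basic manipulation used to elicit temporal induction."""
    rng = np.random.default_rng(seed)
    out = signal.copy()
    on_n, off_n = int(on_s * fs), int(off_s * fs)
    peak = np.max(np.abs(signal))
    i = on_n
    while i < len(out):
        j = min(i + off_n, len(out))
        out[i:j] = noise_gain * peak * rng.uniform(-1.0, 1.0, j - i)
        i = j + on_n
    return out

fs = 16000
t = np.arange(fs) / fs                  # 1 s stand-in for a speech signal
sig = np.sin(2 * np.pi * 220 * t)
interrupted = gate_with_noise(sig, fs)  # noise bursts replace periodic gaps
```

Comparing intelligibility for gaps left silent versus gaps filled with noise at or above the masking level is the contrast that isolates the restoration effect.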

15.

16.
The ability of 73 male bullfrogs (Rana catesbeiana) to detect single mistuned harmonics in an otherwise periodic signal was studied. Bullfrogs in their natural environment were presented with playbacks of synthetic signals, resembling their species advertisement calls, that differed in the frequency of 1 harmonic component (out of 22). There were significant differences in the number and latency of the males' evoked vocal responses to these stimuli, suggesting that males were sensitive to the differences between the sounds. Differences in envelope shape (rate and depth of amplitude modulation) produced by the harmonic mistunings may underlie the differences in response. Frogs, like birds and humans, can discriminate sounds on the basis of harmonic structure, indicating that this is a general perceptual trait shared among vertebrates.
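Synthetic harmonic complexes with a single mistuned component, of the kind played back to the bullfrogs, can be generated along these lines. The fundamental, duration, and flat amplitude spectrum are assumptions, not the advertisement-call parameters from the study:

```python
import numpy as np

def harmonic_complex(f0=100.0, n_harmonics=22, mistuned=None,
                     shift=0.05, dur=0.5, fs=44100):
    """Sum of n_harmonics equal-amplitude cosines at multiples of f0.
    If `mistuned` is a harmonic number, that component's frequency is
    shifted by `shift` (a proportion of its nominal value), which also
    alters the envelope's amplitude-modulation rate and depth."""
    t = np.arange(int(dur * fs)) / fs
    sig = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned:
            f *= 1.0 + shift
        sig += np.cos(2 * np.pi * f * t)
    return sig / n_harmonics

periodic = harmonic_complex()            # fully periodic 22-harmonic signal
shifted = harmonic_complex(mistuned=4)   # 4th harmonic raised by 5%
```

The envelope change the abstract points to arises because the mistuned component beats against its in-tune neighbors instead of locking to the common period.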

17.
This study surveyed the vocalization repertoire of descendants of wild-trapped Rattus rattus. Sound recordings synchronized with behavioral observations were conducted in an animal colony living undisturbed under seminatural conditions. Analyses of sound recordings revealed 10 distinct acoustic signals, 5 of which were in the ultrasonic frequency range. The time course and the frequency pattern of the analyzed sounds were similar to those described for R. norvegicus, and they occurred in comparable situations. A species-specific difference may be the intensity of the emitted sounds. The possible communicative function of the acoustic signals is discussed.

18.
Speech sounds can be classified on the basis of their underlying articulators or on the basis of the acoustic characteristics resulting from particular articulatory positions. Research in speech perception suggests that distinctive features are based on both articulatory and acoustic information. In recent years, neuroelectric and neuromagnetic investigations have provided evidence for the brain's early sensitivity to distinctive features and their acoustic consequences, particularly for place of articulation distinctions. Here, we compare English consonants in a Mismatch Field design across two broad and distinct places of articulation, labial and coronal, and provide further evidence that early evoked auditory responses are sensitive to these features. We further add to the findings of asymmetric consonant processing, although we do not find support for coronal underspecification. Labial glides (Experiment 1) and fricatives (Experiment 2) elicited larger Mismatch responses than their coronal counterparts. Interestingly, their M100 dipoles differed along the anterior/posterior dimension of the auditory cortex, which has previously been found to spatially reflect place of articulation differences. Our results are discussed with respect to the acoustic and articulatory bases of featural speech sound classifications and with respect to a model that maps distinctive phonetic features onto long-term representations of speech sounds.

19.
In large social groups, acoustic communication signals are prone to masking by conspecific sounds. Bottlenose dolphins (Tursiops truncatus) use highly distinctive signature whistles that counter masking effects. However, they can be found in very large groups where masking by conspecific sounds may become unavoidable. In this study we used passive acoustic localization to investigate how whistle rates of wild bottlenose dolphins change in relation to group size and behavioral context. We found that individual whistle rates decreased as group size increased. Dolphins displayed higher whistle rates in contexts where group members were more dispersed, as in socializing and nonpolarized movement, than during coordinated surface travel. Acoustic localization also showed that many whistles were produced by nearby groups rather than by our focal group; thus, previous studies based on single-hydrophone recordings may have overestimated whistle rates. Our results show that although bottlenose dolphins whistle more in social situations, they decrease vocal output in large groups, where the potential for signal masking by other dolphins' whistles increases.

20.
Warren, Bashford, and Gardner (1990) found that when sequences consisting of 10 40-msec steady-state vowels were presented in recycled format, minimal changes in order (interchanging the position of two adjacent phonemes) produced easily recognizable differences in verbal organization, even though the vowel durations were well below the threshold for identification of order. The present study was designed to determine whether this ability to discriminate between different arrangements of components is limited to speech sounds subject to verbal organization, or whether it reflects a more general auditory ability. In the first experiment, 10 40-msec sinusoidal tones were substituted for the vowels; it was found that the easy discrimination of minimal changes in order is not limited to speech sounds. A second experiment substituted 10 40-msec frozen noise segments for the vowels. The succession of noise segments formed a 400-msec frozen noise pattern that cannot be considered a sequence of individual sounds, as can the succession of vowels or tones. Nevertheless, listeners again could discriminate between patterns differing only in the order of two adjacent 40-msec segments. These results, together with other evidence, indicate that it is not necessary for acoustic sequences of brief items (such as phonemes and tones) to be processed as perceptual sequences (that is, as a succession of discrete identifiable sounds) for different arrangements to be discriminated. Instead, component acoustic elements form distinctive “temporal compounds,” which permit listeners to distinguish between different arrangements of portions of an acoustic pattern without the need for segmentation into an ordered series of component items. Implications for models dealing with the recognition of speech and music are discussed.
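The recycled-sequence stimuli can be sketched as follows: a fixed set of brief items is concatenated and the whole pattern repeated, with the comparison stimulus differing only in the order of two adjacent items. The tone frequencies below are illustrative assumptions, not the study's values:

```python
import numpy as np

def recycled_sequence(freqs, item_ms=40, fs=16000, cycles=3):
    """Concatenate an item_ms tone segment for each frequency in
    `freqs`, then repeat (recycle) the whole pattern `cycles` times."""
    n = int(fs * item_ms / 1000)
    t = np.arange(n) / fs
    cycle = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    return np.tile(cycle, cycles)

def swap_adjacent(items, i):
    """Return a copy of `items` with elements i and i+1 interchanged."""
    out = list(items)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

# Ten 40-msec items; the comparison pattern differs only in the order
# of two adjacent items, as in the experiments described above.
base = [400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300]
standard = recycled_sequence(base)
comparison = recycled_sequence(swap_adjacent(base, 3))
```

A same-different judgment between such pairs tests whether listeners can distinguish the two arrangements without identifying the order of the individual items.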


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号