Similar articles
20 similar articles found.
1.
Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law, where humans group successive tones alternating in pitch and intensity as trochees (high–low and loud–soft) and tones alternating in duration as iambs (short–long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in one non-human animal species, rats. The perceptual grouping of sounds alternating in duration seems to be affected by native language in humans and has so far not been found among animals. In the current study, we explore to what extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were next tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, similar to humans. However, most of the zebra finches in the duration and intensity conditions did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.

2.
Vocal tract resonances, called formants, are the most important parameters in human speech production and perception. They encode linguistic meaning and have been shown to be perceived by a wide range of species. Songbirds are also sensitive to different formant patterns in human speech. They can categorize words differing only in their vowels based on the formant patterns, independent of speaker identity, in a way comparable to humans. These results indicate that speech perception mechanisms are more similar between songbirds and humans than realized before. One of the major questions regarding formant perception concerns the weighting of different formants in the speech signal ("acoustic cue weighting") and whether this process is unique to humans. Using an operant Go/NoGo design, we trained zebra finches to discriminate syllables whose vowels differed in their first three formants. When the birds were subsequently tested with novel vowels that were similar to the familiar vowels in either their first formant or their second and third formants, similarity in the higher formants was weighted much more strongly than similarity in the lower formant. Thus, zebra finches indeed exhibit a cue weighting bias. Interestingly, we also found that Dutch speakers, when tested with the same paradigm, exhibit the same cue weighting bias. This, together with earlier findings, supports the hypothesis that human speech evolution might have exploited general properties of the vertebrate auditory system.

3.
The ability to form perceptual equivalence classes from variable input stimuli is common in both animals and humans. Neural circuitry that can disambiguate ambiguous stimuli to arrive at perceptual constancy has been documented in the barn owl's inferior colliculus, where sound-source azimuth is signaled by interaural phase differences spanning the frequency spectrum of the sound wave. Extrapolating from the sound-localization system of the barn owl to human speech, two hypothetical models are offered to conceptualize the neural realization of relative invariance in (a) categorization of stop consonants /b, d, g/ across varying vowel contexts and (b) vowel identity across speakers. Two computational algorithms employing real speech data were used to establish acoustic commonalities to form neural mappings representing phonemic equivalence classes in the form of functional arrays similar to those seen in the barn owl.

4.
A hallmark of the human language faculty is the use of syntactic rules. The natural vocalizations of animals are syntactically simple, but several studies indicate that animals can detect and discriminate more complex structures in acoustic stimuli. However, how they discriminate such structures is often not clear. Using an artificial grammar learning paradigm, zebra finches were tested in a Go/No-go experiment for their ability to distinguish structurally different three-element sound sequences. In Experiment 1, zebra finches learned to discriminate ABA and BAB from ABB, AAB, BBA, and BAA sequences. Tests with probe sounds consisting of four elements suggested that the discrimination was based on attending to the presence or absence of repeated A- and B-elements. One bird generalized the discrimination to a new element type. In Experiment 2, we continued the training by adding four-element songs following a ‘first and last identical versus different’ rule that could not be solved by attending to repetitions. Only two out of five birds learned the overall discrimination. Testing with novel probes demonstrated that discrimination was not based on using the ‘first and last identical’ rule, but on attending to the presence or absence of the individual training stimuli. The two birds differed in the strategies used. Our results thus demonstrate only a limited degree of abstract rule learning but highlight the need for extensive and critical probe testing to examine the rules that animals (and humans) use to solve artificial grammar learning tasks. They also underline that rule learning strategies may differ between individuals.

5.
Subjects (average age 21 years, recruited by personal contact and through a school) were presented with a spoken sentence on tape and then heard six speakers of the same sex, including the original speaker, say the same sentence. They were required to indicate which was the original speaker. The task was repeated with seven different sentences and sets of speakers. One group of subjects heard short sentences containing an average of 2.14 different vowel sounds (6.28 syllables), another group heard short sentences containing an average of 6.14 vowel sounds (7.28 syllables), and a third group heard longer sentences containing an average of 6.28 vowel sounds (11.00 syllables). Accuracy of speaker identification improved significantly when more vowel sounds were heard, but increased sentence length had no significant effect on performance. Performance was significantly better when the listener was the same sex as the speaker than when the listener was of the other sex.

6.
Complex sounds vary along a number of acoustic dimensions. These dimensions may exhibit correlations that are familiar to listeners due to their frequent occurrence in natural sounds, namely speech. However, the precise mechanisms that enable the integration of these dimensions are not well understood. In this study, we examined the categorization of novel auditory stimuli that differed in the correlations of their acoustic dimensions, using decision bound theory. Decision bound theory assumes that stimuli are categorized on the basis of either a single dimension (rule based) or the combination of more than one dimension (information integration) and provides tools for assessing successful integration across multiple acoustic dimensions. In two experiments, we manipulated the stimulus distributions such that in Experiment 1, optimal categorization could be accomplished by either a rule-based or an information integration strategy, while in Experiment 2, optimal categorization was possible only by using an information integration strategy. In both experiments, the pattern of results demonstrated that unidimensional strategies were strongly preferred. Listeners focused on the acoustic dimension most closely related to pitch, suggesting that pitch-based categorization was given preference over timbre-based categorization. Importantly, in Experiment 2, listeners also relied on a two-dimensional information integration strategy when immediate feedback was provided. Furthermore, this strategy was used more often for distributions defined by a negative spectral correlation between stimulus dimensions than for distributions with a positive correlation. These results suggest that prior experience with such correlations might shape short-term auditory category learning.

7.
The categorical discrimination of synthetic human speech sounds by rhesus macaques was examined using the cardiac component of the orienting response. A within-category change, consisting of stimuli that differed acoustically in the onset of F2 and F3 transitions but that humans identify as belonging to the same phonetic category, was responded to differently from a no-change control condition. Stimuli that differed by the same amount in the onset of F2 and F3 transitions, but that human observers identify as belonging to separate phonetic categories, were differentiated to an even greater degree than the within-category stimuli. The results provide ambiguous support for an articulatory model of human speech perception and are interpreted instead in terms of a feature-detector model of auditory perception.

8.
Primates can learn to categorize complex shapes, but as yet it is unclear how this categorization learning affects the representation of shape in visual cortex. Previous studies that have examined the effect of categorization learning on shape representation in the macaque inferior temporal (IT) cortex have produced diverse and conflicting results that are difficult to interpret owing to inadequacies in design. The present study overcomes these issues by recording IT responses before and after categorization learning. We used parameterized shapes that varied along two shape dimensions. Monkeys were extensively trained to categorize the shapes along one of the two dimensions. Unlike previous studies, our paradigm counterbalanced the relevant categorization dimension across animals. We found that categorization learning increased selectivity specifically for the category-relevant stimulus dimension (i.e., an expanded representation of the trained dimension), and that the ratio of within-category response similarities to between-category response similarities increased for the relevant dimension (i.e., category tuning). These small effects were only evident when the learned category-related effects were disentangled from the prelearned stimulus selectivity. These results indicate that shape-categorization learning can induce minor category-related changes in the shape tuning of IT neurons in adults, and suggest that learned, category-related changes in neuronal responses mainly occur downstream from IT.

9.
Lim, S. J., & Holt, L. L. (2011). Cognitive Science, 35(7), 1390-1405.
Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research, and showed a shift toward more native-like perceptual cue weights.

10.
Operant-conditioning techniques were used to investigate the ability of zebra finches (Taeniopygia guttata) and Bengalese finches (Lonchura striata domestica) to detect a zebra finch or a Bengalese finch target song intermixed with other birdsongs. Sixteen birds were trained to respond to the presence of a particular target song, either of their own species (n = 8) or of another species (n = 8). The birds were able to learn a discrimination between song mixtures that contained a target song and song mixtures that did not, and they were able to maintain their response to the target song when it was mixed with novel songs. Zebra finches, but not Bengalese finches, learned the discrimination more quickly with a conspecific target and were worse at detecting a Bengalese finch target in the presence of a conspecific song. The results indicate that selective attention to birdsongs within an auditory scene is related to their biological relevance.

11.
Absolute pitch (AP) is the ability to classify individual pitches without an external referent. The authors compared results from pigeons (Columba livia, a nonsongbird species) with results (R. Weisman, M. Njegovan, C. Sturdy, L. Phillmore, J. Coyle, & D. Mewhort, 1998) from zebra finches (Taeniopygia guttata, a songbird species) and humans (Homo sapiens) in AP tests that required classification of contiguous tones into 3 or 8 frequency ranges on the basis of correlations between the tones in each frequency range and reward. Pigeons' 3-range discriminations were similar in accuracy to those of zebra finches and humans. In the more challenging 8-range task, pigeons, like zebra finches, discriminated shifts from reward to nonreward from range to range across all 8 ranges, whereas humans discriminated only the 1st and last ranges. Taken together with previous research, the present experiments suggest that birds may have more accurate AP than mammals.

12.
A man, a woman, and a child saying the same vowel do so with very different voices. The auditory system solves the complex problem of extracting what the man, woman or child has said despite substantial differences in the acoustic properties of their voices. Much of the acoustic variation between the voices of men and women is due to differences in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker, then it could potentially correct for speaker-sex-related acoustic variation, thus facilitating vowel recognition. This study measured the minimum stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or a woman, and the minimum stimulus duration necessary to accurately recognise which vowel was spoken. Results showed that reliable vowel recognition precedes reliable speaker-sex discrimination, thus questioning the use of speaker-sex information in compensating for speaker-sex-related acoustic variation in the voice. Furthermore, the pattern of performance across experiments in which the fundamental frequency and formant frequency information of speakers' voices were systematically varied was markedly different depending on whether the task was speaker-sex discrimination or vowel recognition. This argues for there being little relationship between perception of speaker sex (indexical information) and perception of what has been said (linguistic information) at short durations.

13.
Working memory uses central sound representations as an informational basis. The central sound representation is the temporally and feature-integrated mental representation that corresponds to phenomenal perception. It is used in (higher-order) mental operations and stored in long-term memory. In the bottom-up processing path, the central sound representation can be probed at the level of auditory sensory memory with the mismatch negativity (MMN) of the event-related potential. The present paper reviews a newly developed MMN paradigm to tap into the processing of speech sound representations. Preattentive vowel categorization based on F1-F2 formant information occurs in speech sounds and complex tones even under conditions of high variability of the auditory input. However, an additional experiment demonstrated the limits of the preattentive categorization of language-relevant information. It tested whether the system categorizes complex tones containing the F1 and F2 formant components of the vowel /a/ differently from six sounds with nonlanguage-like F1-F2 combinations. From the absence of an MMN in this experiment, it is concluded that no adequate vowel representation was constructed. This points to the limits of preattentive vowel categorization.

14.
The durations of animals' brief vocalizations provide conspecifics with important recognition cues. In the present experiments, zebra finches and humans (trained musicians) were rewarded for responding after S+ (standard) auditory signals from 56 to 663 ms and not for responding after shorter or longer S- (comparison) durations from 10 to 3684 ms. With either a single standard (Experiment 1) or multiple standards (Experiment 2), both zebra finches and humans timed brief signals to about the same level of accuracy. The results were in qualitative agreement with predictions from scalar timing theory and its connectionist implementation in both experiments. The connectionist model provides a good quantitative account of temporal gradients with a single standard (Experiment 1) but not with multiple standards (Experiment 2).

15.
Male Bengalese finches are left-side dominant for the motor control of song in the sensorimotor nucleus (the high vocal center, or HVc) of the telencephalon. We examined whether perceptual discrimination of songs might also be lateralized in this species. Twelve male Bengalese finches were trained by operant conditioning to discriminate between a Bengalese finch song and a zebra finch song. Before training, the left HVc was lesioned in four birds and the right HVc was lesioned in four other birds. The remaining four birds were used as controls without surgery. Birds with a left HVc lesion required significantly more time to learn to discriminate between the two songs than did birds with a right HVc lesion or intact control birds. These results suggest that the left HVc is not only dominant for the motor control of song, but also for the perceptual discrimination of song.

16.
刘文理 & 乐国安 (2012). 心理学报 (Acta Psychologica Sinica), 44(5), 585-594.
Using a priming paradigm with Chinese-speaking listeners as participants, this study examined whether nonspeech sounds influence the perception of speech sounds. Experiment 1 examined the effect of pure tones on the perception of a consonant-category continuum and found that pure tones influenced perception of the continuum, producing a spectral contrast effect. Experiment 2 examined the effects of pure tones and complex tones on vowel perception and found that pure or complex tones matching the vowel's formant frequencies speeded vowel identification, producing a priming effect. Both experiments consistently showed that nonspeech sounds can influence the perception of speech sounds, indicating that speech perception also involves a prelinguistic stage of spectral feature analysis, consistent with the auditory theory of speech perception.

17.
Operant conditioning and multidimensional scaling procedures were used to study auditory perception of complex sounds in the budgerigar. In a same-different discrimination task, budgerigars learned to discriminate among natural vocal signals. Multidimensional scaling procedures were used to arrange these complex acoustic stimuli in a two-dimensional space reflecting perceptual organization. Results show that budgerigars group vocal stimuli according to functional and acoustical categories. Studies with only contact calls show that birds also make within-category discriminations. The acoustic cues in contact calls most salient to budgerigars appear to be quite complex. There is a suggestion that the sex of the signaler may also be encoded in these calls. The results from budgerigars were compared with the results from humans tested on some of the same sets of complex sounds.

18.
刘文理 & 祁志强 (2016). 心理科学 (Journal of Psychological Science), 39(2), 291-298.
Using a priming paradigm, two experiments examined priming effects in the perception of consonant and vowel categories, respectively. The primes were pure tones and the target categories themselves; the targets were consonant-category and vowel-category continua. The results showed that the percentage of category responses for the consonant continuum was affected by both pure-tone and speech primes, whereas reaction times for consonant categorization were affected only by speech primes; the percentage of category responses for the vowel continuum was not affected by either type of prime, but reaction times for vowel categorization were affected by speech primes. These results indicate that priming effects differ between consonant-category and vowel-category perception, providing new evidence for a difference in the underlying processing mechanisms of consonant and vowel categories.

19.
The sensitive period is a special time for auditory learning in songbirds. However, little is known about perception and discrimination of song during this period of development. The authors used a go/no-go operant task to compare discrimination of conspecific song from reversed song in juvenile and adult zebra finches (Taeniopygia guttata), and to test for possible developmental changes in perception of syllable structure and syllable syntax. In Experiment 1, there were no age or sex differences in the ability to learn the discrimination, and the birds discriminated the forward from reversed song primarily on the basis of local syllable structure. Similar results were found in Experiment 2 with juvenile birds reared in isolation from song. Experiment 3 found that juvenile zebra finches could discriminate songs on the basis of syllable order alone, although this discrimination was more difficult than one based on syllable structure. The results reveal well-developed song discrimination and song perception in juvenile zebra finches, even in birds with little experience with song.

20.
There is a rich history of behavioral and neurobiological research focused on the ‘syntax’ of birdsong as a model for human language and complex auditory perception. Zebra finches are one of the most widely studied songbird species in this area of investigation. As they produce song syllables in a fixed sequence, it is reasonable to assume that adult zebra finches are also sensitive to the order of syllables within their song; however, results from electrophysiological and behavioral studies provide somewhat mixed evidence on exactly how sensitive zebra finches are to syllable order as compared, say, to syllable structure. Here, we investigate how well adult zebra finches can discriminate changes in syllable order relative to changes in syllable structure in their natural song motifs. In addition, we identify a possible role for experience in enhancing sensitivity to syllable order. We found that both male and female adult zebra finches are surprisingly poor at discriminating changes to the order of syllables within their species-specific song motifs, but are extraordinarily good at discriminating changes to syllable structure (i.e., reversals) in specific syllables. Direct experience or familiarity with a song, either using the bird's own song (BOS) or the song of a flock mate as the test stimulus, improved both male and female zebra finches' sensitivity to syllable order. However, even with experience, birds remained much more sensitive to structural changes in syllables. These results help to clarify some of the ambiguities in the literature on the discriminability of changes in syllable order in zebra finches, provide potential insight into the ethological significance of zebra finch song features, and suggest new avenues of investigation in using zebra finches as animal models for sequential sound processing.
