Similar literature
A total of 20 similar documents were found (search time: 406 ms)
1.
There is a rich history of behavioral and neurobiological research focused on the ‘syntax’ of birdsong as a model for human language and complex auditory perception. Zebra finches are one of the most widely studied songbird species in this area of investigation. As they produce song syllables in a fixed sequence, it is reasonable to assume that adult zebra finches are also sensitive to the order of syllables within their song; however, results from electrophysiological and behavioral studies provide somewhat mixed evidence on exactly how sensitive zebra finches are to syllable order as compared, say, to syllable structure. Here, we investigate how well adult zebra finches can discriminate changes in syllable order relative to changes in syllable structure in their natural song motifs. In addition, we identify a possible role for experience in enhancing sensitivity to syllable order. We found that both male and female adult zebra finches are surprisingly poor at discriminating changes to the order of syllables within their species-specific song motifs, but are extraordinarily good at discriminating changes to syllable structure (i.e., reversals) in specific syllables. Direct experience or familiarity with a song, either using the bird’s own song (BOS) or the song of a flock mate as the test stimulus, improved both male and female zebra finches’ sensitivity to syllable order. However, even with experience, birds remained much more sensitive to structural changes in syllables. These results help to clarify some of the ambiguities from the literature on the discriminability of changes in syllable order in zebra finches, provide potential insight on the ethological significance of zebra finch song features, and suggest new avenues of investigation in using zebra finches as animal models for sequential sound processing.

2.
Previous studies have suggested that French listeners experience difficulties when they have to discriminate between words that differ in stress. A limitation is that these studies used stress patterns that do not respect the rules of stress placement in French. In this study, three stress patterns were tested on bisyllabic words: (1) the legal stress pattern in French, namely words that were unstressed compared to words that bore primary stress on their last syllable (/?u?i/-/?u’?i/), (2) an illegal stress location pattern, namely words that bore primary stress on their first syllable compared to words that bore primary stress on their last syllable (/’?u?i/-/?u’?i/), and (3) an illegal pattern that involves an unstressed word, namely words that were unstressed compared to words that bore primary stress on their first syllable (/?u?i/-/’?u?i/). In an ABX task, participants heard three items produced by three different speakers and had to indicate whether X was identical to A or B. The stimuli A and B varied in stress (/?u’?i/-/?u?i/-/?u’?i/), in one phoneme (/?u’?i/-/?u’???/-/?u’?i/) or in both stress and one phoneme (/?u’?i/-/?u???/-/?u’?i/). The results showed that French listeners are fully able to discriminate between two words differing in stress provided that the stress pattern included an unstressed word. More importantly, they suggest that the French listeners’ difficulties mainly reside in locating stress within words.

3.
The sensitive period is a special time for auditory learning in songbirds. However, little is known about perception and discrimination of song during this period of development. The authors used a go/no-go operant task to compare discrimination of conspecific song from reversed song in juvenile and adult zebra finches (Taeniopygia guttata), and to test for possible developmental changes in perception of syllable structure and syllable syntax. In Experiment 1, there were no age or sex differences in the ability to learn the discrimination, and the birds discriminated the forward from reversed song primarily on the basis of local syllable structure. Similar results were found in Experiment 2 with juvenile birds reared in isolation from song. Experiment 3 found that juvenile zebra finches could discriminate songs on the basis of syllable order alone, although this discrimination was more difficult than one based on syllable structure. The results reveal well-developed song discrimination and song perception in juvenile zebra finches, even in birds with little experience with song.

4.
When a formant transition and the remainder of a syllable are presented to subjects' opposite ears, most subjects perceive two simultaneous sounds: a syllable and a nonspeech chirp. It has been demonstrated that, when the remainder of the syllable (base) is kept unchanged, the identity of the perceived syllable will depend on the kind of transition presented at the opposite ear. This phenomenon, called duplex perception, has been interpreted as the result of the independent operation of two perceptual systems or modes, the phonetic and the auditory mode. In the present experiments, listeners were required to identify and discriminate such duplex syllables. In some conditions, the isolated transition was embedded in a temporal sequence of capturing transitions sent to the same ear. This streaming procedure significantly weakened the contribution of the transition to the perceived phonetic identity of the syllable. It is likely that the sequential integration of the isolated transition into a sequence of capturing transitions affected its fusion with the contralateral base. This finding contrasts with the idea that the auditory and phonetic processes are operating independently of each other. The capturing effect seems to be more consistent with the hypothesis that duplex perception occurs in the presence of conflicting cues for the segregation and the integration of the isolated transition with the base.

5.
The present study explores how stimulus variability in speech production influences the 2-month-old infant's perception and memory for speech sounds. Experiment 1 focuses on the consequences of talker variability for the infant's ability to detect differences between speech sounds. When tested with the high-amplitude sucking (HAS) procedure, infants who listened to versions of a syllable, such as [symbol: see text], produced by 6 male and 6 female talkers, detected a change to another syllable, such as [symbol: see text], uttered by the same group of talkers. In fact, infants exposed to multiple talkers performed as well as other infants who heard utterances produced by only a single talker. Moreover, other results showed that infants discriminate the voices of the individual talkers, although discriminating one mixed group of talkers (3 males and 3 females) from another is too difficult for them. Experiment 2 explored the consequences of talker variability on infants' memory for speech sounds. The HAS procedure was modified by introducing a 2-min delay period between the familiarization and test phases of the experiment. Talker variability impeded infants' encoding of speech sounds. Infants who heard versions of the same syllable produced by 12 different talkers did not detect a change to a new syllable produced by the same talkers after the delay period. However, infants who heard the same syllable produced by a single talker were able to detect the phonetic change after the delay. Finally, although infants who heard productions from a single talker retained information about the phonetic structure of the syllable during the delay, they apparently did not retain information about the identity of the talker. Experiment 3 reduced the range of variability across talkers and investigated whether variability interferes with retention of all speech information. Although reducing the range of variability did not lead to retention of phonetic details, infants did recognize a change in the gender of the talkers' voices (from male to female or vice versa) after a 2-min delay. Two additional experiments explored the consequences of limiting the variability to a single talker. In Experiment 4, with an immediate testing procedure, infants exposed to 12 different tokens of one syllable produced by the same talker discriminated these from 12 tokens of another syllable. (ABSTRACT TRUNCATED AT 400 WORDS)

6.
The eye movements of Finnish first and second graders were monitored as they read sentences where polysyllabic words were either hyphenated at syllable boundaries, alternately coloured (every second syllable black, every second red), or had no explicit syllable boundary cues (e.g., hyphenated ta-lo vs. alternately coloured talo vs. plain talo = “house”). The results showed that hyphenation at syllable boundaries slows down the reading of first and second graders even though syllabification by hyphens is very common in Finnish reading instruction, as all first-grade textbooks include hyphens at syllable boundaries. When hyphens were positioned within a syllable (t-alo vs. ta-lo), beginning readers were even more disrupted. Alternate colouring did not affect reading speed, no matter whether colours signalled syllable structure or not. The results show that beginning Finnish readers prefer to process polysyllabic words via syllables rather than letter by letter. At the same time, they imply that hyphenation encourages sequential syllable processing, which slows down the reading of children who are already capable of parallel syllable processing or recognising words directly via the whole-word route.

7.
The metaphonological abilities of illiterate Spanish-speaking people were evaluated. A group of rudimentary readers who had received deficient schooling served as a control group. The subjects were asked to discriminate between pairs of syllables that are minimally different in terms of phonetic features, to evaluate rhyme relations, to judge whether or not a particular phoneme or syllable was present in an utterance, and to delete and reverse phonemes and syllables. The results show that illiterates are quite good at phonetic discrimination. Almost half of them demonstrated an unequivocal ability to appreciate rhyme. However, their performance on the syllable tasks, and especially on the phoneme tasks, was very poor. There was almost no overlap between the scores of illiterates and rudimentary readers in the phonemic tasks. The present study confirms previous indications that phonemic awareness does not develop as a mere consequence of cognitive or linguistic maturation. It extends this claim to languages that, like Spanish, possess only a small set of vowels. On the other hand, the lack of phonemic awareness does not imply any substantial inferiority in phonemic sensitivity, i.e., the ability to discriminate between minimal pairs.

8.
Discrimination of polysyllabic sequences by one- to four-month-old infants
The goal of this research was to ascertain the effects of suprasegmental parameters (fundamental frequency, amplitude, and duration) on discrimination of polysyllabic sequences by 1- to 4-month-old infants. A high-amplitude sucking procedure, with synthesized female speech, was used. Results indicate that young infants can discriminate the three-syllable sequences [marana] versus [malana] when suprasegmental characteristics typical of infant-directed speech emphasize the middle syllable. However, infants failed to demonstrate discrimination when adult-directed suprasegmentals were used and in several other experimental conditions in which prosodic parameters were manipulated. The pattern of results obtained in the six experiments suggests that the exaggerated suprasegmentals of infant-directed speech may function as a perceptual catalyst, facilitating discrimination by focusing the infant's attention on a distinctive syllable within polysyllabic sequences.

9.
Two experiments examined whether visual word access varies cross-linguistically by studying Spanish/English adult bilinguals, priming two-syllable CVCV words both within (Experiment 1) and across (Experiment 2) syllable boundaries in the two languages. Spanish readers accessed more first syllables based on within-syllable primes compared to English readers. In contrast, syllable-based primes helped English readers recognize more words than in Spanish, suggesting that experienced English readers activate a larger unit in the initial stages of word recognition. Primes spanning the syllable boundary affected readers of both languages in similar ways. In this priming context, primes that did not span the syllable boundary helped Spanish readers recognize more syllables, while English readers identified more words, further confirming the importance of the syllable in Spanish and suggesting a larger unit in English. Overall, the experiments provide evidence that readers use different units in accessing words in the two languages.

10.
If a place-of-articulation contrast is created between the auditory and the visual component syllables of videotaped speech, frequently the syllable that listeners report they have heard differs phonetically from the auditory component. These “McGurk effects”, as they have come to be called, show that speech perception may involve some kind of intermodal process. There are two classes of these phenomena: fusions and combinations. Perception of the syllable /da/ when auditory /ba/ and visual /ga/ are presented provides a clear example of the former, and perception of the string /bga/ after presentation of auditory /ga/ and visual /ba/ an unambiguous instance of the latter. Besides perceptual fusions and combinations, hearing visually presented component syllables also shows an influence of vision on audition. It is argued that these “visual” responses arise from basically the same underlying processes that yield fusions and combinations, respectively. In the present study, the visual component of audiovisually incongruous CV-syllables was presented in the left and the right visual hemifield, respectively. Audiovisual fusion responses showed a left hemifield advantage, and audiovisual combination responses a right hemifield advantage. This finding suggests that the process of audiovisual integration differs between audiovisual fusions and combinations and, furthermore, that the two cerebral hemispheres contribute differentially to the two classes of response.

11.
French children program the words they write syllable by syllable. We examined whether the syllable the children use to segment words is determined phonologically (i.e., is derived from speech production processes) or orthographically. 3rd, 4th and 5th graders wrote on a digitiser words that were mono-syllables phonologically (e.g. barque = [baRk]) but bi-syllables orthographically (e.g. barque = bar.que). These words were matched to words that were bi-syllables both phonologically and orthographically (e.g. balcon = [bal.kõ] and bal.con). The results on letter stroke duration and fluency yielded significant peaks at the syllable boundary for both types of words, indicating that the children use orthographic rather than phonological syllables as processing units to program the words they write.

12.
This article reviews the role of the syllable in language production. It first briefly introduces the concept of the syllable; it then presents the differing views of two classes of language production theories on how syllables are stored and how they function; third, it analyses current findings and open problems in syllable research in the field of language production from two angles: the main experimental paradigms and the main research questions. The main paradigms are the masked priming paradigm, the repetition priming paradigm, the implicit priming paradigm, and the picture-word interference paradigm. The main research questions are whether the syllable is a functional unit in speech production, how syllables operate in speech production, and the locus of the syllable priming effect. Finally, based on the characteristics of Mandarin syllables, the article discusses research on the syllable in Chinese word production and future research directions.

13.
Phoneme awareness is critical for literacy acquisition in English, but relatively little is known about the early development of phonological awareness in ESL (English as a second language) bilinguals when their two languages have different phonological structures. Using parallel tasks in English and Mandarin, we tracked the development of L1 (first language) and L2 (second language) syllable and phoneme awareness longitudinally in English-L1 and Mandarin-L1 prereaders (n=70, 4- and 5-year-olds) across three 6-month intervals. In English, the English-L1 children's performance was better in phoneme awareness at all three time points, but the Mandarin-L1 children's syllable awareness was equivalent to the English-L1 children's syllable awareness by Time 3. In Mandarin, the English-L1 children's phoneme awareness, but not their syllable awareness, was also significantly better than that of the Mandarin-L1 children at all three time points. Cross-lagged correlations revealed that only the English-L1 children applied their L1 syllable and phoneme awareness to their L2 (Mandarin) processing by Time 2 and that the Mandarin-L1 children seemed to require exposure to English (L2) before they developed phoneme awareness in either language. The data provide further evidence that phonological awareness is a language-general ability but that cross-language application depends on the similarity between the phonological structures of a child's L1 and L2. Implications for classroom teaching are briefly discussed.

14.
We propose that much of the variance among right-handed subjects in perceptual asymmetries on standard behavioral measures of laterality arises from individual differences in characteristic patterns of asymmetric hemispheric arousal. Dextrals with large right-visual-field (RVF) advantages on a tachistoscopic syllable-identification task (assumed to reflect characteristically higher left-hemisphere than right-hemisphere arousal) outperformed those having weak or no visual-field asymmetries (assumed to reflect characteristically higher right-hemisphere than left-hemisphere arousal). The two groups were equal, however, in asymmetries of error patterns that are thought to indicate linguistic or nonlinguistic encoding strategies. For both groups, relations between visual fields in the ability to discriminate the accuracy of performance followed the pattern of syllable identification itself, suggesting that linguistic and metalinguistic processes are based on the same laterally specialized functions. Subjects with strong RVF advantages had a pessimistic bias for rating performance, and those with weak or no asymmetries had an optimistic bias, particularly for the left visual field (LVF). This is concordant with evidence that the arousal level of the right hemisphere is closely related to affective mood. Finally, consistent with the arousal model, leftward asymmetries on a free-vision face-processing task became larger as RVF advantages on the syllable task diminished and as optimistic biases for the LVF, relative to the RVF, increased.

15.
Recent accounts of the pathomechanism underlying apraxia of speech (AOS) were based on the speech production model of Levelt, Roelofs, and Meyer (1999). The apraxic impairment was localized to the phonetic encoding level, where the model postulates a mental store of motor programs for high-frequency syllables. Varley and Whiteside (2001a) assumed that in patients with AOS syllabic motor programs are no longer accessible and that these patients are required to use a subsyllabic encoding route. In this study, we tested this hypothesis by exploring the influence of syllable frequency and syllable structure on word repetition in 10 patients with AOS. A significant effect of syllable frequency on error rates was found. Moreover, apraxic errors on consonant clusters were influenced by their position relative to syllable boundaries. These results demonstrate that apraxic patients have access to the syllabary, but that they fail to retrieve the syllabic motor patterns correctly. Our findings are incompatible with a subsyllabic route model of apraxia of speech.

16.
Ss heard a passage from Lewis Carroll’s Through the Looking Glass and were asked to indicate, as quickly as possible, whenever they heard a mispronunciation. Mispronunciations were produced by changing one consonant sound in a three-syllable word by one, two, or four distinctive features (e.g., busily to “pizily,” “visily,” or “sizily”). Mispronunciations involving a single feature change were seldom detected, while two and four feature changes were readily detected. The syllable in which a mispronunciation occurred did not affect the probability of detecting a mispronunciation. However, reaction times to mispronounced words were at least a third of a second slower when they occurred in the first syllable of the word. The results were taken to support the notion that words are identified by their distinctive features.

17.
18.
Mirman D, Magnuson JS, Estes KG, Dixon JA. Cognition, 2008, 108(1): 271-280
Many studies have shown that listeners can segment words from running speech based on conditional probabilities of syllable transitions, suggesting that this statistical learning could be a foundational component of language learning. However, few studies have shown a direct link between statistical segmentation and word learning. We examined this possible link in adults by following a statistical segmentation exposure phase with an artificial lexicon learning phase. Participants were able to learn all novel object-label pairings, but pairings were learned faster when labels contained high probability (word-like) or non-occurring syllable transitions from the statistical segmentation phase than when they contained low probability (boundary-straddling) syllable transitions. This suggests that, for adults, labels inconsistent with expectations based on statistical learning are harder to learn than consistent or neutral labels. In contrast, a previous study found that infants learn consistent labels, but not inconsistent or neutral labels.
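The transitional-probability statistic behind this kind of statistical segmentation can be made concrete with a short sketch. The Python snippet below is only an illustration of the general idea, not the materials or analysis of the study above: the syllable stream, the "words" it contains, and the function name are made up, and the estimate is simply P(next | current) = count(current followed by next) / count(current), so within-word transitions come out high and word-boundary transitions come out low.

    from collections import Counter

    def transitional_probabilities(syllables):
        """Estimate P(next | current) for each adjacent syllable pair."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    # Hypothetical familiarization stream built from the made-up "words"
    # tu-pi-ro and go-la-bu: within-word transitions (e.g. tu->pi) recur,
    # while cross-boundary transitions (e.g. ro->go) are rarer.
    stream = ["tu", "pi", "ro", "go", "la", "bu",
              "tu", "pi", "ro", "tu", "pi", "ro",
              "go", "la", "bu", "go", "la", "bu"]

    for pair, p in sorted(transitional_probabilities(stream).items()):
        print(pair, round(p, 2))

On this toy stream, within-word pairs such as ("tu", "pi") come out at 1.0 while boundary pairs such as ("bu", "go") come out around 0.5, which is the kind of contrast the segmentation account assumes listeners exploit.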

19.
Subjects were played sequences of two consonant-vowel syllables which they had to judge as “same” or “different.” The first syllable was always played binaurally and the second came either to the left or to the right ear. The two syllables were always on different pitches. Subjects were faster in judging the syllables “different” when the second syllable came to the right than to the left ear, but there was no ear difference for “same” judgments. This experiment suggests that ear differences can be obtained under monaural stimulation on a task involving a simple phonetic judgment. It also suggests that some process of categorization is necessary in tasks which show left hemisphere superiority for verbal material.

20.
Two experiments were run in order to investigate the relationship between syllable length of number names and eye-fixation durations during silent reading of one- and two-digit numbers. In Experiment 1, subjects had to read a series of three numbers and recall them orally; in Experiment 2, subjects had to indicate manually whether the value of the middle number was between the values of the outer numbers. The effect of syllable length was controlled for possible confounding effects of number frequency and number magnitude. Findings indicated that fixation duration depended on syllable length of number names in the first task, but not in the second task. The results call into question the claim that phonological encoding is imperative in visual processing; phonological encoding was used only when the numbers had to be recalled, and not when they were coded for computational purposes.
