Similar Documents
19 similar documents retrieved (search time: 31 ms)
1.
Infants learn phonotactic regularities from brief auditory experience
Chambers KE, Onishi KH, Fisher C. Cognition, 2003, 87(2): B69-B77.
Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-old infants from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position (e.g. /baep/ not /paeb/). In a later head-turn preference test, infants listened longer to new syllables that violated the experimental phonotactic constraints than to new syllables that honored them. Thus, infants rapidly learned phonotactic regularities from brief auditory experience and extended them to unstudied syllables, documenting the sensitivity of the infant's language processing system to abstractions over linguistic experience.

2.
Onishi KH, Chambers KE, Fisher C. Cognition, 2002, 83(1): B13-B23.
Three experiments asked whether phonotactic regularities not present in English could be acquired by adult English speakers from brief listening experience. Subjects listened to consonant-vowel-consonant (CVC) syllables displaying restrictions on consonant position. Responses in a later speeded repetition task revealed rapid learning of (a) first-order regularities in which consonants were restricted to particular positions (e.g. [baep] not *[paeb]), and (b) second-order regularities in which consonant position depended on the adjacent vowel (e.g. [baep] or [pIb], not *[paeb] or *[bIp]). No evidence of learning was found for second-order regularities in which consonant position depended on the speaker's voice. These results demonstrated that phonotactic constraints are rapidly learned from listening experience and that some types of contingencies (consonant-vowel) are more easily learned than others (consonant-voice).
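For concreteness, the two constraint types can be written as simple membership rules. Below is a minimal Python sketch, not the authors' materials, using ASCII stand-ins for the abstract's own example syllables ("ae" for the vowel of [baep], "I" for the vowel of [pIb]). The two regularities were trained in separate experimental languages; both checkers are shown together here only for contrast.

```python
FIRST_ORDER = {"b": "onset", "p": "coda"}    # [baep] legal, *[paeb] not
SECOND_ORDER = {"ae": "b", "I": "p"}         # vowel -> its licensed onset

def legal_first_order(onset, vowel, coda):
    """True if every restricted consonant occupies its licensed position."""
    for c, pos in FIRST_ORDER.items():
        if (c == onset and pos != "onset") or (c == coda and pos != "coda"):
            return False
    return True

def legal_second_order(onset, vowel, coda):
    """True if the onset is the consonant licensed by the adjacent vowel."""
    licensed = SECOND_ORDER.get(vowel)
    return licensed is None or onset == licensed

for s in [("b", "ae", "p"), ("p", "ae", "b"), ("p", "I", "b"), ("b", "I", "p")]:
    print(s, legal_first_order(*s), legal_second_order(*s))
```

Running this shows the double dissociation in the stimuli: [pIb] violates the first-order language but is legal in the second-order one, while [bIp] shows the reverse pattern.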

3.
Adults can learn new artificial phonotactic constraints by producing syllables that exhibit the constraints. The experiments presented here tested the limits of phonotactic learning in production using speech errors as an implicit measure of learning. Experiment 1 tested a constraint in which the placement of a consonant as an onset or coda depended on the identity of a nonadjacent consonant. Participant speech errors reflected knowledge of the constraint but not until the 2nd day of testing. Experiment 2 tested a constraint in which consonant placement depended on an extralinguistic factor, the speech rate. Participants were not able to learn this constraint. Together, these experiments suggest that phonotactic-like constraints are acquired when mutually constraining elements reside within the phonological system.

4.
In this study, the nature of speech perception in native Mandarin Chinese speakers was compared with that of American English speakers, using synthetic visual and auditory continua (from /ba/ to /da/) in an expanded factorial design. In Experiment 1, speakers identified synthetic unimodal and bimodal speech syllables as either /ba/ or /da/. In Experiment 2, Mandarin speakers were given nine possible response alternatives. Syllable identification was influenced by both visual and auditory sources of information for both Mandarin and English speakers. Performance was better described by the fuzzy logical model of perception than by an auditory dominance model or a weighted-averaging model. Overall, the results are consistent with the idea that although there may be differences in information (which reflect differences in phonemic repertoires, phonetic realizations of the syllables, and the phonotactic constraints of languages), the underlying nature of audiovisual speech processing is similar across languages.
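The fuzzy logical model of perception (FLMP) integrates the two modalities multiplicatively and normalizes over the response alternatives, whereas a weighted-averaging model blends them linearly. The sketch below shows the two-alternative (/ba/ vs. /da/) case with illustrative support values; it is a schematic of the model forms, not the study's fitted parameters.

```python
def flmp_p_da(a_da, v_da):
    """FLMP, two-alternative case: support for /da/ from each modality is
    multiplied, then normalized against the competing /ba/ interpretation
    (the complementary support)."""
    num = a_da * v_da
    return num / (num + (1 - a_da) * (1 - v_da))

def weighted_average_p_da(a_da, v_da, w=0.5):
    """Weighted-averaging model: a linear blend of the two modalities."""
    return w * a_da + (1 - w) * v_da

# Audio weakly favors /da/, video strongly favors /da/:
print(flmp_p_da(0.6, 0.9))              # ~0.93: multiplication amplifies agreement
print(weighted_average_p_da(0.6, 0.9))  # 0.75: averaging stays intermediate
```

The divergence between the two predictions on the same inputs is what lets factorial identification data discriminate the models.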

5.
Redford MA. Cognition, 2008, 107(3): 785-816.
Three experiments addressed the hypothesis that production factors constrain phonotactic learning in adult English speakers, and that this constraint gives rise to a markedness effect on learning. In Experiment 1, an acoustic measure was used to assess consonant-consonant coarticulation in naturally produced nonwords, which were then used as stimuli in a phonotactic learning experiment. Results indicated that sonority-rising sequences were more coarticulated than sonority-plateauing sequences, and that listeners learned novel rising onsets more readily than novel plateauing onsets. Experiments 2 and 3 addressed the specific questions of whether (1) the acoustic correlates of coarticulation or (2) the coarticulatory patterns of self-productions constrained learning. In Experiment 2, stimulus acoustics were altered to control for coarticulatory differences between sequence types, but a clear markedness effect was still observed. In Experiment 3, listeners' self-productions were gathered and used to predict their treatment of novel rising and plateauing sequences. Results were that listeners' coarticulatory patterns predicted their treatment of novel sequences. Overall, the findings suggest that the powerful effects of statistical learning are moderated by the perception-production loop in language.
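Sonority-rising versus sonority-plateauing onsets can be defined over a conventional sonority scale (stops < fricatives < nasals < liquids). The scale and example clusters below are standard textbook illustrations, assumed here for concreteness rather than taken from the paper's stimulus list.

```python
# Conventional sonority scale (an assumption; the paper defines its own stimuli).
SONORITY = {"p": 1, "b": 1, "t": 1, "d": 1, "k": 1, "g": 1,  # stops
            "f": 2, "v": 2, "s": 2, "z": 2,                  # fricatives
            "m": 3, "n": 3,                                  # nasals
            "l": 4, "r": 4}                                  # liquids

def onset_profile(c1, c2):
    """Classify a two-consonant onset by its sonority slope into C2."""
    d = SONORITY[c2] - SONORITY[c1]
    return "rising" if d > 0 else ("plateauing" if d == 0 else "falling")

print(onset_profile("b", "l"))  # rising (a /bl/-type onset)
print(onset_profile("b", "d"))  # plateauing (a /bd/-type onset)
```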

6.
The importance of visual cues in speech perception is illustrated by the McGurk effect, whereby a speaker's facial movements affect speech perception. The goal of the present study was to evaluate whether the McGurk effect is also observed for sung syllables. Participants heard and saw sung instances of the syllables /ba/ and /ga/ and then judged the syllable they perceived. Audio-visual stimuli were congruent or incongruent (e.g., auditory /ba/ presented with visual /ga/). The stimuli were presented as spoken, sung in an ascending and descending triad (C E G G E C), and sung in an ascending and descending triad that returned to a semitone above the tonic (C E G G E C#). Results revealed no differences in the proportion of fusion responses between spoken and sung conditions, confirming that cross-modal phonemic information is integrated similarly in speech and song.

7.
Perceptual changes are experienced during rapid and continuous repetition of a speech form, leading to an auditory illusion known as the verbal transformation effect. Although verbal transformations are considered to reflect mainly the perceptual organization and interpretation of speech, the present study was designed to test whether or not speech production constraints may participate in the emergence of verbal representations. With this goal in mind, we examined whether variations in the articulatory cohesion of repeated nonsense words--specifically, temporal relationships between articulatory events--could lead to perceptual asymmetries in verbal transformations. The first experiment displayed variations in timing relations between two consonantal gestures embedded in various nonsense syllables in a repetitive speech production task. In the second experiment, French participants repeatedly uttered these syllables while searching for verbal transformation. Syllable transformation frequencies followed the temporal clustering between consonantal gestures: The more synchronized the gestures, the more stable and attractive the syllable. In the third experiment, which involved a covert repetition mode, the pattern was maintained without external speech movements. However, when a purely perceptual condition was used in a fourth experiment, the previously observed perceptual asymmetries of verbal transformations disappeared. These experiments demonstrate the existence of an asymmetric bias in the verbal transformation effect linked to articulatory control constraints. The persistence of this effect from an overt to a covert repetition procedure provides evidence that articulatory stability constraints originating from the action system may be involved in auditory imagery. The absence of the asymmetric bias during a purely auditory procedure rules out perceptual mechanisms as a possible explanation of the observed asymmetries.

8.
Comparisons between infant-directed and adult-directed speech were conducted to determine whether word-final syllables are highlighted in infant-directed speech. Samples of adult-directed and infant-directed speech were collected from 8 mothers of 6-month-old and 8 mothers of 9-month-old infants. Mothers were asked to label seven objects both to an experimenter and to their infant. Duration, pitch, and amplitude were measured for whole words and for each of the target word syllables. As in prior research, the infant-directed targets were higher pitched and longer than adult-directed targets. The results also extend previous findings in showing that lengthening of final syllables in infant-directed speech is particularly exaggerated. Results of analyses comparing word-final versus nonfinal unstressed syllables in utterance-medial position in infant-directed speech showed that lengthening of unstressed word-final syllables occurs even in utterance-internal positions. These results could suggest a mechanism for proposals that word-final syllables are perceptually salient to young children.

9.
The influence of diet on cortical processing of syllables was examined at 3 and 6 months in 239 infants who were breastfed (BF) or fed milk-based or soy-based formula (SF). Event-related potentials to syllables differing in voice-onset-time were recorded from placements overlying brain areas specialized for language processing. P1 component amplitude and latency measures indicated that at both ages infants in all groups could extract and discriminate categorical information from syllables. Between-syllable amplitude differences, present across groups, were generally greater for SF infants. Responses peaked earlier over left hemisphere speech-perception than speech-production areas. Encoding was faster in BF than formula-fed infants. The results show that in preverbal infants: (1) discrimination of phonetic information occurs in early stages of cortical processing; (2) areas overlying brain regions of speech perception are activated earlier than those involved in speech production; and (3) these processes are differentially modulated by infant diet and environmental factors.

10.
In the McGurk effect, perception of audiovisually discrepant syllables can depend on auditory, visual, or a combination of audiovisual information. Under some conditions, visual information can override auditory information to the extent that identification judgments of a visually influenced syllable can be as consistent as for an analogous audiovisually compatible syllable. This might indicate that visually influenced and analogous audiovisually compatible syllables are phonetically equivalent. Experiments were designed to test this issue using a compelling visually influenced syllable in an AXB matching paradigm. Subjects were asked to match an audio syllable /va/ either to an audiovisually consistent syllable (audio /va/-video /fa/) or an audiovisually discrepant syllable (audio /ba/-video /fa/). It was hypothesized that if the two audiovisual syllables were phonetically equivalent, then subjects should choose them equally often in the matching task. Results show, however, that subjects are more likely to match the audio /va/ to the audiovisually consistent /va/, suggesting differences in phonetic convincingness. Additional experiments further suggest that this preference is not based on a phonetically extraneous dimension or on noticeable relative audiovisual discrepancies.
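The equal-choice hypothesis corresponds to a 50% binomial baseline in the AXB task. A stdlib sketch of an exact two-sided test appears below; the trial counts are hypothetical placeholders, not the paper's data.

```python
from math import comb

def exact_binomial_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of all outcomes
    no more likely than the observed count k under the chance model."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

# Hypothetical counts: 70 of 96 matches went to the audiovisually
# consistent /va/ rather than the discrepant syllable.
print(exact_binomial_two_sided(70, 96))  # far below .05: preference is reliable
```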

11.
We examined whether the orientation of the face influences speech perception in face-to-face communication. Participants identified auditory syllables, visible syllables, and bimodal syllables presented in an expanded factorial design. The syllables were /ba/, /va/, /ða/, or /da/. The auditory syllables were taken from natural speech whereas the visible syllables were produced by computer animation of a realistic talking face. The animated face was presented either as viewed in normal upright orientation or inverted orientation (180° frontal rotation). The central intent of the study was to determine if an inverted view of the face would change the nature of processing bimodal speech or simply influence the information available in visible speech. The results with both the upright and inverted face views were adequately described by the fuzzy logical model of perception (FLMP). The observed differences in the FLMP's parameter values corresponding to the visual information indicate that inverting the view of the face influences the amount of visible information but does not change the nature of the information processing in bimodal speech perception.
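In an expanded factorial design, every auditory level is crossed with every visual level, and each modality is also presented alone. A sketch of the resulting condition list for these four syllables follows; the ASCII labels are illustrative stand-ins for the phonetic symbols.

```python
from itertools import product

SYLLABLES = ["ba", "va", "dha", "da"]  # stand-ins for /ba/, /va/, /ða/, /da/

# All 4 x 4 audio-video pairings, plus each modality presented alone
# (None marks the absent modality).
conditions = ([(a, v) for a, v in product(SYLLABLES, SYLLABLES)]
              + [(a, None) for a in SYLLABLES]
              + [(None, v) for v in SYLLABLES])

print(len(conditions))  # 16 bimodal + 4 auditory-only + 4 visual-only = 24
```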

12.
Zebra finch (Taeniopygia guttata) song is composed of syllables delivered in a set order. Little is known about the program that controls this temporal delivery. A decision to sing or not to sing may or may not affect the entire song. Song, once commenced, may continue or may halt. If song is halted, stops may occur only at certain points. Seven zebra finches were presented with short bursts of strobe light while engaged in song. The variables of interest were whether the birds stopped and where they stopped. The results can be summarized as follows: Ongoing zebra finch song can be interrupted, interruptions occur at discrete locations in song, and the locations almost always fall between song syllables. These results reveal a functional representation of song production and place constraints on possible neural mechanisms that underlie song production in zebra finches and probably other oscine species. The results also raise hypotheses about the elements of song perception and memory.

13.
Some reaction time experiments are reported on the relation between the perception and production of phonetic features in speech. Subjects had to produce spoken consonant-vowel syllables rapidly in response to other consonant-vowel stimulus syllables. The stimulus syllables were presented auditorily in one condition and visually in another. Reaction time was measured as a function of the phonetic features shared by the consonants of the stimulus and response syllables. Responses to auditory stimulus syllables were faster when the response syllables started with consonants that had the same voicing feature as those of the stimulus syllables. A shared place-of-articulation feature did not affect the speed of responses to auditory stimulus syllables, even though the place feature was highly salient. For visual stimulus syllables, performance was independent of whether the consonants of the response syllables had the same voicing, same place of articulation, or no shared features. This pattern of results occurred in cases where the syllables contained stop consonants and where they contained fricatives. It held for natural auditory stimuli as well as artificially synthesized ones. The overall data reveal a close relation between the perception and production of voicing features in speech. It does not appear that such a relation exists between perceiving and producing places of articulation. The experiments are relevant to the motor theory of speech perception and to other models of perceptual-motor interactions.

14.
Using an auditory-preference procedure we found that 3-month-olds listened significantly longer to alliterative CVCs than to non-alliterative CVCs. This finding demonstrates that 3-month-olds are sensitive to syllable onsets and is discussed in relation to early speech perception and similar results found with 9-month-olds [Jusczyk, P. W., Goodman, M. B., & Baumann, A. (1999). Nine-month-olds' attention to sound similarities in syllables. Journal of Memory & Language, 40, 62-82].

15.
The present experiment investigated the hypothesis that age-related declines in cognitive functioning are partly due to a decrease in peripheral sensory functioning. In particular, it was suggested that some of the decline in serial recall for verbal material might be due to even small amounts of degradation due to noise or hearing loss. Older and younger individuals identified and recalled nonsense syllables in order at a number of different speech-to-noise ratios. Performance on the identification task was significantly correlated with performance on a subsequent serial recall task. However, this was restricted to the case in which the stimuli were presented in a substantial amount of noise. These data show that even small changes in sensory processing can lead to real and measurable declines in cognitive functioning as measured by a serial recall task.

16.
Phonotactic probability, neighborhood density, and onset density were manipulated in 4 picture-naming tasks. Experiment 1 showed that pictures of words with high phonotactic probability were named more quickly than pictures of words with low phonotactic probability. This effect was consistent over multiple presentations of the pictures (Experiment 2). Manipulations of phonotactic probability and neighborhood density showed only an influence of phonotactic probability (Experiment 3). In Experiment 4, pictures of words with sparse onsets were named more quickly than pictures of words with dense onsets. The results of these experiments provide additional constraints on the architecture and processes involved in models of speech production, as well as constraints on the connections between the recognition and production systems.
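The abstract does not spell out how phonotactic probability was computed; one common operationalization, positional segment frequency in the spirit of Vitevitch and Luce, can be sketched as below over a toy lexicon. The lexicon and segment names are illustrative assumptions.

```python
from collections import Counter

def positional_probs(lexicon):
    """Relative frequency of each (position, segment) pair in a lexicon
    of words represented as tuples of segments."""
    counts, totals = Counter(), Counter()
    for word in lexicon:
        for pos, seg in enumerate(word):
            counts[(pos, seg)] += 1
            totals[pos] += 1
    return {key: n / totals[key[0]] for key, n in counts.items()}

def phonotactic_probability(word, probs):
    """Mean positional segment probability (unseen segments count as 0)."""
    return sum(probs.get((i, s), 0.0) for i, s in enumerate(word)) / len(word)

toy_lexicon = [("k", "ae", "t"), ("k", "ae", "p"), ("m", "ae", "t"), ("d", "o", "g")]
probs = positional_probs(toy_lexicon)
print(phonotactic_probability(("k", "ae", "t"), probs))  # high: frequent segments
print(phonotactic_probability(("d", "o", "p"), probs))   # lower: rarer segments
```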

17.
French children program the words they write syllable by syllable. We examined whether the syllable the children use to segment words is determined phonologically (i.e., is derived from speech production processes) or orthographically. Third, fourth, and fifth graders wrote on a digitiser words that were monosyllabic phonologically (e.g. barque = [baRk]) but bisyllabic orthographically (e.g. barque = bar.que). These words were matched to words that were bisyllabic both phonologically and orthographically (e.g. balcon = [bal.kõ] and bal.con). The results on letter stroke duration and fluency yielded significant peaks at the syllable boundary for both types of words, indicating that the children use orthographic rather than phonological syllables as processing units to program the words they write.

18.
Timbre discrimination in zebra finch (Taeniopygia guttata) song syllables
Zebra finch (Taeniopygia guttata) songs include syllables of a fundamental frequency and harmonics. Individual harmonics in 1 syllable can be more or less emphasized. The functional role of this variability is unknown. These experiments provide evidence of how the phenomenon is perceived. We trained 12 male and female zebra finches on a go-no-go operant procedure to discriminate between 2 song syllables that varied only in the absence of the 2nd or 5th harmonic. Training involved many thousands of trials. Both sexes used the presence or absence of the 2nd harmonic as the sole discriminative cue. Females had more difficulty learning to perform the task when the presence of the 2nd harmonic was the go stimulus, which indicates that their use of the information was biased by stimulus-response contingencies. The results are discussed in terms of a broad strategy to understand how animals perceive sounds used in communication.
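The stimulus manipulation, a harmonic complex with one harmonic silenced, is easy to sketch in a few lines of NumPy. The fundamental frequency, duration, and harmonic count below are illustrative placeholders, not the study's actual stimulus settings.

```python
import numpy as np

def harmonic_complex(f0=600.0, n_harmonics=8, drop=None, dur=0.2, sr=44100):
    """Sum of sinusoidal harmonics of f0; silence harmonic number `drop`
    (e.g. drop=2 removes the 2nd harmonic) to mimic the missing-harmonic
    manipulation described above."""
    t = np.arange(int(dur * sr)) / sr
    wave = sum(np.sin(2 * np.pi * k * f0 * t)
               for k in range(1, n_harmonics + 1) if k != drop)
    return wave / np.max(np.abs(wave))  # normalize to +/- 1

full_stack = harmonic_complex()        # all harmonics present
no_second = harmonic_complex(drop=2)   # 2nd harmonic silenced
```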

19.
Three experiments on short-term serial memory for spoken syllables are reported. The stimuli were CVC (consonant-vowel-consonant) syllables in Experiment 1, CCVs in Experiment 2, and VCCs in Experiment 3. Analyses of subjects' errors showed that the phonemes within a syllable were not equally free to break apart and recombine. Certain groups of phonemes (the vowel-final consonant group of a CVC, the initial cluster of a CCV, and a vowel-liquid group within a VCC) tended to behave as units. These results are consistent with the view that syllables are coded in terms of an onset (initial consonant or cluster) and a rime (remainder). Errors in short-term memory for spoken syllables are affected by the linguistic structure of the syllables.
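The onset/rime coding the authors invoke can be stated as a one-pass split: the onset is everything before the first vowel and the rime is the remainder. A toy sketch with single-letter segments and a simplified vowel set (both assumptions for illustration):

```python
VOWELS = set("aeiou")  # toy single-letter inventory, not real phoneme symbols

def onset_rime(syllable):
    """Split at the first vowel: onset = initial consonant(s), rime = rest."""
    for i, seg in enumerate(syllable):
        if seg in VOWELS:
            return syllable[:i], syllable[i:]
    return syllable, ""

print(onset_rime("kat"))  # CVC: ('k', 'at'); the rime 'at' recombines as a unit
print(onset_rime("pla"))  # CCV: ('pl', 'a'); the initial cluster is the unit
print(onset_rime("alp"))  # VCC: ('', 'alp'); the vowel-liquid 'al' sits inside the rime
```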
