Similar Documents
 20 similar documents found
1.
The pitch patterns present in speech addressed to infants may play an important role in perceptual processing by infants. In this study, the high-amplitude sucking procedure was used to assess discrimination by 2- to 3-month-old infants of rising versus falling pitch patterns in 400-msec synthetic [ra] and [la] tokens. The syllables’ intonation contour was modeled on infant-directed speech, and covered a range characteristic of an adult female speaker (180–300 Hz). Group data indicated that the 2- to 3-month-old infants discriminated the pitch contour for both stimuli. Results are discussed with reference to previous studies of syllabic pitch perception.

2.
Prosodic cues drive speech segmentation and guide syllable discrimination. However, less is known about the attentional mechanisms underlying an infant's ability to benefit from prosodic cues. This study investigated how 6- to 8-month-old Italian infants allocate their attention to strong vs. weak syllables after familiarization with four repeats of a single CV sequence with alternating strong and weak syllables (different syllables on each trial). In the discrimination test-phase, either the strong or the weak syllable was replaced by a pure tone matching the suprasegmental characteristics of the segmental syllable, i.e., duration, loudness and pitch, whereas the familiarized stimulus was presented as a control. By using an eye-tracker, attention deployment (fixation times) and cognitive resource allocation (pupil dilation) were measured under conditions of high and low saliency that corresponded to the strong and weak syllabic changes, respectively. Italian-learning infants were found to look longer and also to show, through pupil dilation, more attention to strong-syllable replacements than to weak-syllable replacements, compared to the control condition. These data offer insights into the strategies used by infants to deploy their attention towards segmental units guided by salient prosodic cues, like the stress pattern of syllables, during speech segmentation.

3.
The present study investigated burst cue discrimination in 3- to 4-month-old infants with the natural speech stimuli [bu] and [gu]. The experimental stimuli consisted of either a [bu] or a [gu] burst attached to the formants of the [bu], such that the sole difference between the two stimuli was the initial burst cue. Infants were tested using a cardiac orienting response (OR) paradigm which consisted of 20 tokens of one stimulus (e.g. [bu]) followed by 20 tokens of the second syllable (20/20 paradigm). An OR to the stimulus change revealed that young infants can discriminate burst cue differences in speech stimuli. Discussion of the results focused on asymmetries observed in the data and the relationship of these findings to our previous failure to demonstrate burst discrimination using the habituation/dishabituation cardiac measure generally employed with older infants.

4.
Previous research suggests that infant speech perception reorganizes in the first year: young infants discriminate both native and non‐native phonetic contrasts, but by 10–12 months difficult non‐native contrasts are less discriminable whereas performance improves on native contrasts. In the current study, four experiments tested the hypothesis that, in addition to the influence of native language experience, acoustic salience also affects the perceptual reorganization that takes place in infancy. Using a visual habituation paradigm, two nasal place distinctions that differ in relative acoustic salience, acoustically robust labial‐alveolar [ma]–[na] and acoustically less salient alveolar‐velar [na]–[ŋa], were presented to infants in a cross‐language design. English‐learning infants at 6–8 and 10–12 months showed discrimination of the native and acoustically robust [ma]–[na] (Experiment 1), but not the non‐native (in initial position) and acoustically less salient [na]–[ŋa] (Experiment 2). Very young (4–5‐month‐old) English‐learning infants tested on the same native and non‐native contrasts also showed discrimination of only the [ma]–[na] distinction (Experiment 3). Filipino‐learning infants, whose ambient language includes the syllable‐initial alveolar (/n/)–velar (/ŋ/) contrast, showed discrimination of native [na]–[ŋa] at 10–12 months, but not at 6–8 months (Experiment 4). These results support the hypothesis that acoustic salience affects speech perception in infancy, with native language experience facilitating discrimination of an acoustically similar phonetic distinction [na]–[ŋa]. We discuss the implications of this developmental profile for a comprehensive theory of speech perception in infancy.

5.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

6.
Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., ˈca-vi from cavia “guinea pig” vs. ˌka-vi from kaviaar “caviar”). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-ˈjec from projector “projector” vs. ˌpro-jec from projectiel “projectile”), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

7.
We investigate the hypothesis that infant-directed speech is a form of hyperspeech, optimized for intelligibility, by focusing on vowel devoicing in Japanese. Using a corpus of infant-directed and adult-directed Japanese, we show that speakers implement high vowel devoicing less often when speaking to infants than when speaking to adults, consistent with the hyperspeech hypothesis. The same speakers, however, increase vowel devoicing in careful, read speech, a speech style which might be expected to pattern similarly to infant-directed speech. We argue that both infant-directed and read speech can be considered listener-oriented speech styles—each is optimized for the specific needs of its intended listener. We further show that in non-high vowels, this trend is reversed: speakers devoice more often in infant-directed speech and less often in read speech, suggesting that devoicing in the two types of vowels is driven by separate mechanisms in Japanese.

8.
Using Mandarin Chinese, a "tone language" in which the pitch contours of syllables differentiate words, the authors examined the acoustic modifications of infant-directed speech (IDS) at the syllable level to test 2 hypotheses: (a) the overall increase in pitch and intonation contour that occurs in IDS at the phrase level would not distort lexical pitch at the syllable level and (b) IDS provides exaggerated cues to lexical tones. Sixteen Mandarin-speaking mothers were recorded while addressing their infants and addressing an adult. The results indicate that IDS does not distort the acoustic cues that are essential to word meaning at the syllable level; evidence of exaggeration of the acoustic differences in IDS was observed, extending previous findings of phonetic exaggeration to the lexical level.

9.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants’ audio–visual matching ability for German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit matching performance for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.

10.
This study explored a mother's instinctive accommodation of the vocal fundamental frequency (f0) of infant-directed speech to two different infants. Maternal speech directed to individual 3-mo.-old fraternal twin infants was subjected to acoustic analysis. Natural samples of infant-directed speech were recorded at home. The two infants differed in the rate of their vocal responses. The mother changed her f0 and her intonation contours depending on which infant she addressed. When she spoke to the infant whose vocal responses were less frequent, she used a higher mean f0 and rising intonation contours more often than when she spoke to the other infant. The results suggest that a mother's speech characteristics are not fixed and that she may use a higher f0 and rising contours as a strategy to elicit vocal responses from a less vocally responsive infant.

11.
Generally, infants prefer infant-directed speech to adult-directed speech. This study investigated which acoustic features of maternal infant-directed speech effectively elicit vocal responses from 3-mo.-old infants. The participants were 40 Japanese mother–infant dyads. Vocal f0 was extracted from the mothers' speech and the infants' vocalizations using the Computerized Speech Laboratory (CSL4300) and custom software. The acoustic features measured were mean fundamental frequency (f0) and f0 contour. The rate of infant vocal response was significantly higher when the maternal infant-directed speech ended with a falling contour rather than a rising or flat contour. There was no significant difference in mean f0 between maternal infant-directed speech that was and was not followed by an infant vocal response. This suggests that a falling terminal contour in maternal infant-directed speech serves to elicit the 3-mo.-old infant's vocal response.
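The two acoustic measures described above, mean f0 and the terminal f0 contour (falling vs. rising vs. flat), can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the study's analysis pipeline: it assumes f0 values (in Hz) have already been extracted by a pitch tracker, and the function names, the 30% terminal window, and the 2% flatness threshold are hypothetical choices for the example.

```python
# Minimal sketch: mean f0 and terminal contour classification for one
# utterance. Assumes `f0_hz` is a list of f0 values (Hz) with unvoiced
# frames already removed; the window size and flatness threshold are
# arbitrary example values, not those used in the study.

from statistics import mean

def mean_f0(f0_hz):
    """Mean fundamental frequency of the utterance, in Hz."""
    return mean(f0_hz)

def terminal_contour(f0_hz, tail_fraction=0.3, flat_ratio=0.02):
    """Label the end of the utterance as 'rising', 'falling', or 'flat'.

    Compares mean f0 in the first vs. second half of the terminal region;
    relative changes smaller than `flat_ratio` count as flat.
    """
    n_tail = max(2, int(len(f0_hz) * tail_fraction))
    tail = f0_hz[-n_tail:]
    half = len(tail) // 2
    start, end = mean(tail[:half]), mean(tail[half:])
    change = (end - start) / start
    if change > flat_ratio:
        return "rising"
    if change < -flat_ratio:
        return "falling"
    return "flat"

# Example: a schematic utterance whose f0 drops toward the end
f0_track = [260, 255, 250, 245, 230, 210, 195, 185]
print(mean_f0(f0_track), terminal_contour(f0_track))  # -> 228.75 falling
```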

12.
This study explored the temporal contingencies between infant and adult vocalizations as a function of the type of infant vocalization, whether adult caregivers’ vocalizations were infant-directed or other-directed, and the timescale of analysis. We analyzed excerpts taken from day-long home audio recordings that were collected from nineteen 12- to 13-month-old American infants and their caregivers using the LENA system. Three 5-minute sections having high child vocalization rates were identified within each recording and coded by trained researchers. Infant and adult vocalizations were sequenced and defined as contingent if they occurred within 1 s, 2 s, or 5 s of each other. When using 1 s or 2 s definitions of temporal adjacency, infant vocalizations generally predicted subsequent infant-directed adult vocalizations. A reflexive vocalization (i.e. a cry or a laugh) was the strongest predictor. Likewise, within 1–2 s timeframes, infant-directed adult speech generally predicted infant vocalizations, with reflexive vocalizations being particularly predictive. Infant vocalizations predicted fewer subsequent other-directed adult vocalizations and were less likely following other-directed adult vocalizations when considering up to 5 s lags. This suggests an understudied communicative role of non-infant-directed adult speech for infants. These results demonstrate the importance of timescale in studying infant-adult interactions, support the communicative significance of reflexive infant vocalizations and other-directed adult speech in addition to more commonly studied vocalization types, and highlight the challenges of determining direction(s) of influence when using only two-event sequences.
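The contingency criterion used in this study (two vocalizations count as temporally adjacent if they fall within 1 s, 2 s, or 5 s of each other) lends itself to a small worked sketch. The event representation, function name, and example timestamps below are assumptions for illustration only, not the study's coding scheme.

```python
# Minimal sketch of window-based temporal contingency: an adult vocalization
# counts as contingent on an infant vocalization if the adult onset falls
# within `window_s` seconds after the infant offset. Event tuples and the
# example timestamps are illustrative, not the study's coded data.

def contingent_pairs(infant_events, adult_events, window_s):
    """Return (infant, adult) event pairs where the adult onset follows
    the infant offset by no more than window_s seconds."""
    pairs = []
    for infant in infant_events:
        _, infant_offset = infant
        for adult in adult_events:
            adult_onset, _ = adult
            if 0 <= adult_onset - infant_offset <= window_s:
                pairs.append((infant, adult))
    return pairs

# (onset, offset) times in seconds within one coded 5-minute section
infant_vocs = [(10.0, 10.8), (42.5, 43.0)]
adult_id_vocs = [(11.2, 12.0), (47.2, 48.0)]  # infant-directed adult vocalizations

for window in (1, 2, 5):
    n = len(contingent_pairs(infant_vocs, adult_id_vocs, window))
    print(f"{window} s window: {n} contingent pair(s)")
# -> 1 s window: 1 contingent pair(s)
# -> 2 s window: 1 contingent pair(s)
# -> 5 s window: 2 contingent pair(s)
```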

13.
Comparisons between infant-directed and adult-directed speech were conducted to determine whether word-final syllables are highlighted in infant-directed speech. Samples of adult-directed and infant-directed speech were collected from 8 mothers of 6-month-old and 8 mothers of 9-month-old infants. Mothers were asked to label seven objects both to an experimenter and to their infant. Duration, pitch, and amplitude were measured for whole words and for each of the target word syllables. As in prior research, the infant-directed targets were higher pitched and longer than adult-directed targets. The results also extend beyond previous findings in showing that lengthening of final syllables in infant-directed speech is particularly exaggerated. Results of analyses comparing word-final versus nonfinal unstressed syllables in utterance-medial position in infant-directed speech showed that lengthening of unstressed word-final syllables occurs even in utterance-internal positions. These results could suggest a mechanism for proposals that word-final syllables are perceptually salient to young children.

14.
The present study explores how stimulus variability in speech production influences the 2-month-old infant's perception and memory for speech sounds. Experiment 1 focuses on the consequences of talker variability for the infant's ability to detect differences between speech sounds. When tested with the high-amplitude sucking (HAS) procedure, infants who listened to versions of a syllable, such as [symbol: see text], produced by 6 male and 6 female talkers, detected a change to another syllable, such as [symbol: see text], uttered by the same group of talkers. In fact, infants exposed to multiple talkers performed as well as other infants who heard utterances produced by only a single talker. Moreover, other results showed that infants discriminate the voices of the individual talkers, although discriminating one mixed group of talkers (3 males and 3 females) from another is too difficult for them. Experiment 2 explored the consequences of talker variability on infants' memory for speech sounds. The HAS procedure was modified by introducing a 2-min delay period between the familiarization and test phases of the experiment. Talker variability impeded infants' encoding of speech sounds. Infants who heard versions of the same syllable produced by 12 different talkers did not detect a change to a new syllable produced by the same talkers after the delay period. However, infants who heard the same syllable produced by a single talker were able to detect the phonetic change after the delay. Finally, although infants who heard productions from a single talker retained information about the phonetic structure of the syllable during the delay, they apparently did not retain information about the identity of the talker. Experiment 3 reduced the range of variability across talkers and investigated whether variability interferes with retention of all speech information. Although reducing the range of variability did not lead to retention of phonetic details, infants did recognize a change in the gender of the talkers' voices (from male to female or vice versa) after a 2-min delay. Two additional experiments explored the consequences of limiting the variability to a single talker. In Experiment 4, with an immediate testing procedure, infants exposed to 12 different tokens of one syllable produced by the same talker discriminated these from 12 tokens of another syllable. (ABSTRACT TRUNCATED AT 400 WORDS)

15.
This paper shows that maximal rate of speech varies as a function of syllable structure. For example, CCV syllables such as [sku] and CVC syllables such as [kus] are produced faster than VCC syllables such as [usk] when subjects repeat these syllables as fast as possible. Spectrographic analyses indicated that this difference in syllable duration was not confined to any one portion of the syllables: the vowel, the consonants and even the interval between syllable repetitions were longer for VCC syllables than for CVC and CCV syllables. These and other findings could not be explained in terms of word frequency, transition frequency of adjacent phonemes, or coarticulation between segments. Moreover, number of phonemes was a poor predictor of maximal rate for a wide variety of syllable structures, since VCC structures such as [ulk] were produced slower than phonemically longer CCCV structures such as [sklu], and V structures such as [a] were produced no faster than phonemically longer CV structures such as [ga]. These findings could not be explained by traditional models of speech production or articulatory difficulty but supported a complexity metric derived from a recently proposed theory of the serial production of syllables. This theory was also shown to be consistent with the special status of CV syllables suggested by Jakobson as well as certain aspects of speech errors, tongue-twisters and word games such as Double Dutch.

16.
Computation of Conditional Probability Statistics by 8-Month-Old Infants (total citations: 3; self-citations: 0; citations by others: 3)
A recent report demonstrated that 8-month-olds can segment a continuous stream of speech syllables, containing no acoustic or prosodic cues to word boundaries, into wordlike units after only 2 min of listening experience (Saffran, Aslin, & Newport, 1996). Thus, a powerful learning mechanism capable of extracting statistical information from fluent speech is available early in development. The present study extends these results by documenting the particular type of statistical computation–transitional (conditional) probability–used by infants to solve this word-segmentation task. An artificial language corpus, consisting of a continuous stream of trisyllabic nonsense words, was presented to 8-month-olds for 3 min. A postfamiliarization test compared the infants' responses to words versus part-words (trisyllabic sequences spanning word boundaries). The corpus was constructed so that test words and part-words were matched in frequency, but differed in their transitional probabilities. Infants showed reliable discrimination of words from part-words, thereby demonstrating rapid segmentation of continuous speech into words on the basis of transitional probabilities of syllable pairs.
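The statistic documented here, the transitional (conditional) probability between adjacent syllables, is P(Y | X) = frequency(XY) / frequency(X). The sketch below computes it over a toy continuous syllable stream; the nonsense words and stream length are invented for illustration and are not the stimuli used by Saffran, Aslin, and Newport (1996).

```python
# Minimal sketch: transitional probability P(next | current) over a
# continuous syllable stream. The toy trisyllabic nonsense words below
# are invented for illustration only.

from collections import Counter
import random

WORDS = ["rokibu", "tifemo", "galopa"]  # hypothetical trisyllabic words

def syllabify(word):
    """Split a six-character CV-CV-CV nonsense word into its three syllables."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Build a continuous stream of 300 randomly ordered words (900 syllables).
random.seed(0)
stream = [syll for word in random.choices(WORDS, k=300) for syll in syllabify(word)]

pair_counts = Counter(zip(stream, stream[1:]))  # counts of adjacent syllable pairs
syll_counts = Counter(stream[:-1])              # counts of syllables with a successor

def transitional_prob(x, y):
    """P(y immediately follows x) = freq(x, y) / freq(x)."""
    return pair_counts[(x, y)] / syll_counts[x]

# Within-word transitions are 1.0; transitions spanning a word boundary
# hover around 1/3 here, which is the cue to word segmentation.
print(transitional_prob("ro", "ki"))  # within the word "rokibu": 1.0
print(transitional_prob("bu", "ti"))  # across a word boundary: roughly 1/3
```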

17.
Deviation of real speech from grammatical ideals due to disfluency and other speech errors presents potentially serious problems for the language learner. While infants may initially benefit from attending primarily or solely to infant-directed speech, which contains few grammatical errors, older infants may listen more to adult-directed speech. In a first experiment, post-verbal infants preferred fluent speech to disfluent speech, while pre-verbal infants showed no preference. In a second experiment, post-verbal infants discriminated disfluent and fluent speech even when lexical information was removed, showing that they make use of prosodic properties of the speech stream to detect disfluency. Because disfluencies are highly correlated with grammatical errors, this sensitivity provides infants with a means of filtering ungrammaticality from their input.

18.
We investigated the conditions under which the [b]-[w] contrast is processed in a context-dependent manner, specifically in relation to syllable duration. In an earlier paper, Miller and Liberman (1979) demonstrated that when listeners use transition duration to differentiate [b] from [w], they treat it in relation to the duration of the syllable: As syllables from a [ba]-[wa] series varying in transition duration become longer, so, too, does the transition duration at the [b]-[w] perceptual boundary. In a subsequent paper, Shinn, Blumstein, and Jongman (1985) questioned the generality of this finding by showing that the effect of syllable duration is eliminated for [ba]-[wa] stimuli that are less schematic than those used by Miller and Liberman. In the present investigation, we demonstrated that when these “more natural” stimuli are presented in multitalker babble noise instead of in quiet (as was done by Shinn et al.), the syllable-duration effect emerges. Our findings suggest that the syllable-duration effect in particular, and context effects in general, may play a more important role in speech perception than Shinn et al. suggested.

19.
We tested 4–6‐ and 10–12‐month‐old infants to investigate whether the often‐reported decline in infant sensitivity to other‐race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of either dynamic own‐race or other‐race faces which were either accompanied by a speech syllable, no sound, or a non‐speech sound. Results indicated that 4–6‐ and 10–12‐month‐old infants discriminated own‐race as well as other‐race faces accompanied by a speech syllable, that only the 10–12‐month‐olds discriminated silent own‐race faces, and that 4–6‐month‐old infants discriminated own‐race and other‐race faces accompanied by a non‐speech sound but that 10–12‐month‐old infants only discriminated own‐race faces accompanied by a non‐speech sound. Overall, the results suggest that the other‐race effect (ORE) reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.

20.
In previous research, Saffran and colleagues [Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928; Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606-621] have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. They also showed that a similar learning mechanism operates with musical stimuli [Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70, 27-52]. In this work we combined linguistic and musical information and compared language learning based on speech sequences to language learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. Results confirmed the hypothesis, showing a strong learning facilitation for song compared to speech. Most importantly, the present results show that learning a new language, especially in the first learning phase wherein one needs to segment new words, may benefit greatly from the motivational and structuring properties of music in song.
