Similar Articles
20 similar articles found.
1.
Final-syllable invariance is characteristic of diminutives (e.g., doggie), which are a pervasive feature of the child-directed speech registers of many languages. Invariance in word endings has been shown to facilitate word segmentation (Kempe, Brooks, & Gillis, 2005) in an incidental-learning paradigm in which synthesized Dutch pseudonouns were used. To broaden the cross-linguistic evidence for this invariance effect and to increase its ecological validity, adult English speakers (n=276) were exposed to naturally spoken Dutch or Russian pseudonouns presented in sentence contexts. A forced choice test was given to assess target recognition, with foils comprising unfamiliar syllable combinations in Experiments 1 and 2 and syllable combinations straddling word boundaries in Experiment 3. A control group (n=210) received the recognition test with no prior exposure to targets. Recognition performance improved with increasing final-syllable rhyme invariance, with larger increases for the experimental group. This confirms that word ending invariance is a valid segmentation cue in artificial, as well as naturalistic, speech and that diminutives may aid segmentation in a number of languages.

2.
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond. Series B, Biol Sci 364 (1536), 3617–3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109 (9), 3253–3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1–23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co‐occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near‐infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.

3.
A series of four experiments was conducted to determine whether English-learning infants can use allophonic cues to word boundaries to segment words from fluent speech. Infants were familiarized with a pair of two-syllable items, such as nitrates and night rates and then were tested on their ability to detect these same words in fluent speech passages. The presence of allophonic cues to word boundaries did not help 9-month-olds to distinguish one of the familiarized words from an acoustically similar foil. Infants familiarized with nitrates were just as likely to listen to a passage about night rates as they were to listen to one about nitrates. Nevertheless, when the passages contained distributional cues that favored the extraction of the familiarized targets, 9-month-olds were able to segment these items from fluent speech. By the age of 10.5 months, infants were able to rely solely on allophonic cues to locate the familiarized target words in passages. We consider what implications these findings have for understanding how word segmentation skills develop.

4.
The goal of the study was to examine whether the ‘noun-bias’ phenomenon, which exists in the lexicon of Hebrew-speaking children, also exists in Hebrew child-directed speech (CDS) as well as in Hebrew adult-directed speech (ADS). In addition, we aimed to describe the use of the different classes of content words in the speech of Hebrew-speaking parents to their children at different ages compared to the speech of parents to adults (ADS). Thirty infants (age range 8.5–33 months) were divided into three stages according to age: pre-lexical, single-word, and early grammar. The ADS corpus included 18 Hebrew-speaking parents of children at the same three stages of language development as in the CDS corpus. The CDS corpus was collected from parent–child dyads during naturalistic activities at home: mealtime, bathing, and play. The ADS corpus was collected from parent–experimenter interactions in which the parent watched a video and was then interviewed by the experimenter. 200 utterances of each sample were transcribed, coded for types and tokens, and analyzed quantitatively and qualitatively. Results show that in CDS, when speaking to infants of all ages, parents’ use of types and tokens of verbs and nouns was similar and significantly higher than their use of adjectives or adverbs. In ADS, however, verbs were the main lexical category used by Hebrew-speaking parents in both types and tokens. It seems that both the properties of the input language (e.g. the pro-drop parameter) and the interactional styles of the caregivers are important factors that may influence the high presence of verbs in Hebrew-speaking parents’ ADS and CDS. The negative correlation between the widespread use of verbs in the speech of parents to their infants and the ‘noun-bias’ phenomenon in the Hebrew child lexicon is discussed in detail.

5.
Mattys, S. L., & Jusczyk, P. W. (2001). Cognition, 78(2), 91-121.
There is growing evidence that infants become sensitive to the probabilistic phonotactics of their ambient language sometime during the second half of their first year. The present study investigates whether 9-month-olds make use of phonotactic cues to segment words from fluent speech. Using the Headturn Preference Procedure, we found that infants listened to a CVC stimulus longer when the stimulus previously appeared in a sentential context with good phonotactic cues than when it appeared in one without such cues. The goodness of the phonotactic cues was estimated from the frequency with which the consonant clusters at the onset and offset of a CVC test stimulus (i.e., in a C.CVC.C frame) are found within and between words in child-directed speech, with high between-word probability associated with good cues to word boundaries. A similar segmentation result emerged when good phonotactic cues occurred only at the onset or only at the offset of the target words in the utterances. Together, the results suggest that 9-month-olds use probabilistic phonotactics to segment speech into words and that high-probability between-word clusters are interpreted as both word onsets and word offsets.
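The phonotactic-cue "goodness" measure described above can be approximated from any word-segmented corpus: for each consonant cluster, compare how often it spans a word boundary with how often it occurs word-internally. Below is a minimal sketch of that idea, not Mattys and Jusczyk's materials or procedure; the toy corpus, the orthography-based consonant set, and the function names are invented for illustration.

```python
from collections import Counter

def cluster_counts(utterances):
    """Count adjacent consonant pairs within words and across word boundaries.
    Words are whitespace-separated; a 'cluster' here is simply any adjacent
    consonant pair in the orthography (a deliberate simplification)."""
    consonants = set("bcdfghjklmnpqrstvwxz")
    within, between = Counter(), Counter()
    for utt in utterances:
        words = utt.split()
        for w in words:                       # word-internal clusters
            for a, b in zip(w, w[1:]):
                if a in consonants and b in consonants:
                    within[a + b] += 1
        for w1, w2 in zip(words, words[1:]):  # clusters straddling a boundary
            a, b = w1[-1], w2[0]
            if a in consonants and b in consonants:
                between[a + b] += 1
    return within, between

def boundary_goodness(cluster, within, between):
    """Proportion of a cluster's occurrences that straddle a word boundary:
    values near 1 make it a 'good' phonotactic cue to a boundary."""
    w, b = within[cluster], between[cluster]
    return b / (w + b) if (w + b) else 0.0

# Toy child-directed utterances (hypothetical, for illustration only)
corpus = ["look at the big dog", "the dog runs fast", "nice green frog"]
within, between = cluster_counts(corpus)
print(boundary_goodness("gd", within, between))   # 1.0: 'g.d' only spans a boundary
print(boundary_goodness("fr", within, between))   # 0.0: 'fr' only occurs word-internally
```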

6.
During speech perception, listeners make judgments about the phonological category of sounds by taking advantage of multiple acoustic cues for each phonological contrast. Perceptual experiments have shown that listeners weight these cues differently. How do listeners weight and combine acoustic cues to arrive at an overall estimate of the category for a speech sound? Here, we present several simulations using mixture-of-Gaussians models that learn cue weights and combine cues on the basis of their distributional statistics. We show that a cue-weighting metric in which cues receive weight as a function of their reliability at distinguishing phonological categories provides a good fit to the perceptual data obtained from human listeners, but only when these weights emerge through the dynamics of learning. These results suggest that cue weights can be readily extracted from the speech signal through unsupervised learning processes.
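As a rough illustration of the general approach (a sketch under simplifying assumptions, not the authors' model, data, or learning dynamics): fit a two-component Gaussian mixture to unlabeled cue values and weight each cue by how reliably the learned components separate along it. The cue dimensions and all numbers below are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical two-category contrast described by two acoustic cues:
# cue 0 separates the categories well, cue 1 only weakly (values invented).
n = 500
data = np.vstack([
    np.column_stack([rng.normal(10, 5, n), rng.normal(200, 30, n)]),
    np.column_stack([rng.normal(60, 5, n), rng.normal(210, 30, n)]),
])

# Unsupervised learning: fit a two-component Gaussian mixture with no labels.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
means = gmm.means_                                               # (2, n_cues)
sds = np.sqrt(np.array([np.diag(c) for c in gmm.covariances_]))  # per-cue SDs

# Reliability of each cue: separation of the learned component means
# relative to their average spread along that cue dimension.
reliability = np.abs(means[0] - means[1]) / sds.mean(axis=0)
weights = reliability / reliability.sum()
print(weights)   # most of the weight goes to the well-separated cue
```

In the study the weights emerge through the dynamics of learning; here they are simply read off the fitted mixture, which is only meant to make the reliability idea concrete.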

7.
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.

8.
In an eye-tracking experiment we examined whether Chinese readers were sensitive to information concerning how often a Chinese character appears as a single-character word versus the first character in a two-character word, and whether readers use this information to segment words and adjust the amount of parafoveal processing of subsequent characters during reading. Participants read sentences containing a two-character target word whose first character was more or less likely to be a single-character word. The boundary paradigm was used: the boundary appeared between the first and second characters of the target word, and we manipulated whether readers saw an identity or a pseudocharacter preview of the second character of the target. Linear mixed-effects models revealed reduced preview benefit from the second character when the first character was more likely to be a single-character word. This suggests that Chinese readers use probabilistic combinatorial information about the likelihood of a Chinese character being a single-character word or the start of a two-character word online to modulate the extent of parafoveal processing.
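To make the kind of analysis reported here concrete, the following is a hypothetical sketch of how a boundary-paradigm design like this might be modeled with a linear mixed-effects model in Python (statsmodels). The variable names, simulated effect sizes, and random-effects structure are invented and far simpler than a full analysis of the actual experiment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical trial-level data (all values invented): gaze duration on the
# second character as a function of the first character's likelihood of being
# a single-character word (high/low) and preview type (identity/pseudocharacter).
n = 400
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n),
    "item": rng.integers(0, 40, n),
    "single_char_prob": rng.choice(["high", "low"], n),
    "preview": rng.choice(["identity", "pseudo"], n),
})
df["gaze"] = (250
              + 30 * (df["preview"] == "pseudo")                                        # cost of an invalid preview
              - 15 * ((df["preview"] == "pseudo") & (df["single_char_prob"] == "high"))  # reduced preview benefit
              + rng.normal(0, 20, n))

# Random intercepts for subjects only; a fuller analysis would also cross items.
model = smf.mixedlm("gaze ~ single_char_prob * preview", df, groups=df["subject"])
print(model.fit().summary())
```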

9.
Child-directed language can support language learning, but how? We addressed two questions: (1) how caregivers prosodically modulated their speech as a function of word familiarity (known or unknown to the child) and accessibility of referent (visually present or absent from the immediate environment); (2) whether such modulations affect children's unknown word learning and vocabulary development. We used data from 38 English-speaking caregivers (from the ECOLANG corpus) talking about toys (both known and unknown to their children aged 3–4 years) both when the toys were present and when they were absent. We analyzed prosodic dimensions (i.e., speaking rate, pitch, and intensity) of caregivers’ productions of 6529 toy labels. We found that unknown labels were spoken with significantly slower speaking rate and wider pitch and intensity range than known labels, especially in the first mentions, suggesting that caregivers adjust their prosody based on children's lexical knowledge. Moreover, caregivers used slower speaking rate and larger intensity range to mark the first mentions of toys that were physically absent. After the first mentions, they talked about the referents more loudly and with higher mean pitch when toys were present than when toys were absent. Crucially, caregivers’ mean pitch of unknown words and the degree of mean pitch modulation for unknown words relative to known words (pitch ratio) predicted children's immediate word learning and vocabulary size 1 year later. In conclusion, caregivers modify their prosody when the learning situation is more demanding for children, and these helpful modulations assist children in word learning.

Research Highlights

  • In naturalistic interactions, caregivers use slower speaking rate, wider pitch and intensity range when introducing new labels to 3–4-year-old children, especially in first mentions.
  • Compared to when toys are present, caregivers speak more slowly with larger intensity range to mark the first mentions of toys that are physically absent.
  • Mean pitch to mark word familiarity predicts children's immediate word learning and future vocabulary size.

10.
Cowan, N. (1991). Acta Psychologica, 77(2), 121-135.
First and second language acquisition both require that speech be segmented into familiar, multiphonemic units (e.g., words and common phrases). The present research examines one segmentation cue that is of considerable theoretical interest: the repetition of fixed sequences of speech. On each trial, subjects heard repetitions ('pre-exposures') of two artificially-constructed, multisyllabic patterns that shared an embedded segment 1 or 2 syllables long (e.g., 2 shared syllables: [ga-li-SE] and [li-SE-stu]). There were 2 and 6, 4 and 4, or 6 and 2 repetitions of the two patterns, randomly ordered. Subjects were then to indicate the groupings they perceived within a subsequent, longer sequence containing both of the pre-exposed patterns (e.g., [ga-li-SE-stu]). Responses varied systematically with the size of the embedded segment, the repetition frequencies of the two pre-exposed patterns, and the serial position of each pre-exposure. The results illustrate how investigations of the processing of speech patterns may contribute to an understanding of some elementary aspects of language learning.

11.
The question of whether Dutch listeners rely on the rhythmic characteristics of their native language to segment speech was investigated in three experiments. In Experiment 1, listeners were induced to make missegmentations of continuous speech. The results showed that word boundaries were inserted before strong syllables and deleted before weak syllables. In Experiment 2, listeners were required to spot real CVC or CVCC words (C = consonant, V = vowel) embedded in bisyllabic nonsense strings. For CVCC words, fewer errors were made when the second syllable of the nonsense string was weak rather than strong, whereas for CVC words the effect was reversed. Experiment 3 ruled out an acoustic explanation for this effect. It is argued that these results are in line with an account in which both metrical segmentation and lexical competition play a role.

12.
When listening to speech from one’s native language, words seem to be well separated from one another, like beads on a string. When listening to a foreign language, in contrast, words seem almost impossible to extract, as if there were only one bead on the same string. This contrast reveals that there are language-specific cues to segmentation. The puzzle, however, is that infants must be endowed with a language-independent mechanism for segmentation, as they ultimately solve the segmentation problem for any native language. Here, we approach the acquisition problem by asking whether there are language-independent cues to segmentation that might be available to even adult learners who have already acquired a native language. We show that adult learners recognize words in connected speech when only prosodic cues to word boundaries are given from languages unfamiliar to the participants. In both artificial and natural speech, adult English speakers, with no prior exposure to the test languages, readily recognized words in natural languages with critically different prosodic patterns, including French, Turkish and Hungarian. We suggest that, even though languages differ in their sound structures, they carry universal prosodic characteristics. Further, these language-invariant prosodic cues provide a universally accessible mechanism for finding words in connected speech. These cues may enable infants to start acquiring words in any language even before they are fine-tuned to the sound structure of their native language.

13.
This study investigates the child-directed speech (CDS) of four Russian-, six German-, and six English-speaking mothers to their 2-year-old children. Typologically, Russian has considerably less restricted word order than either German or English, with German showing more word-order variants than English. This could lead to the prediction that the lexical restrictiveness previously found in the initial strings of English CDS by Cameron-Faulkner, Lieven, and Tomasello (2003) would not be found in Russian or German CDS. However, despite differences between the three corpora that clearly derive from typological differences between the languages, the most significant finding of this study is a high degree of lexical restrictiveness at the beginnings of CDS utterances in all three languages.

14.
In two experiments, we investigated the correspondences between off-line word segmentation and on-line segmentation processing during Chinese reading. In Experiment 1, participants read sentences that contained critical four-character strings and were later required to segment the same sentences into words in an off-line word segmentation task. For each item, participants were split into 1-word segmenters (who segmented four-character strings as a single word) and 2-word segmenters (who segmented four-character strings as two two-character words). Thus, we split participants into two groups (1-word segmenters and 2-word segmenters) according to their off-line segmentation bias. The data analysis showed no reliable group effect on any of the measures. To avoid the heterogeneity of participants and stimuli in Experiment 1, two groups of participants (1-word segmenters and 2-word segmenters) and three types of critical four-character string (1-word strings, ambiguous strings, and 2-word strings) were identified in a norming study in Experiment 2. Participants were required to read sentences containing these critical strings. There was no reliable group effect in Experiment 2, as was the case in Experiment 1. However, in Experiment 2, participants spent less time and made fewer fixations on 1-word strings compared to ambiguous and 2-word strings. These results indicate that off-line word segmentation preferences do not necessarily reflect on-line word segmentation processing during Chinese reading and that Chinese readers exhibit flexibility such that word, or multiple-constituent, segmentation commitments are made on-line.

15.
The ability to identify the grammatical category of a word (e.g., noun, verb, adjective) is a fundamental aspect of competence in a natural language. Children show evidence of categorization by as early as 18 months, and in some cases younger. However, the mechanisms that underlie this ability are not well understood. The lexical co-occurrence patterns of words in sentences could provide information about word categories; for example, words that follow 'the' in English often belong to the same category. As a step toward understanding the role distributional mechanisms might play in language learning, the present study investigated the ability of adults to categorize words on the basis of distributional information. Forty participants listened for approximately 6 min to sentences in an artificial language and were told that they would later be tested on their memory for what they had heard. Participants were next tested on an additional set of sentences and asked to report which sentences they recognized from the first 6 min. The results suggested that learners performed a distributional analysis on the initial set of sentences and recognized sentences on the basis of their memory of sequences of categories of words. Thus, mechanisms that would be useful in natural language learning were shown to be active in adults in an artificial language learning task.
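To see how far such distributional information alone can go, here is a toy sketch (not the study's artificial language or analysis): represent each word by the words that immediately precede and follow it, and compare those context profiles. The mini-corpus and the crude similarity measure are invented for illustration.

```python
from collections import defaultdict, Counter

def context_profiles(sentences):
    """Represent each word by the distribution of words immediately preceding
    and following it (a simple distributional 'context profile')."""
    profiles = defaultdict(Counter)
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        for left, word, right in zip(tokens, tokens[1:], tokens[2:]):
            profiles[word]["L:" + left] += 1
            profiles[word]["R:" + right] += 1
    return profiles

def shared_contexts(p, q):
    """Crude similarity: number of context types two words have in common."""
    return len(set(p) & set(q))

# Hypothetical mini-corpus (not the study's materials)
sentences = ["the dog runs", "the cat runs", "the dog sleeps", "a cat sleeps"]
prof = context_profiles(sentences)
print(shared_contexts(prof["dog"], prof["cat"]))    # 3: the two nouns share contexts
print(shared_contexts(prof["dog"], prof["runs"]))   # 0: noun and verb share none
```

Words of the same category end up with overlapping context profiles (e.g., both nouns follow 'the' and precede verbs), which is the kind of regularity a distributional learner could exploit.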

16.
Subjects listening to dichotically presented real-speech stop and fricative consonants, with and without transitions, showed larger laterality effects in the transition-less condition. In a second study, laterality effects for burst cues and transition cues were compared, using the stop consonants /b/ and /d/. Again, burst cues produced a larger laterality effect. These results are not compatible with a lateralized speech “decoder” and are interpreted as favoring a Semmes (1968) model of hemispheric differences in processing.

17.
Räsänen, O. (2011). Cognition, (2), 149-176.
Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this work, a computational model for word segmentation and learning of primitive lexical items from continuous speech is presented. The model does not utilize any a priori linguistic or phonemic knowledge such as phones, phonemes or articulatory gestures, but computes transitional probabilities between atomic acoustic events in order to detect recurring patterns in speech. Experiments with the model show that word segmentation is possible without any knowledge of linguistically relevant structures, and that the learned ungrounded word models show a relatively high selectivity towards specific words or frequently co-occurring combinations of short words.
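A minimal sketch of the transitional-probability computation, simplified to syllable tokens rather than the atomic acoustic events the model actually discovers (the toy 'words' and the local-minimum boundary rule are illustrative assumptions, not the paper's implementation):

```python
from collections import Counter

def transitional_probs(stream):
    """Estimate P(next unit | current unit) from a sequence of discrete units."""
    pair_counts, unit_counts = Counter(), Counter()
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        unit_counts[a] += 1
    return {pair: c / unit_counts[pair[0]] for pair, c in pair_counts.items()}

def segment(stream, tp):
    """Posit a word boundary wherever the TP between adjacent units is a local minimum."""
    tps = [tp[(a, b)] for a, b in zip(stream, stream[1:])]
    words, current = [], [stream[0]]
    for i in range(1, len(stream)):
        t = tps[i - 1]
        left = tps[i - 2] if i >= 2 else float("inf")
        right = tps[i] if i < len(tps) else float("inf")
        if t < left and t < right:           # dip in TP -> boundary before unit i
            words.append("".join(current))
            current = []
        current.append(stream[i])
    words.append("".join(current))
    return words

# Toy stream built from three hypothetical 'words': tupiro, golabu, bidaku.
stream = "tu pi ro go la bu bi da ku tu pi ro bi da ku go la bu tu pi ro go la bu bi da ku".split()
tp = transitional_probs(stream)
print(segment(stream, tp))
# -> ['tupiro', 'golabu', 'bidaku', 'tupiro', 'bidaku', 'golabu', 'tupiro', 'golabu', 'bidaku']
```

In the model itself the same computation runs over automatically derived acoustic events, so no phonemic or syllabic units need to be given in advance.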

18.
Recognising the grammatical categories of words is a necessary skill for the acquisition of syntax and for on-line sentence processing. The syntactic and semantic contexts of a word contribute as cues for grammatical category assignment, but phonological cues, too, have been implicated as important sources of information. The value of phonological and distributional cues has not, with very few exceptions, been empirically assessed. This paper presents a series of analyses of phonological cues and distributional cues and their potential for distinguishing grammatical categories of words in corpus analyses. The corpus analyses indicated that phonological cues were more reliable for less frequent words, whereas distributional information was most valuable for high-frequency words. We tested this prediction in an artificial language learning experiment, where the distributional and phonological cues of categories of nonsense words were varied. The results corroborated the corpus analyses. For high-frequency nonwords, distributional information was more useful, whereas for low-frequency words there was more reliance on phonological cues. The results indicate that phonological and distributional cues contribute differentially towards grammatical categorisation.

19.
Sixty-one single Japanese-speaking women between the ages of 18 and 26 years were recorded as they read aloud picture books to a young child and as they conversed with another Japanese-speaking woman. When their utterances were acoustically compared between the two settings with regard to prosodic features, both the average pitch and the pitch excursions exhibited a significant increase when interacting with the child in 17 of the 61 women. In 36 of the remaining 44 subjects, neither of these parameters showed such changes. This individual variability was not related to the subjects' liking for picture books, previous experience with reading picture books aloud or being read to, or experience with baby-sitting. The only variable that could explain the results was whether the subjects had grown up with one or more siblings or as only children. If they were only children, the prosodic modification was significantly less likely to occur.

20.
In two artificial language learning experiments, we investigated the impact of attention load on segmenting speech through two sublexical cues: transitional probabilities (TPs) and coarticulation. In Experiment 1, we observed that coarticulation processing was resilient to high attention load, whereas TP computation was penalized in a graded manner. In Experiment 2, we showed that encouraging participants to actively search for “word” candidates enhanced overall performance but was not sufficient to preclude the impairment of statistically driven segmentation by attention load. As long as attentional resources were depleted, independently of their intention to find these “words,” participants segmented only TP words with the highest TPs, not TP words with lower TPs. Attention load thus has a graded and differential impact on the relative weighting of the cues in speech segmentation, even when only sublexical cues are available in the signal.
