Similar Articles
20 similar articles found.
1.
English‐learning 7.5‐month‐olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non‐initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress words as opposed to statistical words. This was interpreted as evidence that 11‐month‐olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11‐month‐olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non‐initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.

2.
In three experiments, we examined priming effects where primes were formed by transposing the first and last phoneme of tri‐phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were found not to be sensitive to this transposed‐phoneme priming manipulation in long‐term priming (Experiment 1), with primes and targets presented in two separated blocks of stimuli and with unrelated primes used as control condition (/mul/‐/tyb/), while a long‐term repetition priming effect was observed (/tyb/‐/tyb/). However, a clear transposed‐phoneme priming effect was found in two short‐term priming experiments (Experiments 2 and 3), with primes and targets presented in close temporal succession. The transposed‐phoneme priming effect was found when unrelated prime‐target pairs (/mul/‐/tyb/) were used as control and, more importantly, when prime‐target pairs sharing the medial vowel (/pys/‐/tyb/) served as control condition, thus indicating that the effect is not due to vocalic overlap. Finally, in Experiment 3, a transposed‐phoneme priming effect was found when primes sharing the medial vowel plus one consonant in an incorrect position with the targets (/byl/‐/tyb/) served as control condition, and this condition did not differ significantly from the vowel‐only condition. Altogether, these results provide further evidence for a role for position‐independent phonemes in spoken word recognition, such that a phoneme at a given position in a word also provides evidence for the presence of words that contain that phoneme at a different position.

3.
Bilingual acquisition presents learning challenges beyond those found in monolingual environments, including the need to segment speech in two languages. Infants may use statistical cues, such as syllable‐level transitional probabilities, to segment words from fluent speech. In the present study we assessed monolingual and bilingual 14‐month‐olds’ abilities to segment two artificial languages using transitional probability cues. In Experiment 1, monolingual infants successfully segmented the speech streams when the languages were presented individually. However, monolinguals did not segment the same language stimuli when they were presented together in interleaved segments, mimicking the language switches inherent to bilingual speech. To assess the effect of real‐world bilingual experience on dual language speech segmentation, Experiment 2 tested infants with regular exposure to two languages using the same interleaved language stimuli as Experiment 1. The bilingual infants in Experiment 2 successfully segmented the languages, indicating that early exposure to two languages supports infants’ abilities to segment dual language speech using transitional probability cues. These findings support the notion that early bilingual exposure prepares infants to navigate challenging aspects of dual language environments as they begin to acquire two languages.
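The syllable‐level transitional probabilities named here (and used again in item 4) reduce to a simple formula: TP(B|A) = count(AB) / count(A). Inside a word each syllable fully predicts the next (TP near 1), while across a word boundary any word may follow (TP near 1 divided by the number of words). A minimal Python sketch of this computation is given below; the nonce words are invented placeholders, not the studies' actual stimuli.

    # Minimal sketch of syllable-level transitional probabilities.
    # The "words" are hypothetical placeholders, not the study's stimuli.
    from collections import Counter
    import random

    words = ["tupiro", "golabu", "bidaku", "padoti"]
    syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]  # CV syllables

    # Concatenate randomly chosen words into a continuous stream with no pauses.
    random.seed(0)
    stream = [s for w in random.choices(words, k=300) for s in syllabify(w)]

    # TP(B|A) = count of A immediately followed by B / count of A
    pair_counts = Counter(zip(stream, stream[1:]))
    syll_counts = Counter(stream[:-1])
    tp = {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

    print(tp[("tu", "pi")])            # within-word transition: 1.0
    print(tp.get(("ro", "go"), 0.0))   # across-boundary transition: about 1/4

Segmenting the stream at local TP minima recovers the word boundaries; this is the statistical cue the infants above are assumed to exploit.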

4.
Individual variability in infants' language processing is partly explained by environmental factors, like the quantity of parental speech input, as well as by infant‐specific factors, like speech production. Here, we explore how these factors affect infant word segmentation. We used an artificial language to ensure that only statistical regularities (like transitional probabilities between syllables) could cue word boundaries, and then asked how the quantity of parental speech input and infants’ babbling repertoire predict infants’ abilities to use these statistical cues. We replicated prior reports showing that 8‐month‐old infants use statistical cues to segment words, with a preference for part‐words over words (a novelty effect). Crucially, 8‐month‐olds with larger novelty effects had received more speech input at 4 months and had greater production abilities at 8 months. These findings establish for the first time that the ability to extract statistical information from speech correlates with individual factors in infancy, like early speech experience and language production. Implications of these findings for understanding individual variability in early language acquisition are discussed.

5.
The use of rhythm in attending to speech
Three experiments examined attentional allocation during speech processing to determine whether listeners capitalize on the rhythmic nature of speech and attend more closely to stressed than to unstressed syllables. Subjects performed a phoneme monitoring task in which the target phoneme occurred on a syllable that was predicted to be either stressed or unstressed by the context preceding the target word. Stimuli were digitally edited to eliminate the local acoustic correlates of stress. A sentential context and a context composed of word lists, in which all the words had the same stress pattern, were used. In both cases, the results suggest that attention may be preferentially allocated to stressed syllables during speech processing. However, a normal sentence context may not provide strong predictive cues to lexical stress, limiting the use of the attentional focus.

6.
Kim, J., Davis, C., & Krins, P. (2004). Cognition, 93(1), B39–B47.
This study investigated the linguistic processing of visual speech (video of a talker's utterance without audio) by determining whether it can prime subsequently presented word and nonword targets. The priming procedure is well suited to investigating whether speech perception is amodal, since visual speech primes can be used with targets presented in different modalities. To this end, a series of priming experiments was conducted using several tasks. It was found that visually spoken words (for which overt identification was poor) acted as reliable primes for repeated target words in the naming, written and auditory lexical decision tasks. These visual speech primes did not produce associative or reliable form priming. The lack of form priming suggests that the repetition priming effect was constrained by lexical level processes. The fact that priming was found in all tasks is consistent with the view that similar processes operate in both visual and auditory speech processing.

7.
We tested 4–6‐ and 10–12‐month‐old infants to investigate whether the often‐reported decline in infant sensitivity to other‐race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of either dynamic own‐race or other‐race faces which were either accompanied by a speech syllable, no sound, or a non‐speech sound. Results indicated that 4–6‐ and 10–12‐month‐old infants discriminated own‐race as well as other‐race faces accompanied by a speech syllable, that only the 10–12‐month‐olds discriminated silent own‐race faces, and that 4–6‐month‐old infants discriminated own‐race and other‐race faces accompanied by a non‐speech sound but that 10–12‐month‐old infants only discriminated own‐race faces accompanied by a non‐speech sound. Overall, the results suggest that the other‐race effect (ORE) reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.

8.
Bilingual and monolingual infants differ in how they process linguistic aspects of the speech signal. But do they also differ in how they process non‐linguistic aspects of speech, such as who is talking? Here, we addressed this question by testing Canadian monolingual and bilingual 9‐month‐olds on their ability to learn to identify native Spanish‐speaking females in a face‐voice matching task. Importantly, neither group was familiar with Spanish prior to participating in the study. In line with our predictions, bilinguals succeeded in learning the face‐voice pairings, whereas monolinguals did not. We consider multiple explanations for this finding, including the possibility that simultaneous bilingualism enhances perceptual attentiveness to talker‐specific speech cues in infancy (even in unfamiliar languages), and that early bilingualism delays perceptual narrowing to language‐specific talker recognition cues. This work represents the first evidence that multilingualism in infancy affects the processing of non‐linguistic aspects of the speech signal, such as talker identity.

9.
Everyone agrees that infants possess general mechanisms for learning about the world, but the existence and operation of more specialized mechanisms is controversial. One mechanism—rule learning—has been proposed as potentially specific to speech, based on findings that 7‐month‐olds can learn abstract repetition rules from spoken syllables (e.g. ABB patterns: wo‐fe‐fe, ga‐tu‐tu…) but not from closely matched stimuli, such as tones. Subsequent work has shown that learning of abstract patterns is not simply specific to speech. However, we still lack a parsimonious explanation to tie together the diverse, messy, and occasionally contradictory findings in that literature. We took two routes to creating a new profile of rule learning: meta‐analysis of 20 prior reports on infants’ learning of abstract repetition rules (including 1,318 infants in 63 experiments total), and an experiment on learning of such rules from a natural, non‐speech communicative signal. These complementary approaches revealed that infants were most likely to learn abstract patterns from meaningful stimuli. We argue that the ability to detect and generalize simple patterns supports learning across domains in infancy but chiefly when the signal is meaningfully relevant to infants’ experience with sounds, objects, language, and people.
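For readers unfamiliar with the aggregation step behind a meta‐analysis such as the one described above, the standard fixed‐effect estimate is an inverse‐variance weighted mean of the per‐experiment effect sizes. The Python sketch below uses invented effect sizes and standard errors purely for illustration; it is not the paper's data or analysis code.

    # Inverse-variance (fixed-effect) pooling of effect sizes.
    # All values are invented for illustration only.
    import math

    studies = [(0.45, 0.20), (0.10, 0.15), (0.60, 0.25), (0.05, 0.10)]  # (d, SE)

    weights = [1 / se ** 2 for _, se in studies]  # more precise studies weigh more
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled d = {pooled:.2f}, 95% CI half-width = {1.96 * pooled_se:.2f}")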

10.
The basic speech unit (phoneme or syllable) problem was investigated with the primed matching task. In primed matching, subjects have to decide whether the elements of stimulus pairs are the same or different. The prime should facilitate matching insofar as its representation is similar to the stimuli to be matched. If stimulus representations generate graded structure, with stimulus instances being more or less prototypical for the category, priming should interact with prototypicality because prototypical instances are more similar to the activated category than are low-prototypical instances. Rosch (1975a, 1975b) showed that, by varying the matching criterion (matching for physical identity or for belonging to the same category), the specific patterns of the priming × prototypicality interaction could differentiate perceptually based from abstract categories. By testing this pattern for phoneme and syllable categories, the abstraction level of these categories can be studied. After finding reliable prototypicality effects for both phoneme and syllable categories (Experiments 1 and 2), primed phoneme matching (Experiments 3 and 4) and primed syllable matching (Experiments 5 and 6) were used under both physical identity instructions and same-category instructions. The results make clear that phoneme categories are represented on the basis of perceptual information, whereas syllable representations are more abstract. The phoneme category can thus be identified as the basic speech unit. Implications for phoneme and syllable representation are discussed.

11.
A series of 15 experiments was conducted to explore English-learning infants' capacities to segment bisyllabic words from fluent speech. The studies in Part I focused on 7.5-month-olds' abilities to segment words with strong/weak stress patterns from fluent speech. The infants demonstrated an ability to detect strong/weak target words in sentential contexts. Moreover, the findings indicated that the infants were responding to the whole words and not just to their strong syllables. In Part II, a parallel series of studies was conducted examining 7.5-month-olds' abilities to segment words with weak/strong stress patterns. In contrast with the results for strong/weak words, 7.5-month-olds appeared to missegment weak/strong words. They demonstrated a tendency to treat strong syllables as markers of word onsets. In addition, when weak/strong words co-occurred with a particular following weak syllable (e.g., "guitar is"), 7.5-month-olds appeared to misperceive these as strong/weak words (e.g., "taris"). The studies in Part III examined the abilities of 10.5-month-olds to segment weak/strong words from fluent speech. These older infants were able to segment weak/strong words correctly from the various contexts in which they appeared. Overall, the findings suggest that English learners may rely heavily on stress cues when they begin to segment words from fluent speech. However, within a few months' time, infants learn to integrate multiple sources of information about the likely boundaries of words in fluent speech.

12.
Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
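The competition account sketched in this abstract can be made concrete with a toy scoring function: candidates gain activation for segmental overlap with the fragment heard so far, and gain or lose activation according to suprasegmental (stress) fit. The Python below is a hypothetical illustration of that idea, not the authors' implemented model; the weights, the stress notation (S = stressed, w = weak) and the phonemicized candidate forms are all assumptions.

    # Toy illustration of segmental + suprasegmental candidate scoring.
    # Weights, notation, and candidate forms are hypothetical.
    def activation(prime_segs, prime_stress, cand_segs, cand_stress,
                   seg_w=1.0, supra_w=0.5):
        """Score a lexical candidate against a spoken fragment."""
        n = min(len(prime_segs), len(cand_segs))
        seg_match = sum(a == b for a, b in zip(prime_segs, cand_segs)) / n
        supra_fit = 1.0 if cand_stress.startswith(prime_stress) else -1.0
        return seg_w * seg_match + supra_w * supra_fit

    # Fragment "okTO-" (weak-Strong): the segments support both candidates,
    # but the stress pattern fits "oktober" (wSw) and not "oktopus" (Sww).
    print(activation("okto", "wS", "oktober", "wSw"))  # 1.5: boosted
    print(activation("okto", "wS", "oktopus", "Sww"))  # 0.5: penalized

In this scheme the stress term decides between segmentally identical competitors, mirroring the facilitation versus inhibition pattern reported above.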

13.
In a series of experiments, the masked priming paradigm with very brief prime exposures was used to investigate the role of the syllable in the production of English. Experiment 1 (word naming task) showed a syllable priming effect for English words with clear initial syllable boundaries (such as BALCONY), but no effect with ambisyllabic word targets (such as BALANCE, where the /l/ belongs to both the first and the second syllables). Experiment 2 failed to show such syllable priming effects in the lexical decision task. Experiment 3 demonstrated that for words with clear initial syllable boundaries, naming latencies were faster only when primes formed the first syllable of the target, in comparison with a neutral condition. Experiment 4 showed that the two possible initial syllables of ambisyllabic words facilitated word naming to the same extent, in comparison with the neutral condition. Finally, Experiment 5 demonstrated that the syllable priming effect obtained for CV words with clear initial syllable boundaries (such as DIVORCE) was not due to increased phonological and/or orthographic overlap. These results, showing that the syllable constitutes a unit of speech production in English, are discussed in relation to the model of phonological and phonetic encoding proposed by Levelt and Wheeldon (1994).

14.
Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4‐ to 8‐month‐old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye‐tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger, monolinguals (4 to 6.5 months) showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.

15.
This study investigates the influence of the acoustic properties of vowels on 6‐ and 10‐month‐old infants’ speech preferences. The shape of the contour (bell or monotonic) and the duration (normal or stretched) of vowels were manipulated in words containing the vowels /i/ and /u/, and presented to infants using a two‐choice preference procedure. Experiment 1 examined contour shape: infants heard either normal‐duration bell‐shaped and monotonic contours, or the same two contours with stretched duration. The results show that 6‐month‐olds preferred bell to monotonic contours, whereas 10‐month‐olds preferred monotonic to bell contours. In Experiment 2, infants heard either normal‐duration and stretched bell contours, or normal‐duration and stretched monotonic contours. As in Experiment 1, infants showed age‐specific preferences, with 6‐month‐olds preferring stretched vowels, and 10‐month‐olds preferring normal‐duration vowels. Infants’ attention to the acoustic qualities of vowels, and to speech in general, undergoes a dramatic transformation in the final months of the first year, a transformation that aligns with the emergence of other developmental milestones in speech perception.

16.
Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing – the building blocks of speech – and whether audio–motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data were collected from two groups: one that received audio–motor training and one that did not. We hypothesised that 1) phonological processing would be enhanced in matching conditions, and 2) audio–motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, thus revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio–motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal processing brain resources.
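The design described above reduces to a two‐by‐two comparison: prime condition (matching vs. mismatching prosodic structure) crossed with group (audio–motor training vs. none), with faster phoneme‐detection RTs in the matching condition taken as the priming benefit. The schematic Python sketch below uses invented RTs to show the predicted pattern; it is not the study's data or analysis code.

    # Schematic of the RT comparison; all values invented.
    from statistics import mean

    rts_ms = {  # (group, prime condition) -> phoneme-detection RTs in ms
        ("trained", "match"): [412, 398, 405, 420],
        ("trained", "mismatch"): [470, 472, 450, 468],
        ("untrained", "match"): [440, 452, 447, 455],
        ("untrained", "mismatch"): [470, 461, 475, 468],
    }

    for group in ("trained", "untrained"):
        benefit = mean(rts_ms[(group, "mismatch")]) - mean(rts_ms[(group, "match")])
        print(f"{group}: rhythmic priming benefit = {benefit:.0f} ms")
    # The reported result corresponds to a positive benefit in both groups,
    # larger after audio-motor training.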

17.
We propose that speech comprehension involves the activation of token representations of the phonological forms of current lexical hypotheses, separately from the ongoing construction of a conceptual interpretation of the current utterance. In a series of cross-modal priming experiments, facilitation of lexical decision responses to visual target words (e.g., time) was found for targets that were semantic associates of auditory prime words (e.g., date) when the primes were isolated words, but not when the same primes appeared in sentence contexts. Identity priming (e.g., faster lexical decisions to visual date after spoken date than after an unrelated prime) appeared, however, both with isolated primes and with primes in prosodically neutral sentences. Associative priming in sentence contexts only emerged when sentence prosody involved contrastive accents, or when sentences were terminated immediately after the prime. Associative priming is therefore not an automatic consequence of speech processing. In no experiment was there associative priming from embedded words (e.g., sedate-time), but there was inhibitory identity priming (e.g., sedate-date) from embedded primes in sentence contexts. Speech comprehension therefore appears to involve separate distinct activation both of token phonological word representations and of conceptual word representations. Furthermore, both of these types of representation are distinct from the long-term memory representations of word form and meaning.

18.
Three experiments investigated the role of specific phonological components in priming tip-of-the-tongue (TOT) resolution. When in a TOT state, participants read a list of words that included phonological primes intermixed among unrelated words. The phonological primes contained either the same first letter as the target (Experiment 1), a single syllable (first, middle, or last) of the target (Experiment 2), or the first phoneme or first syllable of the target (Experiment 3). Reading first-letter primes in Experiment 1 did not help to resolve TOTs, whereas reading first-syllable primes significantly increased word retrieval in Experiment 2. Experiment 3 replicated the results of Experiments 1 and 2 using first-phoneme primes instead of first-letter primes and using two primes instead of three, although first-syllable priming occurred only for primes read silently. The results of these experiments support a transmission deficit model, where TOTs are caused by weak connections among phonological representations and can be resolved through internal or overt production of specific phonology.

19.
To better understand how infants process complex auditory input, this study investigated whether 11-month-old infants perceive the pitch (melodic) or the phonetic (lyric) components within songs as more salient, and whether melody facilitates phonetic recognition. Using a preferential looking paradigm, uni-dimensional and multi-dimensional songs were tested; either the pitch or syllable order of the stimuli varied. As a group, infants detected a change in pitch order in a 4-note sequence when the syllables were redundant (experiment 1), but did not detect the identical pitch change with variegated syllables (experiment 2). Infants were better able to detect a change in syllable order in a sung sequence (experiment 2) than the identical syllable change in a spoken sequence (experiment 1). These results suggest that by 11 months, infants cannot “ignore” phonetic information in the context of perceptually salient pitch variation. Moreover, the increased phonetic recognition in song contexts mirrors findings that demonstrate advantages of infant-directed speech. Findings are discussed in terms of how stimulus complexity interacts with the perception of sung speech in infancy.

20.
Gaze is considered a crucial component of early communication between an infant and her caregiver. When communicatively addressed, infants respond aptly to others’ gaze by following its direction. However, experience with face‐to‐face contact varies across cultures, raising the question of whether infants’ competencies in receiving others’ communicative gaze signals are universal or culturally specific. We used eye‐tracking to assess gaze‐following responses of 5‐ to 7‐month‐olds in Vanuatu, where face‐to‐face parent–infant interactions are less prevalent than in Western populations. We found that, just like Western 6‐month‐olds studied previously, 5‐ to 7‐month‐olds living in Vanuatu followed gaze only when communicatively addressed: that is, when the presented gaze shifts were preceded by infant‐directed speech, but not when they were preceded by adult‐directed speech. These results are consistent with the notion that early infant gaze following is tied to infants’ early emerging communicative competencies and rooted in universal mechanisms rather than being dependent on cultural specificities of early socialization.
