Similar Articles
20 similar articles found.
1.
In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.

2.
How do infants find the words in the tangle of speech that confronts them? The present study shows that by as early as 6 months of age, infants can already exploit highly familiar words (including, but not limited to, their own names) to segment and recognize adjoining, previously unfamiliar words from fluent speech. The head-turn preference procedure was used to familiarize babies with short passages in which a novel word was preceded by a familiar or a novel name. At test, babies recognized the word that followed the familiar name, but not the word that followed the novel name. This is the youngest age at which infants have been shown capable of segmenting fluent speech. Young infants have a powerful aid available to them for cracking the speech code: their emerging familiarity with particular words, such as their own and other people's names, can provide initial anchors in the speech stream.

3.
4.
Rapid recognition at 10 months as a predictor of language development
Infants' ability to recognize words in continuous speech is vital for building a vocabulary. Here we examined the amount and type of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in isolation; recognition was then assessed by comparing event-related potentials to this word versus a word that they had not heard directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed such responses to words they had first heard within an utterance. Those who did succeed in the latter, harder task, however, understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at 24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize words in continuous utterances is clearly linked to future language development.

5.
Isolated words enhance statistical language learning in infancy
Infants are adept at tracking statistical regularities to identify word boundaries in pause-free speech. However, researchers have questioned the relevance of statistical learning mechanisms to language acquisition, since previous studies have used simplified artificial languages that ignore the variability of real language input. The experiments reported here embraced a key dimension of variability in infant-directed speech. English-learning infants (8-10 months) listened briefly to natural Italian speech that contained either fluent speech only or a combination of fluent speech and single-word utterances. Listening times revealed successful learning of the statistical properties of target words only when words appeared both in fluent speech and in isolation; brief exposure to fluent speech alone was not sufficient to facilitate detection of the words' statistical properties. This investigation suggests that statistical learning mechanisms actually benefit from variability in utterance length, and provides the first evidence that isolated words and longer utterances act in concert to support infant word segmentation.
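The statistical mechanism these studies probe — tracking transitional probabilities between adjacent syllables and positing word boundaries where the probability dips — can be sketched in a few lines. The syllable stream, the three "words" (bida, kupa, tigo), and the threshold below are invented for illustration; they are not the stimuli used in the experiments, and real models use richer cues than a fixed cutoff.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count of pair xy / count of x, over a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever the transitional probability dips below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:       # low TP: likely a word boundary
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# A pause-free stream built from the toy words bida, kupa, tigo in varied order:
stream = "bi da ku pa ti go bi da ti go ku pa bi da ku pa ti go".split()
print(segment(stream))
# → ['bida', 'kupa', 'tigo', 'bida', 'tigo', 'kupa', 'bida', 'kupa', 'tigo']
```

In this toy stream, within-word transitions (e.g. bi→da) have probability 1.0 while between-word transitions are at most about 0.67, so any threshold between those values recovers the word boundaries. Natural input is far noisier, which is the abstract's point: occasional isolated-word tokens help the learner lock onto these statistics.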

6.
According to speech act theory (Searle, 1969), utterances have both a propositional content and an illocutionary force (the speech act performed with the utterance). Four experiments were conducted to examine whether utterance comprehension involves speech act recognition. Participants in all experiments first read remarks that could be characterized by a particular speech act (e.g., beg). A recognition probe reaction time procedure was used in Experiments 1 and 2; participants indicated whether a probe word had literally appeared in the last remark that they had read. Participants were significantly slower at making this judgment (and made significantly more errors) when the probe represented the speech act performed with the prior remark than when it did not. A lexical decision task was used in Experiments 3 and 4, and participants were significantly faster at verifying target words representing the speech act performed with a remark, relative to control words. Overall, the results suggest that speech act recognition may be an important component of the comprehension of conversational remarks.

7.
In order to explore the function of imitation for first language learning, imitative and spontaneous utterances were compared in the naturalistic speech of six children in the course of their development from single-word utterances (when mean length of utterance was essentially 1.0) to the emergence of grammar (when mean length of utterance approached 2.0). The relative extent of imitation, and lexical and grammatical variation in imitative and spontaneous speech, were determined. There were inter-subject differences in the extent of imitation, but each child was consistent in the tendency to imitate or not to imitate across time. For those children who imitated, there were both lexical and grammatical differences in imitative and spontaneous speech, and a developmental shift from imitative to spontaneous use of particular words and semantic-syntactic relations between words. The results are discussed as evidence of an active processing of model utterances, relative to the contexts in which they occur, for information for language learning.

8.
The lexicon of 6-month-olds comprises names and body part words. Unlike names, body part words do not often occur in isolation in the input. This presents a puzzle: how have infants been able to pull these words out of the continuous stream of speech at such a young age? We hypothesize that caregivers' interactions directed at and on the infant's body may be at the root of their early acquisition of body part words. An artificial language segmentation study shows that experimenter-provided synchronous tactile cues help 4-month-olds to find words in continuous speech. A follow-up study suggests that this facilitation cannot be reduced to the highly social situation in which the directed interaction occurs. Taken together, these studies suggest that direct caregiver-infant interaction, exemplified in this study by touch cues, may play a key role in infants' ability to find word boundaries, and that early vocabulary items may consist of words often linked with caregiver touches. A video abstract of this article can be viewed at http://youtu.be/NfCj5ipatyE

9.
A series of 15 experiments was conducted to explore English-learning infants' capacities to segment bisyllabic words from fluent speech. The studies in Part I focused on 7.5-month-olds' abilities to segment words with strong/weak stress patterns from fluent speech. The infants demonstrated an ability to detect strong/weak target words in sentential contexts. Moreover, the findings indicated that the infants were responding to the whole words and not just to their strong syllables. In Part II, a parallel series of studies was conducted examining 7.5-month-olds' abilities to segment words with weak/strong stress patterns. In contrast with the results for strong/weak words, 7.5-month-olds appeared to missegment weak/strong words. They demonstrated a tendency to treat strong syllables as markers of word onsets. In addition, when weak/strong words co-occurred with a particular following weak syllable (e.g., "guitar is"), 7.5-month-olds appeared to misperceive these as strong/weak words (e.g., "taris"). The studies in Part III examined the abilities of 10.5-month-olds to segment weak/strong words from fluent speech. These older infants were able to segment weak/strong words correctly from the various contexts in which they appeared. Overall, the findings suggest that English learners may rely heavily on stress cues when they begin to segment words from fluent speech. However, within a few months' time, infants learn to integrate multiple sources of information about the likely boundaries of words in fluent speech.

10.
Monaghan, P., & Mattock, K. (2012). Cognition, 123(1), 133-143.
Learning word-referent mappings is complex because the word and its referent tend to co-occur with multiple other words and potential referents. Such complexity has led to proposals for a host of constraints on learning, though how these constraints may interact has not yet been investigated in detail. In this paper, we investigated interactions between word co-occurrence constraints and cross-situational statistics in word learning. Analyses of child-directed speech revealed that when both object-referring and non-referring words occurred in the utterance, referring words were more likely to be preceded by a determiner than when the utterance contained only referring words. In a word learning study containing both referring and non-referring words, learning was facilitated when non-referring words contributed grammatical constraints analogous to determiners. The complexity of multi-word utterances provides an opportunity for co-occurrence constraints to contribute to word-referent mapping, and the learning mechanism is able to integrate these multiple sources of information.
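The cross-situational statistics at issue here can be made concrete with a toy sketch: a learner that accumulates word-object co-occurrence counts across individually ambiguous scenes and maps each word to its most frequent companion object. The words, objects, and scenes below are invented for illustration, and the count-and-maximize rule is a deliberately minimal stand-in for the mechanism the paper investigates (which additionally integrates co-occurrence constraints such as determiner cues).

```python
from collections import defaultdict

def cross_situational_learn(scenes):
    """Accumulate word-referent co-occurrence counts across ambiguous scenes.

    Each scene pairs the words of an utterance with the set of objects in
    view; no single scene disambiguates, but the counts do over time.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in scenes:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    # Map each word to its most frequently co-occurring referent.
    return {w: max(obs, key=obs.get) for w, obs in counts.items()}

# Three ambiguous scenes: each utterance names both objects in view, so no
# single scene reveals which word goes with which object.
scenes = [
    (["ball", "dog"], {"BALL", "DOG"}),
    (["ball", "cup"], {"BALL", "CUP"}),
    (["dog", "cup"], {"DOG", "CUP"}),
]
print(cross_situational_learn(scenes))
# → {'ball': 'BALL', 'dog': 'DOG', 'cup': 'CUP'}
```

Each word co-occurs with its true referent in two scenes but with every distractor in only one, so the maximum-count rule converges despite every individual scene being ambiguous.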

11.
12.
In two experiments, we investigated whether simultaneous speech reading can influence the detection of speech in envelope-matched noise. Subjects attempted to detect the presence of a disyllabic utterance in noise while watching a speaker articulate a matching or a non-matching utterance. Speech detection was not facilitated by an audio-visual match, which suggests that listeners relied on low-level auditory cues whose perception was immune to cross-modal top-down influences. However, when the stimuli were words (Experiment 1), there was a (predicted) relative shift in bias, suggesting that the masking noise itself was perceived as more speechlike when its envelope corresponded to the visual information. This bias shift was absent, however, with non-word materials (Experiment 2). These results, which resemble earlier findings obtained with orthographic visual input, indicate that the mapping from sight to sound is lexically mediated even when, as in the case of the articulatory-phonetic correspondence, the cross-modal relationship is non-arbitrary.

13.
Many models of speech production have attempted to explain dysfluent speech. Most models assume that the disruptions that occur when speech is dysfluent arise because speakers make errors while planning an utterance. In this contribution, a model of the serial order of speech is described that does not make this assumption. It involves the coordination or 'interlocking' of linguistic planning and execution stages at the language-speech interface. The model is examined to determine whether it can distinguish two forms of dysfluent speech (stuttered and agrammatic speech) that are characterized by iteration and omission of whole words and parts of words.

14.
Most speech research with infants occurs in quiet laboratory rooms with no outside distractions. However, in the real world, speech directed to infants often occurs in the presence of other competing acoustic signals. To learn language, infants need to attend to their caregiver’s speech even under less than ideal listening conditions. We examined 7.5-month-old infants’ abilities to selectively attend to a female talker’s voice when a male voice was talking simultaneously. In three experiments, infants heard a target voice repeating isolated words while a distractor voice spoke fluently at one of three different intensities. Subsequently, infants heard passages produced by the target voice containing either the familiar words or novel words. Infants listened longer to the familiar words when the target voice was 10 dB or 5 dB more intense than the distractor, but not when the two voices were equally intense. In a fourth experiment, the assignment of words and passages to the familiarization and testing phases was reversed so that the passages and distractors were presented simultaneously during familiarization, and the infants were tested on the familiar and unfamiliar isolated words. During familiarization, the passages were 10 dB more intense than the distractors. The results suggest that this may be at the limits of what infants at this age can do in separating two different streams of speech. In conclusion, infants have some capacity to extract information from speech even in the face of a competing acoustic voice.

15.
Making a self-repair in speech typically proceeds in three phases. The first phase involves the monitoring of one's own speech and the interruption of the flow of speech when trouble is detected. From an analysis of 959 spontaneous self-repairs it appears that interrupting follows detection promptly, with the exception that correct words tend to be completed. Another finding is that detection of trouble improves towards the end of constituents. The second phase is characterized by hesitation, pausing, but especially the use of so-called editing terms. Which editing term is used depends on the nature of the speech trouble in a rather regular fashion: speech errors induce other editing terms than words that are merely inappropriate, and trouble which is detected quickly by the speaker is preferably signalled by the use of ‘uh’. The third phase consists of making the repair proper. The linguistic well-formedness of a repair is not dependent on the speaker's respecting the integrity of constituents, but on the structural relation between original utterance and repair. A bi-conditional well-formedness rule links this relation to a corresponding relation between the conjuncts of a coordination. It is suggested that a similar relation holds also between question and answer. In all three cases the speaker respects certain structural commitments derived from an original utterance. It was finally shown that the editing term plus the first word of the repair proper almost always contain sufficient information for the listener to decide how the repair should be related to the original utterance. Speakers almost never produce misleading information in this respect. It is argued that speakers have little or no access to their speech production process; self-monitoring is probably based on parsing one's own inner or overt speech.

16.
To explore the relationship between short-term memory and speech production, we developed a speech error induction technique. The technique, which was adapted from a Japanese word game, exposed participants to an auditory distractor word immediately before the utterance of a target word. In Experiment 1, distractor words that were phonologically similar to the target word led to a greater number of errors in speaking the target than did dissimilar distractor words. Furthermore, the speech error scores were significantly correlated with memory span scores. In Experiment 2, memory span scores were again correlated with the rate of speech errors induced by task-irrelevant speech sounds. Experiment 3 showed a strong irrelevant-sound effect in the serial recall of nonwords. The magnitude of the irrelevant-sound effects was not affected by phonological similarity between the to-be-remembered nonwords and the irrelevant-sound materials. Analysis of recall errors in Experiment 3 also suggested that there were no essential differences in recall error patterns between the dissimilar and similar irrelevant-sound conditions. We proposed two different underlying mechanisms in immediate memory, one operating via the phonological short-term memory store and the other via the processes underpinning speech production.

17.
The purpose of this study was to compare mothers' and fathers' speech to their preverbal infants in a teaching situation. Thirty-two parents of 16 8-month-olds were asked to teach their infants to put a small cube into a cup. Infant Gender (2) x Birth Order (2) x Parent (2) analyses of variance were performed with repeated measures on parent. Results indicated that fathers issued more utterances and used more words per utterance than did mothers. Although there was no difference in the proportion of imperatives used by mothers and fathers, fathers' imperatives were significantly longer than mothers'; this difference was not evident for utterances that contained indirect instructions. Mothers tended to use more exact repetitions. There were differences in parental speech related to infant gender: parents directed more utterances, particularly utterances that contained negative statements, imperatives, and exhortations, to girls than to boys. Infant Gender x Parent effects for imperatives and exhortations indicated that these differences were especially true for fathers. Overall, it appeared that fathers made greater efforts to control the situation and to direct their infants' behavior, which might have reflected mothers' and fathers' different perceptions of both their infants' ability and their own role as teachers.

18.
Children's daily contexts shape their experiences. In this study, we assessed whether variations in infant placement (e.g., held, bouncy seat) are associated with infants' exposure to adult speech. Using repeated survey sampling of mothers and continuous audio recordings, we tested whether the use of independence-supporting placements was associated with adult speech exposure in a Southeastern U.S. sample of 60 4- to 6-month-old infants (38% male, predominately White, not Hispanic/Latinx, from higher socioeconomic status households). Within-subject analyses indicated that independence-supporting placements were associated with exposure to fewer adult words in the moment. Between-subjects analyses indicated that infants more frequently reported to be in independence-supporting placements that also provided posture support (i.e., an exersaucer) were exposed to relatively fewer adult words and less consistent adult speech across the day. These findings indicate that infants' opportunities for exposure to adult speech ‘in the wild’ may vary based on immediate physical context.

19.
The minimal unit of phonological encoding: prosodic or lexical word
Wheeldon, L. R., & Lahiri, A. (2002). Cognition, 85(2), B31-B41.
Wheeldon and Lahiri (Journal of Memory and Language 37 (1997) 356) used a prepared speech production task (Sternberg, S., Monsell, S., Knoll, R. L., & Wright, C. E. (1978). The latency and duration of rapid movement sequences: comparisons of speech and typewriting. In G. E. Stelmach (Ed.), Information processing in motor control and learning (pp. 117-152). New York: Academic Press; Sternberg, S., Wright, C. E., Knoll, R. L., & Monsell, S. (1980). Motor programs in rapid speech: additional evidence. In R. A. Cole (Ed.), The perception and production of fluent speech (pp. 507-534). Hillsdale, NJ: Erlbaum) to demonstrate that the latency to articulate a sentence is a function of the number of phonological words it comprises. Latencies for the sentence [Ik zoek het] [water] 'I seek the water' were shorter than latencies for sentences like [Ik zoek] [vers] [water] 'I seek fresh water'. We extend this research by examining the prepared production of utterances containing phonological words that are less than a lexical word in length. Dutch compounds (e.g. ooglid 'eyelid') form a single morphosyntactic word and a phonological word, which in turn includes two phonological words. We compare their prepared production latencies to those of syntactic phrases consisting of an adjective and a noun (e.g. oud lid 'old member'), which comprise two morphosyntactic and two phonological words, and to those of morphologically simple words (e.g. orgel 'organ'), which comprise one morphosyntactic and one phonological word. Our findings demonstrate that the effect is limited to phrasal-level phonological words, suggesting that production models need to make a distinction between lexical and phrasal phonology.

20.
In previous research, Saffran and colleagues [Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928; Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: the role of distributional cues. Journal of Memory and Language, 35, 606-621] have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. They also showed that a similar learning mechanism operates with musical stimuli [Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70, 27-52]. In this work we combined linguistic and musical information and compared language learning based on speech sequences to language learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. The results confirmed this hypothesis, showing a strong learning facilitation for song compared to speech. Most importantly, the present results show that learning a new language, especially in the first learning phase wherein one needs to segment new words, may benefit greatly from the motivational and structuring properties of music in song.
