Similar Articles
20 similar articles retrieved.
1.
When learning language, young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two‐stage computational analysis of a large corpus of English child‐directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges – the beginning and ending phonemes of words – to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child‐directed speech is possible by attending to the statistics of single phoneme transitions and word‐initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.  相似文献   
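The two-stage procedure summarized in this abstract lends itself to a compact illustration. The Python sketch below is not the authors' model; it only shows the general idea under stated assumptions: estimate phoneme-to-phoneme transition probabilities, posit word boundaries at local minima of those probabilities, and score lexical categories from counts of word-initial and word-final phonemes. The toy corpus, single-character "phonemes", and edge counts are invented for illustration.

```python
# Minimal sketch (not the authors' code) of the two-stage idea described above:
# 1) segment a phoneme stream at local minima of phoneme-to-phoneme transition
#    probability, and 2) guess a word's lexical category from its edge phonemes.
from collections import Counter

def transition_probs(utterances):
    """Estimate P(next phoneme | current phoneme) from unsegmented utterances."""
    bigrams, unigrams = Counter(), Counter()
    for phones in utterances:
        for a, b in zip(phones, phones[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    return {ab: n / unigrams[ab[0]] for ab, n in bigrams.items()}

def segment(phones, tp):
    """Insert a word boundary wherever transition probability is a local minimum."""
    words, start = [], 0
    probs = [tp.get((a, b), 0.0) for a, b in zip(phones, phones[1:])]
    for i in range(1, len(probs) - 1):
        if probs[i] < probs[i - 1] and probs[i] < probs[i + 1]:
            words.append(phones[start:i + 1])
            start = i + 1
    words.append(phones[start:])
    return words

def edge_category(word, edge_counts):
    """Pick the category whose members most often share this word's first/last phoneme."""
    scores = {cat: counts[("first", word[0])] + counts[("last", word[-1])]
              for cat, counts in edge_counts.items()}
    return max(scores, key=scores.get)

# Toy usage: phonemes are single characters; real work would use a child-directed corpus.
utterances = [list("thedoggoes"), list("thedogran"), list("seethedog")]
tp = transition_probs(utterances)
print(segment(list("thedogran"), tp))

edge_counts = {
    "noun": Counter({("first", "d"): 3, ("last", "g"): 3}),
    "verb": Counter({("first", "r"): 2, ("last", "n"): 2}),
}
print(edge_category(list("dog"), edge_counts))
```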

2.
Many studies have observed phonetic and phonological differences between function words and content words. However, as many of the most commonly cited function words are also very high in frequency, it is unclear whether these differences are the result of syntactic category or word frequency. This study attempts to determine whether syntactically defined function words are indeed phonologically and phonetically reduced or assimilated when word frequency is balanced. Three experiments were designed to distinguish the relative contributions of the factors of category and frequency on phonetic and phonological reduction and assimilation. Overall results suggest that syntactic category and word frequency interact with phonetic and phonological processes in a more complex way than previously believed. Experiment 1 measured final t/d dropping, a reduction process, using electropalatography (EPG). Experiment 2 examined vowel reduction using acoustic measures. In Experiment 3, palatalization, an assimilation process, was examined using EPG. Results showed that t/d dropping responds to the factor of syntactic category, whereas palatalization is affected by word frequency; vowel reduction responded to both factors, with a dominant syntactic category effect and a secondary within-category frequency effect. The implications of these findings for models of lexical representation and theories of language acquisition are discussed.  相似文献   

3.
Phonetic categorization in auditory word perception
To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.

4.
Neurobiological models of reading account for two ways in which orthography is converted to phonology: (1) familiar words, particularly those with exceptional spelling-sound mappings (e.g., shoe) access their whole-word lexical representations in the ventral visual stream, and (2) orthographically unfamiliar words, particularly those with regular spelling-sound mappings (i.e., pseudohomophones [PHs], which are orthographically novel but sound like real words; e.g., shue) are phonetically decoded via sublexical processing in the dorsal visual stream. The present study used a naming task in order to compare naming reaction time (RT) and response duration (RD) of exception and regular words to their PH counterparts. We replicated our earlier findings with words, and extended them to PH phonetic decoding by showing a similar effect on RT and RD of matched PHs. Given that the shorter RDs for exception words can be attributed to the benefit of whole-word processing in the orthographic word system, and the longer RTs for exception words to the conflict with phonetic decoding, our PH results demonstrate that phonetic decoding also involves top-down feedback from phonological lexical representations (e.g., activated by shue) to the orthographic representations of the corresponding correct word (e.g., shoe). Two computational models were tested for their ability to account for these effects: the DRC and the CDP+. The CDP+ fared best as it was capable of simulating both the regularity and stimulus type effect on RT for both word and PH identification, although not their over-additive interaction. Our results demonstrate that both lexical reading and phonetic decoding elicit a regularity dissociation between RT and RD that provides important constraints to all models of reading, and that phonetic decoding results in top-down feedback that bolsters the orthographic lexical reading process.  相似文献   

5.
A categorization paradigm was used to investigate the relations between lexical and conceptual connections in bilingual memory. Fifty-one more fluent and less fluent English-French bilinguals viewed category names (e.g., vegetable) and then decided whether a target word (e.g., peas) was a member of that category. The category names and target words appeared in both English and French across experimental conditions. Because category matching requires access to conceptual memory, only relatively fluent bilinguals, who are able to directly access meaning for their second language, were expected to be able to use category information across languages. The performance of less-fluent bilinguals was expected to reflect reliance on lexical-level connections between languages, requiring translation of second-language words. The results provided evidence for concept mediation by more-fluent bilinguals, because categorization latencies were independent of the language of the category name. However, the performance of less-fluent bilinguals indicated that they did not follow a simple lexical translation strategy. Instead, these subjects were faster at categorizing words in both languages when the language of the category name matched the language of the target word, suggesting that they were able to access limited conceptual information from the second language. A model of the development of concept mediation during second language acquisition is described.

6.
曾涛  段妞妞 《心理科学》2014,37(3):587-592
Through a longitudinal study of five Mandarin-speaking children, this paper characterizes children's language during the vocabulary spurt along three dimensions: vocabulary growth, the emergence of syntax, and semantic development. The results show that vocabulary grows rapidly at around 18 months, marking the onset of the vocabulary spurt; that word combinations, which signal the emergence of syntax, are closely tied in time to the vocabulary spurt; and that semantic development during the spurt is substantial, reflected in a gradual decrease in overextended words and a gradual increase in subordinate-level words. The vocabulary spurt is a milestone in the early language development of Mandarin-speaking children, indicating concurrent advances at the lexical, syntactic, and semantic levels.

7.
Although previous research has established that multiple top-down factors guide the identification of words during speech processing, the ultimate range of information sources that listeners integrate from different levels of linguistic structure is still unknown. In a set of experiments, we investigate whether comprehenders can integrate information from the 2 most disparate domains: pragmatic inference and phonetic perception. Using contexts that trigger pragmatic expectations regarding upcoming coreference (expectations for either he or she), we test listeners' identification of phonetic category boundaries (using acoustically ambiguous words on the /hi/~/ʃi/ continuum). The results indicate that, in addition to phonetic cues, word recognition also reflects pragmatic inference. These findings are consistent with evidence for top-down contextual effects from lexical, syntactic, and semantic cues, but they extend this previous work by testing cues at the pragmatic level and by eliminating a statistical-frequency confound that might otherwise explain the previously reported results. We conclude by exploring the time course of this interaction and discussing how different models of cue integration could be adapted to account for our results.

8.
Yu C  Ballard DH  Aslin RN 《Cognitive Science》2005,29(6):961-1005
We examine the influence of inferring interlocutors' referential intentions from their body movements at the early stage of lexical acquisition. By testing human participants and comparing their performances in different learning conditions, we find that those embodied intentions facilitate both word discovery and word-meaning association. In light of empirical findings, the main part of this article presents a computational model that can identify the sound patterns of individual words from continuous speech, using nonlinguistic contextual information, and employ body movements as deictic references to discover word-meaning associations. To our knowledge, this work is the first model of word learning that not only learns lexical items from raw multisensory signals to closely resemble infant language development from natural environments, but also explores the computational role of social cognitive skills in lexical acquisition.  相似文献   

9.
Word recognition is generally assumed to be achieved via competition in the mental lexicon between phonetically similar word forms. However, this process has so far been examined only in the context of auditory phonetic similarity. In the present study, we investigated whether the influence of word-form similarity on word recognition holds in the visual modality and with the patterns of visual phonetic similarity. Deaf and hearing participants identified isolated spoken words presented visually on a video monitor. On the basis of computational modeling of the lexicon from visual confusion matrices of visual speech syllables, words were chosen to vary in visual phonetic distinctiveness, ranging from visually unambiguous (lexical equivalence class [LEC] size of 1) to highly confusable (LEC size greater than 10). Identification accuracy was found to be highly related to the word LEC size and frequency of occurrence in English. Deaf and hearing participants did not differ in their sensitivity to word LEC size and frequency. The results indicate that visual spoken word recognition shows strong similarities with its auditory counterpart in that the same dependencies on lexical similarity and word frequency are found to influence visual speech recognition accuracy. In particular, the results suggest that stimulus-based lexical distinctiveness is a valid construct to describe the underlying machinery of both visual and auditory spoken word recognition.  相似文献   
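The lexical equivalence class (LEC) construct used in this abstract can be made concrete with a short sketch. The Python fragment below is not the study's modeling pipeline: it assumes a hand-made phoneme-to-viseme grouping in place of the empirically derived confusion matrices, and simply buckets words whose transcriptions collapse to the same viseme string; LEC size is the size of each bucket.

```python
# Minimal sketch (illustrative assumptions, not the study's actual procedure) of how
# lexical equivalence classes (LECs) can be derived: phonemes that are visually
# confusable on the lips are collapsed into a single viseme class, and words mapping
# to the same viseme string form one LEC. The viseme groups and lexicon are invented.
from collections import defaultdict

VISEME_CLASS = {            # assumed groupings of visually confusable phonemes
    "p": "B", "b": "B", "m": "B",          # bilabials look alike
    "f": "F", "v": "F",                    # labiodentals look alike
    "t": "T", "d": "T", "n": "T", "s": "T", "z": "T",
    "k": "K", "g": "K",
    "ae": "A", "eh": "A",                  # some vowels are hard to distinguish
    "iy": "I",
}

def viseme_key(phonemes):
    """Collapse a phonemic transcription into its viseme string."""
    return "-".join(VISEME_CLASS.get(p, p) for p in phonemes)

def lexical_equivalence_classes(lexicon):
    """Group words whose transcriptions are visually indistinguishable."""
    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[viseme_key(phonemes)].append(word)
    return classes

# Toy lexicon: LEC size 1 means visually unambiguous; larger sizes mean confusable.
lexicon = {
    "pat": ["p", "ae", "t"],
    "bat": ["b", "ae", "t"],
    "mad": ["m", "ae", "d"],   # same viseme string as pat/bat -> one LEC of size 3
    "feet": ["f", "iy", "t"],  # its own LEC of size 1
}
for key, words in lexical_equivalence_classes(lexicon).items():
    print(key, words, "LEC size =", len(words))
```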

10.
The place of phonetic analysis in the perception of words is unclear. While some theories assume fully specified phonemic strings as input, other theories assume that little analysis occurs. An earlier experiment by Streeter and Nigro (1979) produced evidence, based on auditorily presented words with misleading acoustic cues, that lexical decisions were based on mostly unanalyzed patterns, since word judgments were delayed by misleading information whereas nonword judgments were not. The present studies expand that work to a different set of cues, and to cases in which the overriding cue came first. An additional task, auditory naming, was used to examine the effects when the decision stage is less demanding. For the lexical decision task, misleading information slowed the responses, for both words and nonwords. In the auditory naming task, only the slower responses were affected. These results suggest that phonetic conflicts are resolved prior to lexical access.

11.
To examine whether children with developmental dyslexia show deficits in saccade targeting during novel word learning, we compared changes in saccade-targeting patterns during repeated learning of novel words between children with developmental dyslexia and chronological-age-matched children. Using a repeated novel-word learning paradigm with both groups as participants, we found that: (1) compared with the age-matched group, children with developmental dyslexia made shorter incoming saccades into novel words, and their initial fixations landed closer to the beginning of the word; (2) age-matched children were better able to adjust their saccade-targeting patterns as a function of the number of learning exposures, that is, as exposures to a novel word increased, their incoming and outgoing saccade amplitudes increased and their initial fixations landed closer to the word center, whereas children with developmental dyslexia showed an increase only in outgoing saccade amplitude, and that increase was significantly smaller than in the age-matched group. The results indicate that children with developmental dyslexia show deficits both in saccade targeting during novel word learning and in using repeated exposure to adjust saccade targeting.

12.
In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition--inhibitory interaction among words in speech comprehension--can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.  相似文献   

13.
When perceiving spoken language, listeners must match the incoming acoustic phonetic input to lexical representations in memory. Models that quantify this process propose that the input activates multiple lexical representations in parallel and that these activated representations compete for recognition (Weber & Scharenborg, 2012). In two experiments, we assessed how grammatically constraining contexts alter the process of lexical competition. The results suggest that grammatical context constrains the lexical candidates that are activated to grammatically appropriate competitors. Stimulus words with little competition from items of the same grammatical class benefit more from the addition of grammatical context than do words with more within-class competition. The results provide evidence that top-down contextual information is integrated in the early stages of word recognition. We propose adding a grammatical class level of analysis to existing models of word recognition to account for these findings.  相似文献   

14.
Three experiments investigated the impact of five lexical variables (instance dominance, category dominance, word frequency, word length in letters, and word length in syllables) on performance in three different tasks involving word recognition: category verification, lexical decision, and pronunciation. Although the same set of words was used in each task, the relationship of the lexical variables to reaction time varied significantly with the task within which the words were embedded. In particular, the effect of word frequency was minimal in the category verification task, whereas it was significantly larger in the pronunciation task and significantly larger yet in the lexical decision task. It is argued that decision processes having little to do with lexical access accentuate the word-frequency effect in the lexical decision task and that results from this task have questionable value in testing the assumption that word frequency orders the lexicon, thereby affecting time to access the mental lexicon. A simple two-stage model is outlined to account for the role of word frequency and other variables in lexical decision. The model is applied to the results of the reported experiments and some of the most important findings in other studies of lexical decision and pronunciation.  相似文献   

15.
A theory of lexical access in speech production
Levelt WJ  Roelofs A  Meyer AS 《The Behavioral and brain sciences》1999,22(1):1-38; discussion 38-75
Preparing words in speech production is normally a fast and accurate process. We generate them two or three per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feed-forward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.  相似文献   
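Because the abstract enumerates the processing stages explicitly, a toy pipeline can make the staged, feed-forward ordering concrete. The sketch below is purely schematic and is not the WEAVER++ model; the toy lexicon, string representations, and monitoring check are assumptions added for illustration.

```python
# Schematic sketch only: the staged, feed-forward ordering described in the abstract
# (conceptual preparation -> lexical selection -> morphological and phonological
# encoding -> phonetic encoding -> articulation), plus a self-monitoring check on
# internal speech. This illustrates the ordering, not WEAVER++ itself.
TOY_LEXICON = {
    "CAT": {"lemma": "cat", "morphemes": ["cat"], "phonology": "k ae t"},
    "DOGS": {"lemma": "dog", "morphemes": ["dog", "-s"], "phonology": "d ao g z"},
}

def conceptual_preparation(picture):          # stage 1: pick a lexical concept
    return picture.upper()

def lexical_selection(concept):               # stage 2: select a lemma
    return TOY_LEXICON[concept]["lemma"]

def form_encoding(concept):                   # stage 3: morphological + phonological encoding
    entry = TOY_LEXICON[concept]
    return entry["morphemes"], entry["phonology"]

def phonetic_encoding(phonology):             # stage 4: build an articulatory plan
    return [f"gesture({seg})" for seg in phonology.split()]

def articulate(plan):                         # stage 5: overt speech
    return " ".join(plan)

def monitor(internal_phonology, concept):     # output control: check internal speech against the target
    return internal_phonology == TOY_LEXICON[concept]["phonology"]

def name_picture(picture):
    concept = conceptual_preparation(picture)
    lemma = lexical_selection(concept)
    morphemes, phonology = form_encoding(concept)
    plan = phonetic_encoding(phonology)
    assert monitor(phonology, concept), "self-monitor would trigger a repair here"
    return lemma, morphemes, articulate(plan)

print(name_picture("cat"))
```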

16.
Phonological processing and lexical access in aphasia
This study explored the relationship between on-line processing of phonological information and lexical access in aphasic patients. A lexical decision paradigm was used in which subjects were presented auditorily with pairs of words or word-like stimuli and were asked to make a lexical decision about the second stimulus in the pair. The initial phonemes of the first word primes, which were semantically related to the real word targets, were systematically changed by one or more than one phonetic feature, e.g., cat-dog, gat-dog, wat-dog. Each of these priming conditions was compared to an unrelated word baseline condition, e.g., nurse-dog. Previous work with normals showed that even a nonword stimulus receives a lexical interpretation if it shares a sufficient number of phonetic features with an actual word in the listener's lexicon. Results indicated a monotonically decreasing degree of facilitation as a function of phonological distortion. In contrast, fluent aphasics showed priming in all phonological distortion conditions relative to the unrelated word baseline. Nonfluent aphasics showed priming only in the undistorted, related word condition relative to the unrelated word baseline. Nevertheless, in a secondary task requiring patients to make a lexical decision on the nonword primes presented singly, all aphasics showed phonological feature sensitivity. These results suggest deficits for aphasic patients in the various processes contributing to lexical access, rather than impairments at the level of lexical organization or phonological organization.  相似文献   

17.
Subcategorical phonetic mismatches and lexical access.
The place of phonetic analysis in the perception of words is unclear. While some theories assume fully specified phonemic strings as input, other theories assume that little analysis occurs. An earlier experiment by Streeter and Nigro (1979) produced evidence, based on auditorily presented words with misleading acoustic cues, that lexical decisions were based on mostly unanalyzed patterns, since word judgments were delayed by misleading information whereas nonword judgments were not. The present studies expand that work to a different set of cues, and to cases in which the overriding cue came first. An additional task, auditory naming, was used to examine the effects when the decision stage is less demanding. For the lexical decision task, misleading information slowed the responses, for both words and nonwords. In the auditory naming task, only the slower responses were affected. These results suggest that phonetic conflicts are resolved prior to lexical access.  相似文献   

18.
Presenting words in MiXeD cAsE has previously been shown to disrupt naming performance of adult readers. This effect is greater on nonwords than it is on real words. There have been two main accounts of this interaction. First, case mixing may disrupt naming via non-lexical spelling-to-sound correspondences to a greater extent than it disrupts lexical naming. Alternatively, stored lexical knowledge of words may feed back to a visual analysis level during processing of a visually presented word, helping known words to overcome the visual disruption caused by case mixing. In the present study, when young children (aged 6 and 8 years) were tested, case mixing did not disrupt nonword naming more than word naming. However, slightly older children (aged 9 years) demonstrated the same pattern of performance as adults. These results support the view that top-down lexical information can aid overcoming visual disruption to words, and that beginning readers have not developed the stored word knowledge necessary to allow this. In addition, a greater case-mixing effect on high-frequency words for the youngest age group (6-year-olds) suggests that their word recognition may be based more on wholistic visual features than is that of older children.

19.
The current study examines the relationship between 18‐month‐old toddlers’ vocabulary size and their ability to inhibit attention to no‐longer relevant information using the backward semantic inhibition paradigm. When adults switch attention from one semantic category to another, the former and no‐longer‐relevant semantic category becomes inhibited, and subsequent attention to an item that belongs to the inhibited semantic category is impaired. Here we demonstrate that 18‐month‐olds can inhibit attention to no‐longer relevant semantic categories, but only if they have a relatively large vocabulary. These findings suggest that an increased number of items (word knowledge) in the toddler lexical‐semantic system during the “vocabulary spurt” at 18‐months may be an important driving force behind the emergence of a semantic inhibitory mechanism. Possessing more words in the mental lexicon likely results in the formation of inhibitory links between words, which allow toddlers to select and deselect words and concepts more efficiently. Our findings highlight the role of vocabulary growth in the development of inhibitory processes in the emerging lexical‐semantic system.  相似文献   

20.
Recent studies of lexical access in Broca's aphasics suggest that lexical activation levels are reduced in these patients. The present study compared the performance of Broca's aphasics with that of normal subjects in an auditory semantic priming paradigm. Lexical decision times were measured in response to word targets preceded by an intact semantically related prime word ("cat"-"dog"), by a related prime in which one segment was acoustically altered to produce a poorer phonetic exemplar ("c*at"-"dog"), and by a semantically unrelated prime ("ring"-"dog"). The effects of the locus of the acoustic distortion within the prime word (initial or final position) and the presence of potential lexical competitors ("cat" --> /gæt/ versus "coat" --> "goat") were examined. In normal subjects, the acoustic manipulations produce a small, short-lived reduction in semantic facilitation irrespective of the position of the distortion in the prime word or the presence of a voiced lexical competitor. In contrast, Broca's aphasics showed a large and lasting reduction in priming in response to word-initial acoustic distortions, but only a weak effect of word-final distortions on priming. In both phonetic positions, the effect of distortion was greater for prime words with a lexical competitor. These findings are compatible with the claim that Broca's aphasics have reduced lexical activation levels, which may result in a disruption of the bottom-up access of words on the basis of acoustic input as well as increased vulnerability to competition between acoustically similar lexical items.
