Similar Articles
20 similar articles found.
1.
Word recognition is generally assumed to be achieved via competition in the mental lexicon between phonetically similar word forms. However, this process has so far been examined only in the context of auditory phonetic similarity. In the present study, we investigated whether the influence of word-form similarity on word recognition holds in the visual modality and with the patterns of visual phonetic similarity. Deaf and hearing participants identified isolated spoken words presented visually on a video monitor. On the basis of computational modeling of the lexicon from visual confusion matrices of visual speech syllables, words were chosen to vary in visual phonetic distinctiveness, ranging from visually unambiguous (lexical equivalence class [LEC] size of 1) to highly confusable (LEC size greater than 10). Identification accuracy was found to be highly related to the word LEC size and frequency of occurrence in English. Deaf and hearing participants did not differ in their sensitivity to word LEC size and frequency. The results indicate that visual spoken word recognition shows strong similarities with its auditory counterpart in that the same dependencies on lexical similarity and word frequency are found to influence visual speech recognition accuracy. In particular, the results suggest that stimulus-based lexical distinctiveness is a valid construct to describe the underlying machinery of both visual and auditory spoken word recognition.
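The lexical-equivalence-class idea above can be sketched in a few lines: words whose phoneme strings collapse to the same viseme string are mutually confusable by eye and so share an LEC. This is a toy illustration; the viseme classes and the four-word lexicon below are invented for the example, not taken from the study's confusion matrices.

```python
from collections import defaultdict

# Illustrative viseme mapping: phonemes that look alike on the lips
# collapse to the same visual class (an assumption for this sketch).
VISEME = {
    "b": "B", "p": "B", "m": "B",      # bilabials are visually similar
    "f": "F", "v": "F",                # labiodentals
    "t": "T", "d": "T", "n": "T",
    "ae": "A", "ih": "I",
}

def lec_partition(lexicon):
    """Group words into lexical equivalence classes (LECs): words whose
    phoneme strings map to the same viseme string are visually
    ambiguous with one another."""
    classes = defaultdict(list)
    for word, phones in lexicon.items():
        key = tuple(VISEME[p] for p in phones)
        classes[key].append(word)
    return classes

lexicon = {
    "bat": ["b", "ae", "t"],
    "pat": ["p", "ae", "t"],
    "mat": ["m", "ae", "t"],
    "fit": ["f", "ih", "t"],
}

classes = lec_partition(lexicon)
sizes = {w: len(members) for members in classes.values() for w in members}
print(sizes)  # bat/pat/mat share one LEC (size 3); fit is unambiguous (size 1)
```

On this scheme, identification accuracy would be expected to fall as LEC size grows, since the visual signal alone cannot separate class members.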

2.
A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.

3.
P. Zwitserlood, Cognition, 1989, 32(1), 25-64
Models of word recognition differ with respect to where the effects of sentential-semantic context are to be located. Using a crossmodal priming technique, this research investigated the availability of lexical entries as a function of stimulus information and contextual constraint. To investigate the exact locus of the effects of sentential contexts, probes that were associatively related to contextually appropriate and inappropriate words were presented at various positions before and concurrent with the spoken word. The results show that sentential contexts do not preselect a set of contextually appropriate words before any sensory information about the spoken word is available. Moreover, during lexical access, defined here as the initial contact with lexical entries and their semantic and syntactic properties, both contextually appropriate and inappropriate words are activated. Contextual effects are located after lexical access, at a point in time during word processing where the sensory input by itself is still insufficiently informative to disambiguate between the activated entries. This suggests that sentential-semantic contexts have their effects during the process of selecting one of the activated candidates for recognition.

4.
The authors report 3 dual-task experiments concerning the locus of frequency effects in word recognition. In all experiments, Task 1 entailed a simple perceptual choice and Task 2 involved lexical decision. In Experiment 1, an underadditive effect of word frequency arose for spoken words. Experiment 2 also showed underadditivity for visual lexical decision. It was concluded that word frequency exerts an influence prior to any dual-task bottleneck. A related finding in similar dual-task experiments is Task 2 response postponement at short stimulus onset asynchronies. This was explored in Experiment 3, and it was shown that response postponement was equivalent for both spoken and visual word recognition. These results imply that frequency-sensitive processes operate early and automatically.

5.
Circumstances in which the speech input is presented in sub-optimal conditions generally lead to processing costs affecting spoken word recognition. The current study indicates that some processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, the reliance on semantic information increases. Speech rate was not observed to influence semantic access, except in the setting in which unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.

6.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical. The present experiments examined these predictions by measuring the influence of a cross-modal word context on word target discrimination. The results provide constraints on the types of connections that can exist between orthographic lexical representations and phonological lexical representations.

7.
A widely agreed-upon feature of spoken word recognition is that multiple lexical candidates in memory are simultaneously activated in parallel when a listener hears a word, and that those candidates compete for recognition (Luce, Goldinger, Auer, & Vitevitch, Perception 62:615–625, 2000; Luce & Pisoni, Ear and Hearing 19:1–36, 1998; McClelland & Elman, Cognitive Psychology 18:1–86, 1986). Because the presence of those competitors influences word recognition, much research has sought to quantify the processes of lexical competition. Metrics that quantify lexical competition continuously are more effective predictors of auditory and visual (lipread) spoken word recognition than are the categorical metrics traditionally used (Feld & Sommers, Speech Communication 53:220–228, 2011; Strand & Sommers, Journal of the Acoustical Society of America 130:1663–1672, 2011). A limitation of the continuous metrics is that they are somewhat computationally cumbersome and require access to existing speech databases. This article describes the Phi-square Lexical Competition Database (Phi-Lex): an online, searchable database that provides access to multiple metrics of auditory and visual (lipread) lexical competition for English words, available at www.juliastrand.com/phi-lex.
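As a point of reference for such metrics, the sketch below implements the classic frequency-weighted neighborhood probability, in the spirit of Luce and Pisoni's Neighborhood Activation Model: the target's frequency divided by the summed frequency of the target plus its one-phoneme neighbors. The toy lexicon and frequency counts are invented, and the actual Phi-Lex metrics are computed differently (from graded confusion data), so this is illustration only.

```python
def edit_distance_one(a, b):
    """True if b differs from a by exactly one substitution,
    insertion, or deletion (the classic neighbor definition)."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if la < lb else (b, a)
    # b is a neighbor if deleting one symbol from the longer string
    # yields the shorter one
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def competition_score(target, lexicon):
    """Frequency-weighted neighborhood probability: higher values mean
    the target dominates its neighborhood (less competition)."""
    neighbors = [w for w in lexicon if edit_distance_one(target, w)]
    total = lexicon[target] + sum(lexicon[n] for n in neighbors)
    return lexicon[target] / total

# Invented frequency counts for a five-word toy lexicon
lexicon = {"cat": 100, "bat": 40, "cap": 10, "cast": 5, "dog": 80}
print(round(competition_score("cat", lexicon), 3))  # 100 / (100+40+10+5)
```

A continuous metric in the Phi-Lex spirit would replace the all-or-none neighbor test with a graded confusability weight for every word in the lexicon.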

8.
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at the sensory and phonological levels, reduced vocabulary size, and generalized slowing. None of the existing approaches were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment.

9.
Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. Results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word; moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that in Romance languages syllables are the units for lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.

10.
Gradient effects of within-category phonetic variation on lexical access
In order to determine whether small within-category differences in voice onset time (VOT) affect lexical access, eye movements were monitored as participants indicated which of four pictures was named by spoken stimuli that varied along a 0-40 ms VOT continuum. Within-category differences in VOT resulted in gradient increases in fixations to cross-boundary lexical competitors as VOT approached the category boundary. Thus, fine-grained acoustic/phonetic differences are preserved in patterns of lexical activation for competing lexical candidates and could be used to maximize the efficiency of on-line word recognition.
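The gradient pattern can be caricatured with a logistic function relating VOT to support for the cross-boundary competitor: support is highest near the category boundary and falls off toward the continuum endpoint. The boundary location, slope, and functional form here are illustrative assumptions, not fitted values from the study.

```python
import math

BOUNDARY_MS = 20.0  # assumed /b/-/p/ category boundary on the continuum
SLOPE = 0.25        # assumed steepness of the category function

def competitor_support(vot_ms):
    """Logistic support for the cross-boundary competitor given a
    /b/-side token: rises monotonically as VOT nears the boundary."""
    return 1.0 / (1.0 + math.exp(SLOPE * (BOUNDARY_MS - vot_ms)))

# Within-category steps (all on the /b/ side) yield graded, not flat,
# competitor support -- the signature the eye-movement data showed.
for vot in (0, 5, 10, 15):
    print(vot, round(competitor_support(vot), 3))
```

A strictly categorical model would instead predict identical competitor support for every token on the same side of the boundary.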

11.
Accented speech has been seen as an additional impediment for speech processing; it usually adds linguistic and cognitive load to the listener's task. In the current study we analyse where the processing costs of regional dialects come from, a question that has not been answered yet. We quantify the proficiency of Basque–Spanish bilinguals who have different native dialects of Basque on many dimensions and test for costs at each of three levels of processing: phonemic discrimination, word recognition, and semantic processing. The ability to discriminate a dialect-specific contrast is affected by a bilingual's linguistic background less than lexical access is, and an individual's difficulty in lexical access is correlated with basic discrimination problems. Once lexical access is achieved, dialectal variation has little impact on semantic processing. The results are discussed in terms of the presence or absence of correlations between different processing levels. The implications of the results are considered for how models of spoken word recognition handle dialectal variation.

12.
Many theories of spoken word recognition assume that lexical items are stored in memory as abstract representations. However, recent research (e.g., Goldinger, 1996) has suggested that representations of spoken words in memory are veridical exemplars that encode specific information, such as characteristics of the talker’s voice. If representations are exemplar based, effects of stimulus variation such as that arising from changes in the identity of the talker may have an effect on identification of and memory for spoken words. This prediction was examined for an implicit and explicit task (lexical decision and recognition, respectively). Comparable amounts of repetition priming in lexical decision were found for repeated words, regardless of whether the repetitions were in the same or in different voices. However, reaction times in the recognition task were faster if the repetition was in the same voice. These results suggest a role for both abstract and specific representations in models of spoken word recognition.

13.
Spoken word recognition by eye
Spoken word recognition is thought to be achieved via competition in the mental lexicon between perceptually similar word forms. A review of the development and initial behavioral validations of computational models of visual spoken word recognition is presented, followed by a report of new empirical evidence. Specifically, a replication and extension of Mattys, Bernstein & Auer's (2002) study was conducted with 20 deaf participants who varied widely in speechreading ability. Participants visually identified isolated spoken words. Accuracy of visual spoken word recognition was influenced by the number of visually similar words in the lexicon and by the frequency of occurrence of the stimulus words. The results are consistent with the common view held within auditory word recognition that this task is accomplished via a process of activation and competition in which frequently occurring units are favored. Finally, future directions for visual spoken word recognition are discussed.

14.
M. Ernestus & W. M. Mak, Brain and Language, 2004, 90(1-3), 378-392
This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.

15.
According to current models, spoken word recognition is driven by the phonological properties of the speech signal. However, several studies have suggested that orthographic information also influences recognition in adult listeners. In particular, it has been repeatedly shown that, in the lexical decision task, words that include rimes with inconsistent spellings (e.g., /-ip/ spelled -eap or -eep) are disadvantaged, as compared with words with consistent rime spelling. In the present study, we explored whether the orthographic consistency effect extends to tasks requiring people to process words beyond simple lexical access. Two different tasks were used: semantic and gender categorization. Both tasks produced reliable consistency effects. The data are discussed as suggesting that orthographic codes are activated during word recognition, or that the organization of phonological representations of words is affected by orthography during literacy acquisition.

16.
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment, Dutch-English bilinguals performing a 2nd language (L2) lexical decision task were faster to recognize identical and nonidentical cognate words (e.g., banaan-banana) presented in isolation than control words. A second experiment replicated this effect when the same set of cognates was presented as the final words of low-constraint sentences. In a third experiment that used eyetracking, the authors showed that early target reading time measures also yield cognate facilitation but only for identical cognates. These results suggest that a sentence context may influence, but does not nullify, cross-lingual lexical interactions during early visual word recognition by bilinguals.

17.
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we borrowed on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
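A drastically simplified interactive-activation competition step, in the spirit of TRACE (McClelland & Elman, 1986), can illustrate how late-arriving tone input resolves a segmentally identical pair. The update rule, parameter values, and the two-word ma1/ma3 lexicon are illustrative assumptions for this sketch, not jTRACE's actual architecture or settings.

```python
def step(activations, inputs, decay=0.1, inhibition=0.3):
    """One update cycle: each word unit gains its bottom-up input,
    decays toward zero, and is inhibited by the summed activation of
    its competitors. Activations are clipped to [0, 1]."""
    new = {}
    for w, a in activations.items():
        lateral = sum(v for u, v in activations.items() if u != w)
        a = a + inputs[w] - decay * a - inhibition * lateral
        new[w] = min(1.0, max(0.0, a))
    return new

# ma1 ("mother") vs. ma3 ("horse"): the same segments support both
# candidates early on, but only the target keeps matching once tone
# information arrives (assumed here to disambiguate after 3 cycles).
acts = {"ma1": 0.0, "ma3": 0.0}
for t in range(10):
    support = {"ma1": 0.2, "ma3": 0.2 if t < 3 else 0.05}
    acts = step(acts, support)
print(acts["ma1"] > acts["ma3"])  # the tone-matching candidate wins
```

Lateral inhibition is what turns the small input difference into a decisive gap: once the target pulls ahead, it actively suppresses its competitor.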

18.
Using the cross-modal priming paradigm, we attempted to determine whether semantic representations for word-final morphemes embedded in multisyllabic words (e.g., /lak/ in /hεmlak/) are independently activated in memory. That is, we attempted to determine whether the auditory prime, /hεmlak/, would facilitate lexical decision times to the visual target, KEY, even when the recognition point for /hεmlak/ occurred prior to the end of the word, which should ensure deactivation of all lexical candidates. In the first experiment, a gating task was used in order to ensure that the multisyllabic words could be identified prior to their offsets. In the second experiment, lexical decision times for visually presented targets following spoken monosyllabic primes (e.g., /lak/-KEY) were compared with reaction times for the same visual targets following multisyllabic primes (/hεmlak/-KEY). Significant priming was found for both the monosyllabic and the multisyllabic conditions. The results support a recognition strategy that initiates lexical access at strong syllables (Cutler & Norris, 1988) and operates according to a principle of delayed commitment (Marr, 1982).

19.
The influence of orthography on children's online auditory word recognition was studied from the end of Grade 4 to the end of Grade 9 by examining the orthographic consistency effect in auditory lexical decision. Fourth-graders showed evidence of a widespread influence of orthography in their spoken word recognition system; words with rimes that can be spelled in two different ways (inconsistent) produced longer auditory lexical decision times and more errors than did consistent words. A similar consistency effect was also observed on pseudowords. With adult listeners, on exactly the same material, we replicated the usual pattern of an orthographic consistency effect restricted to words in lexical decision. From Grade 6 onward, this adult pattern of orthographic effect on spoken recognition is already observable.

20.
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating feedback consistency of rhymes. The present lexical decision study, done in English, manipulated the spelling of individual vowels within consistent rhymes. Participants recognized words with consistent rhymes where the vowel has the most typical spelling (e.g., lobe) faster than words with consistent rhymes where the vowel has a less typical spelling (e.g., loaf). The present study extends previous literature by showing that auditory word recognition is affected by orthographic regularities at different grain sizes, just like written word recognition and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.
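The vowel-level manipulation can be sketched as a typicality count over sound-to-spelling mappings: for a given vowel sound, how often does each spelling occur across the lexicon? The toy word list below is invented for illustration; real typicality norms would come from corpus counts.

```python
from collections import Counter

# Invented sound-to-spelling entries for the vowel /oU/: each pair is
# (word, spelling pattern used for the vowel).
lexicon = [
    ("lobe", "o_e"), ("robe", "o_e"), ("code", "o_e"), ("node", "o_e"),
    ("loaf", "oa"), ("road", "oa"),
    ("flow", "ow"),
]

def spelling_typicality(spelling, lexicon):
    """Proportion of lexicon words in which the vowel sound takes this
    spelling: a crude feedback-consistency measure at the grain of the
    single vowel rather than the whole rime."""
    counts = Counter(sp for _, sp in lexicon)
    return counts[spelling] / len(lexicon)

print(spelling_typicality("o_e", lexicon))  # 4/7: the most typical spelling
print(spelling_typicality("oa", lexicon))   # 2/7: a less typical spelling
```

On the study's logic, a word like "lobe" (typical vowel spelling) should be recognized faster in auditory lexical decision than "loaf" (less typical spelling), even though both rimes are consistent.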

