Similar documents
20 similar documents found (search time: 15 ms)
1.
Word recognition is generally assumed to be achieved via competition in the mental lexicon between phonetically similar word forms. However, this process has so far been examined only in the context of auditory phonetic similarity. In the present study, we investigated whether the influence of word-form similarity on word recognition holds in the visual modality and with the patterns of visual phonetic similarity. Deaf and hearing participants identified isolated spoken words presented visually on a video monitor. On the basis of computational modeling of the lexicon from visual confusion matrices of visual speech syllables, words were chosen to vary in visual phonetic distinctiveness, ranging from visually unambiguous (lexical equivalence class [LEC] size of 1) to highly confusable (LEC size greater than 10). Identification accuracy was found to be highly related to the word LEC size and frequency of occurrence in English. Deaf and hearing participants did not differ in their sensitivity to word LEC size and frequency. The results indicate that visual spoken word recognition shows strong similarities with its auditory counterpart in that the same dependencies on lexical similarity and word frequency are found to influence visual speech recognition accuracy. In particular, the results suggest that stimulus-based lexical distinctiveness is a valid construct to describe the underlying machinery of both visual and auditory spoken word recognition.
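The lexical-equivalence-class construct lends itself to a simple computation: map each word's phonemes onto classes of visually confusable phonemes, then count how many words in the lexicon share the same class sequence. The sketch below illustrates the idea; note that the viseme classes and the toy lexicon are made-up assumptions for illustration, whereas the study derived its classes empirically from syllable confusion matrices.

```python
# Sketch: grouping a toy lexicon into lexical equivalence classes (LECs).
# The viseme groupings below are illustrative assumptions, not the ones
# derived from the study's confusion matrices.
from collections import defaultdict

VISEME_CLASS = {
    "p": "B", "b": "B", "m": "B",                      # bilabials look alike on the lips
    "f": "F", "v": "F",
    "t": "T", "d": "T", "s": "T", "z": "T", "n": "T",
    "k": "K", "g": "K",
    "ae": "A", "eh": "A", "iy": "I", "uw": "U",
}

def visual_transcription(phonemes):
    """Collapse a phoneme sequence into its sequence of visually distinct classes."""
    return tuple(VISEME_CLASS[p] for p in phonemes)

def lec_sizes(lexicon):
    """Return each word's LEC size: how many words look identical on the lips."""
    classes = defaultdict(list)
    for word, phones in lexicon.items():
        classes[visual_transcription(phones)].append(word)
    return {word: len(classes[visual_transcription(phones)])
            for word, phones in lexicon.items()}

lexicon = {
    "pat": ["p", "ae", "t"],
    "bat": ["b", "ae", "t"],   # same viseme sequence as "pat" and "mad"
    "mad": ["m", "ae", "d"],
    "fat": ["f", "ae", "t"],   # visually distinct: /f/ differs from /p, b, m/
}
sizes = lec_sizes(lexicon)
```

Here "pat", "bat", and "mad" collapse onto one viseme sequence (LEC size 3), while "fat" remains visually unambiguous (LEC size 1), mirroring the distinctiveness manipulation in the study.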

2.
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

3.
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether listening to a second language (L2) is influenced by knowledge of the native language (L1) and, more important, whether listening to the L1 is also influenced by knowledge of an L2. Additionally, we investigated whether the listener's selectivity of lexical access is influenced by the speaker's L1 (and thus his or her accent). With this aim, Dutch-English bilinguals completed an English (Experiment 1) and a Dutch (Experiment 3) auditory lexical decision task. As a control, the English auditory lexical decision task was also completed by English monolinguals (Experiment 2). Targets were pronounced by a native Dutch speaker with English as the L2 (Experiments 1A, 2A, and 3A) or by a native English speaker with Dutch as the L2 (Experiments 1B, 2B, and 3B). In all experiments, Dutch-English bilinguals recognized interlingual homophones (e.g., lief [sweet]-leaf /li:f/) significantly more slowly than matched control words, whereas the English monolinguals showed no effect. These results indicate that (a) lexical access in bilingual auditory word recognition is not language selective in either L2 or L1, and (b) language-specific subphonological cues do not annul cross-lingual interactions.

4.
Previous research (Garber & Pisoni, 1991; Pisoni & Garber, 1990) has demonstrated that subjective familiarity judgments for words are not differentially affected by the modality (visual or auditory) in which the words are presented, suggesting that participants base their judgments on fairly abstract, modality-independent representations in memory. However, in a recent large-scale study in Japanese (Amano, Kondo, & Kakehi, 1995), marked modality effects on familiarity ratings were observed. The present research further examined possible modality differences in subjective ratings and their implications for word recognition. Specially selected words were presented to participants for frequency judgments. In particular, participants were asked how frequently they read, wrote, heard, or said a given spoken or printed word. These ratings were then regressed against processing times in auditory and visual lexical decision and naming tasks. Our results suggest modality dependence for some lexical representations.

5.
The present study examined individual differences in the automaticity of visual word recognition. Specifically, we examined whether people can recognize words while central attention is devoted to another task and how this ability depends on reading skill. A lexical-decision Task 2 was combined with either an auditory or visual Task 1. Regardless of the Task 1 modality, Task 2 word recognition proceeded in parallel with Task 1 central operations for individuals with high Nelson-Denny reading scores, but not for individuals with low reading scores. We conclude that greater lexical skill leads to greater automaticity, allowing better readers to more efficiently perform lexical processes in parallel with other attention-demanding tasks.

6.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non‐speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

7.
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre‐familiarized sounds, etc.). The current study extends this research by examining how auditory input affects 8‐ and 14‐month‐olds’ performance on individuation tasks. The results of the current study indicate that both unfamiliar sounds and words interfered with infants’ performance on an individuation task, with cross‐modal interference effects being numerically stronger for unfamiliar sounds. The effects of auditory input on a variety of lexical tasks are discussed.

8.
For words with large homophone families (e.g., Chinese monosyllabic words), it remains unclear how homophone ambiguity affects auditory lexical access and the activation of lexical representations. The present study employed two auditory experiments. Experiment 1, a dictation task, found that the choice of a written character for an isolated spoken syllable was not random: beyond the expected ambiguity, there was a bias toward high-frequency characters within the homophone family. Experiment 2, a sound-character homophone judgment task, compared homophonous and non-homophonous conditions to measure homophone facilitation, revealing that when an isolated syllable activates its phonological representation, it also automatically activates the representation of its high-frequency homophonous character, while low-frequency homophones are inhibited. The results demonstrate an auditory-modality word frequency effect within homophone families: the representations of high-, mid-, and low-frequency homophones are activated unequally, producing non-exhaustive access in which the highest-frequency homophone gains more access opportunity. These findings are difficult to accommodate within existing models of lexical access and homophone representation; the article proposes a model that can account for these auditory-modality results.

9.
When learning language, young children are faced with many seemingly formidable challenges, including discovering words embedded in a continuous stream of sounds and determining what role these words play in syntactic constructions. We suggest that knowledge of phoneme distributions may play a crucial part in helping children segment words and determine their lexical category, and we propose an integrated model of how children might go from unsegmented speech to lexical categories. We corroborated this theoretical model using a two‐stage computational analysis of a large corpus of English child‐directed speech. First, we used transition probabilities between phonemes to find words in unsegmented speech. Second, we used distributional information about word edges – the beginning and ending phonemes of words – to predict whether the segmented words from the first stage were nouns, verbs, or something else. The results indicate that discovering lexical units and their associated syntactic category in child‐directed speech is possible by attending to the statistics of single phoneme transitions and word‐initial and final phonemes. Thus, we suggest that a core computational principle in language acquisition is that the same source of information is used to learn about different aspects of linguistic structure.
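The first stage of such a model can be sketched in a few lines: estimate transition probabilities between adjacent phonemes over the continuous, unsegmented stream, then posit a word boundary wherever the probability dips. The phoneme inventory, corpus, and threshold below are toy assumptions for illustration, not the paper's actual corpus or parameters; the second stage (classifying segmented words by their edge phonemes) would operate on the output of `segment()`.

```python
# Minimal sketch (not the paper's implementation) of transition-probability
# word segmentation over unsegmented phoneme streams.
from collections import Counter

def transition_probs(streams):
    """Estimate P(next phoneme | current phoneme) from continuous streams."""
    pair_counts, first_counts = Counter(), Counter()
    for stream in streams:
        for a, b in zip(stream, stream[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(stream, tp, threshold):
    """Posit a word boundary wherever the transition probability dips below threshold."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp.get((a, b), 0.0) < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Toy "child-directed speech": the words /ba/, /du/, /go/ recur in varying
# orders, so within-word transitions are more predictable than cross-word ones.
utterances = [
    list("badugo"), list("bagodu"), list("dubago"),
    list("goduba"), list("dugoba"),
]
tp = transition_probs(utterances)
words = segment(list("badugo"), tp, threshold=0.9)
```

On this corpus, within-word transitions (b→a, d→u, g→o) have probability 1.0 while cross-word transitions are lower, so the unsegmented stream "badugo" is cut into the three recurring words.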

10.
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e. word pairs that differ by a single phoneme), despite their ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top‐down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom‐up acoustic‐phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still‐developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single‐speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them.

11.
A perceptual learning experiment provides evidence that the mental lexicon cannot consist solely of detailed acoustic traces of recognition episodes. In a training lexical decision phase, listeners heard an ambiguous [f–s] fricative sound, replacing either [f] or [s] in words. In a test phase, listeners then made lexical decisions to visual targets following auditory primes. Critical materials were minimal pairs that could be a word with either [f] or [s] (cf. English knife–nice), none of which had been heard in training. Listeners interpreted the minimal pair words differently in the second phase according to the training received in the first phase. Therefore, lexically mediated retuning of phoneme perception not only influences categorical decisions about fricatives (Norris, McQueen, & Cutler, 2003), but also benefits recognition of words outside the training set. The observed generalization across words suggests that this retuning occurs prelexically. Therefore, lexical processing involves sublexical phonological abstraction, not only accumulation of acoustic episodes.

12.
Across languages, lexical items specific to infant‐directed speech (i.e., ‘baby‐talk words’) are characterized by a preponderance of onomatopoeia (or highly iconic words), diminutives, and reduplication. These lexical characteristics may help infants discover the referential nature of words, identify word referents, and segment fluent speech into words. If so, the amount of lexical input containing these properties should predict infants’ rate of vocabulary growth. To test this prediction, we tracked the vocabulary size in 47 English‐learning infants from 9 to 21 months and examined whether the patterns of growth can be related to measures of iconicity, diminutives, and reduplication in the lexical input at 9 months. Our analyses showed that both diminutives and reduplication in the input were associated with vocabulary growth, although measures of iconicity were not. These results are consistent with the hypothesis that phonological properties typical of lexical input in infant‐directed speech play a role in early vocabulary growth.

13.
周爱保. 心理学报 (Acta Psychologica Sinica), 1996, 29(1): 53-57
This paper draws a distinction between the visual and auditory frequency of use of words, and investigates auditory frequency of use through two experiments. Experiment 1 used a rating-scale method to obtain two groups of two-character words matched on visual frequency of use but differing in auditory frequency of use. Experiment 2 used these two groups of words in memory tests. Under purely auditory presentation, an experimental dissociation emerged between two classes of memory tests: free recall and auditory recognition on the one hand, and degraded-listening identification on the other. This dissociation reflects characteristics specific to auditory frequency of use.

14.
15.
The ability to create temporary binding representations of information from different sources in working memory has recently been found to relate to the development of monolingual word recognition in children. The current study explored this possible relationship in an adult word-learning context. We assessed whether the relationship between cross-modal working memory binding and lexical development would be observed in the learning of associations between unfamiliar spoken words and their semantic referents, and whether it would vary across experimental conditions in first- and second-language word learning. A group of English monolinguals were recruited to learn 24 spoken disyllabic Mandarin Chinese words in association with either familiar or novel objects as semantic referents. They also completed a working memory task in which their ability to temporarily bind auditory-verbal and visual information was measured. Participants’ performance on this task was uniquely linked to their learning and retention of words for both novel objects and for familiar objects. This suggests that, at least for spoken language, cross-modal working memory binding might play a similar role in second language-like (i.e., learning new words for familiar objects) and in more native-like situations (i.e., learning new words for novel objects). Our findings provide new evidence for the role of cross-modal working memory binding in L1 word learning and further indicate that early stages of picture-based word learning in L2 might rely on similar cognitive processes as in L1.

16.
The present experiments examined the automaticity of word recognition. The authors examined whether people can recognize words while central attention is devoted to another task and how this ability changes across the life span. In Experiment 1, a lexical decision Task 2 was combined with either an auditory or a visual Task 1. Regardless of the Task 1 modality, Task 2 word recognition proceeded in parallel with Task 1 central operations for older adults but not for younger adults. This is a rare example of improved cognitive processing with advancing age. When Task 2 was nonlexical (Experiment 2), however, there was no evidence for greater parallel processing for older adults. Thus, the processing advantage appears to be restricted to lexical processes. The authors conclude that greater cumulative experience with lexical processing leads to greater automaticity, allowing older adults to more efficiently perform this stage in parallel with another task.

17.
Subjects took part in an auditory lexical decision task followed by an auditory test of recognition memory for words presented in this task. Subjects categorized their recognition judgments as based on either recollection (“remember” responses) or familiarity (“know” responses). Distractor items in the recognition test included the base words from which the nonwords used in the lexical decision task were derived. Consistent with the findings of Wallace, Stewart, Sherman, and Mellor (1995), more false alarms were made to “late” base words (where the corresponding nonwords were created by changing a phoneme near the end of the word) than to “early” base words (corresponding nonwords were created by changing a phoneme at the beginning of the word). However, this effect was found in “know” responses but not in “remember” responses. The findings are attributed to enhanced fluency with which the base words are processed following their implicit activation at encoding.

18.

Throughout their lifetime, adults learn new words in their native language, and potentially also in a second language. However, they do so with variable levels of success. In the auditory word learning literature, some of this variability has been attributed to phonological skills, including decoding and phonological short-term memory. Here I examine how the relationship between phonological skills and word learning applies to the visual modality. I define the availability of phonology in terms of (1) the extent to which it is biased by the learning environment, (2) the characteristics of the words to be learned, and (3) individual differences in phonological skills. Across these three areas of research, visual word learning improves when phonology is made more available to adult learners, suggesting that phonology can facilitate learning across modalities. However, the facilitation is largely specific to alphabetic languages, which have predictable sublexical correspondences between orthography and phonology. Therefore, I propose that phonology bootstraps visual word learning by providing a secondary code that constrains and refines developing orthographic representations.


19.
We present an experiment in which we explored the extent to which visual speech information affects learners’ ability to segment words from a fluent speech stream. Learners were presented with a set of sentences consisting of novel words, in which the only cues to the location of word boundaries were the transitional probabilities between syllables. They were exposed to this language through the auditory modality only, through the visual modality only (where the learners saw the speaker producing the sentences but did not hear anything), or through both the auditory and visual modalities. The learners were successful at segmenting words from the speech stream under all three training conditions. These data suggest that visual speech information has a positive effect on word segmentation performance, at least under some circumstances.

20.
The degree to which infants represent phonetic detail in words has been a source of controversy in phonology and developmental psychology. One prominent hypothesis holds that infants store words in a vague or inaccurate form until the learning of similar-sounding neighbors forces attention to subtle phonetic distinctions. In the experiment reported here, we used a visual fixation task to assess word recognition. We present the first evidence indicating that, in fact, the lexical representations of 14- and 15-month-olds are encoded in fine detail, even when this detail is not functionally necessary for distinguishing similar words in the infant's vocabulary. Exposure to words is sufficient for well-specified lexical representations, even well before the vocabulary spurt. These results suggest developmental continuity in infants' representations of speech: As infants begin to build a vocabulary and learn word meanings, they use the perceptual abilities previously demonstrated in tasks testing the discrimination and categorization of meaningless syllables.
