Similar Documents
20 similar documents found (search time: 484 ms)
1.
We used fluency tasks to investigate lexical organisation in Deaf adults who use British Sign Language (BSL). The number of responses produced to semantic categories did not differ from reports in spoken languages. However, there was considerable variability in the number of responses across phonological categories, and some signers had difficulty retrieving items. Responses were richly clustered according to semantic and/or phonological properties. With respect to phonology, there was significantly more clustering around the parameters "handshape" and "location" compared to "movement". We conclude that the BSL lexicon is organised in similar ways to the lexicons of spoken languages, but that lexical retrieval is characterised by strong links between semantics and phonology; movement is less readily retrieved than handshape and location; and phonological fluency is difficult for signers because they have little metaphonological awareness in BSL and because signs do not display the onset salience that characterises spoken words.

2.
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.

3.
We report a 27-year-old woman with chronic auditory agnosia following Landau-Kleffner Syndrome (LKS) diagnosed at age 4½. She grew up in the hearing/speaking community with some exposure to manually coded English and American Sign Language (ASL). Manually coded (signed) English is her preferred mode of communication. Comprehension and production of spoken language remain severely compromised. Disruptions in auditory processing can be observed in tests of pitch and duration, suggesting that her disorder is not specific to language. Linguistic analysis of signed, spoken, and written English indicates her language system is intact but compromised because of impoverished input during the critical period for acquisition of spoken phonology. Specifically, although her sign language phonology is intact, spoken language phonology is markedly impaired. We argue that deprivation of auditory input during a period critical for the development of a phonological grammar and auditory–verbal short-term memory has limited her lexical and syntactic development in specific ways.

4.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, relatively few studies have generalized these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were processing only sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, the increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
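The "neighborhood density" construct in the abstract above can be made concrete with a short sketch: a word's (or sign's) neighbors are the lexical items that differ from it by exactly one segment, and density is simply the neighbor count. The mini-lexicon below is invented for illustration, with characters standing in for phonemes or sign parameter values.

```python
# Toy illustration of neighborhood density: the number of lexical items that
# differ from a target by one substitution, insertion, or deletion of a
# single segment. The lexicon here is hypothetical.

def is_neighbor(a: str, b: str) -> bool:
    """True if a and b differ by exactly one substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # Same length: neighbor iff exactly one substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    # Lengths differ by one: ensure a is the shorter string.
    if la > lb:
        a, b = b, a
    i = 0
    while i < len(a) and a[i] == b[i]:
        i += 1
    # Neighbor iff the remainder matches after skipping one segment in b.
    return a[i:] == b[i + 1:]

def neighborhood_density(target: str, lexicon: list[str]) -> int:
    return sum(is_neighbor(target, w) for w in lexicon)

toy_lexicon = ["cat", "bat", "cot", "cast", "at", "dog"]
# "cat" has four neighbors here: bat, cot, cast, at.
print(neighborhood_density("cat", toy_lexicon))  # 4
```

A "dense" target is one with many such neighbors; the study above asks whether priming a target's neighbors helps or hurts recognition depending on which sign parameter (handshape vs. location) defines the neighborhood.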

5.
In two studies, we find that native and non-native acquisition show different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), in both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus independent of lexical and sentential meaning. Semantic lexical changes were positively correlated with processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in the sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape, such that they have less attention available for retrieval and memory of lexical meaning.

6.
This paper investigates whether the semantic and phonological levels in speech production are specific to spoken languages or universal across modalities. We examined semantic and phonological effects during Catalan Signed Language (LSC: Llengua de Signes Catalana) production using an adaptation of the picture-word interference task: native and non-native signers were asked to sign picture names while ignoring signs produced in the background. The results showed semantic interference effects for semantically related distractor signs, phonological facilitation effects when target and distractor signs shared either Handshape or Movement, and phonological interference effects when target and distractor shared Location. The results suggest that the general distinction between semantic and phonological levels holds across modalities. However, differences between sign language production and spoken language production become evident in the mechanisms underlying phonological encoding, shown by the different roles that Location, Handshape, and Movement play during phonological encoding in sign language.

7.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

8.
What form is the lexical phonology that gives rise to phonological effects in visual lexical decision? The authors explored the hypothesis that beyond phonological contrasts the physical phonetic details of words are included. Three experiments using lexical decision and 1 using naming compared processing times for printed words (e.g., plead and pleat) that differ, when spoken, in vowel length and overall duration. Latencies were longer for long-vowel words than for short-vowel words in lexical decision but not in naming. Further, lexical decision on long-vowel words benefited more from identity priming than lexical decision on short-vowel words, suggesting that representations of long-vowel words achieve activation thresholds more slowly. The discussion focused on phonetically informed phonologies, particularly gestural phonology and its potential for understanding reading acquisition and performance.

9.
The number and type of connections involving different levels of orthographic and phonological representations differentiate between several models of spoken and visual word recognition. At the sublexical level of processing, Borowsky, Owen, and Fonos (1999) demonstrated evidence for direct processing connections from grapheme representations to phoneme representations (i.e., a sensitivity effect) over and above any bias effects, but not in the reverse direction. Neural network models of visual word recognition implement an orthography to phonology processing route that involves the same connections for processing sublexical and lexical information, and thus a similar pattern of cross-modal effects for lexical stimuli is expected by models that implement this single type of connection (i.e., orthographic lexical processing should directly affect phonological lexical processing, but not in the reverse direction). Furthermore, several models of spoken word perception predict that there should be no direct connections between orthographic representations and phonological representations, regardless of whether the connections are sublexical or lexical. The present experiments examined these predictions by measuring the influence of a cross-modal word context on word target discrimination. The results provide constraints on the types of connections that can exist between orthographic lexical representations and phonological lexical representations.

10.
Malay, a language spoken by 250 million people, has a shallow alphabetic orthography, simple syllable structures, and transparent affixation, characteristics that contrast sharply with those of English. In the present article, we first compare the letter-phoneme and letter-syllable ratios for a sample of alphabetic orthographies to highlight the importance of separating language-specific from language-universal reading processes. Then, in order to develop a better understanding of word recognition in orthographies with more consistent mappings to phonology than English, we compiled a database of lexical variables (letter length, syllable length, phoneme length, morpheme length, word frequency, orthographic and phonological neighborhood sizes, and orthographic and phonological Levenshtein distances) for 9,592 Malay words. Separate hierarchical regression analyses for Malay and English revealed how the consistency of orthography-phonology mappings selectively modulates the effects of different lexical variables on lexical decision and speeded pronunciation performance. The database of lexical and behavioral measures for Malay is available at http://brm.psychonomic-journals.org/content/supplemental.
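The Levenshtein-distance variables in the database above rest on a simple edit-distance computation: the minimum number of single-character insertions, deletions, and substitutions that turns one string into another. A minimal sketch (the word pair is illustrative, not from the Malay database):

```python
# Levenshtein distance via the standard dynamic-programming recurrence,
# keeping only one row of the distance table at a time.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # distance from a-prefix to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if characters match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Applied over a lexicon, the mean distance from a word to its closest neighbors (orthographically over letters, phonologically over phoneme strings) yields the graded similarity measures the abstract refers to.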

11.
We describe a patient with a dramatic deficit of both word comprehension and naming but with good preservation of visual pictorial semantics. On word-picture matching, his performance was slightly better than expected based on the observed lexical semantic disorder; in addition, the patient, who maintained good preservation of his underlying phonology, showed a tendency to point to the picture phonologically related to the target. In order to interpret these data, we advanced the hypothesis that the patient, in spite of his virtually complete inability to name, would be able, in a word-picture matching task, to "covertly" (i.e., preverbally) retrieve the name from the picture and to use this name to attempt a match with the phonological form of the stimulus word. This mechanism, which we called "phonological" comprehension, would allow the identification of the correct target and would explain the choice of the phonologically related foil that was sometimes selected.

12.
张积家 (Zhang Jijia) & 陈栩茜 (Chen Xuqian), 《心理学报》 (Acta Psychologica Sinica), 2005, 37(5): 582-589
Using Chinese disyllabic words with missing phonemes as materials, this study further examined the time course of phonological and semantic activation of Chinese auditory words. Two experiments were conducted: Experiment 1 examined whether the phonology of an auditory word can simultaneously activate multiple phonologically similar phonological and semantic nodes; Experiment 2 examined whether phonological activation persists in the late stage of auditory lexical access. The results showed that (1) the phonology of a Chinese auditory word can simultaneously activate phonologically similar phonological and semantic nodes, and this phonological effect is at a relatively low level at ISI = 400 ms; (2) after presentation of the auditory word ends, phonology undergoes a second activation; and (3) in the late stage of comprehending Chinese auditory words with missing initial consonants, in addition to a semantic context-dependence effect, there is a clear phonological effect, and the two jointly accomplish semantic recovery of the target word. Based on the results for comprehension of initial-consonant-deleted Chinese auditory words in sentence contexts, the authors propose a "dynamic spreading-activation model of Chinese auditory word comprehension".

13.
Age of acquisition (AoA) is a psycholinguistic construct that refers to the chronological age at which a given word is acquired. Contemporary theories of AoA have focused on lexical acquisition with respect to either the developing phonological or semantic systems. One way of testing the relative dominance of phonological or semantic contributions is through open-source psycholinguistic databases, whereby AoA may be correlated with other variables (e.g., morphology, semantics, phonology). We report two multiple regression analyses conducted on a corpus of English nouns with, respectively, subjective and objective AoA measures as the dependent variables and a combination of 10 predictors: 2 semantic, 4 phonological, 2 morphological, and 2 lexical. This multivariate combination of predictors accounted for significant proportions of the variance of AoA in both analyses. We argue that this evidence supports hybrid models of language development that integrate multiple levels of processing, from sound to meaning.
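The regression logic in the abstract above can be sketched in a few lines of ordinary least squares. The data here are simulated and the predictor interpretations (imageability, phoneme length, morpheme count, log frequency) are placeholders, not the study's actual variables; the point is only how a multivariate predictor set accounts for variance in an AoA measure.

```python
# Sketch of a multiple regression predicting a (simulated) AoA measure from
# several standardized predictors, reporting variance accounted for (R^2).
# All data below are synthetic, generated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Four hypothetical standardized predictors (e.g., one semantic, one
# phonological, one morphological, one lexical).
X = rng.normal(size=(n, 4))
true_beta = np.array([-0.6, 0.3, 0.2, -0.5])        # assumed effects, simulation only
aoa = X @ true_beta + rng.normal(scale=0.5, size=n)  # simulated AoA scores

X1 = np.column_stack([np.ones(n), X])                # add intercept column
beta, *_ = np.linalg.lstsq(X1, aoa, rcond=None)      # ordinary least squares fit
pred = X1 @ beta
r2 = 1 - np.sum((aoa - pred) ** 2) / np.sum((aoa - aoa.mean()) ** 2)
print(round(r2, 2))
```

A hierarchical version, as in the Malay study above, would enter predictor blocks in stages and compare the R² increment at each stage rather than fitting all predictors at once.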

14.
Models of spoken word recognition vary in the ways in which they capture the relationship between speech input and meaning. Modular accounts prohibit a word’s meaning from affecting the computation of its form-based representation, whereas interactive models allow activation at the semantic level to affect phonological processing. We tested these competing hypotheses by manipulating word familiarity and imageability, using lexical decision and repetition tasks. Responses to high-imageability words were significantly faster than those to low-imageability words. Repetition latencies were also analyzed as a function of cohort variables, revealing a significant imageability effect only for words that were members of large cohorts, suggesting that when the mapping from phonology to semantics is difficult, semantic information can help the discrimination process. Thus, these data support interactive models of spoken word recognition.

15.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

16.
In this study, we investigated orthographic influences on spoken word recognition. The degree of spelling inconsistency was manipulated while rime phonology was held constant. Inconsistent words with subdominant spellings were processed more slowly than inconsistent words with dominant spellings. This graded consistency effect was obtained in three experiments. However, the effect was strongest in lexical decision, intermediate in rime detection, and weakest in auditory naming. We conclude that (1) orthographic consistency effects are not artifacts of phonological, phonetic, or phonotactic properties of the stimulus material; (2) orthographic effects can be found even when the error rate is extremely low, which rules out the possibility that they result from strategies used to reduce task difficulty; and (3) orthographic effects are not restricted to lexical decision. However, they are stronger in lexical decision than in other tasks. Overall, the study shows that learning about orthography alters the way we process spoken language.

17.
Two experiments explored learning, generalization, and the influence of semantics on orthographic processing in an artificial language. In Experiment 1, 16 adults learned to read 36 novel words written in novel characters. Posttraining, participants discriminated trained from untrained items and generalized to novel items, demonstrating extraction of individual character sounds. Frequency and consistency effects in learning and generalization showed that participants were sensitive to the statistics of their learning environment. In Experiment 2, 32 participants were preexposed to the sounds of all items (lexical phonology) and to novel definitions for half of these items (semantics). Preexposure to either lexical phonology or semantics boosted the early stages of orthographic learning relative to Experiment 1. By the end of training, facilitation was restricted to the semantic condition and to items containing low-frequency inconsistent vowels. Preexposure reduced generalization, suggesting that enhanced item-specific learning was achieved at the expense of character-sound abstraction. The authors' novel paradigm provides a new tool to explore orthographic learning. Although the present findings support the idea that semantic knowledge supports word reading processes, they also suggest that item-specific phonological knowledge is important in the early stages of learning to read.

18.
Allen [Allen, M. (2005). The preservation of verb subcategory knowledge in a spoken language comprehension deficit. Brain and Language, 95, 255-264.] reports a single patient, WBN, who, during spoken language comprehension, is still able to access some of the syntactic properties of verbs despite being unable to access some of their semantic properties. Allen claims that these findings challenge linguistic theories which assume that much of the syntactic behavior of verbs can be predicted from their meanings. I argue, however, that this conclusion is not supported by the data for two reasons: first, Allen focuses on aspects of verb syntax that are not claimed to be influenced by verb semantics; and second, he ignores aspects of verb syntax that are claimed to be influenced by verb semantics.

19.
The development of decoding skills has traditionally been viewed as a stage-like process during which children's reading strategies change as a consequence of the acquisition of phonological awareness. More explicit accounts of the mechanisms involved in learning to read are provided by recent connectionist models in which children learn mappings initially between orthography and phonology, and later between orthography, phonology and semantics. Evidence from studies of reading development suggests that learning to read is determined primarily by the status of a child's phonological representations and is therefore compromised in dyslexic children who have phonological deficits. Children who have language impairments encompassing deficits in semantic representations have qualitatively different reading problems centring on difficulties with reading comprehension and in learning to read exception words.

20.
This paper is a broad survey of issues that I have been examining within the domain of lexical access. Experiments are briefly outlined looking at the questions of morphological processing in both visual and spoken word recognition, phonological recoding in visual word recognition, orthographic influences in spoken word recognition, and a morphophonemic level of word representation. These issues are discussed in the framework of a model of lexical access where the recognition system and the production system are seen as separate representational systems.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号