Similar Documents
 Found 20 similar documents (search time: 46 ms)
1.
John L. Locke 《Cognition》1978,6(3):175-187
Twenty-four deaf and hearing children silently read a printed passage while crossing out all detected cases of a pre-specified target letter. Target letters appeared in phonemically modal form, a category loosely analogous to “pronounced” letters (e.g., the g in badge), and in phonemically nonmodal form, a class which included “silent” letters and those pronounced in somewhat atypical fashion (e.g., the g in rough). Hearing children detected significantly more modal than nonmodal forms, an expected pronunciation effect for individuals in whom speech and reading ordinarily are in close functional relationship. The deaf detected exactly as many modal as nonmodal letter forms, provoking the interpretation that deaf children, as a group, do not effectively mediate print with speech. The deaf also were relatively unaffected by grammatical class, while hearing subjects were considerably more likely to detect a target letter if it occurred in a content word than in a functor term. Questions pertaining to reading instruction in the deaf are discussed.

2.
An Experimental Study of Visual Image Generation in Deaf Sign Language Users
A visual-image judgment experiment compared the visual image generation abilities of deaf sign language users and hearing participants. The experiment found that, compared with hearing participants, deaf signers took less time to learn and memorize uppercase letters, and that both groups took longer to memorize complex letters; deaf and hearing participants used the same way of representing letters. However, the age at which sign language was acquired had no clear effect on deaf signers' image generation ability.

3.
To explore the role of acoustic factors in visual detection, this study employed 40 deaf and hearing Ss. Ss were requested to cancel all the letters “e” in a passage from Treasure Island. Results were analyzed in terms of probabilities of missing a pronounced or silent e and the e in the word “the.” Hearing and hard of hearing Ss were more likely to miss silent e's than pronounced e's. There was no significant difference between silent and pronounced e's for the profoundly deaf. Deaf and hearing Ss missed significantly more e's in “the” than pronounced or silent e's. The deaf, when compared to the hearing Ss, were more efficient in detecting pronounced and silent e's. They did not differ significantly from the hearing Ss in detecting the e in “the.”

4.
To examine the processing of sequentially presented letters of familiar and nonsense words, especially among Ss of vastly differing experience on sequential tasks, three groups of Ss were tested on letters of words spelled sequentially on an alphanumeric display and on letters of words fingerspelled. These were a deaf group (N=33) with little or no hearing and who varied in their fingerspelling ability; a staff group (N=12) who taught fingerspelling and were highly proficient; and a hearing group (N=19). Of principal interest was the finding that the hearing Ss did better on nonsense letter recognition, while the deaf group did better on word recognition. Word length was important except to the staff Ss on fingerspelled words, which also suggests that concentration on fingerspelling proficiency forces attention to the whole word and not its component letters. Hearing Ss, who are the group faced with an unfamiliar task, seemed to attend to each letter and hence had more difficulty with recognition of the longer unit.

5.
The memory of 11 deaf and 11 hearing British Sign Language users and 11 hearing nonsigners for pictures of faces and of verbalizable objects was measured using the game Concentration. The three groups performed at the same level for the objects. In contrast, the deaf signers were better for faces than the hearing signers, who in turn were superior to the hearing nonsigners. Three hypotheses were made: that there would be no significant difference in terms of the number of attempts between the three groups on the verbalizable object task, that the hearing and deaf signers would demonstrate superior performance to that of the hearing nonsigners on the matching faces task, and that the hearing and deaf signers would exhibit similar performance levels on the matching faces task. The first two hypotheses were supported, but the third was not. Deaf signers were found to have superior memory for faces compared to both hearing signers and hearing nonsigners. Possible explanations for the findings are discussed, including the possibility that deafness and the long use of sign language have additive effects.

6.
The nature of hemispheric processing in the prelingually deaf was examined in a picture-letter matching task. It was hypothesized that linguistic competence in the deaf would be associated with normal or near-normal laterality (i.e., a left hemisphere advantage for analytic linguistic tasks). Subjects were shown a simple picture of a common object (e.g., lamp), followed by brief unilateral presentation of a manually signed or orthographic letter, and they had to indicate as quickly as possible whether the letter was present in the spelling of the object's label. While hearing subjects showed a marked left hemisphere advantage, no such superiority was found for either a linguistically skilled or unskilled group of deaf students. In the skilled group, however, there was a suggestion of a right hemisphere advantage for manually signed letters. It was concluded that while hemispheric asymmetry of function does not develop normally in the deaf, the absence of this normal pattern does not preclude the development of the analytic skills needed to deal with the structure of language.

7.
Representations of the fingers are embodied in our cognition and influence performance in enumeration tasks. Among deaf signers, the fingers also serve as a tool for communication in sign language. Previous studies in normal hearing (NH) participants showed effects of embodiment (i.e., embodied numerosity) on tactile enumeration using the fingers of one hand. In this research, we examined the influence of extensive visuo-manual use on tactile enumeration among the deaf. We carried out four enumeration task experiments, using 1–5 stimuli, on a profoundly deaf group (n = 16) and a matched NH group (n = 15): (a) tactile enumeration using one hand, (b) tactile enumeration using two hands, (c) visual enumeration of finger signs, and (d) visual enumeration of dots. In the tactile tasks, we found salient embodied effects in the deaf group compared to the NH group. In the visual enumeration of finger signs task, we controlled for the meaning of each stimulus type (e.g., finger-counting habit, fingerspelled letters, both, or neither). Interestingly, when comparing fingerspelled letters to neutral signs (i.e., neither letters nor numerical finger-counting signs), an inhibition pattern was observed among the deaf. The findings uncover the influence of rich visuo-manual experiences and language on embodied representations. In addition, we propose that these influences can partially account for the lag in mathematical competencies in the deaf compared to NH peers. Lastly, we further discuss how our findings support a contemporary model for mental numerical representations and finger-counting habits.

8.
Linguistic coding by deaf children in relation to beginning reading success
The coding of printed letters in a task of consonant recall was examined in relation to the level of success of prelingually and profoundly deaf children (median age 8.75 years) in beginning reading. As determined by recall errors, the deaf children who were classified as good readers appeared to use both speech and fingerspelling (manual) codes in short-term retention of printed letters. In contrast, deaf children classified as poor readers did not show influence of either of these linguistically based codes in recall. Thus, the success of deaf children in beginning reading, like that of hearing children, appears to be related to the ability to establish and make use of linguistically recoded representations of the language. Neither group showed evidence of dependence on visual cues for recall.

9.
Compensatory sensitivity is said to follow loss of a primary communicative channel. Previous studies comparing how accurately deaf and hearing people perceive emotional expression used caricatures or the nonverbal behavior of hearing people as stimuli. These studies did not specifically address the possibility that deaf people show nonverbal behavior related to their sign language. To assess this possibility, two methodological innovations were made: the stimuli were nonverbal messages with various emotional contents presented by deaf people in sign language, and no verbal labels identified the emotional content of the messages. Sixty hearing and 45 deaf male college students watched films of emotional expressions in sign language. The participants tried to identify the emotional content of each film by matching content to one of six photographs of facial expressions. Responses were analyzed for accuracy in perceiving the emotional content. Hearing participants were more accurate in perceiving the display of Happiness. Display of Disgust was perceived better by the deaf participants. No support was found for compensatory sensitivity among the deaf participants.

10.
For hearing people, structure given to orthographic information may be influenced by phonological structures that develop with experience of spoken language. In this study we examine whether profoundly deaf individuals structure orthographic representation differently. We ask "Would deaf students who are advanced readers show effects of syllable structure despite their altered experience of spoken language, or would they, because of reduced influence from speech, organize their orthographic knowledge according to groupings defined by letter frequency?" We used a task introduced by Prinzmetal (Prinzmetal, Treiman, & Rho, 1986) in which participants were asked to judge the colour of letters in briefly presented words. As with hearing participants, the number of errors made by deaf participants was influenced by syllable structure (Prinzmetal et al., 1986; Rapp, 1992). This effect could not be accounted for by letter frequency. Furthermore, there was no correlation between the strength of syllable effects and residual speech or hearing. Our results support the view that the syllable is a unit of linguistic organization that is abstract enough to apply to both spoken and written language.

11.
We examined priming of adjective-noun structures in Dutch hearing and deaf children. In three experiments, hearing 7- and 8-year-olds, hearing 11- and 12-year-olds, and deaf 11- and 12-year-olds read a prenominal structure (e.g., the blue ball), a relative clause structure (e.g., the ball that is blue), or a main clause (e.g., the ball is blue). After reading each prime structure, children described a target picture in writing. Half of the target pictures contained the same noun as the prime structure and half contained a different noun. Hearing 7- and 8-year-olds and 11- and 12-year-olds, as well as deaf 11- and 12-year-olds, showed priming effects for all three structures in both the same-noun and different-noun conditions. Structural priming was not boosted by lexical repetition in the hearing and deaf 11- and 12-year-olds; a lexical boost effect was observed only in the 7- and 8-year-olds and only in the relative clause structure. The findings suggest that hearing and deaf children possess abstract representations of adjective-noun structures independent of particular lexical items.

12.
Most people born deaf and exposed to oral language show scant evidence of sensitivity to the phonology of speech when processing written language. In this respect they differ from hearing people. However, occasionally, a prelingually deaf person can achieve good processing of written language in terms of phonological sensitivity and awareness, and in this respect appears exceptional. We report the pattern of event-related fMRI activation in one such deaf reader, SR, while performing a rhyme-judgment task on written words with similar spelling endings that do not provide rhyme clues. The left inferior frontal gyrus pars opercularis and the left inferior parietal lobe showed greater activation for this task than for a letter-string identity matching task. This participant was special in this regard, showing significantly greater activation in these regions than a group of hearing participants with a similar level of phonological and reading skill. In addition, SR showed activation in the left mid-fusiform gyrus, a region which did not show task-specific activation in the other respondents. The pattern of activation in this exceptional deaf reader was also unique compared with three deaf readers who showed limited phonological processing. We discuss the possibility that this pattern of activation may be critical in relation to phonological decoding of the written word in good deaf readers whose phonological reading skills are indistinguishable from those of hearing readers.

13.
Linguistic flexibility of deaf and hearing children was compared by examining the relative frequencies of their nonliteral constructions in stories written and signed (by the deaf) or written and spoken (by the hearing). Seven types of nonliteral constructions were considered: novel figurative language, frozen figurative language, gestures, pantomime, linguistic modifications, linguistic inventions, and lexical substitutions. Among the hearing 8- to 15-year-olds, oral and written stories contained comparable numbers of nonliteral constructions. Among their age-matched deaf peers, however, nonliteral constructions were significantly more common in signed than written stories. Overall, hearing students used more nonliteral constructions in their written stories than did their deaf peers (who used very few), whereas deaf students used more nonliteral constructions in their signed stories than their hearing peers did in their spoken stories. The results suggest that deaf children are linguistically and cognitively more competent than is generally assumed on the basis of evaluations in English. Although inferior to hearing age-mates in written expression, they are comparable to, and in some ways better than, those peers when evaluated using their primary mode of communication.

14.
This study examined 40 deaf and 20 hearing students' free recall of visually presented words that varied systematically with respect to signability (i.e., words that could be expressed by a single sign) and visual imagery. Half of the deaf subjects had deaf parents, while the other half had hearing parents. For deaf students, recall was better for words that had sign-language equivalents and high-imagery values. For the hearing students, recall was better for words with high-imagery values, but there was no effect of signability. Overall, the hearing students recalled significantly more words than the deaf students in both immediate and delayed free-recall conditions. In immediate recall, deaf students with deaf parents reported using a sign-language coding strategy more frequently and recalled more words correctly than deaf students with hearing parents. Serial-position curves indicated several differences in patterns of recall among the groups. These results underline the importance of sign language in the memory and recall of deaf persons.

15.
Using a Stroop paradigm of cognitive inhibitory control with event-related potentials, this study examined the behavioral characteristics and the time course of brain activity during cognitive inhibitory control in hearing-impaired children and control children. Behavioral results showed: (1) both groups were significantly more accurate and significantly faster in the congruent condition than in the incongruent condition, demonstrating a conflict effect; (2) between groups, the hearing-impaired children's accuracy in cognitive inhibitory control was significantly lower than that of the controls, with no significant difference in reaction time. ERP results showed: (1) the incongruent condition elicited a significantly more negative N450 than the congruent condition, and the hearing-impaired children elicited a more negative N450 amplitude than the controls; (2) the incongruent condition elicited a significantly more positive SP than the congruent condition, with no significant difference between the two groups on the SP. The results suggest that hearing-impaired children have a deficit in conflict monitoring during cognitive inhibitory control, attributable to impaired allocation of attention during conflict monitoring.

16.
This study was designed to determine the feasibility of using self-paced reading methods to study deaf readers and to assess how deaf readers respond to two syntactic manipulations. Three groups of participants read the test sentences: deaf readers, hearing monolingual English readers, and hearing bilingual readers whose second language was English. In Experiment 1, the participants read sentences containing subject-relative or object-relative clauses. The test sentences contained semantic information that would influence online processing outcomes (Traxler, Morris, & Seely, Journal of Memory and Language, 47, 69–90, 2002; Traxler, Williams, Blozis, & Morris, Journal of Memory and Language, 53, 204–224, 2005). All of the participant groups had greater difficulty processing sentences containing object-relative clauses. This difficulty was reduced when helpful semantic cues were present. In Experiment 2, participants read active-voice and passive-voice sentences. The sentences were processed similarly by all three groups. Comprehension accuracy was higher in hearing readers than in deaf readers. Within deaf readers, native signers read the sentences faster and comprehended them to a higher degree than did nonnative signers. These results indicate that self-paced reading is a useful method for studying sentence interpretation among deaf readers.

17.
Using a reproduction method, a comparative experiment on duration estimation in deaf and hearing participants found that the absence of auditory experience has some influence on duration estimation: (1) for the 2000 ms and 10000 ms intervals, the two groups' mean reproduced durations differed significantly, with hearing participants tending to underestimate durations and deaf participants tending to overestimate them; (2) for deaf participants, the relative error rates of duration reproduction did not differ significantly across intervals, whereas for hearing participants the relative error rate at 2000 ms differed significantly from those at 10000 ms and 30000 ms, with no significant difference between 10000 ms and 30000 ms.

18.
Deaf and hearing children were given two tasks: (a) sorting faces portraying nine emotions and (b) matching those faces with drawings of appropriate emotion-arousing situations. The deaf children performed as the hearing children did on the first task but did not match the faces to the situations as well as the hearing children. It appeared that the deaf children were unable to analyze and interpret emotion-arousing events adequately. Possible reasons for this finding are presented and discussed in detail. This research was supported, in part, by Social Rehabilitation Services Grant No. RD-2552. The authors are indebted to Dr. Lloyd Graunke, Delmas Young, and Warren Flower of the Tennessee School for the Deaf, Elizabeth Stallings of the Monroe Harding Children's Home, Rev. Lucius Hart and Rev. Hudlow of the Baptist Children's Home, Dr. Lloyd Funchess and Jerome Freeman of the Louisiana State School for the Deaf, W. W. Wallace, Milton Lillard, and Wilburn Kelley of the Williamson County Tennessee School System, and Charles Barham of Tennessee Preparatory School.

19.
This study investigated peripheral vision (at least 30° eccentric to fixation) development in profoundly deaf children without cochlear implantation, and compared this to age-matched hearing controls as well as to deaf and hearing adult data. Deaf and hearing children between the ages of 5 and 15 years were assessed using a new, specifically paediatric designed method of static perimetry. The deaf group (N = 25) were 14 females and 11 males, mean age 9.92 years (range 5-15 years). The hearing group (N = 64) were 34 females, 30 males, mean age 9.13 years (range 5-15 years). All participants had good visual acuity in both eyes (< 0.200 LogMAR). Accuracy of detection and reaction time to briefly presented LED stimuli of three light intensities, at eccentricities between 30° and 85°, were measured while fixation was maintained to a central target. The study found reduced peripheral vision in deaf children between 5 and 10 years of age. Deaf children (aged 5-10 years) showed slower reaction times to all stimuli and reduced ability to detect and accurately report dim stimuli in the far periphery. Deaf children performed equally to hearing children aged 11-12 years. Deaf adolescents aged 13-15 years demonstrated faster reaction times to all peripheral stimuli in comparison to hearing controls. Adolescent results were consistent with deaf and hearing adult performances wherein deaf adults also showed significantly faster reaction times than hearing controls. Peripheral vision performance on this task was found to reach adult-like levels of maturity in deaf and hearing children, both in reaction time and accuracy of detection, at the age of 11-12 years.

20.
The present study examines deaf and hearing children's spelling of plural nouns. Severe literacy impairments are well documented in the deaf, which are believed to be a consequence of phonological awareness limitations. Fifty deaf (mean chronological age 13;10 years, mean reading age 7;5 years) and 50 reading-age-matched hearing children produced spellings of regular, semiregular, and irregular plural nouns in Experiment 1 and nonword plurals in Experiment 2. Deaf children performed reading-age appropriately on rule-based (regular and semiregular) plurals but were significantly less accurate at spelling irregular plurals. Spelling of plural nonwords and spelling error analyses revealed clear evidence for use of morphology. Deaf children used morphological generalization to a greater degree than their reading-age-matched hearing counterparts. Also, hearing children combined use of phonology and morphology to guide spelling, whereas deaf children appeared to use morphology without phonological mediation. Therefore, use of morphology in spelling can be independent of phonology and is available to the deaf despite limited experience with spoken language. Indeed, deaf children appear to be learning about morphology from the orthography. Education on more complex morphological generalization and exceptions may be highly beneficial not only for the deaf but also for other populations with phonological awareness limitations.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号