Similar Documents
20 similar documents found.
1.
The present study examined the nature of reading skills in congenitally deaf and hearing children 7–19 years of age. Deaf children were drawn from oralist and total communication programs. A visual detection task was designed to assess the extent of phonological coding and chunking used in reading a story of various degrees of syntactic, semantic, and orthographic complexity. The results provide evidence that (1) like hearing children, deaf children tend to use orthographic regularities in their reading; (2) there is no relation in the deaf child's performance between sensitivity to orthographic regularities and the type of communication method used in training; and (3) hearing and deaf readers use qualitatively similar psycholinguistic strategies in their processing of a story.

2.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions represent unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged, as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for recognition of facial expression.

3.
Sensory systems are essential for perceiving and conceptualizing our semantic knowledge about the world and the way we interact with it. Despite studies reporting neural changes to compensate for the absence of a given sensory modality, studies focusing on the assessment of semantic processing reveal poor performances by deaf individuals when compared with hearing individuals. However, the majority of those studies were not performed in the linguistic modality considered the most adequate to their sensory capabilities (i.e., sign language). Therefore, this exploratory study was developed focusing on linguistic modality effects during semantic retrieval in deaf individuals in comparison with their hearing peers through a category fluency task. Results show a difference in performance between the two linguistic modalities by deaf individuals as well as in the type of linguistic clusters most chosen by participants, suggesting a complex clustering tendency by deaf individuals.

4.
Linguistic flexibility of deaf and hearing children was compared by examining the relative frequencies of their nonliteral constructions in stories written and signed (by the deaf) or written and spoken (by the hearing). Seven types of nonliteral constructions were considered: novel figurative language, frozen figurative language, gestures, pantomime, linguistic modifications, linguistic inventions, and lexical substitutions. Among the hearing 8- to 15-year-olds, oral and written stories contained comparable numbers of nonliteral constructions. Among their age-matched deaf peers, however, nonliteral constructions were significantly more common in signed than written stories. Overall, hearing students used more nonliteral constructions in their written stories than did their deaf peers (who used very few), whereas deaf students used more nonliteral constructions in their signed stories than their hearing peers did in their spoken stories. The results suggest that deaf children are linguistically and cognitively more competent than is generally assumed on the basis of evaluations in English. Although inferior to hearing age-mates in written expression, they are comparable to, and in some ways better than, those peers when evaluated using their primary mode of communication.

5.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners along these domains, the inferior and peripheral visual field advantages observed in deaf subjects is presumably related to auditory deprivation. 
Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting they are a result of sensory, and not attentional, factors.

6.
7.
Visual compensation in deaf readers refers to compensatory changes in visual function that arise from the absence of auditory input, manifested as more efficient processing of text in the parafoveal visual field. This study used the boundary paradigm to measure the parafoveal-on-foveal repetition effect in deaf readers, in order to examine whether parafoveal visual compensation in deaf readers facilitates foveal word recognition. The results showed that the parafoveal-on-foveal repetition effect in deaf readers appeared on an early reading measure, gaze duration, whereas in reading-ability-matched hearing controls it appeared only on a late measure, total fixation time. Thus, compared with reading-ability-matched hearing readers, the parafoveal-on-foveal repetition effect emerged earlier in deaf readers, demonstrating parafoveal visual compensation.

8.
Parafoveal attention in congenitally deaf and hearing young adults
This reaction-time study compared the performance of 20 congenitally and profoundly deaf, and 20 hearing college students on a parafoveal stimulus detection task in which centrally presented prior cues varied in their informativeness about stimulus location. In one condition, subjects detected a parafoveally presented circle with no other information being present in the visual field. In another condition, spatially complex and task-irrelevant foveal information was present which the subjects were instructed to ignore. The results showed that although both deaf and hearing people utilized cues to direct attention to specific locations and had difficulty in ignoring foveal information, deaf people were more proficient in redirecting attention from one spatial location to another in the presence of irrelevant foveal information. These results suggest that differences exist in the development of attentional mechanisms in deaf and hearing people. Both groups showed an overall right visual-field advantage in stimulus detection which was attenuated when the irrelevant foveal information was present. These results suggest a left-hemisphere superiority for detection of parafoveally presented stimuli independent of cue informativeness for both groups.

9.
A group of congenitally deaf adults and a group of hearing adults, both fluent in sign language, were tested to determine cerebral lateralization. In the most revealing task, subjects were given a series of trials in which they were first presented with a videotaped sign and then with a word exposed tachistoscopically to the right visual field or left visual field, and were required to judge whether the word corresponded to the sign or not. The results suggested that the comparison processes involved in the decision were performed more efficiently by the left hemisphere for hearing subjects and by the right hemisphere for deaf subjects. However, the deaf subjects performed as well as the hearing subjects in the left hemisphere, suggesting that the deaf are not impeded by their auditory-speech handicap from developing the left hemisphere for at least some types of linguistic processing.

10.
闫国利, 秦钊. 《心理科学》, 2021(5): 1266-1272
Does impairment of the auditory channel affect visual function in deaf people? Three theories address this question. Deficit theory holds that deaf individuals' visual function is impaired, and includes the auditory scaffolding hypothesis and the division-of-labor hypothesis. Compensation theory holds that deaf individuals' visual function is enhanced, and includes the response enhancement hypothesis, the perceptual enhancement hypothesis, the supramodal function hypothesis, and the dorsal pathway hypothesis. Integration theory holds that deaf individuals' visual function may show either deficits or enhancement, depending on the experimental task and the participants' age. This paper reviews these three theories of how hearing impairment affects visual function in deaf people and discusses future research directions.

11.
To examine the claim that phonetic coding plays a special role in temporal order recall, deaf and hearing college students were tested on their recall of temporal and spatial order information at two delay intervals. The deaf subjects were all native signers of American Sign Language. The results indicated that both the deaf and hearing subjects used phonetic coding in short-term temporal recall, and visual coding in spatial recall. There was no evidence of manual or visual coding among either the hearing or the deaf subjects in the temporal order recall task. The use of phonetic coding for temporal recall is consistent with the hypothesis that recall of temporal order information is facilitated by a phonetic code.

12.
For hearing people, structure given to orthographic information may be influenced by phonological structures that develop with experience of spoken language. In this study we examine whether profoundly deaf individuals structure orthographic representation differently. We ask "Would deaf students who are advanced readers show effects of syllable structure despite their altered experience of spoken language, or would they, because of reduced influence from speech, organize their orthographic knowledge according to groupings defined by letter frequency?" We used a task introduced by Prinzmetal (Prinzmetal, Treiman, & Rho, 1986) in which participants were asked to judge the colour of letters in briefly presented words. As with hearing participants, the number of errors made by deaf participants was influenced by syllable structure (Prinzmetal et al., 1986; Rapp, 1992). This effect could not be accounted for by letter frequency. Furthermore, there was no correlation between the strength of syllable effects and residual speech or hearing. Our results support the view that the syllable is a unit of linguistic organization that is abstract enough to apply to both spoken and written language.

13.
This study examined 40 deaf and 20 hearing students' free recall of visually presented words varied systematically with respect to signability (i.e., words that could be expressed by a single sign) and visual imagery. Half of the deaf subjects had deaf parents, while the other half had hearing parents. For deaf students, recall was better for words that had sign-language equivalents and high-imagery values. For the hearing students, recall was better for words with high-imagery values, but there was no effect of signability. Overall, the hearing students recalled significantly more words than the deaf students in both immediate and delayed free-recall conditions. In immediate recall, deaf students with deaf parents reported using a sign-language coding strategy more frequently and recalled more words correctly than deaf students with hearing parents. Serial-position curves indicated several differences in patterns of recall among the groups. These results underline the importance of sign language in the memory and recall of deaf persons.

14.
By asking participants to perform spatial-reference-frame judgment tasks in both near and far space, this study examined the interaction between spatial dominance and spatial reference frames in hearing-impaired and normally hearing individuals. The results showed that (1) compared with normally hearing individuals, hearing-impaired individuals had longer reaction times in the egocentric reference-frame judgment task, with no significant difference in the environment-based reference-frame task; and (2) the interaction between spatial dominance and spatial reference frame showed opposite patterns in the two groups. These findings indicate that after hearing loss, the interaction between spatial dominance and spatial reference frames also changes in hearing-impaired individuals.

15.
Utterances expressing generic kinds ("birds fly") highlight qualities of a category that are stable and enduring, and thus provide insight into conceptual organization. To explore the role that linguistic input plays in children's production of generic nouns, we observed American and Chinese deaf children whose hearing losses prevented them from learning speech and whose hearing parents had not exposed them to sign. These children develop gesture systems that have language-like structure at many different levels. The specific question we addressed in this study was whether the gesture systems, developed without input from a conventional language model, would contain generics. We found that the deaf children used generics in the gestures they invented, and did so at about the same rate as hearing children growing up in the same cultures and learning English or Mandarin. Moreover, the deaf children produced more generics for animals than for artifacts, a bias found previously in adult English- and Mandarin-speakers and also found in both groups of hearing children in our current study. This bias has been hypothesized to reflect the different conceptual organizations underlying animal and artifact categories. Our results suggest that not only is a language model not necessary for young children to produce generic utterances, but the bias to produce more generics for animals than artifacts also does not require linguistic input to develop.

16.
Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.

17.
The present study aims to explore the semantic knowledge of a group of Iranian deaf individuals who, due mainly to auditory deprivation, did not acquire language normally in the early years of their life. The participants were ten deaf individuals and a matched number of hearing individuals as a control group. A test of five tasks was administered to assess their knowledge of vocabulary, collocations, semantic categorization, semantic features, and proverbs. Although the results indicated a significant difference between the deaf and the hearing group, a between-group comparison of each task revealed no significant difference between the deaf and hearing participants in the number of errors in vocabulary, collocations, semantic categorization, and semantic features. The only task in which deaf participants did significantly worse than the control group was that of proverbs. Therefore, it could be argued that language deprivation in early childhood does not have the same effect on different components of our linguistic knowledge, and that the acquisition of semantics may well continue beyond puberty.

18.
20 profoundly deaf and 20 normal hearing children from ages 10 to 13 were compared on their ability to visually locate the position of the apparent vertical and the apparent location of the longitudinal axis of the body while erect and under 30 degrees left and right body tilt. Both deaf and normal hearing children were able to accurately align a rod to the apparent visual vertical, but deaf children were significantly more accurate than hearing children in aligning a rod to their apparent body position. This finding is discussed from both a learning view and from a hypothesis of developmental lag.

19.
《Cognition》2009,112(2):217-228
Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input - deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (point to cat + point to tiger) resembling those conveyed by 40 hearing children in early gesture + speech combinations (cat + point to tiger). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations.

20.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.
