Similar Articles (20 found)
1.
Samar and Parasnis [Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311.] reported that correlated measures of coherent motion detection and perceptual speed predicted reading comprehension in deaf young adults. Because deficits in coherent motion detection have been associated with dyslexia in the hearing population, and because coherent motion detection is strongly dependent on extrastriate cortical area MT, these results are consistent with the claim that hidden dyslexia occurs within the deaf population and is associated with deficits in MT. However, coherent motion detection can also be influenced by subcortical deficits in both magnocellular and parvocellular pathways. To confirm the putative cortical locus of coherent motion perception deficits, we measured contrast thresholds for detecting the direction of movement of drifting sine wave gratings in the same participant group as Samar and Parasnis (2005), under stimulus conditions that selectively biased for input from the subcortical magnocellular and parvocellular pathways, respectively. Contrast thresholds were not related to reading comprehension performance under either the magnocellular or parvocellular conditions. Furthermore, the previously reported correlations among reading comprehension, coherent motion thresholds, and perceptual speed remained significant even after contrast thresholds and non-verbal IQ were controlled in partial correlation analyses. In addition, coherent motion detection thresholds were found to correlate specifically with a reading-IQ discrepancy score, one commonly used indicator of dyslexia. These results provide direct psychophysical evidence that the previously reported deficit in coherent motion detection in deaf poor readers does not involve subcortical pathway deficits, but rather is associated with a cortical deficit likely involving area MT. They also strengthen the argument for the existence of hidden dyslexia in the deaf adult population.
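The pivotal analysis in this abstract is the partial correlation: reading comprehension and coherent motion thresholds remain correlated after contrast thresholds and non-verbal IQ are regressed out. A minimal Python sketch of that logic, with hypothetical arrays standing in for the study's measures (every variable name and value below is illustrative, not the authors' data):

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Pearson correlation between x and y after regressing the
    covariates out of each (residualization = partial correlation)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Hypothetical per-participant measures (illustrative only):
rng = np.random.default_rng(0)
n = 23
reading = rng.normal(size=n)       # reading comprehension score
motion_thr = rng.normal(size=n)    # coherent motion detection threshold
contrast_thr = rng.normal(size=n)  # grating contrast threshold (M/P condition)
iq = rng.normal(size=n)            # non-verbal IQ

r, p = partial_corr(reading, motion_thr,
                    np.column_stack([contrast_thr, iq]))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```

Correlating the two residual series is algebraically equivalent to the standard partial correlation coefficient, which is what "controlled in partial correlation analyses" amounts to.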

2.
Prelingual deafness and developmental dyslexia have confounding developmental effects on reading acquisition. Therefore, standard reading assessment methods for diagnosing dyslexia in hearing people are ineffective for use with deaf people. Recently, Samar, Parasnis, and Berent (2002) reported visual evoked potential evidence that deaf poor readers, compared to deaf good readers, have dorsal stream visual system deficits like those previously found for hearing dyslexics. Here, we report new psychometric and psychophysical evidence that deficits in dorsal stream function, likely involving extrastriate area MT, are associated with relatively poor reading comprehension in deaf adults. Poorer reading comprehension within a group of 23 prelingually deaf adults was associated with lower scores on the Symbol Digit Modality Test, a perceptual speed test commonly used to help identify dyslexia in hearing people. Furthermore, coherent dot motion detection thresholds, which reflect the functional status of area MT, correlated negatively with reading scores in each visual quadrant. Elevated motion thresholds for deaf poor readers were not due to general cognitive differences in IQ but were specifically correlated with poor perceptual speed. With IQ controlled, a highly reliable right visual field advantage for coherent motion detection was found. Additional analyses suggested that the functional status of dorsal stream motion detection mechanisms in deaf people is related to reading comprehension, but the direction and strength of lateralization of those mechanisms is independent of reading comprehension. Our results generally imply that dyslexia is a hidden contributor to relatively poor reading skill within the deaf population and that assessment of dorsal stream function may provide a diagnostic biological marker for dyslexia in deaf people.
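The abstract does not say how the coherent dot motion thresholds were estimated; adaptive staircases are the standard tool for such measures. Below is a generic 2-down/1-up staircase (converging near 70.7% correct) run against a simulated observer; the Weibull parameters, step size, and reversal counts are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_observer(coherence, threshold=0.12, slope=3.5):
    # Hypothetical observer: Weibull-shaped probability of a correct
    # direction judgment as a function of dot-motion coherence.
    p_correct = 1.0 - 0.5 * np.exp(-(coherence / threshold) ** slope)
    return rng.random() < p_correct

def two_down_one_up(start=0.5, step=0.05, n_reversals=10):
    # Two consecutive correct trials lower coherence; one error raises
    # it. The staircase converges near the 70.7%-correct point.
    coherence, streak, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(coherence):
            streak += 1
            if streak == 2:
                streak = 0
                if direction == +1:          # easier -> harder: reversal
                    reversals.append(coherence)
                direction = -1
                coherence = max(coherence - step, 0.01)
        else:
            streak = 0
            if direction == -1:              # harder -> easier: reversal
                reversals.append(coherence)
            direction = +1
            coherence = min(coherence + step, 1.0)
    return float(np.mean(reversals[2:]))     # discard early reversals

print(f"estimated coherence threshold: {two_down_one_up():.3f}")
```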

3.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners in these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting they are a result of sensory, and not attentional, factors.
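Per-field asymmetries like the right-visual-field advantage reported here are commonly summarized with a laterality index computed from thresholds in each hemifield. A minimal sketch; the threshold values are hypothetical numbers chosen only to mirror the qualitative group pattern described above:

```python
import numpy as np

def laterality_index(lvf_thresholds, rvf_thresholds):
    """(L - R) / (L + R) on thresholds: positive values mean lower
    (better) thresholds in the right visual field, i.e. an RVF advantage."""
    lvf = np.asarray(lvf_thresholds, dtype=float)
    rvf = np.asarray(rvf_thresholds, dtype=float)
    return (lvf - rvf) / (lvf + rvf)

# Hypothetical motion thresholds (% coherence), three observers per group:
groups = {
    "deaf signers":       ([18.0, 21.0, 16.5], [12.0, 14.5, 11.0]),
    "hearing signers":    ([17.0, 19.5, 15.0], [13.5, 15.0, 12.5]),
    "hearing nonsigners": ([14.0, 15.5, 13.0], [15.0, 16.5, 14.0]),
}
for name, (lvf, rvf) in groups.items():
    li = laterality_index(lvf, rvf)
    print(f"{name:>18}: mean laterality index = {li.mean():+.2f}")
```

The same index can be formed for inferior vs. superior or peripheral vs. central fields, which is how the additional advantages reported above would be quantified.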

4.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage to the English words; likewise, the deaf subjects showed a right hemisphere advantage to the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

5.
Left-Hemisphere Dominance for Motion Processing in Deaf Signers
Evidence from neurophysiological studies in animals as well as humans has demonstrated robust changes in neural organization and function following early-onset sensory deprivation. Unfortunately, the perceptual consequences of these changes remain largely unexplored. The study of deaf individuals who have been auditorily deprived since birth and who rely on a visual language (i.e., American Sign Language, ASL) for communication affords a unique opportunity to investigate the degree to which perception in the remaining, intact senses (e.g., vision) is modified as a result of altered sensory and language experience. We studied visual motion perception in deaf individuals and compared their performance with that of hearing subjects. Thresholds and reaction times were obtained for a motion discrimination task, in both central and peripheral vision. Although deaf and hearing subjects had comparable absolute scores on this task, a robust and intriguing difference was found regarding relative performance for left-visual-field (LVF) versus right-visual-field (RVF) stimuli: Whereas hearing subjects exhibited a slight LVF advantage, the deaf exhibited a strong RVF advantage. Thus, for deaf subjects, the left hemisphere may be specialized for motion processing. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing, in the case of ASL) are recruited (or "captured") by the left, language-dominant hemisphere.

6.
Visual abilities in deaf individuals may be altered as a result of auditory deprivation and/or because the deaf rely heavily on a sign language (American Sign Language, or ASL). In this study, we asked whether attentional abilities of deaf subjects are altered. Using a direction of motion discrimination task in the periphery, we investigated three aspects of spatial attention: orienting of attention, divided attention, and selective attention. To separate influences of auditory deprivation and sign language experience, we compared three subject groups: deaf and hearing native signers of ASL and hearing nonsigners. To investigate the ability to orient attention, we compared motion thresholds obtained with and without a valid spatial precue, with the notion that subjects orient to the stimulus prior to its appearance when a precue is presented. Results suggest a slight advantage for deaf subjects in the ability to orient spatial attention. To investigate divided attention, we compared motion thresholds obtained when a single motion target was presented to thresholds obtained when the motion target was presented among confusable distractors. The effect of adding distractors was found to be identical across subject groups, suggesting that attentional capacity is not altered in deaf subjects. Finally, to investigate selective attention, we compared performance for a single, cued motion target with that of a cued motion target presented among distractors. Here, deaf, but not hearing, subjects performed better when the motion target was presented among distractors than when it was presented alone, suggesting that deaf subjects are more affected by the presence of distractors. In sum, our results suggest that attentional orienting and selective attention are altered in the deaf and that these effects are most likely due to auditory deprivation as opposed to sign language experience.

7.
American Sign Language (ASL) offers a valuable opportunity for the study of cerebral asymmetries, since it incorporates both language structure and complex spatial relations: processing the former has generally been considered a left-hemisphere function, the latter, a right-hemisphere one. To study such asymmetries, congenitally deaf, native ASL users and normally-hearing English speakers unfamiliar with ASL were asked to identify four kinds of stimuli: signs from ASL, handshapes never used in ASL, Arabic digits, and random geometric forms. Stimuli were presented tachistoscopically to a visual hemifield and subjects manually responded as rapidly as possible to specified targets. Both deaf and hearing subjects showed left-visual-field (hence, presumably right-hemisphere) advantages to the signs and to the non-ASL hands. The hearing subjects, further, showed a left-hemisphere advantage to the Arabic numbers, while the deaf subjects showed no reliable visual-field differences to this material. We infer that the spatial processing required of the signs predominated over their language processing in determining the cerebral asymmetry of the deaf for these stimuli.

8.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.
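The serial-position pattern described here (a recency advantage and a primacy disadvantage for the changing-state stimuli relative to print) can be quantified by averaging recall accuracy over the first and last few list positions. A minimal sketch with hypothetical serial-position curves (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical proportion correct by serial position, 7-item lists:
curves = {
    "ASL signs":       np.array([0.55, 0.50, 0.48, 0.50, 0.60, 0.75, 0.88]),
    "printed English": np.array([0.80, 0.72, 0.60, 0.55, 0.52, 0.55, 0.60]),
}

def primacy(curve, k=2):
    # Mean accuracy over the first k list positions.
    return float(curve[:k].mean())

def recency(curve, k=2):
    # Mean accuracy over the last k list positions.
    return float(curve[-k:].mean())

for name, curve in curves.items():
    print(f"{name:>16}: primacy {primacy(curve):.2f}, "
          f"recency {recency(curve):.2f}")
```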

9.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

10.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.
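Identification responses along a synthetic continuum like this are typically fit with a logistic function whose midpoint estimates the category boundary; discrimination can then be compared for pairs near the prototype versus pairs straddling that boundary. A minimal sketch, assuming a hypothetical nine-step handshape continuum (the response proportions are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # Proportion of "category B" identifications; x0 is the category
    # boundary, k the slope of the identification function.
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical identification data for a nine-step continuum:
steps = np.arange(1, 10, dtype=float)
p_b = np.array([0.02, 0.05, 0.08, 0.20, 0.55, 0.85, 0.93, 0.97, 0.99])

(x0, k), _ = curve_fit(logistic, steps, p_b, p0=[5.0, 1.0])
print(f"category boundary at step {x0:.2f}, slope {k:.2f}")
# Discrimination is typically best for pairs straddling x0 and poorest
# near the category prototype, the pattern reported for native signers.
```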

11.
Previous studies indicate that hearing readers sometimes convert printed text into a phonological form during silent reading. The experiments reported here investigated whether second-generation congenitally deaf readers use any analogous recoding strategy. Fourteen congenitally and profoundly deaf adults who were native signers of American Sign Language (ASL) served as subjects. Fourteen hearing people of comparable reading levels were control subjects. These subjects participated in four experiments that tested for the possibilities of (a) recoding into articulation, (b) recoding into fingerspelling, (c) recoding into ASL, or (d) no recoding at all. The experiments employed paradigms analogous to those previously used to test for phonological recoding in hearing populations. Interviews with the deaf subjects provided supplementary information about their reading strategies. The results suggest that these deaf subjects as a group do not recode into articulation or fingerspelling, but do recode into sign.

12.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions represent unfamiliar facial expressions for the hearing subjects whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for recognition of facial expression.

13.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

14.
A sign decision task, in which deaf signers made a decision about the number of hands required to form a particular sign of American Sign Language (ASL), revealed significant facilitation by repetition among signs that share a base morpheme. A lexical decision task on English words revealed facilitation by repetition among words that share a base morpheme in both English and ASL, but not among those that share a base morpheme in ASL only. This outcome occurred for both deaf and hearing subjects. The results are interpreted as evidence that the morphological principles of lexical organization observed in ASL do not extend to the organization of English for skilled deaf readers.

15.
Visual field asymmetries were examined in American Sign Language-English bilinguals for speeded numerical size judgments of pairs of digits, number words, and number signs. Physical size of the number pairs was either congruent or incongruent with their numerical size. The results revealed a greater left visual field (LVF) interference for numbers represented as digits and a greater right visual field (RVF) interference for numbers represented as words or signs. Subjects' performance on number words and signs was also influenced by their skill in English and ASL: interference was greater in the RVF in the subjects' better language but was greater in the LVF for the less skilled language. These findings suggest that lateralization of numerical size judgments is moderated by the mode of number presentation and by prior language experience.
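The size-congruity interference at issue is simply the response-time cost when physical and numerical size conflict, computed separately for each visual field. A minimal sketch with hypothetical median reaction times (the values are illustrative only):

```python
# Hypothetical median RTs (ms) by visual field and congruity condition:
rt = {
    ("LVF", "congruent"): 520.0, ("LVF", "incongruent"): 585.0,
    ("RVF", "congruent"): 515.0, ("RVF", "incongruent"): 560.0,
}

def interference(field):
    # Size-congruity interference: incongruent minus congruent RT.
    return rt[(field, "incongruent")] - rt[(field, "congruent")]

for field in ("LVF", "RVF"):
    print(f"{field} interference: {interference(field):.0f} ms")
```

A greater LVF score on this measure for digits, and a greater RVF score for words and signs, is the pattern the abstract reports.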

16.
We investigated the relative role of the left versus right hemisphere in the comprehension of American Sign Language (ASL). Nineteen lifelong signers with unilateral brain lesions [11 left hemisphere damaged (LHD) and 8 right hemisphere damaged (RHD)] performed three tasks, an isolated single-sign comprehension task, a sentence-level comprehension task involving simple one-step commands, and a sentence-level comprehension task involving more complex multiclause/multistep commands. Eighteen of the participants were deaf, one RHD subject was hearing and bilingual (ASL and English). Performance was examined in relation to two factors: whether the lesion was in the right or left hemisphere and whether the temporal lobe was involved. The LHD group performed significantly worse than the RHD group on all three tasks, confirming left hemisphere dominance for sign language comprehension. The group with left temporal lobe involvement was significantly impaired on all tasks, whereas each of the other three groups performed at better than 95% correct on the single sign and simple sentence comprehension tasks, with performance falling off only on the complex sentence comprehension items. A comparison with previously published data suggests that the degree of difficulty exhibited by the deaf RHD group on the complex sentences is comparable to that observed in hearing RHD subjects. Based on these findings we hypothesize (i) that deaf and hearing individuals have a similar degree of lateralization of language comprehension processes and (ii) that language comprehension depends primarily on the integrity of the left temporal lobe.

17.
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in the absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by its functional role in language processing, rather than by its surface, spatial characteristics.

18.
A visual hemifield experiment investigated hemispheric specialization among hearing children and adults and prelingually, profoundly deaf youngsters who were exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, who undergo a development of phonology and grammar of the spoken language similar to that of hearing youngsters, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task no VF advantage was observed. A RVF (left hemisphere) advantage was obtained for both the deaf and the hearing subjects for the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, a RVF advantage was obtained for the hearing subjects, but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left lateralised processing of rhymes.

19.
Geraci, C., Gozzi, M., Papagno, C., & Cecchetto, C. (2008). Cognition, 106(2), 780-804.
It is known that span is shorter in American Sign Language (ASL) than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. Thus, we measured span in deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the possible variables that might be responsible for the discrepancy; yet a difference in span between deaf signers and hearing speakers was still found. However, the advantage of hearing subjects was removed in a visuo-spatial STM task. We attribute the source of the lower span to the internal structure of signs: indeed, unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully-fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language is dependent on STM, but is flexible enough to develop even in a relatively hostile environment.
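Span in studies like this is conventionally scored as the longest list length recalled in correct order on a criterion number of trials at that length. A minimal sketch of that scoring rule run against a simulated subject (the capacity, criterion, and trial counts are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_recall(length, capacity=5.0):
    # Hypothetical subject: probability of recalling a whole list in
    # order falls off logistically around a capacity of ~5 items.
    p = 1.0 / (1.0 + np.exp(length - capacity))
    return rng.random() < p

def measure_span(max_length=9, trials_per_length=3, criterion=2):
    # Span = longest length recalled correctly on at least `criterion`
    # of `trials_per_length` trials; testing stops at the first failure.
    span = 0
    for length in range(2, max_length + 1):
        n_correct = sum(simulated_recall(length)
                        for _ in range(trials_per_length))
        if n_correct >= criterion:
            span = length
        else:
            break
    return span

print(f"estimated span: {measure_span()} items")
```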

20.
Two experiments were conducted on short-term recall of printed English words by deaf signers of American Sign Language (ASL). Compared with hearing subjects, deaf subjects recalled significantly fewer words when ordered recall of words was required, but not when free recall was required. Deaf subjects tended to use a speech-based code in probed recall for order, and the greater the reliance on a speech-based code, the more accurate the recall. These results are consistent with the hypothesis that a speech-based code facilitates the retention of order information.
