Similar Documents
Found 20 similar documents (search time: 15 ms).
1.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage for the English words; likewise, the deaf subjects showed a right-hemisphere advantage for the statically presented signs. With the moving signs, the deaf subjects showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

2.
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in the absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by its functional role in language processing, rather than by its surface, spatial characteristics.

3.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs. right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs. inferior visual fields and peripheral vs. central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners along these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting they are a result of sensory, and not attentional, factors.
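As a rough illustration of how such a left-right visual field asymmetry can be quantified, the Python sketch below computes a conventional laterality index from per-field accuracies. The formula choice, group labels, and all numbers are illustrative assumptions, not values reported in the study.

    # Minimal sketch: quantifying a right-visual-field (RVF) advantage as a
    # laterality index. Positive values indicate an RVF (left-hemisphere)
    # advantage; all accuracies below are hypothetical.
    def laterality_index(rvf_accuracy: float, lvf_accuracy: float) -> float:
        # (RVF - LVF) / (RVF + LVF)
        return (rvf_accuracy - lvf_accuracy) / (rvf_accuracy + lvf_accuracy)

    # Hypothetical per-group accuracies on a direction-of-motion task (RVF, LVF).
    groups = {
        "deaf signers": (0.82, 0.71),
        "hearing signers": (0.80, 0.72),
        "hearing nonsigners": (0.74, 0.76),
    }

    for name, (rvf, lvf) in groups.items():
        print(f"{name}: laterality index = {laterality_index(rvf, lvf):+.3f}")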

4.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing that is similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

5.
Cerebral laterality was examined for third-, fourth-, and fifth-grade deaf and hearing subjects. The experimental task involved the processing of word and picture stimuli presented singly to the right and left visual hemifields. The analyses indicated that the deaf children were faster than the hearing children in overall processing efficiency, and that they performed differently in regard to hemispheric lateralization. The deaf children processed the stimuli more efficiently in the right hemisphere, while the hearing children demonstrated a left-hemisphere proficiency. This finding is discussed in terms of the hypothesis that cerebral lateralization is influenced by auditory processing.

6.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

7.
Left-Hemisphere Dominance for Motion Processing in Deaf Signers
Evidence from neurophysiological studies in animals as well as humans has demonstrated robust changes in neural organization and function following early-onset sensory deprivation. Unfortunately, the perceptual consequences of these changes remain largely unexplored. The study of deaf individuals who have been auditorily deprived since birth and who rely on a visual language (i.e., American Sign Language, ASL) for communication affords a unique opportunity to investigate the degree to which perception in the remaining, intact senses (e.g., vision) is modified as a result of altered sensory and language experience. We studied visual motion perception in deaf individuals and compared their performance with that of hearing subjects. Thresholds and reaction times were obtained for a motion discrimination task, in both central and peripheral vision. Although deaf and hearing subjects had comparable absolute scores on this task, a robust and intriguing difference was found regarding relative performance for left-visual-field (LVF) versus right-visual-field (RVF) stimuli: Whereas hearing subjects exhibited a slight LVF advantage, the deaf exhibited a strong RVF advantage. Thus, for deaf subjects, the left hemisphere may be specialized for motion processing. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing, in the case of ASL) are recruited (or "captured") by the left, language-dominant hemisphere.

8.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced-choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.

9.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex; this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs only in deaf native signers and thus may be attributable to auditory deprivation.

10.
Visual abilities in deaf individuals may be altered as a result of auditory deprivation and/or because the deaf rely heavily on a sign language (American Sign Language, or ASL). In this study, we asked whether attentional abilities of deaf subjects are altered. Using a direction of motion discrimination task in the periphery, we investigated three aspects of spatial attention: orienting of attention, divided attention, and selective attention. To separate influences of auditory deprivation and sign language experience, we compared three subject groups: deaf and hearing native signers of ASL and hearing nonsigners. To investigate the ability to orient attention, we compared motion thresholds obtained with and without a valid spatial precue, on the assumption that subjects orient to the stimulus location before it appears when a precue is presented. Results suggest a slight advantage for deaf subjects in the ability to orient spatial attention. To investigate divided attention, we compared motion thresholds obtained when a single motion target was presented to thresholds obtained when the motion target was presented among confusable distractors. The effect of adding distractors was found to be identical across subject groups, suggesting that attentional capacity is not altered in deaf subjects. Finally, to investigate selective attention, we compared performance for a single, cued motion target with that of a cued motion target presented among distractors. Here, deaf, but not hearing, subjects performed better when the motion target was presented among distractors than when it was presented alone, suggesting that deaf subjects are more affected by the presence of distractors. In sum, our results suggest that attentional orienting and selective attention are altered in the deaf and that these effects are most likely due to auditory deprivation as opposed to sign language experience.

11.
Parafoveal attention in congenitally deaf and hearing young adults
This reaction-time study compared the performance of 20 congenitally and profoundly deaf, and 20 hearing college students on a parafoveal stimulus detection task in which centrally presented prior cues varied in their informativeness about stimulus location. In one condition, subjects detected a parafoveally presented circle with no other information being present in the visual field. In another condition, spatially complex and task-irrelevant foveal information was present, which the subjects were instructed to ignore. The results showed that although both deaf and hearing people utilized cues to direct attention to specific locations and had difficulty in ignoring foveal information, deaf people were more proficient in redirecting attention from one spatial location to another in the presence of irrelevant foveal information. These results suggest that differences exist in the development of attentional mechanisms in deaf and hearing people. Both groups showed an overall right visual-field advantage in stimulus detection, which was attenuated when the irrelevant foveal information was present. These results suggest a left-hemisphere superiority for detection of parafoveally presented stimuli independent of cue informativeness for both groups.

12.
The aim of this work was to explore the nature of elementary operations (engage, move, disengage, and filtering) of spatial attention in deaf experts in sign language. Good communication skills require deaf people to rapidly change attention to at least two separate spatial locations: the facial expression and the hand signs of the speaker. Overtraining imposed by sign language demands might have modified certain characteristics of the spatial attention operations. To test that, a spatial orienting task was used in two experiments. Experiment 1 showed that deaf subjects reoriented their attention to the target location faster than hearing subjects in invalid trials. Experiment 2 indicated that inhibition of return decays faster in deaf than in hearing people. These results suggest that deaf subjects can disengage their attention faster than hearing subjects, fostering the search for relevant information across more spatial locations.
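The comparisons in this abstract follow the standard arithmetic of spatial-cueing (Posner-style) tasks: the validity effect (invalid minus valid reaction time) indexes the cost of reorienting attention, and inhibition of return (IOR) appears at long cue-target intervals as slower responses to the previously cued location. A minimal Python sketch with hypothetical numbers, not the study's data:

    # Standard spatial-cueing arithmetic; all reaction times are made up.
    def validity_effect(rt_invalid_ms: float, rt_valid_ms: float) -> float:
        # Cost of reorienting; smaller values suggest faster disengagement.
        return rt_invalid_ms - rt_valid_ms

    def ior_magnitude(rt_cued_ms: float, rt_uncued_ms: float) -> float:
        # At long cue-target intervals, positive values indicate IOR.
        return rt_cued_ms - rt_uncued_ms

    # Hypothetical group means (ms): short interval (orienting), long interval (IOR).
    deaf = {"valid": 320, "invalid": 355, "cued_long": 400, "uncued_long": 395}
    hearing = {"valid": 325, "invalid": 380, "cued_long": 415, "uncued_long": 395}

    for name, rt in (("deaf", deaf), ("hearing", hearing)):
        print(f"{name}: validity effect = {validity_effect(rt['invalid'], rt['valid'])} ms, "
              f"IOR = {ior_magnitude(rt['cued_long'], rt['uncued_long'])} ms")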

13.
Groups of deaf subjects, exposed to tachistoscopic bilateral presentation of English words and American Sign Language (ASL) signs, showed weaker right visual half-field (VHF) superiority for words than hearing comparison groups with both free-recall and matching responses. Deaf subjects showed better, though nonsignificant, recognition of left-VHF signs with bilateral presentation of signs, but shifted to a superior right-VHF response to signs when word-sign combinations were presented. Cognitive strategies and hemispheric specialization for ASL are discussed as possible factors affecting half-field asymmetry.

14.
Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth, R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49, 170-181; Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: Correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311] reported a small, non-significant RVF advantage for deaf signers when short duration motion stimuli were used (200-250 ms). Samar and Parasnis (2005) reported that this small RVF advantage became significant when non-verbal IQ was statistically controlled. This paper presents extended analyses of the correlation between non-verbal IQ and visual field asymmetries in the data set of Samar and Parasnis (2005). We speculate that this correlation might plausibly be driven by individual differences either in age of acquisition of American Sign Language (ASL) or in the degree of neurodevelopmental insult associated with various etiologies of deafness. Limited additional analyses are presented that indicate a need for further research on the cause of this apparent IQ-laterality relationship. Some potential implications of this relationship for lateralization studies of deaf signers are discussed. Controlling non-verbal IQ may improve the reliability of short-duration coherent motion tasks to detect adaptive dorsal stream lateralization due to exposure to ASL in deaf research participants.
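The analyses described here hinge on a simple quantity: the correlation between non-verbal IQ and a per-subject visual field asymmetry score. A minimal Python sketch of that computation, with made-up numbers rather than the Samar and Parasnis (2005) data:

    # Hypothetical illustration of correlating non-verbal IQ with an
    # RVF-minus-LVF advantage score; not the study's actual data or analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    nonverbal_iq = rng.normal(100, 15, size=30)  # hypothetical IQ scores
    # Hypothetical asymmetry scores, loosely tied to IQ plus noise.
    rvf_advantage = 0.3 * (nonverbal_iq - 100) + rng.normal(0, 5, size=30)

    r = np.corrcoef(nonverbal_iq, rvf_advantage)[0, 1]
    print(f"Pearson r(non-verbal IQ, RVF advantage) = {r:.2f}")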

15.
Visual field asymmetries for verbal and dot localization tasks were examined in monolingual and bilingual subjects. Consistent right-visual-field advantages were found for verbal material in all groups, although bilingual subjects showed a reduced laterality for their second language in comparison with their native language. Monolingual subjects displayed left-visual-field advantages on the dot localization task, but no consistent asymmetries were shown by the bilingual subjects. The overall pattern of results is consistent with left-hemisphere involvement in the processing of verbal material, but the heterogeneity of performance on the dot localization task suggests that processing of such a task may be influenced by subjects' linguistic backgrounds.

16.
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge. This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article.

17.
Many reports show that the processing of spatial relations between and within objects differs in hemispheric lateralization. Coordinate, metric relations concerning distances are processed with a right-hemisphere advantage, whereas a left-hemisphere advantage is thought to be related to categorical, abstract relations (Kosslyn, 1987). Kemmerer and Tranel (2000) argued, however, that the left-hemisphere advantage for categorical processing might apply only to verbal spatial categories, whereas a right-hemisphere advantage is related to visuospatial categories. To test this idea, we examined categorical processing for stimuli in both verbal and visuospatial formats, with a visual half-field, match-to-sample design. In Experiment 1, we manipulated the format of the second stimulus to compare response patterns for both verbal and visuospatial stimuli. In Experiment 2, we varied the expectancy of the format of the second stimulus, allowing for an assessment of strategy use. The results showed that a left-hemisphere advantage was related to verbal stimulus format only, but not in all conditions. A right-hemisphere advantage was found only with a visuospatial expectancy, visuospatial format, and brief interval. The theory we present to explain these results proposes that the lateralization related to basic categorical processing can be strongly influenced by verbal characteristics and, to some extent, by additional coordinate processing. The lateralization measured in such cases does not represent lateralization related purely to categorical processing, but to these additional effects as well. This stresses the importance of careful task and stimulus design when examining categorical processing in order to reduce the influence of those additional processes.

18.
Past research indicates that specific shape recognition and spatial-relations encoding rely on subsystems that exhibit right-hemisphere advantages, whereas abstract shape recognition and spatial-relations encoding rely on subsystems that exhibit left-hemisphere advantages. Given these apparent regularities, we tested whether asymmetries in shape processing are causally related to asymmetries in spatial-relations processing. We examined performance in four tasks using the same stimuli with divided-visual-field presentations. Importantly, the asymmetry observed in any one task did not correlate with the asymmetries observed in the other tasks in ways predicted by extant theories. Asymmetries in shape processing and spatial-relations encoding may not be due to a common causal force influencing multiple subsystems.

19.
Previous studies indicate that hearing readers sometimes convert printed text into a phonological form during silent reading. The experiments reported here investigated whether second-generation congenitally deaf readers use any analogous recoding strategy. Fourteen congenitally and profoundly deaf adults who were native signers of American Sign Language (ASL) served as subjects. Fourteen hearing people of comparable reading levels were control subjects. These subjects participated in four experiments that tested for the possibilities of (a) recoding into articulation, (b) recoding into fingerspelling, (c) recoding into ASL, or (d) no recoding at all. The experiments employed paradigms analogous to those previously used to test for phonological recoding in hearing populations. Interviews with the deaf subjects provided supplementary information about their reading strategies. The results suggest that these deaf subjects as a group do not recode into articulation or fingerspelling, but do recode into sign.

20.
A sign decision task, in which deaf signers made a decision about the number of hands required to form a particular sign of American Sign Language (ASL), revealed significant facilitation by repetition among signs that share a base morpheme. A lexical decision task on English words revealed facilitation by repetition among words that share a base morpheme in both English and ASL, but not among those that share a base morpheme in ASL only. This outcome occurred for both deaf and hearing subjects. The results are interpreted as evidence that the morphological principles of lexical organization observed in ASL do not extend to the organization of English for skilled deaf readers.
