Similar Articles
20 similar articles found.
1.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners in these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting that they are a result of sensory, and not attentional, factors.

2.
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in the absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by its functional role in language processing, rather than by its surface, spatial characteristics.

3.
A group of congenitally deaf adults and a group of hearing adults, both fluent in sign language, were tested to determine cerebral lateralization. In the most revealing task, subjects were given a series of trials in which they were first presented with a videotaped sign and then with a word exposed tachistoscopically to the right visual field or left visual field, and were required to judge whether the word corresponded to the sign or not. The results suggested that the comparison processes involved in the decision were performed more efficiently by the left hemisphere for hearing subjects and by the right hemisphere for deaf subjects. However, the deaf subjects performed as well as the hearing subjects in the left hemisphere, suggesting that the deaf are not impeded by their auditory-speech handicap from developing the left hemisphere for at least some types of linguistic processing.

4.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions represent unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged, as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for recognition of facial expression.

5.
Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be vulnerable to right hemisphere damage (e.g., Hickok, Say, Bellugi, & Klima, 1996), and the influence of iconicity on comprehension has yet to be explored. In this study, comprehension testing was conducted with 15 signers with unilateral brain damage, and with elderly Deaf controls. Four tests were administered: a test of iconic and non-iconic noun comprehension, a test of verb and sentence comprehension, a test of locative sentence comprehension, and a test of classifier comprehension. All tests were administered in British Sign Language (BSL), a language that has only recently been explored with lesioned signers (see Atkinson, Marshall, Thacker, & Woll, 2004; Marshall, Atkinson, Thacker, Woll, & Smulevitch, 2004; Marshall, Atkinson, Woll, & Thacker, in press). People with left hemisphere damage were impaired relative to controls on all tests. Those with right hemisphere damage performed well in the first two tests, but were impaired on locative sentences and classifiers. Neither group showed any effect of iconicity. The results shed further light on the laterality of sign language comprehension.

6.
Hu Z, Wang W, Liu H, Peng D, Yang Y, Li K, Zhang JX, Ding G. Brain and Language, 2011, 116(2): 64-70.
Effective literacy education in deaf students calls for psycholinguistic research revealing the cognitive and neural mechanisms underlying their written language processing. When learning a written language, deaf students are often instructed to sign out printed text. The present fMRI study was intended to reveal the neural substrates associated with word signing by comparing it with picture signing. Native deaf signers were asked to overtly sign in Chinese Sign Language (CSL) common objects indicated with written words or presented as pictures. Except in left inferior frontal gyrus and inferior parietal lobule where word signing elicited greater activation than picture signing, the two tasks engaged a highly overlapping set of brain regions previously implicated in sign production. The results suggest that word signing in the deaf signers relies on meaning activation from printed visual forms, followed by similar production processes from meaning to signs as in picture signing. The present study also documents the basic brain activation pattern for sign production in CSL and supports the notion of a universal core neural network for sign production across different sign languages.

7.
We investigated the relative role of the left versus right hemisphere in the comprehension of American Sign Language (ASL). Nineteen lifelong signers with unilateral brain lesions [11 left hemisphere damaged (LHD) and 8 right hemisphere damaged (RHD)] performed three tasks: an isolated single-sign comprehension task, a sentence-level comprehension task involving simple one-step commands, and a sentence-level comprehension task involving more complex multiclause/multistep commands. Eighteen of the participants were deaf; one RHD subject was hearing and bilingual (ASL and English). Performance was examined in relation to two factors: whether the lesion was in the right or left hemisphere and whether the temporal lobe was involved. The LHD group performed significantly worse than the RHD group on all three tasks, confirming left hemisphere dominance for sign language comprehension. The group with left temporal lobe involvement was significantly impaired on all tasks, whereas each of the other three groups performed at better than 95% correct on the single sign and simple sentence comprehension tasks, with performance falling off only on the complex sentence comprehension items. A comparison with previously published data suggests that the degree of difficulty exhibited by the deaf RHD group on the complex sentences is comparable to that observed in hearing RHD subjects. Based on these findings we hypothesize (i) that deaf and hearing individuals have a similar degree of lateralization of language comprehension processes and (ii) that language comprehension depends primarily on the integrity of the left temporal lobe.

8.
In two studies, we find that native and non-native acquisition show different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), in both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning, independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus, independent of lexical and sentential meaning. Semantic lexical changes were positively correlated with processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in the sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape, such that they have less attention available for retrieval and memory of lexical meaning.

9.
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge. This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article.

10.
In the first half of this paper, the experimental investigations on memory and cognition in deaf signers are reviewed in order to reveal how deaf signers rely on sign-based coding when they process linguistic information. It is suggested that deaf signers tactically employ a set of originally separate memory strategies relying on multiple components of working memory. In the second half of this paper, the author shows possible factors that could contribute to a sign language advantage. It is indicated that deaf signers' cognitive activities are deeply rooted in the signers' interaction with the environment. Some concrete examples of Japanese Sign Language signs and their use are provided to support this hypothesis.

11.
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.

12.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage for the English words; likewise, the deaf subjects showed a right-hemisphere advantage for the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

13.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex, and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

14.
To identify neural regions that automatically respond to linguistically structured but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

15.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported Italian and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

16.
A normally hearing left-handed patient familiar with American Sign Language (ASL) was assessed under sodium amytal conditions and with left cortical stimulation in both oral speech and signed English. Lateralization was mixed but complementary in each language mode: the right hemisphere perfusion severely disrupted motoric aspects of both types of language expression, while the left hemisphere perfusion specifically disrupted features of grammatical and semantic usage in each mode of expression. Both semantic and syntactic aspects of oral and signed responses were altered during left posterior temporal-parietal stimulation. Findings are discussed in terms of the neurological organization of ASL and linguistic organization in cases of early left hemisphere damage.

17.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

18.
This paper examines the impact of auditory deprivation and sign language use on the enhancement of location memory and hemispheric specialization using two matching tasks. Forty-one deaf signers and non-signers and 51 hearing signers and non-signers were tested on location memory for shapes and objects (Study 1) and on categorical versus coordinate spatial relations (Study 2). Results of the two experiments converge to suggest that deafness alone supports the atypical left hemispheric preference in judging the location of a circle or a picture on a blank background, and that deafness and sign language experience together determine superior memory for location. The results underscore the importance of including a sample of deaf non-signers.

19.
The nature of hemispheric processing in the prelingually deaf was examined in a picture-letter matching task. It was hypothesized that linguistic competence in the deaf would be associated with normal or near-normal laterality (i.e., a left hemisphere advantage for analytic linguistic tasks). Subjects were shown a simple picture of a common object (e.g., lamp), followed by brief unilateral presentation of a manually signed or orthographic letter, and they had to indicate as quickly as possible whether the letter was present in the spelling of the object's label. While hearing subjects showed a marked left hemisphere advantage, no such superiority was found for either a linguistically skilled or unskilled group of deaf students. In the skilled group, however, there was a suggestion of a right hemisphere advantage for manually signed letters. It was concluded that while hemispheric asymmetry of function does not develop normally in the deaf, the absence of this normal pattern does not preclude the development of the analytic skills needed to deal with the structure of language.

20.
Experiments are described in which event-related potentials (ERPs) are employed to study the specialization of functions between and within the cerebral hemispheres during the performance of language and nonlanguage tasks by normal adults. Similar studies of deaf subjects suggest that the functional organization of the brain may be altered after different early language and sensory experiences. Studies of patients with alexia without agraphia suggest that the ERP may be a valuable tool with which to study cerebral reorganization after brain damage.
