Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Left-Hemisphere Dominance for Motion Processing in Deaf Signers   (Cited by: 4; self-citations: 0; citations by others: 4)
Evidence from neurophysiological studies in animals as well as humans has demonstrated robust changes in neural organization and function following early-onset sensory deprivation. Unfortunately, the perceptual consequences of these changes remain largely unexplored. The study of deaf individuals who have been auditorily deprived since birth and who rely on a visual language (i.e., American Sign Language, ASL) for communication affords a unique opportunity to investigate the degree to which perception in the remaining, intact senses (e.g., vision) is modified as a result of altered sensory and language experience. We studied visual motion perception in deaf individuals and compared their performance with that of hearing subjects. Thresholds and reaction times were obtained for a motion discrimination task, in both central and peripheral vision. Although deaf and hearing subjects had comparable absolute scores on this task, a robust and intriguing difference was found regarding relative performance for left-visual-field (LVF) versus right-visual-field (RVF) stimuli: Whereas hearing subjects exhibited a slight LVF advantage, the deaf exhibited a strong RVF advantage. Thus, for deaf subjects, the left hemisphere may be specialized for motion processing. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing, in the case of ASL) are recruited (or "captured") by the left, language-dominant hemisphere.

2.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners along these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting they are a result of sensory, and not attentional, factors.

3.
A group of congenitally deaf adults and a group of hearing adults, both fluent in sign language, were tested to determine cerebral lateralization. In the most revealing task, subjects were given a series of trials in which they were first presented with a videotaped sign and then with a word exposed tachistoscopically to the right visual field or left visual field, and were required to judge whether the word corresponded to the sign or not. The results suggested that the comparison processes involved in the decision were performed more efficiently by the left hemisphere for hearing subjects and by the right hemisphere for deaf subjects. However, the deaf subjects performed as well as the hearing subjects in the left hemisphere, suggesting that the deaf are not impeded by their auditory-speech handicap from developing the left hemisphere for at least some types of linguistic processing.

4.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas, in contrast, the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

5.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions represent unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged, as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for recognition of facial expression.

6.
Most people born deaf and exposed to oral language show scant evidence of sensitivity to the phonology of speech when processing written language. In this respect they differ from hearing people. However, occasionally, a prelingually deaf person can achieve good processing of written language in terms of phonological sensitivity and awareness, and in this respect appears exceptional. We report the pattern of event-related fMRI activation in such a deaf reader (SR) while performing a rhyme-judgment on written words with similar spelling endings that do not provide rhyme clues. The left inferior frontal gyrus pars opercularis and the left inferior parietal lobe showed greater activation for this task than for a letter-string identity matching task. This participant was special in this regard, showing significantly greater activation in these regions than a group of hearing participants with a similar level of phonological and reading skill. In addition, SR showed activation in the left mid-fusiform gyrus, a region that did not show task-specific activation in the other participants. The pattern of activation in this exceptional deaf reader was also unique compared with three deaf readers who showed limited phonological processing. We discuss the possibility that this pattern of activation may be critical in relation to phonological decoding of the written word in good deaf readers whose phonological reading skills are indistinguishable from those of hearing readers.

7.
The nature of hemispheric processing in the prelingually deaf was examined in a picture-letter matching task. It was hypothesized that linguistic competence in the deaf would be associated with normal or near-normal laterality (i.e., a left hemisphere advantage for analytic linguistic tasks). Subjects were shown a simple picture of a common object (e.g., lamp), followed by brief unilateral presentation of a manually signed or orthographic letter, and they had to indicate as quickly as possible whether the letter was present in the spelling of the object's label. While hearing subjects showed a marked left hemisphere advantage, no such superiority was found for either a linguistically skilled or unskilled group of deaf students. In the skilled group, however, there was a suggestion of a right hemisphere advantage for manually signed letters. It was concluded that while hemispheric asymmetry of function does not develop normally in the deaf, the absence of this normal pattern does not preclude the development of the analytic skills needed to deal with the structure of language.

8.
In Experiment 1 neither hearing nor prelingually deaf signing adolescents showed marked lateralization for lexical decision but, unlike the hearing, the deaf were not impaired by the introduction of pseudohomophones. In Experiment 2 semantic categorization produced a left hemisphere advantage in the hearing for words but not pictures whereas in the deaf words and signs but not pictures showed a right hemisphere advantage. In Experiment 3 the lexical decision and semantic categorization findings were confirmed and both groups showed a right hemisphere advantage for a face/nonface decision task. The possible effect of initial language acquisition on the development of hemispheric lateralization for language is discussed.

9.
A divided visual field (DVF) procedure was used to investigate the cerebral organization of language processes in psychopaths. The subjects consisted of four groups of 13 right-handed males: noncriminals (NC), and criminals with high (H), medium (M), and low (L) scores on the Psychopathy Checklist (Hare, 1980). The subject had to decide if a concrete noun, tachistoscopically presented in either the left (LVF) or the right visual hemifield (RVF), matched a pretrial word (SR task), or was an exemplar of a specific category (SC task) or an abstract category (AC task). There were no group differences in reaction time. As predicted, group differences in errors were confined to the AC task; Groups L and NC made fewer RVF than LVF errors (thus showing the sort of RVF advantage expected with semantic categorization), whereas the opposite was true of Group H. The results, along with those obtained in a recent dichotic listening study, lead us to speculate that psychopathy may be associated with weak or unusual lateralization of language function, and that psychopaths may have fewer left hemisphere resources for processing language than do normal individuals.
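The visual-field comparisons in these DVF studies reduce to comparing error counts across hemifields; a conventional way to express such a comparison is a laterality quotient. The sketch below is illustrative only: the function name and the sample error counts are assumptions for demonstration, not data from any of the studies above.

```python
# Illustrative sketch (not from the study): a conventional laterality
# quotient, LQ = (LVF_errors - RVF_errors) / (LVF_errors + RVF_errors).
# A positive LQ means fewer RVF errors, i.e. a right-visual-field /
# left-hemisphere advantage; a negative LQ means the reverse.

def laterality_quotient(lvf_errors: int, rvf_errors: int) -> float:
    """Return LQ in [-1, 1]; positive values indicate an RVF advantage."""
    total = lvf_errors + rvf_errors
    if total == 0:
        return 0.0  # no errors in either field: no measurable asymmetry
    return (lvf_errors - rvf_errors) / total

# Hypothetical error counts on an abstract-category (AC) task:
print(laterality_quotient(12, 6))  # positive: RVF advantage
print(laterality_quotient(6, 12))  # negative: LVF advantage
```

Normalizing by total errors makes groups with different overall accuracy comparable, which is why asymmetry indices of this general form are common in laterality research.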

10.
Functional hemispheric specialization in recognizing faces expressing emotions was investigated in 18 normal hearing and 18 congenitally deaf children aged 13-14 years. Three kinds of faces were presented: happy, to express positive emotions, sad, to express negative emotions, and neutral. The subjects' task was to recognize the test face exposed for 20 msec in the left or right visual field. The subjects answered by pointing at the exposed stimulus on the response card that contained three different faces. Errors committed for faces exposed in the left and right visual fields were analyzed. In the control group the right hemisphere dominated for sad and neutral faces. There were no significant differences in recognition of happy faces. The differentiated hemispheric organization pattern in normal hearing persons supports the hypothesis of different processing of positive and negative emotions expressed by faces. The observed hemispheric asymmetry was a result of two factors: (1) processing of faces as complex patterns requiring visuo-spatial analysis, and (2) processing of the emotions contained in them. Functional hemispheric asymmetry was not observed in the group of deaf children for any kind of emotion expressed in the presented faces. The results suggest that lack of auditory experience influences the organization of functional hemispheric specialization. It can be supposed that in deaf children, the analysis of information contained in emotional faces takes place in both hemispheres.

11.
Cerebral laterality was examined for third-, fourth-, and fifth-grade deaf and hearing subjects. The experimental task involved the processing of word and picture stimuli presented singly to the right and left visual hemifields. The analyses indicated the deaf children were faster than the hearing children in overall processing efficiency, and that they performed differently in regard to hemispheric lateralization. The deaf children processed the stimuli more efficiently in the right hemisphere, while the hearing children demonstrated a left-hemisphere proficiency. This finding is discussed in terms of the hypothesis that cerebral lateralization is influenced by auditory processing.

12.
We investigated the relative role of the left versus right hemisphere in the comprehension of American Sign Language (ASL). Nineteen lifelong signers with unilateral brain lesions [11 left hemisphere damaged (LHD) and 8 right hemisphere damaged (RHD)] performed three tasks: an isolated single-sign comprehension task, a sentence-level comprehension task involving simple one-step commands, and a sentence-level comprehension task involving more complex multiclause/multistep commands. Eighteen of the participants were deaf; one RHD subject was hearing and bilingual (ASL and English). Performance was examined in relation to two factors: whether the lesion was in the right or left hemisphere and whether the temporal lobe was involved. The LHD group performed significantly worse than the RHD group on all three tasks, confirming left hemisphere dominance for sign language comprehension. The group with left temporal lobe involvement was significantly impaired on all tasks, whereas each of the other three groups performed at better than 95% correct on the single sign and simple sentence comprehension tasks, with performance falling off only on the complex sentence comprehension items. A comparison with previously published data suggests that the degree of difficulty exhibited by the deaf RHD group on the complex sentences is comparable to that observed in hearing RHD subjects. Based on these findings we hypothesize (i) that deaf and hearing individuals have a similar degree of lateralization of language comprehension processes and (ii) that language comprehension depends primarily on the integrity of the left temporal lobe.

13.
Two experiments tested the limiting case of a multiple resources approach to resource allocation in information processing. In this framework, the left and right hemispheres are assumed to have separate, limited-capacity pools of undifferentiated resources that are not mutually accessible, so that tasks can overlap in their demand for these resources either completely, partially, or not at all. We tested all three degrees of overlap in demand for left hemisphere supplies, using dual-task methodology in which subjects were induced to pay different amounts of attention to each task. Experiment 1 compared complete and partial overlap by combining a verbal memory load with a task in which subjects named nonsense syllables briefly presented to either the left or right visual field (LVF and RVF, respectively). Experiment 2 compared complete versus no overlap by using the same verbal memory load combined with a laterally presented same-different judgment task that did not require a spoken response. Decrements from single-task performance were always more severe when the visual field task stimulus was presented to the RVF. Further, subjects in Experiment 1 were able to trade performance between tasks on both LVF and RVF trials because there was always at least some overlap in left hemisphere demand. In Experiment 2, performance trade-offs were observed on RVF (complete overlap) trials, but not on LVF trials, where no overlap in demand existed. These results contradict a single-capacity model, but they support the idea that the hemispheres' resource supplies are independent and have implications for both cerebral specialization and divided attention issues.

14.
The processing advantage for words in the right visual field (RVF) has often been assigned to parallel orthographic analysis by the left hemisphere and sequential analysis by the right. The authors investigated this notion using the Reicher-Wheeler task to suppress influences of guesswork and an eye-tracker to ensure central fixation. RVF advantages were obtained for all serial positions, and identical U-shaped serial-position curves were obtained for both visual fields (Experiments 1-4). These findings were not influenced by lexical constraint (Experiment 2) and were obtained with masked and nonmasked displays (Experiment 3). Moreover, words and nonwords produced similar serial-position effects in each field, but only RVF stimuli produced a word-nonword effect (Experiment 4). These findings support the notion that left-hemisphere function underlies the RVF advantage but not the notion that each hemisphere uses a different mode of orthographic analysis.

15.
Hemispheric specialization for processing different types of rapidly exposed stimuli was examined in a forced choice reaction time task. Four conditions of recognition were included: facial emotion, neutral faces, emotional words, and neutral words. Only the facial emotion condition produced a significant visual field advantage (in favor of the left visual field), but this condition did not differ significantly from the neutral face condition's left visual field superiority. The verbal conditions produced significantly decreased latencies with RVF presentation, while LVF presentation was associated with decreased latencies on the facial conditions. These results suggested that facial recognition and affective processing cannot be separated as independent factors generating right hemisphere superiority for facial emotion perception, and that task parameters (verbal vs. nonverbal) are important influences upon effects in studies of cerebral specialization.

16.
Born-deaf, orally trained youngsters were examined on two tasks of immediate memory for pictures of objects. The aim was to investigate the extent of speech coding for pictures in immediate memory in a developmental context. The deaf, unlike young hearing children, did not use picture-name rhyme spontaneously as a cue to recall in a paired association task. Nevertheless, they were just as sensitive as reading age-matched hearing controls to spoken word length in recalling pictures by name. This might mean that the deaf use articulatory rehearsal in some immediate memory tasks, but this leads to a paradoxical conclusion. What could "inner speech" in the deaf be for, if it fails to affect their "inner ear" by inducing rhyme sensitivity in the paired associate task? This paradox is discussed in relation to distinctions between covert and overt use of memory cues in the paired recall task and to possible sources of the word length effect in young hearing (8-9 years old) and deaf subjects.

17.
It is known that deaf individuals usually outperform normal hearing subjects in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normal hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance with highest performance in CS, then in NCS, and finally NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS has been shown to promote development of accurate phonological processing, we propose that the higher speechreading results in Cued Speech users are linked to a better capacity in phonological decoding of visual articulators.

18.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage to the English words; likewise, the deaf subjects showed a right hemisphere advantage to the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

19.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

20.
Native Japanese speakers identified three-letter kana stimuli presented to the left visual field and right hemisphere (LVF/RH), to the right visual field and left hemisphere (RVF/LH), or to both visual fields and hemispheres simultaneously (BILATERAL trials). There were fewer errors on RVF/LH and BILATERAL trials than on LVF/RH trials. Qualitative analysis of error patterns indicated that there were many fewer errors of first-letter identification than of last-letter identification, suggesting top-to-bottom scanning of the kana characters. In contrast to similar studies presenting nonword letter trigrams to native English speakers, qualitative error patterns were identical for the three visual field conditions. Taken together with the results of earlier studies, the results of the present experiment indicate that the ubiquitous RVF/LH advantage reflects a left-hemisphere superiority for phonetic processing that generalizes across specific languages. At the same time, qualitative aspects of hemispheric asymmetry differ from one language to the next and may depend on such things as the way in which individual characters map onto the pronunciation of words and nonwords.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号