Similar literature
 20 similar records found (search time: 31 ms)
1.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). To separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and for peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners in these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention in any subject group, suggesting that they result from sensory, not attentional, factors.

2.
Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known about how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.

3.
The visual spatial memory of 15 deaf signers, 15 hearing signers, and 15 hearing nonsigners for shoes, faces, and verbalizable objects was measured using the game Concentration. It was hypothesized that the deaf and hearing signers would require fewer attempts than the hearing nonsigners on the shoes and faces tasks because of their experience with a visual-spatial language and, in the case of the deaf signers, possibly also because of a compensatory mechanism. It was further hypothesized that memory for shoes would be more like that for faces than for simple objects, and that there would be no difference among the three groups' memory for verbalizable objects. Deaf signers were found to be similar to hearing signers, and both were better than hearing nonsigners on the faces and shoes tasks. Generally, performance on the faces and shoes tasks was similar and followed the same pattern across the three groups. The three groups performed at a similar level on the objects task. There were no gender differences.

4.
Shand (Cognitive Psychology, 1982, 14, 1-12) hypothesized that strong reliance on a phonetic code by hearing individuals in short-term memory situations reflects their primary language experience. As support for this proposal, Shand reported an experiment in which deaf signers' recall of lists of printed English words was poorer when the American Sign Language translations of those words were structurally similar than when they were structurally unrelated. He interpreted this result as evidence that the deaf subjects were recoding the printed words into sign, reflecting their primary language experience. This primary language interpretation is challenged in the present article first by an experiment in which a group of hearing subjects showed a similar recall pattern on Shand's lists of words, and second by a review of the literature on short-term memory studies with deaf subjects. The literature survey reveals that whether or not deaf signers recode into sign depends on a variety of task and subject factors, and that, contrary to the primary language hypothesis, deaf signers may recode into a phonetic code in short-term recall.

5.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing, and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex, and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

6.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

7.
Left-Hemisphere Dominance for Motion Processing in Deaf Signers
Evidence from neurophysiological studies in animals as well as humans has demonstrated robust changes in neural organization and function following early-onset sensory deprivation. Unfortunately, the perceptual consequences of these changes remain largely unexplored. The study of deaf individuals who have been auditorily deprived since birth and who rely on a visual language (i.e., American Sign Language, ASL) for communication affords a unique opportunity to investigate the degree to which perception in the remaining, intact senses (e.g., vision) is modified as a result of altered sensory and language experience. We studied visual motion perception in deaf individuals and compared their performance with that of hearing subjects. Thresholds and reaction times were obtained for a motion discrimination task, in both central and peripheral vision. Although deaf and hearing subjects had comparable absolute scores on this task, a robust and intriguing difference was found regarding relative performance for left-visual-field (LVF) versus right-visual-field (RVF) stimuli: Whereas hearing subjects exhibited a slight LVF advantage, the deaf exhibited a strong RVF advantage. Thus, for deaf subjects, the left hemisphere may be specialized for motion processing. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing, in the case of ASL) are recruited (or "captured") by the left, language-dominant hemisphere.

8.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

9.
An Experimental Study of Visual Image Generation in Deaf Sign Language Users
A visual imagery judgment experiment compared the visual image generation abilities of deaf sign language users and hearing participants. The experiment found that, compared with hearing participants, deaf signers took less time to learn and memorize uppercase letters, and that both groups took longer to memorize complex letters; the deaf and hearing participants used the same form of letter representation. However, the age at which sign language was acquired had no significant effect on deaf signers' image generation ability.

10.
The memory of 11 deaf and 11 hearing British Sign Language users and 11 hearing nonsigners for pictures of faces and of verbalizable objects was measured using the game Concentration. The three groups performed at the same level for the objects. In contrast, the deaf signers were better for faces than the hearing signers, who in turn were superior to the hearing nonsigners, who were the worst. Three hypotheses were made: that there would be no significant difference in the number of attempts among the three groups on the verbalizable object task; that the hearing and deaf signers would demonstrate superior performance to that of the hearing nonsigners on the matching faces task; and that the hearing and deaf signers would exhibit similar performance levels on the matching faces task. The first two hypotheses were supported, but the third was not: deaf signers' memory for faces was superior to that of both hearing signers and hearing nonsigners. Possible explanations for the findings are discussed, including the possibility that deafness and long use of sign language have additive effects.

11.
This paper examines the impact of auditory deprivation and sign language use on the enhancement of location memory and hemispheric specialization using two matching tasks. Forty-one deaf signers and non-signers and 51 hearing signers and non-signers were tested on location memory for shapes and objects (Study 1) and on categorical versus coordinate spatial relations (Study 2). Results of the two experiments converge to suggest that deafness alone supports the atypical left-hemispheric preference in judging the location of a circle or a picture on a blank background, and that deafness and sign language experience jointly determine the superior memory for location. The findings also highlight the importance of including a sample of deaf non-signers.

12.
Visual abilities in deaf individuals may be altered as a result of auditory deprivation and/or because the deaf rely heavily on a sign language (American Sign Language, or ASL). In this study, we asked whether attentional abilities of deaf subjects are altered. Using a direction of motion discrimination task in the periphery, we investigated three aspects of spatial attention: orienting of attention, divided attention, and selective attention. To separate influences of auditory deprivation and sign language experience, we compared three subject groups: deaf and hearing native signers of ASL and hearing nonsigners. To investigate the ability to orient attention, we compared motion thresholds obtained with and without a valid spatial precue, with the notion that subjects orient to the stimulus prior to its appearance when a precue is presented. Results suggest a slight advantage for deaf subjects in the ability to orient spatial attention. To investigate divided attention, we compared motion thresholds obtained when a single motion target was presented to thresholds obtained when the motion target was presented among confusable distractors. The effect of adding distractors was found to be identical across subject groups, suggesting that attentional capacity is not altered in deaf subjects. Finally, to investigate selective attention, we compared performance for a single, cued motion target with that of a cued motion target presented among distractors. Here, deaf, but not hearing, subjects performed better when the motion target was presented among distractors than when it was presented alone, suggesting that deaf subjects are more affected by the presence of distractors. In sum, our results suggest that attentional orienting and selective attention are altered in the deaf and that these effects are most likely due to auditory deprivation as opposed to sign language experience.

13.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage to the English words; likewise, the deaf subjects showed a right-hemisphere advantage to the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

14.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

15.
To examine the claim that phonetic coding plays a special role in temporal order recall, deaf and hearing college students were tested on their recall of temporal and spatial order information at two delay intervals. The deaf subjects were all native signers of American Sign Language. The results indicated that both the deaf and hearing subjects used phonetic coding in short-term temporal recall, and visual coding in spatial recall. There was no evidence of manual or visual coding among either the hearing or the deaf subjects in the temporal order recall task. The use of phonetic coding for temporal recall is consistent with the hypothesis that recall of temporal order information is facilitated by a phonetic code.

16.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

17.
Using a temporal bisection task, this study examined the characteristics of deaf students' visual perception of durations below and above 1 s. For durations below 1 s, deaf students judged durations less accurately than hearing students; their CNV peak amplitude and LPCt amplitude were lower, their CNV latency was shorter, and their LPCt peak latency was longer than those of hearing students. For durations above 1 s, deaf students judged durations more accurately than hearing students, and there were no significant group differences on any measure of the N1, P2, CNV, or LPCt components. These findings indicate that auditory loss impairs deaf students' duration memory and decision processes for intervals below 1 s, supporting the general deficit hypothesis, whereas auditory loss had no significant effect on the processing of intervals above 1 s, supporting a sensory compensation mechanism. Duration length therefore moderates the effect of auditory loss on visual duration perception, providing new evidence for the segmented synthesis model of temporal cognition.

18.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.

19.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing that is similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas, in contrast, the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

20.
Three experiments examined spatial transformation abilities in hearing people who acquired sign language in early adulthood. The performance of the nonnative hearing signers was compared with that of hearing people with no knowledge of sign language. The two groups were matched for age and gender. Using an adapted Corsi blocks paradigm, the experimental task simulated spatial relations in sign discourse but offered no opportunity for linguistic coding. Experiment 1 showed that the hearing signers performed significantly better than the nonsigners on a task that entailed 180 degree rotation, which is the canonical spatial relationship in sign language discourse. Experiment 2 found that the signers did not show the typical costs associated with processing rotated stimuli, and Experiment 3 ruled out the possibility that their advantage relied on seen hand movements. We conclude that sign language experience, even when acquired in adulthood by hearing people, can give rise to adaptations in cognitive processes associated with the manipulation of visuospatial information.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号