Similar Articles
20 similar articles found (search time: 131 ms)
1.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

2.
In two studies, we find that native and non-native acquisition show different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), in both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus independent of lexical and sentential meaning. Semantic lexical changes were positively correlated with processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in the sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape, such that they have less attention available for retrieval and memory of lexical meaning.

3.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
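The continuum construction described above — 11 images changing linearly from one end-point exemplar to the other in equal steps — amounts to simple linear interpolation between two stimuli. A minimal sketch (the toy pixel values are illustrative, not data from the study):

```python
def morph_continuum(a, b, n_steps=11):
    """Linearly interpolate between end-point stimuli a and b,
    returning n_steps images in equal steps (endpoints included)."""
    weights = [i / (n_steps - 1) for i in range(n_steps)]
    return [[(1 - w) * pa + w * pb for pa, pb in zip(a, b)]
            for w in weights]

# Two toy "images" as flat pixel lists standing in for photo exemplars.
neutral = [0.0, 0.0, 0.0]
expression = [1.0, 0.5, 1.0]
continuum = morph_continuum(neutral, expression)
```

With 11 steps, the morph weight advances by 0.1 per image, so the midpoint image (step 6 of 11) blends the two exemplars equally.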

4.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex, and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

5.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners in these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting they are a result of sensory, and not attentional, factors.

6.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, relatively few studies have generalized these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

7.
Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known of how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.

8.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

9.
Previous studies indicate that hearing readers sometimes convert printed text into a phonological form during silent reading. The experiments reported here investigated whether second-generation congenitally deaf readers use any analogous recoding strategy. Fourteen congenitally and profoundly deaf adults who were native signers of American Sign Language (ASL) served as subjects. Fourteen hearing people of comparable reading levels were control subjects. These subjects participated in four experiments that tested for the possibilities of (a) recoding into articulation, (b) recoding into fingerspelling, (c) recoding into ASL, or (d) no recoding at all. The experiments employed paradigms analogous to those previously used to test for phonological recoding in hearing populations. Interviews with the deaf subjects provided supplementary information about their reading strategies. The results suggest that these deaf subjects as a group do not recode into articulation or fingerspelling, but do recode into sign.

10.
Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth, R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49, 170-181; Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: Correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311] reported a small, non-significant RVF advantage for deaf signers when short duration motion stimuli were used (200-250 ms). Samar and Parasnis (2005) reported that this small RVF advantage became significant when non-verbal IQ was statistically controlled. This paper presents extended analyses of the correlation between non-verbal IQ and visual field asymmetries in the data set of Samar and Parasnis (2005). We speculate that this correlation might plausibly be driven by individual differences either in age of acquisition of American Sign Language (ASL) or in the degree of neurodevelopmental insult associated with various etiologies of deafness. Limited additional analyses are presented that indicate a need for further research on the cause of this apparent IQ-laterality relationship. Some potential implications of this relationship for lateralization studies of deaf signers are discussed. Controlling non-verbal IQ may improve the reliability of short duration coherent motion tasks to detect adaptive dorsal stream lateralization due to exposure to ASL in deaf research participants.

11.
An experimental study of the visual image generation ability of deaf sign language users
Using a visual imagery judgment experiment, the visual image generation abilities of two groups of participants, deaf sign language users and hearing people, were compared. The experiment found that, compared with hearing participants, deaf signers took less time to learn and memorize uppercase letters, and both groups took longer to memorize complex letters; the deaf and hearing participants used the same way of representing letters. However, the age at which sign language was acquired had no clear effect on deaf signers' ability to generate images.

12.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage to the English words; likewise, the deaf subjects showed a right hemisphere advantage to the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

13.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

14.
Geraci C, Gozzi M, Papagno C, Cecchetto C. Cognition, 2008, 106(2): 780-804
It is known that span in American Sign Language (ASL) is shorter than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. Thus, we measured span of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the possible variables that might be responsible for the discrepancy: yet, a difference in span between deaf signers and hearing speakers was found. However, the advantage of hearing subjects was removed in a visuo-spatial STM task. We attribute the source of the lower span to the internal structure of signs: indeed, unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully-fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language is dependent on STM, being however flexible enough to develop even in a relatively hostile environment.

15.
Perception of dynamic events of American Sign Language (ASL) was studied by isolating information about motion in the language from information about form. Four experiments utilized Johansson's technique for presenting biological motion as moving points of light. In the first, deaf signers were highly accurate in matching movements of lexical signs presented in point-light displays to those normally presented. Both discrimination accuracy and the pattern of errors were similar in this matching task to that obtained in a control condition in which the same signs were always represented normally. The second experiment showed that these results held for discrimination of morphological operations presented in point-light displays as well. In the third experiment, signers were able to accurately identify signs of a constant handshape and morphological operations acting on signs presented in point-light displays. Finally, in Experiment 4, we evaluated what aspects of the motion patterns carried most of the information for sign identifiability. We presented signs in point-light displays with certain lights removed and found that the movement of the fingertips, but not of any other pair of points, is necessary for sign identification and that, in general, the more distal the joint, the more information its movement carries.

16.
Visual abilities in deaf individuals may be altered as a result of auditory deprivation and/or because the deaf rely heavily on a sign language (American Sign Language, or ASL). In this study, we asked whether attentional abilities of deaf subjects are altered. Using a direction of motion discrimination task in the periphery, we investigated three aspects of spatial attention: orienting of attention, divided attention, and selective attention. To separate influences of auditory deprivation and sign language experience, we compared three subject groups: deaf and hearing native signers of ASL and hearing nonsigners. To investigate the ability to orient attention, we compared motion thresholds obtained with and without a valid spatial precue, with the notion that subjects orient to the stimulus prior to its appearance when a precue is presented. Results suggest a slight advantage for deaf subjects in the ability to orient spatial attention. To investigate divided attention, we compared motion thresholds obtained when a single motion target was presented to thresholds obtained when the motion target was presented among confusable distractors. The effect of adding distractors was found to be identical across subject groups, suggesting that attentional capacity is not altered in deaf subjects. Finally, to investigate selective attention, we compared performance for a single, cued motion target with that of a cued motion target presented among distractors. Here, deaf, but not hearing, subjects performed better when the motion target was presented among distractors than when it was presented alone, suggesting that deaf subjects are more affected by the presence of distractors. In sum, our results suggest that attentional orienting and selective attention are altered in the deaf and that these effects are most likely due to auditory deprivation as opposed to sign language experience.

17.
American Sign Language (ASL) offers a valuable opportunity for the study of cerebral asymmetries, since it incorporates both language structure and complex spatial relations: processing the former has generally been considered a left-hemisphere function, the latter, a right-hemisphere one. To study such asymmetries, congenitally deaf, native ASL users and normally-hearing English speakers unfamiliar with ASL were asked to identify four kinds of stimuli: signs from ASL, handshapes never used in ASL, Arabic digits, and random geometric forms. Stimuli were presented tachistoscopically to a visual hemifield and subjects manually responded as rapidly as possible to specified targets. Both deaf and hearing subjects showed left-visual-field (hence, presumably right-hemisphere) advantages to the signs and to the non-ASL hands. The hearing subjects, further, showed a left-hemisphere advantage to the Arabic numbers, while the deaf subjects showed no reliable visual-field differences to this material. We infer that the spatial processing required of the signs predominated over their language processing in determining the cerebral asymmetry of the deaf for these stimuli.

18.
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar, whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.

19.
Lane, Boyes-Braem, and Bellugi (1976) suggested that American Sign Language (ASL) was perceived according to distinctive features, as is the case with speech. They advanced a binary model of distinctive features for one component of ASL signs, handshape. To test the validity of this model, three native users of ASL and three English speakers who knew no ASL participated in two experiments. In the first, subjects identified ASL handshapes obscured by visual noise, and confusion frequencies yielded similarity scores for all possible pairs of handshapes. Pairs that shared more features according to the model produced higher similarity scores for both groups of subjects. In the second experiment, subjects discriminated a subset of all possible handshape pairs in a speeded “same-different” task; discrimination accuracy and reaction times yielded a d’ and a ds value, respectively, for each pair. Pairs that shared more features according to a slightly revised version of the model produced lower discrimination indices for both groups. While the binary model was supported, a model in which handshape features varied continuously in two dimensions was more consistent with all sets of data. Both models describe hearing and deaf performance equally well, suggesting that linguistic experience does little to alter perception of the visual features germane to handshape identification and discrimination.
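The d′ index mentioned above comes from standard signal detection theory: sensitivity is the difference between the z-transformed hit and false-alarm rates, d′ = z(H) − z(F). A minimal sketch using only the standard library (the trial counts are made up for illustration, not data from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for a same-different task:
    d' = z(hit rate) - z(false-alarm rate).
    A small log-linear correction keeps rates away from 0 and 1,
    which would otherwise give infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(h) - z(f)

# Illustrative counts for one handshape pair: 45 hits, 5 misses,
# 10 false alarms, 40 correct rejections.
sensitivity = d_prime(45, 5, 10, 40)
```

When hit and false-alarm rates are equal (chance performance), d′ is zero; more discriminable pairs yield larger d′, which is why pairs sharing more features in the model produced lower indices.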

20.
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge. This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article.
