Similar Literature
20 similar documents found (search time: 15 ms)
1.
This paper examines the impact of auditory deprivation and sign language use on the enhancement of location memory and hemispheric specialization using two matching tasks. Forty-one deaf signers and non-signers and fifty-one hearing signers and non-signers were tested on location memory for shapes and objects (Study 1) and on categorical versus coordinate spatial relations (Study 2). Results of the two experiments converge to suggest that deafness alone supports the atypical left hemispheric preference in judging the location of a circle or a picture on a blank background, and that deafness and sign language experience together determine superior memory for location. The results underscore the importance of including a sample of deaf non-signers.

2.
Corina DP, Grosvald M. Cognition, 2012, 122(3): 330-345.
In this paper, we compare responses of deaf signers and hearing non-signers engaged in a categorization task of signs and non-linguistic human actions. We examine the time it takes to make such categorizations under conditions of 180° stimulus inversion and as a function of repetition priming, in an effort to understand whether the processing of sign language forms draws upon special processing mechanisms or makes use of mechanisms used in the recognition of non-linguistic human actions. Our data show that deaf signers were much faster than hearing non-signers in categorizing both linguistic and non-linguistic actions, and that they were more sensitive to the configural properties of signs. Our study suggests that sign expertise may lead to modifications of a general-purpose human action recognition system rather than evoking a qualitatively different mode of processing, and supports the contention that signed languages make use of the perceptual systems through which humans understand or parse human actions and gestures more generally.

3.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

4.
An Experimental Study of Visual Image Generation in Deaf Sign Language Users
A visual imagery judgment experiment compared the visual image generation abilities of deaf sign language users and hearing participants. The experiment found that, compared with hearing participants, deaf signers took less time to learn and memorize uppercase letters, and that both groups took longer to memorize complex letters; deaf and hearing participants used the same mode of letter representation. However, the age at which sign language was acquired had no clear effect on deaf signers' ability to generate images.

5.
Temporal processing in deaf signers
The auditory and visual modalities differ in their capacities for temporal analysis, and speech relies on more rapid temporal contrasts than does sign language. We examined whether congenitally deaf signers show enhanced or diminished capacities for processing rapidly varying visual signals, in light of the differences in sensory and language experience between deaf and hearing individuals. Four experiments compared rapid temporal analysis in deaf signers and hearing subjects at three different levels: sensation, perception, and memory. Experiment 1 measured critical flicker frequency thresholds, and Experiment 2 measured two-point thresholds for a flashing light. Experiments 3 and 4 investigated perception and memory for the temporal order of rapidly varying nonlinguistic visual forms. In contrast to certain previous studies, specifically those investigating the effects of short-term sensory deprivation, no significant differences between deaf and hearing subjects were found at any level. Deaf signers do not show diminished capacities for rapid temporal analysis in comparison to hearing individuals. The data also suggest that the deficits in rapid temporal analysis reported previously for children with developmental language delay cannot be attributed to a lack of experience with speech processing and production.
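The abstract does not say how the flicker or two-point thresholds were estimated. Purely as a hedged illustration, the sketch below implements a generic 2-down/1-up adaptive staircase of the kind commonly used in psychophysical threshold measurement; the `observer` function, the 42 Hz "true" threshold, and all parameters are invented for this sketch, not values from the study.

```python
import random

def staircase(observer, level=30.0, step=2.0, n_reversals=8):
    """Generic 2-down/1-up track; converges near 70.7% correct."""
    correct_run, last_dir, reversals = 0, 0, []
    while len(reversals) < n_reversals:
        if observer(level):                 # correct flicker detection
            correct_run += 1
            if correct_run < 2:
                continue                    # need two in a row to go harder
            correct_run, move = 0, step     # harder: raise flicker frequency
        else:
            correct_run, move = 0, -step    # easier after any error
        direction = 1 if move > 0 else -1
        if last_dir and direction != last_dir:
            reversals.append(level)         # record each track reversal
        last_dir = direction
        level += move
    return sum(reversals) / len(reversals)  # threshold = mean reversal level

# Simulated observer: reliably sees flicker below 42 Hz, guesses above it.
print(staircase(lambda hz: hz < 42 or random.random() < 0.25))
```

The mean of the reversal levels estimates the frequency at which flicker detection falls toward chance, i.e. a critical-flicker-frequency-style threshold.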

6.
To examine the claim that phonetic coding plays a special role in temporal order recall, deaf and hearing college students were tested on their recall of temporal and spatial order information at two delay intervals. The deaf subjects were all native signers of American Sign Language. The results indicated that both the deaf and hearing subjects used phonetic coding in short-term temporal recall, and visual coding in spatial recall. There was no evidence of manual or visual coding among either the hearing or the deaf subjects in the temporal order recall task. The use of phonetic coding for temporal recall is consistent with the hypothesis that recall of temporal order information is facilitated by a phonetic code.

7.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions are unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged, as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for the recognition of facial expression.

8.
Studies have reported a right visual field (RVF) advantage for coherent motion detection in deaf and hearing signers but not non-signers. Yet two studies [Bosworth, R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49, 170-181; Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: Correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311] reported a small, non-significant RVF advantage for deaf signers when short-duration motion stimuli were used (200-250 ms). Samar and Parasnis (2005) reported that this small RVF advantage became significant when non-verbal IQ was statistically controlled. This paper presents extended analyses of the correlation between non-verbal IQ and visual field asymmetries in the data set of Samar and Parasnis (2005). We speculate that this correlation might plausibly be driven by individual differences either in the age of acquisition of American Sign Language (ASL) or in the degree of neurodevelopmental insult associated with various etiologies of deafness. Limited additional analyses are presented that indicate a need for further research on the cause of this apparent IQ-laterality relationship. Some potential implications of this relationship for lateralization studies of deaf signers are discussed. Controlling non-verbal IQ may improve the reliability of short-duration coherent motion tasks in detecting adaptive dorsal stream lateralization due to exposure to ASL in deaf research participants.
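"Statistically controlled" in the abstract can be read, on the common assumption that it means partialling IQ out of both measures, as a partial correlation. The sketch below is a minimal illustration under that assumption; all data, variable names, and effect sizes are invented placeholders, not the Samar and Parasnis (2005) values.

```python
import numpy as np

def partial_corr(x, y, covariate):
    """Correlation of x and y after removing the covariate's linear effect."""
    design = np.column_stack([np.ones_like(covariate), covariate])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]  # residualize x
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]  # residualize y
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
nonverbal_iq = rng.normal(100, 15, size=40)                  # invented scores
rvf_advantage = 0.02 * nonverbal_iq + rng.normal(0, 1, 40)   # invented asymmetry
motion_thresh = -0.03 * nonverbal_iq + rng.normal(0, 1, 40)  # invented thresholds
print(np.corrcoef(rvf_advantage, motion_thresh)[0, 1])            # raw correlation
print(partial_corr(rvf_advantage, motion_thresh, nonverbal_iq))   # IQ partialled out
```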

9.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners along these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting that they are a result of sensory, and not attentional, factors.
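These abstracts report visual field advantages without giving a formula; one conventional way to quantify such an asymmetry, offered here only as an assumed illustration rather than the authors' actual metric, is a laterality index over per-field accuracy or sensitivity scores.

```python
def laterality_index(rvf_score: float, lvf_score: float) -> float:
    """Signed asymmetry in [-1, 1]; positive = right visual field advantage."""
    return (rvf_score - lvf_score) / (rvf_score + lvf_score)

# Invented per-field accuracies for illustration only.
print(laterality_index(rvf_score=0.85, lvf_score=0.74))  # ~0.07, RVF advantage
```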

10.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex, and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

11.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

12.
This study was designed to determine the feasibility of using self-paced reading methods to study deaf readers and to assess how deaf readers respond to two syntactic manipulations. Three groups of participants read the test sentences: deaf readers, hearing monolingual English readers, and hearing bilingual readers whose second language was English. In Experiment 1, the participants read sentences containing subject-relative or object-relative clauses. The test sentences contained semantic information that would influence online processing outcomes (Traxler, Morris, & Seely, Journal of Memory and Language, 47, 69–90, 2002; Traxler, Williams, Blozis, & Morris, Journal of Memory and Language, 53, 204–224, 2005). All of the participant groups had greater difficulty processing sentences containing object-relative clauses. This difficulty was reduced when helpful semantic cues were present. In Experiment 2, participants read active-voice and passive-voice sentences. The sentences were processed similarly by all three groups. Comprehension accuracy was higher in hearing readers than in deaf readers. Within deaf readers, native signers read the sentences faster and comprehended them to a higher degree than did nonnative signers. These results indicate that self-paced reading is a useful method for studying sentence interpretation among deaf readers.

13.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
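The 11-step continua described above lend themselves to a small sketch. Real facial morphing warps geometry as well as pixel values, so the plain linear cross-fade below, over invented placeholder arrays, illustrates only the "equal steps" weighting scheme, not the study's actual morphing software.

```python
import numpy as np

def make_continuum(img_a, img_b, n_steps=11):
    """Blend linearly from img_a to img_b in n_steps equal steps."""
    weights = np.linspace(0.0, 1.0, n_steps)   # 0.0, 0.1, ..., 1.0
    return [(1.0 - w) * img_a + w * img_b for w in weights]

# Placeholder 64x64 grayscale "expressions" standing in for photo exemplars.
neutral = np.zeros((64, 64))
expression = np.ones((64, 64))
continuum = make_continuum(neutral, expression)
assert len(continuum) == 11 and np.allclose(continuum[5], 0.5)  # midpoint morph
```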

14.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing that is similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

15.
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge. This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article.

16.
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in the absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by its functional role in language processing, rather than by its surface, spatial characteristics.

17.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs in forced-choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.

18.
The visual spatial memory of 15 deaf signers, 15 hearing signers, and 15 hearing nonsigners for shoes, faces, and verbalizable objects was measured using the game Concentration. It was hypothesized that the deaf and hearing signers would require fewer attempts than the hearing nonsigners on the shoes and faces tasks because of their experience of using a visual-spatial language and, in the case of the deaf signers, possibly also because of a compensatory mechanism. It was also hypothesized that memory for shoes would be more like that for faces than for simple objects. It was further anticipated that there would be no difference among the three groups' memory for verbalizable objects. Deaf signers were found to be similar to hearing signers, both of whom were better than hearing nonsigners on the faces and shoes tasks. Generally, performance on the faces and shoes tasks was similar and followed the same pattern for the three groups. The three groups performed at a similar level on the objects task. There were no gender differences.

19.
The memory of 11 deaf and 11 hearing British Sign Language users and 11 hearing nonsigners for pictures of faces and of verbalizable objects was measured using the game Concentration. The three groups performed at the same level for the objects. In contrast, the deaf signers were better for faces than the hearing signers, who in turn were superior to the hearing nonsigners, who performed worst. Three hypotheses were made: that there would be no significant difference in terms of the number of attempts between the three groups on the verbalizable object task; that the hearing and deaf signers would demonstrate superior performance to that of the hearing nonsigners on the matching faces task; and that the hearing and deaf signers would exhibit similar performance levels on the matching faces task. The first two hypotheses were supported, but the third was not: deaf signers' memory for faces was superior to that of both hearing signers and hearing nonsigners. Possible explanations for the findings are discussed, including the possibility that deafness and the long use of sign language have additive effects.

20.
Three experiments examined spatial transformation abilities in hearing people who acquired sign language in early adulthood. The performance of the nonnative hearing signers was compared with that of hearing people with no knowledge of sign language. The two groups were matched for age and gender. Using an adapted Corsi blocks paradigm, the experimental task simulated spatial relations in sign discourse but offered no opportunity for linguistic coding. Experiment 1 showed that the hearing signers performed significantly better than the nonsigners on a task that entailed 180° rotation, which is the canonical spatial relationship in sign language discourse. Experiment 2 found that the signers did not show the typical costs associated with processing rotated stimuli, and Experiment 3 ruled out the possibility that their advantage relied on seen hand movements. We conclude that sign language experience, even when acquired in adulthood by hearing people, can give rise to adaptations in cognitive processes associated with the manipulation of visuospatial information.
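A small sketch of the geometry behind the 180° condition may help: rotating a board position 180° about the board's centre is a point reflection, mapping (x, y) to (2cx - x, 2cy - y). The board size and coordinates below are invented for illustration, not the adapted Corsi layout used in the study.

```python
def rotate_180(point, centre):
    """Point reflection: rotate a 2-D position 180 degrees about centre."""
    (x, y), (cx, cy) = point, centre
    return (2 * cx - x, 2 * cy - y)

# A hypothetical 5x5 block board with centre (2, 2): the addressee sees the
# signer's block at (0, 1) as if it were at (4, 3) on their own side.
print(rotate_180((0, 1), (2, 2)))  # -> (4, 3)
```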
