Similar Articles
20 similar articles found.
1.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories in both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers' response times for affective facial expressions: viewing linguistic facial expressions first slowed response times for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
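As an aside, the equal-step linear blend that defines such a continuum's spacing is easy to make concrete. A minimal sketch follows, assuming two same-sized end-point photographs; the file names and the morph_continuum helper are hypothetical, and true photographic morphs additionally warp facial landmarks before blending, which this sketch omits.

```python
# Hypothetical sketch: an 11-image continuum changing linearly from one
# end-point expression photo to the other in equal steps. Real morphing
# software also warps facial landmarks; only the pixel blend is shown.
import numpy as np
from PIL import Image

def morph_continuum(img_a, img_b, steps=11):
    """Return `steps` images interpolating linearly from img_a to img_b."""
    a = np.asarray(img_a, dtype=np.float64)  # both images must share a size
    b = np.asarray(img_b, dtype=np.float64)
    frames = []
    for i in range(steps):
        w = i / (steps - 1)                  # equal-step weights 0.0 ... 1.0
        frames.append(Image.fromarray(((1 - w) * a + w * b).astype(np.uint8)))
    return frames

# Hypothetical file names for the two end-point exemplars.
continuum = morph_continuum(Image.open("expression_a.png"),
                            Image.open("expression_b.png"))
```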

2.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners in these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting that they are a result of sensory, and not attentional, factors.

3.
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in the absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by their functional role in language processing, rather than by their surface, spatial characteristics.

4.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas, in contrast, the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

5.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

6.
Functional hemispheric specialization in recognizing faces expressing emotions was investigated in 18 normal-hearing and 18 congenitally deaf children aged 13-14 years. Three kinds of faces were presented: happy (expressing positive emotion), sad (expressing negative emotion), and neutral. The subjects' task was to recognize the test face, exposed for 20 msec in the left or right visual field, by pointing at the exposed stimulus on a response card that contained three different faces. Errors committed when faces were exposed in the left and right visual fields were analyzed. In the control group the right hemisphere dominated in the case of sad and neutral faces; there were no significant differences in recognition of happy faces. The differentiated pattern of hemispheric organization in normal-hearing persons supports the hypothesis of different processing of positive and negative emotions expressed by faces. The observed hemispheric asymmetry was a result of two factors: (1) processing of faces as complex patterns requiring visuo-spatial analysis, and (2) processing of the emotions contained in them. Functional hemispheric asymmetry was not observed in the group of deaf children for any kind of emotion expressed in the presented faces. The results suggest that lack of auditory experience influences the organization of functional hemispheric specialization. It can be supposed that in deaf children the analysis of information contained in emotional faces takes place in both hemispheres.

7.
A visual hemifield experiment investigated hemispheric specialization among hearing children and adults and prelingually, profoundly deaf youngsters who had been exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, whose development of the phonology and grammar of the spoken language resembles that of hearing youngsters, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task no VF advantage was observed. A RVF (left hemisphere) advantage was obtained for both the deaf and the hearing subjects in the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of a language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, a RVF advantage was obtained for the hearing subjects, but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left-lateralised processing of rhymes.

8.
This paper examines the impact of auditory deprivation and sign language use on the enhancement of location memory and on hemispheric specialization, using two matching tasks. Forty-one deaf signers and non-signers and 51 hearing signers and non-signers were tested on location memory for shapes and objects (Study 1) and on categorical versus coordinate spatial relations (Study 2). Results of the two experiments converge to suggest that deafness alone supports the atypical left-hemispheric preference in judging the location of a circle or a picture on a blank background, and that deafness and sign language experience together determine the superior memory for location. The findings underscore the importance of including a sample of deaf non-signers.

9.
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge.

This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article.

10.
Temporal processing in deaf signers
The auditory and visual modalities differ in their capacities for temporal analysis, and speech relies on more rapid temporal contrasts than does sign language. We examined whether congenitally deaf signers show enhanced or diminished capacities for processing rapidly varying visual signals in light of the differences in sensory and language experience of deaf and hearing individuals. Four experiments compared rapid temporal analysis in deaf signers and hearing subjects at three different levels: sensation, perception, and memory. Experiment 1 measured critical flicker frequency thresholds and Experiment 2, two-point thresholds to a flashing light. Experiments 3-4 investigated perception and memory for the temporal order of rapidly varying nonlinguistic visual forms. In contrast to certain previous studies, specifically those investigating the effects of short-term sensory deprivation, no significant differences between deaf and hearing subjects were found at any level. Deaf signers do not show diminished capacities for rapid temporal analysis, in comparison to hearing individuals. The data also suggest that the deficits in rapid temporal analysis reported previously for children with developmental language delay cannot be attributed to lack of experience with speech processing and production.

11.
Groups of deaf subjects, exposed to tachistoscopic bilateral presentation of English words and American Sign Language (ASL) signs, showed weaker right visual half-field (VHF) superiority for words than hearing comparison groups with both a free-recall and matching response. Deaf subjects showed better, though nonsignificant, recognition of left VHF signs with bilateral presentation of signs but shifted to superior right VHF response to signs when word-sign combinations were presented. Cognitive strategies and hemispheric specialization for ASL are discussed as possible factors affecting half-field asymmetry.

12.
The purpose of this study was to investigate hemispheric functional asymmetry in 18 normal-hearing children and 18 congenitally deaf children aged 13-14 years. The task was identification of a visual stimulus (a 3-letter word or a photograph of a face) presented in either the left or right visual field. The children responded by pointing to the target stimulus on a response card which contained four different words or three different faces. The percentage of errors for presentations to the two visual fields was analysed to determine hemispheric dominance. The pattern of hemispheric differences for the hearing children was consistent with that from previous investigations. The results for the deaf children differed: in word perception we observed a right hemisphere advantage, and in face recognition a lack of hemispheric differences. These results point to a lack of auditory experience affecting the functional organization of the two hemispheres. It is suggested that the necessity to make use of visuo-spatial information in the process of communication causes right hemisphere dominance in verbal tasks. This may influence the perception of other visuo-spatial stimuli, yielding a lack of hemispheric asymmetry in face recognition.

13.
Corina DP, Grosvald M. Cognition, 2012, 122(3): 330-345
In this paper, we compare responses of deaf signers and hearing non-signers engaged in a categorization task of signs and non-linguistic human actions. We examine the time it takes to make such categorizations under conditions of 180° stimulus inversion and as a function of repetition priming, in an effort to understand whether the processing of sign language forms draws upon special processing mechanisms or makes use of mechanisms used in recognition of non-linguistic human actions. Our data show that deaf signers were much faster in the categorization of both linguistic and non-linguistic actions, and relative to hearing non-signers, show evidence that they were more sensitive to the configural properties of signs. Our study suggests that sign expertise may lead to modifications of a general-purpose human action recognition system rather than evoking a qualitatively different mode of processing, and supports the contention that signed languages make use of perceptual systems through which humans understand or parse human actions and gestures more generally.

14.
A group of congenitally deaf adults and a group of hearing adults, both fluent in sign language, were tested to determine cerebral lateralization. In the most revealing task, subjects were given a series of trials in which they were first presented with a videotaped sign and then with a word exposed tachistoscopically to the right visual field or left visual field, and were required to judge whether the word corresponded to the sign or not. The results suggested that the comparison processes involved in the decision were performed more efficiently by the left hemisphere for hearing subjects and by the right hemisphere for deaf subjects. However, the deaf subjects performed as well as the hearing subjects in the left hemisphere, suggesting that the deaf are not impeded by their auditory-speech handicap from developing the left hemisphere for at least some types of linguistic processing.

15.
昝飞, 谭和平. 心理科学, 2004, 27(1): 80-83
To examine the characteristics of Chinese character processing in deaf students receiving vocabulary through different sensory channels, and the role played by auditory coding, this study used three types of words as experimental materials — auditory words, visual words, and words from other sensory modalities — and administered two experiments, an old/new word judgment task and an inclusion/exclusion task, to a group of deaf students who use sign language, a group of deaf students who use oral language, and a group of college students. The experiments showed that deaf students not only use visual coding in Chinese character processing but also use auditory coding unconsciously and automatically, as revealed by the implicit test. It can therefore be inferred that deaf students' difficulty in processing Chinese characters lies in their weaker ability to use auditory coding consciously.

16.
To examine the claim that phonetic coding plays a special role in temporal order recall, deaf and hearing college students were tested on their recall of temporal and spatial order information at two delay intervals. The deaf subjects were all native signers of American Sign Language. The results indicated that both the deaf and hearing subjects used phonetic coding in short-term temporal recall, and visual coding in spatial recall. There was no evidence of manual or visual coding among either the hearing or the deaf subjects in the temporal order recall task. The use of phonetic coding for temporal recall is consistent with the hypothesis that recall of temporal order information is facilitated by a phonetic code.

17.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage for the English words; likewise, the deaf subjects showed a right-hemisphere advantage for the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

18.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.
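For readers unfamiliar with the measure, a serial-position curve is simply mean recall accuracy at each list position, averaged over trials; primacy and recency effects are read off its two ends. Below is a minimal sketch under strict serial scoring (an item counts only if recalled in its presented position); the function and data are hypothetical, not the study's analysis code.

```python
# Hypothetical sketch: compute a serial-position curve from per-trial
# presented and recalled lists under strict serial (position) scoring.
import numpy as np

def serial_position_curve(presented, recalled):
    """Mean recall accuracy at each position, averaged across trials."""
    presented = np.asarray(presented)
    recalled = np.asarray(recalled)   # same shape; 0 marks a failed recall
    return (presented == recalled).mean(axis=0)

# Two hypothetical 5-item trials.
curve = serial_position_curve([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]],
                              [[1, 2, 0, 4, 5], [6, 0, 0, 9, 10]])
print(curve)  # higher values at the ends would show primacy and recency
```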

19.
Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known of how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.

20.
Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth, R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49, 170-181; Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: Correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311] reported a small, non-significant RVF advantage for deaf signers when short duration motion stimuli were used (200-250 ms). Samar and Parasnis (2005) reported that this small RVF advantage became significant when non-verbal IQ was statistically controlled. This paper presents extended analyses of the correlation between non-verbal IQ and visual field asymmetries in the data set of Samar and Parasnis (2005). We speculate that this correlation might plausibly be driven by individual differences either in age of acquisition of American Sign Language (ASL) or in the degree of neurodevelopmental insult associated with various etiologies of deafness. Limited additional analyses are presented that indicate a need for further research on the cause of this apparent IQ-laterality relationship. Some potential implications of this relationship for lateralization studies of deaf signers are discussed. Controlling non-verbal IQ may improve the reliability of short duration coherent motion tasks to detect adaptive dorsal stream lateralization due to exposure to ASL in deaf research participants.
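As background, "statistically controlling" non-verbal IQ in such a correlation analysis is commonly done with a partial correlation: correlate the residuals of both variables after regressing each on IQ. A minimal sketch follows; the variable names and simulated data are hypothetical and do not reproduce the original analyses.

```python
# Hypothetical sketch: partial correlation of a visual field asymmetry
# measure with another variable, controlling for non-verbal IQ.
import numpy as np

def partial_corr(x, y, covariate):
    """Pearson correlation of x and y after regressing out a covariate."""
    z = np.column_stack([np.ones_like(covariate), covariate])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]  # residualize x on IQ
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]  # residualize y on IQ
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical per-subject data: RVF-minus-LVF advantage scores and IQ.
rng = np.random.default_rng(0)
iq = rng.normal(100, 15, size=40)
rvf_advantage = 0.03 * (iq - 100) + rng.normal(0, 1, size=40)
motion_threshold = -0.02 * (iq - 100) + rng.normal(0, 1, size=40)
print(partial_corr(rvf_advantage, motion_threshold, iq))
```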
