Similar Articles
20 similar articles found (search time: 31 ms)
1.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). To separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs. right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs. inferior and peripheral vs. central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from both hearing signers and hearing nonsigners in these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention in any subject group, suggesting that they result from sensory, not attentional, factors.

2.
Abstract: In the first half of this paper, experimental investigations of memory and cognition in deaf signers are reviewed to show how deaf signers rely on sign-based coding when processing linguistic information. It is suggested that deaf signers strategically employ a set of originally separate memory strategies that draw on multiple components of working memory. In the second half of the paper, the author discusses factors that could contribute to a sign language advantage, arguing that deaf signers' cognitive activities are deeply rooted in their interaction with the environment. Concrete examples of Japanese Sign Language signs and their use are provided to support this hypothesis.

3.
An experimental study of visual image generation in deaf signers
Using a visual imagery judgment experiment, the visual image generation abilities of deaf signers and hearing participants were compared. The experiment found that, relative to hearing participants, deaf signers took less time to learn and memorize uppercase letters, and both groups took longer to memorize complex letters; deaf and hearing participants used the same form of letter representation. However, the age at which sign language was acquired had no clear effect on deaf signers' image generation ability.

4.
The memory of 11 deaf and 11 hearing British Sign Language users and 11 hearing nonsigners for pictures of faces and of verbalizable objects was measured using the game Concentration. Three hypotheses were tested: that there would be no significant difference in the number of attempts between the three groups on the verbalizable-object task; that the hearing and deaf signers would outperform the hearing nonsigners on the face-matching task; and that the hearing and deaf signers would perform at similar levels on the face-matching task. The three groups performed at the same level for the objects. In contrast, the deaf signers were better for faces than the hearing signers, who in turn were superior to the hearing nonsigners, who were the worst. Thus the first two hypotheses were supported, but the third was not: deaf signers' memory for faces was superior to that of both hearing signers and hearing nonsigners. Possible explanations for the findings are discussed, including the possibility that deafness and long-term use of sign language have additive effects.

5.
Representations of the fingers are embodied in our cognition and influence performance in enumeration tasks. Among deaf signers, the fingers also serve as a tool for communication in sign language. Previous studies with normal hearing (NH) participants showed effects of embodiment (i.e., embodied numerosity) on tactile enumeration using the fingers of one hand. In this research, we examined the influence of extensive visuo-manual use on tactile enumeration among the deaf. We carried out four enumeration task experiments, using 1–5 stimuli, on a profoundly deaf group (n = 16) and a matched NH group (n = 15): (a) tactile enumeration using one hand, (b) tactile enumeration using two hands, (c) visual enumeration of finger signs, and (d) visual enumeration of dots. In the tactile tasks, we found salient embodied effects in the deaf group compared to the NH group. In the visual enumeration of finger signs task, we controlled for the meaning of the presented hand configurations (finger-counting signs, fingerspelled letters, both, or neither). Interestingly, when comparing fingerspelled letters to neutral configurations (i.e., neither letters nor numerical finger-counting signs), an inhibition pattern was observed among the deaf. The findings uncover the influence of rich visuo-manual experience and language on embodied representations. In addition, we propose that these influences can partially account for the lag in mathematical competencies in the deaf compared to NH peers. Lastly, we discuss how our findings support a contemporary model of mental numerical representations and finger-counting habits.

6.
Cognitive Development, 2005, 20(2): 159–172
Recent studies with "late-signing" deaf children (deaf children born into families in which no one uses a sign language) have indicated that they have difficulty performing tasks that require them to reason about other people's false beliefs. However, virtually no research has so far investigated how far late signers' difficulties with mental state understanding extend. This paper reports a study that uses an imitation paradigm to examine whether late signers may also have difficulty interpreting other people's actions in terms of their goals. Both late-signing (N = 15) and second-generation "native-signing" deaf children (N = 19) produced a pattern of responses indicating that they can, and readily do, view the actions of others as goal-directed. We conclude that this form of mental state understanding (generally seen as a precursor to understanding false beliefs) is intact in late-signing deaf children.

7.
ABSTRACT

Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known about how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of elderly hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition with a temporal style of presentation. A reanalysis together with earlier data showed that, with the temporal style of presentation, order recognition performance of EDS was also lower than that of young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that, apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information in the Mixed style of presentation enhanced order processing in all groups, suggesting that concurrent temporal and spatial cues may aid learning in both deaf and hearing groups. These findings support and extend the working memory model for Ease of Language Understanding.

8.
In two studies, we find that native and non-native acquisition have different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), in both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning, independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus, independent of lexical and sentential meaning. Semantic lexical changes were positively correlated with processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape, such that they have less attention available for retrieval and memory of lexical meaning.

9.
ERPs were recorded from deaf and hearing native signers, and from hearing subjects who acquired ASL late or not at all, as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions for aspects of grammatical and semantic processing, and a bias for the left hemisphere to mediate aspects of mnemonic function in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex, and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

10.
Hu Z, Wang W, Liu H, Peng D, Yang Y, Li K, Zhang JX, Ding G. Brain and Language, 2011, 116(2): 64–70
Effective literacy education for deaf students calls for psycholinguistic research revealing the cognitive and neural mechanisms underlying their written language processing. When learning a written language, deaf students are often instructed to sign out printed text. The present fMRI study was intended to reveal the neural substrates associated with word signing by comparing it with picture signing. Native deaf signers were asked to overtly sign in Chinese Sign Language (CSL) common objects indicated with written words or presented as pictures. Except in the left inferior frontal gyrus and inferior parietal lobule, where word signing elicited greater activation than picture signing, the two tasks engaged a highly overlapping set of brain regions previously implicated in sign production. The results suggest that word signing in deaf signers relies on meaning activation from printed visual forms, followed by production processes from meaning to signs similar to those in picture signing. The present study also documents the basic brain activation pattern for sign production in CSL and supports the notion of a universal core neural network for sign production across different sign languages.

11.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation between language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, processed visual-spatial relationships appropriately, whereas the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specializations for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

12.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

13.
Because of hearing loss, deaf students face certain difficulties in reading, and how to improve their reading efficiency is a question of significant practical value. Using eye-tracking technology, this study examined whether marking alternating words with color facilitates passage reading in 29 deaf students in the upper grades of primary school. Both global and local analyses of the eye-movement measures showed that text with color-alternated word marking effectively improved these students' passage reading efficiency. The results have implications for training programs aimed at improving reading efficiency in upper-primary deaf students.

15.
Language experience plays an important role in shaping the development of brain function and structure. However, current evidence comes mainly from studies of language recovery in aphasic patients with brain injury, of second language learning, and of language training in adult readers. Early language experience in childhood has an even more important influence on brain structural and functional development, yet direct evidence is quite scarce. This paper proposes a research plan that combines multiple neuroimaging techniques to systematically examine differences between deaf individuals with and without early sign language experience in the cortical organization of language function and in structural brain development, including activation patterns in language areas during language tasks, default-network characteristics of resting-state functional connectivity, cortical gray matter density, and the development of white matter fiber tracts, thereby revealing how early language experience shapes the developing brain.

16.
Reading difficulties are common among deaf readers, and using eye-tracking technology to explore fundamental questions about deaf reading has become a new trend; deaf readers show distinctive eye-movement patterns during reading. Based on a review of previous eye-movement research on deaf reading, the following directions for future research are proposed: (1) the broad application of eye-tracking technology is a new trend in research on deaf reading; (2) the similarities and differences between Chinese and foreign deaf readers' reading processes should be explored from a cross-cultural perspective; (3) the relationship between deaf readers' visual attention and their language processing should be examined; (4) eye-tracking technology should be used to assess deaf readers' sign language processing efficiency.

17.
The aim of the present study was to investigate the role of executive functions (EF) in theory-of-mind (ToM) performance in deaf children and adolescents. Four groups of deaf children aged 7–16 years, with different language backgrounds at home and at school (bilingually instructed native signers, oralist-instructed native signers, and two groups of bilingually instructed late signers from Sweden and Estonia, respectively), were given eight ToM and four EF measures. The bilingually instructed native signers performed at a significantly higher level on the ToM measures than the other groups of deaf children. On the EF measures, there were no significant differences between any of the groups, with one exception: the Swedish bilingual late signers had a significantly shorter average reaction time on the go/no-go inhibition task than the oralist native signers and the Estonian bilingual late signers. However, the Swedish children's better EF performance was not mirrored in better performance on ToM tasks. Our results indicate that despite all the deaf children's good general cognitive abilities, there were still differences in their performance on ToM tasks that need to be explained in other terms. Thus, whatever the cause of late signers' difficulties with ToM, poor EF skills seem to be of minor importance.

18.
Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were exposed to language from birth (Deaf native signers) or were not (oral cochlear implant users), and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. None of the three groups shows evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]

19.
This study was designed to determine the feasibility of using self-paced reading methods to study deaf readers and to assess how deaf readers respond to two syntactic manipulations. Three groups of participants read the test sentences: deaf readers, hearing monolingual English readers, and hearing bilingual readers whose second language was English. In Experiment 1, the participants read sentences containing subject-relative or object-relative clauses. The test sentences contained semantic information that would influence online processing outcomes (Traxler, Morris, & Seely, 2002, Journal of Memory and Language, 47, 69–90; Traxler, Williams, Blozis, & Morris, 2005, Journal of Memory and Language, 53, 204–224). All of the participant groups had greater difficulty processing sentences containing object-relative clauses, and this difficulty was reduced when helpful semantic cues were present. In Experiment 2, participants read active-voice and passive-voice sentences. The sentences were processed similarly by all three groups. Comprehension accuracy was higher in hearing readers than in deaf readers. Among deaf readers, native signers read the sentences faster and comprehended them better than did nonnative signers. These results indicate that self-paced reading is a useful method for studying sentence interpretation among deaf readers.

20.
This study investigated serial recall by congenitally, profoundly deaf signers for visually presented linguistic information in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli: there was a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall by the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly lower than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of the speech-based code used by hearing individuals for retention of ordered linguistic information, and by the relatively limited speech experience of congenitally, profoundly deaf individuals.
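The recency advantage and primacy disadvantage described in this abstract are conventionally read off a serial-position curve. As a minimal sketch of how such effects are quantified (using synthetic accuracy values, not the study's data), primacy and recency can be expressed as accuracy boosts at the list edges relative to a middle-position baseline:

```python
# Illustrative serial-position analysis. The accuracy values below are
# synthetic placeholders, not results from the study.
curve = [0.90, 0.80, 0.65, 0.55, 0.50, 0.55, 0.75]  # proportion correct per list position

middle = sum(curve[2:-2]) / len(curve[2:-2])  # baseline: middle positions only
primacy = curve[0] - middle   # accuracy boost at the first position
recency = curve[-1] - middle  # accuracy boost at the last position

print(round(primacy, 3), round(recency, 3))  # → 0.333 0.183
```

A changing-state condition with a recency advantage and primacy disadvantage would show a larger `recency` and smaller (or negative) `primacy` than a static condition on the same analysis.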
