Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
An Experimental Study of Visual Image Generation Ability in Deaf Sign Language Users   Cited by: 2 (self-citations: 0, others: 2)
Using a visual imagery judgment experiment, the visual image generation abilities of deaf sign language users and hearing participants were compared. The experiment found that, compared with hearing participants, deaf signers took less time to learn and memorize uppercase letters, and both groups took longer to memorize complex letters; deaf and hearing participants used the same way of representing the letters. However, the age at which sign language was acquired had no clear effect on deaf signers' image generation ability.

2.
Two experiments examined the role of semantic class markers in Chinese Sign Language (CSL) word recognition and semantic retrieval. Experiment 1 used a sign lexical decision task to compare recognition of signs with and without semantic class markers. Experiment 2 used a semantic decision task to explore the effect of semantic class markers on semantic retrieval of signs. The results showed that semantic class markers affect deaf students' recognition and semantic retrieval of signs: deaf students recognized signs with semantic class markers significantly faster than signs without them. When a semantic class marker was consistent with the sign's meaning, it facilitated deaf students' semantic retrieval of the sign; when it was inconsistent, it interfered with retrieval. The discovery of a semantic class marker effect in CSL enriches theories of CSL lexical cognition and has important implications for language and concept instruction for deaf people.

3.
曹宇, 李恒. 《心理科学》 (Psychological Science), 2021(1): 67-73
A primed lexical decision task was used to examine cross-modal semantic priming in proficient sign language users and hearing adults with no sign language experience. The results showed: 1) In the iconic sign condition, both groups responded faster to semantically related Chinese words than to unrelated words, indicating cross-modal semantic priming between iconic signs and Chinese words. 2) In the non-iconic sign condition, only the proficient signers responded faster to semantically related Chinese words than to unrelated words; participants with no sign language experience showed no difference between related and unrelated words. This is because signs and spoken words share semantic representations in the mental lexicon of the former group, whereas the latter group relies mainly on the visual iconicity of iconic signs. Overall, the study shows that cross-modal semantic priming exists between Chinese Sign Language and Chinese, but the effect is modulated by sign iconicity and sign language learning experience.
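A note on how such effects are quantified: a cross-modal semantic priming effect of the kind described above is conventionally the difference in mean reaction time between semantically unrelated and related targets, computed per group and prime condition. The sketch below is illustrative only, not the authors' analysis; the trial data and column layout are hypothetical.

```python
# Illustrative only: quantifying a semantic priming effect as
# mean RT(unrelated) - mean RT(related), per group and prime type.
# The trial data below are hypothetical, not from the study.
from statistics import mean

# Each trial: (group, prime_type, relatedness, reaction_time_ms)
trials = [
    ("signer", "iconic", "related", 612), ("signer", "iconic", "unrelated", 655),
    ("signer", "non-iconic", "related", 630), ("signer", "non-iconic", "unrelated", 668),
    ("non-signer", "iconic", "related", 702), ("non-signer", "iconic", "unrelated", 741),
    ("non-signer", "non-iconic", "related", 725), ("non-signer", "non-iconic", "unrelated", 728),
]

def priming_effect(group: str, prime_type: str) -> float:
    """Mean RT(unrelated) - mean RT(related); positive values indicate facilitation."""
    rts = {"related": [], "unrelated": []}
    for g, p, relatedness, rt in trials:
        if g == group and p == prime_type:
            rts[relatedness].append(rt)
    return mean(rts["unrelated"]) - mean(rts["related"])

for group in ("signer", "non-signer"):
    for prime_type in ("iconic", "non-iconic"):
        print(group, prime_type, priming_effect(group, prime_type), "ms")
```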

4.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

5.
Bimodal bilinguals are hearing individuals who know both a signed and a spoken language. Effects of bimodal bilingualism on behavior and brain organization are reviewed, and an fMRI investigation of the recognition of facial expressions by ASL-English bilinguals is reported. The fMRI results reveal separate effects of sign language and spoken language experience on activation patterns within the superior temporal sulcus. In addition, the strong left-lateralized activation for facial expression recognition previously observed for deaf signers was not observed for hearing signers. We conclude that both sign language experience and deafness can affect the neural organization for recognizing facial expressions, and we argue that bimodal bilinguals provide a unique window into the neurocognitive changes that occur with the acquisition of two languages.

6.
Shand (Cognitive Psychology, 1982, 14, 1-12) hypothesized that strong reliance on a phonetic code by hearing individuals in short-term memory situations reflects their primary language experience. As support for this proposal, Shand reported an experiment in which deaf signers' recall of lists of printed English words was poorer when the American Sign Language translations of those words were structurally similar than when they were structurally unrelated. He interpreted this result as evidence that the deaf subjects were recoding the printed words into sign, reflecting their primary language experience. This primary language interpretation is challenged in the present article first by an experiment in which a group of hearing subjects showed a similar recall pattern on Shand's lists of words, and second by a review of the literature on short-term memory studies with deaf subjects. The literature survey reveals that whether or not deaf signers recode into sign depends on a variety of task and subject factors, and that, contrary to the primary language hypothesis, deaf signers may recode into a phonetic code in short-term recall.

7.
Language experience plays an important role in shaping the development of brain function and structure. However, current evidence comes mainly from studies of language rehabilitation in aphasic patients with brain injury, second language learning, and language training in adult readers. Early language experience in childhood has an even more important influence on the development of brain structure and function, but direct research evidence is quite scarce. This paper proposes a research plan that would combine multiple brain imaging techniques to systematically examine differences between deaf individuals with and without early sign language experience in the organization of cortical language function and in structural brain development, including activation patterns of cortical language areas during language tasks, default-network characteristics of resting-state functional connectivity, cortical gray matter density, and the development of nerve fiber tracts, thereby revealing how early language experience shapes the development of brain function and structure.

8.
In two studies, we find that native and non-native acquisition show different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), in both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus independent of lexical and sentential meaning. Semantic lexical changes were positively correlated to processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in the sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape such that they have less attention available for retrieval and memory of lexical meaning.

9.
To investigate whether formational properties of sign language are used spontaneously to organize long-term memory, 16 deaf college students were given a free recall task with items that could be categorized either by shared semantic category or by shared sign language hand shape. Both presentation and response modes (signed or written) were varied between subjects. Analyses revealed no effects of mode on trials to criterion or number of items recalled at 1 week. The clustering that occurred was exclusively semantic, with significantly higher clustering scores during acquisition trials in subjects required to sign their responses. In Experiment 2, formational clustering was encouraged by including formational similarity as the only experimenter-defined basis of categorization, by increasing formational similarity within categories, and by testing only subjects with high signing skills. Input and output modes were again varied between subjects. Subjects were deaf college students with deaf parents (n = 10) or hearing parents (n = 16), and hearing adults with deaf parents (n = 8). Again, spontaneous clustering by formational similarity was extremely low. In only one case—deaf subjects with hearing parents given signed input—did formational clustering increase significantly across the eight acquisition trials. After the categorical nature of the list was explained to subjects at a 1-week retention session, all groups clustered output by formational categories. Apparently, fluent signers do have knowledge of the formational structure of signs, but do not spontaneously use this knowledge as a basis of mnemonic organization in long-term memory.
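The abstract above refers to "clustering scores" without naming the index; a common choice in free-recall research is the Adjusted Ratio of Clustering (ARC), which rescales the number of adjacent same-category recalls so that 0 corresponds to chance and 1 to perfect clustering. The sketch below is a hedged illustration under that assumption, with a hypothetical recall protocol; it is not the scoring procedure used in the study.

```python
# Hypothetical sketch of one common clustering index, the Adjusted Ratio of
# Clustering (ARC): (R - E[R]) / (maxR - E[R]), where R is the number of
# adjacent same-category pairs in the recall output, E[R] = sum(n_i^2)/N - 1
# is its chance expectation, and maxR = N - k is its maximum.
from collections import Counter

def arc_score(recall_categories):
    """recall_categories: category label of each recalled item, in output order."""
    n = len(recall_categories)
    counts = Counter(recall_categories)
    k = len(counts)
    r = sum(1 for a, b in zip(recall_categories, recall_categories[1:]) if a == b)
    expected_r = sum(c * c for c in counts.values()) / n - 1
    max_r = n - k
    if max_r == expected_r:  # degenerate case: chance level equals the maximum
        return float("nan")
    return (r - expected_r) / (max_r - expected_r)

# Example: an output sequence perfectly clustered by semantic category -> ARC = 1.0
print(arc_score(["animal", "animal", "food", "food", "food", "tool", "tool"]))
```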

10.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in the auditory primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
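For readers unfamiliar with the term, the neighborhood density of a sign is usually operationalized as the number of other lexical entries that share a given phonological parameter value (e.g., handshape or location). The toy sketch below illustrates that counting scheme only; the miniature lexicon and parameter codes are hypothetical and not drawn from any actual ASL database or from this study's materials.

```python
# Toy illustration of phonological neighborhood density for signs: the number
# of other lexicon entries sharing a given parameter value (handshape or
# location). The lexicon and parameter codes below are hypothetical.
lexicon = {
    "MOTHER": {"handshape": "5", "location": "chin"},
    "FATHER": {"handshape": "5", "location": "forehead"},
    "FINE":   {"handshape": "5", "location": "chest"},
    "CANDY":  {"handshape": "1", "location": "cheek"},
    "APPLE":  {"handshape": "X", "location": "cheek"},
}

def density(target: str, parameter: str) -> int:
    """Count other signs sharing the target's value on one parameter."""
    value = lexicon[target][parameter]
    return sum(
        1 for sign, params in lexicon.items()
        if sign != target and params[parameter] == value
    )

print("MOTHER handshape density:", density("MOTHER", "handshape"))  # 2 (FATHER, FINE)
print("CANDY location density:", density("CANDY", "location"))      # 1 (APPLE)
```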

11.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing that is similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas, in contrast, the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

12.
Despite years of research on the reading problems of deaf students, we still do not know how deaf signers who read well actually crack the code of print. How connections are made between sign language and written language is still an open question. In this article, we show how the Noldus Observer XT software can be used to conduct an in-depth analysis of the online behavior of deaf readers. First, we examine factors that may have an impact on reading behavior. Then, we describe how we videotaped teachers with their deaf student signers of langue des signes québécoise during a reading task, how we conducted a recall activity to better understand the students’ reading behavior, and how we used this innovative software to analyze the taped footage. Finally, we discuss the impact this type of research can have on the future reading behavior of deaf students.

13.
Recently, we reported a strong right visual field/left hemisphere advantage for motion processing in deaf signers and a slight reverse asymmetry in hearing nonsigners (Bosworth & Dobkins, 1999). This visual field asymmetry in deaf signers may be due to auditory deprivation or to experience with a visual-manual language, American Sign Language (ASL). In order to separate these two possible sources, in this study we added a third group, hearing native signers, who have normal hearing and have learned ASL from their deaf parents. As in our previous study, subjects performed a direction-of-motion discrimination task at different locations across the visual field. In addition to investigating differences in left vs right visual field asymmetries across subject groups, we also asked whether performance differences exist for superior vs inferior visual fields and peripheral vs central visual fields. Replicating our previous study, a robust right visual field advantage was observed in deaf signers, but not in hearing nonsigners. Like deaf signers, hearing signers also exhibited a strong right visual field advantage, suggesting that this effect is related to experience with sign language. These results suggest that perceptual processes required for the acquisition and comprehension of language (motion processing in the case of ASL) are recruited by the left, language-dominant, hemisphere. Deaf subjects also exhibited an inferior visual field advantage that was significantly larger than that observed in either hearing group. In addition, there was a trend for deaf subjects to perform relatively better on peripheral than on central stimuli, while both hearing groups showed the reverse pattern. Because deaf signers differed from hearing signers and nonsigners along these domains, the inferior and peripheral visual field advantages observed in deaf subjects are presumably related to auditory deprivation. Finally, these visual field asymmetries were not modulated by attention for any subject group, suggesting they are a result of sensory, and not attentional, factors.

14.
Most people born deaf and exposed to oral language show scant evidence of sensitivity to the phonology of speech when processing written language. In this respect they differ from hearing people. However, occasionally, a prelingually deaf person can achieve good processing of written language in terms of phonological sensitivity and awareness, and in this respect appears exceptional. We report the pattern of event-related fMRI activation in one such deaf reader, SR, while performing a rhyme judgment on written words with similar spelling endings that do not provide rhyme clues. The left inferior frontal gyrus pars opercularis and the left inferior parietal lobe showed greater activation for this task than for a letter-string identity matching task. This participant was special in this regard, showing significantly greater activation in these regions than a group of hearing participants with a similar level of phonological and reading skill. In addition, SR showed activation in the left mid-fusiform gyrus, a region which did not show task-specific activation in the other respondents. The pattern of activation in this exceptional deaf reader was also unique compared with three deaf readers who showed limited phonological processing. We discuss the possibility that this pattern of activation may be critical in relation to phonological decoding of the written word in good deaf readers whose phonological reading skills are indistinguishable from those of hearing readers.

15.
This paper examines the impact of auditory deprivation and sign language use on the enhancement of location memory and hemispheric specialization using two matching tasks. Forty-one deaf signers and non-signers and 51 hearing signers and non-signers were tested on location memory for shapes and objects (Study 1) and on categorical versus coordinate spatial relations (Study 2). Results of the two experiments converge to suggest that deafness alone supports the atypical left hemispheric preference in judging the location of a circle or a picture on a blank background and that deafness and sign language experience determine the superior ability of memory for location. The importance of including a sample of deaf non-signers was identified.

16.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

17.
昝飞, 谭和平. 《心理科学》 (Psychological Science), 2005, 28(5): 1089-1095
This study used four types of Chinese character pairs as experimental materials: homophonic and orthographically similar, homophonic and orthographically dissimilar, non-homophonic and orthographically similar, and unrelated characters. Each type was further divided into low-, medium-, and high-frequency characters according to frequency of use. Homophone judgment and priming experiments were conducted with deaf students who use sign language and deaf students who use oral language, in order to explore the role of phonological coding in deaf students' Chinese character recognition. The results showed that, in deaf students' character recognition, perceptual processing of orthographic form plays a very important role in retrieving phonology, but phonological retrieval is very difficult for deaf students. Character frequency affected homophone judgment performance differently across character types, indicating that deaf students' phonological awareness differs for different characters. Deaf students showed both phonological and orthographic confusion in character recognition, indicating that both phonological and orthographic coding play important roles in the recognition process. The effect of character frequency also differed: high-frequency characters showed effects of phonological features, low-frequency characters showed effects of orthographic features, and medium-frequency characters showed effects of neither phonological nor orthographic features.

18.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

19.
Temporal processing in deaf signers   Cited by: 4 (self-citations: 0, others: 4)
The auditory and visual modalities differ in their capacities for temporal analysis, and speech relies on more rapid temporal contrasts than does sign language. We examined whether congenitally deaf signers show enhanced or diminished capacities for processing rapidly varying visual signals in light of the differences in sensory and language experience of deaf and hearing individuals. Four experiments compared rapid temporal analysis in deaf signers and hearing subjects at three different levels: sensation, perception, and memory. Experiment 1 measured critical flicker frequency thresholds and Experiment 2, two-point thresholds to a flashing light. Experiments 3-4 investigated perception and memory for the temporal order of rapidly varying nonlinguistic visual forms. In contrast to certain previous studies, specifically those investigating the effects of short-term sensory deprivation, no significant differences between deaf and hearing subjects were found at any level. Deaf signers do not show diminished capacities for rapid temporal analysis, in comparison to hearing individuals. The data also suggest that the deficits in rapid temporal analysis reported previously for children with developmental language delay cannot be attributed to lack of experience with speech processing and production.

20.
Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known of how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.
