Similar documents
20 similar documents found (search time: 46 ms)
1.
Two experiments were conducted on short-term recall of printed English words by deaf signers of American Sign Language (ASL). Compared with hearing subjects, deaf subjects recalled significantly fewer words when ordered recall of words was required, but not when free recall was required. Deaf subjects tended to use a speech-based code in probed recall for order, and the greater the reliance on a speech-based code, the more accurate the recall. These results are consistent with the hypothesis that a speech-based code facilitates the retention of order information.

2.
In order to reveal the psychological representation of movement from American Sign Language (ASL), deaf native signers and hearing subjects unfamiliar with sign were asked to make triadic comparisons of movements that had been isolated from lexical and from grammatically inflected signs. An analysis of the similarity judgments revealed a small set of physically specifiable dimensions that accounted for most of the variance. The dimensions underlying the perception of lexical movement were in general different from those underlying inflectional movement, for both groups of subjects. Most strikingly, deaf and hearing subjects significantly differed in their patterns of dimensional salience for movements, both at the lexical and at the inflectional levels. Linguistically relevant dimensions were of increased salience to native signers. The difference in perception of linguistic movement by native signers and by naive observers demonstrates that modification of natural perceptual categories after language acquisition is not bound to a particular transmission modality, but rather can be a more general consequence of acquiring a formal linguistic system.

3.
Previous studies indicate that hearing readers sometimes convert printed text into a phonological form during silent reading. The experiments reported here investigated whether second-generation congenitally deaf readers use any analogous recoding strategy. Fourteen congenitally and profoundly deaf adults who were native signers of American Sign Language (ASL) served as subjects. Fourteen hearing people of comparable reading levels were control subjects. These subjects participated in four experiments that tested for the possibilities of (a) recoding into articulation, (b) recoding into fingerspelling, (c) recoding into ASL, or (d) no recoding at all. The experiments employed paradigms analogous to those previously used to test for phonological recoding in hearing populations. Interviews with the deaf subjects provided supplementary information about their reading strategies. The results suggest that these deaf subjects as a group do not recode into articulation or fingerspelling, but do recode into sign.

4.
A sign decision task, in which deaf signers made a decision about the number of hands required to form a particular sign of American Sign Language (ASL), revealed significant facilitation by repetition among signs that share a base morpheme. A lexical decision task on English words revealed facilitation by repetition among words that share a base morpheme in both English and ASL, but not among those that share a base morpheme in ASL only. This outcome occurred for both deaf and hearing subjects. The results are interpreted as evidence that the morphological principles of lexical organization observed in ASL do not extend to the organization of English for skilled deaf readers.

5.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage to the English words; likewise, the deaf subjects showed a right-hemisphere advantage to the statically presented signs. With the moving signs, the deaf showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

6.
Shand (Cognitive Psychology, 1982, 14, 1-12) hypothesized that strong reliance on a phonetic code by hearing individuals in short-term memory situations reflects their primary language experience. As support for this proposal, Shand reported an experiment in which deaf signers' recall of lists of printed English words was poorer when the American Sign Language translations of those words were structurally similar than when they were structurally unrelated. He interpreted this result as evidence that the deaf subjects were recoding the printed words into sign, reflecting their primary language experience. This primary language interpretation is challenged in the present article first by an experiment in which a group of hearing subjects showed a similar recall pattern on Shand's lists of words, and second by a review of the literature on short-term memory studies with deaf subjects. The literature survey reveals that whether or not deaf signers recode into sign depends on a variety of task and subject factors, and that, contrary to the primary language hypothesis, deaf signers may recode into a phonetic code in short-term recall.

7.
In order to help illuminate general ways in which language users process inflected items, two groups of native signers of American Sign Language (ASL) were asked to recall lists of inflected and uninflected signs. Despite the simultaneous production of base and inflection in ASL, subjects transposed inflections across base forms, recalling the base forms in the correct serial positions, or transposed base forms, recalling the inflections in the correct serial positions. These rearrangements of morphological components within lists occurred significantly more often than did rearrangements of whole forms (base plus inflection). These and other patterns of errors converged to suggest that ASL signers remembered inflected signs componentially in terms of a base and an inflection, much as the available evidence suggests is true for users of spoken language. Componential processing of regularly inflected forms would thus seem to be independent of particular transmission systems and of particular mechanisms for combining lexical and inflectional material.

8.
Lane, Boyes-Braem, and Bellugi (1976) suggested that American Sign Language (ASL) was perceived according to distinctive features, as is the case with speech. They advanced a binary model of distinctive features for one component of ASL signs, handshape. To test the validity of this model, three native users of ASL and three English speakers who knew no ASL participated in two experiments. In the first, subjects identified ASL handshapes obscured by visual noise, and confusion frequencies yielded similarity scores for all possible pairs of handshapes. Pairs that shared more features according to the model produced higher similarity scores for both groups of subjects. In the second experiment, subjects discriminated a subset of all possible handshape pairs in a speeded “same-different” task; discrimination accuracy and reaction times yielded a d′ and a ds value, respectively, for each pair. Pairs that shared more features according to a slightly revised version of the model produced lower discrimination indices for both groups. While the binary model was supported, a model in which handshape features varied continuously in two dimensions was more consistent with all sets of data. Both models describe hearing and deaf performance equally well, suggesting that linguistic experience does little to alter perception of the visual features germane to handshape identification and discrimination.

9.
Three experiments examined short-term encoding processes of deaf signers for different aspects of signs from American Sign Language. Experiment 1 compared short-term memory for lists of formationally similar signs with memory for matched lists of random signs. Just as acoustic similarity of words interferes with short-term memory for word sequences, formational similarity of signs had a marked debilitating effect on the ordered recall of sequences of signs. Experiment 2 evaluated the effects of the semantic similarity of the signs on short-term memory: Semantic similarity had no significant effect on short-term ordered recall of sequences of signs. Experiment 3 studied the role that the iconic (representational) value of signs played in short-term memory. Iconicity also had no reliable effect on short-term recall. These results provide support for the position that deaf signers code signs from American Sign Language at one level in terms of linguistically significant formational parameters. The semantic and iconic information of signs, however, seems to have little effect on short-term memory.

10.
In a short-term memory experiment, signs of American Sign Language in list lengths of three to seven items were presented to deaf college students whose native language is American Sign Language. A comparable short-term memory experiment for words (words representing the English translation-equivalents of the signs) was presented to hearing college students. Recall was written, immediate, and ordered. Overall, short-term memory mechanisms in the deaf seem to parallel those found in hearing subjects, even with the modality change. A significant number of multiple intrusion errors made by deaf subjects to signs were based on formational properties of the signs themselves, a result paralleling the phonologically based errors in experiments with hearing subjects. Our results are consistent with a theory that the signs of American Sign Language are actually coded by the deaf in terms of simultaneous formational parameters such as Hand Configuration, Place of Articulation, and Movement. Evidence is given that signs are treated by the deaf as consisting of independent parameters, specific to American Sign Language, which are essentially arbitrary in terms of meaning.

11.
Groups of deaf subjects, exposed to tachistoscopic bilateral presentation of English words and American Sign Language (ASL) signs, showed weaker right visual half-field (VHF) superiority for words than hearing comparison groups with both a free-recall and matching response. Deaf subjects showed better, though nonsignificant, recognition of left VHF signs with bilateral presentation of signs but shifted to superior right VHF response to signs when word-sign combinations were presented. Cognitive strategies and hemispheric specialization for ASL are discussed as possible factors affecting half-field asymmetry.

12.
Geraci C, Gozzi M, Papagno C, Cecchetto C. Cognition, 2008, 106(2): 780-804
It is known that in American Sign Language (ASL) span is shorter than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. Thus, we measured span of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the possible variables that might be responsible for the discrepancy: yet, a difference in span between deaf signers and hearing speakers was found. However, the advantage of hearing subjects was removed in a visuo-spatial STM task. We attribute the source of the lower span to the internal structure of signs: indeed, unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully-fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language is dependent on STM, being however flexible enough to develop even in a relatively hostile environment.

13.
American Sign Language (ASL) offers a valuable opportunity for the study of cerebral asymmetries, since it incorporates both language structure and complex spatial relations: processing the former has generally been considered a left-hemisphere function, the latter, a right-hemisphere one. To study such asymmetries, congenitally deaf, native ASL users and normally-hearing English speakers unfamiliar with ASL were asked to identify four kinds of stimuli: signs from ASL, handshapes never used in ASL, Arabic digits, and random geometric forms. Stimuli were presented tachistoscopically to a visual hemifield and subjects manually responded as rapidly as possible to specified targets. Both deaf and hearing subjects showed left-visual-field (hence, presumably right-hemisphere) advantages to the signs and to the non-ASL hands. The hearing subjects, further, showed a left-hemisphere advantage to the Arabic numbers, while the deaf subjects showed no reliable visual-field differences to this material. We infer that the spatial processing required of the signs predominated over their language processing in determining the cerebral asymmetry of the deaf for these stimuli.

14.
Two experiments are reported which investigate the organization and recognition of morphologically complex forms in American Sign Language (ASL) using a repetition priming technique. Three major questions were addressed: (1) Is morphological priming a modality-independent process? (2) Do the different properties of agreement and aspect morphology in ASL affect priming strength? (3) Does early language experience influence the pattern of morphological priming? Prime-target pairs (separated by 26–32 items) were presented to deaf subjects for lexical decision. Primes were inflected for either agreement (dual, reciprocal, multiple) or aspect (habitual, continual); targets were always the base form of the verb. Results of Experiment 1 indicated that subjects exposed to ASL in late childhood were not as sensitive to morphological complexity as native signers, but this result was not replicated in Experiment 2. Both experiments showed stronger facilitation with aspect morphology compared to agreement morphology. Repetition priming was not observed for nonsigns. The scope and structure of the morphological rules for ASL aspect and agreement are argued to explain the different patterns of morphological priming.

15.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

16.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing that is similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas, in contrast, the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

17.
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by its functional role in language processing, rather than by its surface, spatial characteristics.

18.
Capacity limits in linguistic short-term memory (STM) are typically measured with forward span tasks in which participants are asked to recall lists of words in the order presented. Using such tasks, native signers of American Sign Language (ASL) exhibit smaller spans than native speakers (Boutla, M., Supalla, T., Newport, E. L., & Bavelier, D. (2004). Short-term memory span: Insights from sign language. Nature Neuroscience, 7(9), 997-1002). Here, we test the hypothesis that this population difference reflects differences in the way speakers and signers maintain temporal order information in short-term memory. We show that native signers differ from speakers on measures of short-term memory that require maintenance of temporal order of the tested materials, but not on those in which temporal order is not required. In addition, we show that, in a recall task with free order, bilingual subjects are more likely to recall in temporal order when using English than ASL. We conclude that speakers and signers do share common short-term memory processes. However, whereas short-term memory for spoken English is predominantly organized in terms of temporal order, we argue that this dimension does not play as great a role in signers' short-term memory. Other factors that may affect STM processes in signers are discussed.

19.
The aim of this study was to investigate whether language-specific properties influence mental number processing. German Sign Language (DGS) numbers differ from those in spoken German not only in terms of modality but also in their basic language structure. A group of 20 congenitally deaf German signers participated in a number parity (odd/even) judgment task with DGS and printed German number words. The results indicated that two-handed DGS number signs are processed in a decomposed way. This language-specific effect also generalized to another linguistic number notation, German number words, but not to Arabic digit notation. These differences are discussed with respect to two possible routes to number parity.

20.
To examine the claim that phonetic coding plays a special role in temporal order recall, deaf and hearing college students were tested on their recall of temporal and spatial order information at two delay intervals. The deaf subjects were all native signers of American Sign Language. The results indicated that both the deaf and hearing subjects used phonetic coding in short-term temporal recall, and visual coding in spatial recall. There was no evidence of manual or visual coding among either the hearing or the deaf subjects in the temporal order recall task. The use of phonetic coding for temporal recall is consistent with the hypothesis that recall of temporal order information is facilitated by a phonetic code.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号