Similar references
Found 20 similar references (search time: 15 ms)
1.
This study investigated serial recall by congenitally, profoundly deaf signers for visually specified linguistic information presented in their primary language, American Sign Language (ASL), and in printed or fingerspelled English. There were three main findings. First, differences in the serial-position curves across these conditions distinguished the changing-state stimuli from the static stimuli. These differences were a recency advantage and a primacy disadvantage for the ASL signs and fingerspelled English words, relative to the printed English words. Second, the deaf subjects, who were college students and graduates, used a sign-based code to recall ASL signs, but not to recall English words; this result suggests that well-educated deaf signers do not translate into their primary language when the information to be recalled is in English. Finally, mean recall of the deaf subjects for ordered lists of ASL signs and fingerspelled and printed English words was significantly less than that of hearing control subjects for the printed words; this difference may be explained by the particular efficacy of a speech-based code used by hearing individuals for retention of ordered linguistic information and by the relatively limited speech experience of congenitally, profoundly deaf individuals.

2.
In order to help illuminate general ways in which language users process inflected items, two groups of native signers of American Sign Language (ASL) were asked to recall lists of inflected and uninflected signs. Despite the simultaneous production of base and inflection in ASL, subjects transposed inflections across base forms, recalling the base forms in the correct serial positions, or transposed base forms, recalling the inflections in the correct serial positions. These rearrangements of morphological components within lists occurred significantly more often than did rearrangements of whole forms (base plus inflection). These and other patterns of errors converged to suggest that ASL signers remembered inflected signs componentially in terms of a base and an inflection, much as the available evidence suggests is true for users of spoken language. Componential processing of regularly inflected forms would thus seem to be independent of particular transmission systems and of particular mechanisms for combining lexical and inflectional material.

3.
Two experiments are reported which investigate the organization and recognition of morphologically complex forms in American Sign Language (ASL) using a repetition priming technique. Three major questions were addressed: (1) Is morphological priming a modality-independent process? (2) Do the different properties of agreement and aspect morphology in ASL affect priming strength? (3) Does early language experience influence the pattern of morphological priming? Prime-target pairs (separated by 26–32 items) were presented to deaf subjects for lexical decision. Primes were inflected for either agreement (dual, reciprocal, multiple) or aspect (habitual, continual); targets were always the base form of the verb. Results of Experiment 1 indicated that subjects exposed to ASL in late childhood were not as sensitive to morphological complexity as native signers, but this result was not replicated in Experiment 2. Both experiments showed stronger facilitation with aspect morphology compared to agreement morphology. Repetition priming was not observed for nonsigns. The scope and structure of the morphological rules for ASL aspect and agreement are argued to explain the different patterns of morphological priming.

4.
In order to reveal the psychological representation of movement from American Sign Language (ASL), deaf native signers and hearing subjects unfamiliar with sign were asked to make triadic comparisons of movements that had been isolated from lexical and from grammatically inflected signs. An analysis of the similarity judgments revealed a small set of physically specifiable dimensions that accounted for most of the variance. The dimensions underlying the perception of lexical movement were in general different from those underlying inflectional movement, for both groups of subjects. Most strikingly, deaf and hearing subjects significantly differed in their patterns of dimensional salience for movements, both at the lexical and at the inflectional levels. Linguistically relevant dimensions were of increased salience to native signers. The difference in perception of linguistic movement by native signers and by naive observers demonstrates that modification of natural perceptual categories after language acquisition is not bound to a particular transmission modality, but rather can be a more general consequence of acquiring a formal linguistic system.

5.
HIPS (Human Information Processing Laboratory’s Image Processing System) is a software system for image processing that runs under the UNIX operating system. HIPS is modular and flexible: it provides automatic documentation of its actions, and is relatively independent of special equipment. It has proved its usefulness in the study of the perception of American Sign Language (ASL). Here, we demonstrate some of its applications in the study of vision, and as a tool in general signal processing. Ten examples of HIPS-generated stimuli and, in some cases, analyses are provided, including the spatial filtering analysis of two types of visual illusions; the study of frequency channels with sine-wave gratings and band-limited noise; 3-dimensional perceptual reconstruction from 2-dimensional images in the kinetic depth effect; the perception of depth in random dot stereograms and cinematograms; and the perceptual segregation of objects induced by differential dot motion. Finally, examples of noise-masked, cartoon-coded, and hierarchically encoded ASL images are provided.

6.
Despite the constantly varying stream of sensory information that surrounds us, we humans can discern the small building blocks of words that constitute language (phonetic forms) and perceive them categorically (categorical perception, CP). Decades of controversy have prevailed regarding what is at the heart of CP, with many arguing that it is due to domain-general perceptual processing and others that it is determined by the existence of domain-specific linguistic processing. What is most key: perceptual or linguistic patterns? Here, we study whether CP occurs with soundless handshapes that are nonetheless phonetic in American Sign Language (ASL), in signers and nonsigners. Using innovative methods and analyses of identification and, crucially, discrimination tasks, we found that both groups separated the soundless handshapes into two classes perceptually but that only the ASL signers exhibited linguistic CP. These findings suggest that CP of linguistic stimuli is based on linguistic categorization, rather than on purely perceptual categorization.

7.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.

8.
We tested hearing 6- and 10-month-olds' ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naïve infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer's facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children's production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.

9.
American Sign Language (ASL) offers a valuable opportunity for the study of cerebral asymmetries, since it incorporates both language structure and complex spatial relations: processing the former has generally been considered a left-hemisphere function, the latter, a right-hemisphere one. To study such asymmetries, congenitally deaf, native ASL users and normally-hearing English speakers unfamiliar with ASL were asked to identify four kinds of stimuli: signs from ASL, handshapes never used in ASL, Arabic digits, and random geometric forms. Stimuli were presented tachistoscopically to a visual hemifield and subjects manually responded as rapidly as possible to specified targets. Both deaf and hearing subjects showed left-visual-field (hence, presumably right-hemisphere) advantages to the signs and to the non-ASL hands. The hearing subjects, further, showed a left-hemisphere advantage to the Arabic numbers, while the deaf subjects showed no reliable visual-field differences to this material. We infer that the spatial processing required of the signs predominated over their language processing in determining the cerebral asymmetry of the deaf for these stimuli.

10.
How well can a sequence of frames be represented by a subset of the frames? Video sequences of American Sign Language (ASL) were investigated in two modes: dynamic (ordinary video) and static (frames printed side by side on the display). An activity index was used to choose critical frames at event boundaries, times when the difference between successive frames is at a local minimum. Sign intelligibility was measured for 32 experienced ASL signers who viewed individual signs. For full gray-scale dynamic signs, activity-index subsampling yielded sequences that were significantly more intelligible than when every mth frame was chosen. This result was even more pronounced for static images. For binary images, the relative advantage of activity subsampling was smaller. We conclude that event boundaries can be defined computationally and that subsampling from event boundaries is better than choosing at regular intervals.
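The activity-index scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function names, the mean-absolute-difference activity measure, and the exact local-minimum rule are assumptions; the paper specifies only that critical frames fall where inter-frame difference reaches a local minimum.

```python
import numpy as np

def activity_index(frames):
    """Mean absolute difference between successive frames.
    frames: array of shape (T, H, W), grayscale. Returns length T-1."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def event_boundary_frames(frames, n_keep):
    """Pick frame indices where the activity index is at a local minimum,
    i.e. where motion momentarily settles (an event boundary)."""
    act = activity_index(frames)
    # local minima: strictly lower than both neighbours
    minima = [i for i in range(1, len(act) - 1)
              if act[i] < act[i - 1] and act[i] < act[i + 1]]
    # keep the n_keep quietest boundaries, returned in temporal order
    minima.sort(key=lambda i: act[i])
    return sorted(minima[:n_keep])

def every_mth_frames(n_frames, n_keep):
    """Baseline from the study: regular subsampling at fixed intervals."""
    return list(np.linspace(0, n_frames - 1, n_keep, dtype=int))
```

With synthetic constant-valued frames the two strategies diverge exactly as the abstract suggests: the activity-based picker lands on the frame where change bottoms out, while the baseline spaces its picks evenly regardless of content.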

11.
Positron emission tomography was used to investigate whether the motor-iconic basis of certain forms in American Sign Language (ASL) partially alters the neural systems engaged during lexical retrieval. Most ASL nouns denoting tools and ASL verbs referring to tool-based actions are produced with a handshape representing the human hand holding a tool and with an iconic movement depicting canonical tool use, whereas the visual iconicity of animal signs is more idiosyncratic and inconsistent across signs. We investigated whether the motor-iconic relation between a sign and its referent alters the neural substrate for lexical retrieval in ASL. Ten deaf native ASL signers viewed photographs of tools/utensils or of actions performed with or without an implement and were asked to overtly produce the ASL sign for each object or action. The control task required subjects to judge the orientation of unknown faces. Compared to the control task, naming tools engaged left inferior and middle frontal gyri, bilateral parietal lobe, and posterior inferotemporal cortex. Naming actions performed with or without a tool engaged left inferior frontal gyrus, bilateral parietal lobe, and posterior middle temporal gyrus at the temporo-occipital junction (area MT). When motor-iconic verbs were compared with non-iconic verbs, no differences in neural activation were found. Overall, the results indicate that even when the form of a sign is indistinguishable from a pantomimic gesture, the neural systems underlying its production mirror those engaged when hearing speakers name tools or tool-based actions with speech.

12.
Perception of dynamic events of American Sign Language (ASL) was studied by isolating information about motion in the language from information about form. Four experiments utilized Johansson's technique for presenting biological motion as moving points of light. In the first, deaf signers were highly accurate in matching movements of lexical signs presented in point-light displays to those normally presented. Both discrimination accuracy and the pattern of errors in this matching task were similar to those obtained in a control condition in which the same signs were always presented normally. The second experiment showed that these results held for discrimination of morphological operations presented in point-light displays as well. In the third experiment, signers were able to accurately identify signs of a constant handshape and morphological operations acting on signs presented in point-light displays. Finally, in Experiment 4, we evaluated what aspects of the motion patterns carried most of the information for sign identifiability. We presented signs in point-light displays with certain lights removed and found that the movement of the fingertips, but not of any other pair of points, is necessary for sign identification and that, in general, the more distal the joint, the more information its movement carries.

13.
Groups of deaf subjects, exposed to tachistoscopic bilateral presentation of English words and American Sign Language (ASL) signs, showed weaker right visual half-field (VHF) superiority for words than hearing comparison groups with both a free-recall and matching response. Deaf subjects showed better, though nonsignificant, recognition of left VHF signs with bilateral presentation of signs but shifted to superior right VHF response to signs when word-sign combinations were presented. Cognitive strategies and hemispheric specialization for ASL are discussed as possible factors affecting half-field asymmetry.

14.
A sign decision task, in which deaf signers made a decision about the number of hands required to form a particular sign of American Sign Language (ASL), revealed significant facilitation by repetition among signs that share a base morpheme. A lexical decision task on English words revealed facilitation by repetition among words that share a base morpheme in both English and ASL, but not among those that share a base morpheme in ASL only. This outcome occurred for both deaf and hearing subjects. The results are interpreted as evidence that the morphological principles of lexical organization observed in ASL do not extend to the organization of English for skilled deaf readers.

15.
Following groundbreaking work by linguists and cognitive scientists over the past thirty years, it is now generally accepted that sign languages of the deaf, such as ASL (American Sign Language) or BSL (British Sign Language), are structured and processed in a similar manner to spoken languages. The one striking difference is that they operate in a wholly non-auditory, visuospatial medium. How does the medium impact on language processing itself?

16.
Two experiments were conducted on short-term recall of printed English words by deaf signers of American Sign Language (ASL). Compared with hearing subjects, deaf subjects recalled significantly fewer words when ordered recall of words was required, but not when free recall was required. Deaf subjects tended to use a speech-based code in probed recall for order, and the greater the reliance on a speech-based code, the more accurate the recall. These results are consistent with the hypothesis that a speech-based code facilitates the retention of order information.

17.
This experiment tested the hypothesis that syntactic constituents in American Sign Language (ASL) serve as perceptual units. We adapted the strategy first employed by Fodor and Bever in 1965 in a study of the psychological reality of linguistic speech segments. Four deaf subjects were shown ASL sign sequences constructed to contain a single constituent break. The dependent measure was the subjective location of a light flash occurring during the sign sequence. The prediction that the flashes would be attracted to the constituent boundary was supported for two of the subjects, while the other two showed random placement of the flash location on either side of the constituent boundary. The two subjects not performing in the predicted direction were more proficient in English (written) than the two giving the effect. It was suggested that this relatively greater proficiency may have interfered in some way with the ASL syntax to produce the results obtained.

18.
In two experiments, sign-naive subjects acquired the meanings for manual signs of American Sign Language by learning to respond with the English word equivalents when signs were presented. The results showed that when the signs on a to-be-learned list were related to each other in handshape configuration (cheremically similar), they were more difficult to acquire than when semantically similar. Whether the similar signs were grouped together during presentation or were separated by other dissimilar signs had no effect on the number of signs correctly acquired. These results were the same for the identical signs learned in the cheremically or semantically similar contexts as for the lists as a whole. The results have implications for teaching sign language to hearing adults.

19.
The relationship between knowledge of American Sign Language (ASL) and the ability to encode facial expressions of emotion was explored. Participants were 55 college students, half of whom were intermediate-level students of ASL and half of whom had no experience with a signed language. In front of a video camera, participants posed the affective facial expressions of happiness, sadness, fear, surprise, anger, and disgust. These facial expressions were randomized onto stimulus tapes that were then shown to 60 untrained judges who tried to identify the expressed emotions. Results indicated that hearing subjects knowledgeable in ASL were generally more adept than were hearing nonsigners at conveying emotions through facial expression. Results have implications for better understanding the nature of nonverbal communication in hearing and deaf individuals.

20.
Previous studies indicate that hearing readers sometimes convert printed text into a phonological form during silent reading. The experiments reported here investigated whether second-generation congenitally deaf readers use any analogous recoding strategy. Fourteen congenitally and profoundly deaf adults who were native signers of American Sign Language (ASL) served as subjects. Fourteen hearing people of comparable reading levels were control subjects. These subjects participated in four experiments that tested for the possibilities of (a) recoding into articulation, (b) recoding into fingerspelling, (c) recoding into ASL, or (d) no recoding at all. The experiments employed paradigms analogous to those previously used to test for phonological recoding in hearing populations. Interviews with the deaf subjects provided supplementary information about their reading strategies. The results suggest that these deaf subjects as a group do not recode into articulation or fingerspelling, but do recode into sign.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号