Similar Documents
20 similar documents found (search time: 31 ms)
1.
Perception of American Sign Language (ASL) handshape and place of articulation parameters was investigated in three groups of signers: deaf native signers, deaf non-native signers who acquired ASL between the ages of 10 and 18, and hearing non-native signers who acquired ASL as a second language between the ages of 10 and 26. Participants were asked to identify and discriminate dynamic synthetic signs on forced choice identification and similarity judgement tasks. No differences were found in identification performance, but there were effects of language experience on discrimination of the handshape stimuli. Participants were significantly less likely to discriminate handshape stimuli drawn from the region of the category prototype than stimuli that were peripheral to the category or that straddled a category boundary. This pattern was significant for both groups of deaf signers, but was more pronounced for the native signers. The hearing L2 signers exhibited a similar pattern of discrimination, but results did not reach significance. An effect of category structure on the discrimination of place of articulation stimuli was also found, but it did not interact with language background. We conclude that early experience with a signed language magnifies the influence of category prototypes on the perceptual processing of handshape primes, leading to differences in the distribution of attentional resources between native and non-native signers during language comprehension.

2.
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a “frame” (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a “last item” belonging to one of four categories: a high-cloze-probability sign (a “semantically reasonable” completion to the sentence; e.g. BED), a low-cloze-probability sign (a real sign that is nonetheless a “semantically odd” completion to the sentence; e.g. LEMON), a pseudo-sign (a phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity.

3.
The study of sign languages provides a promising vehicle for investigating language production because the movements of the articulators in sign are directly observable. Movement of the hands and arms is an essential element not only in the lexical structure of American Sign Language (ASL), but most strikingly, in the grammatical structure of ASL: It is in patterned changes of the movement of signs that many grammatical attributes are represented. The "phonological" (formational) structure of movement in ASL surely reflects in part constraints on the channel through which it is conveyed. We evaluate the relation between one neuromotor constraint on movement (regulation of final position rather than of movement amplitude) and the phonological structure of movement in ASL. We combine three-dimensional measurements of ASL movements with linguistic analyses of the distinctiveness and predictability of the final position (location) versus the amplitude (length). We show that final position, not movement amplitude, is distinctive in the language and that a phonological rule in ASL predicts variation in movement amplitude, a development that may reflect a neuromuscular constraint on the articulatory mechanism through which the language is conveyed.

4.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in this auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, increased inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.
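Neighborhood density, as used in this abstract, means the number of lexical items that share a phonological parameter value with a target sign. A minimal sketch of how handshape and location density might be counted over a toy lexicon (the entries and the three-parameter representation are invented for illustration; they are not the study's materials):

```python
# Toy lexicon: each sign is (gloss, handshape, location, movement).
# All entries are invented for illustration only.
LEXICON = [
    ("MOTHER", "5", "chin", "tap"),
    ("FATHER", "5", "forehead", "tap"),
    ("FINE", "5", "chest", "tap"),
    ("APPLE", "X", "cheek", "twist"),
    ("ONION", "X", "eye", "twist"),
]

def neighborhood_density(lexicon, gloss, parameter):
    """Count the other signs sharing the target's value on one parameter."""
    index = {"handshape": 1, "location": 2, "movement": 3}[parameter]
    target = next(entry for entry in lexicon if entry[0] == gloss)
    return sum(1 for entry in lexicon
               if entry[0] != gloss and entry[index] == target[index])

# MOTHER shares the 5-handshape with FATHER and FINE -> density 2
print(neighborhood_density(LEXICON, "MOTHER", "handshape"))  # → 2
```

A sign can thus sit in a dense neighborhood on one parameter (handshape) and a sparse one on another (location), which is the distinction the reversed density effects above turn on.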

5.
Lane, Boyes-Braem, and Bellugi (1976) suggested that American Sign Language (ASL) was perceived according to distinctive features, as is the case with speech. They advanced a binary model of distinctive features for one component of ASL signs, handshape. To test the validity of this model, three native users of ASL and three English speakers who knew no ASL participated in two experiments. In the first, subjects identified ASL handshapes obscured by visual noise, and confusion frequencies yielded similarity scores for all possible pairs of handshapes. Pairs that shared more features according to the model produced higher similarity scores for both groups of subjects. In the second experiment, subjects discriminated a subset of all possible handshape pairs in a speeded “same-different” task; discrimination accuracy and reaction times yielded a d’ and a ds value, respectively, for each pair. Pairs that shared more features according to a slightly revised version of the model produced lower discrimination indices for both groups. While the binary model was supported, a model in which handshape features varied continuously in two dimensions was more consistent with all sets of data. Both models describe hearing and deaf performance equally well, suggesting that linguistic experience does little to alter perception of the visual features germane to handshape identification and discrimination.
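The d' index mentioned above comes from signal detection theory. A sketch of the basic equal-variance computation from hit and false-alarm rates; note that same-different designs like the one in this study have their own dedicated d' models, so this simple yes-no form is only an illustrative approximation:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate), equal-variance model.
    Rates of exactly 0 or 1 must be corrected before calling this."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# A handshape pair answered "different" on 80% of different trials (hits)
# and "different" on 20% of same trials (false alarms):
print(round(d_prime(0.80, 0.20), 2))  # → 1.68
```

Chance performance (hit rate equal to false-alarm rate) gives d' = 0; more discriminable pairs push d' higher, which is why feature-sharing pairs in the study showed lower indices.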

6.
Despite the constantly varying stream of sensory information that surrounds us, we humans can discern the small building blocks of words that constitute language (phonetic forms) and perceive them categorically (categorical perception, CP). Decades of controversy have prevailed regarding what is at the heart of CP, with many arguing that it is due to domain-general perceptual processing and others that it is determined by the existence of domain-specific linguistic processing. What is most key: perceptual or linguistic patterns? Here, we study whether CP occurs with soundless handshapes that are nonetheless phonetic in American Sign Language (ASL), in signers and nonsigners. Using innovative methods and analyses of identification and, crucially, discrimination tasks, we found that both groups separated the soundless handshapes into two classes perceptually but that only the ASL signers exhibited linguistic CP. These findings suggest that CP of linguistic stimuli is based on linguistic categorization, rather than on purely perceptual categorization.

7.
ERPs were recorded from deaf and hearing native signers and from hearing subjects who acquired ASL late or not at all as they viewed ASL signs that formed sentences. The results were compared across these groups and with those from hearing subjects reading English sentences. The results suggest that there are constraints on the organization of the neural systems that mediate formal languages and that these are independent of the modality through which language is acquired. These include different specializations of anterior and posterior cortical regions in aspects of grammatical and semantic processing and a bias for the left hemisphere to mediate aspects of mnemonic functions in language. Additionally, the results suggest that the nature and timing of sensory and language experience significantly impact the development of the language systems of the brain. Effects of the early acquisition of ASL include an increased role for the right hemisphere and for parietal cortex and this occurs in both hearing and deaf native signers. An increased role of posterior temporal and occipital areas occurs in deaf native signers only and thus may be attributable to auditory deprivation.

8.
We report the results of an experiment investigating the ramifications of using space to express coreference in American Sign Language (ASL). Nominals in ASL can be associated with locations in signing space, and pronouns are directed toward those locations to convey coreference. A probe recognition technique was used to investigate the case of "locus doubling," in which a single referent is associated with two distinct spatial locations. The experiment explored whether an ASL pronoun activates both its antecedent referent and the location associated with that referent. An introductory discourse associated a referent (e.g., MOTHER) with two distinct locations (e.g., STORE-left, KITCHEN-right), and a continuation sentence followed that either contained a pronoun referring to the referent in one location or contained no anaphora (the control sentence). Twenty-four deaf participants made lexical decisions to probe signs presented during the continuation sentences. The probe signs were either the referent of the pronoun, the referent-location determined by the pronoun, or the most recently mentioned location (not referenced by the pronoun). The results indicated that response times to referent nouns were faster in the pronoun than in the no-pronoun control condition and that response times to the location signs did not differ across conditions. Thus, the spatial nature of coreference in ASL does not alter the processing mechanism underlying the on-line interpretation of pronouns. Pronouns activate only referent nouns, not spatial location nouns associated with the referent.

9.
Can face actions that carry significance within language be perceived categorically? We used continua produced by computational morphing of face-action images to explore this question in a controlled fashion. In Experiment 1 we showed that question-type, a syntactic distinction in British Sign Language (BSL), can be perceived categorically, but only when it is also identified as a question marker. A few hearing non-signers were sensitive to this distinction; among those who used sign, late sign learners were no less sensitive than early sign users. A very similar facial-display continuum between 'surprise' and 'puzzlement' was perceived categorically by deaf and hearing participants, irrespective of their sign experience (Experiment 2). The categorical processing of facial displays can be demonstrated for sign, but may be grounded in universally perceived distinctions between communicative face actions. Moreover, the categorical perception of facial actions is not confined to the six universal facial expressions.

10.
A program is described that allows experimenters to generate American Sign Language forms by computer. The computer synthesis of such signs could allow major advances in the experimental investigation of the perception of sign language, much as computer-generated speech has done for the study of speech perception.

11.
Two experiments are reported which investigate the organization and recognition of morphologically complex forms in American Sign Language (ASL) using a repetition priming technique. Three major questions were addressed: (1) Is morphological priming a modality-independent process? (2) Do the different properties of agreement and aspect morphology in ASL affect priming strength? (3) Does early language experience influence the pattern of morphological priming? Prime-target pairs (separated by 26–32 items) were presented to deaf subjects for lexical decision. Primes were inflected for either agreement (dual, reciprocal, multiple) or aspect (habitual, continual); targets were always the base form of the verb. Results of Experiment 1 indicated that subjects exposed to ASL in late childhood were not as sensitive to morphological complexity as native signers, but this result was not replicated in Experiment 2. Both experiments showed stronger facilitation with aspect morphology compared to agreement morphology. Repetition priming was not observed for nonsigns. The scope and structure of the morphological rules for ASL aspect and agreement are argued to explain the different patterns of morphological priming.

12.
A sign decision task, in which deaf signers made a decision about the number of hands required to form a particular sign of American Sign Language (ASL), revealed significant facilitation by repetition among signs that share a base morpheme. A lexical decision task on English words revealed facilitation by repetition among words that share a base morpheme in both English and ASL, but not among those that share a base morpheme in ASL only. This outcome occurred for both deaf and hearing subjects. The results are interpreted as evidence that the morphological principles of lexical organization observed in ASL do not extend to the organization of English for skilled deaf readers.

13.
Previous studies of cerebral asymmetry for the perception of American Sign Language (ASL) have used only static representations of signs; in this study we present moving signs. Congenitally deaf, native ASL signers identified moving signs, static representations of signs, and English words. The stimuli were presented rapidly by motion picture to each visual hemifield. Normally hearing English speakers also identified the English words. Consistent with previous findings, both the deaf and the hearing subjects showed a left-hemisphere advantage for the English words; likewise, the deaf subjects showed a right-hemisphere advantage for the statically presented signs. With the moving signs, the deaf subjects showed no lateral asymmetry. The shift from right dominance to a more balanced hemispheric involvement with the change from static to moving signs is consistent with Kimura's position that the left hemisphere predominates in the analysis of skilled motor sequencing (Kimura, 1976). The results also indicate that ASL may be more bilaterally represented than is English and that the spatial component of language stimuli can greatly influence lateral asymmetries.

14.
In order to help illuminate general ways in which language users process inflected items, two groups of native signers of American Sign Language (ASL) were asked to recall lists of inflected and uninflected signs. Despite the simultaneous production of base and inflection in ASL, subjects transposed inflections across base forms, recalling the base forms in the correct serial positions, or transposed base forms, recalling the inflections in the correct serial positions. These rearrangements of morphological components within lists occurred significantly more often than did rearrangements of whole forms (base plus inflection). These and other patterns of errors converged to suggest that ASL signers remembered inflected signs componentially in terms of a base and an inflection, much as the available evidence suggests is true for users of spoken language. Componential processing of regularly inflected forms would thus seem to be independent of particular transmission systems and of particular mechanisms for combining lexical and inflectional material.

15.
The incidence of homonymy in children's early sign language production was examined in nine young children of deaf parents. Analysis of parental reports of how their children formed their signs revealed that all of the children produced homonymous forms (i.e., a single manual form used to represent two or more different adult target signs). Altogether, the children were reported as producing 26 sets of homonymous forms to represent 59 adult target signs. This incidence of homonymy in the children's early signing did not differ significantly from the incidence of homonymy previously reported in a study of normally developing children acquiring spoken language. This finding is interpreted as indicating an important similarity in language acquisition processes across language modalities. Analysis of the sign formational characteristics of the children's homonymous forms revealed that the children's signs and the adult target signs typically shared a common location aspect. The movement and handshape aspects of the adult target signs were less frequently retained in the children's homonymous forms.

16.
17.
How well can a sequence of frames be represented by a subset of the frames? Video sequences of American Sign Language (ASL) were investigated in two modes: dynamic (ordinary video) and static (frames printed side by side on the display). An activity index was used to choose critical frames at event boundaries, times when the difference between successive frames is at a local minimum. Sign intelligibility was measured for 32 experienced ASL signers who viewed individual signs. For full gray-scale dynamic signs, activity-index subsampling yielded sequences that were significantly more intelligible than when every mth frame was chosen. This result was even more pronounced for static images. For binary images, the relative advantage of activity subsampling was smaller. We conclude that event boundaries can be defined computationally and that subsampling from event boundaries is better than choosing at regular intervals.
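The activity-index computation described above can be sketched as follows; the frame representation and the exact difference metric are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

def activity_index(frames):
    """Total absolute pixel change between each pair of successive frames.
    frames: array of shape (n_frames, height, width)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.reshape(len(frames) - 1, -1).sum(axis=1)

def event_boundaries(activity):
    """Indices where activity dips below both neighbours: candidate
    momentary holds between movements, used as critical frames."""
    return [i for i in range(1, len(activity) - 1)
            if activity[i] < activity[i - 1] and activity[i] < activity[i + 1]]

# Single-pixel toy video whose successive differences are 5, 1, 4, 1, 9, 1:
frames = np.array([0, 5, 6, 10, 11, 20, 21]).reshape(-1, 1, 1)
print(event_boundaries(activity_index(frames)))  # → [1, 3]
```

Subsampling at these local minima, rather than taking every mth frame, is the comparison the intelligibility results above are about.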

18.
The relationship between knowledge of American Sign Language (ASL) and the ability to encode facial expressions of emotion was explored. Participants were 55 college students, half of whom were intermediate-level students of ASL and half of whom had no experience with a signed language. In front of a video camera, participants posed the affective facial expressions of happiness, sadness, fear, surprise, anger, and disgust. These facial expressions were randomized onto stimulus tapes that were then shown to 60 untrained judges who tried to identify the expressed emotions. Results indicated that hearing subjects knowledgeable in ASL were generally more adept than were hearing nonsigners at conveying emotions through facial expression. Results have implications for better understanding the nature of nonverbal communication in hearing and deaf individuals.

19.
20.
The tracking of complex two-dimensional movement patterns was studied. Subjects were blindfolded, and their right hand moved around stencil patterns in the midsagittal plane, while the left hand concurrently reproduced the right-hand movement. The accuracy with which the left hand shadowed the criterion movements of the right hand was measured in shape and size. Right-hand movements were active or passive. Present tracking performance was contrasted with errors in recall reported by Bairstow and Laszlo (1978). Results showed that tracking performance was accurate. Active and passive criterion movements were tracked differently. Tracking was clearly superior to recall performance.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号