Similar Documents
1.
The study of sign languages provides a promising vehicle for investigating language production because the movements of the articulators in sign are directly observable. Movement of the hands and arms is an essential element not only in the lexical structure of American Sign Language (ASL) but, most strikingly, in its grammatical structure: many grammatical attributes are represented by patterned changes in the movement of signs. The "phonological" (formational) structure of movement in ASL surely reflects in part constraints on the channel through which it is conveyed. We evaluate the relation between one neuromotor constraint on movement (regulation of final position rather than of movement amplitude) and the phonological structure of movement in ASL. We combine three-dimensional measurements of ASL movements with linguistic analyses of the distinctiveness and predictability of final position (location) versus amplitude (length). We show that final position, not movement amplitude, is distinctive in the language, and that a phonological rule in ASL predicts variation in movement amplitude, a pattern that may reflect a neuromuscular constraint on the articulatory mechanism through which the language is conveyed.
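As a minimal sketch of the kind of analysis the abstract describes (comparing the variability of final position with the variability of movement amplitude across repeated productions of a sign), the Python fragment below uses invented 3D data; all numbers and variable names are hypothetical, not the study's measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    # Invented data: 50 repetitions of one sign, with variable start positions
    # but tightly clustered final positions (the pattern reported for ASL).
    starts = rng.normal([0.00, 0.00, 0.00], 0.03, size=(50, 3))
    ends = rng.normal([0.20, 0.10, 0.30], 0.01, size=(50, 3))

    # Straight-line amplitude of each repetition and spread of the endpoints.
    amplitudes = np.linalg.norm(ends - starts, axis=1)
    endpoint_spread = np.linalg.norm(ends - ends.mean(axis=0), axis=1).mean()

    print("mean endpoint spread (m):", round(float(endpoint_spread), 4))
    print("amplitude SD (m):", round(float(amplitudes.std()), 4))
    # A small endpoint spread alongside a larger amplitude SD is the signature
    # of final-position regulation rather than amplitude regulation.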

2.
Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause. Bimodal bilinguals produced more ASL-appropriate facial expressions than did nonsigners and synchronized their expressions with the onset of the corresponding English clauses. This result provides evidence for a dual-language architecture in which grammatical information can be integrated up to the level of phonological implementation. Overall, participants produced more raised brows than furrowed brows, possibly because furrowed brows can also convey negative affect. Bimodal bilinguals suppressed, but did not completely inhibit, ASL facial grammar when it conflicted with conventional facial gestures. We conclude that morphosyntactic elements from two languages can be articulated simultaneously and that complete inhibition of the nonselected language is difficult.

3.
In two studies, we find that native and non-native acquisition have different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but they first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), under both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning, independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus, independent of lexical and sentential meaning. Semantic lexical changes were positively correlated with processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape, leaving less attention available for retrieval and memory of lexical meaning.

4.
Does knowledge of language transfer across language modalities? For example, can speakers who have had no sign language experience spontaneously project grammatical principles of English onto American Sign Language (ASL) signs? To address this question, we explore a grammatical illusion. Using spoken language, we first show that a single word with doubling (e.g., trafraf) can elicit conflicting linguistic responses, depending on the level of linguistic analysis (phonology vs. morphology). We next show that speakers with no command of a sign language extend these same principles to novel ASL signs. Remarkably, the morphological analysis of ASL signs depends on the morphology of participants' spoken language. Speakers of Malayalam (a language with rich reduplicative morphology) prefer XX signs when doubling signals morphological plurality, whereas no such preference is seen in speakers of Mandarin (a language with no productive plural morphology). Our conclusions open up the possibility that some linguistic principles are amodal and abstract.

5.
We tested hearing 6- and 10-month-olds' ability to discriminate among three American Sign Language (ASL) parameters (location, handshape, and movement) as well as a grammatical marker (facial expression). ASL-naïve infants were habituated to a signer articulating a two-handed symmetrical sign in neutral space. During test, infants viewed novel two-handed signs that varied in only one parameter or in facial expression. Infants detected changes in the signer's facial expression and in the location of the sign but provided no evidence of detecting the changes in handshape or movement. These findings are consistent with children's production errors in ASL and reveal that infants can distinguish among some parameters of ASL more easily than others.

6.
To identify neural regions that automatically respond to linguistically structured but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, VOI analyses showed greater activation in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

7.
Bilinguals report more tip-of-the-tongue (TOT) failures than monolinguals. Three accounts of this disadvantage are that bilinguals experience between-language interference at (a) semantic and/or (b) phonological levels, or (c) that bilinguals use each language less frequently than monolinguals. Bilinguals who speak one language and sign another help to decide between these alternatives because their languages lack phonological overlap. Twenty-two American Sign Language (ASL)-English bilinguals, 22 English monolinguals, and 11 Spanish-English bilinguals named 52 pictures in English. Despite no phonological overlap between their languages, ASL-English bilinguals had more TOTs than monolinguals and as many TOTs as Spanish-English bilinguals. These data eliminate phonological blocking as the exclusive source of bilingual disadvantages. A small advantage of ASL-English over Spanish-English bilinguals in correct retrievals is consistent with semantic interference and a minor role for phonological blocking. However, this account faces substantial challenges. We argue that reduced frequency of use is the more comprehensive explanation of TOT rates in all bilinguals.

8.
American Sign Language (ASL) has evolved within a completely different biological medium, using the hands and face rather than the vocal tract, and perceived by eye rather than by ear. The research reviewed in this article addresses the consequences of this different modality for language processing, linguistic structure, and spatial cognition. Language modality appears to affect aspects of lexical recognition and the nature of the grammatical form used for reference. Select aspects of nonlinguistic spatial cognition (visual imagery and face discrimination) appear to be enhanced in deaf and hearing ASL signers. It is hypothesized that this enhancement is due to experience with a visual-spatial language and is tied to specific linguistic processing requirements (interpretation of grammatical facial expression, perspective transformations, and the use of topographic classifiers). In addition, adult deaf signers differ in the age at which they were first exposed to ASL during childhood. The effect of late acquisition of language on linguistic processing is investigated in several studies. The results show selective effects of late exposure to ASL on language processing, independent of grammatical knowledge. This research was supported in part by National Institutes of Health grant HD-13249 awarded to Ursula Bellugi and Karen Emmorey, as well as NIH grants DC-00146, DC-00201, and HD-26022. I would like to thank and acknowledge Ursula Bellugi for her collaboration during much of the research described in this article.

9.
Control of velocity and position in single joint movements
Previous research on single joint movements has led to models of control that propose that movement speed and distance are controlled through an initial pulsatile signal that can be modified in both amplitude and duration. However, the manner in which amplitude and duration are modulated during the control of movement remains controversial. We report two studies designed to differentiate the mechanisms used to control movement speed from those employed to control final position accuracy. In the first study, participants moved at a series of speeds to a single spatial target. In this task, acceleration duration (pulse-width) varied substantially across speeds and was negatively correlated with peak acceleration (pulse-height). In the second experiment, we removed the spatial target but required movements at three speeds similar to those used in the first study. In this task, acceleration amplitude varied extensively across the speed targets, while acceleration duration remained constant. Taken together, these findings demonstrate that pulse-width measures can be modulated independently of pulse-height measures and that a positive correlation between such measures is not obligatory, even when sampled across a range of movement speeds. In addition, our findings suggest that pulse-height modulation plays a primary role in controlling movement speed and specifying target distance, whereas pulse-width mechanisms are employed to correct errors in pulse-height control, as required to achieve spatial precision in final limb position.
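The pulse logic is easy to see in a toy kinematic sketch. Assuming a symmetric rectangular acceleration/deceleration pulse (an illustration, not the authors' model), peak speed equals pulse height times width, while distance equals height times width squared, so speed and final position can be reached through either parameter:

    def movement(pulse_height, pulse_width):
        """Peak velocity and distance for a symmetric accelerate/decelerate pulse."""
        peak_velocity = pulse_height * pulse_width      # v = a * t
        total_distance = peak_velocity * pulse_width    # area under the triangular velocity profile
        return peak_velocity, total_distance

    # Pulse-height modulation: fixed width, varied height -> speed and distance vary.
    for height in (2.0, 4.0, 6.0):
        print("height", height, "->", movement(height, 0.2))

    # Pulse-width modulation: at a fixed height, choose the width that lands on a
    # given target distance, as when spatial precision of the endpoint matters.
    target_distance, height = 0.8, 4.0
    width = (target_distance / height) ** 0.5
    print("width", round(width, 3), "->", movement(height, width))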

10.
Cognitive Psychology, 2008, 56(4): 259-305
Several phonological and prosodic properties of words have been shown to relate to differences between grammatical categories. Distributional information about grammatical categories is also a rich source in the child's language environment. In this paper we hypothesise that such cues operate in tandem in developing the child's knowledge about grammatical categories. We term this the Phonological-Distributional Coherence Hypothesis (PDCH). We tested the PDCH by analysing phonological and distributional information in distinguishing open- from closed-class words and nouns from verbs in four languages: English, Dutch, French, and Japanese. We found an interaction between phonological and distributional cues in all four languages, indicating that when distributional cues were less reliable, phonological cues were stronger. This provides converging evidence that language is structured such that language learning benefits from the integration of information about category from contextual and sound-based sources, and that the child's language environment is less impoverished than we might suspect.

11.
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent–child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.

12.
Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contributions of the three main processing stages of short-term memory (perception, encoding, and recall) to this effect. The present study factorially manipulates whether American Sign Language (ASL) or English is used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that, despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited to the specific task of perceiving and encoding a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language.

13.
Working memory mechanisms in the deaf population
Deaf individuals, whose auditory channel is impaired and who communicate in sign language, offer a unique window onto the structure and function of working memory. Research shows that deaf signers possess a sign-based rehearsal mechanism that is functionally parallel to the phonological loop of hearing individuals. With articulation training, deaf individuals can also adopt speech-based coding; that is, the phonological loop remains accessible to them. Deaf individuals have verbal working memory resources comparable to those of hearing people, but the use of these resources is constrained by the processing characteristics of the visual channel. A growing body of research supports the complementarity view, according to which the use of sign language enhances deaf individuals' nonverbal visuospatial processing abilities.

14.
We report a 27-year-old woman with chronic auditory agnosia following Landau-Kleffner Syndrome (LKS) diagnosed at age 4½. She grew up in the hearing/speaking community with some exposure to manually coded English and American Sign Language (ASL). Manually coded (signed) English is her preferred mode of communication. Comprehension and production of spoken language remain severely compromised. Disruptions in auditory processing can be observed in tests of pitch and duration, suggesting that her disorder is not specific to language. Linguistic analysis of signed, spoken, and written English indicates that her language system is intact but compromised because of impoverished input during the critical period for acquisition of spoken phonology. Specifically, although her sign language phonology is intact, spoken language phonology is markedly impaired. We argue that deprivation of auditory input during a period critical for the development of a phonological grammar and auditory-verbal short-term memory has limited her lexical and syntactic development in specific ways.

15.
In order to reveal the psychological representation of movement from American Sign Language (ASL), deaf native signers and hearing subjects unfamiliar with sign were asked to make triadic comparisons of movements that had been isolated from lexical and from grammatically inflected signs. An analysis of the similarity judgments revealed a small set of physically specifiable dimensions that accounted for most of the variance. The dimensions underlying the perception of lexical movement were in general different from those underlying inflectional movement, for both groups of subjects. Most strikingly, deaf and hearing subjects significantly differed in their patterns of dimensional salience for movements, both at the lexical and at the inflectional levels. Linguistically relevant dimensions were of increased salience to native signers. The difference in perception of linguistic movement by native signers and by naive observers demonstrates that modification of natural perceptual categories after language acquisition is not bound to a particular transmission modality, but rather can be a more general consequence of acquiring a formal linguistic system.

16.
Perception of dynamic events in American Sign Language (ASL) was studied by isolating information about motion in the language from information about form. Four experiments utilized Johansson's technique for presenting biological motion as moving points of light. In the first, deaf signers were highly accurate in matching movements of lexical signs presented in point-light displays to those presented normally. Both discrimination accuracy and the pattern of errors in this matching task were similar to those obtained in a control condition in which the same signs were always presented normally. The second experiment showed that these results held for discrimination of morphological operations presented in point-light displays as well. In the third experiment, signers were able to accurately identify signs of a constant handshape and morphological operations acting on signs presented in point-light displays. Finally, in Experiment 4, we evaluated which aspects of the motion patterns carried most of the information for sign identifiability. We presented signs in point-light displays with certain lights removed and found that the movement of the fingertips, but not of any other pair of points, is necessary for sign identification and that, in general, the more distal the joint, the more information its movement carries.

17.
Keller F, Alexopoulou T. Cognition, 2001, 79(3): 301-372
In this paper, we investigate the interaction of phonological and syntactic constraints on the realization of Information Structure in Greek, a free word order language. We use magnitude estimation as our experimental paradigm, which allows us to quantify the influence of a given linguistic constraint on the acceptability of a sentence. We present results from two experiments. In the first experiment, we focus on the interaction of word order and context. In the second experiment, we investigate the additional effect of accent placement and clitic doubling. The results show that word order, in contrast to standard assumptions in the theoretical literature, plays only a secondary role in marking the Information Structure of a sentence. Order preferences are relatively weak and can be overridden by constraints on accent placement and clitic doubling. Our experiments also demonstrate that a null context shows the same preference pattern as an all-focus context, indicating that 'default' word order and accent placement (in the absence of context) can be explained in terms of Information Structure. In the theoretical part of the paper, we formalize the interaction of syntactic and phonological constraints on Information Structure. We argue that this interaction is best captured using a notion of grammatical competition, such as the one developed in Optimality Theory (Prince & Smolensky, 1993, Optimality Theory: Constraint Interaction in Generative Grammar, Technical Report No. 2, Center for Cognitive Science, Rutgers University, Piscataway, NJ; Prince & Smolensky, 1997, Science, 275, 1604). In particular, we exploit the optimality-theoretic concept of constraint ranking to account for the fact that some constraint violations are more serious than others. We extend standard Optimality Theory to obtain a grammar model that predicts not only the optimal (i.e., grammatical) realization of a given input but also makes predictions about the relative grammaticality of suboptimal structures. This allows us to derive a constraint hierarchy that accounts for the interaction of phonological and syntactic constraints on Information Structure and models the acceptability patterns found in the experimental data.
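The constraint-ranking idea is easy to state computationally. The sketch below illustrates standard optimality-theoretic evaluation (not the authors' extended model): candidates are compared lexicographically on their violation counts under a strict ranking, which also yields an ordering over suboptimal candidates. The constraint names, candidates, and violation counts are invented for the example.

    def profile(candidate, ranking):
        """Violation counts listed from the highest-ranked constraint down."""
        return tuple(candidate["violations"].get(c, 0) for c in ranking)

    def evaluate(candidates, ranking):
        """Order candidates from optimal to least acceptable. Python's tuple
        comparison is lexicographic, mirroring strict constraint domination:
        one violation of a higher-ranked constraint outweighs any number of
        violations of lower-ranked ones."""
        return sorted(candidates, key=lambda c: profile(c, ranking))

    ranking = ["AccentOnFocus", "CliticDoubling", "CanonicalOrder"]  # hypothetical
    candidates = [
        {"form": "SVO, accent on focus", "violations": {}},
        {"form": "OVS, accent on focus", "violations": {"CanonicalOrder": 1}},
        {"form": "SVO, default accent", "violations": {"AccentOnFocus": 1}},
    ]
    for rank, cand in enumerate(evaluate(candidates, ranking), start=1):
        print(rank, cand["form"], profile(cand, ranking))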

18.
Most theories of the programming of saccadic eye movements (SEM) agree that direction and amplitude are the two basic dimensions under control when an intended movement is planned, but they disagree over whether these two parameters are specified separately or in conjunction. We measured saccadic reaction time (SRT) in a situation where information about the amplitude and the direction of the required movement became available at different moments in time. Delivering information about either direction or amplitude before the other reduced SRT, demonstrating that direction and amplitude are specified separately rather than in conjunction or in a fixed serial order. All changes in SRT were quantitatively explained by a simple growth-process (accumulator) model according to which a movement starts when two separate neural activities, embodying direction and amplitude programming, have both reached a constant threshold level of activity. Although in isolation the amplitude programming was faster than the direction programming, the situation reversed when the two dimensions had to be specified at the same time. We conclude that, besides the motor maps representing the desired final position of the eye or a fixed movement vector, another processing stage is required in which the basic parameters of SEM, direction and amplitude, are clearly separable.
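A minimal simulation of the growth-process account described here (a sketch under invented parameters, not the fitted model): direction and amplitude programs rise independently to a fixed threshold, advance information gives one program a head start, and the saccade is launched only when both have crossed.

    import random

    THRESHOLD = 1.0

    def crossing_time(rate, head_start_ms=0.0):
        """Time for a linear accumulator to reach threshold, credited with any
        head start earned from advance (precued) information."""
        return max(THRESHOLD / rate - head_start_ms, 0.0)

    def srt(direction_precued=False, amplitude_precued=False):
        """Saccadic reaction time = time for the slower of the two programs."""
        direction_rate = random.gauss(0.010, 0.001)  # threshold units per ms
        amplitude_rate = random.gauss(0.012, 0.001)  # faster in isolation
        t_dir = crossing_time(direction_rate, 50.0 if direction_precued else 0.0)
        t_amp = crossing_time(amplitude_rate, 50.0 if amplitude_precued else 0.0)
        return max(t_dir, t_amp)  # movement starts when BOTH reach threshold

    random.seed(1)
    for condition in [(False, False), (True, False), (False, True)]:
        mean_srt = sum(srt(*condition) for _ in range(5000)) / 5000
        print("precue (dir, amp):", condition, "mean SRT ~", round(mean_srt, 1), "ms")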

19.
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.
