Similar Documents
1.
Sign language displays all the complex linguistic structure found in spoken languages, but conveys its syntax in large part by manipulating spatial relations. This study investigated whether deaf signers who rely on a visual-spatial language nonetheless show a principled cortical separation for language and nonlanguage visual-spatial functioning. Four unilaterally brain-damaged deaf signers, fluent in American Sign Language (ASL) before their strokes, served as subjects. Three had damage to the left hemisphere and one had damage to the right hemisphere. They were administered selected tests of nonlanguage visual-spatial processing. The pattern of performance of the four patients across this series of tests suggests that deaf signers show hemispheric specialization for nonlanguage visual-spatial processing similar to that of hearing, speaking individuals. The patients with damage to the left hemisphere, in general, appropriately processed visual-spatial relationships, whereas the patient with damage to the right hemisphere showed consistent and severe visual-spatial impairment. The language behavior of these patients was much the opposite, however. Indeed, the most striking separation between linguistic and nonlanguage visual-spatial functions occurred in the left-hemisphere patient who was most severely aphasic for sign language. Her signing was grossly impaired, yet her visual-spatial capacities across the series of tests were surprisingly normal. These data suggest that the two cerebral hemispheres of congenitally deaf signers can develop separate functional specialization for nonlanguage visual-spatial processing and for language processing, even though sign language is conveyed in large part via visual-spatial manipulation.

2.
This study investigates the sign-based perceptual abilities of 59 deaf children. Like many hearing, speaking children, deaf signing children appear to perceive isolated lexical items based on the formational parameters of those items. Deaf signers also show trends similar to those exhibited by hearing speakers in the development of the perceptual ability necessary to distinguish between minimal pairs within their respective language systems.

3.
Cognitive Development, 2005, 20(2), 159–172
Recent studies with “late-signing” deaf children (deaf children born into families in which no one uses a sign language) have indicated that they have difficulty performing tasks that require them to reason about other people's false beliefs. However, virtually no research has so far investigated how far late signers’ difficulties with mental state understanding extend. This paper reports one study that uses an imitation paradigm to examine whether late signers may also have difficulty in interpreting other people's actions in terms of their goals. Both late-signing (N = 15) and second generation “native-signing” deaf children (N = 19) produced a pattern of responses to this task that indicates that they can and readily do view the actions of others as goal-directed. We conclude that this form of mental state understanding (generally seen as a precursor to understanding false beliefs) is intact in late-signing deaf children.

4.
Cognition, 2009, 112(2), 217–228
Commenting on perceptual similarities between objects stands out as an important linguistic achievement, one that may pave the way towards noticing and commenting on more abstract relational commonalities between objects. To explore whether having a conventional linguistic system is necessary for children to comment on different types of similarity comparisons, we observed four children who had not been exposed to usable linguistic input - deaf children whose hearing losses prevented them from learning spoken language and whose hearing parents had not exposed them to sign language. These children developed gesture systems that have language-like structure at many different levels. Here we ask whether the deaf children used their gestures to comment on similarity relations and, if so, which types of relations they expressed. We found that all four deaf children were able to use their gestures to express similarity comparisons (point to cat + point to tiger) resembling those conveyed by 40 hearing children in early gesture + speech combinations (cat + point to tiger). However, the two groups diverged at later ages. Hearing children, after acquiring the word like, shifted from primarily expressing global similarity (as in cat/tiger) to primarily expressing single-property similarity (as in crayon is brown like my hair). In contrast, the deaf children, lacking an explicit term for similarity, continued to primarily express global similarity. The findings underscore the robustness of similarity comparisons in human communication, but also highlight the importance of conventional terms for comparison as likely contributors to routinely expressing more focused similarity relations.

5.
Being connected to other people at the level of inner and unobservable mental states is one of the most essential aspects of a meaningful life, including psychological well-being and successful cooperation. The foundation for this kind of connectedness is our theory of mind (ToM), that is, the ability to understand our own and others’ inner experiences in terms of mental states such as beliefs and desires. But how do we develop this ability? Forty-six 17- to 107-month-old children completed a non-verbal eye-tracking false-belief task: 9 signing deaf children from deaf families and two comparison groups, namely 13 deaf children with cochlear implants and 24 typically developing hearing children. We show that typically developing hearing children and deaf children from deaf families, but not deaf children with cochlear implants, succeeded on the non-verbal eye-tracking ToM task. The findings suggest that the ability to recognize others’ mental states is supported by very early, continuous and fluent language-based communication with caregivers.

6.
Stuttering is a disorder of speech production that typically arises in the preschool years, and many accounts of its onset and development implicate language and motor processes as critical underlying factors. There have, however, been very few studies of speech motor control processes in preschool children who stutter. Hearing novel nonwords and reproducing them engages multiple neural networks, including those involved in phonological analysis and storage and speech motor programming and execution. We used this task to explore speech motor and language abilities of 31 children aged 4–5 years who were diagnosed as stuttering. We also used sensitive and specific standardized tests of speech and language abilities to determine which of the children who stutter had concomitant language and/or phonological disorders. Approximately half of our sample of stuttering children had language and/or phonological disorders. As previous investigations would suggest, the stuttering children with concomitant language or speech sound disorders produced significantly more errors on the nonword repetition task compared to typically developing children. In contrast, the children who were diagnosed as stuttering, but who had normal speech sound and language abilities, performed the nonword repetition task with equal accuracy compared to their normally fluent peers. Analyses of interarticulator motions during accurate and fluent productions of the nonwords revealed that the children who stutter (without concomitant disorders) showed higher variability in oral motor coordination indices. 
These results provide new evidence that preschool children diagnosed as stuttering lag their typically developing peers in maturation of speech motor control processes.

Educational objectives: The reader will be able to: (a) discuss why performance on nonword repetition tasks has been investigated in children who stutter; (b) discuss why children who stutter in the current study had a higher incidence of concomitant language deficits compared to several other studies; (c) describe how performance differed on a nonword repetition test between children who stutter who do and do not have concomitant speech or language deficits; (d) make a general statement about speech motor control for nonword production in children who stutter compared to controls.

7.
Hu Z, Wang W, Liu H, Peng D, Yang Y, Li K, Zhang JX, Ding G. Brain and Language, 2011, 116(2), 64–70
Effective literacy education in deaf students calls for psycholinguistic research revealing the cognitive and neural mechanisms underlying their written language processing. When learning a written language, deaf students are often instructed to sign out printed text. The present fMRI study was intended to reveal the neural substrates associated with word signing by comparing it with picture signing. Native deaf signers were asked to overtly sign in Chinese Sign Language (CSL) common objects indicated with written words or presented as pictures. Except in left inferior frontal gyrus and inferior parietal lobule where word signing elicited greater activation than picture signing, the two tasks engaged a highly overlapping set of brain regions previously implicated in sign production. The results suggest that word signing in the deaf signers relies on meaning activation from printed visual forms, followed by similar production processes from meaning to signs as in picture signing. The present study also documents the basic brain activation pattern for sign production in CSL and supports the notion of a universal core neural network for sign production across different sign languages.

8.
The present study examines the impact of highly inconsistent input on language acquisition. The American deaf community provides a unique opportunity to observe children exposed to nonnative language models as their only linguistic input. This research is a detailed case study of one child acquiring his native language in such circumstances. It asks whether this child is capable of organizing a natural language out of input data that are not representative of certain natural language principles. Simon is a deaf child whose deaf parents both learned American Sign Language (ASL) after age 15. Simon's only ASL input is provided by his late-learner parents. The study examines Simon's performance at age 7 on an ASL morphology task, compared with eight children who have native signing parents, and also compared with Simon's own parents. The results show that Simon's production of ASL substantially surpasses that of his parents. Simon's parents, like other late learners of ASL, perform below adult native signing criteria, with many inconsistencies and errors in their use of ASL morphology. In contrast, Simon's performance is much more regular, and in fact on most ASL morphemes is equal to that of children exposed to a native signing model. The results thus indicate that Simon is capable of acquiring a regular and orderly morphological rule system for which his input provides only highly inconsistent and noisy data. In addition, the results provide some insight into the mechanisms by which such learning may occur. Although the ASL situation is rare, it reveals clues that may contribute to our understanding of the human capacity for language learning.

9.
A time-sharing paradigm was used to assess language lateralization in language-disordered and normal children aged 4–7 years. Several expressive language tasks as well as a vocal, but nonlinguistic, task were administered concurrently with unimanual finger tapping. Dependent variables were percent disruption scores and number of syllables produced per concurrent trial. All language concurrent tasks produced tapping reductions for both hands for both groups. This result contrasts with similar time-sharing studies claiming asymmetrical interference and hence language lateralization in children (N. White & M. Kinsbourne, 1980, Brain and Language, 10, 215–223; J. Obrzut, G. Hynd, A. Obrzut, & J. Leitgeb, 1980, Brain and Language, 11, 181–194). A multiple regression analysis revealed a significant interaction effect differentiating language-disordered from normal children. Normals exhibited a parallel response pattern for speech and tapping (both increased or decreased in rate) under all lateralization conditions. Language-disordered children exhibited an inverse response pattern (e.g., if speech output increased, tapping rate decreased) only under left-hemisphere time-sharing.

10.
Wechsler Intelligence Scale for Children—Revised (WISC-R) Performance Scale metrics and subtest factor loadings, derived separately from deaf (N = 1228) and hearing (N = 2200) samples, are practically identical. Small mean differences are probably attributable to the higher incidence of brain damage among deaf children. In addition to demonstrating the absence of construct bias in WISC-R Performance IQ (PIQ) measurement for deaf children, the results contradict theories which propose linguistic bias as the cause of the white-black difference in Performance IQ. Spearman's hypothesis that white-black mental test differences are primarily a difference in g received significant support. The results indicate that cognition, as measured by PIQ, is virtually independent of language acquisition.

11.
L. S. Gottfredson's preceding comment (Journal of Vocational Behavior, 1983, 23, 203–212) is characterized by undocumented and arbitrary assertions. Moreover, we still maintain and cite further evidence that the features of the stages she describes represent an implausible account of development. We conclude that there is nothing in either L. S. Gottfredson's original (Journal of Counseling Psychology, 1981, 28, 545–579) article or her preceding paper that leads us to alter our belief that the views we present in our own article (Journal of Vocational Behavior, 1983, 23, 179–212) will be useful for the future development of vocational theory and intervention.

12.
This paper reports two studies which support the prediction derived from Hershenson's (Journal of Counseling Psychology, 1968, 15, 23–30) life-stage vocational development model that average scores on Self-differentiation (worker self-concept and motivation) would exceed those on Competence (work habits, skills, and interpersonal relations), which in turn would exceed those on Independence (appropriateness and crystallization of vocational goals). The first study involved ratings by project staff on an inner city, socially disadvantaged population, and the second study involved self-ratings by individuals who had changed occupations in midcareer. Findings are consistent with those reported by Hershenson and Langbauer (Journal of Counseling Psychology, 1973, 20, 519–521) on a population of deaf clients.

13.
John L. Locke, Cognition, 1978, 6(3), 175–187
Twenty-four deaf and hearing children silently read a printed passage while crossing out all detected cases of a pre-specified target letter. Target letters appeared in phonemically modal form, a category loosely analogous to “pronounced” letters (e.g., the g in badge), and in phonemically nonmodal form, a class which included “silent” letters and those pronounced in somewhat atypical fashion (e.g., the g in rough). Hearing children detected significantly more modal than nonmodal forms, an expected pronunciation effect for individuals in whom speech and reading ordinarily are in close functional relationship. The deaf detected exactly as many modal as nonmodal letter forms, provoking the interpretation that deaf children, as a group, do not effectively mediate print with speech. The deaf also were relatively unaffected by grammatical class, while hearing subjects were considerably more likely to detect a target letter if it occurred in a content word than in a functor term. Questions pertaining to reading instruction in the deaf are discussed.

14.
Developmental psychology plays a central role in shaping evidence‐based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]

15.
The use of pointing and its place in word combinations and the organization of sentences were examined in children acquiring Japanese Sign Language as a first language. Subjects were three deaf children of signing deaf parents, and were aged from 1 year 9 months to 3 years 1 month at the time of observation. They were observed and videotaped periodically in free play settings. Pointing gestures were observed frequently in the earlier utterances in the development of sign language. It was also found that some pointing was referentially redundant and had a fixed position at the end of a sentence. This suggests that pointing, as well as being used referentially, plays a grammatical role in organizing the sentence.

16.
In recent years numerous operant language training programmes have been designed for teaching both receptive and expressive language to autistic and retarded children (e.g. Bricker and Bricker, 1970a; Bricker and Bricker, 1970b; Lovaas, 1968; Sloane et al., 1968). There have been suggestions that the content of such programmes should be to some extent dictated by the findings of psycholinguists, while the methods be designed along behaviour modification lines (Lynch and Bricker, 1972; Miller and Yoder, 1972). Certainly operant programmes have been shown to produce some improvement of language function in autistic and retarded children (Bricker and Bricker, 1970a; Sloane et al., 1968; Lovaas, 1968; Guess et al., 1968); but the problem of whether all retarded children can be taught some language by these means has not been tackled. Psycholinguists, following Chomsky (1965), maintain that the development of language in children is dependent on the language acquisition device, or LAD. Unfortunately there are no independent means of determining the presence of LAD in a child, so that relating a child's inability to use language to the absence of LAD becomes a circular argument.

It frequently seems to be assumed that, provided no perceptual deficits are present, language acquisition is as difficult in one medium as in another. Individuals who are deaf and retarded have been taught sign language with some success (Berger, 1972; Cornforth et al., 1974), and retarded children who are non-speaking have been taught symbolic languages (Bliss symbols in Vanderheiden et al., 1975; Premack symbols in Hollis and Carrier, 1975, and Hodges, 1976). It is unclear, however, whether those learning symbolic languages, but having no gross physical or perceptual handicap, could have learnt sign language or even spoken language with an equivalent method of training.

The present study is a report of a retarded boy with unreliable hearing (which ruled out spoken language), who seemed unable to learn (receptive or expressive) sign language after extensive operant training, but who rapidly acquired a limited symbolic “language” using an identical training method. The symbols used were pictorial representations of the objects (cf. Bliss and Premack symbols).

17.
Central to the interface of social-cognitive and communicative development is the growth of a theory of mind (ToM). ToM is mastered by most hearing children and deaf children of signing deaf parents by the age of 5 or 6 but is often seriously delayed in deaf children of hearing parents. This paper reviews recently published research on deaf children's ToM development and presents an original study consisting of eight longitudinal case histories that collectively map late-signing deaf children's ToM performance from 44 to 158 months of age. While five tentative conclusions can be posited from the collective research so far, further investigation of each of these possibilities is clearly needed.

18.
This article reviews theoretical and empirical issues concerning the relations of language and memory in deaf children and adults. An integration of previous studies, together with the presentation of new findings, suggests that there is an intimate relation between spoken language and memory. Either spoken language or sign language can serve as a natural mode of communication for young children (deaf or hearing), leading to normal language, social, and cognitive development. Nevertheless, variation in spoken language abilities can be shown to have a direct impact on memory span. Although the ways in which memory span can affect other cognitive processes and academic achievement are not considered in depth here, several variables that can have direct impact on the language-memory interaction are considered. These findings have clear implications for the education of deaf children.

19.
It has been consistently reported that deaf children have tremendous problems in reading English sentences. Three experiments were conducted in the present study to investigate the nature of deaf children's reading inability. The first experiment looked into the letter-decoding process. It was found that deaf subjects took longer than normal-hearing subjects in encoding and decoding alphabetic letters. The second experiment employed a sentence-picture verification paradigm. The results showed that deaf subjects adopted a visual-imagery coding strategy rather than a general linguistic coding strategy as described by H. H. Clark and W. Chase (Cognitive Psychology, 1972, 3, 472–517) and by P. A. Carpenter and M. A. Just (Memory and Cognition, 1975, 3, 465–473). However, when the sentence was presented in manual signs (Experiment 3), deaf subjects' verification time showed that they adopted a general linguistic coding strategy. Thus, deaf subjects are capable of a linguistic coding strategy, but they do not apply it to process printed English sentences. A second-language hypothesis was advanced to account for the obtained data. Deaf children's reading inability was also discussed from this perspective.

20.
Successful research of Caribbean signed languages and deaf communities involves negotiating complex communication ethics toward both people and languages. In this article, I ground a call for ethical listening to Caribbean deaf and signing communities in sociolinguistic research that investigated deaf community and sign language boundaries in the Caribbean. I argue that a dialogic ethic that privileges listening is foundational for ethical research with Caribbean deaf and signing communities by discussing two ethical challenges that were central to understanding their narrative ground: the communicative construction of categories of linguistic membership and advocacy of social justice and human rights.
