Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
To learn to read is to acquire a visual language skill which systematically maps onto extant spoken language skills. Some children perform this task quite adeptly while others encounter much difficulty, and it has become a question of both scientific and practical merit to ask why there exists such a range of success in learning to read. Obviously, learning to read places a complex burden on many emerging capacities, and in principle, at least, reading disability could arise at any level from visual perception to general cognition. Yet since reading is parasitic on spoken language, the possibility also exists that reading disability derives from some subtle difficulty in the language domain. This article reviews some of the many studies that have explored the association between early reading skills and spoken language skills. The focus is on findings which reveal that when the linguistic short-term memory skills of good and poor beginning readers are critically examined, many, though perhaps not all, poor readers prove to possess subtle deficiencies which correlate with their problems in learning to read.

2.
3.
Background: Previous studies have reported that children score better in language tasks using sung rather than spoken stimuli. We examined word detection ease in sung and spoken sentences that were equated for phoneme duration and pitch variations in children aged 7 to 12 years with typical language development (TLD) as well as in children with specific language impairment (SLI), and hypothesized that the facilitation effect would vary with language abilities. Method: In Experiment 1, 69 children with TLD (7–10 years old) detected words in sentences that were spoken, sung on pitches extracted from speech, and sung on original scores. In Experiment 2, we added a natural speech rate condition and tested 68 children with TLD (7–12 years old). In Experiment 3, 16 children with SLI and 16 age-matched children with TLD were tested in all four conditions. Results: In both TLD groups, older children scored better than the younger ones. The matched TLD group scored higher than the SLI group, who scored at the level of the younger children with TLD. None of the experiments showed a facilitation effect of sung over spoken stimuli. Conclusions: Word detection abilities improved with age in both TLD and SLI groups. Our findings are compatible with the hypothesis of delayed language abilities in children with SLI, and are discussed in light of the role of durational prosodic cues in word detection.

4.
This study has two theoretical dimensions: (a) to explore which components of Baddeley's (1986) working memory model are associated with children's spoken language comprehension, and (b) to compare the extent to which measures of the components of this fractionated model and an index of a unitary model (listening span) are able to predict individual differences in spoken language comprehension. Correlational analyses revealed that within a group of 66 4- and 5-year-old children both listening span and phonological memory, but not visuospatial memory, were associated with vocabulary knowledge and spoken language comprehension. However, of the proposed measures of central executive function (dual-task coordination, sustained attention, and verbal fluency), only the latter was related to children's ability to understand spoken language. Hierarchical regression analyses indicated that variance in vocabulary knowledge was best explained by phonological memory skills, whereas individual differences in spoken language comprehension exhibited unique and independent associations with verbal fluency.

5.
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent–child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.

6.
Not long ago, poor language skills did not necessarily interfere with the quality of a person's life. Many occupations did not require sophisticated language or literacy. Interactions with other people could reasonably be restricted to family members and a few social or business contacts. But in the 21st century, advances in technology and burgeoning population centers have made it necessary for children to acquire high levels of proficiency with at least one language, in both spoken and written form. This situation increases the urgency for us to develop better theoretical accounts of the problems underlying disorders of language, including dyslexia. Empirical investigations of language-learning deficits largely focus on phonological representations and often ask to what extent labeling responses are "categorical." This article describes the history of this approach and presents some relevant findings regarding the perceptual organization of speech signals, findings that should prompt us to expand our investigations of language disorders.

7.
Children have difficulty learning to read alphabetic writing systems, in part, because they have difficulty segmenting spoken language into phonemes. Young children also have difficulty attending to the individual dimensions of visual objects. Thus, children's early difficulty in reading may be one sign of a general inability to selectively attend to the parts of any perceptual wholes. To explore this notion, children in kindergarten through fourth grade (Experiments 1, 3, and 4) and adults (Experiment 2) classified triads of spoken syllables and triads of visual objects. Classifying speech by common parts was positively related to reading and spelling ability (Experiments 1 and 4), but usually not to classifying visual stimuli by common parts under free classification instructions (Experiments 1 through 3). However, classification was more consistent across the visual and auditory modalities when the children were told to classify based on a shared constituent (Experiment 4). Regardless of instructions, performance on the visual tasks did not usually relate to reading and spelling skill. The ability to attend selectively to phonemes seems to be a "special" skill, one that may require specific experiences with language, such as those involved in learning to read an alphabetic writing system.

8.
Studies of young children with unilateral perinatal stroke (PS) have confirmed the plasticity of the developing brain for acquiring language. While recent studies of typically developing children have demonstrated the significant development of language well into adolescence, we know little regarding the course of language development in the PS group as they mature. Will children with PS continue to show the same remarkable plasticity that they exhibited at younger ages? In the present paper we investigate later language and discourse in children with perinatal stroke (ages 7–16) using spoken personal narrative as the discourse context. In contrast to the findings of the discourse studies of younger children with PS, children with left hemisphere lesions made more morphological errors, used less complex syntax and fewer syntactic types than controls; they also produced more impoverished story settings. In contrast, those with right hemisphere lesions performed comparably to controls, except in their impoverished use of complex syntax. The findings provide insight into the nature of later spoken language development in these children, revealing both the nature and extent of neuroplasticity for language as well as potential regional biases.

9.
We investigated the extent to which learning to read and write affects spoken word recognition. Previous studies have reported orthographic effects on spoken language in skilled readers. However, very few studies have addressed the development of these effects as a function of reading expertise. We therefore studied orthographic neighborhood (ON) and phonological neighborhood (PN) effects in spoken word recognition in beginning and advanced readers and in children with developmental dyslexia. We predicted that whereas both beginning and advanced readers would show normal PN effects, only advanced readers would show ON effects. The results confirmed these predictions. The size of the ON effect on spoken word recognition was strongly predicted by written language experience and proficiency. In contrast, the size of the PN effect was not affected by reading level. Moreover, dyslexic readers showed no orthographic effects on spoken word recognition. In sum, these data suggest that orthographic effects on spoken word recognition are not artifacts of some uncontrolled spoken language property but reflect a genuine influence of orthographic information on spoken word recognition.

10.
Concern for the impact of prenatal cocaine exposure (PCE) on human language development is based on observations of impaired performance on assessments of language skills in these children relative to non-exposed children. We investigated the effects of PCE on speech processing ability using event-related potentials (ERPs) among a sample of adolescents followed prospectively since birth. This study presents findings regarding cortical functioning in 107 prenatally cocaine-exposed (PCE) and 46 non-drug-exposed (NDE) 13-year-old adolescents. PCE and NDE groups differed in processing of auditorily presented non-words at very early sensory/phonemic processing components (N1/P2), in somewhat higher-level phonological processing components (N2), and in late high-level linguistic/memory components (P600). These findings suggest that children with PCE have atypical neural responses to spoken language stimuli during low-level phonological processing and at a later stage of processing of spoken stimuli.

11.
12.
This article reviews theoretical and empirical issues concerning the relations of language and memory in deaf children and adults. An integration of previous studies, together with the presentation of new findings, suggests that there is an intimate relation between spoken language and memory. Either spoken language or sign language can serve as a natural mode of communication for young children (deaf or hearing), leading to normal language, social, and cognitive development. Nevertheless, variation in spoken language abilities can be shown to have a direct impact on memory span. Although the ways in which memory span can affect other cognitive processes and academic achievement are not considered in depth here, several variables that can have a direct impact on the language-memory interaction are considered. These findings have clear implications for the education of deaf children.

13.
For hearing people, structure given to orthographic information may be influenced by phonological structures that develop with experience of spoken language. In this study we examine whether profoundly deaf individuals structure orthographic representation differently. We ask "Would deaf students who are advanced readers show effects of syllable structure despite their altered experience of spoken language, or would they, because of reduced influence from speech, organize their orthographic knowledge according to groupings defined by letter frequency?" We used a task introduced by Prinzmetal (Prinzmetal, Treiman, & Rho, 1986) in which participants were asked to judge the colour of letters in briefly presented words. As with hearing participants, the number of errors made by deaf participants was influenced by syllable structure (Prinzmetal et al., 1986; Rapp, 1992). This effect could not be accounted for by letter frequency. Furthermore, there was no correlation between the strength of syllable effects and residual speech or hearing. Our results support the view that the syllable is a unit of linguistic organization that is abstract enough to apply to both spoken and written language.

14.
15.
This investigation examined whether access to sign language as a medium for instruction influences theory of mind (ToM) reasoning in deaf children with similar home language environments. Experiment 1 involved 97 deaf Italian children ages 4-12 years: 56 were from deaf families and had LIS (Italian Sign Language) as their native language, and 41 had acquired LIS as late signers following contact with signers outside their hearing families. Children receiving bimodal/bilingual instruction in LIS together with Sign-Supported and spoken Italian significantly outperformed children in oralist schools in which communication was in Italian and often relied on lipreading. Experiment 2 involved 61 deaf children in Estonia and Sweden ages 6-16 years. On a wide variety of ToM tasks, bilingually instructed native signers in Estonian Sign Language and spoken Estonian succeeded at a level similar to age-matched hearing children. They outperformed bilingually instructed late signers and native signers attending oralist schools. Particularly for native signers, access to sign language in a bilingual environment may facilitate conversational exchanges that promote the expression of ToM by enabling children to monitor others' mental states effectively.

16.
Young children's awareness of the word as a unit of spoken language was investigated in a series of five experiments that required children aged from 4 to 7 years to segment spoken language strings into words. The results of the first three experiments suggest that young children have considerable success in segmenting spoken language materials, regardless of the grammaticality of the strings, and regardless of the grammatical form class, plurality, or syllabic length of the component words. The basis of such successful segmentation ability was considered further in a fourth experiment, which indicated that children may use stress as a basis of response. A fifth experiment therefore manipulated syllabic stress and morphemic structure to determine what response strategies are employed by children of different ages in segmenting speech. The results suggest that 4- to 5-year-old children respond primarily on the basis of acoustic factors such as stress, whereas somewhat older 5- to 6-year-old children respond on the basis of (unbound) morphemic structure. By age 7, most children have abandoned strategies and now respond on the basis of word concept. Implications of these findings for reading acquisition are briefly indicated. This research was supported by a Tertiary Education Commission, General Development Grant (University of Western Australia), and by an Education Research Grant from the Education Research and Development Committee, Canberra.

17.
Non-vocal language intervention is mostly used to develop communication skills in severely dysfunctional children. In the present study, a 3-year-old dysphatic boy was taught signs to facilitate his speech development. After 6 months of sign instruction, he showed substantial improvement in spoken language, and had gained one year on the Receptive and the Expressive scales of the Reynell Developmental Language Scales in half a year's time. Behavior problems were markedly reduced. It is concluded that sign instruction may be used with a wider range of subjects than is usual today.

18.
Infant signs are intentionally taught/learned symbolic gestures which can be used to represent objects, actions, requests, and mental states. Through infant signs, parents and infants begin to communicate specific concepts earlier than children's first spoken language. This study examines whether cultural differences in language are reflected in children's and parents' use of infant signs. Parents speaking East Asian languages with their children utilize verbs more often than do English-speaking mothers; and compared to their English-learning peers, Chinese children are more likely to learn verbs as they first acquire spoken words. By comparing parents' and infants' use of infant signs in the U.S. and Taiwan, we investigate cultural differences of noun/object versus verb/action bias before children's first language. Parents reported their own and their children's use of first infant signs retrospectively. Results show that cultural differences in parents' and children's infant sign use were consistent with research on early words, reflecting cultural differences in communication functions (referential versus regulatory) and child-rearing goals (independent versus interdependent). The current study provides evidence that intergenerational transmission of culture through symbols begins prior to oral language.

19.
The present study employs event-related potentials (ERPs) to verify the utility of using electrophysiological measures to study developmental questions within the field of language comprehension. Established ERP components (N400 and P600) that reflect semantic and syntactic processing were examined. Fifteen adults and 14 children (ages 8-13) processed spoken stimuli containing either semantic or syntactic anomalies. Adult participants showed a significant N400 in response to semantic anomalies and P600 components in response to syntactic anomalies. Children also showed evidence of both ERP components. The children's N400 component differed from the adults' in scalp location, latency, and component amplitude. The children's P600 was remarkably similar to the P600 shown by adults in scalp location, component amplitude, and component latency. Theoretical implications for theories of language comprehension in adults and children are discussed.

20.
We analyzed postsurgery linguistic outcomes of 43 hemispherectomy patients operated on at UCLA. We rated spoken language (Spoken Language Rank, SLR) on a scale from 0 (no language) to 6 (mature grammar) and examined the effects of side of resection/damage, age at surgery/seizure onset, seizure control postsurgery, and etiology on language development. Etiology was defined as developmental (cortical dysplasia and prenatal stroke) and acquired pathology (Rasmussen's encephalitis and postnatal stroke). We found that clinical variables were predictive of language outcomes only when they were considered within distinct etiology groups. Specifically, children with developmental etiologies had lower SLRs than those with acquired pathologies (p = .0006); age factors correlated positively with higher SLRs only for children with acquired etiologies (p = .0006); right-sided resections led to higher SLRs only for the acquired group (p = .0008); and postsurgery seizure control correlated positively with SLR only for those with developmental etiologies (p = .0047). We argue that the variables considered are not independent predictors of spoken language outcome posthemispherectomy but should be viewed instead as characteristics of etiology.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号