Similar documents
 20 similar documents retrieved (search time: 15 ms)
1.
2.
3.
Speech perception deficits are commonly reported in dyslexia, but longitudinal evidence that poor speech perception compromises learning to read is scant. We assessed the hypothesis that phonological skills, specifically phoneme awareness and RAN, mediate the relationship between speech perception and reading. We assessed longitudinal predictive relationships between categorical speech perception, phoneme awareness, RAN, language, attention and reading at ages 5½ and 6½ years in 237 children, many of whom were at high risk of reading difficulties. Speech perception at 5½ years correlated with language, attention, phoneme awareness and RAN concurrently and was a predictor of reading at 6½ years. There was no significant indirect effect of speech perception on reading via phoneme awareness, suggesting that its effects are separable from those of phoneme awareness. Children classified with dyslexia at 8 years had poorer speech perception than age-matched controls at 5½ years, and children with language disorders (with or without dyslexia) had more severe difficulties with both speech perception and attention control. Categorical speech perception tasks tap factors extraneous to perception, including decision-making skills. Further longitudinal studies are needed to unravel the complex relationships between categorical speech perception tasks and measures of reading, language, and attention.
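The mediation claim in this abstract rests on estimating an indirect effect: the product of the path from speech perception to phoneme awareness and the path from phoneme awareness to reading after controlling for speech perception. The sketch below illustrates that logic with ordinary least squares and a percentile bootstrap on simulated data; the variable names, effect sizes, sample size reuse, and bootstrap settings are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: X = speech perception at 5.5 years,
# M = phoneme awareness at 5.5 years, Y = reading at 6.5 years.
n = 237
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)              # path a (X -> M)
y = 0.1 * x + 0.5 * m + rng.normal(size=n)    # paths c' (X -> Y) and b (M -> Y)

def ols_coefs(predictors, target):
    """OLS coefficients for target ~ intercept + predictors."""
    design = np.column_stack([np.ones(len(target))] + list(predictors))
    coefs, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coefs

def indirect_effect(x, m, y):
    a = ols_coefs([x], m)[1]       # speech perception -> phoneme awareness
    b = ols_coefs([x, m], y)[2]    # phoneme awareness -> reading, controlling for X
    return a * b

# Percentile bootstrap for the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With the simulated coefficients above the bootstrap interval excludes zero; the null indirect effect reported in the abstract corresponds to an interval that straddles zero.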

4.
To learn to read is to acquire a visual language skill which systematically maps onto extant spoken language skills. Some children perform this task quite adeptly while others encounter much difficulty, and it has become a question of both scientific and practical merit to ask why there exists such a range of success in learning to read. Obviously, learning to read places a complex burden on many emerging capacities, and in principle, at least, reading disability could arise at any level from visual perception to general cognition. Yet since reading is parasitic on spoken language, the possibility also exists that reading disability is derived from some subtle difficulty in the language domain. This article reviews some of the many studies which have explored the association between early reading skills and spoken language skills. The focus is on findings which reveal that, when the linguistic short-term memory skills of good and poor beginning readers are critically examined, many, though perhaps not all, poor readers prove to possess subtle deficiencies that correlate with their problems in learning to read.

5.
Language acquisition involves a number of complex skills that develop in close relation to one another, allowing learners of a first spoken (or signed) language to achieve the best results with minimal effort, provided acquisition takes place within the appropriate period of time. In this regard, it is proposed that early speech perception plays a primary role in language acquisition. In order to provide an overview of current scientific knowledge about the capabilities of children under the age of one to perceive spoken language, this paper presents the results of the most relevant research on discrimination of word classes and types, cross-language and prosodic discrimination, phonological and phonotactic discrimination, as well as recognition of distributional regularities among the elements of the speech signal.

6.
In this paper, we describe the application of new computer and speech synthesis technologies for reading instruction. Stories are presented on the computer screen, and readers may designate words or parts of words that they cannot read for immediate speech feedback. The important contingency between speech sounds and their corresponding letter patterns is emphasized by displaying the letter patterns in reverse video as they are spoken. Speech feedback is provided by an advanced text-to-speech synthesizer (DECtalk). Intelligibility data are presented, showing that DECtalk can be understood almost as well as natural human speech by both normal adults and reading-disabled children. Preliminary data from 26 disabled readers indicate that there are significant benefits of speech feedback for reading comprehension and word recognition, and that children enjoy reading with the system.
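As a rough illustration of the feedback contingency described above (a selected letter pattern is highlighted while its pronunciation is spoken), here is a minimal console sketch. The speak() function is only a stand-in for a text-to-speech call and does not reproduce DECtalk's actual interface; the sentence and the selected character ranges are made up for the example.

```python
def speak(text: str) -> None:
    # Placeholder for a text-to-speech call; a real system would pass the
    # segment to a synthesizer here.
    print(f"[TTS] speaking: {text!r}")

def highlight(sentence: str, start: int, end: int) -> str:
    """Mark the selected letter pattern (standing in for reverse video)."""
    return sentence[:start] + "[" + sentence[start:end] + "]" + sentence[end:]

def give_feedback(sentence: str, start: int, end: int) -> None:
    segment = sentence[start:end]
    print(highlight(sentence, start, end))  # show the highlighted letters
    speak(segment)                          # speak them at the same time

sentence = "The cat sat on the mat."
give_feedback(sentence, 4, 7)   # reader selects the whole word "cat"
give_feedback(sentence, 5, 7)   # reader selects just the rime "at"
```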

7.
Three experiments are reported which address the problem of defining a role for inner speech. Experiments 1 and 2 establish that inner speech is acquired by normally developing readers between the ages of 8 and 11, and that both slow and fast readers show a similar pattern of acquisition, but do so at a different rate from normal readers. We suggest that the development of inner speech accompanies a strategy of reading aloud "with expression" and is a manifestation of the need to prestructure oral utterances. These will thus contain the lexical items visible on the page within an appropriate prosodic envelope. Both segmental and suprasegmental phonemes contribute to the meaning of spoken and, by analogy, written language. Experiment 3 showed that children at this critical point in learning to read comprehended text better when certain prosodic features were made visible in the text. Prosodic restructuring may thus be an important skill acquired by young readers as they progress toward fluent, silent adult reading.

8.
Children have difficulty learning to read alphabetic writing systems, in part, because they have difficulty segmenting spoken language into phonemes. Young children also have difficulty attending to the individual dimensions of visual objects. Thus, children's early difficulty in reading may be one sign of a general inability to selectively attend to the parts of any perceptual whole. To explore this notion, children in kindergarten through fourth grade (Experiments 1, 3, and 4) and adults (Experiment 2) classified triads of spoken syllables and triads of visual objects. Classifying speech by common parts was positively related to reading and spelling ability (Experiments 1 and 4), but usually not to classifying visual stimuli by common parts under free classification instructions (Experiments 1 through 3). However, classification was more consistent across the visual and auditory modalities when the children were told to classify based on a shared constituent (Experiment 4). Regardless of instructions, performance on the visual tasks did not usually relate to reading and spelling skill. The ability to attend selectively to phonemes seems to be a "special" skill, one that may require specific experiences with language, such as those involved in learning to read an alphabetic writing system.

9.
One intriguing question in language research concerns the extent to which orthographic information impacts on spoken word processing. Previous research has faced a number of methodological difficulties and has not reached a definitive conclusion. Our research addresses these difficulties by capitalizing on recent developments in the area of word learning. Participants were trained to criterion on a set of associations between novel pictures and novel spoken words. Spelling-sound consistent or spelling-sound inconsistent spellings were introduced on the 2nd day, and the influence of these spellings on speech processing was assessed on the 3rd day. Results showed significant orthographic effects on speech perception and speech production in a situation in which spelling-sound consistency was manipulated with perfect experimental control. Results are discussed in terms of a highly interactive language system in which there is a rapid and automatic flow of activation in both directions between orthographic and phonological representations.

10.
This study reports the reading difficulties of five children following unilateral left hemisphere stroke sustained either before or during the early stages of literacy acquisition. Although each of the children experienced a period of disturbed language processing in the initial stages postonset, at the time of testing none of the children was considered to be clinically aphasic. Yet, on a standardized test of oral reading each of the children achieved a reading age that lagged behind chronological age, and marked reading impairments were disclosed in four of the five children. A set of standardized and nonstandardized tests, aimed at measuring aspects of cognitive and spoken language processing that are considered to be important for normal reading acquisition, was administered. Where nonstandardized tests were used, performance of each of the stroke children was compared to that of groups of normally developing control children, closely matched for chronological age. A range of residual deficits in cognitive and spoken language processing was disclosed among the five brain-damaged children that appeared to be associated with their reading impairments. Two children showed poor reading that was expected given a selective impairment in verbal IQ; a specific phonological reading disorder was revealed in two children, each of whom had a residual impairment in phonological awareness; and delayed reading acquisition was observed in one child with a general language deficit. It is suggested that when a child suffers damage to the left hemisphere in the early stages of reading acquisition, difficulties with learning to read are likely to ensue and may arise as a consequence of an underlying cognitive or linguistic deficit.

11.
Several researchers who have compared the performance of dyslexic and normal-reading children on a variety of different tasks have suggested that dyslexic children may have subtle deficits in the phonemic analysis of spoken as well as written language. Thus it is of interest to know how children who have extraordinary difficulty learning to read can perform explicitly auditory-phonetic tasks. Seventeen dyslexic children (10 years of age) and a group of 17 controls were administered tests of identification and discrimination of synthesized voiced stop consonants differing in place of articulation. These were tests of the type used to study categorical perception in adults, adapted for use with young children. Significant differences between dyslexics and controls were found in both kinds of tasks; the pattern of identification and discrimination differences suggests an inconsistency in the dyslexics' phonetic classification of auditory cues. A significant relationship was found between reading level and speech discrimination.

12.
Oh, J. S., Jun, S. A., Knightly, L. M., & Au, T. K. (2003). Cognition, 86(3), B53–B64.
While early language experience seems crucial for mastering phonology, it remains unclear whether there are lasting benefits of speaking a language regularly during childhood if the quantity and quality of speaking drop dramatically after childhood. This study explored the accessibility of early childhood language memory. Specifically, it compared perception and production of Korean speech sounds by childhood speakers who had spoken Korean regularly for a few years during childhood to those of two other groups: (1) childhood hearers who had heard Korean regularly during childhood but had spoken Korean minimally, if at all; and (2) novice learners. All three groups were enrolled in first-year college Korean language classes. Childhood speakers were also compared to native speakers of Korean to see how native-like they were. The results revealed measurable long-term benefits of childhood speaking experience, underscoring the importance of early language experience, even if such experience diminishes dramatically beyond childhood.

13.
It is widely believed that reading to preschool children promotes their language and literacy skills. Yet, whether early parent–child book reading is an index of generally rich linguistic input or a unique predictor of later outcomes remains unclear. To address this question, we asked whether naturally occurring parent–child book reading interactions between 1 and 2.5 years of age predict elementary school language and literacy outcomes, controlling for the quantity of other talk parents provide their children, family socioeconomic status, and children's own early language skill. We find that the quantity of parent–child book reading interactions predicts children's later receptive vocabulary, reading comprehension, and internal motivation to read (but not decoding, external motivation to read, or math skill), controlling for these other factors. Importantly, we also find that parent language that occurs during book reading interactions is more sophisticated than parent language outside book reading interactions in terms of vocabulary diversity and syntactic complexity.

14.
Recent experiments have shown that people iconically modulate their prosody corresponding with the meaning of their utterance (e.g., Shintel et al., 2006). This article reports findings from a story reading task that expands the investigation of iconic prosody to abstract meanings in addition to concrete ones. Participants read stories that contrasted along concrete and abstract semantic dimensions of speed (e.g., a fast drive, slow career progress) and size (e.g., a small grasshopper, an important contract). Participants read fast stories at a faster rate than slow stories, and big stories with a lower pitch than small stories. The effect of speed was distributed across the stories, including portions that were identical across stories, whereas the size effect was localized to size-related words. Overall, these findings enrich the documentation of iconicity in spoken language and bear on our understanding of the relationship between gesture and speech.

15.
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non-speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.

16.
Background. There is evidence that children who are taught to read later in childhood (age 6–7) make faster progress in early literacy than those who are taught at a younger age (4–5 years), as is current practice in the UK.
Aims. Steiner-educated children begin learning how to read at age 7, and have better reading-related skills at the onset of instruction. It is therefore hypothesized that older Steiner-educated children will make faster progress in early literacy than younger standard-educated controls.
Samples. A total of 30 Steiner-educated children (age 7–9) were compared to a matched group of 31 standard-educated controls (age 4–6).
Method. Children were tested for reading, spelling, phonological awareness, and letter knowledge at three time points during their first year of formal reading instruction and again at the end of the second year.
Results. There were no significant differences between groups in word reading at the end of the first and second year or in reading comprehension at the end of the second year; however, the standard group outperformed the Steiner group on spelling at the end of both years. The Steiner group maintained an overall lead in phonological skills, while letter knowledge was similar in both groups.
Conclusions. The younger children showed similar, and in some cases better, progress in literacy than the older children; this was attributed to the more consistent and high-quality synthetic phonics instruction administered in standard schools. Consequently, concerns that 4- to 5-year-olds are 'too young' to begin formal reading instruction may be unfounded.

17.
Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory–visual facilitation is distinct to properties of natural, dynamic speech gestures was partially supported.

18.
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors ("slips of the tongue"). The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units (gestures) in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action.

19.
Giroux, I., & Rey, A. (2009). Cognitive Science, 33(2), 260–272.
Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (Serial Recurrent Networks, Elman, 1990; and PARSER, Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with PARSER's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes.
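To make the segmentation problem concrete, the sketch below builds a continuous syllable stream from a made-up artificial lexicon and inserts word boundaries wherever the syllable-to-syllable transitional probability drops, one simple bracketing-style heuristic. It is not an implementation of Elman's Serial Recurrent Network or Perruchet and Vinter's PARSER; the lexicon, stream length, and 0.5 threshold are illustrative assumptions.

```python
from collections import Counter
import random

# Small made-up lexicon of three-syllable "words"; within-word transitional
# probabilities are high, between-word ones are low, as in Saffran-style streams.
lexicon = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "tu"]]
random.seed(1)
stream = [syll for _ in range(300) for syll in random.choice(lexicon)]

# Transitional probability P(next syllable | current syllable) from bigram counts.
bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
tp = {pair: count / unigrams[pair[0]] for pair, count in bigrams.items()}

# Bracketing-style segmentation: close the current chunk when the transitional
# probability falls below a threshold (a stand-in for detecting a dip).
words, current = [], [stream[0]]
for prev_syll, syll in zip(stream, stream[1:]):
    if tp[(prev_syll, syll)] < 0.5:
        words.append("".join(current))
        current = []
    current.append(syll)
words.append("".join(current))

print(Counter(words).most_common(5))  # the lexicon items should dominate
```

A clustering account such as PARSER would instead build and strengthen chunk representations over exposure; the abstract's finding that words outperform part-words only after 10 minutes is taken as evidence for that chunk-based view.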

20.
Three experiments in Serbo-Croatian were conducted on the effects of phonological ambiguity and lexical ambiguity on printed word recognition. Subjects decided rapidly if a printed and a spoken word matched or not. Printed words were either phonologically ambiguous (two possible pronunciations) or unambiguous. If phonologically ambiguous, either both pronunciations were real words or only one was, the other being a nonword. Spoken words were necessarily unambiguous. Half the spoken words were auditorily degraded. In addition, the relative onsets of speech and print were varied. Speed of matching print to speech was slowed by phonological ambiguity, and the effect was amplified when the stimulus was also lexically ambiguous. Auditory degradation did not interact with print ambiguity, suggesting that perception of the spoken word was independent of the printed word.
