Similar Articles
20 similar articles found.
1.
This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a single dimension (e.g., big vs. small flower) and heard a recorded voice asking them, for example, “Can you get the blicket one?” spoken with either meaningful or neutral prosody. The 4-year-olds failed to map prosodic cues to their corresponding meaning, whereas the 5-year-olds succeeded (Experiment 1). However, 4-year-olds successfully mapped prosodic cues to word meaning following a training phase that reinforced children’s attention to prosodic information (Experiment 2). These studies constitute the first empirical demonstration that young children are able to use prosody-to-meaning correlates as a cue to novel word interpretation.

2.
This paper reports on the psycholinguistic investigation of a surface dyslexic aphasic patient's abilities to handle written material. The analysis of paralexic errors produced in reading aloud single words and nonwords classically suggested that the patient was using an analytical strategy, parsing the letter string stimulus from left to right into graphemes and assigning phonemic values to graphemes. The patient's results were found to be sensitive to irregularities in correspondence between graphemes and phonemes not only in reading aloud but also in lexical decisions, writing to dictation, rhyming, and written-word comprehension. Moreover, the patient's linguistic behavior revealed the reverse of the pattern observed in deep dyslexia along the word/nonword and content/function-word dimensions. It was found that some semantic information about written words could be retrieved from both phonological and nonphonological processes, presumably operating concurrently and both providing converging or conflicting pieces of meaning to the understanding of written words. Some considerations derived from the observation of this pathological reading behavior are discussed, contributing to a psycholinguistic model of normal reading.

3.
A case of pure alexia due to an ischemic lesion of the occipito-temporal region is described. Written words could be matched but not read. Immediate memory span for graphemes was defective. The reading defect probably depends on the inability to identify the written word “globally”; the phonological process was intact, but the memory disturbance impeded reading. The dissociation is explained by the preservation of word forms, which are linked to the semantic stage. Nonwritten stimuli trigger a “meaning” which evokes the word form, and so the written word is recognized even though it cannot be read.

4.
Autistic children were compared with control children on tasks in which retention was tested by different methods. In three tests of recall, using named pictures, written words and spoken words as test stimuli, autistic children were impaired in comparison with age-matched normal children and with controls matched for verbal ability. In one test of forced-choice recognition of pictures, autistic children were impaired in comparison with ability-matched controls. In three tests of cued recall, using named pictures, written words and spoken words as test stimuli, and acoustic, graphemic and semantic cues, autistic children were not impaired in comparison with normal age-matched controls. In one test of paired-associate learning using non-related word pairs as test stimuli autistic children were not impaired in comparison with normal age-matched controls. These experimental paradigms were similar to some that have been used to investigate the amnesic syndrome in man. Thus findings on paired-associate learning differ in autistic and amnesic subjects, but findings on recall, recognition and cued recall are comparable. A possible parallel between autism and amnesia is discussed.

5.
Two databases of Spanish surface word forms are presented. Surface word forms are words considered as orthographically or phonologically specified without reference to their meaning or syntactic category. The databases are based on the productive written vocabulary of children between the ages of 6 and 10 years. Statistical and structural information is presented concerning surface word-form frequency, consonant-vowel (CV) structure, number of syllables, syllables, syllable CV structure, and subsyllabic units. LEX I was intended to aid in the study of reading processes. Entries were orthographic surface word forms; words were divided in their components following orthographic criteria. LEX II was designed for spoken language research. Accordingly, words were transcribed phonologically and phonological criteria were applied in extracting the internal units. Information about stress location was also provided. Together, LEX I and LEX II represent a useful tool for psycholinguists interested in the study of people acquiring Spanish as a first or foreign language and of Spanish-speaking populations in general.

6.
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating feedback consistency of rhymes. The present lexical decision study, done in English, manipulated the spelling of individual vowels within consistent rhymes. Participants recognized words with consistent rhymes where the vowel has the most typical spelling (e.g., lobe) faster than words with consistent rhymes where the vowel has a less typical spelling (e.g., loaf). The present study extends previous literature by showing that auditory word recognition is affected by orthographic regularities at different grain sizes, just like written word recognition and spelling. The theoretical and methodological implications for future research in spoken word recognition are discussed.

7.
Two experiments tested the hypothesis that children at the left of the distribution of right minus left (R-L) hand skill are at risk for poor phonological processing. In the first experiment, individual assessments of spoken rhyme awareness were made in 5- to 8-year-olds. In the second experiment, a group test of word order memory for spoken confusable and nonconfusable items was given to 9- to 11-year-olds. Evidence of poorer phonological processing in those at the left of the R-L distribution was found in both experiments. Rhyme judgements and word order memory were both associated with reading ability, but reading did not interact with effects for hand skill. A group test of homophone comprehension was given to the same children tested for word order memory. Homophone errors did not differ between hand skill groups, showing a dissociation between the two tasks for R-L hand difference. The findings suggest that some risks for phonological processing could be due to normal genetic variation associated with the hypothesized rs + gene (Annett, 1972, 1978).

8.
We explore whether children's willingness to produce unfamiliar sequences of words reflects their experience with similar lexical patterns. We asked children to repeat unfamiliar sequences that were identical to familiar phrases (e.g., A piece of toast) but for one word (e.g., a novel instantiation of A piece of X, like A piece of brick). We explore two predictions, motivated by findings in the statistical learning literature: that children are likely to have detected an opportunity to substitute alternative words into the final position of a four-word sequence if (a) it is difficult to predict the fourth word given the first three words and (b) the words observed in the final position are distributionally similar. Twenty-eight 2-year-olds and thirty-one 3-year-olds were significantly more likely to correctly repeat unfamiliar variants of patterns for which these properties held. The results illustrate how children's developing language is shaped by linguistic experience.

9.
The ability to create temporary binding representations of information from different sources in working memory has recently been found to relate to the development of monolingual word recognition in children. The current study explored this possible relationship in an adult word-learning context. We assessed whether the relationship between cross-modal working memory binding and lexical development would be observed in the learning of associations between unfamiliar spoken words and their semantic referents, and whether it would vary across experimental conditions in first- and second-language word learning. A group of English monolinguals were recruited to learn 24 spoken disyllabic Mandarin Chinese words in association with either familiar or novel objects as semantic referents. They also completed a working memory task in which their ability to temporarily bind auditory-verbal and visual information was measured. Participants’ performance on this task was uniquely linked to their learning and retention of words for both novel objects and for familiar objects. This suggests that, at least for spoken language, cross-modal working memory binding might play a similar role in second language-like situations (i.e., learning new words for familiar objects) and in more native-like situations (i.e., learning new words for novel objects). Our findings provide new evidence for the role of cross-modal working memory binding in L1 word learning and further indicate that early stages of picture-based word learning in L2 might rely on similar cognitive processes as in L1.

10.
Introduction

Subjects for the study were 65 children whose ages ranged from 5.6 to 9.5 years. Tasks from the Downing‐Oliver (1973‐1974) study were used to explore children's knowledge of spoken words and the relationship between this knowledge and reading achievement. Five examples from each of eight classes of auditory stimuli were presented to the child who indicated whether or not each example heard was a spoken word. Results from the investigation led the researcher to the following conclusions: (1) the average child's knowledge of a spoken word improves with age; and (2) significant relationships exist between children's knowledge of spoken words and reading achievement.


11.
The aim of the present study was to investigate the effects of aging on both spoken and written word production by using analogous tasks. To do so, a phonological neighbor generation task (Experiment 1) and an orthographic neighbor generation task (Experiment 2) were designed. In both tasks, young and older participants were given a word and had to generate as many words as they could think of by changing one phoneme in the target word (Experiment 1) or one letter in the target word (Experiment 2). The data of the two experiments were consistent, showing that the older adults generated fewer lexical neighbors and made more errors than the young adults. For both groups, the number of words produced, as well as their lexical frequency, decreased as a function of time. These data strongly support the assumption of a symmetrical age-related decline in the transmission of activation within the phonological and orthographic systems.

12.
Onsets and rimes as units of spoken syllables: evidence from children
The effects of syllable structure on the development of phonemic analysis and reading skills were examined in four experiments. The experiments were motivated by theories that syllables consist of an onset (initial consonant or cluster) and a rime (vowel and any following consonants). Experiment 1 provided behavioral support for the syllable structure model by showing that 8-year-olds more easily learned word games that treated onsets and rimes as units than games that did not. Further support for the cohesiveness of the onset came from Experiments 2 and 3, which found that 4- and 5-year-olds less easily recognized a spoken or printed consonant target when it was the first phoneme of a cluster than when it was a singleton. Experiment 4 extended these results to printed words by showing that consonant-consonant-vowel nonsense syllables were more difficult for beginning readers to decode than consonant-vowel-consonant syllables.

13.
The effects of perceptual adjustments to voice information on the perception of isolated spoken words were examined. In two experiments, spoken target words were preceded or followed within a trial by a neutral word spoken in the same voice as the target or in a different voice. Overall, words were reproduced more accurately on trials on which the voice of the neutral word matched the voice of the spoken target word, suggesting that perceptual adjustments to voice interfere with word processing. This result, however, was mediated by selective attention to voice. The results provide further evidence of a close processing relationship between perceptual adjustments to voice and spoken word recognition.

14.
Children's understanding of counting
K. Wynn, Cognition, 1990, 36(2), 155-193
This study examines the abstractness of children's mental representation of counting, and their understanding that the last number word used in a count tells how many items there are (the cardinal word principle). In the first experiment, twenty-four 2- and 3-year-olds counted objects, actions, and sounds. Children counted objects best, but most showed some ability to generalize their counting to actions and sounds, suggesting that at a very young age, children begin to develop an abstract, generalizable mental representation of the counting routine. However, when asked "how many" following counting, only older children (mean age 3.6) gave the last number word used in the count a majority of the time, suggesting that the younger children did not understand the cardinal word principle. In the second experiment (the "give-a-number" task), the same children were asked to give a puppet one, two, three, five, and six items from a pile. The older children counted the items, showing a clear understanding of the cardinal word principle. The younger children succeeded only at giving one and sometimes two items, and never used counting to solve the task. A comparison of individual children's performance across the "how-many" and "give-a-number" tasks shows strong within-child consistency, indicating that children learn the cardinal word principle at roughly 3 1/2 years of age. In the third experiment, 18 2- and 3-year-olds were asked several times for one, two, three, five, and six items, to determine the largest numerosity at which each child could succeed consistently. Results indicate that children learn the meanings of smaller number words before larger ones within their counting range, up to the number three or four. They then learn the cardinal word principle at roughly 3 1/2 years of age, and perform a general induction over this knowledge to acquire the meanings of all the number words within their counting range.

15.
Eight retarded adolescents were trained to select one (a trained S+) of two visual stimuli in response to a spoken word (a trained word). Two different visual stimuli alternated randomly as the S-. To determine if the spoken word was merely a temporal discriminative stimulus for when to respond, or if it also specified which visual stimulus to select, the subjects were given intermittent presentations of untrained (novel) spoken words. All subjects consistently selected the trained S+ in response to the trained spoken word and selected the previous S- in response to the untrained spoken words. It was hypothesized that the subjects were responding away from the trained S+ in response to untrained spoken words, and that control by untrained spoken words would not be observed when the trained S+ was not present. The two visual S- stimuli selected on trials of untrained spoken words were presented simultaneously. The untrained spoken words presented on these trials no longer controlled stimulus selections for seven subjects. The results supported the hypothesis that previous control by spoken words was due to responding away from the trained S+ in response to untrained spoken words.

16.
We studied the influence of word frequency and orthographic depth on the interaction of orthographic and phonetic information in word perception. Native speakers of English and Serbo-Croatian were presented with simultaneous printed and spoken verbal stimuli and had to decide whether they were equivalent. Decision reaction time was measured in three experimental conditions: Clear print and clear speech, degraded print and clear speech, and clear print and degraded speech. Within each language, the effects of visual and auditory degradation were measured, relative to the undegraded presentation. Both effects of degradation were much stronger in English than in Serbo-Croatian. Moreover, they were the same for high- and low-frequency words in both languages. These results can be accounted for by a parallel interactive processing model that assumes lateral connections between the orthographic and phonological systems at all of their levels. The structure of these lateral connections is independent of word frequency and is determined by the relationship between spelling and phonology in the language: simple isomorphic connections between graphemes and phonemes in Serbo-Croatian, but more complex, many-to-one, connections in English.

17.
The purpose of this study was to determine the effects of self-graphing on the writing of three fourth-grade students with high-incidence disabilities. Measures of written expression included total number of words written and number of correct word sequences. During intervention, students self-graphed their total number of words written in response to a timed story starter. A functional relationship was found between the self-graphing intervention and both the total words written and the number of correct word sequences. Implications for future research and practice are discussed.

18.
Syntax allows human beings to build an infinite number of new sentences from a finite stock of words. Because toddlers typically utter only one or two words at a time, they have been thought to have no syntax. Using event-related potentials (ERPs), we demonstrated that 2-year-olds do compute syntactic structure when listening to spoken sentences. We observed an early left-lateralized brain response when an expected verb was incorrectly replaced by a noun (or vice versa). Thus, toddlers build on-line expectations as to the syntactic category of the next word in a sentence. In addition, the response topography was different for nouns and verbs, suggesting that different neural networks already underlie noun and verb processing in toddlers, as they do in adults.

19.
In three experiments, the processing of words that had the same overall number of neighbors but varied in the spread of the neighborhood (i.e., the number of individual phonemes that could be changed to form real words) was examined. In an auditory lexical decision task, a naming task, and a same-different task, words in which changes at only two phoneme positions formed neighbors were responded to more quickly than words in which changes at all three phoneme positions formed neighbors. Additional analyses ruled out an account based on the computationally derived uniqueness points of the words. Although previous studies (e.g., Luce & Pisoni, 1998) have shown that the number of phonological neighbors influences spoken word recognition, the present results show that the nature of the relationship of the neighbors to the target word, as measured by the spread of the neighborhood, also influences spoken word recognition. The implications of this result for models of spoken word recognition are discussed.

20.
Effects of presentation modality and response format were investigated using visual and auditory versions of the word stem completion task. Study presentation conditions (visual, auditory, non-studied) were manipulated within participants, while test conditions (visual/written, visual/spoken, auditory/written, auditory/spoken, recall-only) were manipulated between participants. Results showed evidence for same modality and cross modality priming on all four word stem completion tasks. Words from the visual study list led to comparable levels of priming across all test conditions. In contrast, words from the auditory study list led to relatively low levels of priming in the visual/written test condition and high levels of priming in the auditory/spoken test condition. Response format was found to influence priming performance following auditory study in particular. The findings confirm and extend previous research and suggest that, for implicit memory studies that require auditory presentation, it may be especially beneficial to use spoken rather than written responses.
