Similar Documents
20 similar documents found.
1.
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate the cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmic differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to rely primarily on non-statistical cues when they begin segmenting speech.
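As a minimal sketch of one statistics-based strategy of the kind modeled in such work (not the authors' exact procedure), the snippet below estimates forward transitional probabilities over a syllable stream and posits a word boundary wherever the probability dips below a threshold. The toy corpus, syllable coding, and threshold are hypothetical.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate forward transitional probabilities P(next | current) from a syllable sequence."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, threshold=0.8):
    """Posit a word boundary wherever the transitional probability drops below the threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append(current)
            current = []
        current.append(b)
    words.append(current)
    return words

# Hypothetical child-directed input, coded as a flat syllable stream
# ("pretty baby pretty doggy baby doggy pretty baby doggy").
stream = ["pre", "tty", "ba", "by", "pre", "tty", "do", "ggy", "ba", "by",
          "do", "ggy", "pre", "tty", "ba", "by", "do", "ggy"]
print(segment(stream))
# In this toy stream, within-word transitions are perfectly predictable, so the
# posited boundaries fall at word edges; real corpora are far noisier.
```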

2.
The authors investigated the role of syntax in verb learning in Mandarin Chinese, which allows pervasive ellipsis of noun arguments. Two questions were investigated using the Beijing corpus on CHILDES: (a) Does the input to young children manifest syntactic-semantic correspondences as needed for acquiring verb meanings? (b) Are verbs presented in multiple frames? Over 6,000 child-directed utterances were parsed. Analyses revealed that transitive verbs, motion verbs, and internal/communication verbs were distinguished syntactically; moreover, the 60 target verbs were used in multiple sentence frames. These findings support a role for syntactic bootstrapping in Mandarin verb learning.
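A hedged sketch of the kind of frame-counting such a corpus analysis involves (not the authors' parser or coding scheme); the verbs and frame labels below are illustrative only.

```python
from collections import defaultdict

def frame_counts(verb_frame_pairs):
    """Tally the sentence frames each verb occurs in, given (verb, frame) pairs
    extracted from parsed child-directed utterances."""
    counts = defaultdict(lambda: defaultdict(int))
    for verb, frame in verb_frame_pairs:
        counts[verb][frame] += 1
    return counts

# Hypothetical parses; frame labels mark overt arguments, reflecting argument
# ellipsis ("V NP" = subject dropped, "V" = both arguments dropped).
parses = [
    ("chi", "NP V NP"), ("chi", "V NP"), ("chi", "V"),
    ("gei", "NP V NP NP"), ("gei", "V NP NP"),
    ("pao", "NP V"), ("pao", "V"),
]
for verb, frames in frame_counts(parses).items():
    print(verb, dict(frames), "distinct frames:", len(frames))
```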

3.
Lidz J, Gleitman H, Gleitman L. Cognition, 2003, 87(3): 151-178.
Studies under the heading "syntactic bootstrapping" have demonstrated that syntax guides young children's interpretations during verb learning. We evaluate two hypotheses concerning the origins of syntactic bootstrapping effects. The "universalist" view, holding that syntactic bootstrapping falls out from universal properties of the syntax-semantics mapping, is shown to be superior to the "emergentist" view, which holds that argument structure patterns emerge from a process of categorization and generalization over the input. These theories diverge in their predictions about a language in which syntactic structure is not the most reliable cue to a certain meaning. In Kannada, causative morphology is a better predictor of causative meaning than transitivity is. Hence, the emergentist view predicts that Kannada-speaking children will associate causative morphology (rather than transitive syntax) with causative meaning. The universalist theory, however, predicts the opposite pattern. Using an act-out task, we found that 3-year-old native speakers of Kannada associate argument number, and not morphological form, with causativity, supporting the universalist approach.

4.
Co-hyperintensionality, or hyperintensional equivalence, is a relation holding between two or more contents that can be substituted in a hyperintensional context salva veritate. I argue that two strategies used to provide criteria for co-hyperintensionality (appeal to some form of impossible worlds, or to structural or procedural equivalence of propositions) fail. I argue that there is no generalized notion of co-hyperintensionality that meets plausible desiderata, by showing that the opposite thesis leads to falsity. As a conclusion, I suggest taking co-hyperintensionality as a primitive, and I provide a general criterion of co-hyperintensionality whose content depends on each hyperintensional notion we aim to formalize.
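The substitution criterion the abstract appeals to can be stated schematically as follows; this is an informal reconstruction, not the paper's own formalism, with H a hyperintensional operator, φ ranging over sentential contexts, and v a valuation.

```latex
% Informal reconstruction (not the paper's formalism): contents A and B are
% co-hyperintensional relative to a hyperintensional operator H iff substituting
% one for the other inside H's scope never changes truth value.
\[
  A \approx_{H} B \;\iff\; \forall \varphi \,\bigl(\, v\bigl(H[\varphi(A)]\bigr) = v\bigl(H[\varphi(B)]\bigr) \,\bigr)
\]
```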

5.
Although little studied, whining is a vocal pattern that is both familiar and irritating to parents of preschool- and early school-age children. The current study employed multidimensional scaling to identify the crucial acoustic characteristics of whining speech by analysing participants' perceptions of its similarity to other types of speech (question, neutral speech, angry statement, demand, and boasting). We discovered not only that participants find whining speech more annoying than other forms of speech, but that it shares the salient acoustic characteristics found in motherese, namely increased pitch, slowed production, and exaggerated pitch contours. We think that this relationship is not random but may reflect the fact that the two forms of vocalization result from a similar accommodation to a universal human auditory sensitivity to this kind of prosody. Copyright © 2005 John Wiley & Sons, Ltd.
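For readers unfamiliar with the method, the sketch below shows how a multidimensional scaling solution could be computed from a matrix of pairwise dissimilarities among speech types. The matrix values and labels are invented for illustration, and the sketch uses scikit-learn's MDS, not the authors' analysis pipeline.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix among six speech types, as might be derived
# from listeners' pairwise similarity ratings (all values invented).
labels = ["whine", "question", "neutral", "angry", "demand", "boast"]
D = np.array([
    [0.0, 0.6, 0.8, 0.7, 0.5, 0.6],
    [0.6, 0.0, 0.4, 0.7, 0.6, 0.5],
    [0.8, 0.4, 0.0, 0.6, 0.5, 0.5],
    [0.7, 0.7, 0.6, 0.0, 0.3, 0.6],
    [0.5, 0.6, 0.5, 0.3, 0.0, 0.5],
    [0.6, 0.5, 0.5, 0.6, 0.5, 0.0],
])

# Two-dimensional MDS solution; the recovered axes can then be inspected for
# acoustic correlates such as mean pitch, speech rate, or pitch-contour range.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for name, (x, y) in zip(labels, coords):
    print(f"{name:>8s}  {x:+.2f}  {y:+.2f}")
```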

6.
7.
The aim of the current study was to establish whether feedback from a co-witness concerning their choice of suspect could influence an individual witness's certainty and other testimony-relevant judgements. Eighty-two university students and members of the general public viewed a film of a staged mugging in pairs and then made an identification of who they thought was the suspect from a culprit-absent line-up (i.e. identification parade). The participants were then required to tell their partner whom they had identified and to fill out a questionnaire with testimony-relevant questions (e.g. How good a view did you get of the person in the line-up?). When the pairs of participants agreed on their choice of suspect, their scores on the testimony-relevant questions tended to be higher than when the pairs did not agree. This shows that co-witnesses can influence each other's memory reports when giving each other feedback after the identification process. The implications of these findings are discussed. Copyright © 2006 John Wiley & Sons, Ltd.

8.
The paper reports two experiments with the head-turn preference method which provide evidence that German-learning infants recognize unstressed closed-class lexical elements in continuous speech already at 7 to 9 months, but not yet at 6 months. These findings support the view that even preverbal children are able to compute at least phonological representations for closed-class functional elements. They also suggest that these elements must be available to the language-learning mechanisms of the child from very early on, allowing the child to make use of the distributional properties of closed-class lexical elements for further top-down analysis of the linguistic input, e.g. segmentation and syntactic categorization.

9.
10.
The interplay between action and language is still not fully understood in terms of its relevance for early language development. Here, we investigated whether action imitation may be beneficial for first language acquisition. In a word-learning study, 24-, 30- and 36-month-old children (N = 96) learned the labels of different actions in one of two conditions: either the children just observed the experimenter producing the action (observation condition) or the children produced the action themselves (action condition). The results show that 36-month-olds learned the labels of the more complex actions in both conditions, whereas 30-month-olds learned the labels only in the action condition but not in the observation condition. These findings suggest that action imitation is beneficial for verb learning early in life.

11.
Young children have an overall preference for child-directed speech (CDS) over adult-directed speech (ADS), and its structural features are thought to facilitate language learning. Many studies have supported these findings, but less is known about processing of CDS at short, sub-second timescales. How do the moment-to-moment dynamics of CDS influence young children's attention and learning? In Study 1, we used hierarchical clustering to characterize patterns of pitch variability in a natural CDS corpus, which uncovered four main word-level contour shapes: 'fall', 'rise', 'hill', and 'valley'. In Study 2, we adapted a measure from adult attention research—pupil size synchrony—to quantify real-time attention to speech across participants, and found that toddlers showed higher synchrony to the dynamics of CDS than to ADS. Importantly, there were consistent differences in toddlers' attention when listening to the four word-level contour types. In Study 3, we found that pupil size synchrony during exposure to novel words predicted toddlers' learning at test. This suggests that the dynamics of pitch in CDS not only shape toddlers' attention but guide their learning of new words. By revealing a physiological response to the real-time dynamics of CDS, this investigation yields a new sub-second framework for understanding young children's engagement with one of the most important signals in their environment.
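One simple way to operationalize pupil-size synchrony (the paper's exact measure may differ) is the mean pairwise correlation of participants' pupil-diameter time series. The traces below are simulated, and the preprocessing (baseline correction, resampling to a common rate) is assumed to have been done already.

```python
from itertools import combinations
import numpy as np

def pupil_synchrony(traces):
    """Mean pairwise Pearson correlation across participants' pupil-diameter
    time series (one array per participant, aligned to the same stimulus)."""
    rs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(traces, 2)]
    return float(np.mean(rs))

# Simulated traces: three toddlers hearing the same utterance, each trace a
# shared stimulus-driven component plus participant-specific noise.
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 4 * np.pi, 200))
traces = [shared + 0.5 * rng.standard_normal(200) for _ in range(3)]
print(round(pupil_synchrony(traces), 2))
```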

12.
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).

13.
The aim of this study was to test the impact of an audible marker on the production of subject-verb agreements. Earlier studies have shown that educated French-speaking adults make subject-verb agreement errors when writing as soon as a secondary task demands their attention. One hypothesis is that these errors occur primarily because in French many of the written inflections of the verbal plural are silent. However, errors of the same type have been reported in spoken English: in configurations such as "the dog of the neighbours arrive(s)", the verb sometimes agrees with the noun closest to it rather than with the subject. The current experiment compares the production of subject-verb agreements in written French depending on whether the singular/plural opposition is audible (finit/finissent) or not (chante/chantent). After having changed the tense of the verb, adult subjects had to recall, in writing, sentences which had been read aloud to them and which shared the same start (La flamme de la bougie = the flame of the candle) but contained different verbs matched for semantic plausibility and frequency, and either possessing (éblouir = to blind) or not possessing (éclairer = to illuminate) an audible singular/plural opposition. The results show that the presence of an audible marker reduces the error frequency and makes the agreement easier to manage. A chronometric study suggests that it is the competition between concurrent markers (e.g., -e, -s, -ent) that causes difficulties with regular verbs and that this competition is resolved at the very last moment, at the point when the marker is transcribed.

14.
15.
16.
A theory of verb form use in the speech of agrammatic aphasics   Cited by: 2 (self-citations: 1, citations by others: 1)

17.
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture–speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., "pet" + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., "bird" + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture–speech integration in children overlaps with—but is broader than—the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.

18.
Peer interaction has been found to be conducive to learning in many settings. Knowledge co-construction (KCC) has been proposed as one explanatory mechanism. However, KCC is a theoretical construct that is too abstract to guide the development of instructional software that can support peer interaction. In this study, we present an extensive analysis of a corpus of peer dialogs that we collected in the domain of introductory Computer Science. We show that the notion of task initiative shifts correlates with both KCC and learning. Speakers take task initiative when they contribute new content that advances problem solving and that is not invited by their partner; if initiative shifts between the partners, it indicates that both contribute to problem solving. We found that task initiative shifts occur more frequently within KCC episodes than outside them. In addition, task initiative shifts within KCC episodes correlate with learning for low pre-testers, and total task initiative shifts correlate with learning for high pre-testers. Because recognizing task initiative shifts does not require as much deep knowledge as recognizing KCC, task initiative shifts are a potentially easier indicator of productive collaboration to model in instructional software that simulates a peer.
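As an illustration of how an instructional system might track this indicator (a sketch under assumed annotations, not the authors' coding scheme), the snippet below counts task initiative shifts from speaker-labeled turns that have been flagged for whether they carry task initiative.

```python
def count_initiative_shifts(turns):
    """Count shifts of task initiative between speakers, given a list of
    (speaker, has_task_initiative) pairs for consecutive dialog turns.
    A shift is counted whenever the holder of task initiative changes."""
    holder, shifts = None, 0
    for speaker, has_initiative in turns:
        if has_initiative:
            if holder is not None and speaker != holder:
                shifts += 1
            holder = speaker
    return shifts

# Hypothetical annotated dialog between peers A and B.
dialog = [("A", True), ("B", False), ("B", True),
          ("A", False), ("A", True), ("B", True)]
print(count_initiative_shifts(dialog))  # -> 3
```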

19.
The effects of perceptual learning of talker identity on the recognition of spoken words and sentences were investigated in three experiments. In each experiment, listeners were trained to learn a set of 10 talkers' voices and were then given an intelligibility test to assess the influence of learning the voices on the processing of the linguistic content of speech. In the first experiment, listeners learned voices from isolated words and were then tested with novel isolated words mixed in noise. The results showed that listeners who were given words produced by familiar talkers at test showed better identification performance than did listeners who were given words produced by unfamiliar talkers. In the second experiment, listeners learned novel voices from sentence-length utterances and were then presented with isolated words. The results showed that learning a talker's voice from sentences did not generalize well to identification of novel isolated words. In the third experiment, listeners learned voices from sentence-length utterances and were then given sentence-length utterances produced by familiar and unfamiliar talkers at test. We found that perceptual learning of novel voices from sentence-length utterances improved speech intelligibility for words in sentences. Generalization and transfer from voice learning to linguistic processing was found to be sensitive to the talker-specific information available during learning and test. These findings demonstrate that increased sensitivity to talker-specific information affects the perception of the linguistic properties of speech in isolated words and sentences.

20.
Two experiments are reported in which sentence-context facilitation of paired-associate learning was examined. The major purpose was to determine whether semantic changes in the verb connecting a pair of nouns affect the facilitation due to the verb connective. It was found that trial-to-trial changes in the connective had no effect, and facilitation was as great as when the verb remained the same on each trial. These results were interpreted in terms of imaginal coding in multitrial experiments.
