Similar articles
20 similar articles found (search time: 31 ms)
1.
Two experiments are reported examining the relationship between lexical and syntactic processing during language comprehension, combining techniques common to the on-line study of syntactic ambiguity resolution with priming techniques common to the study of lexical processing. By manipulating grammatical properties of lexical primes, we explore how lexically based knowledge is activated and guides combinatory sentence processing. In particular, we find that nouns (like verbs, see Trueswell & Kim, 1998) can activate detailed lexically specific syntactic information and that these representations guide the resolution of relevant syntactic ambiguities pertaining to verb argument structure. These findings suggest that certain principles of knowledge representation common to theories of lexical knowledge—such as overlapping and distributed representations—also characterize grammatical knowledge. Additionally, observations from an auditory comprehension study suggest similar conclusions about the lexical nature of parsing in spoken language comprehension. They also suggest that thematic role and syntactic preferences are activated during word recognition and that both influence combinatory processing.

2.
Prior eye-tracking studies of spoken sentence comprehension have found that the presence of two potential referents, e.g., two frogs, can guide listeners toward a Modifier interpretation of Put the frog on the napkin… despite strong lexical biases associated with Put that support a Goal interpretation of the temporary ambiguity (Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632–1634; Trueswell, J. C., Sekerina, I., Hill, N. M. & Logrip, M. L. (1999). The kindergarten-path effect: Studying on-line sentence processing in young children. Cognition, 73, 89–134). This pattern is not expected under constraint-based parsing theories: cue conflict between the lexical evidence (which supports the Goal analysis) and the visuo-contextual evidence (which supports the Modifier analysis) should result in uncertainty about the intended analysis and partial consideration of the Goal analysis. We reexamined these put studies (Experiment 1) by introducing a response time-constraint and a spatial contrast between competing referents (a frog on a napkin vs. a frog in a bowl). If listeners immediately interpret on the… as the start of a restrictive modifier, then their eye movements should rapidly converge on the intended referent (the frog on something). However, listeners showed this pattern only when the phrase was unambiguously a Modifier (Put the frog that’s on the…). Syntactically ambiguous trials resulted in transient consideration of the Competitor animal (the frog in something). A reading study was also run on the same individuals (Experiment 2) and performance was compared between the two experiments. Those individuals who relied heavily on lexical biases to resolve a complement ambiguity in reading (The man heard/realized the story had been…) showed increased sensitivity to both lexical and contextual constraints in the put-task; i.e., increased consideration of the Goal analysis in 1-Referent Scenes, but also adeptness at using spatial constraints of prepositions (in vs. on) to restrict referential alternatives in 2-Referent Scenes. These findings cross-validate visual world and reading methods and support multiple-constraint theories of sentence processing in which individuals differ in their sensitivity to lexical contingencies.

3.
Two striking contrasts currently exist in the sentence processing literature. First, whereas adult readers rely heavily on lexical information in the generation of syntactic alternatives, adult listeners in world-situated eye-gaze studies appear to allow referential evidence to override strong countervailing lexical biases (Tanenhaus, Spivey-Knowlton, Eberhard, and Sedivy, 1995). Second, in contrast to adults, children in similar listening studies fail to use this referential information and appear to rely exclusively on verb biases or perhaps syntactically based parsing principles (Trueswell, Sekerina, Hill, and Logrip, 1999). We explore these contrasts by fully crossing verb bias and referential manipulations in a study using the eye-gaze listening technique with adults (Experiment 1) and five-year-olds (Experiment 2). Results indicate that adults combine lexical and referential information to determine syntactic choice. Children rely exclusively on verb bias in their ultimate interpretation. However, their eye movements reveal an emerging sensitivity to referential constraints. The observed changes in information use over ontogenetic time best support a constraint-based lexicalist account of parsing development, which posits that highly reliable cues to structure, like lexical biases, will emerge earlier during development and more robustly than less reliable cues.

4.
Three retarded children were trained, using prompting and reinforcement procedures, to respond correctly to three categories of prepositional requests: “put the ___ next to the ___”, “put the ___ under the ___”, and “put the ___ on top of the ___”. Training sessions were alternated with probe sessions throughout the study. During training, a child was trained to respond to one request (e.g., “put the doll next to the cup”); during probing, the child was tested for generalization of this training to untrained requests. Responses to untrained requests were never prompted or reinforced. The results showed that, as requests from one category were trained, the children's responses to the untrained requests of that category became increasingly correct. As discriminations among two or more categories were trained, the children's responses to the untrained requests of those categories also became increasingly correct. Thus, the methods employed appear to be successful in training generalized receptive discrimination among prepositional categories and could possibly be used to train other generalized receptive language skills.

5.
Subject relative clauses (SRCs) are typically processed more easily than object relative clauses (ORCs), but this difference is diminished by an inanimate head-noun in semantically non-reversible ORCs (“The book that the boy is reading”). In two eye-tracking experiments, we investigated the influence of animacy on online processing of semantically reversible SRCs and ORCs using lexically inanimate items that were perceptually animate due to motion (e.g., “Where is the tractor that the cow is chasing”). In Experiment 1, 48 children (aged 4;5–6;4) and 32 adults listened to sentences that varied in the lexical animacy of the NP1 head-noun (Animate/Inanimate) and relative clause (RC) type (SRC/ORC) with an animate NP2 while viewing two images depicting opposite actions. As expected, inanimate head-nouns facilitated the correct interpretation of ORCs in children; however, online data revealed children were more likely to anticipate an SRC as the RC unfolded when an inanimate head-noun was used, suggesting processing was sensitive to perceptual animacy. In Experiment 2, we repeated our design with inanimate (rather than animate) NP2s (e.g., “Where is the tractor that the car is following”) to investigate whether our online findings were due to increased visual surprisal at an inanimate as agent, or to similarity-based interference. We again found greater anticipation for an SRC in the inanimate condition, supporting our surprisal hypothesis. Across the experiments, offline measures show that lexical animacy influenced children's interpretation of ORCs, whereas online measures reveal that as RCs unfolded, children were sensitive to the perceptual animacy of lexically inanimate NPs, which was not reflected in the offline data. Overall measures of syntactic comprehension, inhibitory control, and verbal short-term memory and working memory were not predictive of children's accuracy in RC interpretation, with the exception of a positive correlation with a standardized measure of syntactic comprehension in Experiment 1.

6.
The development of comprehension and production of spatial deictic terms “this/that”, “here/there”, “my/your”, and “in front of/behind” was investigated in the context of a hide-and-seek game. The first three contrasts are produced according to the speaker's perspective, so comprehension requires a nonegocentric viewpoint. The contrast “in front of/behind” is produced relative to the hearer, i.e., production is nonegocentric. The subjects were 39 children, ranging in age from 2.5 to 4.5 years, and 18 college undergraduates. The 2.5-year-old children were best at those contrasts which do not require a shift in perspective. The 3- and 4-year-old children were adept at switching to the speaker's perspective for comprehension of the terms requiring this shift, i.e., were nonegocentric. Four-year-olds were also capable of nonegocentric production of “in front of/behind”.

7.
Previous work has reported that children creatively make syntactic errors that are ungrammatical in their target language, but are grammatical in another language. One of the most well-known examples is medial wh-question errors in English-speaking children's wh-questions (e.g., What do you think who the cat chased? from Thornton, 1990). The evidence for this non-target-like structure in both production and comprehension has been taken to support the existence of innate, syntactic parameters that define all possible grammatical variation, which serve as a top-down constraint guiding children's language acquisition process. The present study reports new story-based production and comprehension experiments that challenge this interpretation. While we replicated previous observations of medial wh-question errors in children's sentence production (Experiment 1), we saw a reduction in evidence indicating that English-speaking children assign interpretations that conform to the medial wh-question pattern (Experiment 2). Crucially, we found no correlation between production and comprehension errors (Experiment 3). We suggest that these errors are the result of children's immature sentence production mechanisms rather than immature grammatical knowledge.

8.
9.
Children growing up in a dual-language environment have to constantly monitor the dynamic communicative context to determine what the speaker is trying to say and how to respond appropriately. Such self-generated efforts to monitor speakers' communicative needs may heighten children's sensitivity to, and allow them to make better use of, referential gestures to figure out a speaker's referential intent. In a series of studies, we explored monolingual and bilingual preschoolers' use of nonverbal referential gestures such as pointing and gaze direction to figure out a speaker's intent to refer. In Study 1, we found that 3- and 4-year-old bilingual children were better able than monolingual children to use referential gestures (e.g., gaze direction) to locate a hidden toy in the face of conflicting body-distal information (the experimenter was seated behind an empty box while the cue was directed at the correct box). Study 2 found that by 5 years of age, monolingual children had mastered this task. Study 3 established that the bilingual advantage can be found in children as young as 2 years old. Thus, the experience of growing up in a bilingual environment fosters the development of the understanding of referential intent.

10.
Cognitive Development, 2005, 20(2), 173-189
The present study examined developmental relations among understanding false belief, understanding “false” photographs, performance on the Dimensional Change Card Sort (DCCS), and performance on a picture–sentence verification task in 69 3–5-year-old children. Results showed that performance on the DCCS predicted performance on false belief questions even after controlling for children's age and verbal ability. However, neither performance on the picture–sentence verification task, nor performance on the “false” photograph task predicted false belief understanding. Implications of these findings are discussed in the context of suggestions that understanding false belief reflects a general understanding of representation, propositional negation, and the ability to use higher order rules.

11.
Research with adults has shown that ambiguous spoken sentences are resolved efficiently, exploiting multiple cues, including referential context, to select the intended meaning. Paradoxically, children appear to be insensitive to referential cues when resolving ambiguous sentences, relying instead on statistical properties intrinsic to the language such as verb biases. The possibility that children's insensitivity to referential context may be an artifact of the experimental design used in previous work was explored with 60 4- to 11-year-olds. An act-out task was designed to discourage children from making incorrect pragmatic inferences and to prevent premature and ballistic responses by enforcing delayed actions. Performance on this task was compared directly with the standard act-out task used in previous studies. The results suggest that young children (5 years) do not use contextual information, even under conditions designed to maximize their use of such cues, but that adult-like processing is evident by approximately 8 years of age. These results support and extend previous findings by Trueswell and colleagues (Cognition (1999), Vol. 73, pp. 89-134) and are consistent with a constraint-based learning account of children's linguistic development.

12.
The present experiment tested the hypothesis that development of syntactic comprehension through verbal modeling is enhanced by referent concreteness as a contextual influence. Young children heard a model narrate a series of events in passive form while the model either performed the corresponding activities, showed pictures portraying the same activities, or displayed no referential aids. In accord with prediction, verbal modeling with enactive referents produced higher levels of comprehension of passives than modeling with pictorial referents or modeling without referential aids. Modeling with pictorial referents and modeling without referents did not differ in overall efficacy. However, modeling alone produced results that were less consistent across different measures of comprehension. Children who lacked understanding of passives were more dependent on concrete referents than those who had some initial comprehension of the linguistic form. The results suggest that verbal modeling with pictorial referents and verbal modeling alone facilitate comprehension of passives, whereas verbal modeling with enactive referents promotes learning. Findings of a supplemental experiment reveal that the effects of verbal modeling on comprehension are enhanced when syntactic forms occur in a meaningful verbal context.

13.
Two experiments were conducted to investigate the role of phonemic activation in children's listening and reading comprehension. Phonemically confusing stories were presented in a listening comprehension task to kindergarten and second-grade children and in a reading comprehension task to second-grade children only. Rhymes induced phonemic confusion more consistently than did alliteratives in both the listening and reading tasks at both grade levels, suggesting that rhyme is inherently more confusing than alliteration, and furthermore, that phonemic information is activated in similar ways when children listen and when they read silently. Children's reading skill was also assessed to examine a possible relationship between reading skill and phonemic sensitivity, but no significant interactions between children's reading skill and their sensitivity to phonemic confusion were found in the reading task. In the listening task, all groups showed phonemic confusion in gist recall scores, but prereaders were less likely than readers to exhibit susceptibility to phonemic confusion in verbatim recall scores.

14.
This study deals with the comprehension of direct and nonconventional indirect directives by children 3 to 6 years old. It refers to both the philosophies of language and the psychological theories that favor the social and cognitive factors of language acquisition. Twenty-four 3- and 4-year-old children and twenty-four 5- and 6-year-old children performed a story completion task presented in the form of comic strips. The variables studied were children's age, linguistic nature of the utterance (direct or nonconventional indirect directives), and strength of the production context (strong or weak context). The results were (a) in contrast to most other studies, direct directives were better understood than indirect directives; (b) the comprehension of both indirect and direct directives depended on the production context of the utterance; (c) 5- and 6-year-old children performed better than 3- and 4-year-old children. The detailed results are discussed in terms of types of comprehension of directives linked to comprehension of the speaker's intention.

15.
The role of focusing 4-year-olds' attention on “feeling” or “looking” was examined in three experiments by testing predictions about children's memory for their interactions with an adult partner as they engaged in a collaborative task. Children made collages with an adult partner, and they were later asked to remember who placed the pieces on the collage. Children were more likely to claim they placed pieces actually placed by their partner (Experiments 1, 2, and 3), unless directed to think about how their partner looked when placing the partner's pieces (Experiments 1 and 3). False claims were observed after children were directed to think about how it would “feel” to perform the actions, whether motoric instructions were focused on the self (Experiment 2, N = 48) or partner (Experiment 1, N = 40, and Experiment 3, N = 24). Furthermore, false claims (referred to as I did it errors) were positively associated with accurate collage memory (Experiment 3). These findings suggest that adopting a perspective during encoding that involves “feeling” movements—whether focused on the self or partner—plays an important role in children's memory for collaboration (in this context, memory for contributions made by children or their adult partners to the completion of a collage). A focus on “feeling” may be a way to “enter into” the experiences of another, promoting anticipation and recoding, which may lead to better learning in both collaborative and non-collaborative contexts.

16.
This study was designed to examine the relevance of some cognitive skills to Chinese reading and to identify those that distinguish readers of different proficiency levels. Third-, fourth-, and fifth-grade children were given two sets of tests, the Chinese Reading Proficiency Test and five component skill tests. Skills in word recognition and in comprehension were examined through tasks measuring component detection, lexical coding, memory for gist, knowledge of syntax, and use of context. Subjects' responses on the knowledge-of-syntax task and the lexical coding task were found to be most effective in predicting reading proficiency of the third graders, but not that of the fourth- and fifth-grade children. Instead, the use-of-context task was the best predictor of the older children's reading proficiency.

17.
Two studies examined young children's comprehension and production of representational drawings across and within 2 socioeconomic strata (SES). Participants were 130 middle-SES (MSES) and low-SES (LSES) Argentine children, from 30 to 60 months old, given a task with 2 phases, production and comprehension. The production phase assessed free drawing and drawings from simple 3-dimensional objects (model drawing); the comprehension phase assessed children's understanding of an adult's line drawings of the objects. MSES children solved the comprehension phase of the task within the studied age range; representational production emerged first in model drawing (42 months) and later in free drawing (48 months). The same developmental pathway was observed in LSES children but with a clear asynchrony in the age of onset of comprehension and production: Children understood the symbolic nature of drawings at 42 months old and the first representational drawings were found at 60 months old. These results provide empirical evidence that support the crucial influence of social experiences by organizing and constraining graphic development.

18.
Previously, researchers have relied on asking young children to plot a given number on a 0-to-10 number line to assess their mental representation of numbers 1 to 9. However, such a (“conventional”) number-to-position (N-P) task may underestimate the accuracy of young children's magnitude estimates and misrepresent the nature of their number representation. The purpose of this study was to compare young children's performance on the conventional N-P task and a “modified” N-P task that is more consistent with a discrete-quantity view of number and with measures of theoretically related mathematical competencies. Participants (n = 45), ranging in age from 4;0 to 6;0, were administered both versions of the N-P task twice during 4 sessions in 1 of 2 randomly assigned and counterbalanced orders. Between and within conditions, children were significantly more accurate on the modified version than on the conventional task. The results indicate that the conventional task, in particular, may be confusing and that several simple modifications can make it more understandable for young children. However, when performance on theoretically related number tasks is taken into account, both the conventional and the modified N-P tasks appeared to underestimate competence.

19.
20.
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8‐ to 10‐year‐old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture–speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., “pet” + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., “bird” + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post‐test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture–speech integration in children overlaps with—but is broader than—the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.
