Similar Articles
 20 similar articles found (search time: 31 ms)
1.
The linguistic input to language learning is usually thought to consist of simple strings of words. We argue that input must also include information about how words group into syntactic phrases. Natural languages regularly incorporate correlated cues to phrase structure, such as prosody, function words, and concord morphology. The claim that such cues are necessary for successful acquisition of syntax was tested in a series of miniature language learning experiments with adult subjects. In each experiment, when input included some cue marking the phrase structure of sentences, subjects were entirely successful in learning syntax; in contrast, when input lacked such a cue (but was otherwise identical), subjects failed to learn significant portions of syntax. Cues to phrase structure appear to facilitate learning by indicating to the learner those domains within which distributional analyses may be most efficiently pursued, thereby reducing the amount and complexity of required input data. More complex target systems place greater premiums on efficient analysis; hence, such cues may be even more crucial for acquisition of natural language syntax. We suggest that the finding that phrase structure cues are a necessary aspect of language input reflects the limited capacities of human language learners; languages may incorporate structural cues in part to circumvent such limitations and ensure successful acquisition.

2.
杨群, 张积家, 范丛慧. Acta Psychologica Sinica (心理学报), 2021, 53(7): 746-757
Lexical ambiguity is a universal phenomenon in language. Chinese contains many types of ambiguous words, which are a major source of difficulty for ethnic-minority students learning Chinese. Two experiments examined the context facilitation effect and the inhibition effect in Chinese ambiguous-word resolution among Uyghur and Han college students under different processing-time conditions. Both groups showed a context facilitation effect, but under the short processing-time condition the effect was significantly larger for Han students than for Uyghur students, whereas under the long processing-time condition the two groups did not differ significantly. Under the short processing-time condition, only the Han students could effectively inhibit interference from irrelevant information; under the long processing-time condition, both groups could do so. Overall, the study shows that in Chinese ambiguous-word resolution, as processing time increases, Uyghur students' context facilitation effect and inhibition of irrelevant information can reach levels comparable to those of Han students.

3.
Three experiments were conducted to investigate the influence of contextual constraint on lexical ambiguity resolution in the cerebral hemispheres. A cross-modal priming variant of the divided visual field task was utilized in which subjects heard sentences containing homonyms and made lexical decisions to targets semantically related to dominant and subordinate meanings. Experiment 1 showed priming in both hemispheres of dominant meanings for homonyms embedded in neutral sentence contexts. Experiment 2 showed priming in both hemispheres of dominant and subordinate meanings for homonyms embedded in sentence contexts that biased a central semantic feature of the subordinate meaning. Experiment 3 showed priming of dominant meanings in the left hemisphere (LH), and priming of the subordinate meaning in the right hemisphere (RH) for homonyms embedded in sentences that biased a peripheral semantic feature of the subordinate meaning. These results are consistent with a context-sensitive model of language processing that incorporates differential sensitivity to semantic relationships in the cerebral hemispheres.

4.
In the present study, we investigated whether patterns of letter detection for function and content words in texts are affected by the familiarity of the material being read. In Experiment 1, subjects searched for target letters in sentences that had been rehearsed prior to performing the letter detection on them as well as on unfamiliar sentences. In Experiment 2, subjects searched for target letters in highly familiar verses (e.g., nursery rhymes) and in unfamiliar sentences that were matched to the familiar verses. A disadvantage in letter detection for function as compared with content words consistently found with unfamiliar passages was reduced significantly with the familiar material in both experiments. Specifically, letter detection for content words grew worse in familiar text, but letter detection for function words showed a contrasting modest, though nonsignificant, improvement. The results are consistent with the proposition that in very familiar texts, parafoveal analysis permits the identification of generally less familiar content words. Simultaneously, the normal pattern of weighing the structure and content elements of text changes so that more fixations on function words occur than when one is reading unfamiliar texts.

5.
It is assumed that linguistic symbols must be grounded in perceptual information to attain meaning, because the sound of a word in a language has an arbitrary relation with its referent. This paper demonstrates that a strong arbitrariness claim should be reconsidered. In a computational study, we showed that one phonological feature (nasals in the beginning of a word) predicted negative valence in three European languages (English, Dutch, and German) and positive valence in Chinese. In three experiments, we tested whether participants used this feature in estimating the valence of a word. In Experiment 1, Chinese and Dutch participants rated the valence of written valence-neutral words, with Chinese participants rating the nasal-first neutral-valence words as more positive and the Dutch participants rating them as more negative. In Experiment 2, Chinese (and Dutch) participants rated the valence of Dutch (and Chinese) written valence-neutral words without being able to understand the meaning of these words. The patterns replicated the valence patterns from Experiment 1. When the written words from Experiment 2 were transformed into spoken words, results in Experiment 3 again showed that participants estimated the valence of words on the basis of the sound of the word. The computational study and psycholinguistic experiments indicated that language users can bootstrap meaning from the sound of a word.

6.
The perceptual complexity of lexically ambiguous and unambiguous sentences was compared in three experiments. In Experiment 1, the report of ambiguous words from rapidly presented ambiguous sentences was worse than the report of corresponding unambiguous words from unambiguous sentences. Results of Experiment 2 showed that the effect was not reduced by the presence of prior biasing context within the sentence. Experiment 3 repeated the finding with a sentence meaning classification task. It was concluded that both meanings of a lexically ambiguous sentence must be computed, even when prior context makes one meaning more plausible than the other.

7.
The experiments reported here were designed to investigate the effects on linguistic performance of varying the interaction between the syntactic form of sentences and their semantic function. The experimental task required subjects to decide whether pairs of sentences had the same or a different meaning. The results of Experiment I confirmed the prediction that the times taken to decide about pairs of affirmative and negative sentences would be shorter when the negative was performing its natural function of signalling a change of meaning. To a lesser extent, performance on pairs of active and passive sentences was facilitated when the two sentences meant the same thing. These results were found both with “meaningful” sentence material and with abstract x-y sentences. A second experiment provided a control for the possibility that the results were due to syntactic derivational factors rather than to the semantic function interaction.

8.
Stroop effects in bilingual translation
In two experiments, bilinguals proficient in English and Spanish translated words from one language to the other. In each experiment, following the target word to be translated, distractor words were presented after a short (200-msec) or long (500-msec) stimulus onset asynchrony. In Experiment 1, the distractor words appeared in the language of production and were related to the meaning or form of the spoken translation. The results replicated past studies in demonstrating that semantically related distractor words produced Stroop-type interference, whereas form-related distractor words produced facilitation. In Experiment 2, the distractors appeared in the language of input and were related to the meaning or form of the target word itself. In contrast to the results of Experiment 1, there were only marginal effects of the distractors on translation performance. These results suggest that language cues related to the nature of the input in translation may serve to reduce competition among lexical competitors during lexicalization. The contrast between these results and those in bilingual picture-word interference studies provides important constraints for models of language production and for claims about the locus of language selection.

9.
We studied how Dutch children learned English as a second language (L2) in the classroom. Learners at different levels of L2 proficiency recognized words under different task conditions. Beginning learners in primary school (fifth and sixth grades) and more advanced learners in secondary school (seventh and ninth grades) made lexical decisions on words that are similar for English and Dutch in both meaning and form (“cognates”) or only in form (“false friends”). Cognates were processed faster than matched control words by all participant groups in an English lexical decision task (Experiment 1) but not in a Dutch lexical decision task (Experiment 2). An English lexical decision task that mixed cognates and false friends (Experiment 3) led to consistently longer reaction times for both item types relative to controls. Thus, children in the early stages of learning an L2 already activate word candidates in both of their languages (language-nonselective access) and respond differently to cognates in the presence or absence of false friends in the stimulus list.

10.
Two experiments investigated priming in word association, an implicit memory task. In the study phase of Experiment 1, semantically ambiguous target words were presented in sentences that biased their interpretation. The appropriate interpretation of the target was either congruent or incongruent with the cue presented in a subsequent word association task. Priming (i.e., a higher proportion of target responses relative to a nonstudied baseline) was obtained for the congruent condition, but not for the incongruent condition. In Experiment 2, study sentences emphasized particular meaning aspects of nonambiguous targets. The word association task showed a higher proportion of target responses for targets studied in the more congruent sentence context than for targets studied in the less congruent sentence context. These results indicate that priming in word association depends largely on the storage of information relating the cue and target.

11.
Changes to our everyday activities mean that adult language users need to learn new meanings for previously unambiguous words. For example, we need to learn that a "tweet" is not only the sound a bird makes, but also a short message on a social networking site. In these experiments, adult participants learned new fictional meanings for words with a single dominant meaning (e.g., "ant") by reading paragraphs that described these novel meanings. Explicit recall of these meanings was significantly better when there was a strong semantic relationship between the novel meaning and the existing meaning. This relatedness effect emerged after relatively brief exposure to the meanings (Experiment 1), but it persisted when training was extended across 7 days (Experiment 2) and when semantically demanding tasks were used during this extended training (Experiment 3). A lexical decision task was used to assess the impact of learning on online recognition. In Experiment 3, participants responded more quickly to words whose new meaning was semantically related than to those with an unrelated meaning. This result is consistent with earlier studies showing an effect of meaning relatedness on lexical decision, and it indicates that these newly acquired meanings become integrated with participants' preexisting knowledge about the meanings of words.

12.
Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker’s dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialect identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

13.
In two experiments we show that (a) distracting stimuli are inhibited after intention formation, (b) this inhibition is episodic rather than semantic in nature, and (c) inhibition of distracting stimuli is terminated once intentions are completed. In both experiments participants were asked to form an intention to press the space bar in response to six cues (i.e. intention cues). After intention formation we measured accessibility of intention cues, of words that are semantically related to the intention cues (i.e. related cues) and of semantically unrelated words (i.e. control cues). In Experiment 1, we obtained slower responses towards related cues compared with both intention cues and control cues in a recognition task, but not in a lexical decision task. In Experiment 2, we showed that inhibition of related cues is terminated after intention completion. Together these results are consistent with theorizing that inhibition of distracting (i.e. related) stimuli is functional for completing previously formed intentions, and give insight into the nature of inhibitory processes during goal pursuit.

14.
The goal of this study was to explore the ability to discriminate languages using the visual correlates of speech (i.e., speech-reading). Participants were presented with silent video clips of an actor pronouncing two sentences (in Catalan and/or Spanish) and were asked to judge whether the sentences were in the same language or in different languages. Our results established that Spanish-Catalan bilingual speakers could discriminate running speech from their two languages on the basis of visual cues alone (Experiment 1). However, we found that this ability was critically restricted by linguistic experience, since Italian and English speakers who were unfamiliar with the test languages could not successfully discriminate the stimuli (Experiment 2). A test of Spanish monolingual speakers revealed that knowledge of only one of the two test languages was sufficient to achieve the discrimination, although at a lower level of accuracy than that seen in bilingual speakers (Experiment 3). Finally, we evaluated the ability to identify the language by speech-reading particularly distinctive words (Experiment 4). The results obtained are in accord with recent proposals arguing that the visual speech signal is rich in informational content, above and beyond what traditional accounts based solely on visemic confusion matrices would predict.

15.
Five experiments were designed to examine whether subjects attend to different aspects of meaning for familiar and unfamiliar words. In Experiments 1–3, subjects gave free associations to high- and low-familiarity words from the same taxonomic category (e.g., seltzer:sarsparilla; Experiment 1), from the same noun synonym set (e.g., baby:neonate; Experiment 2), and from the same verb synonym set (e.g., abscond:escape; Experiment 3). In Experiments 4 and 5, subjects first read a context sentence containing the stimulus word and then gave associations; stimuli were novel words or either high- or low-familiarity nouns. Low-familiarity and novel words elicited more nonsemantically based responses (e.g., engram:graham) than did high-familiarity words. Of the responses semantically related to the stimulus, low-familiarity and novel words elicited a higher proportion of definitional responses [category (e.g., sarsparilla:soda), synonym (e.g., neonate:newborn), and coordinate (e.g., armoire:dresser)], whereas high-familiarity stimuli elicited a higher proportion of event-based responses [thematic (e.g., seltzer:glass) and noun:verb (e.g., baby:cry)]. Unfamiliar words appear to elicit a shift of attentional resources from relations useful in understanding the message to relations useful in understanding the meaning of the unfamiliar word.

16.
In two experiments, we investigated the relationship between reading ability and linguistic knowledge in adults. The results from Experiment 1 showed that good comprehenders performed more accurately than average comprehenders in a syntactic-judgment task that required them to decide whether pairs of words served the same grammatical function in sentences. By contrast, the reader groups performed similarly when required to make semantic judgments about whether pairs of words were related in meaning. In Experiment 2, individuals were classified according to comprehension level and reading speed. Good comprehenders again performed more accurately than average comprehenders in the syntactic task but not the semantic task. We argued that differences in form-class knowledge could be associated with corresponding differences in syntactic-processing efficiency, and thus with variation in reading-comprehension skill generally.

17.
The idea that subjects often use imagery to discriminate semantically similar sentences was tested in three experiments. In the first experiment, subjects heard subject-verb-object sentences in the context of either a comprehension task or an image-generation task. Their memory for the sentences was tested using a two-alternative forced-choice recognition test in which different types of distractor sentence were used. A sentence semantically similar to the target sentence was one type; a sentence with the same subject and object nouns as the target sentence, but dissimilar in meaning, was another type; and a sentence similar in meaning to one of the stimulus sentences, but not to the target sentence, was a third type. The results showed that the image-generation instructions enhanced later recognition performance, but only for semantically similar test items. A second experiment showed that this finding only holds for high-imagery sentences containing concrete noun concepts. A third experiment demonstrated that the enhanced recognition performance could not be accounted for in terms of a semantic model of test-item discrimination. Collectively, the results were interpreted as providing evidence for the notion that subjects discriminate the semantically similar test items by elaborating the sentence encoding through image processing.

18.
This study examined how skilled Japanese readers activate semantic information when reading kanji compound words at both the lexical and sentence levels. Experiment 1 used a lexical decision task for two-kanji compound words and nonwords. When nonwords were composed of kanji that were semantically similar to the kanji of real words, reaction times were longer and error rates were higher than when nonwords had kanji that were not semantically similar. Experiment 2 used a proofreading task (detection of kanji miscombinations) for the same two-kanji compound words and nonwords at the sentence level. In this task, semantically similar nonwords were detected faster than dissimilar nonwords, but error rates were much higher for the semantically similar nonwords. Experiment 3 used a semantic decision task for sentences with the same two-kanji compound words and nonwords. It took longer to detect semantically similar nonwords than dissimilar nonwords. This indicates that semantic involvement in the processing of Japanese kanji produces different effects, depending on whether this processing is done at the lexical or sentence level, which in turn is related to where the reader's attention lies.

19.
Two experiments were carried out in order to investigate how the linguistic context (in the form of a sentence) facilitates the interpretation of unambiguous words. Experiment I established that if a sentential context as a whole is sufficient to evoke an inference that calls to mind a particular aspect of a word's meaning, the presence of a verb with appropriate selectional restrictions does not enhance the process. Experiment II showed that when no other cues are provided within a sentence, both verbs and adjectives are effective in enhancing a specific aspect of the meaning of a word. These findings were taken to support the hypothesis that in understanding a sentence, people instantiate particular aspects of the meanings of words in order to construct specific interpretations, and linguistic context guides this process of selection.

20.
Cimpian A, Markman EM. Cognition, 2008, 107(1): 19-53
Sentences that refer to categories - generic sentences (e.g., "Dogs are friendly") - are frequent in speech addressed to young children and constitute an important means of knowledge transmission. However, detecting generic meaning may be challenging for young children, since it requires attention to a multitude of morphosyntactic, semantic, and pragmatic cues. The first three experiments tested whether 3- and 4-year-olds use (a) the immediate linguistic context, (b) their previous knowledge, and (c) the social context to determine whether an utterance with ambiguous scope (e.g., "They are afraid of mice", spoken while pointing to 2 birds) is generic. Four-year-olds were able to take advantage of all the cues provided, but 3-year-olds were sensitive only to the first two. In Experiment 4, we tested the relative strength of linguistic-context cues and previous-knowledge cues by putting them in conflict; in this task, 4-year-olds, but not 3-year-olds, preferred to base their interpretations on the explicit noun phrase cues from the linguistic context. These studies indicate that, from early on, children can use contextual and semantic information to construe sentences as generic, thus taking advantage of the category knowledge conveyed in these sentences.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号