11.
Much previous work has suggested that word order preferences across languages can be explained by the dependency distance minimization constraint (Ferrer-i Cancho, 2008, 2015; Hawkins, 1994). Consistent with this claim, corpus studies have shown that the average distance between a head (e.g., verb) and its dependent (e.g., noun) tends to be short cross-linguistically (Ferrer-i Cancho, 2014; Futrell, Mahowald, & Gibson, 2015; Liu, Xu, & Liang, 2017). This implies that, on average, languages favor simpler structures over inefficient or complex ones. But a number of studies in psycholinguistics (Konieczny, 2000; Levy & Keller, 2013; Vasishth, Suckow, Lewis, & Kern, 2010) show that the comprehension system can adapt to the typological properties of a language, for example, verb-final order, leading to more complex structures, for example, longer linear distance between a head and its dependent. In this paper, we conduct a corpus study of 38 languages, each either Subject–Verb–Object (SVO) or Subject–Object–Verb (SOV), in order to investigate the role of word order typology in determining syntactic complexity. We present results aggregated across all dependency types, as well as for specific verbal (objects, indirect objects, and adjuncts) and nonverbal (nominal, adjectival, and adverbial) dependencies. The results suggest that dependency distance in a language is determined by its default word order and, crucially, by the direction of a dependency (whether the head precedes the dependent or follows it; e.g., whether the noun precedes the verb or follows it). In particular, we show that in SOV languages (e.g., Hindi, Korean) as well as SVO languages (e.g., English, Spanish), longer linear distance (measured as number of words) between head and dependent arises in structures that mirror the default word order of the language.
In addition to results on linear distance, we also investigate the influence of word order typology on hierarchical distance (HD; measured as the number of heads between head and dependent). The results for HD are similar to those for linear distance. At the same time, the influence of adaptability on HD appears weaker than on linear distance. In particular, the results show that most languages tend to avoid greater structural depth. Together, these results provide evidence for "limited adaptability" to the default word order preferences of a language. Our results support a large body of work in the processing literature that highlights the importance of linguistic exposure and its interaction with working memory constraints in determining sentence complexity. They also point to the possible role of other factors, such as the morphological richness of a language; a multifactor account of sentence complexity remains a promising area for future investigation.
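The two distance metrics above can be made concrete with a short sketch over a toy dependency parse. The encoding (1-based head indices, 0 = root) follows CoNLL-U conventions, and the HD operationalization is an assumption for illustration — the phrase "number of heads between head and dependent" admits more than one reading — not the study's actual pipeline:

```python
# Toy computation of linear and hierarchical dependency distance.
# A sentence is a list of head indices: heads[i-1] is the position of
# word i's head (1-based), with 0 marking the root.

def linear_distance(heads):
    """Mean |position(head) - position(dependent)|, in words, over all arcs."""
    arcs = [(dep, head) for dep, head in enumerate(heads, start=1) if head != 0]
    return sum(abs(h - d) for d, h in arcs) / len(arcs)

def mean_hd(heads):
    """Mean hierarchical distance: for each arc, the number of words lying
    linearly between head and dependent that are themselves heads of something."""
    head_positions = {h for h in heads if h != 0}
    total = count = 0
    for dep, head in enumerate(heads, start=1):
        if head == 0:
            continue
        lo, hi = sorted((dep, head))
        total += sum(1 for p in range(lo + 1, hi) if p in head_positions)
        count += 1
    return total / count

# "dogs that bark sleep": dogs->sleep, that->bark, bark->dogs, sleep->root
print(linear_distance([4, 3, 1, 0]))  # 2.0
print(mean_hd([4, 3, 1, 0]))
```

On this parse the relative clause stretches the subject-verb arc, which is exactly the kind of structure the abstract describes as increasing both linear and hierarchical distance.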
12.
We trained a computational model (the Chunk-Based Learner; CBL) on a longitudinal corpus of child–caregiver interactions in English to test whether one proposed statistical learning mechanism—backward transitional probability—is able to predict children's speech productions with stable accuracy throughout the first few years of development. We predicted that the model would reconstruct children's speech productions less accurately as they grow older, because children gradually begin to generate speech using abstracted forms rather than specific "chunks" from their speech environment. To test this idea, we trained the model on both recently encountered and cumulative speech input from a longitudinal child language corpus. We then assessed whether the model could accurately reconstruct children's speech. Controlling for utterance length and the presence of duplicate chunks, we found no evidence that the CBL becomes less accurate in its ability to reconstruct children's speech with age.
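Backward transitional probability can be sketched with a toy estimator. The fixed boundary threshold below is a simplifying assumption for illustration — the published CBL compares each BTP against a running average that it updates incrementally — so this is not the model's actual implementation:

```python
from collections import Counter

def backward_tp(corpus):
    """Estimate backward transitional probability P(w1 | w2), i.e.
    count(w1 w2) / count(w2), from a list of tokenized utterances."""
    bigrams, unigrams = Counter(), Counter()
    for utt in corpus:
        unigrams.update(utt)
        bigrams.update(zip(utt, utt[1:]))
    return lambda w1, w2: bigrams[w1, w2] / unigrams[w2] if unigrams[w2] else 0.0

def chunk(utt, btp, threshold):
    """Place a chunk boundary wherever BTP dips below the threshold."""
    chunks, current = [], [utt[0]]
    for w1, w2 in zip(utt, utt[1:]):
        if btp(w1, w2) < threshold:
            chunks.append(current)
            current = []
        current.append(w2)
    chunks.append(current)
    return chunks

corpus = [["the", "dog", "runs"], ["the", "dog", "sleeps"], ["a", "cat", "runs"]]
btp = backward_tp(corpus)
print(chunk(["the", "dog", "runs"], btp, 0.75))  # [['the', 'dog'], ['runs']]
```

Here "dog" is always preceded by "the", so BTP("the" | "dog") is high and the pair is stored as one chunk, while "runs" follows several different words and is split off — the same intuition the CBL uses to build its chunk inventory.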
13.
Moreton, E. (2002). Cognition, 84(1), 55-71.
Native-language phonemes combined in a non-native way can be misperceived so as to conform to native phonotactics, e.g. English listeners are biased to hear syllable-initial [tr] rather than the illegal [tl] (Perception and Psychophysics 34 (1983) 338; Perception and Psychophysics 60 (1998) 941). What sort of linguistic knowledge causes phonotactic perceptual bias? Two classes of models were compared: unit models, which attribute bias to the listener's differing experience of each cluster (such as their different frequencies), and structure models, which use abstract phonological generalizations (such as a ban on [coronal][coronal] sequences). Listeners (N=16 in each experiment) judged synthetic 6 × 6 arrays of stop-sonorant clusters in which both consonants were ambiguous. The effect of the stop judgment on the log odds ratio of the sonorant judgment was assessed separately for each stimulus token to provide a stimulus-independent measure of bias. Experiment 1 compared perceptual bias against the onsets [bw] and [dl], which violate different structural constraints but are both of zero frequency. Experiment 2 compared bias against [dl] in CCV and VCCV contexts, to investigate the interaction of syllabification with segmentism and to rule out a compensation-for-coarticulation account of Experiment 1. Results of both experiments favor the structure models (supported by NSF).
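The log-odds bias measure described in this abstract can be written out explicitly. The notation below (responses "b"/"d" for the stop judgment, "w"/"l" for the sonorant judgment, as in Experiment 1's [bw]/[dl] onsets) is assumed for illustration and may not match the paper's exact formulation:

```latex
% For a fixed stimulus token, bias is the shift in the log odds of the
% sonorant judgment when conditioned on the stop judgment:
\mathrm{bias} \;=\;
  \log\frac{P(\text{``w''} \mid \text{``b''})}{P(\text{``l''} \mid \text{``b''})}
  \;-\;
  \log\frac{P(\text{``w''} \mid \text{``d''})}{P(\text{``l''} \mid \text{``d''})}
```

Because the difference is computed within a single token, acoustic properties of that token affect both terms alike and cancel, leaving a stimulus-independent measure of how the stop percept pushes the sonorant percept away from the illegal clusters.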
14.
All languages rely to some extent on word order to signal relational information. Why? We address this question by exploring communicative and cognitive factors that could lead to a reliance on word order. In Study 1, adults were asked to describe scenes to another person using their hands and not their mouths. The question was whether this home-made "language" would contain gesture sentences with consistent order. In addition, we asked whether reliance on order would be influenced by three communicative factors (whether the communication partner is permitted to give feedback; whether the information to be communicated is present in the context that recipient and gesturer share; whether the gesturer assumes the role of gesture receiver as well as gesture producer). We found that not only was consistent ordering of semantic elements robust across the range of communication situations, but the same non-English order appeared in all contexts. Study 2 explored whether this non-English order is found only when a person attempts to share information with another. Adults were asked to reconstruct scenes in a non-communicative context using pictures drawn on transparencies. The adults picked up the pictures for their reconstructions in a consistent order, and that order was the same non-English order found in Study 1. Finding consistent ordering patterns in a non-communicative context suggests that word order is not driven solely by the demands of communicating information to another, but may reflect a more general property of human thought.
15.
Barner, D., & Snedeker, J. (2005). Cognition, 97(1), 41-66.
Three experiments explored the semantics of the mass-count distinction in young children and adults. In Experiments 1 and 2, the quantity judgments of participants provided evidence that some mass nouns refer to individuals, as such. Participants judged one large portion of stuff to be "more" than three tiny portions for substance-mass nouns (e.g. mustard, ketchup), but chose according to number for count nouns (e.g. shoes, candles) and object-mass nouns (e.g. furniture, jewelry). These results suggest that some mass nouns quantify over individuals, and that therefore reference to individuals does not distinguish count nouns from mass nouns. Thus, Experiments 1 and 2 failed to support the hypothesis that there exist one-to-one mappings between mass-count syntax and semantics for either adults or young children. In Experiment 3, it was found that for mass-count flexible terms (e.g. string, stone) participants based quantity judgments on number when the terms were used with count syntax, but on total amount of stuff when used with mass syntax. Apparently, the presence of discrete physical objects in a scene (e.g. stones) is not sufficient to permit quantity judgments based on number. It is proposed that object-mass nouns (e.g. furniture) can be used to refer to individuals due to lexically specified grammatical features that normally occur in count syntax. Also, we suggest that children learning language parse words that refer to individuals as count nouns unless given morpho-syntactic and referential evidence to the contrary, in which case object-mass nouns are acquired.
16.
Ferreira, V. S., Slevc, L. R., & Rogers, E. S. (2005). Cognition, 96(3), 263-284.
Three experiments assessed how speakers avoid linguistically and nonlinguistically ambiguous expressions. Speakers described target objects (a flying mammal, bat) in contexts including foil objects that caused linguistic (a baseball bat) and nonlinguistic (a larger flying mammal) ambiguity. Speakers sometimes avoided linguistic ambiguity, and they did so equally regardless of whether they also were about to describe foils. This suggests that comprehension processes can sometimes detect linguistic ambiguity before it is produced. However, once it was produced, speakers consistently avoided using the same linguistically ambiguous expression again for a different meaning. This suggests that production processes can successfully detect linguistic ambiguity after the fact. Speakers almost always avoided nonlinguistic ambiguity. Thus, production processes are especially sensitive to nonlinguistic, but not linguistic, ambiguity, with the latter avoided consistently only once it has already been articulated.
17.
Idioms are phrases with figurative meanings that are not directly derived from the literal meanings of the words in the phrase. Idiom comprehension varies with: literality, whether the idiom is literally plausible; compositionality, whether individual words contribute to the figurative meaning; and contextual bias. We studied idiom comprehension in children with spina bifida meningomyelocele (SBM), a neurodevelopmental disorder associated with problems in discourse comprehension and with agenesis and hypoplasia of the corpus callosum. Compared to age peers, children with SBM understood decomposable idioms (which are processed more like literal language) but not non-decomposable idioms (which require contextual analyses for acquisition). The impairment with non-decomposable idioms was related to congenital agenesis of the corpus callosum, which suggests that the consequences of impaired interhemispheric communication, whether congenital or acquired in adulthood, are borne more by configurational than by compositional language.
18.
Two striking contrasts currently exist in the sentence processing literature. First, whereas adult readers rely heavily on lexical information in the generation of syntactic alternatives, adult listeners in world-situated eye-gaze studies appear to allow referential evidence to override strong countervailing lexical biases (Tanenhaus, Spivey-Knowlton, Eberhard, and Sedivy, 1995). Second, in contrast to adults, children in similar listening studies fail to use this referential information and appear to rely exclusively on verb biases or perhaps syntactically based parsing principles (Trueswell, Sekerina, Hill, and Logrip, 1999). We explore these contrasts by fully crossing verb bias and referential manipulations in a study using the eye-gaze listening technique with adults (Experiment 1) and five-year-olds (Experiment 2). Results indicate that adults combine lexical and referential information to determine syntactic choice. Children rely exclusively on verb bias in their ultimate interpretation. However, their eye movements reveal an emerging sensitivity to referential constraints. The observed changes in information use over ontogenetic time best support a constraint-based lexicalist account of parsing development, which posits that highly reliable cues to structure, like lexical biases, will emerge earlier during development and more robustly than less reliable cues.
19.
Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, particularly in rapid processing, cause phonological deficits and thereby SLI. We investigate this possibility by testing the auditory discrimination abilities of G-SLI children for speech and non-speech sounds, at varying presentation rates, and controlling for the effects of age and language on performance. For non-speech formant transitions, 69% of the G-SLI children showed normal auditory processing, whereas for the same acoustic information in speech, only 31% did so. For rapidly presented tones, 46% of the G-SLI children performed normally. Auditory performance with speech and non-speech sounds differentiated the G-SLI children from their age-matched controls, whereas speed of processing did not. The G-SLI children evinced no relationship between their auditory and phonological/grammatical abilities. We found no consistent evidence that a deficit in processing rapid acoustic information causes or maintains G-SLI. The findings, from at least those G-SLI children who do not exhibit any auditory deficits, provide further evidence supporting the existence of a primary domain-specific deficit underlying G-SLI.
20.
A series of three experiments examined children's sensitivity to probabilistic phonotactic structure as reflected in the relative frequencies with which speech sounds occur and co-occur in American English. Children, ages 2½ and 3½ years, participated in a nonword repetition task that examined their sensitivity to the frequency of individual phonetic segments and to the frequency of combinations of segments. After partialling out ease of articulation and lexical variables, both groups of children repeated higher phonotactic frequency nonwords more accurately than they did low phonotactic frequency nonwords, suggesting sensitivity to phoneme frequency. In addition, sensitivity to individual phonetic segments increased with age. Finally, older children, but not younger children, were sensitive to the frequency of larger (diphone) units. These results suggest not only that young children are sensitive to fine-grained acoustic-phonetic information in the developing lexicon but also that sensitivity to all aspects of the sound structure increases over development. Implications for the acoustic nature of both developing and mature lexical representations are discussed.
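The segment- and diphone-frequency measures described here can be sketched from a toy phonemic lexicon. The counting scheme below is a simplification for illustration; published phonotactic probability norms (e.g., Vitevitch and Luce's calculator) use position-specific, log-weighted counts:

```python
from collections import Counter

def phonotactic_scorer(lexicon):
    """Build segment and diphone relative frequencies from a lexicon of
    words given as tuples of phonemic segments, and return a scorer that
    rates a nonword on both measures."""
    segments, diphones = Counter(), Counter()
    for word in lexicon:
        segments.update(word)
        diphones.update(zip(word, word[1:]))
    n_seg, n_di = sum(segments.values()), sum(diphones.values())

    def score(nonword):
        # Summed relative frequency of the nonword's segments and diphones.
        seg_freq = sum(segments[s] for s in nonword) / n_seg
        di_freq = sum(diphones[d] for d in zip(nonword, nonword[1:])) / n_di
        return seg_freq, di_freq

    return score

score = phonotactic_scorer([("k", "a", "t"), ("k", "a", "p"), ("t", "a", "p")])
print(score(("k", "a", "t")))  # higher values = higher phonotactic frequency
```

A nonword built from frequent segments and frequent diphones scores high on both measures; the study's finding is that younger children's repetition accuracy tracks only the first component, while older children's tracks both.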