131.
Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature‐based conceptual account assumes that the statistical characteristics of concepts' features—the number of concepts they occur in (distinctiveness/sharedness) and likelihood of co‐occurrence (correlational strength)—determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech‐to‐meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time‐sensitive co‐occurrence‐driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general‐to‐specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation.
132.
Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non‐canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan, Waters, Dede, Michaud, & Reddy, 2007; Haarmann, Just, & Carpenter, 1997) propose that the underlying structural representations are unimpaired, but sentence comprehension is affected by processing deficits, such as slow lexical activation, reduction in memory resources, slowed processing, and/or intermittent deficiency, among others. We test the claims of two processing accounts, slowed processing and intermittent deficiency, and two versions of the Trace Deletion Hypothesis (TDH), in a computational framework for sentence processing (Lewis & Vasishth, 2005) implemented in ACT‐R (Anderson, Byrne, Douglass, Lebiere, & Qin, 2004). The assumption of slowed processing is operationalized as slow procedural memory, so that each processing action is performed more slowly than normal, and intermittent deficiency as extra noise in procedural memory, so that the parsing steps are noisier than normal. We operationalize the TDH as an absence of trace information in the parse tree. To test the predictions of the models implementing these theories, we use the data from a German sentence-picture matching study reported in Hanne, Sekerina, Vasishth, Burchert, and De Bleser (2011). The data consist of offline (sentence-picture matching accuracies and response times) and online (eye fixation proportions) measures. From among the models considered, the model assuming that both slowed processing and intermittent deficiency are present emerges as the best model of sentence processing difficulty in aphasia. The modeling of individual differences suggests that, if we assume that patients have both slowed processing and intermittent deficiency, they have them in differing degrees.
133.
Everyday reasoning requires more evidence than raw data alone can provide. We explore the idea that people can go beyond these data by reasoning about how the data were sampled. This idea is investigated through an examination of premise non‐monotonicity, in which adding premises to a category‐based argument weakens rather than strengthens it. Relevance theories explain this phenomenon in terms of people's sensitivity to the relationships among premise items. We show that a Bayesian model of category‐based induction taking premise sampling assumptions and category similarity into account complements such theories and yields two important predictions: first, that sensitivity to premise relationships can be violated by inducing a weak sampling assumption; and second, that premise monotonicity should be restored as a result. We test these predictions with an experiment that manipulates people's assumptions in this regard, showing that people draw qualitatively different conclusions in each case.
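The effect of the sampling assumption described in this abstract can be illustrated with a toy Bayesian induction model. The items, categories, and priors below are hypothetical stand-ins, not the study's stimuli or the authors' actual model; the sketch only shows the mechanism by which strong sampling produces premise non-monotonicity while weak sampling removes it.

```python
# Toy Bayesian category-based induction (hypothetical items/categories).
# Hypotheses are candidate categories; the sampling assumption changes
# the likelihood assigned to the observed premise items.

def posterior(premises, hypotheses, prior, sampling="strong"):
    """Posterior over candidate categories given the premise items."""
    post = {}
    for h, items in hypotheses.items():
        if all(p in items for p in premises):
            if sampling == "strong":   # premises sampled from the category
                like = (1.0 / len(items)) ** len(premises)
            else:                      # weak: premises observed incidentally
                like = 1.0
        else:
            like = 0.0
        post[h] = prior[h] * like
    z = sum(post.values())
    return {h: v / z for h, v in post.items()}

def generalization(premises, conclusion, hypotheses, prior, sampling):
    """P(conclusion item belongs to the same category as the premises)."""
    post = posterior(premises, hypotheses, prior, sampling)
    return sum(p for h, p in post.items() if conclusion in hypotheses[h])

hypotheses = {
    "small": {"sparrow", "robin"},                     # narrow category
    "large": {"sparrow", "robin", "hawk", "ostrich"},  # broad category
}
prior = {"small": 0.5, "large": 0.5}

# Strong sampling: a second similar premise shifts belief toward the
# narrow category, weakening generalization to "ostrich" (non-monotonicity).
one = generalization(["sparrow"], "ostrich", hypotheses, prior, "strong")
two = generalization(["sparrow", "robin"], "ostrich", hypotheses, prior, "strong")
assert two < one
```

Under weak sampling the likelihood is flat over consistent categories, so adding premises cannot lower generalization, which is the monotonicity-restoring prediction the abstract describes.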
134.
Li and Baroody present a study in which they investigate toddlers' spontaneous attention to exact quantity without acknowledging how previous studies of spontaneous focusing on numerosity (SFON) are related to their concept and methods. In this commentary requested by the European Journal of Developmental Psychology, we argue that the concept and the methods of spontaneous attention to exact quantity in the study of Li and Baroody clearly arise from previous research on SFON, as the authors themselves noted in their paper published in 2008. It is highly questionable whether their approach can be theoretically or methodologically dissociated from the previous research on SFON tendency to an extent that justifies using an alternative name for the concept in their study.
135.
A multitrait-multimethod model with minimal assumptions
Michael Eid, Psychometrika, 2000, 65(2): 241–261
A new model of confirmatory factor analysis (CFA) for multitrait-multimethod (MTMM) data sets is presented. It is shown that this model can be defined by only three assumptions in the framework of classical psychometric test theory (CTT). All other properties of the model, particularly the uncorrelatedness of the trait factors with the method factors, are logical consequences of the definition of the model. In the model proposed there are as many trait factors as different traits considered, but the number of method factors is one fewer than the number of methods included in an MTMM study. The covariance structure implied by this model is derived, and it is shown that this model is identified even under conditions under which other CFA-MTMM models are not. The model is illustrated by two empirical applications. Furthermore, its advantages and limitations are discussed with respect to previously developed CFA models for MTMM data.
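The trait/method structure described in this abstract can be sketched as follows. The notation is mine, not a quotation of Eid's parameterization: with $m$ methods, one method is taken as reference and only the remaining $m-1$ methods receive method factors.

```latex
% Reference method (k = 1): trait factor only, no method factor.
Y_{j1} = \lambda_{j1}\, T_j + E_{j1}
% Non-reference methods (k = 2, \dots, m): one method factor each,
% giving m - 1 method factors in total.
Y_{jk} = \lambda_{jk}\, T_j + \gamma_{jk}\, M_k + E_{jk}
% The uncorrelatedness of trait and method factors is a consequence
% of the model's defining assumptions rather than an extra constraint:
\operatorname{Cov}(T_j, M_k) = 0
```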
136.
Some of the things that adults learn about language, and about the world, are very specific, whereas others are more abstract or rulelike. This article reviews evidence showing that infants, too, can very rapidly acquire both specific and abstract information, and considers the mechanisms that infants might use in doing so.
137.
138.
We explored differences in distress scores at intake as well as the change in anxiety and depression scores over the course of 12 therapy sessions for Native Hawaiian and Pacific Islander (NHPI) college students. Data were collected from the Center for Collegiate Mental Health (N = 256,242). Results support the notion that NHPI college students experience anxiety and depression in therapy differently from other ethnic groups, with moderate-to-large magnitudes of effect.
139.
Cross validation is a useful way of comparing predictive generalizability of theoretically plausible a priori models in structural equation modeling (SEM). A number of overall or local cross validation indices have been proposed for existing factor-based and component-based approaches to SEM, including covariance structure analysis and partial least squares path modeling. However, there is no such cross validation index available for generalized structured component analysis (GSCA), which is another component-based approach. We thus propose a cross validation index for GSCA, called Out-of-bag Prediction Error (OPE), which estimates the expected prediction error of a model over replications of so-called in-bag and out-of-bag samples constructed through the implementation of the bootstrap method. The calculation of this index is well-suited to the estimation procedure of GSCA, which uses the bootstrap method to obtain the standard errors or confidence intervals of parameter estimates. We empirically evaluate the performance of the proposed index through the analyses of both simulated and real data.
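The in-bag/out-of-bag idea behind OPE can be sketched generically. This is not the GSCA estimator itself: ordinary least squares stands in for the fitted model, and the function name and squared-error loss are my assumptions; only the bootstrap resampling scheme follows the description in the abstract.

```python
import numpy as np

def ope(X, y, n_boot=200, seed=0):
    """Out-of-bag prediction error: fit on each bootstrap (in-bag) sample,
    evaluate on the cases that sample left out, and average the errors.
    OLS stands in here for the fitted model (the paper applies this to GSCA)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # in-bag: draw with replacement
        oob = np.setdiff1d(np.arange(n), idx)      # out-of-bag: cases not drawn
        if oob.size == 0:
            continue
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        resid = y[oob] - X[oob] @ beta
        errs.append(np.mean(resid ** 2))
    return float(np.mean(errs))
```

In use, two candidate models are fit to the same data and the one with the lower OPE is preferred as the more predictively generalizable, mirroring how cross-validation indices are used for model comparison in SEM.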
140.
We introduce and extend the classical regression framework for conducting mediation analysis from the fit of only one model. Using the essential mediation components (EMCs) allows us to estimate causal mediation effects and their analytical variance. This single-equation approach reduces computation time and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations. Additionally, we extend this framework to non-nested mediation systems, provide a joint measure of mediation for complex mediation hypotheses, propose new visualizations for mediation effects, and explain why estimates of the total effect may differ depending on the approach used. Using data from social science studies, we also provide extensive illustrations of the usefulness of this framework and its advantages over traditional approaches to mediation analysis. The example data are freely available for download online, and we include the R code necessary to reproduce our results.
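For context, the multi-equation baseline that this single-equation framework streamlines can be sketched as the classic product-of-coefficients approach. This is the traditional method the abstract contrasts itself with, not the authors' EMC estimator, and the function name and variable layout are my own.

```python
import numpy as np

def mediation_product(x, m, y):
    """Classic product-of-coefficients mediation analysis:
    fit M ~ X for the a path, then Y ~ X + M for the direct effect c'
    and the b path; indirect effect = a * b, total = c' + a * b."""
    n = len(x)
    ones = np.ones(n)
    # M ~ X  ->  a path (slope on X)
    a = np.linalg.lstsq(np.column_stack([ones, x]), m, rcond=None)[0][1]
    # Y ~ X + M  ->  direct effect c' and b path
    coef = np.linalg.lstsq(np.column_stack([ones, x, m]), y, rcond=None)[0]
    c_prime, b = coef[1], coef[2]
    return {"indirect": a * b, "direct": c_prime, "total": c_prime + a * b}
```

The point of the single-equation EMC approach described above is that quantities like these, and their analytical variances, come from one fitted model rather than from stitching together separate regressions.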