Subscription full text: 3,006 | Free: 533 | Domestic free: 197 | Total: 3,736

Articles by year:
2025: 17; 2024: 51; 2023: 92; 2022: 109; 2021: 138; 2020: 177; 2019: 174; 2018: 147; 2017: 153; 2016: 158; 2015: 148; 2014: 154; 2013: 386; 2012: 123; 2011: 131; 2010: 117; 2009: 139; 2008: 148; 2007: 126; 2006: 139; 2005: 122; 2004: 107; 2003: 92; 2002: 78; 2001: 54; 2000: 55; 1999: 39; 1998: 30; 1997: 35; 1996: 31; 1995: 28; 1994: 18; 1993: 20; 1992: 28; 1991: 12; 1990: 14; 1989: 15; 1988: 16; 1987: 18; 1986: 12; 1985: 11; 1984: 13; 1983: 14; 1982: 8; 1981: 3; 1980: 9; 1979: 4; 1978: 13; 1977: 7; 1976: 3
3,736 results found (search time: 0 ms)
801.
    
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number).
802.
    
In real testing, examinees may manifest different types of test-taking behaviours. In this paper we focus on two types that appear to be among the more frequently occurring behaviours – solution behaviour and rapid guessing behaviour. Rapid guessing usually happens in high-stakes tests when there is insufficient time, and in low-stakes tests when there is lack of effort. These two qualitatively different test-taking behaviours, if ignored, will lead to violation of the local independence assumption and, as a result, yield biased item/person parameter estimation. We propose a mixture hierarchical model to account for differences among item responses and response time patterns arising from these two behaviours. The model is also able to identify the specific behaviour an examinee engages in when answering an item. A Monte Carlo expectation maximization algorithm is proposed for model calibration. A simulation study shows that the new model yields more accurate item and person parameter estimates than a non-mixture model when the data indeed come from two types of behaviour. The model also fits real, high-stakes test data better than a non-mixture model, and therefore the new model can better identify the underlying test-taking behaviour an examinee engages in on a certain item.
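A minimal sketch of the classification idea (not the paper's full mixture hierarchical model, which models item responses and response times jointly; all numbers here are invented for illustration): a two-component Gaussian mixture on log response times, fitted with a plain EM loop, separates rapid guesses from solution behaviour.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated log response times: 200 rapid guesses (~1 s) and
# 800 solution-behaviour responses (~7 s). Invented values.
n_guess, n_solve = 200, 800
log_rt = np.concatenate([
    rng.normal(0.0, 0.3, n_guess),   # rapid guessing component
    rng.normal(2.0, 0.4, n_solve),   # solution-behaviour component
])

# EM for a two-component Gaussian mixture on log RT
pi = np.array([0.5, 0.5])            # mixing weights
mu = np.array([0.5, 1.5])            # component means (rough init)
sd = np.array([1.0, 1.0])            # component standard deviations
for _ in range(200):
    # E-step: posterior probability of each component for each response
    dens = pi * np.exp(-0.5 * ((log_rt[:, None] - mu) / sd) ** 2) / sd
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations
    nk = resp.sum(axis=0)
    pi = nk / len(log_rt)
    mu = (resp * log_rt[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (log_rt[:, None] - mu) ** 2).sum(axis=0) / nk)

# Order components so index 0 is the fast (guessing) one
order = np.argsort(mu)
pi, mu, sd, resp = pi[order], mu[order], sd[order], resp[:, order]
is_guess = resp[:, 0] > 0.5          # per-response behaviour label
```

With well-separated response-time distributions, the posterior probabilities classify nearly all responses correctly; the paper's model additionally conditions on the item responses themselves.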
803.
    
According to dual-process models of memory, recognition is subserved by two processes: recollection and familiarity. Many variants of these models assume that recollection and familiarity make stochastically independent contributions to performance in recognition tasks and that the variance of the familiarity signal is equal for targets and for lures. Here, we challenge these ‘common-currency’ assumptions. Using a model-comparison approach, featuring the Continuous Dual Process (CDP; Wixted & Mickes, 2010) model as the protagonist, we show that when these assumptions are relaxed, the model’s fits to individual participants’ data improve. Furthermore, our analyses reveal that across items, recollection and familiarity show a positive correlation. Interestingly, this across-items correlation was dissociated from an across-participants correlation between the sensitivities of these processes. We also find that the familiarity signal is significantly more variable for targets than for lures. One striking theoretical implication of these findings is that familiarity—rather than recollection, as most models assume—may be the main contributor responsible for one of the most influential findings of recognition memory, that of sub-unit zROC slopes. Additionally, we show that erroneously adopting the common-currency assumptions introduces severe biases to estimates of recollection and familiarity.
804.
805.
    
The purpose of this study was to explore the degree of grain size of the attributes and the sample sizes that can support accurate parameter recovery with the General Diagnostic Model (GDM) for a large-scale international assessment. In this resampling study, bootstrap samples were obtained from the 2003 Grade 8 TIMSS in Mathematics at varying sample sizes from 500 to 4000 and grain sizes of the attributes from a unidimensional model to one with ten attributes. The results showed that the eight-attribute model was the one most consistently identified as best fitting. Parameter estimation for more than ten attributes and sample sizes smaller than 500 failed. Furthermore, the precision of item parameter recovery decreased as the number of attributes measured by an item increased and sample size decreased. On the other hand, the distributions of latent classes were relatively stable across all models and sample sizes.
806.
    
Objectives: A recent longitudinal study with junior athletes (Madigan, Stoeber, & Passfield, 2015) found perfectionism to predict changes in athlete burnout: evaluative concerns perfectionism predicted increases in burnout over a 3-month period, whereas personal standards perfectionism predicted decreases. The present study sought to expand on these findings by using the framework of the 2 × 2 model of perfectionism (Gaudreau & Thompson, 2010) to examine whether evaluative concerns perfectionism and personal standards perfectionism show interactions in predicting changes in athlete burnout. Design: Two-wave longitudinal design. Method: The present study examined self-reported evaluative concerns perfectionism, personal standards perfectionism, and athlete burnout in 111 athletes (mean age 24.8 years) over 3 months of active training. Results and conclusion: When moderated regression analyses were employed, interactive effects of evaluative concerns perfectionism × personal standards perfectionism were found indicating that personal standards perfectionism buffered the effects of evaluative concerns perfectionism on total burnout and physical/emotional exhaustion. To interpret these effects, the 2 × 2 model of perfectionism provides a useful theoretical framework.
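The moderated-regression logic can be sketched as follows (coefficients and simulated data are invented, not the study's estimates): a negative ECP × PSP interaction means the simple slope of evaluative concerns perfectionism on burnout shrinks as personal standards perfectionism rises, which is the "buffering" pattern.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 111                                   # matches the study's sample size

# Standardized predictors and a burnout change score with a
# negative (buffering) interaction; all coefficients invented.
ecp = rng.normal(0, 1, n)                 # evaluative concerns perfectionism
psp = rng.normal(0, 1, n)                 # personal standards perfectionism
burnout = 0.5 * ecp - 0.2 * psp - 0.3 * ecp * psp + rng.normal(0, 0.5, n)

# Moderated regression: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), ecp, psp, ecp * psp])
beta, *_ = np.linalg.lstsq(X, burnout, rcond=None)

# Simple slopes of ECP at -1 SD and +1 SD of the moderator PSP
slope_low_psp = beta[1] + beta[3] * -1.0
slope_high_psp = beta[1] + beta[3] * 1.0
```

A negative `beta[3]` makes `slope_high_psp` smaller than `slope_low_psp`: high personal standards weaken the ECP–burnout link.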
807.
    
This study evaluates the temporal structure of daily self-esteem and the relative contribution of a range of theoretically motivated predictors of daily self-esteem. To assess self-esteem stability, a daily version of the Rosenberg Self-Esteem scale (RSE, Rosenberg, 1965) was administered to 278 undergraduates for five consecutive days. These short-term longitudinal data were analysed using the Trait State Error (TSE) modelling framework. The TSE decomposes multi-wave data into three components: (1) a stable trait component, (2) a state component, and (3) an error component. Significant predictors of the trait component of self-esteem observed across five days were: (1) emotional stability, and (2) the congruence between implicit and explicit self-esteem. Significant predictors of the state components of self-esteem were daily positive and negative events. We discuss the implications of these results for future research concerning self-esteem stability.
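The decomposition idea can be illustrated with simulated data. This is a simplified sketch: the actual TSE model is fitted as a structural equation model and additionally separates state from error via an autoregressive structure, whereas here the two are collapsed into a single occasion-specific part, and all variances are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, days = 5000, 5                          # persons x daily measurements

# Each daily score = stable trait + occasion-specific (state + error) part
trait = rng.normal(0, 1.0, n)              # trait variance = 1.00
occasion = rng.normal(0, 0.6, (n, days))   # occasion variance = 0.36
scores = trait[:, None] + occasion

# Cross-day covariances reflect only the stable trait; the extra
# variance on the diagonal is the occasion-specific part.
cov = np.cov(scores, rowvar=False)
trait_var = cov[~np.eye(days, dtype=bool)].mean()
occ_var = cov.diagonal().mean() - trait_var
```

The estimated `trait_var` recovers the stable component (~1.0) and `occ_var` the occasion-specific component (~0.36), mirroring how the TSE framework attributes stable cross-wave covariance to trait and the remainder to state and error.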
808.
Is religion more of an integrative or a divisive force in contemporary societies? We use multilevel analyses of World Values Survey data from 77,409 individuals in 69 countries to examine how both the percent of the population that is religious and the religious heterogeneity of a country are related to generalized social trust, the willingness of individuals to trust “most people.” When we first examine the main effects of the percent religious and religious heterogeneity we find no evidence that either variable is related to trust in the ways predicted by major theories. However, the combination of these two variables has a huge negative relationship with trust. Countries that are both highly religious and religiously heterogeneous (diverse) have, on average, levels of trust that are only half the average levels of countries with other combinations of these two variables. The results have important implications for understanding the role of religion in modern societies.  相似文献   
809.
    
The generalized matching law (GML) is reconstructed as a logistic regression equation that privileges no particular value of the sensitivity parameter, a. That value will often approach 1 due to the feedback that drives switching that is intrinsic to most concurrent schedules. A model of that feedback reproduced some features of concurrent data. The GML is a law only in the strained sense that any equation that maps data is a law. The machine under the hood of matching is in all likelihood the very law that was displaced by the Matching Law. It is now time to return the Law of Effect to centrality in our science.
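In its logarithmic form the generalized matching law, log(B1/B2) = a·log(R1/R2) + log b, is a straight line in log-ratio coordinates, so recovering the sensitivity a and bias b is ordinary least squares. A hedged sketch with invented data, not the paper's analysis or its feedback model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented ground truth: sensitivity a and log bias log(b)
a_true, log_b_true = 0.9, 0.1

# Log reinforcement ratios and noisy log behaviour ratios
log_r = np.linspace(-2, 2, 50)
log_B = a_true * log_r + log_b_true + rng.normal(0, 0.05, 50)

# Least-squares fit of the two matching-law parameters
X = np.column_stack([log_r, np.ones_like(log_r)])
(a_hat, log_b_hat), *_ = np.linalg.lstsq(X, log_B, rcond=None)
```

An `a_hat` near 1 indicates strict matching; values below 1 (undermatching, as here) are the typical empirical finding the sensitivity parameter was introduced to absorb.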
810.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号