943 search results in total (results 841–850 shown below).
841.
The purpose of this study was to investigate the relationships between selected tests of spatial orientation ability. Two tests that have been used as predictors of spatial ability (Guilford-Zimmerman Aptitude Survey, Parts V and VI), plus two newly developed tests designed to measure that ability, were given to 202 junior high school boys aged 11–15 years. Three of these tests were of the paper-and-pencil type, while one was a physical performance (tumbling) test. The paper-and-pencil tests correlated significantly with each other (.62, .61, .44), but the physical performance test did not correlate significantly with any of the other tests.
842.
This paper presents confirmatory factor models with fixed factor loadings that enable the identification of deviations from the expected processing strategy. The instructions usually define the expected processing strategy to a considerable degree. Simplification is a deviation from the instructions that is likely to occur in complex cognitive measures, and because it impairs the validity of the measure, its identification is important. Models representing simplification and instruction-based processing strategies were applied to data from 345 participants on a working memory measure to determine whether and how the use of these strategies influences model-data fit. As expected, taking simplification strategies into account improved the model-data fit achieved for the instruction-based strategy.
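By way of illustration, the sketch below compares the maximum likelihood fit of two factor models whose loadings are fixed a priori, which is the general idea behind the fixed-loading models described above. The loading patterns, sample size, and data are invented for the example and are not taken from the study.

```python
# Minimal sketch: comparing two confirmatory factor models with fixed loadings.
# Only the factor variance and residual variances are estimated; the loading
# patterns below are illustrative assumptions, not the study's specification.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p = 6                                                          # number of indicators

lam_instruction = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0])     # instruction-based pattern
lam_simplified  = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])     # simplification pattern

# Simulate data under the instruction-based pattern
Sigma_true = 1.0 * np.outer(lam_instruction, lam_instruction) + np.diag(np.full(p, 0.5))
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=345)
S = np.cov(X, rowvar=False)

def ml_discrepancy(params, lam):
    """ML fit function F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p."""
    phi, theta = params[0], params[1:]
    Sigma = phi * np.outer(lam, lam) + np.diag(theta)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return np.inf
    return logdet + np.trace(S @ np.linalg.inv(Sigma)) - np.linalg.slogdet(S)[1] - p

def fit(lam):
    x0 = np.concatenate(([1.0], np.full(p, 0.5)))
    res = minimize(ml_discrepancy, x0, args=(lam,), bounds=[(1e-4, None)] * (p + 1))
    return res.fun

print("F (instruction-based loadings):", round(fit(lam_instruction), 4))
print("F (simplified loadings):       ", round(fit(lam_simplified), 4))
```

A lower discrepancy value for one fixed-loading pattern than for the other is the kind of model-data-fit difference the paper uses to flag a deviating processing strategy.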
843.
The present study concerns a Dutch computer-based assessment that combines the measurement of information literacy skills with an item-based feedback process designed to improve student learning. To analyze students’ feedback behavior (i.e., feedback use and attention time), test performance, and speed of working, a multivariate hierarchical latent variable model is proposed. The model can handle multivariate mixed responses from multiple sources related to different processes and comprises multiple measurement components for responses and response times. A flexible within-subject latent variable structure is defined to explore multiple individual latent characteristics related to students’ test performance and feedback behavior. The main results showed that feedback-information pages attached to easy items were visited less often by well-performing students. Students’ attention to feedback was positively related to working speed but not to the propensity to use feedback.
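The following sketch simulates the kind of within-subject latent structure such a model targets: correlated person-level latent variables jointly drive item responses, response times, and feedback use. The link functions, variable names, and parameter values are assumptions made for the illustration, not the authors' specification.

```python
# Illustrative simulation: correlated person-level latent variables (ability,
# speed, feedback propensity) generate responses, response times, and feedback use.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 500, 20

# Person-level latent variables with an assumed correlation structure
corr = np.array([[1.0, 0.3, -0.2],
                 [0.3, 1.0,  0.4],
                 [-0.2, 0.4, 1.0]])
theta, tau, eta = rng.multivariate_normal(np.zeros(3), corr, size=n_persons).T

# Item parameters: difficulty and time intensity
b = rng.normal(0.0, 1.0, n_items)
beta = rng.normal(4.0, 0.3, n_items)

# Responses: Rasch-type model
p_correct = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
Y = rng.binomial(1, p_correct)

# Response times: lognormal model, faster persons (higher tau) take less time
T = np.exp(beta[None, :] - tau[:, None] + rng.normal(0, 0.3, (n_persons, n_items)))

# Feedback use: more likely for high-propensity persons and harder items
p_feedback = 1 / (1 + np.exp(-(eta[:, None] + 0.8 * b[None, :])))
F = rng.binomial(1, p_feedback)

easy = b < 0
print("feedback rate on easy items:", F[:, easy].mean().round(3))
print("feedback rate on hard items:", F[:, ~easy].mean().round(3))
```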
844.
A general comparison is made between the multinomial processing tree (MPT) approach and a strength-based approach for modeling recognition memory measurement. Strength models include the signal-detection model and the dual-process model. Existing MPT models for recognition memory and a new generic MPT model, called the Multistate (MS) model, are contrasted with the strength models. Although the ROC curves for the MS model and the strength models are similar, there is a critical difference between existing strength models and MPT models that goes beyond the assessment of the ROC. This difference concerns the question of stochastic mixtures on foil test trials. The hazard function and the reverse hazard function are powerful tools for detecting the presence of a probabilistic mixture. Several new theorems establish a novel method for obtaining information about the hazard and reverse hazard functions of the latent continuous distributions that are assumed in the strength approach to recognition memory. Evidence is provided that foil test trials involve a stochastic mixture. This finding holds for both short-term memory procedures, such as the Brown–Peterson task, and long-term list-learning procedures, such as the paired-associate task. The presence of mixtures on foil trials is problematic for existing strength models but can be readily handled by MPT models such as the MS model. Other phenomena, such as the mirror effect and the effect of target-foil similarity, are also predicted accurately by the MPT modeling framework.
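The hazard-based diagnostic can be made concrete with a small numerical sketch: the hazard function h(x) = f(x)/(1 − F(x)) of a single Gaussian strength distribution is strictly increasing, whereas a two-component mixture can produce a non-monotone hazard. The distributions and mixing weight below are illustrative choices, not estimates from the paper.

```python
# Hazard and reverse hazard of a single Gaussian versus a two-component mixture.
import numpy as np
from scipy.stats import norm

x = np.linspace(-3, 4, 200)

def hazard_and_reverse(pdf, cdf):
    f, F = pdf(x), cdf(x)
    return f / (1 - F), f / F          # hazard h(x), reverse hazard r(x)

# Single latent strength distribution
h_single, r_single = hazard_and_reverse(norm(0, 1).pdf, norm(0, 1).cdf)

# Mixture of two latent states (e.g., guessing vs. familiarity), equal weights
w = 0.5
mix_pdf = lambda t: (1 - w) * norm(0, 1).pdf(t) + w * norm(3, 1).pdf(t)
mix_cdf = lambda t: (1 - w) * norm(0, 1).cdf(t) + w * norm(3, 1).cdf(t)
h_mix, r_mix = hazard_and_reverse(mix_pdf, mix_cdf)

# The single Gaussian has a strictly increasing hazard; for these illustrative
# parameters the mixture hazard dips and is therefore not monotone.
print("single-Gaussian hazard monotone:", bool(np.all(np.diff(h_single) > 0)))
print("mixture hazard monotone:        ", bool(np.all(np.diff(h_mix) > 0)))
```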
845.
This article examines the role of three types of perceived support for creativity in moderating the relation between creative self-efficacy and self-perceived creativity. The findings suggest significant interaction effects for perceived work-group support and perceived supervisor support, but not for perceived organizational support. This study is among the first to (a) examine the importance of perceived support for creativity in unlocking creative potential and increasing creativity in organizations and (b) use interaction terms in structural equation modeling (SEM) to investigate moderator effects in an applied research setting. These results imply that organizational interventions focused on training supervisors and work-group members to support creativity in the workplace may be more effective than broader, less focused interventions at the organizational level.
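As a simplified illustration of the moderation logic, the sketch below fits an observed-variable regression with a product term; the article itself uses latent interaction terms in SEM, and the variable names and effect sizes here are invented.

```python
# Moderation via an interaction term: does work-group support strengthen the
# self-efficacy -> creativity relation? Observed-score OLS stand-in for the
# latent-interaction SEM described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
efficacy = rng.normal(0, 1, n)                 # creative self-efficacy
wg_support = rng.normal(0, 1, n)               # perceived work-group support

# Self-perceived creativity: main effects plus an interaction (moderation)
creativity = (0.4 * efficacy + 0.2 * wg_support
              + 0.3 * efficacy * wg_support + rng.normal(0, 1, n))

df = pd.DataFrame({"creativity": creativity,
                   "efficacy": efficacy,
                   "wg_support": wg_support})

# "efficacy * wg_support" expands to both main effects plus their product term
fit = smf.ols("creativity ~ efficacy * wg_support", data=df).fit()
print(fit.params)
print("interaction p-value:", round(fit.pvalues["efficacy:wg_support"], 4))
```

A significant product-term coefficient is the evidence of moderation that the study reports for work-group and supervisor support.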
846.
In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may be misleading, as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation to investigate the ability of popular model fit statistics (chi-square, the root mean square error of approximation, the comparative fit index, and the Tucker–Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross-loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics and was conditional on many features of the underlying model. Together, the results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
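For reference, the fit statistics named above are simple functions of the model and baseline chi-square values; the sketch below applies the conventional formulas, with made-up numbers plugged in at the bottom.

```python
# Conventional formulas for RMSEA, CFI, and TLI from chi-square values.
def rmsea(chi2, df, n):
    return (max(chi2 - df, 0) / (df * (n - 1))) ** 0.5

def cfi(chi2_m, df_m, chi2_b, df_b):
    d_m, d_b = max(chi2_m - df_m, 0), max(chi2_b - df_b, 0)
    return 1 - d_m / max(d_m, d_b) if max(d_m, d_b) > 0 else 1.0

def tli(chi2_m, df_m, chi2_b, df_b):
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)

# Illustrative values for a fitted model and its baseline (null) model
chi2_m, df_m = 95.0, 62        # hypothesized model
chi2_b, df_b = 1450.0, 78      # baseline model
n = 500                        # sample size

print("RMSEA:", round(rmsea(chi2_m, df_m, n), 3))
print("CFI:  ", round(cfi(chi2_m, df_m, chi2_b, df_b), 3))
print("TLI:  ", round(tli(chi2_m, df_m, chi2_b, df_b), 3))
# Conventional cutoffs (e.g., RMSEA < .06, CFI/TLI > .95) were developed for
# continuous-indicator CFA; the study questions their use for dichotomous items.
```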
847.
Over the last decade or two, multilevel structural equation modeling (ML-SEM) has become a prominent modeling approach in the social sciences because it allows researchers to correct for sampling and measurement errors and thus to estimate the effects of Level 2 (L2) constructs without bias. Because the latent variable modeling software Mplus uses maximum likelihood (ML) by default, many researchers in the social sciences have applied ML to obtain estimates of L2 regression coefficients. One drawback of ML, however, is that covariance matrices of the predictor variables at L2 tend to be degenerate, so estimates of L2 regression coefficients tend to be rather inaccurate when sample sizes are small. In this article, I show how an approach for stabilizing covariance matrices at L2 can be used to obtain more accurate estimates of L2 regression coefficients. A simulation study is conducted to compare the proposed approach with ML, and I illustrate its application with an example from organizational research.
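A minimal sketch of the general idea of stabilizing an L2 covariance matrix is shown below: the sample covariance matrix is shrunk toward a diagonal target before being used to compute regression coefficients. The shrinkage weight and the data are illustrative, and the article's specific regularized estimator is not reproduced here.

```python
# Stabilizing a small-sample covariance matrix by shrinkage toward its diagonal,
# then comparing the resulting regression coefficients with the raw estimates.
import numpy as np

rng = np.random.default_rng(3)
p = 4                                       # number of L2 predictors
Sigma_true = 0.5 * np.ones((p, p)) + 0.5 * np.eye(p)
beta_true = np.array([0.4, 0.0, 0.3, -0.2])

n_groups = 30                               # small L2 sample -> unstable covariance
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n_groups)
y = X @ beta_true + rng.normal(0, 1, n_groups)

S = np.cov(X, rowvar=False)
s_xy = np.cov(np.column_stack([X, y]), rowvar=False)[:p, p]

lam = 0.3                                   # shrinkage weight toward the diagonal target
S_reg = (1 - lam) * S + lam * np.diag(np.diag(S))

print("condition number, raw:       ", round(np.linalg.cond(S), 1))
print("condition number, stabilized:", round(np.linalg.cond(S_reg), 1))
print("beta, raw:       ", np.round(np.linalg.solve(S, s_xy), 2))
print("beta, stabilized:", np.round(np.linalg.solve(S_reg, s_xy), 2))
print("beta, true:      ", beta_true)
```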
848.
Official rates of child maltreatment vary considerably from community to community. Extensive research on the role of the community context in maltreatment suggests that the neighborhood in which families live could influence their maltreatment behavior. However, the vast majority of existing studies and literature reviews on the topic have not used statistical methods that appropriately control for individual-level variables. This literature review addresses the question of whether the community context affects maltreatment behavior above and beyond individual-level variables by critiquing the studies that have successfully measured maltreatment risks at both the individual and community levels using multilevel modeling.
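The kind of two-level model the reviewed studies rely on can be sketched as follows: an individual-level predictor and a community-level predictor are entered together, with a random intercept for neighborhood, so the community effect is estimated net of the individual-level variable. All variable names and effects are invented for the illustration.

```python
# Schematic two-level (random-intercept) model: family-level and
# neighborhood-level predictors of a continuous maltreatment-risk score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_hoods, fam_per_hood = 40, 25

hood = np.repeat(np.arange(n_hoods), fam_per_hood)
hood_poverty = rng.normal(0, 1, n_hoods)                 # community-level predictor
fam_stress = rng.normal(0, 1, n_hoods * fam_per_hood)    # individual-level predictor
u = rng.normal(0, 0.5, n_hoods)                          # neighborhood random intercept

risk = (0.5 * fam_stress + 0.3 * hood_poverty[hood] + u[hood]
        + rng.normal(0, 1, n_hoods * fam_per_hood))

df = pd.DataFrame({"risk": risk, "fam_stress": fam_stress,
                   "hood_poverty": hood_poverty[hood], "hood": hood})

# The community effect is estimated while controlling for the family-level effect
model = smf.mixedlm("risk ~ fam_stress + hood_poverty", data=df, groups=df["hood"])
print(model.fit().summary())
```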
849.
In the last half-century, significant advances have been made in directing sales force behavior with the use of optimization and decision models. The present paper both presents the current state of the art in sales force decision modeling and discusses key issues and trends in contemporary modeling of relevance to sales force researchers. The paper begins by exploring critical concepts regarding the estimation of the sales response function and then discusses the problems of endogeneity, heterogeneity, and temporal variation that modelers face in this task. Modern approaches to dealing with these issues are presented. We then discuss important considerations in finding model solutions, including closed-form versus simulation-based solutions and optimization versus heuristic approaches. The paper next moves to areas of practical importance where models can help, including call planning, sales force size, territory allocation, and compensation design. Finally, we discuss trends likely to affect sales force modeling in coming years, including the use of big data and data mining, the possible breakdown of rationality, the rise of the Internet and social media, and the potential of agent-based modeling.
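To make the sales response function concrete, the sketch below uses an ADBUDG-style diminishing-returns response curve and allocates a fixed call budget across territories by constrained optimization; the functional form and all parameter values are illustrative assumptions rather than the paper's models.

```python
# Diminishing-returns sales response and optimal allocation of a call budget.
import numpy as np
from scipy.optimize import minimize

# Response of territory i to x calls: s_i(x) = smax_i * x^gamma / (delta_i + x^gamma)
smax  = np.array([100.0, 80.0, 60.0, 40.0])
delta = np.array([50.0, 30.0, 20.0, 10.0])
gamma = 0.8
budget = 60.0                        # total sales calls available

def total_sales(x):
    return np.sum(smax * x**gamma / (delta + x**gamma))

# Maximize total response subject to the call budget (negate for minimization)
res = minimize(lambda x: -total_sales(x),
               x0=np.full(4, budget / 4),
               bounds=[(0, budget)] * 4,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - budget}],
               method="SLSQP")

print("optimal calls per territory:", np.round(res.x, 1))
print("expected total sales:       ", round(total_sales(res.x), 1))
print("naive equal-split sales:    ", round(total_sales(np.full(4, budget / 4)), 1))
```

The comparison between the optimized allocation and an equal split illustrates the optimization-versus-heuristic distinction raised in the paper.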
850.
Experimentation is traditionally considered a privileged means of confirmation. However, why and how experiments form a better confirmatory source than other strategies is unclear, and recent discussions have identified experiments with various modeling strategies on the one hand and with ‘natural’ experiments on the other. We argue that experiments aiming to test theories are best understood as controlled investigations of specimens. ‘Control’ involves repeated, fine-grained causal manipulation of focal properties; this capacity generates rich knowledge of the object investigated. ‘Specimenhood’ involves possessing properties that are relevant given the investigative target and the hypothesis in question. Specimens are thus representative members of a class of systems to which a hypothesis refers. It is in virtue of both control and specimenhood that experiments provide powerful confirmatory evidence. This explains the distinctive power of experiments: although modelers exert extensive control, they do not exert this control over specimens; and although natural experiments utilize specimens, control is diminished.