151.
Test items are often evaluated and compared by contrasting the shapes of their item characteristic curves (ICCs) or surfaces. The current paper develops and applies three general (i.e., nonparametric) comparisons of the shapes of two item characteristic surfaces: (i) proportional latent odds, (ii) uniform relative difficulty, and (iii) item sensitivity. Two items may be compared in these ways while making no assumption about the shapes of the item characteristic surfaces of other items, and no assumption about the dimensionality of the latent variable. Also studied is a method for comparing the relative shapes of two item characteristic curves in two examinee populations. The author is grateful to Paul Holland, Robert Mislevy, Tue Tjur, Rebecca Zwick, the editor and reviewers for valuable comments on the subject of this paper, to Mari A. Pearlman for advice on the pairing of items in the examples, and to Dorothy Thayer for assistance with computing.
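As a toy illustration of what proportional latent odds means (not the estimation procedure developed in the paper), one can simulate two items whose success odds differ by a constant factor at every ability level and then inspect the log-odds difference within groups formed by a rest score built from other items. The simulated item parameters and the rest-score conditioning below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob(logit):
    return 1.0 / (1.0 + np.exp(-logit))

# Two items with proportional latent odds: item A's log-odds exceed item B's
# by a constant 0.7 at every ability level (equal slopes, shifted intercepts).
n = 20000
theta = rng.normal(size=n)
x_a = rng.binomial(1, prob(1.2 * theta + 0.5))
x_b = rng.binomial(1, prob(1.2 * theta - 0.2))

# Eight background items provide a rest score to condition on.
difficulties = rng.normal(size=8)
x_rest = rng.binomial(1, prob(theta[:, None] - difficulties))
rest = x_rest.sum(axis=1)

# Under proportional latent odds the observed log-odds difference should be
# roughly constant across rest-score groups (somewhat attenuated, since the
# rest score only approximates ability).
for r in range(2, 7):
    mask = rest == r
    pa, pb = x_a[mask].mean(), x_b[mask].mean()
    diff = np.log(pa / (1 - pa)) - np.log(pb / (1 - pb))
    print(f"rest score {r}: log-odds difference {diff:.2f}")
```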
152.
The DEDICOM model represents asymmetric relations among a set of objects by means of a set of coordinates for the objects on a limited number of dimensions. The present paper offers an alternating least squares algorithm for fitting the DEDICOM model. The model can be generalized to represent any number of sets of relations among the same set of objects, and an algorithm for fitting this three-way DEDICOM model is provided as well. Based on the algorithm for the three-way DEDICOM model, an algorithm is developed for fitting the IDIOSCAL model in the least squares sense. The author is obliged to Jos ten Berge and Richard Harshman.
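As a rough sketch of the least squares structure involved (not the algorithm given in the paper), the code below generates data from the two-way DEDICOM form X ≈ A R Aᵀ and performs the closed-form half-step of an alternating fit: the least squares update of R for a fixed A. The eigenvector-based starting value for A and all numerical values are illustrative assumptions.

```python
import numpy as np

# Synthetic asymmetric data with DEDICOM structure X ~ A R A', where A (n x p)
# holds object coordinates and R (p x p) the asymmetric relations among dimensions.
rng = np.random.default_rng(1)
A_true = rng.normal(size=(8, 2))
R_true = np.array([[1.0, 0.8], [-0.6, 0.5]])
X = A_true @ R_true @ A_true.T + 0.05 * rng.normal(size=(8, 8))

# One convenient (assumed) starting value for A: leading eigenvectors of the
# symmetric part of X.
p = 2
eigvals, eigvecs = np.linalg.eigh((X + X.T) / 2.0)
A = eigvecs[:, np.argsort(np.abs(eigvals))[-p:]]

# Closed-form least squares update of R for fixed A: R = pinv(A) X pinv(A)'.
A_pinv = np.linalg.pinv(A)
R = A_pinv @ X @ A_pinv.T
loss = float(np.linalg.norm(X - A @ R @ A.T) ** 2)
print(round(loss, 4))

# A full ALS algorithm would now re-update A with R held fixed (the step with
# no closed form) and alternate the two updates until the loss stabilizes.
```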
153.
Centering a matrix row-wise and rescaling it column-wise to a unit sum of squares requires an iterative procedure. It is shown that this procedure converges to a stable solution. This solution need not be centered row-wise if the limiting point of the iterations is a matrix of rank one. The results of the present paper bear directly on several types of preprocessing methods in Parafac/Candecomp.
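A minimal sketch of the preprocessing step in question, alternating row-centering with column rescaling to a unit sum of squares until the matrix stops changing; the tolerance and stopping rule are illustrative choices, not taken from the paper.

```python
import numpy as np

def center_and_scale(X, tol=1e-10, max_iter=1000):
    """Alternate row-centering and column rescaling to unit sum of squares."""
    X = np.asarray(X, dtype=float).copy()
    for _ in range(max_iter):
        X_old = X.copy()
        X = X - X.mean(axis=1, keepdims=True)                  # center each row
        X = X / np.sqrt((X ** 2).sum(axis=0, keepdims=True))   # unit SS per column
        if np.max(np.abs(X - X_old)) < tol:
            break
    return X

rng = np.random.default_rng(1)
Z = center_and_scale(rng.normal(size=(6, 4)))
print(np.round((Z ** 2).sum(axis=0), 6))   # each column has unit sum of squares
print(np.round(Z.mean(axis=1), 6))         # rows are (approximately) centered
```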
154.
155.
This paper examines a model and defines reasonable assumptions underlying different measures of observer agreement for categorical data collected in free operant situations. It is assumed that two or more observers classify the operant behaviors of subjects into occurrences and nonoccurrences by recognition of validated response classes (categories), such that the rates of false positives and observer biases are acceptably low. Thus errors are mostly omissions, i.e., failures to observe events that occur. Four alternative cases are derived, together with formulas for calculating significance tests, variances, and standard errors, three of which do not depend on knowledge of the proportion of time points at which the event does not occur. We wish to acknowledge NICHD Grant HD-10570, The Neuropharmacology of Developmental Disorders, George Breese, Ph.D., and C. T. Gualtieri, M.D., Principal Investigators; NIEHS Grant ES-01104; USPHS Grant HD-03110; and MCH Project 916 to the Division for Disorders of Development and Learning.
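To make the omission-only error model concrete, here is a toy simulation; the event rate and miss rates are assumed values, and the paper's four cases and their formulas are not reproduced. Each observer misses true events independently and records no false positives, so disagreements arise only from omissions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Omission-only error model: events occur at some of T time points; each
# observer independently fails to record a true event with probability
# miss_1 / miss_2 and never records a false positive.
T, p_event, miss_1, miss_2 = 2000, 0.15, 0.10, 0.20
truth = rng.random(T) < p_event
obs_1 = truth & (rng.random(T) > miss_1)
obs_2 = truth & (rng.random(T) > miss_2)

both = int(np.sum(obs_1 & obs_2))      # recorded by both observers
only_1 = int(np.sum(obs_1 & ~obs_2))   # recorded by observer 1 only
only_2 = int(np.sum(obs_2 & ~obs_1))   # recorded by observer 2 only
print(both, only_1, only_2)

# Under independent omissions, E[both] = T * p_event * (1 - miss_1) * (1 - miss_2),
# so the occurrence counts alone carry information about agreement without
# knowing the proportion of time points at which no event occurred.
```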
156.
In this paper, modern statistics is considered as a branch of psychometrics, and the question of how the central problems of statistics can be resolved using psychometric methods is investigated. Theories and methods developed in the fields of test theory, scaling, and factor analysis are related to the principal problems of modern statistical theory and method. Topics surveyed include assessment of probabilities, assessment of utilities, assessment of exchangeability, preposterior analysis, adversary analysis, multiple comparisons, the selection of predictor variables, and full-rank ANOVA. Reference is made to some literature from the field of cognitive psychology to indicate some of the difficulties encountered in probability and utility assessment. Some methods for resolving these difficulties using the Computer-Assisted Data Analysis (CADA) Monitor are described, as is some recent experimental work on utility assessment. 1980 Psychometric Society presidential address. I am indebted to Paul Slovic and David Libby for valuable consultation on the issues discussed in this paper and to Nancy Turner and Laura Novick for assistance in preparation. Research reported herein was supported under contract number N00014-77-C-0428 from the Office of Naval Research to The University of Iowa, Melvin R. Novick, principal investigator. Opinions expressed herein reflect those of the author and not those of sponsoring agencies.
157.
To establish the existence of his abilities, a judge is given the task of classifying each of N = rs subjects into one of r known categories, each containing s of the subjects. An incomplete design is proposed whereby the judge is presented with b groups, each one containing n = rs/b < r subjects. The n different categories corresponding to members of the group are known. Using the total number of correct classifications, this method of grouping is compared to that in which the group size is equal to the number of categories. The incomplete grouping is shown to yield a more powerful test for discriminating between the null hypothesis that the judge is guessing the classifications and the alternative hypothesis that he has some definite abilities. The incomplete design is found to be most effective (powerful) when the number of subjects in a group is limited to two or three. The author is grateful for the suggestions of the referees and the editor, which greatly improved the paper.
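A rough Monte Carlo illustration of why small groups can give a more powerful test. The per-subject "recognition probability" ability model, the design sizes, and the one-sided 5% cut-off are all assumptions made for illustration, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(3)

def correct_in_group(m, q):
    """One group of m subjects whose m distinct categories are known.
    Each subject is recognised with probability q (q = 0 is pure guessing);
    unrecognised subjects are matched at random to the remaining categories."""
    recognised = rng.random(m) < q
    n_correct = int(recognised.sum())
    rest = m - n_correct
    if rest > 1:
        perm = rng.permutation(rest)
        n_correct += int(np.sum(perm == np.arange(rest)))
    elif rest == 1:
        n_correct += 1   # a single leftover subject can only go to its own category
    return n_correct

def total_correct(n_groups, m, q):
    return sum(correct_in_group(m, q) for _ in range(n_groups))

# r = 12 categories, s = 2 subjects each, N = 24 subjects.
# Complete design: 2 groups of 12.  Incomplete design: 12 groups of 2.
reps, q = 2000, 0.3
designs = {"groups of 12": (2, 12), "groups of 2": (12, 2)}
for name, (n_groups, m) in designs.items():
    null = np.array([total_correct(n_groups, m, 0.0) for _ in range(reps)])
    alt = np.array([total_correct(n_groups, m, q) for _ in range(reps)])
    cutoff = np.quantile(null, 0.95)          # one-sided 5% Monte Carlo test
    power = float(np.mean(alt > cutoff))
    print(f"{name}: critical value {cutoff:.1f}, estimated power {power:.2f}")
```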
158.
Memory assessment is a key element in neuropsychological testing. Gold-standard evaluation is based on updated normative data, but in many small countries (e.g. in Scandinavia) such data are sparse. In Denmark, reference data exist for non-verbal memory tests and list-learning tests, but there are no normative data for memory tests that capture narrative recall and cued recall. In a nation-wide study, the Free and Cued Selective Reminding Test (FCSRT), WMS-III Logical Memory (LM) and a newly developed test, the Category Cued Memory Test (CCMT-48), were administered to 131 cognitively intact persons (aged 60–96 years). Regression-based reference data for the Danish versions of the FCSRT, CCMT-48 and LM, adjusted for age, education and gender, are provided. Gender and age group had a significant impact on the expected scores, whereas education had only a limited effect. Test performances were significantly correlated, in the range 0.21–0.51. Based on these findings and previous results, it may be relevant to assess free recall, cued recall and recognition to tap the earliest changes associated with neurodegeneration, and this study therefore provides an important supplement to existing Danish normative data. Future studies should investigate the discriminative validity of the tests and the clinical utility of the presented reference data.
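A generic sketch of how regression-based reference data of this kind are typically built and used: regress the raw score on age, education and gender in the normative sample, then convert a new examinee's raw score into a demographically adjusted z-score. The data and every coefficient below are synthetic assumptions, not the study's actual equations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic normative sample: score declines with age, rises slightly with
# education, with a small gender difference (all coefficients invented).
n = 131
age = rng.uniform(60, 96, n)
educ = rng.uniform(7, 18, n)
male = rng.binomial(1, 0.5, n)
score = 40 - 0.25 * (age - 60) + 0.3 * educ - 1.5 * male + rng.normal(0, 3, n)

# Fit the normative regression: expected score given age, education, gender.
X = np.column_stack([np.ones(n), age, educ, male])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid_sd = np.std(score - X @ beta, ddof=X.shape[1])

def z_score(raw, age, educ, male):
    """Demographically adjusted z-score for a new examinee."""
    expected = np.array([1.0, age, educ, male]) @ beta
    return (raw - expected) / resid_sd

print(round(z_score(raw=25, age=80, educ=10, male=1), 2))
```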
159.
Process factor analysis (PFA) is a latent variable model for intensive longitudinal data that combines P-technique factor analysis and time series analysis. A goodness-of-fit test for PFA has so far been unavailable. In this paper, we propose a parametric bootstrap method for assessing model fit in PFA. We illustrate the test with an empirical data set in which 22 participants rated their affect every day over a period of 90 days. We also explore the Type I error rate and power of the parametric bootstrap test with simulated data.
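The general logic of a parametric bootstrap fit test, shown here with a deliberately simple stand-in model (a normal mean model checked with a skewness discrepancy) rather than the PFA model itself; the function names and the toy example are assumptions, not the paper's implementation.

```python
import numpy as np

def parametric_bootstrap_p(data, fit_model, simulate, discrepancy, n_boot=500, seed=0):
    """Generic parametric bootstrap test of model fit: fit the model, compute
    the observed discrepancy, then compare it with discrepancies computed on
    data sets simulated from the fitted model."""
    rng = np.random.default_rng(seed)
    theta_hat = fit_model(data)
    t_obs = discrepancy(data, theta_hat)
    t_boot = []
    for _ in range(n_boot):
        sim = simulate(theta_hat, data.shape[0], rng)
        t_boot.append(discrepancy(sim, fit_model(sim)))
    return float(np.mean(np.array(t_boot) >= t_obs))   # bootstrap p-value

# Toy example: test a N(mu, 1) model against skewed data using |skewness|.
rng = np.random.default_rng(1)
data = rng.exponential(size=200)                        # clearly non-normal
fit_model = lambda x: x.mean()
simulate = lambda mu, n, rng: rng.normal(mu, 1.0, n)
discrepancy = lambda x, mu: abs(np.mean(((x - mu) / x.std()) ** 3))
print(parametric_bootstrap_p(data, fit_model, simulate, discrepancy))
```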
160.
Latent change score (LCS) models are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS models when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS model. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and the autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean-level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS models can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
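A small simulation of the dual change score recursion that the study manipulates, in which each occasion's latent change is a person-specific constant change plus a proportional effect of the previous level; all parameter values below are arbitrary choices for illustration, not the study's simulation conditions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Dual change score generating recursion:
#   delta_t = s_i + beta * y_{i,t-1},   y_{i,t} = y_{i,t-1} + delta_t,
# with person-specific initial level and constant-change (slope) factor.
n_persons, n_waves = 200, 6
beta = -0.15                                   # autoproportion coefficient
level = rng.normal(50, 10, n_persons)          # initial latent level
slope = rng.normal(8, 2, n_persons)            # constant change (slope) factor

latent = np.zeros((n_persons, n_waves))
latent[:, 0] = level
for t in range(1, n_waves):
    delta = slope + beta * latent[:, t - 1]
    latent[:, t] = latent[:, t - 1] + delta

observed = latent + rng.normal(0, 3, latent.shape)   # add measurement error
print(np.round(observed.mean(axis=0), 1))            # mean trajectory across waves
```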