Sorted query results: 267 matches found (search time: 0 ms). Showing entries 131–140.
132.
A method is developed for finding the maximum likelihood estimates of the parameters in a multivariate normal model in which some of the component variables are observable only in polytomous form. The main device is a reparameterization that converts the corresponding log-likelihood function into an easily handled one. The maximum likelihood estimates are found by a Fletcher–Powell algorithm, and their standard errors are obtained from the information matrix. When the dimension of the random vector observable only in polytomous form is large, obtaining the maximum likelihood estimates is computationally expensive. A more efficient method, the partition maximum likelihood method, is therefore proposed. Both estimation methods are demonstrated on real and simulated data and compared by means of a simulation study.
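As a minimal sketch of the kind of estimation this abstract describes (not the authors' actual implementation), the code below fits a single polyserial-type correlation by maximum likelihood: one variable is observed continuously, the other only as a polytomous category cut from a latent normal. A tanh reparameterization keeps the correlation inside (-1, 1), and SciPy's BFGS quasi-Newton optimizer stands in for Fletcher–Powell; the thresholds are treated as known for brevity, whereas a full implementation would estimate them too.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, rho_true = 2000, 0.6
tau = np.array([-0.5, 0.8])            # thresholds cutting the latent variable

# simulate: x observed directly; y* latent, observed only as category 0/1/2
x = rng.standard_normal(n)
y_lat = rho_true * x + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)
y = np.digitize(y_lat, tau)

def negloglik(theta):
    rho = np.tanh(theta[0])            # reparameterize so rho stays in (-1, 1)
    s = np.sqrt(1 - rho**2)
    cuts = np.concatenate(([-np.inf], tau, [np.inf]))
    # P(category | x) from the conditional normal of the latent response
    lo = (cuts[y] - rho * x) / s
    hi = (cuts[y + 1] - rho * x) / s
    return -np.sum(np.log(norm.cdf(hi) - norm.cdf(lo) + 1e-300))

fit = minimize(negloglik, x0=[0.0], method="BFGS")
rho_hat = np.tanh(fit.x[0])
```

The inverse Hessian the optimizer returns (`fit.hess_inv`) approximates the inverse information matrix for the transformed parameter, from which a standard error follows by the delta method.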
133.
Computerized adaptive testing for cognitive diagnosis (CD-CAT) needs to be efficient and responsive in real time to meet the requirements of practical applications. For high-dimensional data, the number of latent classes to be recognized in a test grows exponentially with the number of attributes, which can make system response times long enough to adversely affect examinees and thus seriously reduce measurement efficiency. More importantly, the long CPU times and heavy memory usage of computation-intensive item selection in CD-CAT cannot fully meet practical needs. This paper proposes two new efficient selection strategies (HIA and CEL) for high-dimensional CD-CAT that address this issue by incorporating the max-marginals from the maximum a posteriori query and by integrating an ensemble learning approach into previous efficient selection methods, respectively. The performance of the proposed selection methods was compared with conventional selection methods using simulated and real item pools. The results showed that the proposed methods can significantly improve measurement efficiency, requiring about 1/2 to 1/200 of the conventional methods' computation time while retaining similar measurement accuracy. As the number of attributes and the size of the item pool increase, the computation-time advantage of the proposed methods becomes more pronounced.
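The HIA and CEL strategies themselves are not reproduced here, but the kind of conventional baseline they are compared against can be sketched. The toy code below, under an assumed DINA model with made-up Q-matrix and guessing/slip values, selects the next item by maximizing the expected reduction in posterior Shannon entropy over the 2^K attribute classes; its cost grows with 2^K, which is exactly the bottleneck the proposed methods target.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
K, J = 3, 20                                           # attributes, pool size
profiles = np.array(list(product([0, 1], repeat=K)))   # 2^K latent classes
Q = rng.integers(0, 2, size=(J, K))                    # hypothetical Q-matrix
Q[Q.sum(axis=1) == 0, 0] = 1                           # item measures >= 1 attribute
g, s = 0.1, 0.1                                        # DINA guess / slip

# eta[j, c] = True iff class c masters all attributes required by item j
eta = (profiles[None, :, :] >= Q[:, None, :]).all(axis=-1)
p = np.where(eta, 1 - s, g)                            # J x 2^K success probs

def next_item(posterior, administered):
    """Pick the unused item with the largest expected drop in posterior entropy."""
    H = -(posterior * np.log(posterior)).sum()
    best_j, best_gain = -1, -np.inf
    for j in set(range(J)) - administered:
        gain = 0.0
        for lik in (p[j], 1.0 - p[j]):                 # correct / incorrect
            m = (posterior * lik).sum()                # marginal response prob
            post = posterior * lik / m
            gain += m * (H + (post * np.log(post)).sum())
        if gain > best_gain:
            best_j, best_gain = j, gain
    return best_j

posterior = np.full(len(profiles), 1.0 / len(profiles))
first = next_item(posterior, set())
```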
134.
Delay discounting reflects the rate at which a reward loses its subjective value as a function of the delay to that reward. Many models have been proposed to measure delay discounting, and many comparisons have been made among them. We highlight the two-parameter delay discounting model popularized by Howard Rachlin by demonstrating two of its key practical features. The first is flexibility: the Rachlin model fits empirical discounting data closely. Second, compared with other available two-parameter discounting models, the Rachlin model has the advantage that unique best estimates of its parameters are easy to obtain across a wide variety of potential discounting patterns. We focus on this second feature in the context of maximum likelihood, showing the relative ease with which the Rachlin model can be used compared with the extreme care that other discounting models require, focusing on two illustrative cases that pass checks for data validity. Both features are demonstrated via a reanalysis of discounting data the authors have previously used for model-selection purposes.
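A minimal illustration of fitting the Rachlin hyperboloid V = A / (1 + k·D^s) to discounting data: the delays and subjective values below are invented, and the least-squares fit shown coincides with maximum likelihood only under an assumed i.i.d. Gaussian error.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical delays (days) and subjective values as a fraction of the amount
delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
values = np.array([0.95, 0.80, 0.55, 0.35, 0.25, 0.15])

def rachlin(d, k, s):
    """Rachlin two-parameter hyperboloid, with the amount A normalized to 1."""
    return 1.0 / (1.0 + k * d**s)

(k_hat, s_hat), _ = curve_fit(rachlin, delays, values, p0=[0.05, 1.0],
                              bounds=([1e-6, 1e-6], [10.0, 5.0]))
preds = rachlin(delays, k_hat, s_hat)
```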
135.
Methods are proposed for the treatment of item non-response in attitudinal scales and large-scale assessments under the pairwise likelihood (PL) estimation framework and a missing-at-random (MAR) mechanism. Under a full-information likelihood estimation framework and MAR, ignoring the missing-data mechanism does not lead to biased estimates. However, this is not the case for pseudo-likelihood approaches such as PL. We develop and study the performance of three strategies for incorporating missing values into confirmatory factor analysis under the PL framework: the complete-pairs (CP), available-cases (AC), and doubly robust (DR) approaches. CP and AC require only a model for the observed data, and standard errors are easy to compute. The DR version of PL estimation requires a predictive model for the missing responses given the observed ones and is computationally more demanding than AC and CP. A simulation study is used to compare the proposed methods, which are then employed to analyze the UK data on numeracy and literacy collected as part of the OECD Survey of Adult Skills.
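How pairwise strategies use incomplete rows can be illustrated with ordinary covariances: listwise deletion discards any row with a missing entry, whereas a pairwise computation keeps every case in which both variables of a pair are observed. This is only a sketch of the data usage, not the PL estimator itself, and the data and (MCAR) missingness below are made up.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500
z = rng.standard_normal((n, 1))
X = z + 0.5 * rng.standard_normal((n, 3))   # three correlated indicators
df = pd.DataFrame(X, columns=["y1", "y2", "y3"])

# impose item non-response (MCAR here, purely for illustration)
mask = rng.random(df.shape) < 0.2
df = df.mask(mask)

listwise = df.dropna().cov()   # only fully observed rows survive
pairwise = df.cov()            # pandas computes each entry from its available pairs
```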
136.
This study proposes a multiple-group cognitive diagnosis model to account for the fact that students in different groups may use distinct attributes, or use the same attributes but in different manners (e.g., conjunctive, disjunctive, or compensatory), to solve problems. Based on the proposed model, this study systematically investigates the performance of the likelihood ratio (LR) test and the Wald test in detecting differential item functioning (DIF). A forward anchor-item search procedure is also proposed to identify a set of anchor items with invariant item parameters across groups. Results showed that the LR and Wald tests with the forward anchor-item search algorithm produced better-calibrated Type I error rates than the ordinary LR and Wald tests, especially when items were of low quality. A real data set is also analyzed to illustrate the use of these DIF detection procedures.
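The LR test at the heart of such DIF procedures reduces to a chi-square comparison of two nested fits. The sketch below shows the generic computation; the log-likelihood values are hypothetical, and the degrees of freedom equal the number of item parameters freed across groups.

```python
from scipy.stats import chi2

def lr_test(loglik_compact, loglik_augmented, df):
    """Likelihood-ratio test: the augmented model frees the studied item's
    parameters across groups; df = number of freed parameters."""
    stat = 2.0 * (loglik_augmented - loglik_compact)
    p = chi2.sf(stat, df)
    return stat, p

# hypothetical fitted log-likelihoods for one studied item (2 freed parameters)
stat, p = lr_test(-1540.2, -1536.9, df=2)
```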
137.
Most partitioning methods used in psychological research seek to produce homogeneous groups (i.e., groups with low intra-group dissimilarity). There are also applications, however, where the goal is to produce heterogeneous groups (i.e., groups with high intra-group dissimilarity). Examples of these anticlustering contexts include the construction of stimulus sets, the formation of student groups, the assignment of employees to project work teams, and the assembly of test forms from a bank of items. Unfortunately, most commercial software packages are not equipped to accommodate the objective criteria and constraints that commonly arise in anticlustering problems. Two important objective criteria for anticlustering based on information in a dissimilarity matrix are a diversity measure based on within-cluster sums of dissimilarities and a dispersion measure based on within-cluster minimum dissimilarities. In many instances it is possible to find a partition that provides a large improvement in one of these criteria with little (or no) sacrifice in the other, so exploring the trade-offs between them is of significant value. Accordingly, the key contribution of this paper is the formulation of a bicriterion optimization problem for anticlustering based on the diversity and dispersion criteria, along with heuristics to approximate the Pareto-efficient set of partitions. A motivating example and a computational study are provided within the framework of test assembly.
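The two criteria are simple to compute from a dissimilarity matrix. The sketch below (not the paper's heuristics; data and group sizes are made up) defines diversity and dispersion and runs a naive single pass of pairwise exchanges that greedily improves diversity while keeping group sizes fixed.

```python
import numpy as np

def diversity(D, labels):
    """Sum of within-cluster pairwise dissimilarities (to be maximized)."""
    total = 0.0
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        sub = D[np.ix_(idx, idx)]
        total += sub[np.triu_indices_from(sub, k=1)].sum()
    return total

def dispersion(D, labels):
    """Minimum within-cluster dissimilarity (to be maximized)."""
    best = np.inf
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        if len(idx) > 1:
            sub = D[np.ix_(idx, idx)]
            best = min(best, sub[np.triu_indices_from(sub, k=1)].min())
    return best

rng = np.random.default_rng(3)
X = rng.random((12, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # dissimilarity matrix
labels = np.repeat([0, 1, 2], 4)                       # three groups of four

# naive exchange pass: accept cross-group swaps that raise diversity
for i in range(12):
    for j in range(i + 1, 12):
        if labels[i] != labels[j]:
            trial = labels.copy()
            trial[i], trial[j] = trial[j], trial[i]
            if diversity(D, trial) > diversity(D, labels):
                labels = trial
```

Because each swap exchanges one member between two groups, group sizes are preserved, which is the kind of constraint commercial clustering software rarely supports directly.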
138.
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when the statistic's null distribution deviates from a χ2 distribution, as typically happens when the data do not follow a multivariate normal distribution. Unfortunately, existing studies of this issue tend to focus on only a few methods and neglect the majority of alternatives in the statistics literature, and existing simulation studies typically consider only non-normal data distributions that either satisfy asymptotic robustness or lead to an asymptotically scaled χ2 distribution. In this work we conduct a comprehensive study involving both methods typical in SEM and less well-known methods from the statistics literature. We also propose several novel non-normal data distributions that are qualitatively different from those widely used in existing studies. We find that several under-studied methods give the best performance under specific conditions, but the Satorra–Bentler method remains the most viable for most situations.
139.
When considering dyadic data, one question is whether the roles of the two dyad members can be considered equal. This question may be answered empirically using indistinguishability tests in the actor–partner interdependence model. In this paper several issues related to such indistinguishability tests are discussed: the difference between maximum likelihood and restricted maximum likelihood based tests for equality of variance parameters; the choice between the structural equation modelling and multilevel modelling frameworks; and the use of sequential testing rather than one global test for a set of indistinguishability tests. Based on simulation studies, we provide guidelines for best practice. All types of tests are illustrated with cross-sectional and longitudinal data and corroborated with corresponding R code.
140.
Recent research has shown that over-extraction of latent classes can occur in Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when it is estimated with maximum likelihood methods (conditional, marginal, and joint). Three information-criterion fit indices (the Akaike information criterion, the Bayesian information criterion, and the sample-size-adjusted BIC) were used in a simulation study and an empirical study. The findings showed that the spurious-latent-class problem arose with marginal and joint maximum likelihood estimation, whereas conditional maximum likelihood estimation showed no over-extraction problem with non-normal ability distributions.
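The three indices compared in this study can all be computed directly from a solution's log-likelihood, parameter count, and sample size; the sample-size-adjusted BIC replaces log(n) with log((n + 2) / 24). The fitted values below are hypothetical, purely to show how class enumeration proceeds.

```python
import numpy as np

def info_criteria(loglik, n_params, n_obs):
    """AIC, BIC, and sample-size-adjusted BIC for one fitted class solution."""
    aic = -2 * loglik + 2 * n_params
    bic = -2 * loglik + n_params * np.log(n_obs)
    sabic = -2 * loglik + n_params * np.log((n_obs + 2) / 24)
    return aic, bic, sabic

# hypothetical log-likelihoods and parameter counts for 1-, 2-, 3-class solutions
fits = {1: (-5210.4, 21), 2: (-5090.8, 43), 3: (-5085.5, 65)}
bics = {c: info_criteria(ll, k, n_obs=1000)[1] for c, (ll, k) in fits.items()}
best = min(bics, key=bics.get)   # class count with the smallest BIC
```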
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号