61.
This research concerns the estimation of polychoric correlations in the context of fitting structural equation models to observed ordinal variables by multistage estimation. The first main contribution is to propose and evaluate a Monte Carlo estimator for the asymptotic covariance matrix (ACM) of the polychoric correlation estimates. In multistage estimation the ACM plays a prominent role: overall test statistics, derived fit indices, and parameter standard errors all depend on it. The ACM, however, must itself be estimated. Established approaches use a sample-based version, which can yield poor estimates with small samples. A simulation study demonstrates that the proposed Monte Carlo estimator can be more efficient than its sample-based counterpart, leading to better calibration of established test statistics, particularly with small samples. The second main contribution is a further exploration of the consequences of violating the normality assumption for the underlying response variables. We show that the consequences depend on the type of nonnormality and on the number and location of thresholds. The simulation study also demonstrates that overall test statistics have little power to detect the studied forms of nonnormality, regardless of the ACM estimator.
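The abstract includes no code, but the core idea of a Monte Carlo ACM estimator can be sketched for a single polychoric correlation. The Python sketch below is my own illustration, not the authors' implementation: it uses a standard two-step estimator (thresholds from marginal proportions, then maximum likelihood for the correlation), and the names `polychoric` and `mc_variance` are hypothetical.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def thresholds(x, n_cat):
    """Marginal thresholds from cumulative proportions (clipped for stability)."""
    p = np.bincount(x, minlength=n_cat) / len(x)
    return np.clip(norm.ppf(np.cumsum(p)[:-1]), -8, 8)

def cell_probs(rho, tau1, tau2):
    """Bivariate-normal rectangle probabilities for each contingency-table cell."""
    t1 = np.concatenate([[-8.0], tau1, [8.0]])
    t2 = np.concatenate([[-8.0], tau2, [8.0]])
    cov = [[1.0, rho], [rho, 1.0]]
    F = np.array([[multivariate_normal.cdf([a, b], cov=cov) for b in t2] for a in t1])
    return np.diff(np.diff(F, axis=0), axis=1)

def polychoric(x, y, k1, k2):
    """Two-step polychoric estimate: thresholds first, then ML for rho."""
    tau1, tau2 = thresholds(x, k1), thresholds(y, k2)
    counts = np.histogram2d(x, y, bins=[np.arange(k1 + 1), np.arange(k2 + 1)])[0]
    nll = lambda r: -np.sum(counts * np.log(np.clip(cell_probs(r, tau1, tau2), 1e-12, 1)))
    return minimize_scalar(nll, bounds=(-0.95, 0.95), method="bounded").x, tau1, tau2

def mc_variance(rho_hat, tau1, tau2, n, reps=100, rng=np.random.default_rng(1)):
    """Monte Carlo sampling-variance estimate: simulate from the fitted model,
    re-estimate rho in each replication, take the empirical variance."""
    cov = [[1.0, rho_hat], [rho_hat, 1.0]]
    est = []
    for _ in range(reps):
        z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        x = np.searchsorted(tau1, z[:, 0])
        y = np.searchsorted(tau2, z[:, 1])
        est.append(polychoric(x, y, len(tau1) + 1, len(tau2) + 1)[0])
    return np.var(est, ddof=1)

# Demo: 3x2 ordinal data generated with true rho = 0.5.
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=200)
x, y = np.searchsorted([-0.5, 0.5], z[:, 0]), np.searchsorted([0.0], z[:, 1])
rho, t1, t2 = polychoric(x, y, 3, 2)
print(rho, mc_variance(rho, t1, t2, n=200))
```

A sample-based ACM would be computed from the observed cell frequencies of the single dataset in hand; the Monte Carlo version replaces that one noisy estimate with an average over simulated replications from the fitted model.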
62.
This article is a how-to guide to Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology and related fields who have a working knowledge of Bayes' theorem and conditional probability and who have experience writing computer programs in the statistical language R. The overall goals are to provide an accessible, self-contained tutorial along with a practical computational tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed through a hypothetical, worked-out example, and we show how Bayesian computation can be broken down into a series of simpler calculations that can then be assembled into a computationally more complex model. The details are described far more explicitly than is typical in elementary introductions to Bayesian modeling, so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The program is then applied to a large, real-world data set and explained line by line. We outline the general steps for extending these considerations to other methodological applications, and we conclude with suggestions for further reading.
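The article's program is written in R; as a language-neutral illustration, here is a minimal Python sketch of a Gibbs sampler for an LCA with binary items and conjugate Dirichlet/Beta priors. The function name `gibbs_lca` and the specific priors are assumptions of this sketch, not the article's code.

```python
import numpy as np

def gibbs_lca(y, n_classes, n_iter=2000, seed=0):
    """Gibbs sampler for latent class analysis with binary items.
    y: (n, J) array of 0/1 responses. Conjugate priors: Dirichlet(1) on the
    class weights, Beta(1, 1) on the item-response probabilities."""
    rng = np.random.default_rng(seed)
    n, J = y.shape
    C = n_classes
    pi = np.full(C, 1.0 / C)
    theta = rng.uniform(0.25, 0.75, size=(C, J))
    keep_pi, keep_theta = [], []
    for it in range(n_iter):
        # 1. Sample class memberships from their full conditional.
        logp = np.log(pi) + y @ np.log(theta.T) + (1 - y) @ np.log(1 - theta.T)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (p.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
        # 2. Sample class weights from Dirichlet(1 + class counts).
        pi = rng.dirichlet(1 + np.bincount(z, minlength=C))
        # 3. Sample item probabilities from Beta(1 + successes, 1 + failures).
        for c in range(C):
            yc = y[z == c]
            theta[c] = rng.beta(1 + yc.sum(axis=0), 1 + len(yc) - yc.sum(axis=0))
        if it >= n_iter // 2:  # discard the first half as burn-in
            keep_pi.append(pi.copy()); keep_theta.append(theta.copy())
    return np.mean(keep_pi, axis=0), np.mean(keep_theta, axis=0)

# Toy data: two latent classes with distinct response profiles on 8 binary items.
rng = np.random.default_rng(7)
z_true = rng.integers(0, 2, 300)
profiles = np.array([[0.8] * 4 + [0.2] * 4, [0.2] * 4 + [0.8] * 4])
y = (rng.random((300, 8)) < profiles[z_true]).astype(int)
pi_hat, theta_hat = gibbs_lca(y, n_classes=2)
```

Note that the posterior means here are computed naively; a real analysis must address label switching before summarizing the draws.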
63.
Current modeling of response times on test items has been strongly influenced by the paradigm of experimental reaction-time research in psychology. For instance, some models have a parameter structure chosen to represent a speed-accuracy tradeoff, while others equate speed directly with response time. Several response-time models also seem unclear as to the level of parametrization they represent. A hierarchical framework for modeling speed and accuracy on test items is presented as an alternative to these models. The framework allows a “plug-and-play approach,” with alternative choices of models for the response and response-time distributions as well as for the distributions of their parameters. Bayesian treatment of the framework with Markov chain Monte Carlo (MCMC) computation facilitates the approach. Use of the framework is illustrated for the choice of a normal-ogive response model, a lognormal model for the response times, and multivariate normal models for their parameters, with Gibbs sampling from the joint posterior distribution. This study received funding from the Law School Admission Council (LSAC). The opinions and conclusions contained in this paper are those of the author and do not necessarily reflect the policy and position of LSAC. The author is indebted to the American Institute of Certified Public Accountants for the data set in the empirical example and to Rinke H. Klein Entink for his computational assistance.
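To make the “plug-and-play” structure concrete, here is a hedged generative sketch of the instantiation the abstract describes: a normal-ogive model for accuracy, a lognormal model for response times, and a bivariate normal population distribution for the person parameters. The parameter values and function name are illustrative only; the Gibbs sampler for the joint posterior is beyond a short sketch.

```python
import numpy as np
from scipy.stats import norm

def simulate_speed_accuracy(n_persons=500, n_items=30, rho=0.4, seed=0):
    """Generate responses and response times under a hierarchical speed-accuracy
    model: normal-ogive accuracy, lognormal response times, and person ability
    (theta) and speed (tau) drawn from a bivariate normal with correlation rho."""
    rng = np.random.default_rng(seed)
    # Person parameters: correlated ability and speed.
    theta, tau = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n_persons).T
    # Item parameters: discrimination a, difficulty b,
    # time intensity beta (mean log-seconds), time precision alpha.
    a = rng.lognormal(0, 0.3, n_items)
    b = rng.normal(0, 1, n_items)
    beta = rng.normal(4.0, 0.5, n_items)
    alpha = rng.lognormal(0.5, 0.2, n_items)
    # Normal-ogive response probabilities; lognormal response times.
    p = norm.cdf(a * (theta[:, None] - b))
    y = rng.random((n_persons, n_items)) < p
    log_t = rng.normal(beta - tau[:, None], 1 / alpha)
    return y.astype(int), np.exp(log_t)

responses, times = simulate_speed_accuracy()
```

Swapping in a different response model or response-time distribution changes only the corresponding lines, which is exactly the modularity the framework is designed to provide.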
64.
Approximately counting and sampling knowledge states from a knowledge space is a problem of interest for both applied and theoretical reasons. However, many knowledge spaces used in practice are far too large for standard statistical counting and estimation techniques to be useful. In this work we therefore use an alternative technique for counting and sampling knowledge states, based on a procedure variously known as subset simulation, the Holmes–Diaconis–Ross method, or multilevel splitting. We make extensive use of Markov chain Monte Carlo methods, in particular Gibbs sampling, and we analyse and test the accuracy of our results in numerical experiments.
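As a toy illustration of the counting procedure, the sketch below assumes (my assumption, not necessarily the paper's setting) that the knowledge space is induced by a prerequisite relation, so that states are exactly the subsets with zero prerequisite violations. Single-bit Metropolis flips stand in for the paper's Gibbs updates; both leave the uniform distribution on each level set invariant.

```python
import numpy as np

def violations(s, prereq):
    """Number of prerequisite pairs (p, q) with q in the subset but p missing."""
    return sum(1 for p, q in prereq if s[q] and not s[p])

def count_states_hdr(m, prereq, levels, n_mcmc=5000, seed=0):
    """Holmes-Diaconis-Ross / multilevel-splitting estimate of the number of
    knowledge states (zero-violation subsets) among the 2^m subsets of an
    m-item domain. `levels` is a decreasing sequence of violation thresholds
    ending in 0; the rare-event probability P(V = 0) is built up as a product
    of the estimated conditional probabilities P(V <= next | V <= current)."""
    rng = np.random.default_rng(seed)
    log_p = 0.0
    bound = np.inf                       # current constraint: violations <= bound
    s = np.zeros(m, dtype=bool)          # the empty set is always a state
    for next_bound in levels:
        hits = 0
        for _ in range(n_mcmc):
            # Flip one random coordinate; reject if the constraint is broken.
            i = rng.integers(m)
            s[i] = not s[i]
            if violations(s, prereq) > bound:
                s[i] = not s[i]
            hits += violations(s, prereq) <= next_bound
        log_p += np.log(hits / n_mcmc)
        bound = next_bound
        # Restart inside the new level if needed (the empty set always works).
        if violations(s, prereq) > bound:
            s = np.zeros(m, dtype=bool)
    return np.exp(log_p + m * np.log(2))  # estimated number of states

# Example: a 10-item chain, each item requiring its predecessor.
prereq = [(i, i + 1) for i in range(9)]
print(count_states_hdr(10, prereq, levels=[3, 1, 0]))  # true count: 11
```

No burn-in handling is included, for brevity; the point is only the nested-level structure that makes the rare event P(V = 0) estimable when direct uniform sampling would almost never hit a state.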
65.
Exaggerated effect size estimates may be reported either because researchers accept statistically significant results when power is inadequate, or because repeated-measures approaches aggregate or average multiple items or trials into a single dependent measure. Monte Carlo simulations with a small, medium, or large input effect size were conducted on multiple items or trials that were either averaged or aggregated to create a single dependent measure. Alpha was set at .05, and the trials were assessed over item or trial correlations ranging from 0 to 1. The simulations showed that observed effect size averages, and the power to accept these estimates as statistically significant, increased substantially with the number of trials or items. Overestimation was mitigated as correlations between trials increased, but remained substantial in some cases. The implications of these findings for meta-analyses and different research scenarios are discussed.
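A minimal version of the described simulation is easy to reproduce. The sketch below (with illustrative parameter values, not those of the article) draws correlated trials, averages them into a single dependent measure, tests at α = .05, and reports the mean significant effect size alongside power.

```python
import numpy as np
from scipy import stats

def sig_effect_inflation(d=0.2, n=20, n_trials=10, r=0.3, reps=5000, seed=0):
    """Simulate averaging correlated trials into one DV, test at alpha = .05,
    and compare the mean observed d among significant results to the true d."""
    rng = np.random.default_rng(seed)
    # Compound-symmetric covariance between trials with correlation r.
    cov = np.full((n_trials, n_trials), r) + (1 - r) * np.eye(n_trials)
    sig_d = []
    for _ in range(reps):
        x = rng.multivariate_normal(np.full(n_trials, d), cov, size=n)
        m = x.mean(axis=1)                        # average over trials/items
        t, p = stats.ttest_1samp(m, 0.0)
        if p < 0.05:
            sig_d.append(m.mean() / m.std(ddof=1))  # observed Cohen's d
    return np.mean(sig_d), len(sig_d) / reps        # inflated mean d, power

print(sig_effect_inflation())  # mean significant d is well above the true 0.2
```

Averaging k equicorrelated trials shrinks the standard deviation of the composite to sqrt(r + (1 − r)/k), so the observed standardized effect exceeds the per-trial effect even before the significance filter adds selection bias, and the inflation shrinks as r approaches 1.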
66.
The Pearson r-from-Z approximation estimates the sample correlation (as an effect size measure) as the ratio of two quantities: the standard normal deviate (Z-score) corresponding to a one-tailed p-value, divided by the square root of the total (pooled) sample size. The formula is useful in meta-analytic work when research reports contain minimal statistical information. Although simple to implement, the accuracy of the Pearson r-from-Z approximation has not been empirically evaluated. To address this omission, we performed a series of Monte Carlo simulations. The results indicated that in some cases the formula did accurately estimate the sample correlation. However, when the sample size was very small (N = 10) and effect sizes were small to small-moderate (ds of 0.1 and 0.3), the approximation was very inaccurate. Detailed figures that provide guidance on when the Pearson r-from-Z formula is likely to yield valid inferences are presented.
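The approximation itself is a one-liner, r ≈ Z/√N, and its accuracy can be checked directly by simulation. The sketch below (my own, with illustrative parameter values) compares it in a two-group design to the point-biserial correlation recovered exactly from the t statistic.

```python
import numpy as np
from scipy import stats

def r_from_z(p_one_tailed, n_total):
    """Pearson r-from-Z approximation: r = Z / sqrt(N), where Z is the standard
    normal deviate for a one-tailed p-value and N the total (pooled) sample size."""
    return stats.norm.isf(p_one_tailed) / np.sqrt(n_total)

def check_accuracy(d=0.3, n_per_group=5, reps=10000, seed=0):
    """Compare the approximation to the exact point-biserial r in a two-group design."""
    rng = np.random.default_rng(seed)
    approx, actual = [], []
    for _ in range(reps):
        g1 = rng.normal(d, 1, n_per_group)
        g2 = rng.normal(0, 1, n_per_group)
        t, p_two = stats.ttest_ind(g1, g2)
        p_one = p_two / 2 if t > 0 else 1 - p_two / 2
        approx.append(r_from_z(p_one, 2 * n_per_group))
        # Point-biserial r computed directly from t: r = t / sqrt(t^2 + df).
        df = 2 * n_per_group - 2
        actual.append(t / np.sqrt(t**2 + df))
    return np.mean(approx), np.mean(actual)

print(check_accuracy())  # with N = 10 and d = 0.3 the approximation is poor
```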
67.
A complete survey of a network in a large population may be prohibitively difficult and costly, so it is important to be able to estimate network models from data collected under various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, in which the members of an initial sample are asked to nominate their network partners, those partners are then traced and asked to nominate their own partners, and so on. We assume an exponential random graph model (ERGM) of a particular parametric form and outline a conditional maximum likelihood estimation procedure for obtaining estimates of the ERGM parameters. This procedure is intended to complement the likelihood approach developed by Handcock and Gile (2010) by providing a practical means of estimation when the size of the complete network is unknown and/or the complete network is very large. We report the outcome of a simulation study with a known model, designed to assess the impact of initial sample size, population size, and number of sampling waves on the properties of the estimates. We conclude with a discussion of potential applications and further developments of the approach.
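The sampling design itself is simple to express in code. The sketch below (pure Python, a toy illustration of mine; the conditional maximum likelihood step for the ERGM is far beyond a few lines and is omitted) collects a snowball sample wave by wave from an adjacency structure.

```python
import random

def snowball_sample(adj, seed_nodes, n_waves):
    """Link-tracing snowball sample: members of the seed set nominate their
    network partners, who are traced and asked in turn, for n_waves waves.
    adj: dict mapping each node to the set of its network partners.
    Returns the list of waves (wave 0 is the initial sample)."""
    sampled = set(seed_nodes)
    waves = [set(seed_nodes)]
    for _ in range(n_waves):
        nominations = set().union(*(adj[v] for v in waves[-1])) - sampled
        if not nominations:
            break
        waves.append(nominations)
        sampled |= nominations
    return waves

# Toy example: a sparse random graph on 200 nodes, 5 seeds, 2 tracing waves.
random.seed(1)
nodes = range(200)
adj = {v: set() for v in nodes}
for u in nodes:
    for v in nodes:
        if u < v and random.random() < 0.02:
            adj[u].add(v); adj[v].add(u)
print([len(w) for w in snowball_sample(adj, random.sample(list(nodes), 5), 2)])
```

The wave structure returned here is exactly what the conditional likelihood conditions on: estimation proceeds given the observed waves rather than the unobserved complete network.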
68.
Experimentation is ubiquitous in psychology and fundamental to the advancement of its science, and one of the biggest challenges researchers face is designing experiments that can conclusively discriminate among the theoretical hypotheses or models under investigation. Recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond.
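A minimal grid-based sketch conveys the core computation: for each candidate design, score the expected information gain about which model is true, run the most informative design, and update the model posterior. The forgetting-curve models, priors, and function names below are illustrative assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def entropy(p):
    """Binary entropy, elementwise."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def best_design(designs, models, prior):
    """One step of grid-based adaptive design optimization: pick the design
    maximizing the mutual information between the model indicator and the
    binary response, i.e. the expected reduction in model uncertainty.
    models: list of functions mapping a design to P(correct)."""
    utilities = []
    for d in designs:
        p_m = np.array([m(d) for m in models])       # P(correct | model, design)
        p_marg = prior @ p_m                          # P(correct | design)
        utilities.append(entropy(p_marg) - prior @ entropy(p_m))
    return designs[int(np.argmax(utilities))], utilities

def update_prior(prior, models, d, y):
    """Bayes update of the model posterior after observing response y at design d."""
    like = np.array([m(d) if y else 1 - m(d) for m in models])
    post = prior * like
    return post / post.sum()

# Toy example: discriminate exponential vs. power forgetting curves.
exp_model = lambda t: 0.9 * np.exp(-0.2 * t)
pow_model = lambda t: 0.9 * (1 + t) ** -0.5
designs = np.linspace(0.5, 20, 40)                    # candidate retention intervals
prior = np.array([0.5, 0.5])
for trial in range(3):
    d, _ = best_design(designs, [exp_model, pow_model], prior)
    y = np.random.default_rng(trial).random() < exp_model(d)  # data from exp model
    prior = update_prior(prior, [exp_model, pow_model], d, y)
print(prior)  # posterior over the two models after three adaptive trials
```

Real applications marginalize the utility over a prior on each model's parameters rather than fixing them as done here, but the select-observe-update loop is the same.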
69.
Practitioners in the sciences have used the “flow” of knowledge (post-test score minus pre-test score) to measure learning in the classroom for the past 50 years. Walstad and Wagner, and Smith and Wagner, moved this practice forward by disaggregating the flow of knowledge and accounting for student guessing. Their estimates, however, are sensitive to misspecification of the probability of guessing correctly. This work provides guidance to practitioners and researchers facing this problem. We introduce a transformed measure of true positive learning that, under some knowable conditions, performs better when students' ability to guess correctly is misspecified, and that converges to Hake's normalized learning gain estimator under certain conditions. We then use simulations to compare the accuracy of two estimation techniques under various violations of their assumptions. Using recursive-partitioning trees fitted to our simulation results, we provide practitioners with concrete guidance based on a set of yes/no questions.
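The underlying quantities are simple to state in code. In the sketch below, `corrected_gain` is a simple stand-in I constructed, not necessarily the article's transformed measure; it illustrates how a guessing correction composes with Hake's normalized gain g = (post − pre)/(1 − pre) for scores expressed as proportions.

```python
def corrected_score(score, p_guess):
    """Proportion truly known, correcting for guessing: if a student knows a
    fraction k of the material and guesses the rest, score = k + (1 - k) * p_guess."""
    return (score - p_guess) / (1 - p_guess)

def hake_gain(pre, post):
    """Hake's normalized learning gain: fraction of possible improvement realized."""
    return (post - pre) / (1 - pre)

def corrected_gain(pre, post, p_guess):
    """Normalized gain computed on guessing-corrected scores (a stand-in for the
    transformed true-positive-learning measure discussed in the article)."""
    return hake_gain(corrected_score(pre, p_guess), corrected_score(post, p_guess))

pre, post = 0.40, 0.70
print(hake_gain(pre, post))             # 0.50
print(corrected_gain(pre, post, 0.25))  # also 0.50
```

The two printed values agree because the normalized gain is invariant under this linear guessing correction, which is consistent with the convergence to Hake's estimator mentioned above; the estimators diverge once guessing is misspecified or corrected nonlinearly.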
70.
Relapse is the recovery of a previously suppressed response. Animal models have been useful in examining the mechanisms underlying relapse (e.g., reinstatement, renewal, reacquisition, resurgence). However, there are several challenges to analyzing relapse data with traditional approaches. For example, null hypothesis significance testing is commonly used to determine whether relapse has occurred, but this method requires several a priori assumptions about the data, as well as a large sample size for between-subjects comparisons or repeated testing for within-subjects comparisons. Monte Carlo methods may represent an improved analytic technique, because they require no prior assumptions, permit smaller sample sizes, and can be tailored to account for all of the data from an experiment rather than some limited subset. In the present study, we reanalyzed three studies of relapse (Berry, Sweeney, & Odum, 2014; Galizio et al., 2018; Odum & Shahan, 2004) using Monte Carlo techniques to determine whether relapse occurred and whether response rates differed as a function of relevant independent variables (such as group membership or schedule of reinforcement). These reanalyses supported the previous findings. Finally, we provide general recommendations for using Monte Carlo methods in studies of relapse.
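As one concrete, deliberately simple example of a Monte Carlo test for relapse, the sketch below assumes (purely for illustration; this is my stand-in, not the specific procedure of the reanalyses) a null model in which test-phase responding continues as a Poisson process at the final-extinction rate. The Monte Carlo p-value is the proportion of simulated counts at least as large as the observed count.

```python
import numpy as np

def mc_relapse_test(ext_rate, test_count, minutes, reps=100_000, seed=0):
    """Monte Carlo test for relapse in a single subject under a Poisson null:
    with no relapse, test-phase responses occur at the rate observed in the
    final extinction session. Returns the proportion of simulated test-phase
    counts at least as extreme as the observed one (add-one corrected)."""
    rng = np.random.default_rng(seed)
    sims = rng.poisson(ext_rate * minutes, size=reps)
    return (np.sum(sims >= test_count) + 1) / (reps + 1)

# Example: 0.5 responses/min at the end of extinction; 25 responses observed
# in a 30-minute reinstatement test.
print(mc_relapse_test(ext_rate=0.5, test_count=25, minutes=30))  # small p
```

A small p-value indicates test-phase responding inconsistent with continued extinction-level behavior; the same resampling logic extends to group comparisons by permuting group labels.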