131.
A nonparametric technique based on the Hamming distance is proposed in this research by recognizing that once the attribute vector is known, or correctly estimated with high probability, one can determine the item-by-attribute vectors for new items undergoing calibration. We consider the setting where Q is known for a large item bank, and the q-vectors of additional items are estimated. The method is studied in simulation under a wide variety of conditions, and is illustrated with the Tatsuoka fraction subtraction data. A consistency theorem is developed giving conditions under which nonparametric Q calibration can be expected to work.
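The calibration idea can be sketched in a few lines. Everything concrete below (the conjunctive DINA-style response rule, the exhaustive search over candidate q-vectors, the toy attribute patterns) is an illustrative assumption, not the paper's actual procedure:

```python
from itertools import product

def ideal_response(alpha, q):
    # Conjunctive (DINA-style) rule: correct iff every required attribute is mastered.
    return int(all(a >= r for a, r in zip(alpha, q)))

def calibrate_q(alphas, responses, n_attrs):
    # Pick the non-zero candidate q-vector whose ideal responses lie at
    # minimum Hamming distance from the observed responses.
    best_q, best_dist = None, None
    for q in product([0, 1], repeat=n_attrs):
        if not any(q):
            continue
        dist = sum(ideal_response(a, q) != x for a, x in zip(alphas, responses))
        if best_dist is None or dist < best_dist:
            best_q, best_dist = q, dist
    return best_q

# Noise-free demo: recover a known q-vector from five examinees.
alphas = [(1, 1, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1), (0, 0, 1)]
responses = [ideal_response(a, (1, 1, 0)) for a in alphas]
print(calibrate_q(alphas, responses, 3))  # -> (1, 1, 0)
```

With noisy responses the minimum-distance q-vector is no longer guaranteed to be the true one, which is why the paper's consistency theorem needs conditions on the examinee population.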
132.
Forensic evidence often involves an evaluation of whether two impressions were made by the same source, such as whether a fingerprint from a crime scene has detail in agreement with an impression taken from a suspect. Human experts currently outperform computer-based comparison systems, but the strength of the evidence exemplified by the observed detail in agreement must be evaluated against the possibility that some other individual may have created the crime scene impression. Therefore, the strongest evidence comes from features in agreement that are also not shared with other impressions from other individuals. We characterize the nature of human expertise by applying two extant metrics to the images used in a fingerprint recognition task and use eye gaze data from experts to both tune and validate the models. The Attention via Information Maximization (AIM) model (Bruce & Tsotsos, 2009) quantifies the rarity of regions in the fingerprints to determine diagnosticity for purposes of excluding alternative sources. The CoVar model (Karklin & Lewicki, 2009) captures relationships between low-level features, mimicking properties of the early visual system. Both models produced classification and generalization performance in the 75%–80% range when classifying where experts tend to look. A validation study using regions identified by the AIM model as diagnostic demonstrates that human experts perform better when given regions of high diagnosticity. The computational nature of the metrics may help guard against wrongful convictions, as well as provide a quantitative measure of the strength of evidence in casework.
133.
Computer simulation through an error-statistical lens
Wendy S. Parker. Synthese, 2008, 163(3): 371-384
After showing how Deborah Mayo’s error-statistical philosophy of science might be applied to address important questions about the evidential status of computer simulation results, I argue that an error-statistical perspective offers an interesting new way of thinking about computer simulation models and has the potential to significantly improve the practice of simulation model evaluation. Though intended primarily as a contribution to the epistemology of simulation, the analysis also serves to fill in details of Mayo’s epistemology of experiment.
134.
Most agree that, if all else is equal, patients should be provided with enough information about proposed medical therapies to allow them to make an informed decision about what, if anything, they wish to receive. This is the principle of informed choice; it is closely related to the notion of informed consent. Contemporary clinical trials are analysed according to classical statistics. This paper puts forward the argument that classical statistics does not provide the right sort of information for informing choice. The notion of probability used by classical statistics is complex and difficult to communicate. Therapeutic decisions are best informed by statistical approaches that assign probabilities to hypotheses about the benefits and harms of therapies. Bayesian approaches to statistical inference provide such probabilities.
Adam La Caze
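A minimal sketch of the kind of statement the author argues for: a direct posterior probability about a therapy's response rate. The uniform Beta(1,1) prior, the grid approximation, and the example counts are all illustrative assumptions, not taken from the paper:

```python
import math

def posterior_prob_exceeds(successes, trials, threshold=0.5, grid=10000):
    # Beta(1,1) prior; posterior density is proportional to p^s * (1-p)^(n-s).
    # Grid approximation of P(p > threshold | data): a probability assigned
    # directly to a hypothesis about the therapy's benefit.
    s, n = successes, trials
    pts = [(i / grid, math.exp(s * math.log(i / grid)
                               + (n - s) * math.log(1 - i / grid)))
           for i in range(1, grid)]
    total = sum(w for _, w in pts)
    return sum(w for p, w in pts if p > threshold) / total

# 30 responders out of 40: how probable is a response rate above 50%?
print(round(posterior_prob_exceeds(30, 40), 4))
```

A classical p-value answers a different question (how surprising the data are under a null hypothesis); the quantity above is the probability of the hypothesis itself, which is what the abstract claims patients need for informed choice.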
135.
Change point analysis (CPA) has only recently been introduced into psychological and educational measurement. Compared with traditional methods, CPA can not only detect aberrantly responding examinees but also automatically and precisely locate the change point, enabling efficient cleaning of response data. Its principle is to determine whether a response sequence contains a point (the change point) that divides it into two parts with different statistical properties, using a person-fit statistic (PFS) to quantify the difference between the two subsequences. Future work could extend single-change-point analysis to multiple change points, incorporate response times and other auxiliary information, construct nonparametric indices, and generalize existing indices to polytomous or multidimensional tests, so as to broaden the applicability and power of CPA.
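A toy sketch of the change-point idea: scan every admissible split of a scored response sequence and score each split by the gap in proportion-correct between the two segments. The statistic used here is a deliberately simple stand-in for a real person-fit statistic, and the example data are invented:

```python
def detect_change_point(x, min_seg=2):
    # Scan every admissible split point; score each by the absolute gap in
    # proportion-correct between the two segments (a toy stand-in for a
    # person-fit statistic). Returns (split index, statistic).
    n = len(x)
    best_k, best_stat = None, -1.0
    for k in range(min_seg, n - min_seg + 1):
        left, right = x[:k], x[k:]
        stat = abs(sum(left) / len(left) - sum(right) / len(right))
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# An examinee who answers well for 10 items, then stops trying:
resp = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(detect_change_point(resp))  # -> (10, 0.9)
```

In practice the split statistic would be compared against a null distribution before a change point is declared; the sketch only shows the localization step.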
136.
For the construction of tests and questionnaires that require multiple raters (e.g., a child behaviour checklist completed by both parents), a novel ordinal scaling technique called two-level Mokken scale analysis is currently being developed. The technique uses within-rater and between-rater coefficients to assess the scalability of the test. These coefficients are generalizations of Mokken's scalability coefficients. In this paper we derived standard errors for the two-level coefficients and for their ratios. The coefficients, the estimates, the estimated standard errors, and the software implementation are discussed and illustrated using a real-data example, and a small-scale simulation study demonstrates the accuracy of the estimates.
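For orientation, here is a sketch of the ordinary single-level Mokken scalability coefficient H that the two-level coefficients generalize. The Guttman-error counting below follows the standard definition for dichotomous items and is not taken from this paper:

```python
def mokken_H(data):
    # Single-level scalability coefficient H for dichotomous items:
    # H = 1 - (observed Guttman errors) / (errors expected under independence).
    n, m = len(data), len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(m)]
    order = sorted(range(m), key=lambda j: -p[j])  # easiest item first
    F = E = 0.0
    for a in range(m):
        for b in range(a + 1, m):
            i, j = order[a], order[b]  # p[i] >= p[j]
            # Guttman error: fail the easier item yet pass the harder one.
            F += sum(1 for row in data if row[i] == 0 and row[j] == 1)
            E += n * (1 - p[i]) * p[j]
    return 1 - F / E

# A perfect Guttman pattern scales maximally:
guttman = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(mokken_H(guttman))  # -> 1.0
```

The two-level version in the paper computes analogous ratios within and between raters; the single-level H above is the base case those coefficients reduce to with one rater per subject.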
137.
Medical statistics plays an important role in medical research: whether statistical methods are used correctly directly affects the results and quality of a paper, yet in practice statistical methods and indicators are often confused or misapplied. This paper briefly reviews common, easily confused statistical problems in study design, data description, and inference, and uses examples to analyze indicators and methods prone to misuse, in order to improve medical workers' understanding of medical statistics.
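One frequently confused pair of indicators is the standard deviation (the spread of individual observations) versus the standard error of the mean (the precision of the estimated mean); the specific pair and the numbers below are illustrative, not taken from the paper:

```python
import math
import statistics

values = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.6, 5.3]  # hypothetical measurements

sd = statistics.stdev(values)       # describes variability among individuals
sem = sd / math.sqrt(len(values))   # describes uncertainty of the mean estimate
print(round(sd, 3), round(sem, 3))
```

Reporting the SEM where the SD is meant makes data look far less variable than they are, which is exactly the kind of misuse the paper warns about.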
138.
Stevens’ theory of admissible statistics [Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680] states that measurement levels should guide the choice of statistical test, such that the truth value of statements based on a statistical analysis remains invariant under admissible transformations of the data. Lord [Lord, F. M. (1953). On the statistical treatment of football numbers. American Psychologist, 8, 750-751] challenged this theory. In a thought experiment, a parametric test is performed on football numbers (identifying players: a nominal representation) to decide whether a sample from the machine issuing these numbers should be considered non-random. This is an apparently illegal test, since its outcomes are not invariant under admissible transformations for the nominal measurement level. Nevertheless, it results in a sensible conclusion: the number-issuing machine was tampered with. In the ensuing measurement-statistics debate Lord’s contribution has been influential, but has also led to much confusion. The present aim is to show that the thought experiment contains a serious flaw. First it is shown that the implicit assumption that the numbers are nominal is false. This disqualifies Lord’s argument as a valid counterexample to Stevens’ dictum. Second, it is argued that the football numbers do not represent just the nominal property of non-identity of the players; they also represent the amount of bias in the machine. It is a question about this property (not a property that relates to the identity of the football players) that the statistical test is concerned with. Therefore, only this property is relevant to Lord’s argument. We argue that the level of bias in the machine, indicated by the population mean, conforms to a bisymmetric structure, which means that it lies on an interval scale. In this light, Lord’s thought experiment, interpreted by many as a problematic counterexample to Stevens’ theory of admissible statistics, conforms perfectly to Stevens’ dictum.
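Lord's scenario is easy to simulate: draw a sample from a tampered number-issuing machine and compute a z-statistic for the sample mean against the full population of numbers. The population range, the bias mechanism, and the decision threshold are illustrative assumptions:

```python
import math
import random
import statistics

def mean_bias_z(sample, population):
    # Lord's point in miniature: treat the "nominal" numbers as numbers anyway.
    # Under random issue the sample mean sits near the population mean;
    # a large |z| indicates the machine was tampered with.
    mu = statistics.mean(population)
    sigma = statistics.pstdev(population)
    return (statistics.mean(sample) - mu) / (sigma / math.sqrt(len(sample)))

random.seed(0)
population = list(range(1, 100))                                 # numbers 1..99
tampered = [random.choice(population[:30]) for _ in range(50)]   # low numbers only
print(round(mean_bias_z(tampered, population), 2))  # far below -3: machine rigged
```

The test "works" because the quantity being probed (the machine's bias, reflected in the mean of issued numbers) lives on an interval scale even though the numbers also serve as nominal player labels, which is the paper's central claim.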
139.
Verbal behavior reflects cognitive attitudes. Using the psychological method of free association, both the general public and medical staff were surveyed about the image of medical staff, eliciting the words each group uses to describe that image. The words were weighted according to the order in which they came to mind, and the proportions of positive, neutral, and negative words were computed. The survey found that the public's attitudes toward medical staff are conflicted, and that medical staff themselves rate their own profession rather low; both groups produced high scores for derogatory words, largely because medical work is widely perceived as high in labor intensity and low in occupational security. The results are analyzed using balance theory from social cognitive psychology, in search of new ways to improve attitudes and the doctor-patient relationship.
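The order-weighted scoring described above might look roughly like this; the linear weighting scheme, the tiny valence lexicon, and the example associations are all hypothetical:

```python
def weighted_valence(responses, lexicon):
    # Weight each associated word by its recall position (first = heaviest),
    # then report the weighted share of each valence category.
    totals = {"positive": 0.0, "neutral": 0.0, "negative": 0.0}
    for words in responses:
        k = len(words)
        for rank, w in enumerate(words):
            totals[lexicon.get(w, "neutral")] += k - rank
    s = sum(totals.values())
    return {cat: round(v / s, 3) for cat, v in totals.items()}

# Hypothetical valence lexicon and two respondents' association lists:
lexicon = {"dedicated": "positive", "overworked": "negative",
           "tired": "negative", "white coat": "neutral"}
answers = [["dedicated", "overworked", "tired"], ["white coat", "dedicated"]]
print(weighted_valence(answers, lexicon))
```

Earlier associations are assumed to be more central to the respondent's attitude, which is why they receive larger weights.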
140.
We developed a supervised machine learning classifier to identify faking good by analyzing the item response patterns of a Big Five personality self-report. We used a between-subject design, dividing participants (N = 548) into two groups and manipulating their faking behavior via instructions given prior to administering the self-report. We implemented a simple classifier based on the Lie scale's cutoff score and several machine learning models fitted either to the personality scale scores or to the item response patterns. Results showed that the best machine learning classifier, based on the XGBoost algorithm and fitted to the item responses, was better at detecting faked profiles than the Lie scale classifier.
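A self-contained sketch of the item-response-pattern approach. Because the paper's XGBoost model requires an external library, a Bernoulli naive Bayes classifier stands in for it here, and the response-generation probabilities and sample sizes are invented:

```python
import math
import random

def fit_nb(X, y):
    # Bernoulli naive Bayes over binary item responses, Laplace-smoothed.
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = math.log(len(rows) / len(X))
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def predict_nb(model, x):
    def score(c):
        prior, probs = model[c]
        return prior + sum(math.log(p if v else 1 - p)
                           for p, v in zip(probs, x))
    return max(model, key=score)

# Honest responders endorse items at chance; fakers endorse almost everything.
random.seed(1)
def simulate(p_endorse, n, n_items=20):
    return [[int(random.random() < p_endorse) for _ in range(n_items)]
            for _ in range(n)]

X = simulate(0.5, 200) + simulate(0.9, 200)
y = [0] * 200 + [1] * 200
model = fit_nb(X[::2], y[::2])            # train on every other profile
acc = sum(predict_nb(model, x) == t
          for x, t in zip(X[1::2], y[1::2])) / 200
print(round(acc, 2))
```

The design point the abstract makes carries over: a model fitted to the full item response pattern can exploit per-item regularities that a single summary cutoff (like a Lie scale score) throws away.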