856 results found (search time: 31 ms)
1.
Previous studies showed that random error can explain overconfidence effects typically observed in the literature. One of these studies concluded that, after accounting for random error effects in the data, there is little support for cognitive‐processing biases in confidence elicitation. In this paper, we investigate more closely the random error explanation for overconfidence. We generated data from four models of confidence and then estimated the magnitude of random error in the data. Our results show that, in addition to the true magnitude of random error specified in the simulations, the error estimates are influenced by important cognitive‐processing biases in the confidence elicitation process. We found that random error in the response process can account for the degree of overconfidence found in calibration studies, even when that overconfidence is actually caused by other factors. Thus, the error models say little about whether cognitive biases are present in the confidence elicitation process. Copyright © 2008 John Wiley & Sons, Ltd.
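The mechanism this abstract describes can be illustrated with a toy simulation (not the authors' exact model specification): judges who are perfectly calibrated internally, but whose stated confidence carries random response error, will still show apparent overconfidence in the most extreme confidence bin.

```python
# Toy illustration of the error-model mechanism: internal probabilities are
# perfectly calibrated, but Gaussian response noise distorts the stated
# confidence. All parameters here are illustrative assumptions.
import random

random.seed(1)
stated, correct = [], []
for _ in range(50000):
    p = random.uniform(0.5, 1.0)                        # internal probability
    hit = random.random() < p                           # outcome matches p
    r = min(1.0, max(0.5, p + random.gauss(0, 0.15)))   # noisy stated response
    stated.append(r)
    correct.append(hit)

# Among judgments stated with confidence >= 0.95, accuracy falls short of
# the stated level -- apparent overconfidence from response error alone.
top = [(s, h) for s, h in zip(stated, correct) if s >= 0.95]
mean_stated = sum(s for s, _ in top) / len(top)
hit_rate = sum(h for _, h in top) / len(top)
print(mean_stated > hit_rate)
```

The regression effect is generic: any symmetric noise on the response scale pulls low internal probabilities into the high-confidence bin, so hit rates there sit below stated confidence even with no cognitive bias present.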
2.
Binary programming models are presented to generate parallel tests from an item bank. The parallel tests are created to match, item for item, an existing seed test and to match user-supplied taxonomic specifications. The taxonomic specifications may be obtained either from the seed test or from some other user requirement. An algorithm is presented along with computational results to indicate the overall efficiency of the process. Empirical findings based on an item bank for the Arithmetic Reasoning section of the Armed Services Vocational Aptitude Battery are given. The Office of Naval Research, Program in Cognitive Science, N00014-87-C-0696 partially supported the work of Douglas H. Jones. The Rutgers Research Resource Committee of the Graduate School of Management partially supported the work of Douglas H. Jones and Ing-Long Wu. A Thomas and Betts research fellowship partially supported the work of Ing-Long Wu. The Human Resources Laboratory, United States Air Force, partially supported the work of Ronald Armstrong. The authors benefited from conversations with Dr. Wayne Shore, Operational Technologies, San Antonio, Texas. The order of authors' names is alphabetical and denotes equal authorship.
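The item-for-item matching idea can be sketched with a greedy stand-in for the binary programming formulation (the abstract's actual method is an integer program; the item bank and fields below are hypothetical):

```python
# Greedy sketch of item-for-item parallel test assembly: for each seed-test
# item, pick the unused bank item in the same taxonomic category with the
# closest difficulty. A binary program would optimize all picks jointly.

def assemble_parallel_test(seed, bank):
    """seed, bank: lists of dicts with 'difficulty' and 'category' keys.
    Returns a list of bank indices, one per seed item."""
    used, picks = set(), []
    for s in seed:
        candidates = [
            (abs(b["difficulty"] - s["difficulty"]), i)
            for i, b in enumerate(bank)
            if i not in used and b["category"] == s["category"]
        ]
        if not candidates:
            raise ValueError("no matching item left in bank")
        _, best = min(candidates)
        used.add(best)
        picks.append(best)
    return picks

seed = [{"difficulty": -0.5, "category": "arith"},
        {"difficulty": 1.2, "category": "word"}]
bank = [{"difficulty": 1.0, "category": "word"},
        {"difficulty": -0.4, "category": "arith"},
        {"difficulty": 0.9, "category": "arith"}]
print(assemble_parallel_test(seed, bank))  # -> [1, 0]
```

Unlike this greedy pass, the 0/1 programming approach in the paper can enforce bank-wide taxonomic quotas and guarantee an optimal overall match.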
3.
4.
During the past several decades, computers have achieved increasing prominence in psychological assessment procedures. This is particularly true for computer-based test interpretation and diagnosis. This paper reports on a study designed to compare the accuracy of computer-based diagnoses with clinician-generated diagnoses. The Millon Clinical Multiaxial Inventory (MCMI) was administered to 151 consecutively admitted inpatients at a large private psychiatric hospital. The computer-generated diagnoses were compared with those generated by admitting psychiatrists. The results indicated that the MCMI diagnostic impressions underestimated the severity of depressive disorders when compared with clinician diagnoses on Axis I. Specifically, clinicians diagnosed major depression much more frequently than did the MCMI. In addition, clinicians diagnosed anxiety disorders much less frequently than did the MCMI.
5.
Despite the importance of probability assessment methods in behavioral decision theory and decision analysis, little attention has been directed at evaluating their reliability and validity. In fact, no comprehensive study of reliability has been undertaken. Since reliability is a necessary condition for validity, this oversight is significant. The present study was motivated by that oversight. We investigated the reliability of probability measures derived from three response modes: numerical probabilities, pie diagrams, and odds. Unlike previous studies, the experiment was designed to distinguish systematic deviations in probability judgments, such as those due to experience or practice, from random deviations. It was found that subjects assessed probabilities reliably for all three assessment methods regardless of the reliability measures employed. However, a small but statistically significant decrease over time in the magnitudes of assessed probabilities was observed. This effect was linked to a decrease in subjects' overconfidence during the course of the experiment.
6.
Frequency estimation of social facts in two methods of judgment elicitation was investigated. In the “narrow-range” condition, subjects answered questions in the format: “Out of 100 incidents, how many belong to category X?” In the “wide-range” condition, the frequency for the same event was assessed with respect to “Out of 10,000”. Judged frequencies in the wide-range condition were divided by 100, and were compared with the corresponding judgments in the narrow-range condition. Such comparisons were made for low-frequency and high-frequency events. Previous research has shown that, for low-frequency events, judged frequencies are proportionally greater in the narrow-range than in the wide-range condition. These results reflect cognitive processes of implicit anchoring, whereby judged frequencies lie close to small numbers within the response ranges provided. I call this process “downward anchoring,” and predicted that this tendency would be replicated in the present study. Moreover, I predicted that assessments about high-frequency events would evoke similar cognitive processes operating in the opposite direction. By such “upward anchoring,” judged frequencies would lie close to relatively larger numbers within the given response ranges. Consequently, I predicted that judged frequencies for high-frequency events would be proportionally greater in the wide-range condition than in the narrow-range condition. These predictions were confirmed.
7.
Various complexities that arise in the application of legal and/or clinical criteria to the actual assessment of competence/capacity are discussed, and a particular way of understanding the nature of such criteria is recommended.
8.
In a recent issue of this journal, Winman and Juslin (34, 135–148, 1993) present a model of the calibration of subjective probability judgments for sensory discrimination tasks. They claim that the model predicts a pervasive underconfidence bias observed in such tasks, and present evidence from a training experiment that they interpret as supporting the notion that different models are needed to describe judgment of confidence in sensory and in cognitive tasks. The model is actually part of the more comprehensive decision variable partition model of subjective probability calibration that was originally proposed in Ferrell and McGoey (Organizational Behavior and Human Performance, 26, 32–53, 1980). The characteristics of the model are described and it is demonstrated that the model does not predict underconfidence, that it is fully compatible with the overconfidence frequently found in calibration studies with cognitive tasks, and that it well represents experimental results from such studies. It is concluded that only a single model is needed for both types of task.
9.
An exploration of the retrieval mechanism of temporal order information (cited 1 time: 1 self-citation, 0 by others)
Li Honghan, Huang Xiting. 心理学报 (Acta Psychologica Sinica), 1996, 29(2): 180–191
Classic research on the processing of temporal order information used the recency-judgment paradigm and found that the retrieval mechanism is a backward serial search based on recency. The present study used both the recency-judgment and the remoteness-judgment paradigm to examine the retrieval mechanism of temporal order information in depth. The results showed: (1) retrieval of temporal order information involves both backward and forward serial search; (2) remoteness-judgment and recency-judgment tasks have different effects on the recovery of different parts of the temporal sequence, with remoteness judgments facilitating the remote portion and recency judgments facilitating the recent portion, reflected in higher discriminability and shorter correct-response latencies for the corresponding portion; (3) in different temporal order retrieval tasks, subjects reverse their response direction according to the specific conditions.
10.
Three approaches to the determination of behavioral stability were examined. In the first, a learning curve was fit to acquisition data (from Cumming and Schoenfeld, 1960), and the “experiment” stopped when the data approached sufficiently close to the theoretical asymptote. In the second, the data were analyzed for variability and linear and quadratic trend. In the third, the experiment was stopped when the magnitude of the daily changes in the data fell below a criterion. Accuracy was measured as deviation between the average value of the dependent variable when the experiment was stopped, and the average value over the last 100 sessions. The first approach was most accurate, but at the cost of requiring the most sessions and being the most difficult to apply. Both the second and third approaches provided acceptable criteria with a reasonable cost-accuracy tradeoff. The second approach permits a continuous adjustment of the criteria to accommodate the variability intrinsic in the experimental paradigm. The third, nomothetic, approach also takes into account the decreasing marginal utility of extended training sessions.
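The third stopping rule described above can be sketched in a few lines (the session data and criterion value here are illustrative, not from the study):

```python
# Sketch of a daily-change stability criterion: stop the experiment at the
# first session whose change from the previous session falls below a
# criterion. Returns None if the series never meets the criterion.

def stable_session(values, criterion):
    """values: dependent-variable measurements, one per session."""
    for i in range(1, len(values)):
        if abs(values[i] - values[i - 1]) < criterion:
            return i
    return None

rates = [10.0, 14.0, 17.0, 18.5, 18.9, 19.0]  # e.g. responses/min per session
print(stable_session(rates, 0.5))  # -> 4
```

In practice such a rule is usually applied over a window of sessions rather than a single pair, to avoid stopping on a chance flat day; the single-pair version above keeps the sketch minimal.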
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号