Article search results: 841 records in total (714 subscription full-text, 105 free, 22 free within China); items 11–20 are shown below.
11.
The credible intervals that people set around their point estimates are typically too narrow (cf. Lichtenstein, Fischhoff, & Phillips, 1982). That is, a set of many such intervals does not contain the actual values of the criterion variables as often as the probabilities assigned to those events imply. The typical interpretation of such data is that people are overconfident about the accuracy of their judgments. This paper presents data from two studies showing the typical levels of overconfidence for individual estimates of unknown quantities. However, data from the same subjects on a different measure of confidence for the same items, their own global assessment of the set of multiple estimates as a whole, showed significantly lower levels of confidence and overconfidence than their average individual assessments for items in the set. It is argued that event and global assessments of judgment quality are fundamentally different and are shaped by distinct psychological processes. Finally, we discuss the implications of this difference between confidence in single and multiple estimates for confidence research and theory.
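For concreteness, here is a minimal Python sketch (with simulated, purely hypothetical data) of the calibration check this abstract describes: how often stated 90% credible intervals actually contain the true values.

```python
# Minimal sketch (hypothetical data): checking the calibration of 90% credible
# intervals, i.e. how often stated intervals actually contain the true values.
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.normal(100.0, 15.0, size=200)          # unknown criterion values
point_estimates = true_values + rng.normal(0, 10, 200)   # judges' point estimates
half_widths = np.full(200, 12.0)                         # intervals set too narrow

lower = point_estimates - half_widths
upper = point_estimates + half_widths
hit_rate = np.mean((true_values >= lower) & (true_values <= upper))

print(f"nominal coverage: 0.90, observed hit rate: {hit_rate:.2f}")
# A hit rate well below 0.90 is the classic signature of overconfidence.
```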
12.
Sik-Yum Lee, Psychometrika, 1981, 46(2), 153–160.
Confirmatory factor analysis is considered from a Bayesian viewpoint, in which prior information on the parameters is incorporated into the analysis. An iterative algorithm is developed to obtain the Bayes estimates. A numerical example based on longitudinal data is presented. A simulation study is designed to compare the Bayesian approach with the maximum likelihood method. (Computer facilities were provided by the Computer Services Center, The Chinese University of Hong Kong.)
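Lee's iterative algorithm is not reproduced here. As a rough illustration of the underlying idea only, the sketch below computes a posterior-mode (MAP) estimate for a one-factor model with a normal prior on the loadings; all data, starting values, and prior settings are made up.

```python
# Sketch only: a one-factor model x = lam*f + e with Sigma = lam lam' + Psi and a
# normal prior on the loadings. The MAP estimate illustrates the Bayesian idea;
# it is not Lee's (1981) algorithm, which uses its own iterative scheme.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
p, n = 4, 300
true_lam = np.array([0.8, 0.7, 0.6, 0.5])
f = rng.normal(size=(n, 1))
X = f @ true_lam[None, :] + rng.normal(0, 0.6, size=(n, p))
S = np.cov(X, rowvar=False)                      # sample covariance matrix

def neg_log_posterior(theta):
    lam, log_psi = theta[:p], theta[p:]
    Sigma = np.outer(lam, lam) + np.diag(np.exp(log_psi))
    _, logdet = np.linalg.slogdet(Sigma)
    loglik = -0.5 * n * (logdet + np.trace(S @ np.linalg.inv(Sigma)))
    log_prior = -0.5 * np.sum((lam - 0.7) ** 2 / 0.5 ** 2)  # arbitrary N(0.7, 0.5^2) prior
    return -(loglik + log_prior)

theta0 = np.concatenate([np.full(p, 0.5), np.zeros(p)])
res = minimize(neg_log_posterior, theta0, method="L-BFGS-B")
print("MAP loadings:", np.round(res.x[:p], 3))
```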
13.
Drawing on Gollwitzer's deliberative–implemental mindset distinction (P. M. Gollwitzer, 1990), it was predicted that people who are deliberating on different actions or goals would be more cautious, or more realistic, in their expectation of success in subsequent tasks than people who are about to implement a chosen action or goal. Participants were given a choice between different test materials. They were interrupted either before (deliberative) or immediately after decision making (implemental). They then either had to choose between various levels of difficulty within one type of task (Experiment 1) or had to predict their own future performance (Experiment 2). The results showed that deliberative participants preferred less difficult tasks and overestimated their probability of success less than implemental participants did. In addition, deliberative participants referred more than implemental participants to their past performance when selecting levels of difficulty or predicting future performance; however, the two groups did not differ in actual performance. Taken together, the findings suggest that people are more realistic in a deliberative than in an implemental state of mind. The present studies extend prior research because they document, for the first time, mindset effects on people's estimates of their future performance in the achievement domain.
14.
Test–retest reliability is a common indicator of response consistency. It is argued that using regression coefficients for detecting systematic response error is less appropriate than testing for shifts in the mean (median) and variance. This procedure is exemplified using probability response data. For this data, shifts in centrality were found to be about 2.5 times more likely than shifts in variability. The shifts in centrality did not favor any particular direction; however, variability tended to decrease over time in early sessions.
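A minimal sketch of such checks on simulated test–retest data follows: a paired t-test for a shift in centrality and the classical Pitman–Morgan procedure for a shift in variance. The Pitman–Morgan test is a standard choice for paired variances, used here for illustration; the paper's exact procedure may differ.

```python
# Sketch with simulated test-retest data: test for a shift in the mean with a
# paired t-test and for a shift in variance with the Pitman-Morgan procedure
# (which tests corr(x + y, x - y) = 0 for paired samples).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
session1 = rng.normal(0.60, 0.15, size=80)             # probability responses, time 1
session2 = 0.98 * session1 + rng.normal(0, 0.05, 80)   # slightly less variable retest

t_mean, p_mean = stats.ttest_rel(session1, session2)   # centrality shift

s, d = session1 + session2, session1 - session2
r = np.corrcoef(s, d)[0, 1]
n = len(s)
t_var = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)       # Pitman-Morgan statistic
p_var = 2 * stats.t.sf(abs(t_var), df=n - 2)

print(f"mean shift:     t = {t_mean:.2f}, p = {p_mean:.3f}")
print(f"variance shift: t = {t_var:.2f}, p = {p_var:.3f}")
```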
15.
The standard tobit, or censored regression, model is typically used for regression analysis when the dependent variable is censored. This model is generalized here by developing a conditional-mixture, maximum-likelihood method for latent class censored regression. The proposed method simultaneously estimates separate regression functions and subject membership in K latent classes or groups, given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The proposed method is illustrated via a consumer psychology application.
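The following is an illustrative EM sketch for a two-class mixture of tobit regressions (censored from below at zero), written from the description above rather than from the paper's exact estimator; the data, starting values, and iteration counts are fabricated for the example.

```python
# Illustrative sketch (not the paper's exact estimator): EM for a two-class
# mixture of tobit regressions, with the dependent variable censored at zero.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
z = rng.random(n) < 0.5                                   # latent class labels
beta_true = np.where(z[:, None], [1.0, 2.0], [0.5, -1.5])
y_star = (X * beta_true).sum(axis=1) + rng.normal(0, 0.5, n)
y = np.maximum(y_star, 0.0)                               # censoring from below at zero

def obs_loglik(params):
    """Per-observation tobit log-likelihood, censoring from below at 0."""
    beta, sigma = params[:2], np.exp(params[2])
    mu = X @ beta
    return np.where(y > 0,
                    stats.norm.logpdf(y, mu, sigma),      # uncensored part
                    stats.norm.logcdf(-mu / sigma))       # censored part

pi = np.array([0.5, 0.5])                                 # mixing weights
params = [np.array([0.8, 1.5, 0.0]), np.array([0.2, -1.0, 0.0])]

for _ in range(30):
    # E-step: posterior class responsibilities
    logp = np.column_stack([np.log(pi[k]) + obs_loglik(params[k]) for k in range(2)])
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted tobit fit per class, then update mixing weights
    for k in range(2):
        params[k] = optimize.minimize(
            lambda p, w=resp[:, k]: -(w * obs_loglik(p)).sum(),
            params[k], method="Nelder-Mead").x
    pi = resp.mean(axis=0)

for k in range(2):
    print(f"class {k}: pi = {pi[k]:.2f}, beta = {np.round(params[k][:2], 2)}")
```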
16.
Despite the importance of probability assessment methods in behavioral decision theory and decision analysis, little attention has been directed at evaluating their reliability and validity; in fact, no comprehensive study of reliability has been undertaken. Since reliability is a necessary condition for validity, this oversight is significant, and the present study was motivated by it. We investigated the reliability of probability measures derived from three response modes: numerical probabilities, pie diagrams, and odds. Unlike previous studies, the experiment was designed to distinguish systematic deviations in probability judgments, such as those due to experience or practice, from random deviations. It was found that subjects assessed probabilities reliably for all three assessment methods, regardless of the reliability measures employed. However, a small but statistically significant decrease over time in the magnitudes of assessed probabilities was observed. This effect was linked to a decrease in subjects' overconfidence during the course of the experiment.
17.
Subjective probability and delay.
Human subjects indicated their preference between a hypothetical $1,000 reward available with various probabilities or delays and a certain reward of variable amount available immediately. The function relating the amount of the certain-immediate reward subjectively equivalent to the delayed $1,000 reward had the same general shape (hyperbolic) as the function found by Mazur (1987) to describe pigeons' delay discounting. The function relating the certain-immediate amount of money subjectively equivalent to the probabilistic $1,000 reward was also hyperbolic, provided that the stated probability was transformed to odds against winning. In a second experiment, when human subjects chose between a delayed $1,000 reward and a probabilistic $1,000 reward, delay was proportional to the same odds-against transformation of the probability to which it was subjectively equivalent.
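The two hyperbolic forms can be written down directly, as in the sketch below: V = A/(1 + kD) for delay (Mazur, 1987) and V = A/(1 + hΘ) with Θ = (1 − p)/p the odds against winning. The parameter values k and h are arbitrary placeholders, not estimates from the study.

```python
# Hyperbolic delay discounting V = A/(1 + k*D) and probability discounting
# V = A/(1 + h*Theta), where Theta = (1 - p)/p is the odds against winning.
# Parameter values are illustrative, not fitted to the paper's data.
A = 1000.0            # nominal reward, $1,000
k, h = 0.05, 1.0      # hypothetical discounting parameters

def delayed_value(delay_days: float) -> float:
    return A / (1.0 + k * delay_days)

def probabilistic_value(p: float) -> float:
    odds_against = (1.0 - p) / p
    return A / (1.0 + h * odds_against)

for d in (0, 30, 365):
    print(f"delay {d:>3} days -> subjective value ${delayed_value(d):7.2f}")
for p in (0.9, 0.5, 0.1):
    print(f"p = {p:.1f}          -> subjective value ${probabilistic_value(p):7.2f}")
```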
18.
Item response theory (IRT) models are now in common use for the analysis of dichotomous item responses. This paper examines the sampling theory foundations for statistical inference in these models. The discussion includes: some history on the stochastic subject versus the random sampling interpretations of the probability in IRT models; the relationship between three versions of maximum likelihood estimation for IRT models; estimating θ versus estimating θ-predictors; IRT models and loglinear models; the identifiability of IRT models; and the role of robustness and Bayesian statistics from the sampling theory perspective. (A presidential address can serve many different functions. This one is a report of investigations I started at least ten years ago to understand what IRT was all about. It is a decidedly one-sided view, but I hope it stimulates controversy and further research. I have profited from discussions of this material with many people, including Brian Junker, Charles Lewis, Nicholas Longford, Robert Mislevy, Ivo Molenaar, Donald Rock, Donald Rubin, Lynne Steinberg, Martha Stocking, William Stout, Dorothy Thayer, David Thissen, Wim van der Linden, Howard Wainer, and Marilyn Wingersky. None of them is responsible for any errors or misstatements in this paper. The research was supported in part by the Cognitive Science Program, Office of Naval Research, under Contract No. N00014-87-K-0730 and by the Program Statistics Research Project of Educational Testing Service.)
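None of Holland's derivations are reproduced here. As a minimal illustration of the class of models under discussion, the sketch below evaluates a two-parameter logistic (2PL) response function and computes the ML estimate of θ for one made-up response pattern with known item parameters.

```python
# Minimal 2PL IRT sketch: P(correct) = 1/(1 + exp(-a*(theta - b))), with a
# maximum likelihood estimate of theta for one response pattern. The item
# parameters and responses are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])     # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.5])    # difficulties
u = np.array([1, 1, 1, 0])             # observed item responses

def neg_loglik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

res = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded")
print(f"ML estimate of theta: {res.x:.3f}")
```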
19.
The article describes an approach for increasing the credibility of expert estimation results, based on a specific ordering of pair-wise comparisons. The order of the comparisons is in turn based on the distance between the estimated objects in the ranking. According to the suggested approach (motivated by features of human psychophysiology), the most ordinally distant objects should be compared before ordinally closer ones. To confirm this assumption empirically, a special experiment involving real experts was conducted. The results indicate that if objects are presented to the expert for comparison in the suggested order, then in the majority of cases the relative weights of objects obtained using the eigenvector method most adequately reflect the expert's priorities. Moreover, pair-wise comparison matrices constructed using the suggested comparison order tend to be slightly more consistent. The suggested re-ordering of pair-wise comparisons can be applied as part of the AHP algorithm in weakly structured subject domains influenced by multiple intangible criteria. It also provides a conceptual basis for reducing the number of pair-wise comparisons required in AHP to obtain credible results, without loss or distortion of expert data, and for modifying the combinatorial pair-wise comparison aggregation method based on spanning-tree enumeration. Finally, it can improve the overall multi-criteria decision-making process in diverse subject domains characterized by high levels of uncertainty.
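A short sketch of the eigenvector method and consistency check mentioned above: priority weights are the principal eigenvector of the pairwise comparison matrix, and consistency is summarized by Saaty's consistency ratio CR = CI/RI. The 3×3 matrix is a made-up example; the random-index values follow Saaty's standard table.

```python
# Eigenvector method for a reciprocal pairwise comparison matrix, with Saaty's
# consistency ratio CR = CI / RI. The matrix is an invented example.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])         # reciprocal pairwise comparison matrix

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()                # normalized priority vector

n = A.shape[0]
lambda_max = eigvals[i].real
CI = (lambda_max - n) / (n - 1)         # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random index
print("weights:", np.round(weights, 3), f" CR = {CI / RI:.3f}")
```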
20.
The Wilcoxon–Mann–Whitney procedure is invariant under monotone transformations, but its use as a test of location or shift is said not to be so. It tests location only under the shift model, the assumption of parallel cumulative distribution functions (cdfs). We show that infinitely many monotone transformations of the measured variable produce parallel cdfs, so long as the original cdfs intersect nowhere or everywhere. Thus there are infinitely many effect sizes measured as shifts of medians, invalidating the notion that there is one true shift parameter and thereby rendering any single estimate dubious. Measuring effect size using the probability of superiority alleviates this difficulty.
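A minimal sketch of the probability-of-superiority effect size on simulated data follows, computed both directly from all pairwise comparisons, A = P(X > Y) + 0.5 P(X = Y), and from the Mann–Whitney U statistic as U/(n_x · n_y); in recent SciPy versions, mannwhitneyu returns U for the first sample.

```python
# Probability of superiority: A = P(X > Y) + 0.5 * P(X = Y), computed directly
# and from the Mann-Whitney U statistic as U / (n_x * n_y). Data are simulated.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
x = rng.normal(0.5, 1.0, size=60)
y = rng.normal(0.0, 1.0, size=50)

u_stat, p_value = mannwhitneyu(x, y, alternative="two-sided")
ps_from_u = u_stat / (len(x) * len(y))

diff = x[:, None] - y[None, :]          # all pairwise comparisons
ps_direct = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

print(f"probability of superiority: {ps_from_u:.3f} (via U), {ps_direct:.3f} (direct)")
```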