  Subscription full text   192 articles
  Free   5 articles
  Domestic open access   2 articles
  2022   2 articles
  2021   4 articles
  2020   1 article
  2019   6 articles
  2018   4 articles
  2017   4 articles
  2016   10 articles
  2015   3 articles
  2014   5 articles
  2013   30 articles
  2012   6 articles
  2011   11 articles
  2010   2 articles
  2009   16 articles
  2008   14 articles
  2007   22 articles
  2006   3 articles
  2005   3 articles
  2004   3 articles
  2003   7 articles
  2002   4 articles
  2001   4 articles
  1999   1 article
  1998   2 articles
  1996   1 article
  1995   2 articles
  1994   4 articles
  1993   2 articles
  1992   3 articles
  1991   3 articles
  1989   1 article
  1988   1 article
  1987   2 articles
  1985   3 articles
  1983   2 articles
  1982   2 articles
  1979   2 articles
  1978   3 articles
  1977   1 article
199 results found in total (search time: 15 ms)
31.
A standard approach to distinguishing people’s risk preferences is to estimate a random utility model using a power utility function to characterize the preferences and a logit function to capture choice consistency. We demonstrate that with often-used choice situations, this model suffers from empirical underidentification, meaning that parameters cannot be estimated precisely. With simulations of estimation accuracy and Kullback–Leibler divergence measures, we examined factors that potentially mitigate this problem. First, using a choice set that guarantees a switch in the utility order between two risky gambles in the range of plausible values leads to higher estimation accuracy than randomly created choice sets or the purpose-built choice sets common in the literature. Second, parameter estimates are regularly correlated, which contributes to empirical underidentification. Examining standardizations of the utility scale, we show that they mitigate this correlation and additionally improve the estimation accuracy for choice consistency. Yet, they can have detrimental effects on the estimation accuracy of risk preference. Finally, we also show how repeated versus distinct choice sets and an increase in observations affect estimation accuracy. Together, these results should help researchers make informed design choices to estimate parameters in the random utility model more precisely.
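A minimal sketch of the kind of model the abstract describes: power utility combined with a logit choice rule, fit by maximum likelihood to simulated binary choices between a sure amount and a risky gamble. The gambles, parameter values, and optimizer settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def power_utility(x, alpha):
    """Power utility u(x) = x**alpha; alpha is the risk-preference parameter."""
    return np.sign(x) * np.abs(x) ** alpha

def expected_utility(gamble, alpha):
    """Expected utility of a gamble given as [(outcome, probability), ...]."""
    return sum(p * power_utility(x, alpha) for x, p in gamble)

def choice_prob_A(gamble_a, gamble_b, alpha, theta):
    """Logit choice rule: probability of choosing gamble A over gamble B.
    theta is the choice-consistency (inverse temperature) parameter."""
    diff = expected_utility(gamble_a, alpha) - expected_utility(gamble_b, alpha)
    return 1.0 / (1.0 + np.exp(-theta * diff))

def neg_log_likelihood(params, choice_pairs, choices):
    """Negative log-likelihood of observed binary choices (1 = chose A)."""
    alpha, theta = params
    nll = 0.0
    for (a, b), c in zip(choice_pairs, choices):
        p = np.clip(choice_prob_A(a, b, alpha, theta), 1e-10, 1 - 1e-10)
        nll -= c * np.log(p) + (1 - c) * np.log(1 - p)
    return nll

# Illustrative choice set: sure amounts versus a 50/50 gamble, chosen so that
# the preference order switches within a plausible range of alpha.
safe_amounts = [6.0, 8.0, 10.0, 12.0, 14.0]
pairs = [([(s, 1.0)], [(25.0, 0.5), (0.0, 0.5)]) for s in safe_amounts] * 10

rng = np.random.default_rng(0)
true_alpha, true_theta = 0.7, 1.5
choices = [int(rng.random() < choice_prob_A(a, b, true_alpha, true_theta))
           for a, b in pairs]

fit = minimize(neg_log_likelihood, x0=[0.5, 1.0], args=(pairs, choices),
               bounds=[(0.05, 2.0), (0.01, 20.0)])
print("estimated alpha, theta:", fit.x)
```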
32.
The relation between worrying and individuals' concerns was examined in a sample of 197 college students. Participants described the five undesirable outcomes that they thought about most often, indicated how likely they thought the outcomes were, and how upset they would be by them. Worry severity was measured using the Penn State Worry Questionnaire. The relation between worry severity and the life domains about which individuals were concerned was quite weak. In contrast, as predicted, greater worry was associated with higher probability and cost estimates. In addition, cost estimates moderated the relation between worry severity and probability estimates. The potential importance of perceived threat for understanding worrying is discussed.
33.
The radial arm maze is one of the most commonly used tests for assessing working memory in laboratory animals. However, to date, there exists no quantitative method of estimating working memory capacity from performance on this task. Here, we present a mathematical model of performance on the radial arm maze from which we can derive estimates of capacity. We derive explicit results for the two most commonly used measures of performance as functions of the number of arms in the maze and memory capacity, assuming a uniform random search, and we also simulate non-uniform random search strategies. Comparing our model to previous experiments, we show that it predicts a working memory capacity in the range of 3-9 at the level of performance observed in these experiments; this estimate falls within typical estimates of human working memory capacity. Performance of rats on large mazes (e.g., 48 arms) has been used as evidence that the working memory capacity of rats may be significantly larger than that of humans, yet we find that a memory capacity in the range of 3-9 is sufficient to explain the performance of rats in very large radial mazes. Furthermore, when we simulate the non-uniform random search strategies observed in the experiments, the resulting estimates do not differ significantly from those assuming a uniform random search. We conclude that a list-based model of working memory with modest capacity is more powerful than previously expected.
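A rough simulation sketch of the kind of model the abstract sets out: an agent searches an N-arm maze at random while holding the last k visited arms in working memory and never re-entering a remembered arm. The two performance measures computed below are common choices for this task but are assumptions; the paper's exact definitions and its analytic results are not reproduced here.

```python
import random

def simulate_radial_maze(n_arms, capacity, n_trials=10000, seed=1):
    """Uniform random search with a working memory holding the last
    `capacity` visited arms; a remembered arm is never re-entered.
    Returns two common performance measures (the paper's exact
    definitions may differ):
      - mean number of correct (novel-arm) entries in the first n_arms choices
      - mean number of choices needed to visit all arms
    """
    rng = random.Random(seed)
    correct_first_n, choices_to_finish = 0.0, 0.0
    for _ in range(n_trials):
        memory, visited, choices, first_n_correct = [], set(), 0, 0
        while len(visited) < n_arms:
            arm = rng.randrange(n_arms)
            if arm in memory:            # remembered arms are never re-entered
                continue
            choices += 1
            if arm not in visited:
                if choices <= n_arms:
                    first_n_correct += 1
                visited.add(arm)
            memory.append(arm)
            if len(memory) > capacity:
                memory.pop(0)            # forget the oldest remembered arm
        correct_first_n += first_n_correct
        choices_to_finish += choices
    return correct_first_n / n_trials, choices_to_finish / n_trials

# Performance on an 8-arm maze for several assumed memory capacities.
for cap in (0, 3, 6, 9):
    print(cap, simulate_radial_maze(8, cap))
```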
34.
In this study, 158 parents (79 fathers and 79 mothers) with a mean age of 38.3 years (SD = 8.2) estimated their own, and their children's, overall intelligence as well as their children's scores on the 12 intelligence scales from the Wechsler Intelligence Scale for Children (WISC-III). The sample included English (n = 122) and Icelandic (n = 36) parents, and a comparison between them showed few differences except that the Icelandic parents' estimates were lower than the English parents' estimates. The results showed that fathers estimated their own overall intelligence higher than mothers estimated theirs, and that sons were estimated higher than daughters on overall intelligence. Two factors of intelligence (verbal, performance) were identified through factor analysis of the ratings of the 12 WISC subscale score estimates. A hierarchical regression showed that these two factors explained most of the variance in the estimation of the child's overall intelligence; however, the gender of the child and the parents' self-estimated overall intelligence added incremental variance.
35.
Dennis Dieks, Synthese, 2007, 156(3): 427-439
According to the Doomsday Argument, we have to rethink the probabilities we assign to a soon or not-so-soon extinction of mankind when we realize that we are living now, rather early in the history of mankind. Sleeping Beauty finds herself in a similar predicament: on learning the date of her first awakening, she is asked to re-evaluate the probabilities of her two possible future scenarios. In connection with Doom, I argue that it is wrong to assume that our ordinary probability judgements do not already reflect our place in history: we justify the predictive use we make of the probabilities yielded by science (or other sources of information) by our knowledge of the fact that we live now, a certain time before the possible occurrence of the events the probabilities refer to. Our degrees of belief should change drastically when we forget the date; importantly, this follows without invoking the "Self-Indication Assumption". Subsequent conditionalization on information about which year it is cancels this probability shift again. The Doomsday Argument is about such probability shifts, but it tells us nothing about the concrete values of the probabilities; for these, experience provides the only basis. Essentially the same analysis applies to the Sleeping Beauty problem. I argue that Sleeping Beauty "thirders" should be committed to thinking that the Doomsday Argument is ineffective, whereas "halfers" should agree that doom is imminent; but they are wrong.
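A textbook-style illustration of the Doomsday probability shift the abstract is reacting to (not Dieks's own analysis): a Bayesian update on one's birth rank under a self-sampling-style assumption, with made-up numbers for the two hypotheses.

```python
# Illustrative numbers only: two hypotheses about the total number of humans
# who will ever live, updated on my own birth rank.
prior_soon, prior_late = 0.5, 0.5          # prior credences in "doom soon" / "doom late"
n_soon, n_late = 2e11, 2e14                # total humans under each hypothesis
my_rank = 1e11                             # my birth rank (the 100-billionth human)

# Self-sampling-style likelihood of having exactly this rank under each hypothesis.
like_soon = 1.0 / n_soon if my_rank <= n_soon else 0.0
like_late = 1.0 / n_late if my_rank <= n_late else 0.0

post_soon = prior_soon * like_soon / (prior_soon * like_soon + prior_late * like_late)
print(f"P(doom soon | my birth rank) = {post_soon:.4f}")   # ~0.999: the notorious shift
```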
36.
Wouter Meijs, Igor Douven, Synthese, 2007, 157(3): 347-360
If coherence is to have justificatory status, as some analytical philosophers think it has, it must be truth-conducive, if perhaps only under certain specific conditions. This paper is a critical discussion of some recent arguments that seek to show that under no reasonable conditions can coherence be truth-conducive. More specifically, it considers Bovens and Hartmann’s and Olsson’s “impossibility results,” which attempt to show that coherence cannot possibly be a truth-conducive property. We point to various ways in which the advocates of a coherence theory of justification may attempt to divert the threat of these results.
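For readers unfamiliar with this literature, the sketch below computes two probabilistic coherence measures that appear in the debate the abstract addresses (Olsson's overlap measure and Shogenji's ratio measure) for a toy pair of propositions; the probability values are invented for illustration, and nothing here reproduces the impossibility results themselves.

```python
# Made-up probabilities for two propositions A and B.
p_a, p_b, p_a_and_b = 0.4, 0.5, 0.3

p_a_or_b = p_a + p_b - p_a_and_b

olsson = p_a_and_b / p_a_or_b            # P(A & B) / P(A or B), lies in [0, 1]
shogenji = p_a_and_b / (p_a * p_b)       # > 1 means A and B are positively relevant

print(f"Olsson coherence:   {olsson:.3f}")
print(f"Shogenji coherence: {shogenji:.3f}")
```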
37.
Drawing on Gollwitzer's deliberative–implemental mindset distinction (P. M. Gollwitzer, 1990), it was predicted that people who are deliberating on different actions or goals would be more cautious, or more realistic, in their expectations of success in subsequent tasks than people who are going to implement a chosen action or goal. Participants were given a choice between different test materials. They were interrupted either before (deliberative) or immediately after decision-making (implemental). They then either had to choose between various levels of difficulty within one type of task (Experiment 1) or had to predict their own future performance (Experiment 2). The results showed that deliberative participants preferred less difficult tasks and overestimated their probability of success less than implemental participants did. In addition, deliberative participants referred more than implemental participants to their past performance when selecting levels of difficulty or predicting future performance; however, the two groups did not differ in actual performance. Taken together, the findings suggest that people are more realistic in a deliberative than in an implemental state of mind. The present studies extend prior research because, for the first time, they document mindset effects on people's estimates concerning their future performance in the achievement domain.
38.
39.
Recent research has suggested that people prefer to use the most diagnostic available information as the basis for their choices and decisions, and are most confident in those decisions when the information is highly diagnostic. However, the effect of information diagnosticity on the need for additional information has yet to be investigated; that is, in an optional stopping task, will the amount of information requested depend upon information diagnosticity? Three models of the role of diagnosticity in information use were examined: expected value, a confidence criterion, and information cost. Subjects attempted to categorize stimuli with the aid of information of varying costs and diagnosticity levels. They requested more information when it could be obtained at a low cost. More importantly, across cost conditions, subjects consistently requested greater amounts of information when that information was of low diagnosticity. These data seem most consistent with the use of a confidence criterion that is adjusted for information costs.
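A minimal sketch of a confidence-criterion stopping rule of the kind the abstract favours: an observer requests cues of a given diagnosticity (validity), updates a posterior over two categories by Bayes' rule, and stops once the posterior crosses a confidence threshold. The cue validities, the threshold, and the simulation details are illustrative assumptions, not the paper's design.

```python
import random

def cues_needed(diagnosticity, criterion=0.9, n_trials=5000, seed=2):
    """Average number of cues requested before posterior confidence in either
    category reaches `criterion`. Each cue points to the true category with
    probability `diagnosticity`; updating is Bayesian."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        p_true = 0.5                     # prior that the stimulus is in category 1 (the true one)
        n = 0
        while max(p_true, 1 - p_true) < criterion:
            n += 1
            cue_correct = rng.random() < diagnosticity
            like_true = diagnosticity if cue_correct else 1 - diagnosticity
            like_false = 1 - like_true
            p_true = (p_true * like_true) / (p_true * like_true + (1 - p_true) * like_false)
        total += n
    return total / n_trials

# Lower diagnosticity requires more cues to reach the same confidence criterion.
for d in (0.6, 0.7, 0.8, 0.9):
    print(f"diagnosticity {d}: {cues_needed(d):.2f} cues on average")
```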
40.
The quality of approximations to first- and second-order moments (e.g., statistics like means, variances, and regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use of such statistics to make inferences with respect to true latent ability is not recommended unless we account for the fact that the basic quantities are estimates. In this paper, true score theory is used to account for the latter, the counterpart of observed/true score being estimated/true latent ability. It is shown that statistics based on the true score theory are virtually unbiased if the number of items presented to each examinee is larger than fifteen. Three types of estimators are compared: maximum likelihood, weighted maximum likelihood, and Bayes modal. Furthermore, the (dis)advantages of the true score method and of direct modeling of latent ability are discussed.
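A minimal sketch, under illustrative assumptions, of where the problem comes from: maximum likelihood ability estimates under the two-parameter logistic model are computed for simulated examinees, and the mean and variance of the estimates are compared with those of the true abilities. The item bank, sample sizes, and the plain ML estimator are assumptions; the weighted-ML and Bayes modal estimators and the true-score correction discussed in the paper are not implemented here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: P(correct | theta) for items with
    discriminations a and difficulties b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_ability(responses, a, b):
    """Maximum likelihood estimate of latent ability for a binary response
    vector; the bounded search also caps the estimate for perfect scores."""
    def neg_loglik(theta):
        p = np.clip(p_correct(theta, a, b), 1e-10, 1 - 1e-10)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_loglik, bounds=(-6, 6), method="bounded").x

# Illustrative item bank and simulated examinees.
rng = np.random.default_rng(0)
n_items, n_persons = 20, 1000
a = rng.uniform(0.8, 2.0, n_items)       # discriminations
b = rng.normal(0.0, 1.0, n_items)        # difficulties
theta_true = rng.normal(0.0, 1.0, n_persons)

estimates = []
for t in theta_true:
    resp = (rng.random(n_items) < p_correct(t, a, b)).astype(float)
    estimates.append(ml_ability(resp, a, b))
estimates = np.array(estimates)

print("true mean/var:     ", theta_true.mean(), theta_true.var())
# The variance of the estimates is typically inflated by estimation error,
# which is why naive moments based on ability estimates can mislead.
print("estimated mean/var:", estimates.mean(), estimates.var())
```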