11.
This paper reports on three studies investigating how accurately bettors (i.e., people who regularly bet on sports events) interpret the probabilistic information implied by betting odds. All studies were based on data collected by web surveys prompting a total of 186 experienced bettors to convert sets of representative odds into frequency judgments. Bayesian statistical methods were used to analyze the data. The results support the following conclusions: (i) on the whole, the bettors produced well-calibrated judgments, indicating that they have realistic perceptions of odds; (ii) bettors were unable to consciously adjust their judgments for different bookmaker margins; (iii) although their interval judgments often covered the estimates implied by the odds, the bettors tended to overestimate the month-to-month variation in expected profitable bets. The results are consistent with prior research showing that people tend to make accurate probability judgments when faced with tasks characterized by constant and clear feedback. Copyright © 2015 John Wiley & Sons, Ltd.
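The computation the bettors were asked to perform can be made concrete. Below is a minimal Python sketch, not taken from the paper: decimal odds imply probabilities via their reciprocals, the bookmaker's margin is the excess of those raw probabilities over 1, and proportional normalisation (one common convention; the paper does not specify its adjustment) removes the margin. The odds values are illustrative.

```python
def implied_probabilities(decimal_odds):
    """Raw implied probabilities: the reciprocals of decimal odds."""
    return [1.0 / o for o in decimal_odds]

def bookmaker_margin(decimal_odds):
    """The margin is the amount by which the raw probabilities exceed 1."""
    return sum(implied_probabilities(decimal_odds)) - 1.0

def fair_probabilities(decimal_odds):
    """Proportional normalisation (an assumed convention): rescale the raw
    probabilities to sum to 1, removing the margin."""
    raw = implied_probabilities(decimal_odds)
    total = sum(raw)
    return [p / total for p in raw]

odds = [2.10, 3.40, 3.60]                  # home/draw/away, illustrative
print(round(bookmaker_margin(odds), 3))    # ~0.048, i.e. a 4.8% margin
print([round(p, 3) for p in fair_probabilities(odds)])
```

Under this reading, conclusion (ii) amounts to saying that bettors' frequency judgments track the raw reciprocals, without a conscious correction for the margin.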
12.
In multiple-cue probabilistic inference, people choose between alternatives based on several cues, each of which is differentially associated with an alternative's overall value. Various strategies have been proposed for probabilistic inference (e.g., weighted additive, tally, and take-the-best). These strategies differ in how many cue values they require and in how they weight each cue. Do decision makers actually use any of these strategies? One way to investigate this question is to analyze people's choices and the cues they reveal during search. However, different strategies often predict the same decisions, and search behavior says nothing about whether or how people use the information they acquire. In this research, we attempt to elucidate which strategies participants use in a multiple-cue probabilistic inference task by examining verbal protocols, a high-density source of process data. The promise of verbal data lies in their utility for testing detailed information-processing models. To that end, we apply protocol analysis in conjunction with computational simulations. We find converging evidence across outcome measures, search measures, and verbal reports that most participants use simplifying heuristics, namely take-the-best. Copyright © 2015 John Wiley & Sons, Ltd.
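For readers unfamiliar with the strategies named above, the sketch below contrasts take-the-best with tally on a two-alternative choice with binary cues; the cue validities and values are illustrative, not the study's materials.

```python
def take_the_best(cues_a, cues_b, validities):
    """Inspect cues in descending validity; decide on the first cue that
    discriminates between the alternatives; guess if none does."""
    order = sorted(range(len(validities)), key=lambda i: -validities[i])
    for i in order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"

def tally(cues_a, cues_b, validities):
    """Ignore validities entirely; count positive cues for each alternative."""
    a, b = sum(cues_a), sum(cues_b)
    return "A" if a > b else "B" if b > a else "guess"

validities = [0.9, 0.8, 0.7, 0.6]
a, b = [1, 0, 0, 1], [0, 1, 1, 1]
print(take_the_best(a, b, validities))  # "A": the most valid cue decides
print(tally(a, b, validities))          # "B": more positive cues overall
```

On this example the two strategies disagree, which is exactly why outcome data alone often cannot identify the strategy in use, and why process data such as verbal protocols matter.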
13.
Traditionally, the parameters of multiattribute utility models, which represent a decision maker's preference judgements, are treated deterministically. This may be unrealistic, because the assessment of such parameters is potentially fraught with imprecision and error. We therefore treat these parameters as stochastic and investigate how the associated imprecision and errors propagate through an additive multiattribute utility function, in terms of the variance of the aggregate utility. Both a no-information case and a rank-order case regarding the attribute weights are considered, assuming a uniform distribution over the feasible region of attribute weights constrained by the respective information assumption. In general, as the number of attributes increases, the variance of the aggregate utility in both cases decreases and approaches the same limit, which depends only on the variances of, and the correlations among, the single-attribute utilities. However, the marginal change in aggregate utility variance diminishes rather rapidly; hence decomposition as a variance reduction mechanism is generally useful but becomes relatively ineffective once the number of attributes exceeds about 10. Moreover, positively correlated single-attribute utilities were found to increase the aggregate utility variance, so every effort should be made to avoid positive correlations between them. We also provide guidelines for determining under what conditions, and to what extent, a decision maker should decompose to obtain an aggregate utility variance smaller than that of holistic assessments. Extensions of the current model and empirical research to support some of our behavioural assumptions are discussed. © 1997 John Wiley & Sons, Ltd.
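The propagation question can also be explored numerically. The following Monte Carlo sketch rests on stated assumptions rather than the paper's analytical derivation: "no information" weights drawn uniformly from the simplex via Dirichlet(1, …, 1), and equicorrelated normal single-attribute utilities with variance 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_utility_variance(n, rho=0.0, sims=20_000):
    """Variance of U = w . u with 'no information' weights uniform on the
    simplex and single-attribute utilities sharing pairwise correlation rho."""
    w = rng.dirichlet(np.ones(n), size=sims)          # one weight vector per sim
    cov = 0.01 * (np.full((n, n), rho) + (1 - rho) * np.eye(n))
    u = rng.multivariate_normal(np.full(n, 0.5), cov, size=sims)
    return float(np.var((w * u).sum(axis=1)))

for n in (2, 5, 10, 20):
    print(n, aggregate_utility_variance(n))       # decreases as n grows
print(aggregate_utility_variance(10, rho=0.5))    # positive rho inflates it
```

Under these particular assumptions the zero-correlation variance behaves like 0.02/(n+1), which illustrates both the rapid early gains from decomposition and the diminishing returns beyond roughly ten attributes.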
14.
This study compared the ability of seven statistical models to distinguish between linked and unlinked crimes. The seven models utilised geographical, temporal, and modus operandi information relating to residential burglaries (n = 180), commercial robberies (n = 118), and car thefts (n = 376). Model performance was assessed using receiver operating characteristic analysis and by examining how successfully the seven models prioritised linked over unlinked crimes. The regression-based and probabilistic models achieved comparable accuracy and were generally more accurate than the tree-based models tested in this study. The Logistic algorithm achieved the highest area under the curve (AUC) for residential burglary (AUC = 0.903) and commercial robbery (AUC = 0.830), and the SimpleLogistic algorithm achieved the highest for car theft (AUC = 0.820). The findings also indicated that discrimination accuracy is maximised (in some situations) when behavioural domains are utilised rather than individual crime scene behaviours, and that the AUC should not be used as the sole measure of accuracy in behavioural crime linkage research.
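The evaluation logic (score candidate crime pairs as linked versus unlinked, then summarise discrimination with the AUC) can be sketched with synthetic data, as below. The feature columns are illustrative stand-ins for geographical, temporal, and modus operandi similarity measures; nothing here reproduces the study's models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
linked = rng.binomial(1, 0.3, n)  # 1 = same offender, 0 = different
X = np.column_stack([
    rng.exponential(2 + 8 * (1 - linked)),               # km apart: linked pairs closer
    rng.exponential(5 + 30 * (1 - linked)),              # days apart: linked pairs nearer
    np.clip(rng.normal(0.3 + 0.4 * linked, 0.2), 0, 1),  # MO similarity in [0, 1]
])

X_tr, X_te, y_tr, y_te = train_test_split(X, linked, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
# AUC: probability that a random linked pair outscores a random unlinked pair.
print(roc_auc_score(y_te, scores))
```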
15.
Decisions regarding consumption over the lifespan require some estimate of how long that lifespan is likely to be. Payne et al. (2013) found that respondents' estimates of their own life expectancy are on average 8.6 years shorter when elicited using a die-by frame than when elicited using a live-to frame. If decision makers act on these life expectancies, then an arbitrary detail of framing will lead to drastically different choices. We propose that the framing effect is sensitive to the iterative probabilistic elicitation procedure employed in the previous literature. Study 1 compares the framing effect across the iterative probabilistic procedure and a point estimate procedure that simply asks respondents the age they will live to (or die by). The iterative probabilistic procedure implies a life expectancy six years shorter in the die-by frame than in the live-to frame, replicating the results of Payne et al. (2013). With the point estimate procedure, however, the framing effect reverses: the die-by frame increases life expectancy by three years. In Study 2, we test for the framing effect using a point estimate procedure on a representative sample of 2000 Britons. Again, and in contrast with the previous literature, we find that the die-by frame implies a longer life. Our results reinforce the previous literature in showing that beliefs about life expectancy are constructed. We recommend caution when attempting to debias life expectancy estimates or when using life expectancies in choice architecture. Copyright © 2017 John Wiley & Sons, Ltd.
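To make the two frames concrete, here is a minimal sketch, not the authors' instrument, of turning a ladder of elicited survival judgments into an implied life expectancy. The ages, the illustrative answers, and the linear interpolation between elicited points are all assumptions. The implied expectancy is the current age plus the area under the subjective survival curve; eliciting P(die by X) instead and taking complements gives the die-by analogue, so a framing gap shows up directly as a shifted expectancy.

```python
import numpy as np

def trapezoid(y, x):
    """Area under a piecewise-linear curve through the points (x, y)."""
    return float(np.sum((y[:-1] + y[1:]) / 2 * np.diff(x)))

current_age = 40
ages = np.array([65, 75, 85, 95, 105])
grid = np.concatenate([[current_age], ages])

# "Live-to" frame: elicited P(alive at age X); illustrative answers.
p_live_to = np.array([0.95, 0.80, 0.45, 0.10, 0.0])
surv = np.concatenate([[1.0], p_live_to])
print(current_age + trapezoid(surv, grid))    # ~82.6 implied expectancy

# "Die-by" frame: elicited P(dead by age X); complement gives survival.
p_die_by = np.array([0.10, 0.30, 0.65, 0.95, 1.0])
surv_d = np.concatenate([[1.0], 1 - p_die_by])
print(current_age + trapezoid(surv_d, grid))  # ~79.3: a framing gap
```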
16.
In prediction, subset relations require that the probability of conjoined events never exceed that of a constituent event. However, people's judgments regularly violate this principle, producing conjunction errors. In diagnosis, by contrast, the probability of a hypothesis is often normatively higher given conjoined cues. An online survey used a within-subjects design to explore the degree to which participants (n = 347) differentiated diagnosis from prediction, using matched scenarios and both choice and estimation responses. Conjunctions were judged more probable than a constituent more often in diagnosis (76%) than in prediction (64%), and more often in choice (84%) than in direct estimation (57%), with no interaction between type of task and response mode. Correlation, regression, and path analyses were used to determine the relationships among individual difference variables and the diagnosis and prediction tasks. Among the correlational findings: time spent on the task predicted higher conjunction probabilities in diagnosis but not prediction, and class inclusion errors predicted increased conjunction errors in choice but not estimation. Need for cognition and numeracy were only minimally related to reasoning about conjunctions. The results are consistent with the idea that people may misapply diagnostic reasoning to the prediction task and consequently commit the conjunction error. Copyright © 2010 John Wiley & Sons, Ltd.
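The normative asymmetry can be verified with a few lines of arithmetic: the conjunction rule caps prediction, while Bayes' rule can favour the conjunction in diagnosis. The numbers below are illustrative but probabilistically coherent.

```python
# Diagnosis: posterior probability of a hypothesis H given one cue (A)
# versus two conjoined cues (A and B), via Bayes' rule.
p_h = 0.2                       # prior on the hypothesis
p_a_given_h, p_a = 0.9, 0.3     # a single cue
p_ab_given_h, p_ab = 0.6, 0.14  # both cues; note P(A and B) <= P(A)

posterior_one = p_a_given_h * p_h / p_a      # P(H | A)    = 0.60
posterior_both = p_ab_given_h * p_h / p_ab   # P(H | A, B) ~ 0.857
print(posterior_one, posterior_both)         # diagnosis: conjunction wins

# Prediction: for any events, P(A and B) <= P(A). Judging the conjunction
# as more probable than a constituent here is the conjunction error.
```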
17.
Formal nonmonotonic systems try to model the phenomenon that common sense reasoners are able to “jump” in their reasoning from assumptions Δ to conclusions C without there being any deductive chain from Δ to C. Such jumps are made by various mechanisms that depend strongly on context and on knowledge of how the actual world functions. Our aim is to motivate these jump rules as inference rules designed to optimise survival in an environment with scant resources of effort and time. We begin with a general discussion and quickly move to Section 3, where we introduce five resource principles. We show that these principles lead to some well-known nonmonotonic systems, such as Nute's defeasible logic. We also give several examples of practical reasoning situations to illustrate our principles. Edited by Hannes Leitgeb
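As a toy illustration of such a "jump", in the spirit of Nute's defeasible logic rather than its formal definition, the sketch below fires defeasible rules as cheap defaults and withdraws a conclusion only when a higher-priority rule for the opposite conclusion also applies. The rule set and the priority scheme are illustrative.

```python
# Each rule: (conclusion, premises, priority); higher priority = more specific.
rules = [
    ("flies",     {"bird"},            1),
    ("not flies", {"bird", "penguin"}, 2),
]

def defeasible_conclusions(facts):
    """Fire every rule whose premises hold; keep a conclusion unless an
    applicable rule for its negation has strictly higher priority."""
    fired = [(c, p) for c, pre, p in rules if pre <= facts]
    conclusions = set()
    for concl, prio in fired:
        neg = concl[4:] if concl.startswith("not ") else "not " + concl
        if not any(c == neg and p2 > prio for c, p2 in fired):
            conclusions.add(concl)
    return conclusions

print(defeasible_conclusions({"bird"}))             # {'flies'}: cheap default
print(defeasible_conclusions({"bird", "penguin"}))  # {'not flies'}: defeated
```

Even this toy shows the resource motivation: the default conclusion is reached without first proving that no exception holds, which is the kind of effort saving the paper's resource principles are meant to capture.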
18.
As we navigate a world full of uncertainties and risks, dominated by statistics, we need to be able to think statistically. Very few studies investigating people's ability to understand simple concepts and rules from probability theory have drawn representative samples from the public. For this reason, we presented six probabilistic problems to a representative sample of 1000 Swiss citizens. Most reasoned appropriately on problems representing pure applications of probability theory but failed to do so on approximations of real-world scenarios, a disparity we replicated in a sample of first-year psychology students. Additionally, education is associated with probabilistic numeracy on the former but not the latter type of problem. We discuss possible reasons for these task disparities and suggest that gaining a comprehensive picture of citizens' probabilistic competence and its determinants requires using both types of task. Copyright © 2008 John Wiley & Sons, Ltd.
19.
The fact that the standard probabilistic calculus does not define probabilities for sentences with embedded conditionals is a fundamental problem for the probabilistic theory of conditionals. Several authors have explored ways to assign probabilities to such sentences, but those proposals have come under criticism for making counterintuitive predictions. This paper examines the source of the problematic predictions and proposes an amendment which corrects them in a principled way. The account brings intuitions about counterfactual conditionals to bear on the interpretation of indicatives and relies on the notion of causal (in)dependence.
20.
The (univariate) isotonic psychometric (ISOP) model (Scheiblechner, 1995) is a nonparametric IRT model for dichotomous and polytomous (rating scale) psychological test data. A weak subject independence axiom W1 postulates that all items of a psychological test order the subjects in the same way except for ties (i.e., similarly or isotonically). A weak item independence axiom W2 postulates that the order of the items is similar for all subjects. Local independence (LI or W3) is assumed in all models. With these axioms, sample-free unidimensional ordinal measurements of items and subjects become feasible. Adding a cancellation axiom (Co) yields the additive isotonic psychometric (ADISOP) model and interval scales for subjects and items, and adding an independence axiom (W4) yields the completely additive isotonic psychometric (CADISOP) model with an interval scale for the response variable (Scheiblechner, 1999). The d-ISOP, d-ADISOP, and d-CADISOP models are generalizations to d-dimensional dependent variables (e.g., speed and accuracy of response). The author would like to thank an Associate Editor, two anonymous referees, and Professor H.H. Schulze for their very valuable suggestions and corrections.
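As a reading aid, axiom W1 can be checked directly on a subjects-by-items score matrix: any two items may tie subjects but must never strictly reverse their order. The sketch below is a paraphrase of that requirement, not Scheiblechner's formal treatment.

```python
import numpy as np
from itertools import combinations

def satisfies_w1(X):
    """X: subjects-by-items score matrix. W1 holds if no pair of items
    orders any pair of subjects in strictly opposite directions."""
    n_subj, n_items = X.shape
    for i, j in combinations(range(n_items), 2):
        for s, t in combinations(range(n_subj), 2):
            d1 = X[s, i] - X[t, i]
            d2 = X[s, j] - X[t, j]
            if d1 * d2 < 0:  # a strict reversal violates W1
                return False
    return True

isotone = np.array([[0, 0, 0],
                    [1, 0, 1],
                    [1, 1, 2]])  # ties allowed, never a reversal
print(satisfies_w1(isotone))                     # True
print(satisfies_w1(np.array([[1, 0], [0, 1]])))  # False: the items disagree
```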