Article Search
  Subscription full text: 76 articles
  Free: 15 articles
  Free (domestic): 6 articles
  Total: 97 articles
  2021: 2 articles
  2020: 4 articles
  2019: 5 articles
  2018: 2 articles
  2017: 4 articles
  2016: 2 articles
  2015: 3 articles
  2014: 5 articles
  2013: 4 articles
  2012: 1 article
  2011: 2 articles
  2010: 3 articles
  2009: 7 articles
  2008: 6 articles
  2007: 5 articles
  2006: 2 articles
  2005: 4 articles
  2004: 4 articles
  2003: 3 articles
  2002: 4 articles
  2001: 3 articles
  2000: 3 articles
  1999: 3 articles
  1998: 1 article
  1997: 3 articles
  1996: 1 article
  1995: 2 articles
  1993: 2 articles
  1992: 2 articles
  1989: 1 article
  1987: 1 article
  1985: 1 article
  1980: 1 article
  1978: 1 article
Sort order: 97 results found, search time 0 ms
1.
In a recent paper, Knez (1991) showed an interaction of data and hypotheses in probabilistic inference tasks. The results illustrated two previously unobtained significant main effects on subjects' hypothesis sampling, viz. the effect of different forms of data presentation and of subjects' exercise of cognitive control over their hypothesis pool throughout the series of trials. The present paper follows up these results by subjecting the subjects' hypothesis testing in Knez (1991) to an analysis, in order to see whether the effects mentioned above significantly influenced hypothesis testing as they did hypothesis sampling. The results are consistent with Knez (1991): they emphasize the interaction of data and hypotheses in probabilistic inference tasks, as well as the subjects' exercise of cognitive control over their hypothesis pool, with respect to both hypothesis sampling and hypothesis testing.
2.
Abstract: A probabilistic multidimensional scaling model is proposed. The model assumes that the coordinates of each stimulus are normally distributed with variance Σ_i = diag(σ_1², …, σ_{R_i}²). The advantage of this model is that axes are determined uniquely. The distribution of the distance between two stimuli is obtained by polar coordinates transformation. The method of maximum likelihood estimation for means and variances using the EM algorithm is discussed. Further, simulated annealing is suggested as a means of obtaining initial values in order to avoid local maxima. A simulation study shows that the estimates are accurate, and a numerical example concerning the location of Japanese cities shows that natural axes can be obtained without introducing individual parameters.
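To make the sampling assumption concrete, here is a minimal Monte Carlo sketch (not the paper's EM estimation procedure) of the distance between two stimuli whose coordinates are independent normals with diagonal covariance; the dimensionality, means, and variances below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_distances(mu_i, var_i, mu_j, var_j, n_draws=100_000):
    """Monte Carlo draws of the Euclidean distance between two stimuli whose
    coordinates are independent normals with diagonal covariance matrices."""
    xi = rng.normal(mu_i, np.sqrt(var_i), size=(n_draws, len(mu_i)))
    xj = rng.normal(mu_j, np.sqrt(var_j), size=(n_draws, len(mu_j)))
    return np.linalg.norm(xi - xj, axis=1)

# Hypothetical 2-D example with different per-axis variances for each stimulus.
d = sample_distances(mu_i=[0.0, 0.0], var_i=[0.4, 0.1],
                     mu_j=[2.0, 1.0], var_j=[0.2, 0.3])
print(f"mean distance ~ {d.mean():.3f}, sd ~ {d.std():.3f}")
```

The paper's contribution is the maximum likelihood estimation of the means and variances via EM; the simulation above only illustrates the kind of distance distribution being modeled.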
3.
The concept of an ordinal instrumental probabilistic comparison is introduced. It relies on an ordinal scale given a priori and on the concept of stochastic dominance. It is used to define a weakly independently ordered system, or isotonic ordinal probabilistic (ISOP) model, which allows the construction of separate sample-free ordinal scales on a set of subjects and a set of items. The ISOP model is a common nonparametric theoretical structure for unidimensional models for quantitative, ordinal and dichotomous variables. Fundamental theorems on dichotomous and polytomous weakly independently ordered systems are derived. It is shown that the raw score system has the same formal properties as the latent system, and therefore the latter can be tested at the observed empirical level. I wish to thank 3 reviewers and 2 editors who contributed a lot to the readability and precision of the article.
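Because the ISOP construction rests on stochastic dominance over an a priori ordinal scale, a small sketch of a first-order stochastic dominance check between two ordered-category response distributions may help; the category counts are invented for illustration and are not from the article.

```python
import numpy as np

def dominates(p, q):
    """True if distribution p first-order stochastically dominates q over the
    same ordered categories: p's cumulative mass never exceeds q's, i.e. p
    never puts more probability on the low end at any cut point."""
    P, Q = np.cumsum(p), np.cumsum(q)
    return bool(np.all(P <= Q + 1e-12))

# Hypothetical response frequencies over four ordered categories (low -> high).
item_a = np.array([5, 10, 20, 15]) / 50
item_b = np.array([12, 15, 15, 8]) / 50
print(dominates(item_a, item_b))  # True: item_a shifts mass toward the high end
```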
4.
Although research on language production has developed detailed maps of the brain basis of single word production in both time and space, little is known about the spatiotemporal dynamics of the processes that combine individual words into larger representations during production. Studying composition in production is challenging due to difficulties both in controlling produced utterances and in measuring the associated brain responses. Here, we circumvent both problems using a minimal composition paradigm combined with the high temporal resolution of magnetoencephalography (MEG). With MEG, we measured the planning stages of simple adjective–noun phrases (‘red tree’), matched list controls (‘red, blue’), and individual nouns (‘tree’) and adjectives (‘red’), with results indicating combinatorial processing in the ventro-medial prefrontal cortex (vmPFC) and left anterior temporal lobe (LATL), two regions previously implicated in the comprehension of similar phrases. These effects began relatively quickly (∼180 ms) after the presentation of a production prompt, suggesting that combination commences with initial lexical access. Further, while in comprehension, vmPFC effects have followed LATL effects, in this production paradigm vmPFC effects occurred mostly in parallel with LATL effects, suggesting that a late process in comprehension is an early process in production. Thus, our results provide a novel neural bridge between psycholinguistic models of comprehension and production that posit functionally similar combinatorial mechanisms operating in reversed order.
5.
ABSTRACT. We explored the utility of analyzing within- and between-balloon response patterns on a balloon analogue task (BAT) in relation to overall risk scores, and to a choice between a small guaranteed cash reward and an uncertain reward of the same expected value. Young adults (n = 61) played a BAT, and then were offered a choice between $5 in cash and betting to win $0 to $15. Between groups, pumping was differentially influenced by explosions and by the number of successive unexploded balloons, with risk takers responding increasingly on successive balloons after an explosion. Within balloons, risk takers showed a characteristic pattern of constant high rate, while non-risk takers showed a characteristic variable lower rate. Overall, the results show that the higher number of pumps and explosions that characterizes risk takers at a molar level results from particular forms of adaptation to the positive and negative outcomes of choices seen at a molecular level.
6.
A tendency to overestimate threat has been shown in individuals with OCD. We tested the hypothesis that this bias in judgment is related to difficulties in learning probabilistic associations between events. Thirty participants with OCD and 30 matched healthy controls completed a learning experiment involving 2 variants of a probabilistic classification learning task. In the neutral weather-prediction task, rainy and sunny weather had to be predicted. In the emotional task, the danger of an epidemic from virus infection had to be predicted (epidemic-prediction task). Participants with OCD were as able as controls to improve their prediction of neutral events across learning trials but scored significantly below healthy controls on the epidemic-prediction task. Lower performance on the emotional task variant was significantly related to a heightened tendency to overestimate threat. Biased information processing in OCD might thus hamper corrective experiences regarding the probability of threatening events.
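As a rough illustration of what a probabilistic classification learning task involves (the cue structure and probabilities below are invented, not the study's materials), the sketch simulates trials in which binary cue patterns predict an outcome only probabilistically and a simple frequency-tracking learner gradually improves its predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cue-pattern -> P(rain) mapping for a weather-prediction-style task.
P_RAIN = {(0, 0): 0.2, (0, 1): 0.4, (1, 0): 0.6, (1, 1): 0.8}

def run_learner(n_trials=200):
    """Frequency-tracking learner: for the presented cue pattern, predict the
    outcome observed most often so far, then update the tallies."""
    counts = {cue: [0, 0] for cue in P_RAIN}       # [sun, rain] tallies per cue pattern
    n_correct = 0
    for _ in range(n_trials):
        cue = tuple(int(c) for c in rng.integers(0, 2, size=2))
        outcome = int(rng.random() < P_RAIN[cue])  # 1 = rain
        prediction = int(counts[cue][1] >= counts[cue][0])
        n_correct += prediction == outcome
        counts[cue][outcome] += 1
    return n_correct / n_trials

print(f"proportion correct over 200 trials ~ {run_learner():.2f}")
```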
7.
8.
This paper reports on three studies investigating how accurately bettors (i.e., people who regularly bet on sports events) interpret the probabilistic information implied by betting odds. All studies were based on data collected by web surveys prompting a total of 186 experienced bettors to convert sets of representative odds into frequency judgments. Bayesian statistical methods were used to analyze the data. From the results, the following conclusions were made: (i) On the whole, the bettors produced well‐calibrated judgments, indicating that they have realistic perceptions of odds. (ii) Bettors were unable to consciously adjust judgments for different margins. (iii) Although their interval judgments often covered the estimates implied by the odds, the bettors tended to overestimate the variation of expected profitable bets between months. The results are consistent with prior research showing that people tend to make accurate probability judgments when faced with tasks characterized by constant and clear feedback. Copyright © 2015 John Wiley & Sons, Ltd.
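The judgments studied here concern the probabilities implied by betting odds, which embed a bookmaker margin. Below is a minimal sketch, assuming decimal odds and simple proportional normalization (one common way to strip the margin); the example odds are made up.

```python
def implied_probabilities(decimal_odds):
    """Convert decimal odds on mutually exclusive outcomes into implied
    probabilities: the raw values 1/odds sum to more than 1 because of the
    bookmaker's margin; normalizing them proportionally removes it."""
    raw = [1.0 / o for o in decimal_odds]
    overround = sum(raw)
    margin = overround - 1.0
    normalized = [p / overround for p in raw]
    return margin, normalized

# Hypothetical home/draw/away odds for one match.
margin, probs = implied_probabilities([2.10, 3.40, 3.60])
print(f"margin ~ {margin:.1%}; normalized probabilities: "
      + ", ".join(f"{p:.3f}" for p in probs))
```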
9.
Traditionally, parameters of multiattribute utility models, representing a decision maker's preference judgements, are treated deterministically. This may be unrealistic, because assessment of such parameters is potentially fraught with imprecisions and errors. We thus treat such parameters as stochastic and investigate how their associated imprecision/errors are propagated in an additive multiattribute utility function in terms of the aggregate variance. Both a no-information and a rank-order case regarding the attribute weights are considered, assuming a uniform distribution over the feasible region of attribute weights constrained by the respective information assumption. In general, as the number of attributes increases, the variance of the aggregate utility in both cases decreases and approaches the same limit, which depends only on the variances as well as the correlations among the single-attribute utilities. However, the marginal change in aggregate utility variance decreases rather rapidly, and hence decomposition as a variance reduction mechanism is generally useful but becomes relatively ineffective if the number of attributes exceeds about 10. Moreover, it was found that positively correlated utilities increase the aggregate utility variance; hence every effort should be made to avoid positive correlations between the single-attribute utilities. We also provide guidelines for determining under what conditions and to what extent a decision maker should decompose to obtain an aggregate utility variance that is smaller than that of holistic assessments. Extensions of the current model and empirical research to support some of our behavioural assumptions are discussed. © 1997 John Wiley & Sons, Ltd.
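The propagation described here follows standard variance algebra for a weighted sum: for U = Σ_k w_k·u_k, Var(U) = Σ_j Σ_k w_j·w_k·Cov(u_j, u_k). The sketch below checks the reported trend under assumed equal weights, a common single-attribute standard deviation, and a common positive correlation (all illustrative values, not the paper's uniform-distribution setup).

```python
import numpy as np

def aggregate_variance(weights, sds, corr):
    """Variance of U = sum_k w_k * u_k, i.e. w' (D R D) w, where D is the
    diagonal matrix of single-attribute standard deviations and R the
    correlation matrix of the single-attribute utilities."""
    D = np.diag(sds)
    cov = D @ corr @ D
    w = np.asarray(weights, dtype=float)
    return float(w @ cov @ w)

# Equal weights 1/n, common sd sigma, common pairwise correlation rho:
# the aggregate variance falls with n and approaches rho * sigma**2.
sigma, rho = 0.1, 0.3
for n in (2, 5, 10, 20, 50):
    w = np.full(n, 1.0 / n)
    R = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
    print(n, round(aggregate_variance(w, np.full(n, sigma), R), 5))
```

With these illustrative numbers the marginal reduction in aggregate variance is already small beyond roughly ten attributes, matching the qualitative conclusion above.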
10.
As we navigate a world full of uncertainties and risks, dominated by statistics, we need to be able to think statistically. Very few studies investigating people's ability to understand simple concepts and rules from probability theory have drawn representative samples from the public. For this reason we investigated a representative sample of 1000 Swiss citizens, using six probabilistic problems. Most reasoned appropriately in problems representing pure applications of probability theory, but failed to do so in approximations of real‐world scenarios – a disparity we replicated in a sample of first‐year psychology students. Additionally, education is associated with probabilistic numeracy in the former but not the latter type of problems. We discuss possible reasons for these task disparities and suggest that gaining a comprehensive picture of citizens' probabilistic competence and its determinants requires using both types of tasks. Copyright © 2008 John Wiley & Sons, Ltd.