181.
It was investigated whether commonly used factor score estimates lead to the same reproduced covariance matrix of observed variables. This was achieved by means of Schönemann and Steiger’s (1976) regression component analysis, since it is possible to compute the reproduced covariance matrices of the regression components corresponding to different factor score estimates. It was shown that Thurstone’s, Ledermann’s, Bartlett’s, Anderson-Rubin’s, McDonald’s, Krijnen, Wansbeek, and Ten Berge’s, as well as Takeuchi, Yanai, and Mukherjee’s score estimates reproduce the same covariance matrix. In contrast, Harman’s ideal variables score estimates lead to a different reproduced covariance matrix.
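As a minimal illustration of what is being compared (not a reproduction of the paper's regression component analysis), the sketch below builds Thurstone (regression) and Bartlett factor score weights from an assumed orthogonal factor model and prints the covariance matrices of the resulting score estimates; the loading matrix and unique variances are made-up values.

```python
# Minimal sketch (not the paper's regression component analysis): construct
# Thurstone (regression) and Bartlett factor score weights from an assumed
# orthogonal factor model and compare the covariances of the score estimates.
# The loading matrix and unique variances below are made-up illustration values.
import numpy as np

Lam = np.array([[.8, .0],
                [.7, .0],
                [.6, .0],
                [.0, .7],
                [.0, .6],
                [.0, .5]])                       # hypothetical loadings: 6 variables, 2 factors
Psi = np.diag(1 - np.sum(Lam**2, axis=1))        # unique variances (standardized model)
Sigma = Lam @ Lam.T + Psi                        # model-implied covariance of the observables

W_thurstone = np.linalg.solve(Sigma, Lam)        # Sigma^{-1} Lambda
W_bartlett = np.linalg.solve(Psi, Lam) @ np.linalg.inv(Lam.T @ np.linalg.solve(Psi, Lam))

# Covariance of the score estimates W'x implied by each weight matrix
for name, W in [("Thurstone", W_thurstone), ("Bartlett", W_bartlett)]:
    print(name, "\n", np.round(W.T @ Sigma @ W, 3))
```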
182.
The reference class problem is your problem too   (total citations: 2; self-citations: 0; citations by others: 2)
Alan Hájek, Synthese, 2007, 156(3): 563-585
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability: accounts that leave it quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
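A toy numerical illustration of the problem (the figures are invented, not from the paper): a 60-year-old male smoker belongs to several reference classes, each with a well-defined relative frequency of living to 80, say P(lives to 80 | male) ≈ 0.60, P(lives to 80 | smoker) ≈ 0.40, and P(lives to 80 | male smoker) ≈ 0.45. Each conditional probability is unproblematic; what has no unique answer is which of them counts as the probability for this individual, which is the sense in which conditional probability can serve as the primitive notion while the single-case assignment remains contested.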
183.
One of the most widely used methods for probability encoding in decision analysis uses binary comparisons (choices) between two lotteries: one that depends on the values of the random variable of interest and another that is contingent on an external reference chance device (typically a probability wheel). This note investigates the degree to which differences in probability weighting functions between the two types of events could affect the practice of subjective probability encoding. We develop a general methodology to investigate this question and illustrate it with two popular probability weighting functions over the range of parameters reported in the literature. We use this methodology to (a) alert decision analysts and researchers to the possibility of reversals, (b) identify the circumstances under which overt preferences for one lottery over the other are not affected by the weighting function, (c) document the magnitude of the differences between choices based on probabilities and their corresponding weighting functions, and (d) offer practical recommendations for probability elicitation.
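A small sketch of the kind of check described, assuming (for illustration only) the one-parameter Tversky-Kahneman weighting form and hypothetical parameter values; the paper's own weighting functions and parameter ranges may differ. A reversal at a given probability level means the overtly preferred lottery changes once event-specific weighting is taken into account.

```python
# Sketch under assumptions (not the paper's exact functions or parameters):
# use the Tversky-Kahneman (1992) weighting form w(p) = p^g / (p^g + (1-p)^g)^(1/g)
# and ask when a lottery on the target event is preferred to a probability-wheel
# lottery with the same prize, once each event is weighted with its own parameter.

def tk_weight(p, gamma):
    """One-parameter probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def reversal(p_event, p_wheel, gamma_event, gamma_wheel):
    """True if weighting flips the preference implied by the raw probabilities."""
    prefer_event_raw = p_event > p_wheel
    prefer_event_weighted = tk_weight(p_event, gamma_event) > tk_weight(p_wheel, gamma_wheel)
    return prefer_event_raw != prefer_event_weighted

# Illustrative scan: the target event is weighted with gamma = 0.6 and the wheel
# with gamma = 0.8; both values are hypothetical but within commonly reported ranges.
for p_event in (0.05, 0.10, 0.30, 0.50, 0.70, 0.90):
    p_wheel = p_event - 0.02          # wheel set just below the event probability
    print(p_event, p_wheel, reversal(p_event, p_wheel, 0.6, 0.8))
```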
184.
Probability matching is a suboptimal behavior that often plagues human decision-making in simple repeated choice tasks. Despite decades of research, recent studies cannot find agreement on what choice strategies lead to probability matching. We propose a solution, showing that two distinct local choice strategies—which make different demands on executive resources—both result in probability-matching behavior on a global level. By placing participants in a simple binary prediction task under dual- versus single-task conditions, we find that individuals with compromised executive resources are driven away from a one-trial-back strategy (utilized by participants with intact executive resources) and towards a strategy that integrates a longer window of past outcomes into the current prediction. Crucially, both groups of participants exhibited probability-matching behavior to the same extent at a global level of analysis. We suggest that these two forms of probability matching are byproducts of the operation of explicit versus implicit systems.
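A minimal simulation of the two local strategies described, under assumed task parameters (a binary outcome with probability 0.7 and a 10-trial window; both values are arbitrary): predicting the previous outcome and sampling a prediction from the recent outcome frequency both choose the more frequent outcome on roughly 70% of trials, i.e., both look like probability matching at the global level.

```python
# Minimal simulation (illustrative parameters, not the paper's task): two local
# strategies that both look like probability matching at the global level.
import random
random.seed(1)

P_A, N, K = 0.7, 20000, 10                              # outcome probability, trials, window
outcomes = [random.random() < P_A for _ in range(N)]    # True = outcome A

# Strategy 1: one-trial-back -- predict whatever occurred on the previous trial.
pred_back = [outcomes[t - 1] for t in range(1, N)]

# Strategy 2: window integration -- predict A with probability equal to its
# frequency among the last K outcomes.
pred_window = []
for t in range(K, N):
    freq_a = sum(outcomes[t - K:t]) / K
    pred_window.append(random.random() < freq_a)

print("P(A):", P_A)
print("one-trial-back  chooses A on", sum(pred_back) / len(pred_back), "of trials")
print("window strategy chooses A on", sum(pred_window) / len(pred_window), "of trials")
```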
185.
For a study with multinomial data in which there are n_g individuals, each with n_r test trials, the question arises as to how to fit the parameters of a multinomial processing tree (MPT) model. Should each parameter be estimated for each individual and then averaged to obtain a group estimate, or should the frequencies in the multinomial categories be pooled so that the model is fit once for the entire group? This basic question is explored with a series of Monte Carlo simulations for some prototypical MPT models. There is a general finding of a pooling advantage for the case where there is a single experimental condition. Also, when there are different experimental conditions, there is reduced bias for detecting condition differences with a method based on the pooled data. Although the focus of the paper is on multinomial models, a general theorem is advanced that establishes a basic condition determining whether or not there is a difference between the averaging of individual estimates and the estimate based on the pooled data.
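The sketch below illustrates the pooling-versus-averaging question with a deliberately tiny tree (a one-high-threshold model, not one of the paper's prototypical MPT models); with few trials per person and heterogeneous parameters, the average of individual estimates and the estimate from pooled frequencies need not agree.

```python
# Monte Carlo sketch with a toy one-high-threshold tree (not one of the paper's
# prototypical MPT models): hits = d + (1-d)g, false alarms = g. With few trials
# per person and heterogeneous parameters, averaging individual estimates of d
# and estimating d once from the pooled rates need not agree.
import numpy as np
rng = np.random.default_rng(0)

n_sims, n_part, n_trials = 2000, 20, 20        # hypothetical study sizes
true_d_mean = 0.5                              # population mean of the detection parameter

avg_est, pooled_est = [], []
for _ in range(n_sims):
    d = np.clip(rng.normal(true_d_mean, 0.15, n_part), 0.05, 0.95)  # person-level detection
    g = rng.uniform(0.1, 0.6, n_part)                               # person-level guessing
    hit_rate = rng.binomial(n_trials, d + (1 - d) * g) / n_trials   # old-item trials
    fa_rate = rng.binomial(n_trials, g) / n_trials                  # new-item trials
    # averaging: estimate d for each person, then average the estimates
    d_i = (hit_rate - fa_rate) / np.clip(1 - fa_rate, 0.5 / n_trials, None)
    avg_est.append(d_i.mean())
    # pooling: estimate d once from the group-level (pooled) rates
    pooled_est.append((hit_rate.mean() - fa_rate.mean()) / (1 - fa_rate.mean()))

print("true mean d       :", true_d_mean)
print("averaged estimates:", round(float(np.mean(avg_est)), 3))
print("pooled estimate   :", round(float(np.mean(pooled_est)), 3))
```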
186.
Bayesian estimation of a multilevel IRT model using Gibbs sampling   (total citations: 3; self-citations: 0; citations by others: 3)
In this article, a two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that it offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and measurement error. Another advantage is that, contrary to observed scores, latent scores are test-independent, which offers the possibility of using results from different tests in one analysis where the parameters of the IRT model and the multilevel model can be concurrently estimated. The two-parameter normal ogive model is used for the IRT measurement model. It will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. Examples using simulated and real data are given.
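A compressed sketch of the data-augmentation Gibbs sampler for the two-parameter normal ogive with a normal regression prior on ability. It simplifies the article's model considerably: vague priors, no positivity constraint on the discriminations, the ability scale identified only loosely through the prior, and a single person-level regression in place of the full two-level structure, so treat it purely as an illustration of the augmentation steps.

```python
# Compressed sketch (simplified relative to the article: vague priors, no positivity
# constraint on discriminations, the ability scale identified only loosely through
# the prior, and a single person-level regression in place of the two-level model).
# Data augmentation for the two-parameter normal ogive, following Albert (1992).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
N, J = 500, 10                                   # persons, items (hypothetical sizes)

# --- simulate data from the model -----------------------------------------
x = rng.normal(size=N)                           # person-level covariate
beta_true = np.array([0.0, 0.5])                 # ability regression: intercept, slope
theta_true = beta_true[0] + beta_true[1] * x + rng.normal(0, 0.8, N)
a_true = rng.uniform(0.8, 1.6, J)                # discriminations
b_true = rng.uniform(-1.0, 1.0, J)               # difficulties
y = (rng.normal(size=(N, J)) < a_true * theta_true[:, None] - b_true).astype(int)

# --- Gibbs sampler ---------------------------------------------------------
theta = np.zeros(N); a = np.ones(J); b = np.zeros(J)
beta = np.zeros(2); sigma2 = 1.0
W = np.column_stack([np.ones(N), x])             # person-level design matrix
draws = []
for it in range(1000):
    # 1. latent responses Z_ij ~ N(a_j*theta_i - b_j, 1), truncated by the response
    m = a * theta[:, None] - b
    lo = np.where(y == 1, -m, -np.inf)           # standardized truncation bounds
    hi = np.where(y == 1, np.inf, -m)
    Z = truncnorm.rvs(lo, hi, loc=m, scale=1.0, random_state=rng)
    # 2. abilities: prior N(W beta, sigma2), likelihood Z_ij + b_j = a_j*theta_i + e
    prec = 1.0 / sigma2 + np.sum(a**2)
    mean = (W @ beta / sigma2 + (Z + b) @ a) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # 3. item parameters: regression of Z_j on (theta, -1) with a flat prior
    X = np.column_stack([theta, -np.ones(N)])
    V = np.linalg.inv(X.T @ X)
    for j in range(J):
        a[j], b[j] = rng.multivariate_normal(V @ X.T @ Z[:, j], V)
    # 4. regression of theta on W (flat prior on beta) and the residual variance
    Vb = np.linalg.inv(W.T @ W)
    beta = rng.multivariate_normal(Vb @ W.T @ theta, sigma2 * Vb)
    resid = theta - W @ beta
    sigma2 = 1.0 / rng.gamma(N / 2.0, 2.0 / np.sum(resid**2))
    if it >= 500:                                # keep the second half as posterior draws
        draws.append(np.concatenate([beta, [sigma2]]))

print("posterior means (beta0, beta1, sigma^2):", np.round(np.mean(draws, axis=0), 2))
```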
187.
Despite their widespread use, many self‐report mood scales have very limited normative data. To rectify this, Crawford et al. have recently provided percentile norms for a series of self‐report scales. The present study aimed to extend the work of Crawford et al. by providing percentile norms for additional mood scales based on samples drawn from the general Australian adult population. Participants completed a series of self‐report mood scales. The resultant normative data were incorporated into a computer programme that provides point and interval estimates of the percentile ranks corresponding to raw scores for each of the scales. The programme can be used to obtain point and interval estimates of the percentile ranks of an individual's raw scores on the Beck Anxiety Inventory, the Beck Depression Inventory, the Carroll Rating Scale for Depression, the Centre for Epidemiological Studies Rating Scale for Depression, the Depression, Anxiety, and Stress Scales (DASS), the short‐form version of the DASS (DASS‐21), the Self‐rating Scale for Anxiety, the Self‐rating Scale for Depression, the State–Trait Anxiety Inventory (STAI), form X, and the STAI, form Y, based on normative sample sizes ranging from 497 to 769. The interval estimates can be obtained using either classical or Bayesian methods as preferred. The programme (which can be downloaded at http://www.abdn.ac.uk/~psy086/dept/MoodScore_Aus.htm) provides a convenient and reliable means of obtaining the percentile ranks of individuals' raw scores on self‐report mood scales.
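A small sketch of the percentile-rank computation (not the authors' programme, and not their exact classical or Bayesian interval methods): the point estimate uses the mid-rank convention, and the interval is a simple Jeffreys-style credible interval for the population proportion scoring at or below the raw score; the normative scores are fabricated.

```python
# Sketch only (not the authors' programme or their exact classical/Bayesian methods):
# percentile rank of a raw score within a normative sample, with a simple Bayesian
# credible interval for the underlying population proportion. The normative scores
# below are made-up illustration values.
import numpy as np
from scipy.stats import beta

def percentile_rank(norm_scores, raw, ci=0.95):
    norm_scores = np.asarray(norm_scores)
    n = norm_scores.size
    below = np.sum(norm_scores < raw)
    equal = np.sum(norm_scores == raw)
    point = 100.0 * (below + 0.5 * equal) / n            # mid-rank percentile
    # Jeffreys-style beta posterior for the proportion falling below the raw score
    post = beta(below + 0.5 * equal + 0.5, n - below - 0.5 * equal + 0.5)
    lo, hi = 100.0 * post.ppf((1 - ci) / 2), 100.0 * post.ppf(1 - (1 - ci) / 2)
    return point, (lo, hi)

norms = np.random.default_rng(0).integers(0, 40, size=500)   # fake normative sample
print(percentile_rank(norms, raw=25))
```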
188.
D. A. Lagnado & D. R. Shanks, Cognition, 2002, 83(1): 81-112
Why are people's judgments incoherent under probability formats? Research in an associative learning paradigm suggests that after structured learning participants give judgments based on predictiveness rather than normative probability. This is because people's learning mechanisms attune to statistical contingencies in the environment, and they use these learned associations as a basis for subsequent probability judgments. We introduced a hierarchical structure into a simulated medical diagnosis task, setting up a conflict between predictiveness and coherence. Thus, a target symptom was more predictive of a subordinate disease than of its superordinate category, even though the latter included the former. Under a probability format participants tended to violate coherence and make ratings in line with predictiveness; under a frequency format they were more normative. These results are difficult to explain within a unitary model of inference, whether associative or frequency-based. In the light of this, and other findings in the judgment and learning literature, a dual-component model is proposed.
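For concreteness, the coherence constraint that the probability-format judgments violated: because the subordinate disease D is included in its superordinate category C, the probability calculus requires

\[ P(C \mid s) \;=\; P(D \mid s) + P(C \setminus D \mid s) \;\ge\; P(D \mid s), \]

so no degree of predictiveness of the symptom s for D specifically can coherently make D more probable than C given s.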
189.
When natural language input contains grammatical forms that are used probabilistically and inconsistently, learners will sometimes reproduce the inconsistencies; but sometimes they will instead regularize the use of these forms, introducing a consistency into the language that was not present in the input. In this paper we ask what produces such regularization. We conducted three artificial language experiments, varying the types of inconsistency with which determiners were used and comparing adult and child learners. In Experiment 1 we presented adult learners with scattered inconsistency (the use of multiple determiners varying in frequency in the same context) and found that adults will reproduce these inconsistencies at low levels of scatter, but at very high levels of scatter will regularize the determiner system, producing the most frequent determiner form almost all the time. In Experiment 2 we showed that this is not merely the result of frequency: when determiners are used with low frequencies but in consistent contexts, adults will learn all of the determiners veridically. In Experiment 3 we compared adult and child learners, finding that children will almost always regularize inconsistent forms, whereas adult learners will only regularize the most complex inconsistencies. Taken together, these results suggest that regularization processes in natural language learning, such as those seen in the acquisition of language from non-native speakers or in the formation of young languages, may depend crucially on the nature of language learning by young children.
190.
A good representation can be crucial for finding the solution to a problem. Gigerenzer and Hoffrage (Psychol. Rev. 102 (1995) 684; Psychol. Rev. 106 (1999) 425) have shown that representations in terms of natural frequencies, rather than conditional probabilities, facilitate the computation of a cause's probability (or frequency) given an effect, a problem that is usually referred to as Bayesian reasoning. They have also shown that normalized frequencies, which are not natural frequencies, do not lead to computational facilitation and, consequently, do not enhance people's performance. Here, we correct two misconceptions propagated in recent work (Cognition 77 (2000) 197; Cognition 78 (2001) 247; Psychol. Rev. 106 (1999) 62; Organ. Behav. Hum. Decision Process. 82 (2000) 217): normalized frequencies have been mistaken for natural frequencies and, as a consequence, "nested sets" and the "subset principle" have been proposed as new explanations. These new terms, however, are nothing more than vague labels for the basic properties of natural frequencies.
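A worked contrast, using stock illustrative numbers of the kind common in this literature (not necessarily the article's own): out of 1,000 people, 10 have the disease; 8 of those 10 test positive, and 95 of the 990 without the disease also test positive. In natural frequencies the posterior is read off directly,

\[ P(\text{disease} \mid \text{positive}) = \frac{8}{8 + 95} \approx 0.08, \]

whereas the probability (or normalized-frequency) format requires Bayes' rule,

\[ P(D \mid +) = \frac{P(+\mid D)\,P(D)}{P(+\mid D)\,P(D) + P(+\mid \neg D)\,P(\neg D)} = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99} \approx 0.08. \]

Normalized frequencies ("0.8 of those with the disease test positive") carry exactly the same numerical structure as the probability format, which is why they yield no computational facilitation.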