12.
Fanconi anemia (FA) is the most common of the inherited bone marrow failure syndromes, with an incidence of approximately 1/100,000 to 1/200,000 live births. FA is a genetically complex and phenotypically heterogeneous condition involving birth defects, bone marrow failure, and cancer predisposition. This rare disease became well known in the genetic counseling community in 2002, when biallelic mutations in BRCA2 were identified as a cause of FA. Knowledge gained from the growing association between FA and breast cancer pathways has further illuminated the complex genetic issues that arise when counseling families affected by this disease. Genetic counseling issues surrounding a diagnosis of FA cut across many disciplines. This review cross-links the topics important to genetic counselors that arise throughout the life of a patient with FA, including: an overview of FA, phenotypic presentation, management and treatment, the genetics and inheritance of FA, cytogenetic and molecular testing options, and the risks to family members of an individual with FA.
13.
Models of intertemporal choice draw on three evaluation rules, which we compare in the restricted domain of choices between smaller sooner and larger later monetary outcomes. The hyperbolic discounting model proposes an alternative-based rule, in which options are evaluated separately. The interval discounting model proposes a hybrid rule, in which the outcomes are evaluated separately, but the delays to those outcomes are evaluated in comparison with one another. The tradeoff model proposes an attribute-based rule, in which both outcomes and delays are evaluated in comparison with one another: people consider both the intervals between the outcomes and the compensations received or paid over those intervals. We compare highly general parametric functional forms of these models by means of a Bayesian analysis, a method of analysis not previously used in intertemporal choice. We find that the hyperbolic discounting model is outperformed by the interval discounting model, which, in turn, is outperformed by the tradeoff model. Our cognitive modeling is among the first to offer quantitative evidence against the conventional view that people make intertemporal choices by discounting the value of future outcomes, and in favor of the view that they directly compare options along the time and outcome attributes.
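To make the contrast between evaluation rules concrete, here is a minimal Python sketch of an alternative-based rule (hyperbolic discounting) next to an attribute-based rule in the spirit of the tradeoff model. The functional forms and parameter values (the discount rate `k`, the log weighting functions) are illustrative assumptions, not the highly general parametric forms compared in the paper.

```python
# Contrast of an alternative-based rule (each option valued separately)
# with an attribute-based rule (options compared along each attribute),
# for a choice between a smaller-sooner (SS) and larger-later (LL) outcome.
import math

def hyperbolic_value(amount, delay, k=0.05):
    """Alternative-based rule: each option is discounted on its own."""
    return amount / (1.0 + k * delay)

def choose_hyperbolic(ss, ll, k=0.05):
    """Pick the option with the higher separately computed value.
    ss and ll are (amount, delay) pairs."""
    return "LL" if hyperbolic_value(*ll, k) > hyperbolic_value(*ss, k) else "SS"

def choose_tradeoff(ss, ll, time_weight=1.0, value_weight=1.0):
    """Attribute-based rule: weigh LL's advantage on the outcome attribute
    against SS's advantage on the time attribute. Log weighting here is a
    simple stand-in for the model's more general weighting functions."""
    (x_s, t_s), (x_l, t_l) = ss, ll
    outcome_advantage = value_weight * (math.log(x_l) - math.log(x_s))
    time_cost = time_weight * (math.log(1 + t_l) - math.log(1 + t_s))
    return "LL" if outcome_advantage > time_cost else "SS"

ss, ll = (100, 0), (120, 30)   # $100 now vs. $120 in 30 days
print(choose_hyperbolic(ss, ll), choose_tradeoff(ss, ll))
```

The paper's Bayesian analysis fits such functional forms to observed choices and compares the models on marginal likelihood; this sketch only shows where the two kinds of rule derive their predictions from.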
14.
Established psychological results have been called into question by demonstrations that statistical significance is easy to achieve, even in the absence of an effect. One often-warned-against practice, choosing when to stop the experiment on the basis of the results, is guaranteed to produce significant results. In response to these demonstrations, Bayes factors have been proposed as an antidote to this practice, because they are invariant with respect to how an experiment was stopped. Should researchers only care about the resulting Bayes factor, without concern for how it was produced? Yu, Sprenger, Thomas, and Dougherty (2014) and Sanborn and Hills (2014) demonstrated that Bayes factors are sometimes strongly influenced by the stopping rules used. However, Rouder (2014) has provided a compelling demonstration that despite this influence, the evidence supplied by Bayes factors remains correct. Here we address why the ability to influence Bayes factors should still matter to researchers, despite the correctness of the evidence. We argue that good frequentist properties mean that results will more often agree with researchers' statistical intuitions, and good frequentist properties control the number of studies that will later be refuted. Both help raise confidence in psychological results.
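The often-warned-against practice is easy to demonstrate by simulation. The sketch below (an assumed setup, not taken from any of the cited papers) tests after every batch of subjects under a true null and stops at the first p < .05, which inflates the false-positive rate well above the nominal level.

```python
# Simulation of optional stopping under NHST: test after each batch and
# stop as soon as p < .05. The null is true throughout, yet the
# false-positive rate climbs far above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def optional_stopping_significant(max_n=100, batch=10, alpha=0.05):
    data = []
    while len(data) < max_n:
        data.extend(rng.standard_normal(batch))  # null: true mean is 0
        _, p = stats.ttest_1samp(data, popmean=0.0)
        if p < alpha:
            return True   # researcher stops and reports significance
    return False

n_sims = 2000
rate = np.mean([optional_stopping_significant() for _ in range(n_sims)])
print(f"False-positive rate with optional stopping: {rate:.3f}")  # roughly 0.15-0.2
```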
15.
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills, because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, in these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
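Situation (1) can be illustrated with a hedged sketch: the model below compares a point null (mu = 0, known sigma = 1) against a composite alternative (mu drawn from N(0, tau^2)), while the true data come from a heterogeneous population, a 50/50 mixture of mu = +0.5 and mu = -0.5. The prior width, evidence threshold, and mixture are all assumptions chosen for illustration; the usual frequentist bound on Bayes factors need not apply because neither hypothesis is exactly true.

```python
# Optional stopping on a Bayes factor when the truth is a mixture of the
# hypotheses. Data are collected one observation at a time, stopping as
# soon as BF10 crosses an evidence threshold "for H1".
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def bf10(xbar, n, tau=1.0):
    """Closed-form Bayes factor for H1: mu ~ N(0, tau^2) vs H0: mu = 0,
    with known observation variance 1 (the sample mean is sufficient)."""
    sd0 = np.sqrt(1.0 / n)
    sd1 = np.sqrt(tau**2 + 1.0 / n)
    return stats.norm.pdf(xbar, 0.0, sd1) / stats.norm.pdf(xbar, 0.0, sd0)

def run_until_evidence(threshold=3.0, max_n=500):
    """Stop when BF10 crosses the threshold or the budget runs out."""
    total, n = 0.0, 0
    while n < max_n:
        mu = rng.choice([0.5, -0.5])       # heterogeneous population
        total += rng.normal(mu, 1.0)
        n += 1
        if bf10(total / n, n) >= threshold:
            return True
    return False

hits = np.mean([run_until_evidence() for _ in range(1000)])
print(f"P(stop with BF10 >= 3): {hits:.3f}")
```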
16.
Different levels of analysis provide different insights into behavior: computational-level analyses determine the problem an organism must solve and algorithmic-level analyses determine the mechanisms that drive behavior. However, many attempts to model behavior are pitched at a single level of analysis. Research into human and animal learning provides a prime example, with some researchers using computational-level models to understand the sensitivity organisms display to environmental statistics but other researchers using algorithmic-level models to understand organisms' trial order effects, including effects of primacy and recency. Recently, attempts have been made to bridge these two levels of analysis. Locally Bayesian Learning (LBL) creates a bridge by taking a view inspired by evolutionary psychology: our minds are composed of modules that are each individually Bayesian but communicate with restricted messages. A different inspiration comes from computer science and statistics: our brains are implementing the algorithms developed for approximating complex probability distributions. By developing a computational justification for LBL, we show that these different inspirations for how to bridge levels of analysis are not necessarily in conflict. We demonstrate that a scheme that maximizes computational fidelity while using a restricted factorized representation produces the trial order effects that motivated the development of LBL. This scheme uses the same modular motivation as LBL, passing messages about the attended cues between modules, but does not use the rapid shifts of attention considered key for the LBL approximation. This work illustrates a new way of tying together psychological and computational constraints.
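The order-dependence of restricted factorized representations can be illustrated independently of LBL's specifics. The sketch below (an illustrative discrete-weight model, not the paper's implementation) updates an exact joint posterior and a factorized one on the same trials: the exact marginals are invariant to trial order, while the sequentially updated factorized marginals are not, producing trial order effects.

```python
# Exact joint Bayesian updating vs. a restricted factorized representation
# (one marginal per cue module, with messages passed between modules).
import numpy as np

W = np.array([-1.0, 0.0, 1.0])             # candidate weight values per cue

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lik(x, y):
    """3x3 likelihood table over (w1, w2) for one trial with cues x, outcome y."""
    p = sigmoid(W[:, None] * x[0] + W[None, :] * x[1])
    return p if y == 1 else 1.0 - p

def exact_posterior(trials):
    post = np.ones((3, 3)) / 9.0            # uniform prior over the joint
    for x, y in trials:
        post *= lik(x, y)                    # order of products is irrelevant
    return post / post.sum()

def factorized(trials):
    q1 = np.ones(3) / 3.0                    # module 1: marginal over w1 only
    q2 = np.ones(3) / 3.0                    # module 2: marginal over w2 only
    for x, y in trials:
        L = lik(x, y)
        q1 = q1 * (L @ q2); q1 /= q1.sum()   # message from module 2
        q2 = q2 * (q1 @ L); q2 /= q2.sum()   # message from module 1
    return q1, q2

trials = [((1, 1), 1), ((1, 0), 0), ((0, 1), 1)]
for order in (trials, trials[::-1]):         # same trials, two orders
    q1, q2 = factorized(order)
    print("factorized q(w1):", q1.round(3), " q(w2):", q2.round(3))
post = exact_posterior(trials)
print("exact marginal over w1:", post.sum(axis=1).round(3))  # order-invariant
```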
17.
A key challenge for cognitive psychology is the investigation of mental representations, such as object categories, subjective probabilities, choice utilities, and memory traces. In many cases, these representations can be expressed as a non-negative function defined over a set of objects. We present a behavioral method for estimating these functions. Our approach uses people as components of a Markov chain Monte Carlo (MCMC) algorithm, a sophisticated sampling method originally developed in statistical physics. Experiments 1 and 2 verified the MCMC method by training participants on various category structures and then recovering those structures. Experiment 3 demonstrated that the MCMC method can be used to estimate the structures of the real-world animal shape categories of giraffes, horses, dogs, and cats. Experiment 4 combined the MCMC method with multidimensional scaling to demonstrate how different accounts of the structure of categories, such as prototype and exemplar models, can be tested, producing samples from the categories of apples, oranges, and grapes.
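The logic of the method can be sketched with a simulated participant. Assuming, as a stand-in for real behavior, that two-alternative choices follow a Barker/Luce ratio rule with respect to an internal category function f, using those choices as the acceptance step of a Markov chain yields a stationary distribution proportional to f. Everything below (the one-dimensional stimulus, the Gaussian f, the proposal width) is a hypothetical setup, not the experiments' actual stimulus spaces.

```python
# MCMC with a simulated participant: the participant's choice between the
# current stimulus and a proposed stimulus serves as the acceptance step.
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    """Simulated participant's internal category function over a
    one-dimensional stimulus parameter; 'category' centered at x = 1."""
    return np.exp(-0.5 * ((x - 1.0) / 0.5) ** 2)

def simulated_choice(current, proposal):
    """Barker acceptance via the Luce choice rule: pick the proposal with
    probability f(proposal) / (f(proposal) + f(current))."""
    p = f(proposal) / (f(proposal) + f(current))
    return proposal if rng.random() < p else current

def mcmc_with_people(n_trials=20000, proposal_sd=0.5):
    x, samples = 0.0, []
    for _ in range(n_trials):
        x = simulated_choice(x, x + rng.normal(0.0, proposal_sd))
        samples.append(x)
    return np.array(samples)

samples = mcmc_with_people()
# After burn-in, the sample mean and sd approximate the category's
# center (1.0) and width (0.5).
print(samples[1000:].mean().round(2), samples[1000:].std().round(2))
```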