185 results found (search time: 15 ms)
81.
Antti Kangasrääsiö, Jussi P. P. Jokinen, Antti Oulasvirta, Andrew Howes, Samuel Kaski. Cognitive Science, 2019, 43(6)
This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional parameter fitting methods. Weak methodology may lead to premature rejection of valid models or to acceptance of models that might otherwise be falsified. Mathematically robust fitting methods are, therefore, essential to the progress of computational modeling in cognitive science. In this article, we investigate the capability and role of modern fitting methods—including Bayesian optimization and approximate Bayesian computation—and contrast them to some more commonly used methods: grid search and Nelder–Mead optimization. Our investigation consists of a reanalysis of the fitting of two previous computational models: an Adaptive Control of Thought—Rational model of skill acquisition and a computational rationality model of visual search. The results contrast the efficiency and informativeness of the methods. A key advantage of the Bayesian methods is the ability to estimate the uncertainty of fitted parameter values. We conclude that approximate Bayesian computation is (a) efficient, (b) informative, and (c) offers a path to reproducible results.
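To make the core idea concrete, here is a minimal rejection sketch of approximate Bayesian computation on a toy problem (estimating a coin's bias from its observed success rate). This is an illustration of the general technique the abstract names, not the paper's actual models or fitting pipeline; the prior, summary statistic, and tolerance are all assumptions chosen for the example.

```python
import random
import statistics

def abc_rejection(observed_mean, n_obs, eps, n_draws, seed=0):
    """Rejection ABC: draw a candidate parameter from a Uniform(0, 1)
    prior, simulate a data set of n_obs Bernoulli trials under it, and
    keep the candidate only if the simulated summary statistic (here,
    the mean) lands within eps of the observed one. The accepted draws
    approximate the posterior, so their spread quantifies parameter
    uncertainty -- the key advantage the abstract highlights."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.random()  # draw from the prior
        sim = [1 if rng.random() < theta else 0 for _ in range(n_obs)]
        if abs(statistics.mean(sim) - observed_mean) < eps:
            accepted.append(theta)
    return accepted

# Observed success rate of 0.7 over 100 trials (toy data)
posterior = abc_rejection(observed_mean=0.7, n_obs=100, eps=0.05, n_draws=2000)
```

With these settings the accepted draws cluster around 0.7, and their standard deviation serves as a direct uncertainty estimate, which grid search and Nelder–Mead do not provide.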
82.
A Brief Analysis of Statistical Awareness in the Instructional Design of Psychological Statistics (cited by 9: 0 self-citations, 9 by others)
From the perspective of instructional design, this paper offers a reasoned analysis of two points: (1) the teaching of psychological experiment design and the psychological statistics course should be progressively integrated; and (2) given the distinctive nature of the discipline, statistics itself must be re-examined.
83.
Most psychological theories treat the features of objects as being fixed and immediately available to observers. However, novel objects have an infinite array of properties that could potentially be encoded as features, raising the question of how people learn which features to use in representing those objects. We focus on the effects of distributional information on feature learning, considering how a rational agent should use statistical information about the properties of objects in identifying features. Inspired by previous behavioral results on human feature learning, we present an ideal observer model based on nonparametric Bayesian statistics. This model balances the idea that objects have potentially infinitely many features with the goal of using a relatively small number of features to represent any finite set of objects. We then explore the predictions of this ideal observer model. In particular, we investigate whether people are sensitive to how parts co-vary over objects they observe. In a series of four behavioral experiments (three using visual stimuli, one using conceptual stimuli), we demonstrate that people infer different features to represent the same four objects depending on the distribution of parts over the objects they observe. Additionally, in all four experiments, the features people infer have consequences for how they generalize properties to novel objects. We also show that simple models that use the raw sensory data as inputs and standard dimensionality reduction techniques (principal component analysis and independent component analysis) are insufficient to explain our results.
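The balance the abstract describes, between potentially infinitely many features and finitely many actually used, is exactly what the Indian Buffet Process prior formalizes in nonparametric Bayesian statistics. The sketch below samples an object-by-feature matrix from that prior; note that this illustrates the general model family, and whether it matches the paper's exact construction is an assumption.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm for a Poisson draw (the stdlib has no sampler)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def sample_ibp(n_objects, alpha, seed=0):
    """Sample a binary object-by-feature matrix from the Indian Buffet
    Process: object i reuses an existing feature k with probability
    (count_k / i) and introduces Poisson(alpha / i) brand-new features.
    Infinitely many features are possible in principle, but any finite
    set of objects ends up represented by finitely many."""
    rng = random.Random(seed)
    counts = []  # how many objects so far possess each feature
    rows = []
    for i in range(1, n_objects + 1):
        row = []
        for k in range(len(counts)):
            take = rng.random() < counts[k] / i
            row.append(1 if take else 0)
            if take:
                counts[k] += 1
        for _ in range(poisson(rng, alpha / i)):  # brand-new features
            counts.append(1)
            row.append(1)
        rows.append(row)
    width = len(counts)
    return [r + [0] * (width - len(r)) for r in rows]  # pad earlier rows

Z = sample_ibp(n_objects=6, alpha=2.0)
```

Features shared by many objects are preferentially reused (a rich-get-richer dynamic), which is one way a rational learner can become sensitive to how parts co-vary across objects.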
84.
This paper investigates the classification of shapes into broad natural categories such as animal or leaf. We asked whether such coarse classifications can be achieved by a simple statistical classification of the shape skeleton. We surveyed databases of natural shapes, extracting shape skeletons and tabulating their parameters within each class, seeking shape statistics that effectively discriminated the classes. We conducted two experiments in which human subjects were asked to classify novel shapes into the same natural classes. We compared subjects’ classifications to those of a naive Bayesian classifier based on the natural shape statistics, and found good agreement. We conclude that human superordinate shape classifications can be well understood as involving a simple statistical classification of the shape skeleton that has been “tuned” to the natural statistics of shape.
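A naive Bayes classifier of the kind the abstract describes can be sketched in a few lines. The skeleton statistics below (branch count, mean branch length) and their values are hypothetical placeholders, not the paper's measured parameters; the point is only to show how per-class Gaussian densities over such statistics yield a coarse animal-vs-leaf decision.

```python
import math
import statistics

def fit_gaussian_nb(X, y):
    """Fit per-class Gaussian densities, one per feature dimension."""
    grouped = {}
    for xi, yi in zip(X, y):
        grouped.setdefault(yi, []).append(xi)
    params = {}
    for label, rows in grouped.items():
        cols = list(zip(*rows))
        params[label] = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return params

def classify(params, x):
    """Pick the class maximizing the log-likelihood (equal class priors)."""
    def log_pdf(v, mu, sd):
        return -math.log(sd * math.sqrt(2 * math.pi)) - (v - mu) ** 2 / (2 * sd ** 2)
    scores = {
        label: sum(log_pdf(v, mu, sd) for v, (mu, sd) in zip(x, ps))
        for label, ps in params.items()
    }
    return max(scores, key=scores.get)

# Hypothetical skeleton statistics: (number of skeletal branches, mean branch length)
train_X = [(9, 1.2), (11, 1.0), (10, 1.4), (3, 4.8), (4, 5.2), (3, 5.5)]
train_y = ["animal", "animal", "animal", "leaf", "leaf", "leaf"]
model = fit_gaussian_nb(train_X, train_y)
```

Because the classifier is "naive", each skeleton statistic contributes an independent likelihood term, which is what makes such a simple model a plausible account of fast superordinate classification.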
85.
86.
Andrew R. Craig, Wayne W. Fisher. Journal of the Experimental Analysis of Behavior, 2019, 111(2): 309-328
Randomization statistics offer alternatives to many of the statistical methods commonly used in behavior analysis and the psychological sciences, more generally. These methods are more flexible than conventional parametric and nonparametric statistical techniques in that they make no assumptions about the underlying distribution of outcome variables, are relatively robust when applied to small‐n data sets, and are generally applicable to between‐groups, within‐subjects, mixed, and single‐case research designs. In the present article, we first will provide a historical overview of randomization methods. Next, we will discuss the properties of randomization statistics that may make them particularly well suited for analysis of behavior‐analytic data. We will introduce readers to the major assumptions that undergird randomization methods, as well as some practical and computational considerations for their application. Finally, we will demonstrate how randomization statistics may be calculated for mixed and single‐case research designs. Throughout, we will direct readers toward resources that they may find useful in developing randomization tests for their own data.
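The simplest member of this family is the two-sample permutation test, sketched below for a between-groups design. The data are invented for illustration; the logic (shuffle group labels, recompute the statistic, count how often the shuffled statistic is at least as extreme as the observed one) is the standard randomization-test recipe, not a procedure taken from this article.

```python
import random
import statistics

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample randomization test on the difference of means.
    Under the null hypothesis the group labels are exchangeable, so
    reshuffling the pooled data generates the reference distribution
    of the statistic -- no distributional assumptions required."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Toy between-groups data with a clear difference
p = permutation_test([10, 11, 12, 13, 14], [1, 2, 3, 4, 5])
```

The same resampling idea extends to within-subjects and single-case designs by restricting the shuffles to the randomization actually used in the experiment (e.g., permuting phase start points rather than individual observations).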
87.
Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints move parameters toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, articles published in JEAB and other behavioral journals have yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why articles published in JEAB do not use them. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching‐law‐based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
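Shrinkage toward a fixed point is easiest to see in ridge regression, one of the common schemes such reviews cover (this sketch is generic ridge, not the authors' WIG estimator). For simple regression on centered data the OLS slope is Sxy / Sxx, and the L2-penalized slope just divides by Sxx + λ, pulling the estimate toward zero as λ grows.

```python
def slopes(xs, ys, lam):
    """Closed-form slopes for simple regression on centered data:
    OLS slope = Sxy / Sxx; ridge slope = Sxy / (Sxx + lam), which
    shrinks toward the fixed point 0 as the penalty lam increases."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx, sxy / (sxx + lam)

# Toy data with a positive trend
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 1.9, 3.3, 3.8, 5.1]
ols, ridge = slopes(xs, ys, lam=2.0)
```

The WIG idea differs in kind: rather than penalizing coefficients, it mixes each individual's data with group-level data before fitting, so the shrinkage target is the group rather than zero.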
88.
Maria-Pia Victoria-Feser. Psychometrika, 2002, 67(1): 21-32
In this paper, robustness properties of the maximum likelihood estimator (MLE) and several robust estimators for the logistic regression model with binary responses are analysed. It is found that the MLE and the classical Rao's score test can be misleading in the presence of model misspecification, which in the context of logistic regression means either misclassification errors in the responses or extreme data points in the design space. A general framework for robust estimation and testing is presented, and a robust estimator as well as a robust testing procedure are proposed. It is shown that they are less influenced by model misspecifications than their classical counterparts. They are finally applied to the analysis of binary data from a study on breastfeeding. The author is partially supported by the Swiss National Science Foundation. She would like to thank Rand Wilcox, Eva Cantoni and Elvezio Ronchetti for their helpful comments on earlier versions of the paper, as well as Stephane Heritier for providing the routine to compute the OBRE.
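The MLE's sensitivity to a single misclassified response can be demonstrated directly. The sketch below fits one-predictor logistic regression by gradient ascent on toy data, then flips the label of the most extreme design point; the fitted slope shrinks noticeably. This only illustrates the fragility the paper diagnoses; it does not implement the paper's robust estimator (the OBRE).

```python
import math

def fit_logistic(xs, ys, lr=0.05, iters=20000):
    """Maximum-likelihood logistic regression (one predictor plus an
    intercept), fitted by plain gradient ascent on the concave
    log-likelihood."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p          # gradient w.r.t. the intercept
            gb += (y - p) * x    # gradient w.r.t. the slope
        a += lr * ga
        b += lr * gb
    return a, b

xs = [-3, -2, -1, 0, 1, 2, 3]
y_clean = [0, 0, 0, 1, 0, 1, 1]  # non-separable data with a positive trend
y_noisy = [0, 0, 0, 1, 0, 1, 0]  # the response at x = 3 is misclassified
_, b_clean = fit_logistic(xs, y_clean)
_, b_noisy = fit_logistic(xs, y_noisy)
```

A robust estimator in the spirit of the paper would instead down-weight observations with outlying design points or surprising responses, so that one flipped label cannot move the fit this much.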
89.
This paper proposes a general approach to accounting for individual differences in the extreme response style in statistical models for ordered response categories. This approach uses a hierarchical ordinal regression modeling framework with heterogeneous thresholds structures to account for individual differences in the response style. Markov chain Monte Carlo algorithms for Bayesian inference for models with heterogeneous thresholds structures are discussed in detail. A simulation and two examples based on ordinal probit models are given to illustrate the proposed methodology. The simulation and examples also demonstrate that failing to account for individual differences in the extreme response style can have adverse consequences for statistical inferences. The author is grateful to Ulf Böckenholt, an associate editor, and three anonymous reviewers for helpful comments, and to Kristine Kuhn and Kshiti Joshi for providing the data.
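The MCMC machinery such models rely on can be illustrated with a minimal random-walk Metropolis sampler. To keep it self-contained, the target here is the posterior of a normal mean with known variance, a stand-in assumption, not the paper's hierarchical ordinal probit model, whose threshold parameters would be sampled by the same accept/reject logic.

```python
import math
import random
import statistics

def metropolis(log_post, init, prop_sd, n_steps, seed=0):
    """Random-walk Metropolis: propose theta' ~ Normal(theta, prop_sd)
    and accept with probability min(1, post(theta') / post(theta)),
    computed on the log scale for numerical stability."""
    rng = random.Random(seed)
    theta = init
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, prop_sd)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

data = [4.1, 5.3, 4.8, 5.6, 4.9, 5.2]  # toy observations

def log_post(mu):
    """Normal(mu, 1) likelihood with a diffuse Normal(0, 10) prior."""
    return -mu * mu / 200.0 - sum((x - mu) ** 2 for x in data) / 2.0

chain = metropolis(log_post, init=0.0, prop_sd=0.8, n_steps=5000)
estimate = statistics.mean(chain[1000:])  # discard burn-in
```

In the hierarchical setting, each respondent's threshold vector gets its own such update step within a Gibbs-style sweep, which is what lets the model absorb person-specific extreme response styles.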
90.