Similar documents
 20 similar documents found (search time: 15 ms)
1.
2.
3.
4.
Replenishing item pools for on-line ability testing requires innovative and efficient data collection designs. By generating local D-optimal designs for selecting individual examinees, and consistently estimating item parameters in the presence of error in the design points, sequential procedures are efficient for on-line item calibration. The estimating error in the on-line ability values is accounted for with an item parameter estimate studied by Stefanski and Carroll. Locally D-optimal n-point designs are derived using the branch-and-bound algorithm of Welch. In simulations, the overall sequential designs appear to be considerably more efficient than random seeding of items. This report was prepared under the Navy Manpower, Personnel, and Training R&D Program of the Office of the Chief of Naval Research under Contract N00014-87-0696. The authors wish to acknowledge the valuable advice and consultation given by Ronald Armstrong, Charles Davis, Bradford Sympson, Zhaobo Wang, Ing-Long Wu and three anonymous reviewers.
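The locally D-optimal criterion described above can be illustrated with a small sketch. Assuming a 2PL item with fixed parameters, the code below brute-forces the two-point ability design that maximizes the determinant of the item's Fisher information matrix over a candidate grid (an exhaustive stand-in for the Welch branch-and-bound search; the item parameters and grid are hypothetical).

```python
import itertools
import math

def p2pl(theta, a, b):
    # 2PL response probability
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_matrix(design, a, b):
    # 2x2 Fisher information for (a, b), summed over design points
    m = [[0.0, 0.0], [0.0, 0.0]]
    for th in design:
        p = p2pl(th, a, b)
        w = p * (1.0 - p)
        g = [th - b, -a]  # gradient of the logit a*(theta - b) w.r.t. (a, b)
        for i in range(2):
            for j in range(2):
                m[i][j] += w * g[i] * g[j]
    return m

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# hypothetical item; exhaustive search over all two-point designs on a grid
a, b = 1.2, 0.0
grid = [x / 10.0 for x in range(-40, 41)]
best = max(itertools.combinations(grid, 2),
           key=lambda d: det2(info_matrix(d, a, b)))
```

For a symmetric item the optimal pair straddles the difficulty b, which is the qualitative behavior a D-optimal calibration design should show.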

5.
6.
7.
8.
A Bayesian procedure is given for estimation in unrestricted common factor analysis. A choice of the form of the prior distribution is justified. It is shown empirically that the procedure achieves its objective of avoiding inadmissible estimates of unique variances, and is reasonably insensitive to certain variations in the shape of the prior distribution.

9.

Psychometric functions are typically estimated by fitting a parametric model to categorical subject responses. Procedures to estimate unidimensional psychometric functions (i.e., psychometric curves) have been subjected to the most research, with modern adaptive methods capable of quickly obtaining accurate estimates. These capabilities have been extended to some multidimensional psychometric functions (i.e., psychometric fields) that are easily parameterizable, but flexible procedures for general psychometric field estimation are lacking. This study introduces a nonparametric Bayesian psychometric field estimator operating on subject queries sequentially selected to improve the estimate in some targeted way. This estimator implements probabilistic classification using Gaussian processes trained by active learning. The accuracy and efficiency of two different actively sampled estimators were compared to two non-actively sampled estimators for simulations of one of the simplest psychometric fields in common use: the pure-tone audiogram. The actively sampled methods achieved estimate accuracy equivalent to the non-actively sampled methods with fewer observations. This trend held for a variety of audiogram phenotypes representative of the range of human auditory perception. Gaussian process classification is a general estimation procedure capable of extending to multiple input variables and response classes. Its success with a two-dimensional psychometric field informed by binary subject responses holds great promise for extension to complex perceptual models currently inaccessible to practical estimation.
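The active-learning idea in the abstract above can be sketched in miniature. The toy below replaces full Gaussian process classification with GP regression on ±1-coded detection responses, and sequentially queries the stimulus level where posterior variance is largest; the psychometric function, kernel, and noise level are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_detect(x):
    # toy 1-D slice of a psychometric field: detection probability vs. level
    return 1.0 / (1.0 + np.exp(-(x - 2.0)))

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

grid = np.linspace(-5.0, 5.0, 101)

def gp_posterior(X, y, noise=0.3):
    # GP regression on +/-1-coded responses (a crude stand-in for
    # full Gaussian process classification)
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(grid, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# active learning: always query where the posterior is least constrained
X, y = np.array([0.0]), np.array([1.0])
for _ in range(30):
    _, var = gp_posterior(X, y)
    x_new = grid[np.argmax(var)]
    resp = 1.0 if rng.random() < p_detect(x_new) else -1.0
    X, y = np.append(X, x_new), np.append(y, resp)

mean, var = gp_posterior(X, y)
```

With an RBF kernel the sampler spreads queries toward the least-constrained regions first, which is the qualitative efficiency gain over random sampling that the abstract reports.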


10.
Cumulative prospect theory (CPT; Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and subjective probabilities. In practical applications of CPT, the model’s parameters are usually estimated using a single-participant maximum likelihood approach. The present study shows the advantages of an alternative, hierarchical Bayesian parameter estimation procedure. Performance of the procedure is illustrated with a parameter recovery study and application to a real data set. The work reveals that without particular constraints on the parameter space, CPT can produce loss aversion without the parameter that has traditionally been associated with loss aversion. In general, the results illustrate that inferences about people’s decision processes can crucially depend on the method used to estimate model parameters.
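For readers unfamiliar with CPT's functional forms, a minimal sketch follows. It uses Tversky and Kahneman's (1992) value and weighting functions with their reported median parameter estimates as defaults, but simplifies the full rank-dependent cumulative weighting to per-outcome weighting, which is adequate only for simple two-outcome gambles.

```python
def cpt_value(outcomes, probs, alpha=0.88, beta=0.88, lam=2.25, gamma=0.61):
    # value function: concave for gains, convex and loss-averse for losses
    def v(x):
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)
    # inverse-S probability weighting (applied per outcome: a simplification
    # of the rank-dependent cumulative weighting)
    def w(p):
        return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return sum(w(p) * v(x) for x, p in zip(outcomes, probs))

# a 50/50 gamble over +100 / -100: loss aversion makes it unattractive
mixed = cpt_value([100.0, -100.0], [0.5, 0.5])
sure_gain = cpt_value([100.0], [1.0])
```

The negative value of the mixed gamble comes from the lam parameter, which is exactly the traditional loss-aversion parameter whose necessity the study questions.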

11.
The change detection paradigm has become an important tool for researchers studying working memory. Change detection is especially useful for studying visual working memory, because recall paradigms are difficult to employ in the visual modality. Pashler (Perception & Psychophysics, 44, 369–378, 1988) and Cowan (Behavioral and Brain Sciences, 24, 87–114, 2001) suggested formulas for estimating working memory capacity from change detection data. Although these formulas have become widely used, Morey (Journal of Mathematical Psychology, 55, 8–24, 2011) showed that the formulas suffer from a number of issues, including inefficient use of information, bias, volatility, uninterpretable parameter estimates, and violation of ANOVA assumptions. Morey presented a hierarchical Bayesian extension of Pashler’s and Cowan’s basic models that mitigates these issues. Here, we present WoMMBAT (Working Memory Modeling using Bayesian Analysis Techniques) software for fitting Morey’s model to data. WoMMBAT has a graphical user interface, is freely available, and is cross-platform, running on Windows, Linux, and Mac operating systems.
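Pashler's and Cowan's capacity formulas referenced above are simple enough to state directly; the sketch below implements both, with h the hit rate, f the false-alarm rate, and n the set size (note that Cowan's k = n * (h + cr - 1) reduces to n * (h - f), since the correct-rejection rate cr equals 1 - f).

```python
def cowan_k(h, f, n):
    # Cowan (2001), single-probe displays: k = n * (h + cr - 1), cr = 1 - f
    return n * (h + (1.0 - f) - 1.0)

def pashler_k(h, f, n):
    # Pashler (1988), whole-display probes: k = n * (h - f) / (1 - f)
    return n * (h - f) / (1.0 - f)

k_cowan = cowan_k(0.8, 0.2, 4)      # 4 * 0.6 = 2.4
k_pashler = pashler_k(0.8, 0.2, 4)  # 4 * 0.6 / 0.8 = 3.0
```

The volatility Morey criticizes is easy to see here: as f approaches 1, Pashler's denominator vanishes and the estimate explodes, which is one motivation for the hierarchical Bayesian reformulation.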

12.
Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the linear ballistic accumulator model, which has a known likelihood, and the leaky competing accumulator model, whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics—a feat that was not previously possible.
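The likelihood-free idea of approximating a model's likelihood from raw simulated samples, without summary statistics or tolerance thresholds, can be sketched with a kernel density estimate. The model below (a Gaussian with unknown mean) is a deliberately trivial stand-in so the result can be checked against intuition; the bandwidth, grid, and simulation counts are illustrative choices, not the paper's algorithm.

```python
import math
import random

random.seed(1)

def simulate(theta, n):
    # toy simulation-based "model": Gaussian with unknown mean, unit variance
    return [random.gauss(theta, 1.0) for _ in range(n)]

def kde_loglik(data, sims, h=0.3):
    # likelihood approximated by a Gaussian kernel density over simulated
    # samples: no summary statistics and no tolerance threshold
    const = len(sims) * h * math.sqrt(2.0 * math.pi)
    ll = 0.0
    for x in data:
        dens = sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sims)
        ll += math.log(dens / const + 1e-300)
    return ll

observed = simulate(1.0, 30)
grid = [i / 10.0 for i in range(-10, 31)]  # candidate means in [-1, 3]
loglik = [kde_loglik(observed, simulate(t, 1000)) for t in grid]
theta_hat = grid[loglik.index(max(loglik))]
```

The same recipe (simulate, smooth, evaluate) is what makes intractable models like the leaky competing accumulator amenable to posterior estimation.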

13.
This paper evaluates an adaptive staircase procedure for threshold estimation that is suitable for unforced-choice tasks, that is, tasks with the additional response alternative don't know. Within the framework of a theory of indecision, evidence is developed that fluctuations of the response criterion are much less detrimental to unforced-choice tasks than to yes/no tasks. An adaptive staircase procedure for unforced-choice tasks is presented. Computer simulations show a slight gain in efficiency if don't know responses are allowed, even if response criteria vary. A behavioral comparison with forced-choice and yes/no procedures shows that the new procedure outdoes the other two with respect to reliability. This is especially true for naive participants. For well-trained participants it is also slightly more efficient than the forced-choice procedure, and it produces a smaller systematic error than the yes/no procedure. Moreover, informal observations suggest that participants are more comfortable with unforced tasks than with forced ones.
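A toy version of an unforced-choice staircase is sketched below. The step rule (down after correct, up after wrong, half a step up after don't know) and the simulated observer's indecision region are illustrative assumptions made for this sketch, not the paper's exact procedure.

```python
import random

random.seed(7)

def simulate_trial(level, threshold=0.0):
    # toy observer with an indecision region of width 2 around threshold
    d = level - threshold
    if d > 1.0:
        return "correct"
    if d < -1.0:
        return "wrong"
    if random.random() < 0.5:
        return "dont_know"
    return "correct" if random.random() < 0.5 + d / 2.0 else "wrong"

# illustrative step rule: down after correct, up after wrong,
# half a step up after "don't know"
level, step = 5.0, 1.0
track = []
for _ in range(60):
    resp = simulate_trial(level)
    track.append(level)
    if resp == "correct":
        level -= step
    elif resp == "wrong":
        level += step
    else:
        level += step / 2.0

estimate = sum(track[20:]) / len(track[20:])  # mean of post-burn-in levels
```

Because don't know responses move the track only half a step, indecision near threshold keeps trials concentrated there instead of being forced into noisy guesses, which is the intuition behind the reliability gain the abstract reports.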

14.
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
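The trial-placement rule described above (each trial at the current most probable threshold estimate) can be sketched with a grid posterior. The logistic psychometric function, Gaussian prior, and 2AFC guessing rate below are illustrative assumptions in the spirit of such a procedure, not its exact specification.

```python
import math
import random

random.seed(3)

def p_correct(level, threshold, slope=2.0, guess=0.5):
    # 2AFC logistic psychometric function on a log-intensity axis
    p = 1.0 / (1.0 + math.exp(-slope * (level - threshold)))
    return guess + (1.0 - guess) * p

true_threshold = 0.3
grid = [i / 100.0 for i in range(-200, 201)]
post = [math.exp(-0.5 * t ** 2) for t in grid]  # Gaussian prior on threshold

for _ in range(40):
    level = grid[post.index(max(post))]  # place the trial at the posterior mode
    correct = random.random() < p_correct(level, true_threshold)
    post = [q * (p_correct(level, t) if correct else 1.0 - p_correct(level, t))
            for q, t in zip(post, grid)]
    s = sum(post)
    post = [q / s for q in post]

estimate = grid[post.index(max(post))]
```

Each response multiplies the posterior by the likelihood of that response at the placed level, so the mode homes in on the threshold as trials accumulate.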

15.
In [Kujala, J. V., Richardson, U., & Lyytinen, H. (2010). A Bayesian-optimal principle for learner-friendly adaptation in learning games. Journal of Mathematical Psychology, 54(2), 247–255], we considered an extension of the conventional Bayesian adaptive estimation framework to situations where each observable variable is associated with a certain random cost of observation. We proposed an algorithm that chooses each placement by maximizing the expected gain in utility divided by the expected cost. In this paper, we formally justify this placement rule as an asymptotically optimal solution to the problem of maximizing the expected utility of an experiment that terminates when the total cost overruns a given budget. For example, the cost could be defined as the random time taken by each trial in an experiment, and one might wish to maximize the expected total information gain over as many trials as can be completed in 15 min. A simple, analytically tractable example is considered.
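The placement rule, maximizing expected utility gain divided by expected cost, is easy to illustrate with hypothetical numbers. In the sketch below the candidate placements and their expected gains and costs are invented; the point is that the ratio rule beats greedy gain maximization once a total-cost budget binds.

```python
# hypothetical candidate placements with expected information gain (utility)
# and expected cost (say, seconds per trial)
candidates = {
    "easy":   {"gain": 0.10, "cost": 2.0},
    "medium": {"gain": 0.45, "cost": 4.0},
    "hard":   {"gain": 0.60, "cost": 9.0},
}

def ratio_rule(cands):
    # the placement rule above: maximize expected gain per unit expected cost
    return max(cands, key=lambda k: cands[k]["gain"] / cands[k]["cost"])

def greedy_rule(cands):
    # naive alternative: maximize expected gain, ignoring cost
    return max(cands, key=lambda k: cands[k]["gain"])

def total_utility(rule, budget=60.0):
    # run trials until the next one would overrun the budget (deterministic toy)
    spent = utility = 0.0
    while True:
        k = rule(candidates)
        if spent + candidates[k]["cost"] > budget:
            return utility
        spent += candidates[k]["cost"]
        utility += candidates[k]["gain"]

u_ratio = total_utility(ratio_rule)    # 15 "medium" trials
u_greedy = total_utility(greedy_rule)  # 6 "hard" trials
```

The "hard" placement has the largest per-trial gain but the worst gain-to-cost rate, so it wastes the budget; this is the asymptotic-optimality intuition the paper formalizes.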

16.
Item response curves for a set of binary responses are studied from a Bayesian viewpoint of estimating the item parameters. For the two-parameter logistic model with normally distributed ability, restricted bivariate beta priors are used to illustrate the computation of the posterior mode via the EM algorithm. The procedure is illustrated by data from a mathematics test. This work was supported under Contract No. N00014-85-K-0113, NR 150-535, from Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research. The authors wish to thank Mark D. Reckase for providing the ACT data used in the illustration and Michael J. Soltys for computational assistance. They also wish to thank the editor and four anonymous reviewers for many valuable suggestions.

17.
18.
A test score on a psychological test is usually expressed as a normed score, representing its position relative to test scores in a reference population. These typically depend on predictor(s) such as age. The test score distribution conditional on predictors is estimated using regression, which may need large normative samples to estimate the relationships between the predictor(s) and the distribution characteristics properly. In this study, we examine to what extent this burden can be alleviated by using prior information in the estimation of new norms with Bayesian Gaussian distributional regression. In a simulation study, we investigate to what extent this norm estimation is more efficient and how robust it is to prior model deviations. We varied the prior type, prior misspecification and sample size. In our simulated conditions, using a fixed effects prior resulted in more efficient norm estimation than a weakly informative prior as long as the prior misspecification was not age dependent. With the proposed method and reasonable prior information, the same norm precision can be achieved with a smaller normative sample, at least in empirical problems similar to our simulated conditions. This may help test developers to achieve cost-efficient high-quality norms. The method is illustrated using empirical normative data from the IDS-2 intelligence test.
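The norming setup described above can be reduced to a toy example: given an (assumed) age-conditional score distribution, a raw score is converted to a z-score and percentile. The linear mean model and constant SD below are hypothetical stand-ins for the fitted distributional regression model.

```python
import math

def norm_z(raw, age, b0=20.0, b1=0.5, sd=5.0):
    # z-score relative to a hypothetical age-conditional distribution:
    # mean = b0 + b1 * age, constant SD (stand-ins for the fitted model)
    return (raw - (b0 + b1 * age)) / sd

def percentile(z):
    # standard normal CDF via the error function, expressed in percent
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

z = norm_z(raw=30.0, age=10.0)  # expected score at age 10 is 25, so z = 1
pct = percentile(z)
```

Every coefficient here (b0, b1, sd) must be estimated from the normative sample, which is exactly where informative priors can reduce the required sample size.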

19.
The Minkowski property of psychological space has long been of interest to researchers. A common strategy has been calculating the stress in multidimensional scaling for many Minkowski exponent values and choosing the one that results in the lowest stress. However, this strategy has an arbitrariness problem: it depends on an arbitrary choice of loss function. Although a recently proposed Bayesian approach could solve this problem, the method was intended for individual subject data. It is unknown whether this method is directly applicable to averaged or single data, which are common in psychology and behavioral science. Therefore, we first conducted a simulation study to evaluate the applicability of the method to the averaged data problem and found that it failed to recover the true Minkowski exponent. Therefore, a new method is proposed that is a simple extension of the existing Euclidean Bayesian multidimensional scaling to the Minkowski metric. Another simulation study revealed that the proposed method could successfully recover the true Minkowski exponent. BUGS codes used in this study are given in the Appendix.
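The grid-search strategy criticized in the abstract is easy to sketch: compute stress under several Minkowski exponents and keep the lowest. In the toy below the dissimilarities are generated from a known city-block configuration, so the grid search recovers r = 1; the configuration and exponent grid are invented for illustration.

```python
def minkowski_d(x, y, r):
    # Minkowski-r distance between two points
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

def stress(points, observed, r):
    # Kruskal-type stress between observed dissimilarities and
    # Minkowski-r distances in the configuration
    num = den = 0.0
    for (i, j), delta in observed.items():
        d = minkowski_d(points[i], points[j], r)
        num += (delta - d) ** 2
        den += d ** 2
    return (num / den) ** 0.5

# invented configuration; dissimilarities generated under city-block (r = 1)
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.8)]
obs = {(i, j): minkowski_d(pts[i], pts[j], 1.0)
       for i in range(len(pts)) for j in range(i + 1, len(pts))}

stresses = {r: stress(pts, obs, r) for r in (1.0, 1.5, 2.0, 3.0)}
best_r = min(stresses, key=stresses.get)
```

In a real analysis the configuration would be re-estimated for each exponent; here the true configuration is reused to keep the sketch short. The Bayesian alternative proposed in the paper estimates the exponent jointly instead of picking a grid winner.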

20.
A state-of-the-art data analysis procedure is presented to conduct hierarchical Bayesian inference and hypothesis testing on delay discounting data. The delay discounting task is a key experimental paradigm used across a wide range of disciplines, including economics, cognitive science, and neuroscience, all of which seek to understand how humans or animals trade off the immediacy versus the magnitude of a reward. Bayesian estimation allows rich inferences to be drawn, along with measures of confidence, based upon limited and noisy behavioural data. Hierarchical modelling allows more precise inferences to be made, making the most efficient use of data that are sometimes expensive or difficult to obtain. The proposed probabilistic generative model describes how participants compare the present subjective values of reward choices on a trial-to-trial basis, and estimates participant- and group-level parameters. We infer discount rate as a function of reward size, allowing the magnitude effect to be measured. Demonstrations are provided to show how this analysis approach can aid hypothesis testing. The analysis is demonstrated on data from the popular 27-item monetary choice questionnaire (Kirby, Psychonomic Bulletin & Review, 16(3), 457–462, 2009), but will accept data from a range of protocols, including adaptive procedures. The software is made freely available to researchers.
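The generative building blocks, hyperbolic discounting plus a probabilistic choice rule, can be sketched directly. The softmax temperature and the two discount rates k below are illustrative, and the item (55 now versus 75 in 61 days) is in the style of the Kirby questionnaire; the paper's hierarchical model estimates such parameters per participant and per group rather than fixing them.

```python
import math

def subjective_value(amount, delay, k):
    # hyperbolic discounting: V = A / (1 + k * D)
    return amount / (1.0 + k * delay)

def p_choose_delayed(immediate, delayed, delay, k, tau=1.0):
    # logistic (softmax) choice rule on the difference in present values
    dv = subjective_value(delayed, delay, k) - immediate
    return 1.0 / (1.0 + math.exp(-dv / tau))

# Kirby-style item: 55 today versus 75 in 61 days
p_patient = p_choose_delayed(55.0, 75.0, 61.0, k=0.002)   # shallow discounter
p_impulsive = p_choose_delayed(55.0, 75.0, 61.0, k=0.05)  # steep discounter
```

A shallow discounter almost always takes the delayed reward and a steep discounter almost never does, so observed choices across items constrain k, which is the quantity the hierarchical model infers.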


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号