Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good as, and in some cases better than, CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
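The CML route for the ex-Gaussian can be sketched in a few lines. This is not the QMPE Fortran program itself, only a rough Python illustration using SciPy's `exponnorm` (which parameterizes the ex-Gaussian by K = tau/sigma); all simulation values are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate ex-Gaussian "response times": Normal(mu, sigma) + Exponential(tau).
mu, sigma, tau = 0.4, 0.05, 0.1
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

# Continuous maximum likelihood fit. scipy parameterizes the ex-Gaussian
# (exponnorm) by K = tau / sigma, loc = mu, scale = sigma.
K, loc, scale = stats.exponnorm.fit(rt)
tau_hat, sigma_hat, mu_hat = K * scale, scale, loc
print(mu_hat, sigma_hat, tau_hat)
```

With 500 observations the three parameters are typically recovered to within a few hundredths, consistent with the sample sizes discussed above.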

4.
To date, utility analysis research has derived point estimates of the expected utility value for human resource management programs or interventions. Utility estimates are usually quite large, but they fail to reflect the size and shape of the utility distribution. The present study investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers in a medium-sized computer manufacturing organization. Utility calculations incorporated financial/economic factors as well as employee flows over time. The distributions for each utility parameter were empirically estimated, and these distribution estimates were combined through a Monte Carlo analysis to yield a distribution of total utility values. Monte Carlo results were compared to three other risk assessment approaches: (1) sensitivity analysis, (2) break-even analysis, and (3) algebraic derivation of the distribution. Results suggest that the distribution information provided by the Monte Carlo analysis more completely described the variability and riskiness associated with the expected utility value. Future research suggested by these findings is discussed.
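The Monte Carlo step described above, propagating parameter uncertainty into a distribution of utility values, can be sketched as follows. This is only an illustrative Python sketch of Brogden-Cronbach-Gleser-style selection utility; every distribution and dollar figure below is hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # Monte Carlo draws

# Hypothetical distributions for each utility parameter: test validity r,
# dollar-valued SD of job performance SDy, and mean standardized predictor
# score of those selected, zbar.
r    = rng.normal(0.50, 0.05, n)
SDy  = rng.normal(10_000, 2_000, n)
zbar = rng.normal(0.80, 0.10, n)
N, T, cost = 20, 5, 500        # hires, tenure in years, cost per applicant
n_applicants = 100

# Utility per draw: N * T * r * SDy * zbar minus total selection costs.
utility = N * T * r * SDy * zbar - n_applicants * cost
print(utility.mean(), np.percentile(utility, [5, 95]))
```

Unlike a single point estimate, the resulting distribution exposes the spread and downside risk of the intervention directly.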

5.
Schwarz (2001, 2002) proposed the ex-Wald distribution, obtained from the convolution of Wald and exponential random variables, as a model of simple and go/no-go response time. This article provides functions for the S-PLUS package that produce maximum likelihood estimates of the parameters for the ex-Wald, as well as for the shifted Wald and ex-Gaussian, distributions. In a Monte Carlo study, the efficiency and bias of parameter estimates were examined. Results indicated that samples of at least 400 are necessary to obtain adequate estimates of the ex-Wald and that, for some parameter ranges, much larger samples may be required. For shifted Wald estimation, smaller samples of around 100 were adequate, at least when fits identified by the software as having ill-conditioned maxima were excluded. The use of all functions is illustrated using data from Schwarz (2001). The S-PLUS functions and Schwarz's data may be downloaded from the Psychonomic Society's Web archive, www.psychonomic.org/archive/.
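As a rough analogue of the shifted Wald estimation discussed above (not the authors' S-PLUS code), SciPy's `invgauss` can fit the shape, shift (`loc`), and scale parameters jointly by maximum likelihood; all simulation values here are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulate a shifted Wald (inverse Gaussian plus a 0.2 s shift).
shift, mu_shape, scale = 0.2, 0.5, 1.0
rt = stats.invgauss.rvs(mu_shape, loc=shift, scale=scale, size=1000,
                        random_state=rng)

# ML fit; scipy estimates shape (mu), shift (loc), and scale jointly.
mu_hat, loc_hat, scale_hat = stats.invgauss.fit(rt)
print(mu_hat, loc_hat, scale_hat)
```

Because the Wald density falls to zero at its lower bound, the shifted Wald likelihood is better behaved than some other shifted distributions, which is consistent with the smaller sample sizes found adequate above.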


7.
E. Maris, Psychometrika, 1998, 63(1): 65-71
In the context of conditional maximum likelihood (CML) estimation, confidence intervals can be interpreted in three different ways, depending on the sampling distribution under which these confidence intervals contain the true parameter value with a certain probability. These sampling distributions are (a) the distribution of the data given the incidental parameters, (b) the marginal distribution of the data (i.e., with the incidental parameters integrated out), and (c) the conditional distribution of the data given the sufficient statistics for the incidental parameters. Results on the asymptotic distribution of CML estimates under sampling scheme (c) can be used to construct asymptotic confidence intervals using only the CML estimates. This is not possible for the results on the asymptotic distribution under sampling schemes (a) and (b). However, it is shown that the conditional asymptotic confidence intervals are also valid under the other two sampling schemes. I am indebted to Theo Eggen, Norman Verhelst, and one of Psychometrika's reviewers for their helpful comments.

8.
Quantile maximum likelihood (QML) is an estimation technique, proposed by Heathcote, Brown, and Mewhort (2002), that provides robust and efficient estimates of distribution parameters, typically for response time data, in sample sizes as small as 40 observations. In view of the computational difficulty inherent in implementing QML, we provide open-source Fortran 90 code that calculates QML estimates for parameters of the ex-Gaussian distribution, as well as standard maximum likelihood estimates. We show that parameter estimates from QML are asymptotically unbiased and normally distributed. Our software provides asymptotically correct standard error and parameter intercorrelation estimates, as well as producing the outputs required for constructing quantile-quantile plots. The code is parallelizable and can easily be modified to estimate parameters from other distributions. Compiled binaries, as well as the source code, example analysis files, and a detailed manual, are available for free on the Internet.

9.
We introduce and evaluate, via a Monte Carlo study, a robust new estimation technique that fits distribution functions to grouped response time (RT) data, where the grouping is determined by sample quantiles. The new estimator, quantile maximum likelihood (QML), is more efficient and less biased than the best alternative estimation technique when fitting the commonly used ex-Gaussian distribution. Limitations of the Monte Carlo results are discussed, and guidance is provided for the practical application of the new technique. Because QML estimation can be computationally costly, we make fast open-source code for fitting available that can be easily modified.
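The core of QML, maximizing the multinomial likelihood of the counts that fall between sample quantiles, can be sketched as follows. This is a minimal Python illustration for the ex-Gaussian, not the authors' code; the decile grid and starting values are arbitrary choices:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
rt = rng.normal(0.4, 0.05, 500) + rng.exponential(0.1, 500)

# Sample quantiles define the bins; QML maximizes the multinomial
# likelihood of the bin counts under the candidate distribution.
probs = np.arange(0.1, 1.0, 0.1)
edges = np.concatenate([[-np.inf], np.quantile(rt, probs), [np.inf]])
counts, _ = np.histogram(rt, edges)

def neg_qml(params):
    mu, sigma, tau = params
    if sigma <= 0 or tau <= 0:
        return np.inf
    p = np.diff(stats.exponnorm.cdf(edges, tau / sigma, loc=mu, scale=sigma))
    if np.any(p <= 0):
        return np.inf
    return -np.sum(counts * np.log(p))

res = optimize.minimize(neg_qml, x0=[rt.mean(), rt.std() / 2, rt.std() / 2],
                        method="Nelder-Mead")
mu_hat, sigma_hat, tau_hat = res.x
print(res.x)
```

Because only bin probabilities enter the likelihood, the fit depends on the data solely through the sample quantiles, which is what gives the method its robustness.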

10.
Recently, several authors have proposed the use of random graph theory to evaluate the adequacy of cluster analysis results. One such statistic is the minimum number of lines (edges) V needed to connect a random graph. Erdös and Rényi derived asymptotic distributions of V. Schultz and Hubert showed in a Monte Carlo study that the asymptotic approximations are poor for small sample sizes n typically used in data analysis applications. In this paper the exact probability distribution of V is given and the distributions for some values of n are tabulated and compared with existing Monte Carlo approximations.
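For small n, the distribution of V is easy to approximate by direct simulation: add distinct random edges in a random order and record how many were needed before the graph becomes connected. A minimal Python sketch (illustrative only, using a union-find structure):

```python
import random

def edges_to_connect(n, rng):
    """Add uniformly random distinct edges until the graph on n vertices
    is connected; return the number of edges V used."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(edges)
    components, v = n, 0
    for a, b in edges:
        v += 1
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
            if components == 1:
                return v
    return v

rng = random.Random(4)
sample = [edges_to_connect(10, rng) for _ in range(2000)]
print(sum(sample) / len(sample))
```

Tabulating `sample` gives a Monte Carlo approximation to the distribution of V for n = 10, the kind of small-n case where the asymptotic formulas are poor.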

11.
Extended redundancy analysis (ERA) combines linear regression with dimension reduction to explore the directional relationships between multiple sets of predictors and outcome variables in a parsimonious manner. It aims to extract a component from each set of predictors in such a way that it accounts for the maximum variance of the outcome variables. In this article, we extend ERA into the Bayesian framework, called Bayesian ERA (BERA). The advantages of BERA are threefold. First, BERA supports statistical inference based on samples drawn from the joint posterior distribution of the parameters, obtained from a Markov chain Monte Carlo algorithm. As such, it does not require any resampling method, which ordinary (frequentist) ERA needs in order to test the statistical significance of parameter estimates. Second, it formally incorporates relevant information from previous research into analyses by specifying informative power prior distributions. Third, BERA handles missing data by implementing multiple imputation within the Markov chain Monte Carlo algorithm, avoiding the potential bias of parameter estimates due to missing data. We assess the performance of BERA through simulation studies and apply BERA to real data on academic achievement.

12.
In most item response theory applications, model parameters need to be first calibrated from sample data. Latent variable (LV) scores calculated using estimated parameters are thus subject to sampling error inherited from the calibration stage. In this article, we propose a resampling-based method, namely bootstrap calibration (BC), to reduce the impact of the carryover sampling error on the interval estimates of LV scores. BC modifies the quantile of the plug-in posterior, i.e., the posterior distribution of the LV evaluated at the estimated model parameters, to better match the corresponding quantile of the true posterior, i.e., the posterior distribution evaluated at the true model parameters, over repeated sampling of calibration data. Furthermore, to achieve better coverage of the fixed true LV score, we explore the use of BC in conjunction with Jeffreys' prior. We investigate the finite-sample performance of BC via Monte Carlo simulations and apply it to two empirical data examples.

13.
Growth curve models have been widely used to analyse longitudinal data in social and behavioural sciences. Although growth curve models with normality assumptions are relatively easy to estimate, practical data are rarely normal. Failing to account for non-normal data may lead to unreliable model estimation and misleading statistical inference. In this work, we propose a robust approach for growth curve modelling using conditional medians that are less sensitive to outlying observations. Bayesian methods are applied for model estimation and inference. Based on the existing work on Bayesian quantile regression using asymmetric Laplace distributions, we use asymmetric Laplace distributions to convert the problem of estimating a median growth curve model into a problem of obtaining the maximum likelihood estimator for a transformed model. Monte Carlo simulation studies have been conducted to evaluate the numerical performance of the proposed approach with data containing outliers or leverage observations. The results show that the proposed approach yields more accurate and efficient parameter estimates than traditional growth curve modelling. We illustrate the application of our robust approach using conditional medians based on a real data set from the Virginia Cognitive Aging Project.
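At the median, maximum likelihood under an asymmetric Laplace distribution reduces to minimizing the sum of absolute residuals. A minimal Python sketch of this equivalence for a linear growth trajectory (all simulated values are hypothetical, not from the Virginia Cognitive Aging Project data):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)

# Simulated longitudinal data: y = 2 + 0.5 * t plus heavy-tailed noise,
# 50 subjects measured at 5 occasions.
t = np.tile(np.arange(5), 50).astype(float)
y = 2.0 + 0.5 * t + rng.standard_t(df=2, size=t.size)

# ML under an asymmetric Laplace with tau = 0.5 is equivalent to
# minimizing the sum of absolute residuals (median regression).
def lad_loss(beta):
    return np.abs(y - beta[0] - beta[1] * t).sum()

res = optimize.minimize(lad_loss, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)  # estimated intercept and slope
```

Even with heavy-tailed noise, the median-based fit recovers the growth parameters, illustrating the robustness that motivates the approach.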

14.
A constrained generalized maximum likelihood routine for fitting psychometric functions is proposed, which determines optimum values for the complete parameter set, that is, threshold and slope, as well as for guessing and lapsing probability. The constraints are realized by Bayesian prior distributions for each of these parameters. The fit itself results from maximizing the posterior distribution of the parameter values by a multidimensional simplex method. We present results from extensive Monte Carlo simulations by which we can approximate the bias and variability of the estimated parameters of simulated psychometric functions. Furthermore, we have tested the routine with data gathered in real psychophysical experimental sessions.
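The approach, a likelihood combined with Bayesian priors on the guess and lapse rates and maximized by a simplex method, can be sketched as follows. This is an illustrative Python analogue, not the authors' routine; the logistic form, the Beta priors, and all numerical values are assumptions:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(6)

# Simulated yes/no data from a logistic psychometric function with
# threshold alpha, slope beta, guess rate g, and lapse rate lam.
x = np.repeat(np.linspace(-3, 3, 7), 60)
def psi(x, alpha, beta, g, lam):
    return g + (1 - g - lam) / (1 + np.exp(-beta * (x - alpha)))
y = rng.random(x.size) < psi(x, 0.5, 1.5, 0.05, 0.02)

def neg_log_posterior(p):
    alpha, beta, g, lam = p
    if beta <= 0 or not (0 <= g < 0.5) or not (0 <= lam < 0.5):
        return np.inf
    prob = np.clip(psi(x, alpha, beta, g, lam), 1e-9, 1 - 1e-9)
    ll = np.sum(np.where(y, np.log(prob), np.log1p(-prob)))
    # Beta priors constrain guessing and lapsing to plausibly small values.
    prior = stats.beta.logpdf(g, 1.5, 20) + stats.beta.logpdf(lam, 1.5, 20)
    return -(ll + prior)

# Maximize the posterior with a multidimensional simplex (Nelder-Mead).
res = optimize.minimize(neg_log_posterior, x0=[0.0, 1.0, 0.02, 0.02],
                        method="Nelder-Mead")
print(res.x)
```

The priors act as the constraints: without them, small data sets often push the guess or lapse estimates to implausible values.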

15.
This paper provides a statistical framework for estimating higher-order characteristics of the response time distribution, such as the scale (variability) and shape. Consideration of these higher-order characteristics often provides for more rigorous theory development in cognitive and perceptual psychology (e.g., Luce, 1986). The RT distribution for a single participant depends on certain participant characteristics, which in turn can be thought of as arising from a distribution of latent variables. The present work focuses on the three-parameter Weibull distribution, with parameters for shape, scale, and shift (initial value). Bayesian estimation in a hierarchical framework is conceptually straightforward. Parameter estimates, both for participant quantities and population parameters, are obtained through Markov chain Monte Carlo methods. The methods are illustrated with an application to response time data in an absolute identification task. The behavior of the Bayes estimates is compared to maximum likelihood (ML) estimates through Monte Carlo simulations. For small sample sizes, there is an occasional tendency for the ML estimates to be unreasonably extreme. In contrast, by borrowing strength across participants, Bayes estimation shrinks extreme estimates. The results show that the Bayes estimators are more accurate than the corresponding ML estimators. We are grateful to Michael Stadler who allowed us use of his data. This research is supported by (a) National Science Foundation Grant SES-0095919 to J. Rouder, D. Sun, and P. Speckman, (b) University of Missouri Research Board Grant 00-77 to J. Rouder, (c) National Science Foundation grant DMS-9972598 to Sun and Speckman, and (d) a grant from the Missouri Department of Conservation to D. Sun.

16.
To assess the effect of a manipulation on a response time distribution, psychologists often use Vincentizing or quantile averaging to construct group or "average" distributions. We provide a theorem characterizing the large sample properties of the averaged quantiles when the individual RT distributions all belong to the same location-scale family. We then apply the theorem to estimating parameters for the quantile-averaged distributions. From the theorem, it is shown that parameters of the group distribution can be estimated by generalized least squares. This method provides accurate estimates of standard errors of parameters and can therefore be used in formal inference. The method is benchmarked in a small simulation study against both a maximum likelihood method and an ordinary least-squares method. Generalized least squares essentially is the only method based on the averaged quantiles that is both unbiased and provides accurate estimates of parameter standard errors. It is also proved that for location-scale families, performing generalized least squares on quantile averages is formally equivalent to averaging parameter estimates from generalized least squares performed on individuals. A limitation on the method is that individual RT distributions must be members of the same location-scale family.
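Quantile averaging itself is a one-liner: compute the same set of quantiles for each participant and average them across participants. A minimal Python sketch with simulated participants from one location-scale family (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Three simulated participants from one location-scale family (normal),
# with different locations and scales.
participants = [rng.normal(mu, sd, 200)
                for mu, sd in [(0.4, 0.05), (0.5, 0.08), (0.45, 0.06)]]

# Vincentize: average the same quantiles across participants.
probs = np.arange(0.1, 1.0, 0.1)
group_quantiles = np.mean([np.quantile(p, probs) for p in participants],
                          axis=0)
print(group_quantiles)
```

The theorem above concerns exactly these averaged quantiles: within a location-scale family, the group quantile function inherits the family's form, which is what licenses the generalized least squares fit.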

17.
Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived and proved to be less biased than maximum likelihood estimation (MLE), with the same asymptotic variance and normal distribution. WLE removes the first-order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.
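For the Rasch model with known item difficulties, Warm's WLE can be computed by maximizing the log-likelihood plus half the log of the test information, which is what removes the first-order bias term. A minimal Python sketch (item difficulties and response pattern are hypothetical):

```python
import numpy as np
from scipy import optimize

# Known Rasch item difficulties and one examinee's response pattern.
b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
x = np.array([1, 1, 1, 0, 0])

def info(theta):
    p = 1 / (1 + np.exp(-(theta - b)))
    return np.sum(p * (1 - p))  # test information

def neg_weighted_loglik(theta):
    p = 1 / (1 + np.exp(-(theta - b)))
    ll = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    # Warm's weighting adds half the log test information, which
    # removes the first-order bias of the MLE.
    return -(ll + 0.5 * np.log(info(theta)))

wle = optimize.minimize_scalar(neg_weighted_loglik,
                               bounds=(-6, 6), method="bounded").x
print(wle)
```

Dropping the information term recovers the plain MLE; the weighted estimate sits slightly closer to the region of maximum information, which is the bias correction at work.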

18.
Heathcote, Brown, and Mewhort (2002) have introduced a new, robust method of estimating response time distributions. Their method may have practical advantages over conventional maximum likelihood estimation. The basic idea is that the likelihood of parameters is maximized given a few quantiles from the data. We show that Heathcote et al.'s likelihood function is not correct and provide the appropriate correction. However, although our correction stands on firmer theoretical ground than Heathcote et al.'s, it appears to yield worse parameter estimates. This result further indicates that, at least for some distributions and situations, quantile maximum likelihood estimation may have better nonasymptotic properties than a more theoretically justified approach.

19.
Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness of Rasch's approach for several important testing problems through simulation studies. Our Monte Carlo methodology is shown to compare favorably to other Monte Carlo methods proposed for this problem in two respects: it is considerably faster and it provides more reliable estimates of the Monte Carlo standard error. This research was supported in part by National Science Foundation grant DMS-0203762 and a University of Pennsylvania Research Foundation grant. The authors are grateful to Don Burdick for helpful comments. In addition, the authors wish to thank the editor, the associate editor, and the referees for their helpful suggestions. This revised article was published online in August 2005 with the PDF paginated correctly.

20.
A classic data-analytic problem is the statistical evaluation of the distributional form of interval-scale scores. The investigator may need to know whether the scores originate from a single Gaussian distribution, from a mixture of Gaussian distributions, or from a different probability distribution. The relative merits of extant goodness-of-fit metrics are discussed. Monte Carlo power analyses are provided for several of the more powerful goodness-of-fit metrics.
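A Monte Carlo power analysis of this kind amounts to simulating from the alternative, applying the goodness-of-fit test, and recording the rejection rate. A minimal Python sketch using the Shapiro-Wilk test against a (hypothetical) two-component Gaussian mixture:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def mixture_sample(n):
    # Two-component Gaussian mixture as the non-normal alternative.
    z = rng.random(n) < 0.5
    return np.where(z, rng.normal(-2, 1, n), rng.normal(2, 1, n))

# Estimated power of the Shapiro-Wilk test at alpha = .05, n = 100.
reps, n, alpha = 500, 100, 0.05
rejections = sum(stats.shapiro(mixture_sample(n)).pvalue < alpha
                 for _ in range(reps))
power = rejections / reps
print(power)
```

Swapping in other tests (e.g., Kolmogorov-Smirnov or Anderson-Darling) and other alternatives gives the kind of power comparison the abstract describes.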


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号