Similar Articles
 20 similar articles found (search time: 31 ms)
1.
The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate the use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
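The lapse bias described above can be illustrated in a few lines. The sketch below (NumPy/SciPy; the stimulus levels, trial counts, observer parameters, and the 6% lapse ceiling are all illustrative assumptions, not values from the paper) fits a 2AFC Weibull psychometric function by constrained maximum likelihood, once with a free-but-bounded lapse parameter and once with the lapse rate wrongly fixed at zero:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def psi(x, alpha, beta, lam, gamma=0.5):
    """2AFC Weibull psychometric function: guess rate gamma, lapse rate lam."""
    F = 1.0 - np.exp(-(x / alpha) ** beta)
    return gamma + (1.0 - gamma - lam) * F

def neg_log_lik(params, x, k, n, fixed_lapse=None):
    alpha, beta = params[0], params[1]
    lam = fixed_lapse if fixed_lapse is not None else params[2]
    if alpha <= 0 or beta <= 0 or not (0.0 <= lam <= 0.06):  # lapse constrained to a small box
        return np.inf
    p = np.clip(psi(x, alpha, beta, lam), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

# Simulated observer with a 3% stimulus-independent lapse rate
x = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0])
n = np.full(x.size, 160)
k = rng.binomial(n, psi(x, alpha=1.0, beta=3.0, lam=0.03))

# Fit twice: lapse rate free (but constrained), and lapse rate pinned to zero
fit_free = minimize(neg_log_lik, [1.0, 2.0, 0.01], args=(x, k, n), method="Nelder-Mead")
fit_zero = minimize(neg_log_lik, [1.0, 2.0], args=(x, k, n, 0.0), method="Nelder-Mead")
```

Comparing `fit_zero.x` with `fit_free.x` typically shows the threshold and slope estimates degrading when lapses are ignored, which is the bias the paper quantifies with far more extensive simulations.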

2.
A constrained generalized maximum likelihood routine for fitting psychometric functions is proposed, which determines optimum values for the complete parameter set--that is, threshold and slope--as well as for the guessing and lapsing probabilities. The constraints are realized by Bayesian prior distributions for each of these parameters. The fit itself results from maximizing the posterior distribution of the parameter values by a multidimensional simplex method. We present results from extensive Monte Carlo simulations by which we approximate the bias and variability of the estimated parameters of simulated psychometric functions. Furthermore, we have tested the routine with data gathered in real psychophysical sessions.
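A minimal sketch of this kind of MAP fit, assuming particular prior shapes that are illustrative choices only (Beta priors keeping the guessing and lapsing probabilities small, a broad normal prior on threshold): the negative log posterior is minimized with the Nelder-Mead simplex, as in the routine described.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist, norm

rng = np.random.default_rng(7)

def pf(x, m, w, gamma, lam):
    """Logistic core with lower asymptote (guessing) gamma and lapse rate lam."""
    F = 1.0 / (1.0 + np.exp(-(x - m) / w))
    return gamma + (1.0 - gamma - lam) * F

def neg_log_posterior(p, x, k, n):
    m, w, gamma, lam = p
    if w <= 0 or not (0.0 < gamma < 0.5) or not (0.0 < lam < 0.5):
        return np.inf
    prob = np.clip(pf(x, m, w, gamma, lam), 1e-9, 1 - 1e-9)
    log_lik = np.sum(k * np.log(prob) + (n - k) * np.log(1 - prob))
    # Priors (illustrative shapes): Beta priors pull gamma and lam toward
    # small values; a broad normal prior barely constrains the threshold.
    log_prior = (beta_dist.logpdf(gamma, 2, 20)
                 + beta_dist.logpdf(lam, 2, 20)
                 + norm.logpdf(m, 0.0, 10.0))
    return -(log_lik + log_prior)

x = np.linspace(-3.0, 3.0, 9)
n = np.full(x.size, 80)
k = rng.binomial(n, pf(x, m=0.3, w=0.8, gamma=0.05, lam=0.02))

# Maximize the posterior over the complete parameter set with the simplex method
fit = minimize(neg_log_posterior, [0.0, 1.0, 0.1, 0.1], args=(x, k, n),
               method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-6, "fatol": 1e-6})
m_hat, w_hat, gamma_hat, lam_hat = fit.x
```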

3.

When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children’s poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to “easy” catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust than others to lapses in attention, which has important implications for assessing perception in children and clinical groups.
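For readers unfamiliar with the 1-up 2-down rule mentioned above, a simulation sketch (the Weibull observer parameters, step factor, and trial count are all hypothetical) of a Levitt staircase, whose track converges on the stimulus level yielding roughly 70.7% correct:

```python
import numpy as np

rng = np.random.default_rng(3)

def p_correct(level, threshold=2.0, slope=1.5, guess=0.5, lapse=0.02):
    """Hypothetical 2AFC Weibull observer used to drive the simulation."""
    F = 1.0 - np.exp(-(level / threshold) ** slope)
    return guess + (1.0 - guess - lapse) * F

def levitt_staircase(n_trials=400, start=8.0, factor=2 ** 0.25):
    """1-up 2-down rule: two consecutive correct responses make the task
    harder, one error makes it easier; the track homes in on ~70.7% correct."""
    level, run, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if rng.random() < p_correct(level):
            run += 1
            if run < 2:
                continue
            run, direction = 0, -1
            level /= factor
        else:
            run, direction = 0, +1
            level *= factor
        if last_dir and direction != last_dir:
            reversals.append(level)
        last_dir = direction
    return float(np.mean(reversals[-8:]))  # threshold: mean of final reversals

est = levitt_staircase()
```

Averaging the final reversals is the estimator the abstract refers to; the simulations reported there compare it against fitting a psychometric function to the same trial-by-trial data.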


4.
Confidence intervals for the parameters of psychometric functions
A Monte Carlo method for computing the bias and standard deviation of estimates of the parameters of a psychometric function such as the Weibull/Quick is described. The method, based on Efron's parametric bootstrap, can also be used to estimate confidence intervals for these parameters. The method's ability to predict bias, standard deviation, and confidence intervals is evaluated in two ways. First, its predictions are compared to the outcomes of Monte Carlo simulations of psychophysical experiments. Second, its predicted confidence intervals are compared with the actual variability of human observers in a psychophysical task. Computer programs implementing the method are available from the author.
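The parametric bootstrap at the heart of this method is compact enough to sketch (stimulus placement, trial counts, and generating parameters below are illustrative, not taken from the paper): fit once, resample binomial data repeatedly from the fitted function, refit each replicate, and read off percentile confidence limits.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)

def weibull(x, alpha, beta, gamma=0.5):
    """2AFC Weibull psychometric function (illustrative parameterization)."""
    return gamma + (1.0 - gamma) * (1.0 - np.exp(-(x / alpha) ** beta))

def fit(x, k, n):
    def nll(p):
        a, b = p
        if a <= 0 or b <= 0:
            return np.inf
        pr = np.clip(weibull(x, a, b), 1e-9, 1 - 1e-9)
        return -np.sum(k * np.log(pr) + (n - k) * np.log(1 - pr))
    return minimize(nll, [1.0, 2.0], method="Nelder-Mead").x

# One simulated experiment
x = np.array([0.6, 0.8, 1.0, 1.3, 1.7])
n = np.full(x.size, 50)
k = rng.binomial(n, weibull(x, 1.0, 3.0))
alpha_hat, beta_hat = fit(x, k, n)

# Parametric bootstrap: resample binomial data from the *fitted* function,
# refit each replicate, and take percentile confidence limits for alpha.
boot_alpha = np.array([
    fit(x, rng.binomial(n, weibull(x, alpha_hat, beta_hat)), n)[0]
    for _ in range(500)
])
ci_lo, ci_hi = np.percentile(boot_alpha, [2.5, 97.5])
```

The spread of `boot_alpha` also gives the bootstrap estimates of bias and standard deviation that the method is evaluated on.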

5.
The psychometric function relates an observer's performance to an independent variable, usually a physical quantity of an experimental stimulus. Even if a model is successfully fit to the data and its goodness of fit is acceptable, experimenters require an estimate of the variability of the parameters to assess whether differences across conditions are significant. Accurate estimates of variability are difficult to obtain, however, given the typically small size of psychophysical data sets: Traditional statistical techniques are only asymptotically correct and can be shown to be unreliable in some common situations. Here and in our companion paper (Wichmann & Hill, 2001), we suggest alternative statistical techniques based on Monte Carlo resampling methods. The present paper's principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes. First, we outline the basic bootstrap procedure and argue in favor of the parametric, as opposed to the nonparametric, bootstrap. Second, we describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested. Third, we show how one's choice of sampling scheme (the placement of sample points on the stimulus axis) strongly affects the reliability of bootstrap confidence intervals, and we make recommendations on how to sample the psychometric function efficiently. Fourth, we show that, under certain circumstances, the (arbitrary) choice of the distribution function can exert an unwanted influence on the size of the bootstrap confidence intervals obtained, and we make recommendations on how to avoid this influence. Finally, we introduce improved confidence intervals (bias corrected and accelerated) that improve on the parametric and percentile-based bootstrap confidence intervals previously used. Software implementing our methods is available.

7.
The psychometric function's slope provides information about the reliability of psychophysical threshold estimates. Furthermore, knowing the slope allows one to compare, across studies, thresholds that were obtained at different performance criterion levels. Unfortunately, the empirical validation of psychometric function slope estimates is hindered by the bewildering variety of slope measures that are in use. The present article provides conversion formulas for the most popular cases, including the logistic, Weibull, Quick, cumulative normal, and hyperbolic tangent functions as analytic representations, in both linear and log coordinates and to different log bases, the practical decilog unit, the empirically based interquartile range measure of slope, and slope in a d' representation of performance.
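Two of the slope measures mentioned, the derivative at the function's midpoint and the empirically based interquartile range, are linked analytically in the logistic case; a short numerical check of the standard identities (the parameter values are arbitrary):

```python
import numpy as np

def logistic(x, alpha, sigma):
    """F(x) = 1 / (1 + exp(-(x - alpha) / sigma))."""
    return 1.0 / (1.0 + np.exp(-(x - alpha) / sigma))

alpha, sigma = 0.0, 0.5

# Analytic identities for the logistic:
slope_at_mid = 1.0 / (4.0 * sigma)       # dF/dx at x = alpha
iqr = 2.0 * np.log(3.0) * sigma          # x(F=0.75) - x(F=0.25)
# The two measures convert into each other without reference to sigma:
# slope_at_mid * iqr = ln(3) / 2, i.e. slope = ln(3) / (2 * IQR).

# Numerical verification
h = 1e-6
num_slope = (logistic(alpha + h, alpha, sigma)
             - logistic(alpha - h, alpha, sigma)) / (2 * h)
x_of = lambda F: alpha + sigma * np.log(F / (1.0 - F))  # logistic inverse
num_iqr = x_of(0.75) - x_of(0.25)
```

Analogous (but different) constants link the measures for the Weibull, Quick, and cumulative normal forms, which is exactly the bookkeeping the article's conversion tables provide.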

8.
Many psychophysical tasks in current use render nonmonotonic psychometric functions; these include the oddball task, the temporal generalization task, the binary synchrony judgment task, and other forms of the same–different task. Other tasks allow for ternary responses and render three psychometric functions, one of which is also nonmonotonic, like the ternary synchrony judgment task or the unforced choice task. In all of these cases, data are usually collected with the inefficient method of constant stimuli (MOCS), because extant adaptive methods are only applicable when the psychometric function is monotonic. This article develops stimulus placement criteria for adaptive methods designed for use with nonmonotonic psychometric functions or with ternary tasks. The methods are transformations of conventional up–down rules. Simulations under three alternative psychophysical tasks prove the validity of these methods, their superiority to MOCS, and the accuracy with which they recover direct estimates of the parameters determining the psychometric functions, as well as estimates of derived quantities such as the point of subjective equality or the difference limen. Practical recommendations and worked-out examples are provided to illustrate how to use these adaptive methods in empirical research.

9.
We demonstrate some procedures in the statistical computing environment R for obtaining maximum likelihood estimates of the parameters of a psychometric function by fitting a generalized nonlinear regression model to the data. A feature for fitting a linear model to the threshold (or other) parameters of several psychometric functions simultaneously provides a powerful tool for testing hypotheses about the data and, potentially, for reducing the number of parameters necessary to describe them. Finally, we illustrate procedures for treating one parameter as a random effect that would permit a simplified approach to modeling stimulus-independent variability due to factors such as lapses or interobserver differences. These tools will facilitate a more comprehensive and explicit approach to the modeling of psychometric data.

10.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
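The ex-Gaussian density that both estimators target has a closed form, so the baseline CML fit is easy to sketch. The snippet below (illustrating standard CML only, not the QMP estimator or the QMPE program itself; the RT parameters are made up) simulates response times as a Gaussian plus an independent exponential and recovers the parameters by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

def exgauss_logpdf(x, mu, sigma, tau):
    """Log density of the ex-Gaussian: Normal(mu, sigma) + Exponential(tau)."""
    z = (x - mu) / sigma - sigma / tau
    return (-np.log(tau) + sigma ** 2 / (2.0 * tau ** 2)
            - (x - mu) / tau + norm.logcdf(z))

# Simulated response times (seconds); parameters are illustrative
true_mu, true_sigma, true_tau = 0.4, 0.05, 0.15
rt = rng.normal(true_mu, true_sigma, 2000) + rng.exponential(true_tau, 2000)

def nll(p):
    mu, sigma, tau = p
    if sigma <= 0 or tau <= 0:
        return np.inf
    return -np.sum(exgauss_logpdf(rt, mu, sigma, tau))

# Moment-based starting values, then continuous maximum likelihood (CML)
tau0 = 0.8 * np.std(rt)
mu0 = np.mean(rt) - tau0
sigma0 = np.sqrt(max(np.var(rt) - tau0 ** 2, 1e-6))
fit = minimize(nll, [mu0, sigma0, tau0], method="Nelder-Mead",
               options={"maxiter": 3000})
mu_hat, sigma_hat, tau_hat = fit.x
```

With a fixed lower bound the fit is well behaved, as here; the failures the abstract describes arise when the lower bound itself depends on a parameter being estimated.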

11.
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best‐fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

12.
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random effects (e.g., τ00, τ11) and of the standard errors of the corresponding growth parameter estimates (e.g., SE(β0), SE(β1)). Overestimates of the standard errors led to lower statistical power in tests of the growth parameters. An unstructured Σ matrix under the mixed model framework generally led to underestimates of the standard errors of the growth parameter estimates. Underestimates of the standard errors led to inflation of the Type I error rate in tests of the growth parameters. Implications of the compensatory relationship between the random effects of the growth parameters and the longitudinal error structure for model specification are discussed.

13.
Estimation of psychometric functions from adaptive tracking procedures.
Because adaptive tracking procedures are designed to avoid stimulus levels far from a target threshold value, the psychometric function constructed from the trial-by-trial data in the track may be accurate near the target level but a poor reflection of performance at levels far removed from the target. A series of computer simulations was undertaken to assess the reliability and accuracy of psychometric functions generated from data collected in up-down adaptive tracking procedures. Estimates of psychometric function slopes were obtained from trial-by-trial data in simulated adaptive tracks and compared with the true characteristics of the functions used to generate the tracks. Simulations were carried out for three psychophysical procedures and two target performance levels, with tracks generated by psychometric functions with three different slopes. The functions reconstructed from the tracking data were, for the most part, accurate reflections of the true generating functions when at least 200 trials were included in the tracks. However, for 50- and 100-trial tracks, slope estimates were biased high for all simulated experimental conditions. Correction factors for slope estimates from these tracks are presented. There was no difference in the accuracy and reliability of slope estimation due to target level for the adaptive track, and only minor differences due to psychophysical procedure. It is recommended that, if both the threshold and the slope of psychometric functions are to be estimated from the trial-by-trial tracking data, at least 100 trials should be included in the tracks, and a three- or four-alternative forced-choice procedure should be used. However, good estimates can also be obtained using the two-alternative forced-choice procedure or fewer than 100 trials if appropriate corrections for bias are applied.

14.
Recent studies have reported that flanking stimuli broaden the psychometric function and lower detection thresholds. In the present study, we measured psychometric functions for detection and discrimination with and without flankers to investigate whether these effects occur throughout the contrast continuum. Our results confirm that lower detection thresholds with flankers are accompanied by broader psychometric functions. Psychometric functions for discrimination reveal that discrimination thresholds with and without flankers are similar across standard levels, and that the broadening of psychometric functions with flankers disappears as standard contrast increases, to the point that psychometric functions at high standard levels are virtually identical with or without flankers. Threshold-versus-contrast (TvC) curves with flankers differ from TvC curves without flankers only in occasional shallower dippers and lower branches on the left of the dipper, and they run virtually superimposed at high standard levels. We discuss differences between our results and others in the literature and argue that they are likely attributable to the differential vulnerability of alternative psychophysical procedures to the effects of presentation order. We show that different models of flanker facilitation can fit the data equally well, which stresses that success at fitting a model does not by itself validate the model.

15.
This article uses Monte Carlo techniques to examine the effect of heterogeneity of variance in multilevel analyses in terms of relative bias, coverage probability, and root mean square error (RMSE). For all simulated data sets, the parameters were estimated using the restricted maximum-likelihood (REML) method, both assuming homogeneity and incorporating heterogeneity into the multilevel models. We find that (a) the estimates for the fixed parameters are unbiased, but the associated standard errors are frequently biased when heterogeneity is ignored; by contrast, the standard errors of the fixed effects are almost always accurate when heterogeneity is considered; (b) the estimates for the random parameters are slightly overestimated; (c) both the homogeneous and heterogeneous models produce standard errors of the variance component estimates that are underestimated; however, taking heterogeneity into account, the REML estimates give correct standard errors at the lowest level and less underestimated standard errors at the highest level; and (d) from the RMSE point of view, REML accounting for heterogeneity outperforms REML assuming homogeneity, with a particularly marked improvement for the fixed parameters. Based on this, we conclude that the solution presented can be uniformly adopted. We illustrate the process using a real dataset.

16.
When a theoretical psychometric function is fitted to experimental data (as in the obtaining of a psychophysical threshold), maximum-likelihood or probit methods are generally used. In the present paper, the behavior of these curve-fitting methods is studied for the special case of forced-choice experiments, in which the probability of a subject's making a correct response by chance is not zero. A mathematical investigation of the variance of the threshold and slope estimators shows that, in this case, the accuracy of the methods is much worse, and their sensitivity to the way data are sampled is greater, than in the case in which chance level is zero. Further, Monte Carlo simulations show that, in practical situations in which only a finite number of observations are made, the mean threshold and slope estimates are significantly biased. The amount of bias depends on the curve-fitting method and on the range of intensity values, but it is always greater in forced-choice situations than when chance level is zero.

17.
Queen’s University, Kingston, Ontario, Canada
We introduce and evaluate via a Monte Carlo study a robust new estimation technique that fits distribution functions to grouped response time (RT) data, where the grouping is determined by sample quantiles. The new estimator, quantile maximum likelihood (QML), is more efficient and less biased than the best alternative estimation technique when fitting the commonly used ex-Gaussian distribution. Limitations of the Monte Carlo results are discussed and guidance is provided for the practical application of the new technique. Because QML estimation can be computationally costly, we make fast open-source code for fitting available that can be easily modified.

18.
Multilevel factor analysis models are widely used in the social sciences to account for heterogeneity in mean structures. In this paper we extend previous work on multilevel models to account for general forms of heterogeneity in confirmatory factor analysis models. We specify various models of mean and covariance heterogeneity in confirmatory factor analysis and develop Markov chain Monte Carlo (MCMC) procedures to perform Bayesian inference, model checking, and model comparison. We test our methodology using synthetic data and data from a consumption emotion study. The results from synthetic data show that our Bayesian models perform well in recovering the true parameters and selecting the appropriate model. More importantly, the results clearly illustrate the consequences of ignoring heterogeneity. Specifically, we find that ignoring heterogeneity can lead to sign reversals of the factor covariances, inflation of factor variances, and underappreciation of uncertainty in parameter estimates. The results from the emotion study show that subjects vary both in means and in covariances. Thus traditional psychometric methods cannot fully capture the heterogeneity in our data.


20.
Latent class analysis (LCA) provides a means of identifying a mixture of subgroups in a population measured by multiple categorical indicators. Latent transition analysis (LTA) is a type of LCA that facilitates addressing research questions concerning stage-sequential change over time in longitudinal data. Both approaches have been used with increasing frequency in the social sciences. The objective of this article is to illustrate data augmentation (DA), a Markov chain Monte Carlo procedure that can be used to obtain parameter estimates and standard errors for LCA and LTA models. By use of DA it is possible to construct hypothesis tests concerning not only standard model parameters but also combinations of parameters, affording tremendous flexibility. DA is demonstrated with an example involving tests of ethnic differences, gender differences, and an Ethnicity x Gender interaction in the development of adolescent problem behavior.
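The DA alternation itself can be sketched in a toy setting. The sampler below is not the LCA/LTA software the article describes; it is a minimal two-class LCA with binary indicators (all sample sizes, item probabilities, and priors are made-up, and the asymmetric start is a crude fix for label switching) that alternates (1) sampling each respondent's latent class given the parameters with (2) sampling the parameters from their conjugate Beta posteriors given the labels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate two latent classes measured by 6 binary indicators (illustrative)
n, J = 600, 6
true_pi = 0.6                                  # prevalence of class 0
true_p = np.array([[0.9] * J, [0.2] * J])      # item-endorsement probs by class
z_true = (rng.random(n) > true_pi).astype(int)
y = (rng.random((n, J)) < true_p[z_true]).astype(int)

pi_ = 0.5
p_ = np.array([[0.7] * J, [0.3] * J], dtype=float)  # asymmetric start fixes labels
draws = []
for it in range(600):
    # (1) Augmentation step: sample each respondent's class label
    ll0 = np.log(pi_) + (y * np.log(p_[0]) + (1 - y) * np.log(1 - p_[0])).sum(1)
    ll1 = np.log(1 - pi_) + (y * np.log(p_[1]) + (1 - y) * np.log(1 - p_[1])).sum(1)
    prob0 = 1.0 / (1.0 + np.exp(np.clip(ll1 - ll0, -700, 700)))
    z = (rng.random(n) > prob0).astype(int)
    # (2) Posterior step: conjugate Beta draws under uniform priors
    n0 = int(np.sum(z == 0))
    pi_ = rng.beta(1 + n0, 1 + n - n0)
    for c in (0, 1):
        yc = y[z == c]
        p_[c] = rng.beta(1 + yc.sum(0), 1 + len(yc) - yc.sum(0))
    if it >= 200:                               # discard burn-in
        draws.append(pi_)
post_mean_pi = float(np.mean(draws))
```

The retained draws give posterior means and standard deviations for any parameter, or for any combination of parameters, which is the flexibility for hypothesis testing the abstract emphasizes.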
