Similar Literature
Found 20 similar documents (search time: 15 ms)
1.
Luce introduced a family of learning models in which response probabilities are a function of some underlying continuous real variable. This variable can be represented as an additive function of the parameters of these learning models. Additive learning models have also been applied to signal-detection data. There are a wide variety of problems of contemporary psychophysics for which the assumption of a continuum of sensory states seems appropriate, and this family of learning models has a natural extension to such problems. One potential difficulty in the application of such models to data is that estimation of parameters requires the use of numerical procedures when the method of maximum likelihood is used. Given a likelihood function generated from an additive model, this paper gives sufficient conditions for log-concavity and strict log-concavity of the likelihood function. If a likelihood function is strictly log-concave, then any local maximum is a unique global maximum, and any solution to the likelihood equations is the unique global maximum point. These conditions are quite easy to evaluate in particular cases, and hence, the results should be quite useful. Some applications to Luce's beta model and to the signal-detection learning models of Dorfman and Biderman are presented.
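As a minimal numerical illustration of why strict log-concavity matters for the numerical procedures mentioned above, the Python sketch below uses a single-parameter Bernoulli model with a logistic response function (a hypothetical stand-in for the additive models discussed, with illustrative counts): the log-likelihood is checked for strict concavity by second differences, so any local maximum found numerically is the unique global maximum.

```python
import math

def loglik(theta, successes, failures):
    # Bernoulli log-likelihood with a logistic response probability;
    # the response probability depends on the single parameter theta
    p = 1.0 / (1.0 + math.exp(-theta))
    return successes * math.log(p) + failures * math.log(1.0 - p)

# Evaluate on a grid and verify strict concavity via second differences
thetas = [i * 0.1 for i in range(-50, 51)]
vals = [loglik(t, 7, 3) for t in thetas]
second_diffs = [vals[i - 1] - 2 * vals[i] + vals[i + 1]
                for i in range(1, len(vals) - 1)]
assert all(d < 0 for d in second_diffs)  # strictly log-concave likelihood

# Hence a grid (or hill-climbing) maximum is the unique global maximum,
# here close to the closed-form MLE log(7/3)
best = max(thetas, key=lambda t: loglik(t, 7, 3))
```

For this model the concavity can also be confirmed analytically, since the second derivative is -(successes + failures)·p·(1 - p) < 0 everywhere.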

2.
Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived, and proved to be less biased than maximum likelihood estimation (MLE) with the same asymptotic variance and normal distribution. WLE removes the first order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.

3.
Gert Storms, Psychometrika, 1995, 60(2): 247-258
A Monte Carlo study was conducted to investigate the robustness of the assumed error distribution in maximum likelihood estimation models for multidimensional scaling. Data sets generated according to the lognormal, the normal, and the rectangular distribution were analysed with the lognormal error model in Ramsay's MULTISCALE program package. The results show that violations of the assumed error distribution have virtually no effect on the estimated distance parameters. In a comparison among several dimensionality tests, the corrected version of the χ² test, as proposed by Ramsay, yielded the best results, and turned out to be quite robust against violations of the error model.

4.
Deterministic and probabilistic additive learning models for signal detection/recognition replace the fixed criterion of classical detection models with one which shifts from trial to trial in the light of the preceding trial events. Data from a sinusoid-in-noise detection task without feedback and an auditory amplitude recognition task with feedback are used to test these models with respect to their predictions about asymptotic response frequency and, where possible, by likelihood ratio tests. These and some previous experiments show that, whether or not feedback is given, subjects do not universally probability match, overmatch, undermatch, or keep response probability constant over discriminability, so that none of the testable special models can fit more than a proportion of subjects. The likelihood ratio tests confirm this conclusion for the special deterministic models. The six-parameter general deterministic model does nonsignificantly better than an ad hoc six-parameter response runs model in fitting the recognition data and significantly better than the five-parameter memory recognition model of Tanner, Rauk, and Atkinson (1970). Monte Carlo methods are used to confirm the applicability of asymptotic response frequency results to practically feasible sample sizes.

5.
A multi‐group factor model is suitable for data originating from different strata. However, it often requires a relatively large sample size to avoid numerical issues such as non‐convergence and non‐positive definite covariance matrices. An alternative is to pool data from different groups in which a single‐group factor model is fitted to the pooled data using maximum likelihood. In this paper, properties of pseudo‐maximum likelihood (PML) estimators for pooled data are studied. The pooled data are assumed to be normally distributed from a single group. The resulting asymptotic efficiency of the PML estimators of factor loadings is compared with that of the multi‐group maximum likelihood estimators. The effect of pooling is investigated through a two‐group factor model. The variances of factor loadings for the pooled data are underestimated under the normal theory when error variances in the smaller group are larger. Underestimation is due to dependence between the pooled factors and pooled error terms. Small‐sample properties of the PML estimators are also investigated using a Monte Carlo study.

6.
Recently, several authors have proposed the use of random graph theory to evaluate the adequacy of cluster analysis results. One such statistic is the minimum number of lines (edges) V needed to connect a random graph. Erdős and Rényi derived asymptotic distributions of V. Schultz and Hubert showed in a Monte Carlo study that the asymptotic approximations are poor for small sample sizes n typically used in data analysis applications. In this paper the exact probability distribution of V is given and the distributions for some values of n are tabulated and compared with existing Monte Carlo approximations.
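The statistic V can also be sampled directly. The Python sketch below (a hypothetical illustration, not the paper's tabulation method) simulates the Erdős-Rényi random graph process for a small n of the kind the paper targets: distinct edges are added in random order, connectivity is tracked with a union-find structure, and the number of edges at the moment of connection is recorded.

```python
import random

def min_edges_to_connect(n, rng):
    # Sample V: add distinct random edges one at a time and return the
    # number needed for the graph on n vertices to become connected
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(edges)
    components = n
    for count, (i, j) in enumerate(edges, start=1):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == 1:
                return count
    return len(edges)  # not reached for n >= 2

rng = random.Random(1)
n = 10
samples = [min_edges_to_connect(n, rng) for _ in range(2000)]
mean_v = sum(samples) / len(samples)
# V is at least n - 1 (a spanning tree) and at most n(n - 1)/2
```

For n this small, the simulated (or exact) distribution of V is the appropriate reference, which is precisely the paper's point about the poor asymptotic approximations.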

7.
Formulas for the asymptotic biases of the parameter estimates in structural equation models are provided in the case of the Wishart maximum likelihood estimation for normally and nonnormally distributed variables. When multivariate normality is satisfied, considerable simplification is obtained for the models of unstandardized variables. Formulas for the models of standardized variables are also provided. Numerical examples with Monte Carlo simulations in factor analysis show the accuracy of the formulas and suggest the asymptotic robustness of the asymptotic biases with normality assumption against nonnormal data. Some relationships between the asymptotic biases and other asymptotic values are discussed. The author is indebted to the editor and anonymous reviewers for their comments, corrections, and suggestions on this paper, and to Yutaka Kano for discussion on biases.

8.
Testing the fit of finite mixture models is a difficult task, since asymptotic results on the distribution of likelihood ratio statistics do not hold; for this reason, alternative statistics are needed. This paper applies the π* goodness-of-fit statistic to finite mixture item response models. The π* statistic assumes that the population is composed of two subpopulations: those that follow a parametric model and a residual group outside the model; π* is defined as the proportion of the population in the residual group. Here the population is divided into two or more classes, several of which follow an item response model, alongside a residual group. The paper presents maximum likelihood algorithms for estimating the item parameters, the class probabilities, and π*. The paper also includes a simulation study on goodness of recovery for the two‐ and three‐parameter logistic models and an example with real data from a multiple choice test.

9.
This paper proposes test statistics based on the likelihood ratio principle for testing equality of proportions in correlated data with additional incomplete samples. Powers of these tests are compared through Monte Carlo simulation with those of tests proposed recently by Ekbohm (based on an unbiased estimator) and Campbell (based on a Pearson chi-squared-type statistic). Even though tests based on the maximum likelihood principle are theoretically expected to be superior to others, at least asymptotically, results from our simulations show that the gain in power may be only slight.

10.
The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ² methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
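A minimal Python sketch of the lapse-rate idea (an illustration, not the authors' released software): a 2AFC logistic psychometric function with guessing rate γ = 0.5 and a lapse parameter λ constrained to a small interval, fitted by brute-force constrained maximum likelihood. The data set, grid resolution, and the [0, 0.06] bound on λ are all illustrative assumptions.

```python
import math

def psi(x, alpha, beta, gamma, lam):
    # Psychometric function: guessing rate gamma, lapse rate lam, and a
    # logistic core with threshold alpha and slope beta
    F = 1.0 / (1.0 + math.exp(-beta * (x - alpha)))
    return gamma + (1.0 - gamma - lam) * F

def neg_loglik(params, data, gamma):
    alpha, beta, lam = params
    nll = 0.0
    for x, k, n in data:  # stimulus level, number correct, number of trials
        p = psi(x, alpha, beta, gamma, lam)
        nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
    return nll

# Hypothetical 2AFC data: performance saturates below 100% at the highest
# levels, the signature of stimulus-independent errors (lapses)
data = [(1, 27, 50), (2, 33, 50), (3, 42, 50), (4, 48, 50), (5, 48, 50)]

# Constrained ML by brute-force grid search; the lapse rate is restricted
# to [0, 0.06] rather than fixed at zero
best = min(
    ((a / 10.0, b / 10.0, l / 100.0)
     for a in range(10, 41) for b in range(5, 31) for l in range(0, 7)),
    key=lambda p: neg_loglik(p, data, gamma=0.5),
)
```

Fixing λ = 0 forces the logistic core to absorb the saturation at 96% correct, which is the source of the threshold and slope biases the abstract describes.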

11.
Nonlinear common factor models with polynomial regression functions, including interaction terms, are fitted by simultaneously estimating the factor loadings and common factor scores, using maximum-likelihood-ratio and ordinary-least-squares methods. A Monte Carlo study gives support to a conjecture about the form of the distribution of the likelihood-ratio criterion. The research reported in this paper was partly supported by Natural Sciences and Engineering Research Grant No. A6346.

12.
Networks of relationships between individuals influence individual and collective outcomes and are therefore of interest in social psychology, sociology, the health sciences, and other fields. We consider network panel data, a common form of longitudinal network data. In the framework of estimating functions, which includes the method of moments as well as the method of maximum likelihood, we propose score-type tests. These share with other score-type tests, including the classic goodness-of-fit test of Pearson, the property that they are based on comparing the observed value of a function of the data to values predicted by a model. The score-type tests are most useful in forward model selection and as tests of homogeneity assumptions, and possess substantial computational advantages. We derive one-step estimators which are useful as starting values of parameters in forward model selection and therefore complement the usefulness of the score-type tests. The finite-sample behaviour of the score-type tests is studied by Monte Carlo simulation and compared to t-type tests.

13.
The absence of operational disaggregate lexicographic decision models and Tversky's observation that choice behavior is often inconsistent, hierarchical, and context dependent motivate the development of a maximum likelihood hierarchical (MLH) choice model. This new disaggregate choice model requires few assumptions and accommodates the three aspects of choice behavior noted by A. Tversky (1972, Journal of Mathematical Psychology, 9, 341–367). The model has its foundation in a prototype model developed by the authors. Unlike the deterministic prototype, however, MLH is a probabilistic model which generates maximum likelihood estimators of the aggregate “cutoff values.” The model is formulated as a concave programming problem whose solutions are therefore globally optimal. Finally, the model is applied to data from three separate studies where it is demonstrated to have superior performance over the prototype model in its predictive performance.

14.
Queen’s University, Kingston, Ontario, Canada. We introduce and evaluate via a Monte Carlo study a robust new estimation technique that fits distribution functions to grouped response time (RT) data, where the grouping is determined by sample quantiles. The new estimator, quantile maximum likelihood (QML), is more efficient and less biased than the best alternative estimation technique when fitting the commonly used ex-Gaussian distribution. Limitations of the Monte Carlo results are discussed and guidance is provided for the practical application of the new technique. Because QML estimation can be computationally costly, we make fast open-source code for fitting available that can be easily modified.
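A compact Python illustration of the quantile-grouping idea behind QML (a sketch under illustrative assumptions, not the authors' released code): RTs are grouped at sample quantiles, and a candidate set of ex-Gaussian parameters is scored by the multinomial log-likelihood of the bin counts under the inter-quantile probabilities implied by the ex-Gaussian CDF. The quantile set, sample size, and parameter values are all hypothetical.

```python
import math
import random

def phi(z):
    # Standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def exgauss_cdf(x, mu, sigma, tau):
    # CDF of the ex-Gaussian: Normal(mu, sigma) plus Exponential with mean tau
    u = (x - mu) / sigma
    return phi(u) - math.exp(sigma ** 2 / (2 * tau ** 2) - (x - mu) / tau) * phi(u - sigma / tau)

def qml_loglik(rts, mu, sigma, tau, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # QML objective: group the RTs at sample quantiles and score the bin
    # counts against the bin probabilities implied by the candidate parameters
    s = sorted(rts)
    edges = [s[int(p * len(s))] for p in probs]
    cdf = [0.0] + [exgauss_cdf(e, mu, sigma, tau) for e in edges] + [1.0]
    counts, below = [], 0
    for e in edges:
        at_or_below = sum(1 for r in rts if r <= e)
        counts.append(at_or_below - below)
        below = at_or_below
    counts.append(len(rts) - below)
    return sum(k * math.log(cdf[i + 1] - cdf[i]) for i, k in enumerate(counts) if k)

rng = random.Random(7)
# Simulated RTs in seconds from an ex-Gaussian with mu=0.4, sigma=0.05, tau=0.2
rts = [rng.gauss(0.4, 0.05) + rng.expovariate(1.0 / 0.2) for _ in range(500)]
good = qml_loglik(rts, 0.4, 0.05, 0.2)   # generating parameters
bad = qml_loglik(rts, 0.6, 0.05, 0.05)   # misspecified parameters
```

Maximizing this objective over (mu, sigma, tau) with any optimizer yields the QML estimates; here only the objective itself is sketched, and the generating parameters score far better than a misspecified set.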

15.
田伟, 辛涛, 康春花, 《心理科学进展》 (Advances in Psychological Science), 2014, 22(6): 1036-1046
In psychological and educational measurement, parameter estimation methods for item response theory (IRT) models are a basic tool of both theoretical research and practical application. Recently, with the continuing extension of IRT models and the inherent problems of the EM (expectation-maximization) algorithm itself, improving and developing parameter estimation methods has become especially important. This article reviews the development of marginal maximum likelihood estimation in IRT models and characterizes its stages: a joint maximum likelihood estimation stage, a deterministic latent-trait data augmentation stage, and a stochastic latent-trait data augmentation stage, with emphasis on the underlying idea of latent-trait data augmentation. The EM algorithm and the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, as different latent-trait data augmentation methods, both represent conceptual advances in marginal maximum likelihood estimation. Parameter estimation methods based on latent-trait data augmentation continue to develop and improve.

16.
Structural equation modeling is a well-known technique for studying relationships among multivariate data. In practice, high dimensional nonnormal data with small to medium sample sizes are very common, and large sample theory, on which almost all modeling statistics are based, cannot be invoked for model evaluation with test statistics. The most natural method for nonnormal data, the asymptotically distribution free procedure, is not defined when the sample size is less than the number of nonduplicated elements in the sample covariance. Since normal theory maximum likelihood estimation remains defined for intermediate to small sample size, it may be invoked but with the probable consequence of distorted performance in model evaluation. This article studies the small sample behavior of several test statistics that are based on the maximum likelihood estimator, but are designed to perform better with nonnormal data. We aim to identify statistics that work reasonably well for a range of small sample sizes and distribution conditions. Monte Carlo results indicate that Yuan and Bentler's recently proposed F-statistic performs satisfactorily.

17.
Asymptotic distributions of the estimators of communalities are derived for the maximum likelihood method in factor analysis. It is shown that the common practice of equating the asymptotic standard error of the communality estimate to the unique variance estimate is correct for the standardized communality but not for the unstandardized communality. In a Monte Carlo simulation the accuracy of the normal approximation to the distributions of the estimators is assessed when the sample size is 150 or 300. This study was carried out in part under the ISM Cooperative Research Program (90-ISM-CRP-9).

18.
Generalized fiducial inference (GFI) has been proposed as an alternative to likelihood-based and Bayesian inference in mainstream statistics. Confidence intervals (CIs) can be constructed from a fiducial distribution on the parameter space in a fashion similar to those used with a Bayesian posterior distribution. However, no prior distribution needs to be specified, which renders GFI more suitable when no a priori information about model parameters is available. In the current paper, we apply GFI to a family of binary logistic item response theory models, which includes the two-parameter logistic (2PL), bifactor and exploratory item factor models as special cases. Asymptotic properties of the resulting fiducial distribution are discussed. Random draws from the fiducial distribution can be obtained by the proposed Markov chain Monte Carlo sampling algorithm. We investigate the finite-sample performance of our fiducial percentile CI and two commonly used Wald-type CIs associated with maximum likelihood (ML) estimation via Monte Carlo simulation. The use of GFI in high-dimensional exploratory item factor analysis is illustrated by the analysis of a set of Eysenck Personality Questionnaire data.

19.
Three experiments were done to test the empirical relevance of Fishburn's (1967) additivity axiom, which says that people should be indifferent between pairs of gambles which satisfy certain conditions specified in the axiom. Each of the experiments consisted of two parts. In the first part, subjects had to evaluate consequences which were used in the second part as possible outcomes in a gamble. In the second part, subjects had to make choices among pairs of gambles. The experiments differed with respect to the kinds of consequences and kinds of subjects used. Additivity analysis was applied to the data of the first part of each experiment, using a conjoint measurement model. A Monte Carlo study is included, which provides some hints for the evaluation of the stress coefficients obtained after applying additivity analysis to the empirical data matrices. The data of the second part of each experiment are discussed with respect to their relevance for Fishburn's (1967) additivity axiom. The axiom was not strongly supported by the data, except in a very restricted situation.

20.
Autocorrelation and partial autocorrelation, which provide a mathematical tool to understand repeating patterns in time series data, are often used to facilitate the identification of model orders of time series models (e.g., moving average and autoregressive models). Asymptotic methods for testing autocorrelation and partial autocorrelation, such as the 1/T approximation method and Bartlett's formula method, may fail in finite samples and are vulnerable to non-normality. Resampling techniques such as the moving block bootstrap and the surrogate data method are competitive alternatives. In this study, we use a Monte Carlo simulation study and a real data example to compare asymptotic methods with the aforementioned resampling techniques. For each resampling technique, we consider both the percentile method and the bias-corrected and accelerated method for interval construction. Simulation results show that the surrogate data method with percentile intervals yields better performance than the other methods. An R package pautocorr is used to carry out tests evaluated in this study.
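As a sketch of one of the resampling techniques compared above, the Python code below (illustrative, not the pautocorr package) builds a moving block bootstrap percentile interval for the lag-1 autocorrelation of a simulated AR(1) series. Overlapping blocks preserve the short-range dependence that an ordinary i.i.d. bootstrap would destroy; the block length and replication count are illustrative choices.

```python
import random

def acf1(x):
    # Lag-1 sample autocorrelation
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def moving_block_bootstrap_ci(x, block_len, reps, alpha, rng):
    # Resample overlapping blocks with replacement, recompute acf1 on each
    # pseudo-series, and return a percentile confidence interval
    n = len(x)
    blocks = [x[i:i + block_len] for i in range(n - block_len + 1)]
    stats = []
    for _ in range(reps):
        resampled = []
        while len(resampled) < n:
            resampled.extend(rng.choice(blocks))
        stats.append(acf1(resampled[:n]))
    stats.sort()
    lo = stats[int((alpha / 2) * reps)]
    hi = stats[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

rng = random.Random(3)
# Simulated AR(1) series with autoregressive coefficient 0.5
x, prev = [], 0.0
for _ in range(300):
    prev = 0.5 * prev + rng.gauss(0, 1)
    x.append(prev)
lo, hi = moving_block_bootstrap_ci(x, block_len=20, reps=999, alpha=0.05, rng=rng)
```

With a true lag-1 autocorrelation of 0.5, the interval should exclude zero; swapping the percentile construction for a bias-corrected and accelerated one would give the other interval type the study considers.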
