Similar Documents
20 similar documents found (search time: 31 ms)
1.
Dorfman and Biderman evaluated an additive-operator learning model and some special cases of this model on data from a signal-detection experiment. They found that Kac's pure error-correction model gave the poorest fit of the special models when the predictions were generated from the maximum likelihood estimates and the initial cutoffs were set at an a priori value rather than estimated. First, this paper presents tests of an asymptotic theorem by Norman, which provide strong support for Kac's model. On the final 100 trials, every subject but one gave probability matching, and the response proportions appropriately normed were approximately normally distributed with variance π(1 − π). Further analyses of the Dorfman-Biderman data based upon maximum likelihood and likelihood-ratio tests suggest that Kac's model gives a relatively good, but imperfect fit to the data. Some possible explanations for the apparent contradiction between the results of these new analyses and the original findings of Dorfman and Biderman were explored. The investigations led to the proposal that there may be nonsystematic, random drifts in the decision criterion after correct responses as well as after errors. The hypothesis gives a minor modification of the conclusions from Norman's theorem for Kac's model. It gives asymptotic probability matching for every subject, but a larger asymptotic variance than π(1 − π), which agrees with the data. The paper also presents good Monte Carlo justification for the use of maximum likelihood and likelihood-ratio tests with these additive learning models. Results from Thomas' nonparametric test of error correction are presented, which are inconclusive. Computation of Thomas' p statistic on the Monte Carlo simulations showed that it is quite variable and insensitive to small deviations from error correction.
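The error-correction dynamics above are easy to probe by simulation. Here is a minimal Monte Carlo sketch of Kac's pure error-correction rule, assuming equal-variance normal signal and noise distributions and a fixed criterion step theta; the function name and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_kac(n_trials=2000, d_prime=1.0, pi_signal=0.5, theta=0.05):
    """Kac's pure error-correction model: the decision criterion moves
    only after an error -- upward after a false alarm, downward after a miss."""
    c = 0.0                                  # initial cutoff, set a priori
    responses = []
    for _ in range(n_trials):
        signal = rng.random() < pi_signal
        x = rng.normal(d_prime if signal else 0.0, 1.0)   # observation
        say_signal = x > c
        responses.append(say_signal)
        if say_signal and not signal:        # false alarm: raise the cutoff
            c += theta
        elif signal and not say_signal:      # miss: lower the cutoff
            c -= theta
    return np.array(responses)

pi = 0.7
final = simulate_kac(pi_signal=pi)[-100:]
print(final.mean(), "vs", pi)    # probability matching on the final 100 trials
```

Replicating this over many simulated subjects allows the empirical variance of the normed response proportions to be compared with the π(1 − π) benchmark from Norman's theorem.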

2.
This paper shows how to define probability distributions over linguistically realistic syntactic structures in a way that permits us to define language learning and language comprehension as statistical problems. We demonstrate our approach using lexical‐functional grammar (LFG), but our approach generalizes to virtually any linguistic theory. Our probabilistic models are maximum entropy models. In this paper we concentrate on statistical inference procedures for learning the parameters that define these probability distributions. We point out some of the practical problems that make straightforward ways of estimating these distributions infeasible, and develop a “pseudo‐likelihood” estimation procedure that overcomes some of these problems. This method raises interesting questions concerning the nature of the data available to a language learner and the modularity of language learning and processing.

3.
Ogilvie and Creelman have recently attempted to develop maximum likelihood estimates of the parameters of signal-detection theory from the data of yes-no ROC curves. Their method involved the assumption of a logistic distribution rather than the normal distribution in order to make the mathematics more tractable. The present paper presents a method of obtaining maximum likelihood estimates of these parameters using the assumption of underlying normal distributions. This research was supported in part by grants from the National Institutes of Health, MH-10449-02, and from the National Science Foundation, NSF GS-1466.
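For the simplest yes-no case (a single criterion and equal-variance normal distributions), the maximum likelihood estimates can be sketched as below. This illustrates the general idea rather than the rating-ROC procedure of the paper, and the counts are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, n_hit, n_sig, n_fa, n_noise):
    """Binomial log-likelihood of yes-no data under equal-variance normal
    signal-detection theory: P(hit) = Phi(d' - c), P(false alarm) = Phi(-c)."""
    d, c = params
    p_hit, p_fa = np.clip([norm.cdf(d - c), norm.cdf(-c)], 1e-10, 1 - 1e-10)
    ll = (n_hit * np.log(p_hit) + (n_sig - n_hit) * np.log(1 - p_hit)
          + n_fa * np.log(p_fa) + (n_noise - n_fa) * np.log(1 - p_fa))
    return -ll

# 70 hits in 100 signal trials, 20 false alarms in 100 noise trials
res = minimize(neg_log_lik, x0=[1.0, 0.5], args=(70, 100, 20, 100))
print(res.x)    # ML estimates of (d', c)
```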

4.
The procedure of fitting parameterized models to experimental data is that of extremalizing a statistically meaningful scalar-valued vector function. The existence of multiple local extrema can greatly complicate the search for the global solution. Sufficient conditions for uniqueness of the parameter estimates are usually determined from the convexity of the criterion surface: the convexity properties are determined by the statistical criterion, the structure of the model, the underlying distribution, and the observations (data). In this paper, we seek the combinations of criteria, models, and distributions which yield sufficient conditions for unique parameter estimates regardless of the observed binary-response data values. Under mild sufficient conditions usually satisfied in practice, the Maximum Likelihood, Minimum Chi Square, and Minimum Transform Chi Square criteria are convex functions when the parameters appear linearly. These results are applied to equal-variance models of signal detection/recognition, sequential response, and additive learning models with implications on the experimental design. Unequal-variance models and models of discrete-sensory processing (rectilinear ROC curves) lead to nonconvex criteria for some observations (saddlepoints are demonstrated). Although convexity cannot be assured for these cases, the results suggest an efficient search procedure in a lower dimensional subspace to find global extrema. The extension of these results to more than two response levels is discussed.

5.
In the present paper a model for describing dynamic processes is constructed by combining the common Rasch model with the concept of structurally incomplete designs. This is accomplished by mapping each item on a collection of virtual items, one of which is assumed to be presented to the respondent dependent on the preceding responses and/or the feedback obtained. It is shown that, in the case of subject control, no unique conditional maximum likelihood (CML) estimates exist, whereas marginal maximum likelihood (MML) proves a suitable estimation procedure. A hierarchical family of dynamic models is presented, and it is shown how to test special cases against more general ones. Furthermore, it is shown that the model presented is a generalization of a class of mathematical learning models, known as Luce's beta-model.

6.
At least two types of models, the vector model and the unfolding model, can be used for the analysis of dichotomous choice data taken from, for example, the pick any/n method. The previous vector threshold models have a difficulty with estimation of the nuisance parameters such as the individual vectors and thresholds. This paper proposes a new probabilistic vector threshold model, where, unlike the former vector models, the angle that defines an individual vector is a random variable, and where the marginal maximum likelihood estimation method using the expectation-maximization algorithm is adopted to avoid incidental parameters. The paper also attempts to discuss which of the two models is more appropriate to account for dichotomous choice data. Two sets of dichotomous choice data are analyzed by the model.

7.
Measurement invariance is a fundamental assumption in item response theory models, where the relationship between a latent construct (ability) and observed item responses is of interest. Violation of this assumption would render the scale misinterpreted or cause systematic bias against certain groups of persons. While a number of methods have been proposed to detect measurement invariance violations, they typically require advance definition of problematic item parameters and respondent grouping information. However, these pieces of information are typically unknown in practice. As an alternative, this paper focuses on a family of recently proposed tests based on stochastic processes of casewise derivatives of the likelihood function (i.e., scores). These score-based tests only require estimation of the null model (when measurement invariance is assumed to hold), and they have been previously applied in factor-analytic, continuous data contexts as well as in models of the Rasch family. In this paper, we aim to extend these tests to two-parameter item response models, with strong emphasis on pairwise maximum likelihood. The tests’ theoretical background and implementation are detailed, and the tests’ abilities to identify problematic item parameters are studied via simulation. An empirical example illustrating the tests’ use in practice is also provided.

8.
Group-level variance estimates of zero often arise when fitting multilevel or hierarchical linear models, especially when the number of groups is small. For situations where zero variances are implausible a priori, we propose a maximum penalized likelihood approach to avoid such boundary estimates. This approach is equivalent to estimating variance parameters by their posterior mode, given a weakly informative prior distribution. By choosing the penalty from the log-gamma family with shape parameter greater than 1, we ensure that the estimated variance will be positive. We suggest a default log-gamma(2,λ) penalty with λ→0, which ensures that the maximum penalized likelihood estimate is approximately one standard error from zero when the maximum likelihood estimate is zero, thus remaining consistent with the data while being nondegenerate. We also show that the maximum penalized likelihood estimator with this default penalty is a good approximation to the posterior median obtained under a noninformative prior. Our default method provides better estimates of model parameters and standard errors than the maximum likelihood or the restricted maximum likelihood estimators. The log-gamma family can also be used to convey substantive prior information. In either case—pure penalization or prior information—our recommended procedure gives nondegenerate estimates and in the limit coincides with maximum likelihood as the number of groups increases.
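A minimal sketch of the penalized estimator in a meta-analysis-style setting, assuming each group mean is observed with a known standard error and that the log-gamma(2,λ) penalty is placed on the group-level standard deviation τ; the data values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Few groups: plain ML for the between-group SD tau often lands on zero here.
y = np.array([0.2, -0.1, 0.3, 0.1, -0.2])   # observed group means
se = np.full_like(y, 0.4)                   # known sampling standard errors
lam = 1e-8                                  # log-gamma(2, lam) with lam -> 0

def neg_pen_log_lik(params):
    mu, log_tau = params
    tau = np.exp(log_tau)                   # keeps tau strictly positive
    ll = norm.logpdf(y, mu, np.sqrt(tau**2 + se**2)).sum()
    penalty = np.log(tau) - lam * tau       # log gamma(2, lam) density, up to a constant
    return -(ll + penalty)

res = minimize(neg_pen_log_lik, x0=[0.0, np.log(0.2)])
print(res.x[0], np.exp(res.x[1]))           # mu_hat and a nondegenerate tau_hat
```

Because log τ tends to minus infinity as τ approaches 0, the penalized optimum is pulled off the boundary even when the unpenalized ML estimate is exactly zero.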

9.
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by interpreting the EM algorithm as alternating minimization of the Kullback–Leibler divergence between two convex sets. It is shown that these conditions are satisfied by an unconstrained latent class model, yielding an optimal bound against which more highly constrained models may be compared.
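To illustrate the alternating-minimization view, here is a compact EM loop for an unconstrained latent class model with binary indicators (a special case of categorical ones); the implementation is a plausible sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def em_latent_class(X, n_classes, n_iter=200):
    """EM for an unconstrained latent class model with binary items.
    E-step and M-step alternately minimize a Kullback-Leibler divergence."""
    n, p = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)         # class weights
    rho = rng.uniform(0.3, 0.7, (n_classes, p))      # item probabilities per class
    for _ in range(n_iter):
        # E-step: posterior class memberships given current parameters
        log_w = np.log(pi) + X @ np.log(rho).T + (1 - X) @ np.log(1 - rho).T
        w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of the parameters
        pi = w.mean(axis=0)
        rho = np.clip((w.T @ X) / w.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, rho

X = rng.integers(0, 2, (500, 6)).astype(float)       # toy binary data
pi_hat, rho_hat = em_latent_class(X, n_classes=2)
print(pi_hat)
```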

10.
In this paper it is shown that under the random effects generalized partial credit model for the measurement of a single latent variable by a set of polytomously scored items, the joint marginal probability distribution of the item scores has a closed-form expression in terms of item category location parameters, parameters that characterize the distribution of the latent variable in the subpopulation of examinees with a zero score on all items, and item-scaling parameters. Due to this closed-form expression, all parameters of the random effects generalized partial credit model can be estimated using marginal maximum likelihood estimation without assuming a particular distribution of the latent variable in the population of examinees and without using numerical integration. Also due to this closed-form expression, new special cases of the random effects generalized partial credit model can be identified. In addition to these new special cases, a slightly more general model than the random effects generalized partial credit model is presented. This slightly more general model is called the extended generalized partial credit model. Attention is paid to maximum likelihood estimation of the parameters of the extended generalized partial credit model and to assessing the goodness of fit of the model using generalized likelihood ratio tests. Attention is also paid to person parameter estimation under the random effects generalized partial credit model. It is shown that expected a posteriori estimates can be obtained for all possible score patterns. A simulation study is carried out to show the usefulness of the proposed models compared to the standard models that assume normality of the latent variable in the population of examinees. In an empirical example, some of the procedures proposed are demonstrated.
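For reference, the generalized partial credit model's category probabilities for a single item form a softmax over cumulative sums of a(θ − b_j); the sketch below uses hypothetical parameter values.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """GPCM category probabilities for one item: b holds the category
    location parameters, a is the item slope; category 0 has an empty sum."""
    z = np.concatenate([[0.0], np.cumsum(a * (theta - np.asarray(b)))])
    z -= z.max()                     # numerical stability before exponentiation
    p = np.exp(z)
    return p / p.sum()

print(gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0]))   # four categories
```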

11.
Tversky (1972) has proposed a family of models for paired-comparison data that generalize the Bradley-Terry-Luce (BTL) model and can, therefore, apply to a diversity of situations in which the BTL model is doomed to fail. In this article, we present a Matlab function that makes it easy to specify any of these general models (EBA, Pretree, or BTL) and to estimate their parameters. The program eliminates the time-consuming task of constructing the likelihood function by hand for every single model. The usage of the program is illustrated by several examples. Features of the algorithm are outlined. The purpose of this article is to facilitate the use of probabilistic choice models in the analysis of data resulting from paired comparisons.
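A hedged Python analogue of what such a function does for the BTL special case (the article itself provides Matlab code, and EBA or Pretree would require additional aspect structure): maximize the paired-comparison likelihood P(i beats j) = v_i/(v_i + v_j), fixing one scale value for identification. The win counts are invented.

```python
import numpy as np
from scipy.optimize import minimize

# wins[i, j] = number of times option i was chosen over option j
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])

def neg_log_lik(u_free):
    """BTL likelihood with v = exp(u) and u[0] fixed at 0 for identification."""
    u = np.concatenate([[0.0], u_free])
    v = np.exp(u)
    P = v[:, None] / (v[:, None] + v[None, :])
    np.fill_diagonal(P, 1.0)         # diagonal is unused; keeps the log finite
    return -(wins * np.log(P)).sum()

res = minimize(neg_log_lik, x0=np.zeros(wins.shape[0] - 1))
v = np.exp(np.concatenate([[0.0], res.x]))
print(v / v.sum())                   # estimated BTL scale values, normalized
```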

12.
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found. Conditions under which monotonic relationships do not exist are also identified. Such functional relationships allow researchers to better understand the problem when significant factor loading estimates are expected but not obtained, and vice versa. What will affect the likelihood for Heywood cases (negative unique variance estimates) is also explicit through these relationships. Empirical findings in the literature are discussed using the obtained results.

13.
Tversky (1972) has proposed a family of models for paired-comparison data that generalize the Bradley-Terry-Luce (BTL) model and can, therefore, apply to a diversity of situations in which the BTL model is doomed to fail. In this article, we present a Matlab function that makes it easy to specify any of these general models (EBA, Pretree, or BTL) and to estimate their parameters. The program eliminates the time-consuming task of constructing the likelihood function by hand for every single model. The usage of the program is illustrated by several examples. Features of the algorithm are outlined. The purpose of this article is to facilitate the use of probabilistic choice models in the analysis of data resulting from paired comparisons.

14.
Algebraic properties of the normal theory maximum likelihood solution in factor analysis regression are investigated. Two commonly employed measures of the within sample predictive accuracy of the factor analysis regression function are considered: the variance of the regression residuals and the squared correlation coefficient between the criterion variable and the regression function. It is shown that this within sample residual variance and within sample squared correlation may be obtained directly from the factor loading and unique variance estimates, without use of the original observations or the sample covariance matrix.
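A small numerical sketch of that claim, assuming a one-factor solution in which the last variable is the criterion: the implied covariance matrix ΛΛ′ + Ψ is built from the loading and unique-variance estimates alone, and the within-sample residual variance and squared correlation follow from its partition. Parameter values are hypothetical.

```python
import numpy as np

Lam = np.array([[0.8], [0.7], [0.6], [0.5]])   # factor loadings
Psi = np.diag([0.36, 0.51, 0.64, 0.75])        # unique variances

Sigma = Lam @ Lam.T + Psi                      # model-implied covariance matrix
Sxx, sxy, syy = Sigma[:3, :3], Sigma[:3, 3], Sigma[3, 3]

beta = np.linalg.solve(Sxx, sxy)               # regression weights
resid_var = syy - sxy @ beta                   # within-sample residual variance
r_squared = (sxy @ beta) / syy                 # squared multiple correlation
print(resid_var, r_squared)
```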

15.
Previous work on a general class of multidimensional latent variable models for analysing ordinal manifest variables is extended here to allow for direct covariate effects on the manifest ordinal variables and covariate effects on the latent variables. A full maximum likelihood estimation method is used to estimate all the model parameters simultaneously. Goodness‐of‐fit statistics and standard errors are discussed. Two examples from the 1996 British Social Attitudes Survey are used to illustrate the methodology.

16.
Two linearly constrained logistic models which are based on the well-known dichotomous Rasch model, the ‘linear logistic test model’ (LLTM) and the ‘linear logistic model with relaxed assumptions’ (LLRA), are discussed. Necessary and sufficient conditions for the existence of unique conditional maximum likelihood estimates of the structural model parameters are derived. Methods for testing composite hypotheses within the framework of these models and a number of typical applications to real data are mentioned.
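The LLTM's structural restriction is simply that each Rasch item difficulty is a known linear combination of basic parameters, β = Qη; a toy sketch with invented weights and values:

```python
import numpy as np

Q = np.array([[1, 0],      # item 1 involves operation 1 once
              [1, 1],      # item 2 involves both operations
              [0, 2]])     # item 3 involves operation 2 twice
eta = np.array([0.5, -0.3])            # hypothetical basic parameters
beta = Q @ eta                         # implied item difficulties

def rasch_prob(theta, beta):
    """Probability of a correct response under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

print(rasch_prob(theta=0.0, beta=beta))
```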

17.
The paper addresses and discusses whether the tradition of accepting point-symmetric item characteristic curves is justified by uncovering the inconsistent relationship between the difficulties of items and the order of maximum likelihood estimates of ability. This inconsistency is intrinsic in models that provide point-symmetric item characteristic curves, and in this paper focus is put on the normal ogive model for observation. It is also questioned if in the logistic model the sufficient statistic has forfeited the rationale that is appropriate to the psychological reality. It is observed that the logistic model can be interpreted as the case in which the inconsistency in ordering the maximum likelihood estimates is degenerated. The paper proposes a family of models, called the logistic positive exponent family, which provides asymmetric item characteristic curves. A model in this family has a consistent principle in ordering the maximum likelihood estimates of ability. The family is divided into two subsets each of which has its own principle, and includes the logistic model as a transition from one principle to the other. Rationale and some illustrative examples are given.
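The family raises the logistic curve to a positive power ξ, which breaks point symmetry; ξ = 1 recovers the ordinary logistic model. A minimal sketch with made-up parameter values:

```python
import numpy as np

def lpe_icc(theta, a=1.0, b=0.0, xi=1.0):
    """Logistic positive exponent item characteristic curve: the logistic
    function raised to the power xi > 0; xi != 1 yields an asymmetric curve."""
    logistic = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return logistic ** xi

theta = np.linspace(-4, 4, 9)
print(lpe_icc(theta, xi=1.0))   # point-symmetric logistic case
print(lpe_icc(theta, xi=3.0))   # asymmetric member of the family
```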

18.
This paper introduces a two‐parameter family of distributions for modelling random variables on the (0,1) interval by applying the cumulative distribution function of one ‘parent’ distribution to the quantile function of another. Family members have explicit probability density functions, cumulative distribution functions and quantiles in a location parameter and a dispersion parameter. They capture a wide variety of shapes that the beta and Kumaraswamy distributions cannot. They are amenable to likelihood inference, and enable a wide variety of quantile regression models, with predictors for both the location and dispersion parameters. We demonstrate their applicability to psychological research problems and their utility in modelling real data.
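One concrete member of such a family composes the normal CDF with the logistic quantile function (the logit), giving a two-parameter distribution on (0, 1) with explicit CDF and density; this pairing is chosen here purely for illustration.

```python
import numpy as np
from scipy.stats import norm

def cq_cdf(y, mu=0.0, sigma=1.0):
    """CDF: normal CDF applied to the standardized logistic quantile of y."""
    z = np.log(y / (1 - y))                  # logistic quantile (logit)
    return norm.cdf((z - mu) / sigma)

def cq_pdf(y, mu=0.0, sigma=1.0):
    """Density via the chain rule: phi((logit(y)-mu)/sigma) / (sigma*y*(1-y))."""
    z = np.log(y / (1 - y))
    return norm.pdf((z - mu) / sigma) / (sigma * y * (1 - y))

y = np.linspace(0.01, 0.99, 5)
print(cq_cdf(y, mu=0.5, sigma=0.8))
print(cq_pdf(y, mu=0.5, sigma=0.8))
```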

19.
Eric Maris, Psychometrika, 1993, 58(3), 445-469
A class of models for gamma distributed random variables is presented. These models are shown to be more flexible than the classical linear models with respect to the structure that can be imposed on the expected value. In particular, additive, multiplicative, and combined additive-multiplicative models can all be formulated. As a special case, a class of psychometric models for reaction times is presented, together with their psychological interpretation. By means of a comparison with existing models, this class of models is shown to offer some possibilities that are not available in existing methods. Parameter estimation by means of maximum likelihood (ML) is shown to have some attractive properties, since the models belong to the exponential family. Then, the results of a simulation study of the bias in the ML estimates are presented. Finally, the application of these models is illustrated by an analysis of the data from a mental rotation experiment. This analysis is preceded by an evaluation of the appropriateness of the gamma distribution for these data.
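A sketch of the multiplicative case for reaction times: a gamma model with a log link on the expected value and a common shape parameter, fitted by maximum likelihood on simulated data; all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma as gamma_dist

rng = np.random.default_rng(2)

x = rng.integers(0, 2, 200)                  # condition indicator
mu_true = np.exp(6.0 + 0.3 * x)              # multiplicative effect on mean RT
rt = rng.gamma(5.0, mu_true / 5.0)           # gamma RTs with shape 5

def neg_log_lik(params):
    """Gamma likelihood with log link on the mean and common shape."""
    b0, b1, log_shape = params
    shape = np.exp(log_shape)
    mu = np.exp(b0 + b1 * x)
    return -gamma_dist.logpdf(rt, shape, scale=mu / shape).sum()

res = minimize(neg_log_lik, x0=[5.0, 0.0, 1.0])
print(res.x[:2], np.exp(res.x[2]))           # regression weights, shape estimate
```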

20.
Linear structural equations with latent variables
An interdependent multivariate linear relations model based on manifest, measured variables as well as unmeasured and unmeasurable latent variables is developed. The latent variables include primary or residual common factors of any order as well as unique factors. The model has a simpler parametric structure than previous models, but it is designed to accommodate a wider range of applications via its structural equations, mean structure, covariance structure, and constraints on parameters. The parameters of the model may be estimated by gradient and quasi-Newton methods, or a Gauss-Newton algorithm that obtains least-squares, generalized least-squares, or maximum likelihood estimates. Large sample standard errors and goodness of fit tests are provided. The approach is illustrated by a test theory model and a longitudinal study of intelligence. This investigation was supported in part by a Research Scientist Development Award (KO2-DA00017) and a research grant (DA01070) from the U. S. Public Health Service.

