Similar references
Found 20 similar references (search time: 15 ms)
1.
Liang, Jiajuan; Bentler, Peter M. Psychometrika, 2004, 69(1): 101-122
Maximum likelihood is an important approach to the analysis of two-level structural equation models. Several algorithms for this purpose are available in the literature. In this paper, we present a new formulation of two-level structural equation models and develop an EM algorithm for fitting this formulation. The new formulation covers a variety of two-level structural equation models, so the proposed EM algorithm is widely applicable in practice. A practical example illustrates the performance of the EM algorithm and the maximum likelihood statistic. We are thankful to the reviewers for their constructive comments, which have led to significant improvement of the first version of this paper. Special thanks are due to the reviewer who suggested a comparison with the LISREL program in the saturated means model and provided its setup and output. This work was supported by National Institute on Drug Abuse grants DA01070 and DA00017, and a UNH 2002 Summer Faculty Fellowship.

2.
The identifiability of item response models with nonparametrically specified item characteristic curves is considered. Strict identifiability is achieved, with a fixed latent trait distribution, when only a single set of item characteristic curves can possibly generate the manifest distribution of the item responses. When item characteristic curves belong to a very general class, this property cannot be achieved. However, for assessments with many items, it is shown that all models for the manifest distribution have item characteristic curves that are very near one another, and pointwise differences between them converge to zero at all values of the latent trait as the number of items increases. An upper bound for the rate at which this convergence takes place is given. The main result provides theoretical support to the practice of nonparametric item response modeling, by showing that models for long assessments have the property of asymptotic identifiability. The research was partially supported by the National Institutes of Health grant R01 CA81068-01.

3.
J. O. Ramsay. Psychometrika, 1978, 43(2): 145-160
Techniques are developed for surrounding each of the points in a multidimensional scaling solution with a region which will contain the population point with some level of confidence. Bayesian credibility regions are also discussed. A general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints. This theorem is applied to a number of models to display asymptotic variance-covariance matrices for coordinate estimates under different rotational constraints. A technique is described for displaying Bayesian conditional credibility regions for any sample size. The research reported here was supported by grant number APA 320 to the author by the National Research Council of Canada.

4.
Multinomial processing tree (MPT) models are statistical models that allow for the prediction of categorical frequency data by sets of unobservable (cognitive) states. In MPT models, the probability that an event belongs to a certain category is a sum of products of state probabilities. AppleTree is a computer program for Macintosh for testing user-defined MPT models. It can fit model parameters to empirical frequency data, provide confidence intervals for the parameters, generate tree graphs for the models, and perform identifiability checks. In this article, the algorithms used by AppleTree and the handling of the program are described.
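As a concrete illustration of the sum-of-products structure described above (my own sketch, not AppleTree code), the two-high-threshold recognition model can be written out and fitted by maximum likelihood; the parameter names D and g and the frequencies below are hypothetical.

```python
# Minimal sketch: category probabilities in a two-high-threshold (2HT) MPT model.
# Each category probability is a sum of products of state probabilities.
# Hypothetical parameters: D = detection probability (equal for old and new items,
# which keeps this small model identifiable), g = probability of guessing "old".
import numpy as np
from scipy.optimize import minimize

def category_probs(params):
    D, g = params
    p_hit = D + (1 - D) * g        # old item: detected, OR undetected and guessed "old"
    p_fa = (1 - D) * g             # new item: undetected and guessed "old"
    return np.array([p_hit, 1 - p_hit, p_fa, 1 - p_fa])

def neg_log_lik(params, freqs):
    return -np.sum(freqs * np.log(category_probs(params)))

freqs = np.array([75, 25, 30, 70])  # hits, misses, false alarms, correct rejections
res = minimize(neg_log_lik, x0=[0.5, 0.5], args=(freqs,),
               bounds=[(0.01, 0.99)] * 2, method="L-BFGS-B")
print("ML estimates (D, g):", np.round(res.x, 3))
```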

5.
Probabilistic multidimensional scaling: Complete and incomplete data
Simple procedures are described for obtaining maximum likelihood estimates of the location and uncertainty parameters of the Hefner model. This model is a probabilistic, multidimensional scaling model, which assigns a multivariate normal distribution to each stimulus point. It is shown that for such a model, standard nonmetric and metric algorithms are not appropriate. A procedure is also described for constructing incomplete data sets, by taking into consideration the degree of familiarity the subject has for each stimulus. Maximum likelihood estimates are developed both for complete and incomplete data sets. This research was supported by National Science Foundation Grant No. SOC76-20517. The first author would especially like to express his gratitude to the Netherlands Institute for Advanced Study for its very substantial help with this research.
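Because the Hefner model perturbs each stimulus point with spherical multivariate normal error, a squared interpoint distance is a scaled noncentral chi-square variate. The sketch below is my own illustration, not the authors' procedure, and assumes squared distances are observed directly; the coordinates and variances are hypothetical.

```python
# Sketch: log-likelihood of one observed squared distance under the Hefner model.
# Stimuli i and j have true locations x_i, x_j in r dimensions and spherical normal
# error with variances v_i, v_j, so ||(x_i + e_i) - (x_j + e_j)||^2 / (v_i + v_j) is
# noncentral chi-square with r df and noncentrality ||x_i - x_j||^2 / (v_i + v_j).
import numpy as np
from scipy.stats import ncx2

def loglik_sq_distance(d2, x_i, x_j, v_i, v_j):
    r = len(x_i)
    scale = v_i + v_j
    nc = np.sum((np.asarray(x_i) - np.asarray(x_j)) ** 2) / scale
    return ncx2.logpdf(d2, df=r, nc=nc, scale=scale)

# Hypothetical two-dimensional example
print(loglik_sq_distance(d2=3.0, x_i=[0.0, 0.0], x_j=[1.0, 1.0], v_i=0.2, v_j=0.3))
```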

6.
In general, nonlinear models, such as those commonly employed for the analysis of covariance structures, are not globally identifiable. Any investigation of local identifiability must either yield a mapping of identifiability onto the entire parameter space, which will rarely be feasible in applications of interest, or confine itself to the neighbourhood of such points of special interest as the maximum likelihood point. The author would like to thank J. Jack McArdle and Colin Fraser for their comments on this paper.
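One concrete way to examine local identifiability at a point such as the maximum likelihood point is to check the column rank of the Jacobian of the model-implied covariances with respect to the parameters. The following is my own numerical sketch for a hypothetical one-factor model, not the paper's method.

```python
# Sketch: numerical check of local identifiability for a one-factor model.
# The model-implied covariance is Sigma(theta) = L L' + diag(psi); full column rank of
# the Jacobian d vech(Sigma) / d theta' at theta indicates local identifiability there.
import numpy as np

def implied_vech(theta, p=4):
    lam, psi = theta[:p], theta[p:]
    sigma = np.outer(lam, lam) + np.diag(psi)
    return sigma[np.tril_indices(p)]              # vech(Sigma)

def jacobian(theta, eps=1e-6):
    f0 = implied_vech(theta)
    J = np.empty((f0.size, theta.size))
    for k in range(theta.size):
        t = theta.copy(); t[k] += eps
        J[:, k] = (implied_vech(t) - f0) / eps    # forward differences
    return J

theta = np.array([0.8, 0.7, 0.6, 0.5, 0.4, 0.5, 0.6, 0.7])  # loadings then uniquenesses
J = jacobian(theta)
print("parameters:", theta.size, " Jacobian rank:", np.linalg.matrix_rank(J, tol=1e-4))
# Rank equal to the number of parameters indicates local identifiability at this point.
```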

7.
A matrical representation of a Markov chain consists of the initial vector and transition matrix of the chain, along with matrices that specify which observable response occurs for each state. The likelihood function based on a Markov model can be stated in a general way using the components of the model's matrical representation. It follows directly from that statement that two models are equivalent in likelihood if they are related through matrix operations that constitute a change of basis of the matrical representation. Two necessary properties of a change matrix associating two Markov models that are members of the same equivalence class with respect to likelihood are derived. Examples are provided, involving use of the results in analyzing identifiability of Markov models, including a useful application of diagonalization that provides a connection between the problem of identifiability and the eigenvalue problem.
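A small numerical sketch of the invariance behind these results (my own construction, with hypothetical matrices): the likelihood of a response sequence is written from the initial vector, transition matrix, and diagonal response matrices, and it is unchanged when every component of the representation, including the final vector of ones, is transformed by a common change of basis.

```python
# Sketch: likelihood of a response sequence from a matrical representation of a
# Markov model, and its invariance under a change of basis of that representation.
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.6, 0.4])                      # initial state probabilities (row vector)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                     # state transition matrix
# Diagonal response matrices: D[r][s, s] = P(response r | state s); they sum to I.
D = {0: np.diag([0.8, 0.3]), 1: np.diag([0.2, 0.7])}
ones = np.ones(2)

def likelihood(pi, P, D, ones, responses):
    v = pi @ D[responses[0]]
    for r in responses[1:]:
        v = v @ P @ D[r]                       # sum over state paths, trial by trial
    return v @ ones

seq = [0, 1, 1, 0]
L1 = likelihood(pi, P, D, ones, seq)

# Change of basis with an arbitrary invertible matrix C: transform every component.
C = rng.normal(size=(2, 2)) + 2 * np.eye(2)
Ci = np.linalg.inv(C)
L2 = likelihood(pi @ C, Ci @ P @ C, {r: Ci @ D[r] @ C for r in D}, Ci @ ones, seq)
print(L1, L2)                                  # identical up to rounding error
```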

8.
The three-parameter logistic model is widely used to model the responses to a proficiency test when the examinees can guess the correct response, as is the case for multiple-choice items. However, the weak identifiability of the parameters of the model results in large variability of the estimates and in convergence difficulties in the numerical maximization of the likelihood function. To overcome these issues, in this paper we explore various shrinkage estimation methods, following two main approaches. First, a ridge-type penalty on the guessing parameters is introduced in the likelihood function. The tuning parameter is then selected through cross-validation, information criteria, or an empirical Bayes method. The second approach explored is based on the methodology developed to reduce the bias of the maximum likelihood estimator through an adjusted score equation. The performance of the methods is investigated through simulation studies and a real data example.
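A sketch of the first approach (my own simplification, not the authors' implementation): the 3PL log-likelihood for one item with a ridge-type penalty that shrinks the guessing parameter toward a target value. For brevity the abilities are treated as known, and the penalty weight lam and target c_target are hypothetical.

```python
# Sketch: ridge-penalized log-likelihood for a single 3PL item.
# P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
# The penalty shrinks the guessing parameter c toward a target (e.g., 1 / #options).
import numpy as np

def p3pl(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def penalized_loglik(y, theta, a, b, c, lam=5.0, c_target=0.25):
    p = p3pl(theta, a, b, c)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loglik - lam * (c - c_target) ** 2     # ridge-type penalty on guessing

# Hypothetical data: abilities treated as known, responses simulated from a 3PL item
rng = np.random.default_rng(1)
theta = rng.normal(size=500)
y = rng.binomial(1, p3pl(theta, a=1.2, b=0.3, c=0.2))

# Compare the penalized criterion over a grid of guessing values
for c in (0.05, 0.2, 0.35):
    print(c, round(penalized_loglik(y, theta, a=1.2, b=0.3, c=c), 2))
```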

9.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm, which in standard form produces weighted least squares estimates, can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates, but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly, and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. DA01070 from the U.S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
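A sketch of the basic Gauss-Newton step for a weighted least squares covariance structure fit (my own illustration, with a hypothetical sample covariance matrix and an identity weight matrix): linearize the model-implied covariances at the current estimate and solve the resulting weighted least squares problem. Replacing W by an iteratively updated weight matrix gives the reweighted variant mentioned above.

```python
# Sketch: Gauss-Newton iterations for a weighted least squares covariance structure fit.
# Minimizes (s - sigma(theta))' W (s - sigma(theta)) for a one-factor model,
# where s = vech(S) and sigma(theta) = vech(L L' + diag(psi)).
import numpy as np

p = 4
idx = np.tril_indices(p)

def sigma(theta):
    lam, psi = theta[:p], theta[p:]
    return (np.outer(lam, lam) + np.diag(psi))[idx]

def jac(theta, eps=1e-6):
    f0 = sigma(theta)
    return np.column_stack([(sigma(theta + eps * e) - f0) / eps
                            for e in np.eye(theta.size)])

# Hypothetical sample covariance matrix and identity weight matrix
S = np.array([[1.00, 0.48, 0.42, 0.36],
              [0.48, 1.00, 0.56, 0.48],
              [0.42, 0.56, 1.00, 0.42],
              [0.36, 0.48, 0.42, 1.00]])
s, W = S[idx], np.eye(idx[0].size)

theta = np.concatenate([np.full(p, 0.5), np.full(p, 0.5)])   # start values
for it in range(20):
    r = s - sigma(theta)
    J = jac(theta)
    step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)          # Gauss-Newton step
    theta = theta + step
    if np.max(np.abs(step)) < 1e-8:
        break
print("loadings:", np.round(theta[:p], 3), "uniquenesses:", np.round(theta[p:], 3))
```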

10.
Standard errors for rotated factor loadings
Beginning with the results of Girshick on the asymptotic distribution of principal component loadings and those of Lawley on the distribution of unrotated maximum likelihood factor loadings, the asymptotic distribution of the corresponding analytically rotated loadings is obtained. The principal difficulty is the fact that the transformation matrix which produces the rotation is usually itself a function of the data. The approach is to use implicit differentiation to find the partial derivatives of an arbitrary orthogonal rotation algorithm. Specific details are given for the orthomax algorithms, and an example involving maximum likelihood estimation and varimax rotation is presented. This research was supported in part by NIH Grant RR-3. The authors are grateful to Dorothy T. Thayer who implemented the algorithms discussed here as well as those of Lawley and Maxwell. We are particularly indebted to Michael Browne for convincing us of the significance of this work and for helping to guide its development, and to Harry H. Harman who many years ago pointed out the need for standard errors of estimate.

11.
According to traditional models of deindividuation, lowered personal identifiability leads to a loss of identity and a loss of internalized control over behaviour. This account has been challenged by arguing that manipulations of identifiability affect the relative salience of personal or social identity and hence the choice of standards to control behaviour. The present study contributes to an extension of this argument according to which identifiability manipulations do not only affect the salience of social identity but also the strategic communication of social identity. Reicher and Levine (1993) have shown that subjects who are more identifiable to a powerful outgroup will moderate the expression of those aspects of ingroup identity which differ from the outgroup position and which would be punished by the outgroup. Here we seek to show that, in addition, subjects who are more identifiable to a powerful outgroup will accentuate the expression of those aspects of ingroup identity which differ from the outgroup position but which would not be punished by the outgroup. This is because, when identifiable, subjects may use such responses as a means of publicly presenting their adherence to group norms and hence as a means of establishing their right to group membership. A study is reported in which 102 physical education students are either identifiable (I) or not identifiable (NI) to their academic tutors. They are asked to respond on a number of dimensions where pilot interviews show the ingroup stereotype to differ from outgroup norms. Expressions of difference from the outgroup position would lead to punishment on some of these dimensions (P items) but would not lead to punishment for others (NP items). The predicted interaction between identifiability and item type is highly significant. As expected, for NP items identifiability accentuates responses which differentiate the ingroup stereotype from outgroup norms. All these results occur independently of shifts in the salience of social identity. The one unexpected finding is that, for P items, identifiability does lead to decreased expression of the ingroup stereotype, but the difference does not reach significance. Nonetheless, overall the results do provide further evidence for the complex effects of identifiability on strategic considerations underlying the expression of social identity in intergroup contexts.

12.
In a manner similar to that used in the orthogonal case, formulas for the asymptotic standard errors of analytically rotated oblique factor loading estimates are obtained. This is done by finding expressions for the partial derivatives of an oblique rotation algorithm and using previously derived results for unrotated loadings. These include the results of Lawley for maximum likelihood factor analysis and those of Girshick for principal components analysis. Details are given in cases including direct oblimin and direct Crawford-Ferguson rotation. Numerical results for an example involving maximum likelihood estimation with direct quartimin rotation are presented. They include simultaneous tests for significant loading estimates. This research was supported in part by NIH Grant RR-3. The author is indebted to Dorothy Thayer who implemented the algorithms required for the example and to Gunnar Gruvaeus and Allen Yates for reviewing an earlier version of this paper. Special thanks are extended to Michael Browne for many conversations devoted to clarifying the thoughts of the author.

13.
Jennrich, Robert I. Psychometrika, 1986, 51(2): 277-284
It is shown that the scoring algorithm for maximum likelihood estimation in exploratory factor analysis can be developed in a way that is many times more efficient than a direct development based on information matrices and score vectors. The algorithm offers a simple alternative to current algorithms and, when used in one-step mode, provides the simplest and fastest method presently available for moving from consistent to efficient estimates. Perhaps of greater importance is its potential for extension to the confirmatory model. The algorithm is developed as a Gauss-Newton algorithm to facilitate its application to generalized least squares and to maximum likelihood estimation. This research was supported by NSF Grant MCS-8301587.
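The paper's algorithm is specific to factor analysis, but the one-step idea it exploits is general: starting from any consistent estimate, a single Fisher-scoring step yields an asymptotically efficient estimate. The sketch below is my own generic illustration of that principle (the textbook Cauchy location case, starting from the sample median), not the paper's factor-analysis algorithm.

```python
# Sketch of one-step Fisher scoring: start from a consistent estimate (the median)
# and take a single scoring step, theta1 = theta0 + score(theta0) / information(theta0).
# For a Cauchy(theta, 1) sample, the per-observation Fisher information is 1/2.
import numpy as np

def one_step_cauchy_location(x):
    theta0 = np.median(x)                        # consistent but inefficient start
    z = x - theta0
    score = np.sum(2.0 * z / (1.0 + z ** 2))     # d log-likelihood / d theta at theta0
    information = 0.5 * x.size                   # total Fisher information
    return theta0 + score / information          # one scoring step

rng = np.random.default_rng(3)
x = rng.standard_cauchy(5000) + 1.5              # true location 1.5
print("median:", round(np.median(x), 4),
      " one-step estimate:", round(one_step_cauchy_location(x), 4))
```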

14.
The linear logistic test model (LLTM) specifies the item parameters as a weighted sum of basic parameters. The LLTM is a special case of a more general nonlinear logistic test model (NLTM) in which the weights are partially unknown. This paper is about the identifiability of the NLTM. Sufficient and necessary conditions for global identifiability are presented for an NLTM where the weights are linear functions, while conditions for local identifiability are shown to require a model with fewer restrictions. It is also discussed how these conditions are checked using an algorithm due to Bekker, Merckens, and Wansbeek (1994). Several illustrations are given. This article was written while the first author was a postdoctoral fellow at the University of Twente. He gratefully acknowledges the university's hospitality and the financial support by NWO (project nr. 30002).
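A small worked sketch of the LLTM structure (my own, with a hypothetical weight matrix Q and basic parameters eta): item difficulties are a weighted sum of basic parameters, which then enter the ordinary Rasch response function.

```python
# Sketch: LLTM item difficulties as a weighted sum of basic parameters, beta = Q @ eta,
# plugged into the Rasch response function P(correct) = 1 / (1 + exp(-(theta - beta))).
import numpy as np

# Hypothetical design: 4 items, 2 basic operations; Q[i, m] = how often item i uses operation m
Q = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [2, 1]], dtype=float)
eta = np.array([0.5, 1.2])           # difficulty contributed by each basic operation
beta = Q @ eta                       # item difficulties implied by the LLTM

theta = 0.8                          # one examinee's ability
p_correct = 1.0 / (1.0 + np.exp(-(theta - beta)))
for i, (b, pc) in enumerate(zip(beta, p_correct), start=1):
    print(f"item {i}: beta = {b:.2f}, P(correct | theta=0.8) = {pc:.3f}")
```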

15.
A general latent trait model for response processes
The purpose of the current paper is to propose a general multicomponent latent trait model (GLTM) for response processes. The proposed model combines the linear logistic latent trait model (LLTM) with the multicomponent latent trait model (MLTM). As with both LLTM and MLTM, the general multicomponent latent trait model can be used to (1) test hypotheses about the theoretical variables that underlie response difficulty and (2) estimate parameters that describe test items by basic substantive properties. However, GLTM contains both component outcomes and complexity factors in a single model and may be applied to data that neither LLTM nor MLTM can handle. Joint maximum likelihood estimators are presented for the parameters of GLTM, and an application to cognitive test items is described. This research was partially supported by the National Institute of Education grant number NIE-6-7-0156 to Susan Embretson (Whitely), principal investigator. However, the opinions expressed herein do not necessarily reflect the position or policy of the National Institute of Education, and no official endorsement by the National Institute of Education should be inferred.
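A sketch of the multicomponent structure that GLTM builds on (my own, with hypothetical values): the probability of solving an item is the product of Rasch-type component probabilities, and each component difficulty is itself a weighted combination of the item's complexity factors.

```python
# Sketch: item response probability in a multicomponent latent trait model.
# P(item solved) = prod_k P(component k solved), with component difficulties
# b_k modelled (GLTM-style) as weighted sums of basic item properties.
import numpy as np

def component_prob(theta_k, b_k):
    return 1.0 / (1.0 + np.exp(-(theta_k - b_k)))

# Hypothetical item: two components (e.g., rule construction and response evaluation)
Q = np.array([[1.0, 0.0, 2.0],       # complexity-factor scores of this item, per component
              [0.0, 1.0, 1.0]])
eta = np.array([0.4, 0.8, 0.3])      # basic parameters (weights of the complexity factors)
b = Q @ eta                          # component difficulties for this item

theta = np.array([1.0, 0.2])         # one examinee's abilities on the two components
p_components = component_prob(theta, b)
p_item = np.prod(p_components)       # item solved only if every component is executed
print("component probabilities:", np.round(p_components, 3),
      "item probability:", round(p_item, 3))
```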

16.
Anonymity promotes free speech by protecting the identity of people who might otherwise face negative consequences for expressing their ideas. Wrongdoers, however, often abuse this invisibility cloak. Defenders of anonymity online emphasise its value in advancing public debate and safeguarding political dissension. Critics emphasise the need for identifiability in order to achieve accountability for wrongdoers such as trolls. The problematic tension between anonymity and identifiability online lies in the desirability of having low costs (no repercussions) for desirable speech and high costs (appropriate repercussions) for undesirable speech. If we practice either full anonymity or identifiability, we end up having either low or high costs in all online contexts and for all kinds of speech. I argue that free speech is compatible with instituting costs in the form of repercussions and penalties for controversial and unacceptable speech. Costs can minimise the risks of anonymity by providing a reasonable degree of accountability. Pseudonymity is a tool that can help us regulate those costs while furthering free speech. This article argues that, in order to redesign the Internet to better serve free speech, we should shape much of it to resemble an online masquerade.

17.
Although self-enhancement is linked to psychological benefits, it is also associated with personal and interpersonal liabilities (e.g., excessive risk taking, social exclusion). Hence, structuring social situations that prompt people to keep their self-enhancing beliefs in check can confer personal and interpersonal advantages. The authors examined whether accountability can serve this purpose. Accountability was defined as the expectation to explain, justify, and defend one's self-evaluations (grades on an essay) to another person ("audience"). Experiment 1 showed that accountability curtails self-enhancement. Experiment 2 ruled out audience concreteness and status as explanations for this effect. Experiment 3 demonstrated that accountability-induced self-enhancement reduction is due to identifiability. Experiment 4 documented that identifiability decreases self-enhancement because of evaluation expectancy and an accompanying focus on one's weaknesses.

18.
A definition of essential independence is proposed for sequences of polytomous items. For items satisfying the reasonable assumption that the expected amount of credit awarded increases with examinee ability, we develop a theory of essential unidimensionality which closely parallels that of Stout. Essentially unidimensional item sequences can be shown to have a unique (up to change of scale) dominant underlying trait, which can be consistently estimated by a monotone transformation of the sum of the item scores. In more general polytomous-response latent trait models (with or without ordered responses), an M-estimator based upon maximum likelihood may be shown to be consistent for the dominant trait under essentially unidimensional violations of local independence and a variety of monotonicity/identifiability conditions. A rigorous proof of this fact is given, and the standard error of the estimator is explored. These results suggest that ability estimation methods that rely on the summation form of the log likelihood under local independence should generally be robust under essential independence, but standard errors may vary greatly from what is usually expected, depending on the degree of departure from local independence. An index of departure from local independence is also proposed. This work was supported in part by Office of Naval Research Grant N00014-87-K-0277 and National Science Foundation Grant NSF-DMS-88-02556. The author is grateful to William F. Stout for many helpful comments, and to an anonymous reviewer for raising the questions addressed in section 2. A preliminary version of section 6 appeared in the author's Ph.D. thesis.
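A minimal sketch (my own, under a hypothetical graded-style item model) of the estimator described here: the expected total score is a monotone function of the dominant trait, so inverting that curve maps an observed sum of item scores to a trait estimate.

```python
# Sketch: estimating the dominant trait from the sum of polytomous item scores by
# inverting the expected-total-score (test characteristic) curve.
import numpy as np

rng = np.random.default_rng(2)
n_items, max_score = 40, 3
b = np.sort(rng.normal(size=(n_items, max_score)), axis=1)   # ordered item thresholds

def expected_item_score(theta, b_i):
    # Graded-style item: E[score] = sum_k P(score >= k); each term increases with theta
    return np.sum(1.0 / (1.0 + np.exp(-(theta - b_i))))

def expected_total(theta):
    return sum(expected_item_score(theta, b_i) for b_i in b)

def estimate_theta(total, lo=-6.0, hi=6.0, tol=1e-6):
    # The expected total is strictly increasing in theta, so bisection inverts it
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if expected_total(mid) < total else (lo, mid)
    return 0.5 * (lo + hi)

print(round(estimate_theta(total=55.0), 3))   # trait value whose expected total score is 55
```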

19.
EM algorithms for ML factor analysis
The details of EM algorithms for maximum likelihood factor analysis are presented for both the exploratory and confirmatory models. The algorithm is essentially the same for both cases and involves only simple least squares regression operations; the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors. The example that is used demonstrates that the likelihood for the factor analysis model may have multiple modes that are not simply rotations of each other; such behavior should concern users of maximum likelihood factor analysis and certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.
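A compact sketch of the EM iteration for the exploratory factor model Sigma = Lambda Lambda' + Psi (my own, not the authors' program): the E-step quantities are regression-type computations carried out with the Woodbury identity, and the M-step is a least squares regression, so the only explicit inversions are of q × q matrices. The start values and example covariance matrix are hypothetical.

```python
# Sketch: EM iterations for the exploratory ML factor analysis model S ~ L L' + diag(psi).
# Each step needs only least-squares-type computations; the matrices inverted are q x q.
import numpy as np

def em_factor_analysis(S, q, n_iter=500):
    L = np.linalg.cholesky(S)[:, :q]              # crude start values for the loadings
    psi = np.diag(S) * 0.5                        # crude start values for the uniquenesses
    for _ in range(n_iter):
        # E-step: regression coefficients of factors on observed variables,
        # using the Woodbury identity so only a q x q matrix is inverted.
        Pi = 1.0 / psi                                       # inverse of the diagonal Psi
        M = np.eye(q) + (L * Pi[:, None]).T @ L              # I + L' Psi^-1 L  (q x q)
        Sigma_inv = np.diag(Pi) - (Pi[:, None] * L) @ np.linalg.solve(M, (Pi[:, None] * L).T)
        beta = L.T @ Sigma_inv                               # q x p regression coefficients
        Czz = np.eye(q) - beta @ L + beta @ S @ beta.T       # average E[zz' | x]
        Cxz = S @ beta.T                                     # average E[xz' | x]
        # M-step: least-squares regression of x on the "completed" factor scores.
        L = Cxz @ np.linalg.inv(Czz)                         # another q x q inversion
        psi = np.diag(S - L @ beta @ S)
    return L, psi

# Hypothetical example: a covariance matrix generated by one factor plus unique variance
S = np.array([[1.00, 0.48, 0.42, 0.36],
              [0.48, 1.00, 0.56, 0.48],
              [0.42, 0.56, 1.00, 0.42],
              [0.36, 0.48, 0.42, 1.00]])
L, psi = em_factor_analysis(S, q=1)
print("loadings:", np.round(L.ravel(), 3), "uniquenesses:", np.round(psi, 3))
```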

20.
The purpose of this study is to evaluate the extent to which a model of social support may help explain the low suicide rate of Black females. The data are taken from the National Institute of Mental Health's Epidemiologic Catchment Area Study 1980–1985 (United States). The LISREL model examines the direct and indirect effects of the background characteristics on attempted suicide as mediated by emotional state. Results indicate that for Black and White males and females, finding emotional and psychological support in friends and family members helps to safeguard against suicide. The most substantial finding is that for all race/sex categories, seeking support from friendship and familial resources is negatively related to attempted suicide, whereas seeking support from professional resources is associated with an increase in the likelihood of a suicide attempt. This increased likelihood of attempted suicide may reflect population members' resistance to seeking professional help until their emotional state has severely deteriorated.
