Similar Literature
20 similar records found (search time: 93 ms)
1.
In van der Heijden and de Leeuw (1985) it was proposed to use loglinear analysis to detect interactions in a multiway contingency table, and to explore the form of these interactions with correspondence analysis. Here we show how the results found in this exploratory phase can be used for confirmation. This research was conducted while the authors were visiting the Laboratoire de Statistique et Probabilité, Université Paul Sabatier, Toulouse. This visit was partly made possible by a joint grant of the Netherlands Organisation for the Advancement of Pure Research (Z.W.O.) and the French National Center for Scientific Research (C.N.R.S.). For helpful comments, the authors are indebted to H. Caussinus, J. de Leeuw, and two anonymous reviewers.

2.
Correspondence analysis of incomplete contingency tables (cited 1 time: 0 self-citations, 1 by others)
Correspondence analysis can be described as a technique which decomposes the departure from independence in a two-way contingency table. In this paper a form of correspondence analysis is proposed which decomposes the departure from the quasi-independence model. This form seems to be a good alternative to ordinary correspondence analysis in cases where the use of the latter is either impossible or not recommended, for example, in the case of missing data or structural zeros. It is shown that Nora's reconstitution of order zero, a procedure well-known in the French literature, is formally identical to our correspondence analysis of incomplete tables. Therefore, reconstitution of order zero can also be interpreted as providing a decomposition of the residuals from the quasi-independence model. Furthermore, correspondence analysis of incomplete tables can be performed using existing programs for ordinary correspondence analysis.
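The "departure from independence" decomposition that ordinary correspondence analysis performs can be sketched in a few lines (a minimal numpy sketch for the two-way case; the toy table and the function name are illustrative, and the quasi-independence variant proposed in the paper is not reproduced here):

```python
import numpy as np

def correspondence_analysis(N):
    """Ordinary correspondence analysis of a two-way contingency table.

    The SVD of the standardized residual matrix decomposes the departure
    from independence; squared singular values sum to the total inertia.
    """
    N = np.asarray(N, dtype=float)
    n = N.sum()
    P = N / n                          # correspondence matrix
    r = P.sum(axis=1)                  # row masses
    c = P.sum(axis=0)                  # column masses
    E = np.outer(r, c)                 # expected proportions under independence
    S = (P - E) / np.sqrt(E)           # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(r)[:, None]      # row principal coordinates
    G = (Vt.T * sv) / np.sqrt(c)[:, None]   # column principal coordinates
    inertia = (sv ** 2).sum()
    return F, G, sv, inertia

# toy 3x3 table (illustrative numbers)
N = np.array([[20, 10, 5],
              [10, 20, 10],
              [5, 10, 20]])
F, G, sv, inertia = correspondence_analysis(N)

# n * inertia reproduces the Pearson chi-square statistic of the table
expected = np.outer(N.sum(axis=1), N.sum(axis=0)) / N.sum()
chi2 = ((N - expected) ** 2 / expected).sum()
```

The squared singular values sum to Pearson's chi-square divided by n, which is the precise sense in which the technique decomposes the departure from independence.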

3.
4.
Goodman's (1979, 1981, 1985) loglinear formulation for two-way contingency tables is extended to tables with or without missing cells and is used for exploratory purposes. A similar formulation is given for three-way tables, and generalizations of correspondence analysis are deduced. A generalized version of Goodman's algorithm, based on Newton's elementary unidimensional method, is used to estimate the scores in all cases. This research was partially supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. A8724. The author is grateful to the reviewers and the editor for helpful comments.

5.
Cross-classified data are frequently encountered in behavioral and social science research. The loglinear model and dual scaling (correspondence analysis) are two representative methods of analyzing such data. An alternative method, based on ideal point discriminant analysis (DA), is proposed for analysis of contingency tables, which in a certain sense encompasses the two existing methods. A variety of interesting structures can be imposed on rows and columns of the tables through manipulations of predictor variables and/or as direct constraints on model parameters. This, along with maximum likelihood estimation of the model parameters, allows interesting model comparisons. This is illustrated by the analysis of several data sets. Presented as the Presidential Address to the Psychometric Society's Annual and European Meetings, June, 1987. Preparation of this paper was supported by grant A6394 from the Natural Sciences and Engineering Research Council of Canada. Thanks are due to Chikio Hayashi of the University of the Air in Japan for providing the ISM data, and to Jim Ramsay and Ivo Molenaar for their helpful comments on an earlier draft of this paper.

6.
A general framework is presented for the analysis of partially ordered set (poset) data. The work is motivated by the need to analyse poset data such as multi-componential responses in psychological measurement and partially accomplished cognitive tasks in educational measurement. It is shown how the generalized loglinear model can be used to represent poset data that form a lattice and how latent-variable models can be constructed by further specifying the canonical parameters of the loglinear representation. The approach generalizes a class of latent-variable models for completely ordered data. We apply the methods to analyse data on the frequency and intensity of anger-related feelings. Furthermore, we propose a trajectory analysis to gain insight into the response function of partially ordered emotional states.

7.
Joint correspondence analysis is a technique for constructing reduced-dimensional representations of pairwise relationships among categorical variables. The technique was proposed by Greenacre as an alternative to multiple correspondence analysis. Joint correspondence analysis differs from multiple correspondence analysis in that it focuses solely on between-variable relationships. Greenacre described one alternating least-squares algorithm for conducting joint correspondence analysis. Another alternating least-squares algorithm is described in this article. The algorithm is guaranteed to converge, and does so in fewer iterations than does the algorithm proposed by Greenacre. A modification of the algorithm for handling Heywood cases is described. The algorithm is illustrated on two data sets.

8.
Decompositions and biplots in three-way correspondence analysis (cited 1 time: 0 self-citations, 1 by others)
In this paper correspondence analysis for three-way contingency tables is presented using three-way generalisations of the singular value decomposition. It is shown that in combination with Lancaster's (1951) additive decomposition of interactions in three-way tables, a detailed analysis is possible of the deviations from independence. Finally, biplots are shown to produce powerful graphical representations of the results from three-way correspondence analyses. An example from child development is used to illustrate the theoretical developments.

9.
A comprehensive approach for imposing both row and column constraints on multivariate discrete data is proposed that may be called generalized constrained multiple correspondence analysis (GCMCA). In this method each set of discrete data is first decomposed into several submatrices according to its row and column constraints, and then multiple correspondence analysis (MCA) is applied to the decomposed submatrices to explore relationships among them. This method subsumes existing constrained and unconstrained MCA methods as special cases and also generalizes various kinds of linearly constrained correspondence analysis methods. An example is given to illustrate the proposed method. Heungsun Hwang is now at Claes Fornell International Group. The work reported in this paper was supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the second author.

10.
First, we present the definition and fundamental properties of information functions: functions that establish a correspondence between sets of formulas and the information contained in them. The intuitions behind the notion of information stem from the conception of Bar-Hillel and Carnap in [3]. In § 2 we briefly show how these notions can be applied to the logic of theory change. In § 3 we use them to prove two theorems about the lattices of classical subtheories and their content.

11.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
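The gain from adapting the quadrature points can be illustrated with a one-dimensional toy problem (a sketch of the general adaptive G-H idea, not the authors' EM implementation; the Gaussian example and the function names are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gh_integral(g, n_points, mu=0.0, sigma=1.0):
    """Gauss-Hermite approximation of the integral of g over the real line,
    with the nodes recentred at mu and rescaled by sigma.  mu=0, sigma=1
    gives the classical fixed-point rule; the adaptive rule plugs in the
    location and spread of the integrand's peak instead."""
    x, w = hermgauss(n_points)         # nodes/weights for weight exp(-x^2)
    t = mu + np.sqrt(2.0) * sigma * x
    return np.sqrt(2.0) * sigma * np.sum(w * np.exp(x ** 2) * g(t))

def npdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

# sharply peaked integrand: N(0, 1) prior times a precise likelihood at y = 3
g = lambda t: npdf(t, 0.0, 1.0) * npdf(3.0, t, 0.1)
exact = npdf(3.0, 0.0, np.sqrt(1.0 + 0.1 ** 2))   # closed-form marginal

post_var = 1.0 / (1.0 + 1.0 / 0.1 ** 2)           # spread of the peak
post_mu = post_var * 3.0 / 0.1 ** 2               # location of the peak
adaptive = gh_integral(g, 2, post_mu, np.sqrt(post_var))  # 2 adapted points
fixed = gh_integral(g, 2)                                 # same 2 fixed points
```

Because the nodes are recentred and rescaled to the peak, two points already recover the sharply peaked integral that the fixed rule, with the same two nodes, misses almost entirely.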

12.
Geometric representation of association between categories (cited 1 time: 0 self-citations, 1 by others)
Categories can be counted, rated, or ranked, but they cannot be measured. Likewise, persons or individuals can be counted, rated, or ranked, but they cannot be measured either. Nevertheless, psychology has realized early on that it can take an indirect road to measurement: What can be measured is the strength of association between categories in samples or populations, and what can be quantitatively compared are counts, ratings, or rankings made under different circumstances, or originating from different persons. The strong demand for quantitative analysis of categorical data has thus created a variety of statistical methods, with substantial contributions from psychometrics and sociometrics. What is the common basis of these methods dealing with categories? The basic element they share is that the sample space has a special geometry, in which categories (or persons) are point masses forming a simplex, while distributions of counts or profiles of ratings are centers of gravity, which are also point masses. Rankings form a discrete subset in the interior of the simplex, known as the permutation polytope, and paired comparisons form another subset on the edges of the simplex. Distances between point masses form the basic tool of analysis. The paper gives some history of major concepts, which naturally leads to a new concept: the shadow point. It is then shown how loglinear models, Luce and Rasch models, unfolding models, correspondence analysis and homogeneity analysis, forced classification and classification trees, as well as other models and methods, fit into this particular geometrical framework.

13.
Reemployment chances for unemployed people aged fifty and over are low compared to those of younger persons. To explain the different chances, the job search literature has largely focused on the job search behavior of unemployed individuals. Based on job search models, we propose that not only job search behavior, but also wage setting behavior, attitudinal variables and personal variables impact the difference in reemployment opportunities. Besides gaining insight into which variables explain the difference in reemployment opportunities, we also test how much of the difference each of these variables explains. We do this by drawing on a decomposition analysis. Using data from 647 recently unemployed individuals, we find that about one third of the reemployment gap can be explained by the variables suggested by job search models, mostly in terms of age differences in search behavior, educational levels and reservation wage. Hence, about 70% can be ascribed to other factors, such as employer preferences for those aged between 18 and 49. Implications of these results for theory, policy and practice are discussed.
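The kind of decomposition analysis referred to can be illustrated in the familiar Oaxaca-Blinder form on simulated data (a generic sketch; the variables, the linear specification and the simulated numbers are all illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated data: the older group (g = 1) searches less intensively,
# and search intensity drives the reemployment score y
n = 2000
g = rng.integers(0, 2, n)
search = rng.normal(5.0 - 1.0 * g, 1.0)
y = 0.4 * search + rng.normal(0.0, 0.5, n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X0 = np.column_stack([np.ones((g == 0).sum()), search[g == 0]])
X1 = np.column_stack([np.ones((g == 1).sum()), search[g == 1]])
b0 = ols(X0, y[g == 0])                 # reference-group coefficients

gap = y[g == 0].mean() - y[g == 1].mean()
# "explained" part: the gap in mean characteristics valued at the reference
# coefficients; the remainder is ascribed to other factors
explained = (X0.mean(axis=0) - X1.mean(axis=0)) @ b0
unexplained = gap - explained
```

In this simulation the group gap is driven by search intensity, so the explained share dominates; in the paper's data the explained share is about one third.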

14.
The aim of this paper is to propose a complementary way of evaluating the acceptance of technology at work: situated acceptance. Starting from a critical review of the classical models of acceptability (social and practical) on the one hand, and drawing on models of activity and theories of appropriation on the other, we show that it is necessary to place ICT in its social thickness, that is, within a real activity system that is more comprehensive, complex, and rich. We discuss the four dimensions to be considered in evaluating this situated acceptance. We also indicate how this approach can become an instrument for the development of activity and can contribute to a process of (re)creation of technical instruments, laying the foundations of what we have named a clinical approach to usage.

15.
We propose new measures of consistency of additive and multiplicative pairwise comparison matrices. These measures, the relative consistency and relative error, are easy to compute and have clear and simple algebraic and geometric meaning, interpretation and properties. The correspondence between these measures in the additive and multiplicative cases reflects the same correspondence which underpins the algebraic structure of the problem and relates naturally to the corresponding optimization models and axiom systems. The relative consistency and relative error are related to one another by the theorem of Pythagoras through the decomposition of comparison matrices into their consistent and error components. One of the conclusions of our analysis is that inconsistency is not a sufficient reason for revision of judgements. © 1998 John Wiley & Sons, Ltd.
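For the additive case, the decomposition into consistent and error components can be sketched as an orthogonal projection onto the consistent matrices c_ij = r_i - r_j (a hedged numpy sketch; that the paper's relative consistency and relative error are defined exactly this way is an assumption):

```python
import numpy as np

def additive_decomposition(A):
    """Split an additive pairwise comparison matrix A (skew-symmetric,
    a_ij = -a_ji) into its consistent component C, with c_ij = r_i - r_j
    for r the row means, and the orthogonal error component E = A - C."""
    A = np.asarray(A, dtype=float)
    r = A.mean(axis=1)
    C = r[:, None] - r[None, :]
    E = A - C
    total = (A ** 2).sum()
    rc = (C ** 2).sum() / total        # relative consistency
    re = (E ** 2).sum() / total        # relative error
    return C, E, rc, re

# a slightly inconsistent additive comparison matrix (illustrative)
A = np.array([[ 0.0,  1.0,  3.0],
              [-1.0,  0.0,  1.5],
              [-3.0, -1.5,  0.0]])
C, E, rc, re = additive_decomposition(A)
```

Orthogonality of C and E is what makes the two measures satisfy the Pythagorean relation rc + re = 1.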

16.
The ultimate concern of cognitive engineering is how complex sociotechnical systems might be designed so that humans can work within them and control them safely and effectively. Because of this, large amounts of observational data analysis and knowledge elicitation are incorporated in cognitive engineering. At many points, these two methodologies coalesce. In this paper, we describe two complementary cognitive engineering software tools, MacSHAPA and COGENT, that are being developed alongside each other. MacSHAPA is designed for observational data analysis, and COGENT is designed for knowledge elicitation and cognitive engineering, but both support requirements gathering. We first outline current trends in cognitive engineering that have given rise to the need for tools like MacSHAPA and COGENT. We then describe the two tools in more detail, and point to their similarities and differences. Finally, we show how the two tools are complementary, and how they can be used together in engineering psychology research.

17.
Semi-sparse PCA
Eldén, Lars; Trendafilov, Nickolay. Psychometrika, 2019, 84(1): 164–185.
It is well known that the classical exploratory factor analysis (EFA) of data with more observations than variables has several types of indeterminacy. We study the factor indeterminacy and show some new aspects of this problem by considering EFA as a specific data matrix decomposition. We adopt a new approach to the EFA estimation and achieve a new characterization of the factor indeterminacy problem. A new alternative model is proposed, which gives determinate factors and can be seen as a semi-sparse principal component analysis (PCA). An alternating algorithm is developed, where in each step a Procrustes problem is solved. It is demonstrated that the new model/algorithm can act as a specific sparse PCA and as a low-rank-plus-sparse matrix decomposition. Numerical examples with several large data sets illustrate the versatility of the new model, and the performance and behaviour of its algorithmic implementation.

18.
The aim of this paper is to show how correspondence analysis can be a useful aid in multiple-criteria decision making, particularly in the case of categorical criteria values. Under different types of input information, the technique is used to perform some preliminary analyses, with the bonus of providing a simultaneous graphical representation of the alternatives and criteria. This picture can give the decision maker a better understanding of the structure of the two sets of variables before a decision is made.

19.
Multilevel modeling provides one approach to synthesizing single-case experimental design data. In this study, we present the multilevel model (the two-level and the three-level models) for summarizing single-case results over cases, over studies, or both. In addition to the basic multilevel models, we elaborate on several plausible alternative models. We apply the proposed models to real datasets and investigate to what extent the estimated treatment effect is dependent on the modeling specifications and the underlying assumptions. By considering a range of plausible models and assumptions, researchers can determine the degree to which the effect estimates and conclusions are sensitive to the specific assumptions made. If the same conclusions are reached across a range of plausible assumptions, confidence in the conclusions can be enhanced. We advise researchers not to focus on one model but conduct multiple plausible multilevel analyses and investigate whether the results depend on the modeling options.

20.
The core of the paper consists of the treatment of two special decompositions for correspondence analysis of two-way ordered contingency tables: the bivariate moment decomposition and the hybrid decomposition, both using orthogonal polynomials rather than the commonly used singular vectors. To this end, we will detail and explain the basic characteristics of a particular set of orthogonal polynomials, called Emerson polynomials. It is shown that such polynomials, when used as bases for the row and/or column spaces, can enhance the interpretations via linear, quadratic and higher-order moments of the ordered categories. To aid such interpretations, we propose a new type of graphical display, the polynomial biplot.
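The orthogonal-polynomial bases can be constructed generically by weighted Gram-Schmidt on successive powers of the category scores (an illustrative sketch; Emerson's three-term recurrence itself is not reproduced, and the example scores and weights are made up):

```python
import numpy as np

def ordered_polynomials(scores, weights, degree):
    """Polynomials on ordered category scores, orthonormal under the
    weighted inner product <f, g> = sum_i w_i f_i g_i.  Column k is the
    degree-k polynomial; column 0 is the trivial constant term."""
    s = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    B = np.empty((len(s), degree + 1))
    for k in range(degree + 1):
        v = s ** k
        for j in range(k):                      # weighted Gram-Schmidt
            v = v - np.sum(w * v * B[:, j]) * B[:, j]
        B[:, k] = v / np.sqrt(np.sum(w * v * v))
    return B

scores = np.array([1.0, 2.0, 3.0, 4.0])         # ordered category scores
weights = np.array([0.1, 0.4, 0.3, 0.2])        # marginal weights (made up)
B = ordered_polynomials(scores, weights, 2)
```

Using such columns as bases for the row and/or column spaces is what lets the analysis be read in terms of linear, quadratic and higher-order moments of the ordered categories.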
