Similar articles
 20 similar articles retrieved (search time: 93 ms)
1.
Consider a multivariate context with p variates and k independent samples, each of size n. To test equality of the k population covariance matrices, the likelihood ratio test is commonly employed. Box's F-approximation to the null distribution of the test statistic can be used to compute p-values, if sample sizes are not too small. It is suggested to regard the F-approximation as accurate if the sample sizes n are greater than or equal to 1 + 0.0613p^2 + 2.7265p − 1.4182p^0.5 + 0.235p^1.4·ln(k), for 5 ≤ p ≤ 30 and k ≤ 20. This research was supported by the Deutsche Forschungsgemeinschaft through Ste 405/2-1.
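As a quick illustration of how this sample-size rule might be applied, the sketch below evaluates the quoted threshold in Python; the function name and example values are illustrative only.

```python
import math

def box_f_min_sample_size(p: int, k: int) -> float:
    """Smallest per-group sample size n for which Box's F-approximation is
    regarded as accurate, per the rule quoted above (stated there for
    5 <= p <= 30 and k <= 20)."""
    return (1 + 0.0613 * p**2 + 2.7265 * p
            - 1.4182 * p**0.5 + 0.235 * p**1.4 * math.log(k))

# Example: p = 10 variates, k = 3 groups
print(box_f_min_sample_size(10, 3))
```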

2.
Krijnen, Wim P. (2006). Psychometrika, 71(2), 395–409.
For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_p = Φ^{1/2} Λ_p′ Ψ_p^{−1} Λ_p Φ^{1/2}. This matrix increases with the number of observable variables p. A necessary and sufficient condition for mean square convergence of predictors is divergence of the smallest eigenvalue of Γ_p or, equivalently, divergence of the signal-to-noise ratio (Schneeweiss & Mathes, 1995). The same condition is necessary and sufficient for convergence to zero of the positive definite MSE differences of factor predictors, convergence to zero of the distance between factor predictors, and convergence to the unit value of the relative efficiencies of predictors. Various illustrations and examples of the convergence are given, as well as explicit recommendations on the problem of choosing between the three main factor score predictors. The author is obliged to Maarten Speekenbrink and Peter van Rijn for their assistance with plotting the figures. In addition, I am obliged to the referees for their stimulating remarks.

3.
Zellini (1979, Theorem 3.1) has shown how to decompose an arbitrary symmetric matrix of order n × n as a linear combination of n(n+1)/2 fixed rank-one matrices, thus constructing an explicit tensor basis for the set of symmetric n × n matrices. Zellini's decomposition is based on properties of persymmetric matrices. In the present paper, a simplified tensor basis is given, by showing that a symmetric matrix can also be decomposed in terms of n(n+1)/2 fixed binary matrices of rank one. The decomposition implies that an n × n × p array consisting of p symmetric n × n slabs has maximal rank n(n+1)/2. Likewise, an unconstrained INDSCAL (symmetric CANDECOMP/PARAFAC) decomposition of such an array will yield a perfect fit in n(n+1)/2 dimensions. When the fitting only pertains to the off-diagonal elements of the symmetric matrices, as is the case in a version of PARAFAC where communalities are involved, the maximal number of dimensions can be further reduced to n(n−1)/2. However, when the saliences in INDSCAL are constrained to be nonnegative, the tensor basis result does not apply. In fact, it is shown that in this case the number of dimensions needed can be as large as p, the number of matrices analyzed.
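To make the idea of a binary rank-one tensor basis concrete, the following numpy sketch builds one such basis of n(n+1)/2 binary rank-one matrices and decomposes a symmetric matrix over it; this particular choice of basis is illustrative and not necessarily the one constructed in the paper.

```python
import numpy as np

def binary_rank_one_basis(n):
    """One possible set of n(n+1)/2 binary rank-one symmetric matrices:
    e_i e_i' for each i, and (e_i + e_j)(e_i + e_j)' for each i < j."""
    I = np.eye(n)
    basis = [np.outer(I[i], I[i]) for i in range(n)]
    basis += [np.outer(I[i] + I[j], I[i] + I[j])
              for i in range(n) for j in range(i + 1, n)]
    return basis

def coefficients(A):
    """Weights expressing a symmetric A in the basis above."""
    n = A.shape[0]
    # the off-diagonal basis matrices also add 1's on the diagonal,
    # so the diagonal weights compensate for the off-diagonal row sums
    diag = [A[i, i] - (A[i].sum() - A[i, i]) for i in range(n)]
    off = [A[i, j] for i in range(n) for j in range(i + 1, n)]
    return diag + off

A = np.array([[2., 1., 0.],
              [1., 3., 4.],
              [0., 4., 5.]])
B = binary_rank_one_basis(3)
c = coefficients(A)
print(np.allclose(sum(ci * Bi for ci, Bi in zip(c, B)), A))  # True
```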

4.
The conventional approach for testing the equality of two normal mean vectors is to test first the equality of covariance matrices, and if the equality assumption is tenable, then use the two-sample Hotelling T² test. Otherwise one can use one of the approximate tests for the multivariate Behrens–Fisher problem. In this article, we study the properties of the Hotelling T² test, the conventional approach, and one of the best approximate invariant tests (Krishnamoorthy & Yu, 2004) for the Behrens–Fisher problem. Our simulation studies indicated that the conventional approach often leads to inflated Type I error rates. The approximate test not only controls Type I error rates very satisfactorily when covariance matrices were arbitrary but was also comparable with the T² test when covariance matrices were equal.
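For reference, here is a minimal sketch of the two-sample Hotelling T² test with a pooled covariance matrix (the equal-covariance test discussed above); it is a generic textbook implementation with made-up example data, not code from the article.

```python
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Two-sample Hotelling T^2 test assuming equal covariance matrices.
    Returns the T^2 statistic and the p-value from its F transformation."""
    n1, p = x.shape
    n2, _ = y.shape
    d = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2   # F(p, n1 + n2 - p - 1)
    p_value = stats.f.sf(f, p, n1 + n2 - p - 1)
    return t2, p_value

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 3))
y = rng.normal(size=(25, 3)) + 0.5
print(hotelling_t2(x, y))
```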

5.
Consider two independent groups with K measures for each subject. For the jth group and kth measure, let μ_{tjk} be the population trimmed mean, j = 1, 2; k = 1, ..., K. This article compares several methods for testing H₀: μ_{t1k} = μ_{t2k} such that the probability of at least one Type I error is α, and simultaneous probability coverage is 1 − α when computing confidence intervals for μ_{t1k} − μ_{t2k}. The emphasis is on K = 4 and α = .05. For zero trimming the problem reduces to comparing means, but it is well known that when comparing means, arbitrarily small departures from normality can result in extremely low power relative to using, say, 20% trimming. Moreover, when skewed distributions are being compared, conventional methods for comparing means can be biased for reasons reviewed in the article. A consequence is that in some realistic situations, the probability of rejecting can be higher when the null hypothesis is true versus a situation where the means differ by a half standard deviation. Switching to robust measures of location is known to reduce this problem, and combining robust measures of location with some type of bootstrap method reduces the problem even more. Published articles suggest that for the problem at hand, the percentile t bootstrap, combined with a 20% trimmed mean, will perform relatively well, but there are known situations where it does not eliminate all problems. In this article we consider an extension of the percentile bootstrap approach that is found to give better results.
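As background on the kind of method discussed above, here is a sketch of a percentile-t bootstrap confidence interval for the difference of two 20% trimmed means in the single-measure case, using a Yuen-type standard error; it is generic illustrative code, not the article's simultaneous multiple-comparison procedure.

```python
import numpy as np
from scipy import stats

def trimmed_mean_diff_ci(x, y, trim=0.2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-t bootstrap CI for the difference of two trimmed means."""
    rng = np.random.default_rng(seed)

    def estimate(a, b):
        diff = stats.trim_mean(a, trim) - stats.trim_mean(b, trim)
        # Yuen-type standard error from Winsorized variances
        se = 0.0
        for s in (a, b):
            w = stats.mstats.winsorize(s, limits=(trim, trim))
            h = len(s) - 2 * int(trim * len(s))        # effective sample size
            se += w.var(ddof=1) * (len(s) - 1) / (h * (h - 1))
        return diff, np.sqrt(se)

    diff, se = estimate(x, y)
    t_boot = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        db, seb = estimate(xb, yb)
        t_boot.append((db - diff) / seb)
    lo, hi = np.quantile(t_boot, [alpha / 2, 1 - alpha / 2])
    return diff - hi * se, diff - lo * se

rng = np.random.default_rng(1)
x, y = rng.lognormal(size=40), rng.lognormal(size=40) + 0.3
print(trimmed_mean_diff_ci(x, y))
```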

6.
Monotonically convergent algorithms are described for maximizing six (constrained) functions of vectors x, or matrices X with columns x_1, ..., x_r. These functions are h₁(x) = Σ_k (x′A_k x)(x′C_k x)^{−1}; H₁(X) = Σ_k tr (X′A_k X)(X′C_k X)^{−1}; h₁(X) = Σ_k Σ_l (x_l′A_k x_l)(x_l′C_k x_l)^{−1} with X constrained to be columnwise orthonormal; h₂(x) = Σ_k (x′A_k x)²(x′C_k x)^{−1} subject to x′x = 1; H₂(X) = Σ_k tr (X′A_k X)(X′A_k X)(X′C_k X)^{−1} subject to X′X = I; and h₂(X) = Σ_k Σ_l (x_l′A_k x_l)²(x_l′C_k x_l)^{−1} subject to X′X = I. In these functions the matrices C_k are assumed to be positive definite. The matrices A_k can be arbitrary square matrices. The general formulation of the functions and the algorithms allows for application of the algorithms in various problems that arise in multivariate analysis. Several applications of the general algorithms are given. Specifically, algorithms are given for reciprocal principal components analysis, binormamin rotation, generalized discriminant analysis, variants of generalized principal components analysis, simple structure rotation for one of the latter variants, and set component analysis. For most of these methods the algorithms appear to be new; for the others the existing algorithms turn out to be special cases of the newly derived general algorithms. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the author. The author is obliged to Jos ten Berge for stimulating this research and for helpful comments on an earlier version of this paper.

7.
Rigo, Michel (2004). Studia Logica, 76(3), 407–426.
For a given numeration system U, a set X of integers is said to be U-star-free if the language of the normalized U-representations of the elements in X is star-free. Adapting a result of McNaughton and Papert, we give a first-order logical characterization of these sets for various numeration systems including integer base systems and the Fibonacci system. For k-ary systems, the problem of the base dependence of this property is also studied. Finally, the case of k-adic systems is developed.

8.
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M₁ and M₂ when fitting the more restricted model M₀. Assuming several regularity conditions, the rank deficiency of the Jacobian matrix, composed of derivatives of the covariance elements with respect to the union of the free parameters of M₁ and M₂ (which characterizes model M₁₂), is a necessary and sufficient condition for the local equivalence of M₁ and M₂. This condition is satisfied, in practice, when the analysis dealing with the fitting of M₀ predicts that the decreases in the chi-square goodness-of-fit statistic for the fitting of M₁, M₂, or M₁₂ are all equal for any set of sample data, except on differences due to rounding errors. This research was supported by the Foundation of Social-Cultural Sciences which is subsidized by the Dutch Scientific Organization (N.W.O.) under project number 500-278-003. The author wishes to thank Anne Boomsma, Ivo Molenaar, Albert Satorra, and Tom Snijders for their stimulating and crucial comments during the research, and the Editor, Paul Bekker, Henk Broer, and anonymous reviewers for their helpful suggestions.

9.
To test the agreement between two observers who categorize a number of objects when the categories have not been specified in advance, Brennan & Light (1974) developed a statistic A′ and suggested a normal approximation for its distribution. In this paper it is shown that this approximation is inadequate, particularly when one, or both, of the observers place a fairly equal number of objects in all of their categories. A chi-squared approximation to the distribution of A′ is developed and is shown to work well in a variety of situations. The relative powers of A′ and the ordinary X² test for association are dependent on the type of ‘agreement between the observers’ that is assumed. However, a simulation for a fairly general type of agreement indicates that the X² test is more powerful. As the X² test is also much easier to apply, it would seem preferable in most situations.

10.
Particular analytical expressions based on a curve fitting on numerical results for primary dendritic growth have been proposed in the literature to encompass binary alloy systems whose redistribution coefficient (k₀) approaches zero. To date, these expressions have not been checked against experimental results of transient solidification. The present study developed a comparative analysis between the theoretical predictions furnished by such expressions and results from transient solidification experiments with low-k₀ aluminum alloys. The analysis has shown that the proposed approach generally describes the primary dendritic spacing during solidification of Al–Sn alloys, which are characterized by very low k₀ values as well as by very large solidification ranges.

11.
A remarkable difference between the concept of rank for matrices and that for three-way arrays has to do with the occurrence of non-maximal rank. The set of n × n matrices that have a rank less than n has zero volume. Kruskal pointed out that a 2 × 2 × 2 array has rank three or less, and that the subsets of those 2 × 2 × 2 arrays for which the rank is two or three both have positive volume. These subsets can be distinguished by the roots of a certain polynomial. The present paper generalizes Kruskal's results to 2 × n × n arrays. Incidentally, it is shown that two n × n matrices can be diagonalized simultaneously with positive probability. The author is obliged to Joe Kruskal and Henk Kiers for commenting on an earlier draft, and to Tom Wansbeek for raising stimulating questions.

12.
A measure of multiple rank correlation, T²_{y.12}, is proposed for the situation with no tied observations in the variables. The measure is a weighted average of two squared Kendall taus. It is shown that T²_{y.12} is equivalent to a statistic previously proposed by Moran, and thus a new interpretation is given to Moran's statistic. The author wishes to thank Nancy Anderson, Willard Larkin, and Kent Norman for their helpful comments.

13.
Sections 1, 2 and 3 contain the main result, the strong finite axiomatizability of all 2-valued matrices. Since non-strongly finitely axiomatizable 3-element matrices are easily constructed, the result reveals once again the gap between 2-valued and multiple-valued logic. Sec. 2 deals with the basic cases, which include the important F_i from Post's classification. The procedure in Sec. 3 reduces the general problem to these cases. Sec. 4 is a study of basic algebraic properties of 2-element algebras. In particular, we show that equational completeness is equivalent to the Stone property and that each 2-element algebra generates a minimal quasivariety. The results of Sec. 4 will be applied in Sec. 5 to maximality questions and to a matrix-free characterization of 2-valued consequences in the lattice of structural consequences in any language. Sec. 6 takes a look at related axiomatization problems for finite algebras and matrices. We study the notion of a propositional consequence with equality and, among other things, present explicit axiomatizations of 2-valued consequences with equality.

14.
Orthogonal rotation to congruence
Two problems are considered. The first is that of rotating two factor solutions orthogonally to a position where corresponding factors are as similar as possible. A least-squares solution for transformations of the two factor matrices is developed. The second problem is that of rotating a factor matrix orthogonally to a specified target matrix. The solution to the second problem is related to the first. Applications are discussed. This research was supported in part by contract Nonr 228(22) between the Office of Naval Research and the University of Southern California. Portions of this paper were presented at the American Psychological Association Convention, Los Angeles, September 1964.
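For context on the second problem, the classical least-squares solution for rotating a factor matrix orthogonally to a specified target is an SVD-based orthogonal Procrustes rotation; the minimal sketch below is a generic illustration, not the paper's own derivation.

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal T minimizing ||A T - B||_F, i.e., rotating A toward target B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 2))                       # target factor matrix
Q = np.linalg.qr(rng.normal(size=(2, 2)))[0]      # some orthogonal rotation
A = B @ Q.T + 0.01 * rng.normal(size=(6, 2))      # rotated, slightly perturbed copy
T = procrustes_rotation(A, B)
print(np.linalg.norm(A @ T - B))                  # small residual
```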

15.
Contemporary approaches for evaluating the demand for reinforcers use either the Exponential or the Exponentiated model of operant demand, both derived from the framework of Hursh and Silberberg (2008). This report summarizes the strengths and complications of this framework and proposes a novel implementation. This novel implementation incorporates earlier strengths and resolves existing shortcomings that are due to the use of a logarithmic scale for consumption. The Inverse Hyperbolic Sine (IHS) transformation is reviewed and evaluated as a replacement for the logarithmic scale in models of operant demand. Modeling consumption in the “log10-like” IHS scale reflects relative changes in consumption (as with a log scale) and accommodates a true zero bound (i.e., zero consumption values). The presence of a zero bound obviates the need for a separate span parameter (i.e., k), and the span of the model may be more simply defined by maximum demand at zero price (i.e., Q₀). Further, this reformulated model serves to decouple the exponential rate constant (i.e., α) from variations in span, thus normalizing the rate constant to the span of consumption in IHS units and permitting comparisons when spans vary. This model, called the Zero-bounded Exponential (ZBE), is evaluated using simulated and real-world data. The direct reinstatement ZBE model showed strong correspondence with empirical indicators of demand and with a normalization of α (ZBEn) across empirical data that varied in reinforcing efficacy (dose, time to onset of peak effects). Future directions in demand curve analysis are discussed, with recommendations for additional replication and exploration of scales beyond the logarithm when accommodating zero consumption data.
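For readers unfamiliar with the transformation, the sketch below shows one commonly used “log10-like” form of the IHS and the two properties emphasized above: it is defined at zero consumption and tracks log10 for larger values. The exact functional form used in the article should be checked against the source; this is only an illustration.

```python
import numpy as np

def ihs(q):
    """A 'log10-like' inverse hyperbolic sine transformation of consumption q."""
    q = np.asarray(q, dtype=float)
    return np.log10(0.5 * q + np.sqrt(0.25 * q**2 + 1))

consumption = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])
print(ihs(consumption))            # ihs(0) == 0, so zero consumption is representable
print(np.log10(consumption[1:]))   # for larger q, ihs(q) closely tracks log10(q)
```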

16.
A new, elaborated version of a time-quantum model (TQM) is outlined and illustrated by applying it to different experimental paradigms. As a basic prerequisite TQM adopts the coexistence of different discrete time units or (perceptual) intermittencies as constituent elements of the temporal architecture of mental processes. Unlike similar other approaches, TQM assumes the existence of an absolute lower bound for intermittencies, the time-quantum T, as an (approximately) universal constant with a duration of approximately 4.5 ms. Intermittencies of TQM must be multiples T_k = k·T* within the interval T* ≤ T_k ≤ L·T* ≤ M·T*, with T* = q·T and integer q, k, L, and M. Here M denotes an upper bound for multipliers characteristic of individuals, the so-called coherence length; q and L may depend on task, individual, and other factors. A second constraint is that admissible intermittencies must be integer fractions of L, the operative upper bound. In addition, M is assumed to determine the number of elementary information units to be stored in short-term memory.
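As a reading aid, the snippet below enumerates intermittencies under the constraints as reconstructed above (T_k = k·T* with k an integer fraction, i.e. a divisor, of the operative bound L, and L ≤ M); all parameter values are invented for illustration and are not taken from the article.

```python
def admissible_intermittencies(T=4.5, q=2, L=12, M=24):
    """List (k, T_k) pairs with T_k = k * T*, T* = q * T, where k divides the
    operative upper bound L and L does not exceed the coherence length M.
    Parameter values are illustrative only."""
    if L > M:
        raise ValueError("operative bound L may not exceed coherence length M")
    t_star = q * T
    return [(k, k * t_star) for k in range(1, L + 1) if L % k == 0]

for k, t_k in admissible_intermittencies():
    print(f"k = {k:2d}   T_k = {t_k:.1f} ms")
```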

17.
Persi Diaconis (1977). Synthese, 36(2), 271–281.
A geometrical interpretation of independence and exchangeability leads to understanding the failure of de Finetti's theorem for a finite exchangeable sequence. In particular an exchangeable sequence of length r which can be extended to an exchangeable sequence of length k is almost a mixture of independent experiments, the error going to zero like 1/k.

18.
Ayala Cohen (1986). Psychometrika, 51(3), 379–391.
A test is proposed for the equality of the variances of k ≥ 2 correlated variables. Pitman's test for k = 2 reduces the null hypothesis to zero correlation between their sum and their difference. Its extension, eliminating nuisance parameters by a bootstrap procedure, is valid for any correlation structure between the k normally distributed variables. A Monte Carlo study for several combinations of sample sizes and number of variables is presented, comparing the level and power of the new method with previously published tests. Some nonnormal data are included, for which the empirical level tends to be slightly higher than the nominal one. The results show that our method is close in power to the asymptotic tests, which are extremely sensitive to nonnormality, yet it is robust and much more powerful than other robust tests. This research was supported by the fund for the promotion of research at the Technion.
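For the k = 2 case, Pitman's reduction mentioned above can be written down directly: the variances of X and Y are equal exactly when X + Y and X − Y are uncorrelated. Below is a minimal sketch of this classical special case only, not the paper's bootstrap extension.

```python
import numpy as np
from scipy import stats

def pitman_test(x, y):
    """Pitman's test for equal variances of two correlated variables:
    Var(X) = Var(Y) iff corr(X + Y, X - Y) = 0 (normality assumed)."""
    r, p_value = stats.pearsonr(x + y, x - y)
    return r, p_value

rng = np.random.default_rng(0)
x = rng.normal(scale=1.0, size=50)
y = 0.6 * x + rng.normal(scale=1.5, size=50)   # correlated, unequal variances
print(pitman_test(x, y))
```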

19.
A variance-components analysis is presented for paired comparisons in terms of three components: s, the scale value of the stimuli; d, a deviation from the linear model specified by the law of comparative judgment; and b, a binomial error component. Estimates are given for each of the three variances, s², d², and b². Several coefficients, analogous to reliability coefficients, based on these three variances are indicated. The techniques are illustrated in a replicated comparison of handwriting specimens. This research was jointly supported in part by Princeton University, the Office of Naval Research under contract Nonr-1858(15), and the National Science Foundation under grant NSF G-642, and in part by Educational Testing Service. Reproduction in whole or in part is permitted for any purpose of the United States Government. Thanks are due to Ledyard Tucker and Frederic Lord for valuable suggestions on the development presented here.

20.
Score tests for identifying locally dependent item pairs have been proposed for binary item response models. In this article, both the bifactor and the threshold shift score tests are generalized to the graded response model. For the bifactor test, the generalization is straightforward; it adds one secondary dimension associated only with one pair of items. For the threshold shift test, however, multiple generalizations are possible: in particular, conditional, uniform, and linear shift tests are discussed in this article. Simulation studies show that all of the score tests have accurate Type I error rates given large enough samples, although their small-sample behaviour is not as good as that of Pearson's Χ² and M₂ as proposed in other studies for the purpose of local dependence (LD) detection. All score tests have the highest power to detect the LD which is consistent with their parametric form, and in this case they are uniformly more powerful than Χ² and M₂; even wrongly specified score tests are more powerful than Χ² and M₂ in most conditions. An example using empirical data is provided for illustration.
