Similar documents
20 similar documents found (search time: 31 ms)
1.
We derive several relationships between communalities and the eigenvalues of a p × p correlation matrix Σ under the usual factor analysis model. For suitable choices of j, λ_j(Σ), where λ_j(Σ) is the j-th largest eigenvalue of Σ, provides either a lower or an upper bound to the communalities of some of the variables. We show that for at least one variable, 1 − λ_p(Σ) improves on the use of the squared multiple correlation coefficient as a lower bound. This research was done while the second author was at Tokyo Institute of Technology.
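A numerical illustration of these bounds (not from the paper; the factor model below is simulated and its loadings are arbitrary): for any positive definite Σ, 1/(Σ⁻¹)_ii ≥ λ_p(Σ), so SMC_i ≤ 1 − λ_p(Σ) for every variable, while Σ = ΛΛ′ + Ψ gives λ_p(Σ) ≥ min_i ψ_i and hence 1 − λ_p(Σ) ≤ max_i h_i².

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 6, 2
# loadings chosen so each communality h2_i = sum_j L_ij^2 stays below 1
L = rng.uniform(0.2, 0.6, size=(p, k))
h2 = (L**2).sum(axis=1)                  # true communalities
Sigma = L @ L.T + np.diag(1.0 - h2)      # unit-diagonal correlation matrix

lam_min = np.linalg.eigvalsh(Sigma)[0]   # λ_p(Σ), the smallest eigenvalue
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(Sigma))  # squared multiple correlations

# 1 - λ_p(Σ) dominates every SMC and is itself below the largest communality,
# so for the variable with the largest communality it is the sharper lower bound
print(smc.max() <= 1 - lam_min <= h2.max())
```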

2.
Jürgen Humburg 《Topoi》1986,5(1):39-50
The aim of my book is to explain the content of the different notions of probability. Based on a concept of logical probability, modified as compared with Carnap, we succeed, by means of the mathematical results of de Finetti, in defining the concept of statistical probability.

The starting point is the fundamental idea that certain phenomena are of the same kind, that certain occurrences can be repeated, that certain experiments are identical. We introduce for this idea the notion of a concept K of similarity. From a concept K of similarity we derive logically some probability-theoretic conclusions: if the events E(ν) are similar (of the same kind) on the basis of such a concept K, then intersections of n of these events are equiprobable on the basis of K; in formulae, E(ν₁) ∩ … ∩ E(νₙ) ~_K E(ν′₁) ∩ … ∩ E(ν′ₙ), with ν_i ≠ ν_j and ν′_i ≠ ν′_j for i ≠ j. On the basis of some further axioms, a partial comparative probability structure results from K; it forms the starting point of our further investigations, and we call it logical probability on the basis of K.

We investigate a metrisation of this partial comparative structure, i.e. normed σ-additive functions m_K which are compatible with this structure; we call these functions m_K measure-functions in relation to K. The measure-functions may be interpreted as subjective probabilities of individuals who accept the concept K.

Now it holds good: for each measure-function, the limit of the relative frequencies in a sequence of the E(ν) exists with measure one. In such an event, where all measure-functions coincide, we speak of a quantitative logical probability, which is the common measure of this event. In formulae: l_K(h_n : lim h_n exists) = 1; in words, there is quantitative logical probability one that the limit of the relative frequencies exists. Another way of saying this is that the event (h_n : lim h_n exists) is a maximal element in the comparative structure resulting from K. Therefore we are entitled to introduce this limit and call it statistical probability P.

With the aid of the measure-functions it is possible to calculate the velocity of this convergence. The analogue of the Bernoulli inequation holds true: m_K(|h_n − P| ≤ ε) ≥ 1 − 1/(4nε²). It is further possible to obtain relationships for the concept of statistical independence which are expressed in terms of the comparative probability.

The theory has a special significance for quantum mechanics: the similarity of the phenomena in the domain of quantum mechanics explains their statistical behaviour.

The usual mathematical statistics are explained in my book. But it seems more expedient, on the basis of this new theory, to use besides the notion of statistical probability also the notion of logical probability; the notion of subjective probability has only a heuristic function in my system.

The following dualism is to be noted: the statistical behaviour of similar phenomena may be described, on the one hand, according to the model of classical probability theory, by means of a figure called statistical probability; on the other hand, we may express all formulae by means of a function, called the statistical probability function. This function is defined as the limit of the relative frequencies depending on the respective state of the universe. The statistical probability function is the primary notion; the notion of statistical probability is derived from it, being defined as the value of the statistical probability function at the true, unknown state of the universe.

As far as Hume's problem, the problem of inductive inference, is concerned, the book seems to give an example of how to solve it. The notions developed here, such as concept, measure-function and logical probability, seem to be important beyond the concept of similarity.

The present work is a summary of my book Grundzüge zu einem neuen Aufbau der Wahrscheinlichkeitstheorie [5]. For this reason I have frequently dispensed with providing proofs and refer the interested reader to the book.
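The convergence-rate claim can be checked numerically. On a standard reading, the Bernoulli-type inequation is m_K(|h_n − P| ≤ ε) ≥ 1 − 1/(4nε²), i.e. a Chebyshev bound using p(1 − p) ≤ 1/4. A Monte Carlo sketch (not from the book; the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
P, n, eps, reps = 0.3, 200, 0.1, 20_000

# relative frequencies h_n over `reps` independent runs of n trials
h_n = rng.binomial(n, P, size=reps) / n
dev = np.mean(np.abs(h_n - P) > eps)   # empirical P(|h_n - P| > eps)
bound = 1.0 / (4 * n * eps**2)         # Chebyshev-type bound, = 0.125 here

print(dev <= bound)
```

The empirical deviation probability is far below the bound, as expected: the bound is crude but distribution-free.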

3.
We define a subhierarchy of the infinitely deep languages N described by Jaakko Hintikka and Veikko Rantala. We show that some model-theoretic results well known in the model theory of the ordinary infinitary languages can be generalized to these new languages. Among these are the downward Löwenheim-Skolem and Łoś theorems, as well as some compactness properties.

4.
A principal type-scheme of a λ-term is the most general type-scheme for the term. The converse principal type-scheme theorem (J.R. Hindley, The principal type-scheme of an object in combinatory logic, Trans. Amer. Math. Soc. 146 (1969) 29-60) states that every type-scheme of a combinatory term is a principal type-scheme of some combinatory term. This paper gives a simple proof of the theorem in λ-calculus, by constructing an algorithm which transforms a type assignment to a λ-term into a principal type assignment to another λ-term that has the type as its principal type-scheme. The clearness of the algorithm is due to the characterization theorem of principal type-assignment figures. The algorithm is applicable to BCIW-λ-terms as well. Thus a uniform proof is presented for the converse principal type-scheme theorem for general λ-terms and BCIW-λ-terms.

5.
Fujita  Ken-etsu 《Studia Logica》1998,61(2):199-221
There is an intimate connection between proofs in natural deduction systems and typed lambda calculus. It is well known that in simply typed lambda calculus, the notion of formulae-as-types makes it possible to find fine structure in the implicational fragment of intuitionistic logic, i.e., relevant logic, BCK-logic and linear logic. In this paper, we investigate three classical substructural logics (GL, GLc, GLw) of Gentzen's sequent calculus consisting of implication and negation, which contain some of the right structural rules. In terms of Parigot's λμ-calculus with proper restrictions, we introduce a proof-term assignment to these classical substructural logics. According to these notions, we can classify the λμ-terms into four categories. It is proved that well-typed GLx-λμ-terms correspond to GLx proofs, and that a GLx-λμ-term has a principal type if stratified, where x is nil, c, w or cw. Moreover, we investigate embeddings of the classical substructural logics into the corresponding intuitionistic substructural logics. It is proved that the Gödel-style translations of GLx-λμ-terms are embeddings preserving the substructural logics. As by-products, it is obtained that the inhabitation problem is decidable and that well-typed GLx-λμ-terms are strongly normalizable.

6.
This paper gives a method of estimating the reliability of a test which has been divided into three parts. The parts do not have to satisfy any statistical criteria like parallelism or τ-equivalence. If the parts are homogeneous in content (congeneric), i.e., if their true scores are linearly related, and if the sample size is large, then the method described in this paper will give the precise value of the reliability parameter. If the homogeneity condition is violated, then underestimation will typically result. However, the estimate will always be at least as accurate as coefficient α and Guttman's lower bound λ₃ when the same data are used. An application to real data is presented by way of illustration. Seven different splits of the same test are analyzed. The new method yields remarkably stable reliability estimates across splits, as predicted by the theory. One deviating value can be accounted for by a certain unsuspected peculiarity of the test composition. Neither coefficient α nor λ₃ would have led to the same discovery. Expanded version of a paper given at the Psychometric Society Meeting in Stanford, California, March 1974.
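The congeneric case can be illustrated with a single-factor sketch. The estimator below recovers part loadings from the three inter-part covariances and is one standard congeneric formula, not necessarily the paper's exact method; the loadings, error variances and data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
lam = np.array([0.9, 0.6, 0.4])   # hypothetical congeneric loadings
err = np.array([0.8, 0.7, 0.9])   # error s.d. of the three parts

T = rng.normal(size=n)                                # common true score
X = lam * T[:, None] + err * rng.normal(size=(n, 3))  # three part scores

S = np.cov(X, rowvar=False)
s12, s13, s23 = S[0, 1], S[0, 2], S[1, 2]
# loadings recovered from the three inter-part covariances
l = np.sqrt([s12 * s13 / s23, s12 * s23 / s13, s13 * s23 / s12])
rel_hat = l.sum()**2 / S.sum()    # estimated reliability of X1 + X2 + X3

rel_true = lam.sum()**2 / (lam.sum()**2 + (err**2).sum())
print(abs(rel_hat - rel_true) < 0.02)
```

With a large sample the estimate matches the population reliability, even though the three parts are neither parallel nor τ-equivalent.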

7.
Montague [7] translates English into a tensed intensional logic, an extension of the typed λ-calculus. We prove that each translation reduces to a formula without λ-applications, unique to within change of bound variables. The proof has two main steps. We first prove that translations of English phrases have the special property that arguments to functions are modally closed. We then show that formulas in which arguments are modally closed have a unique fully reduced λ-normal form. As a corollary, translations of English phrases are contained in a simply defined proper subclass of the formulas of the intensional logic. This research is supported in part by National Science Foundation Grant BNS 76-23840.

8.
The introduction of Linear Logic extends the Curry-Howard isomorphism to intensional aspects of typed functional programming. In particular, every formula of Linear Logic tells whether the term it is a type for can be erased/duplicated or not during a computation. So Linear Logic can be seen as a model of a computational environment with explicit control over the management of resources. This paper introduces a typed functional language λ! and a categorical model for it. The terms of λ! encode a version of natural deduction for Intuitionistic Linear Logic in which linear and non-linear assumptions are managed multiplicatively and additively, respectively. Correspondingly, the terms of λ! are built out of two disjoint sets of variables. Moreover, the λ-abstractions of λ! bind variables and patterns. The use of two different kinds of variables and of patterns allows a very compact definition of the one-step operational semantics of λ!, unlike all other extensions of the Curry-Howard isomorphism to Intuitionistic Linear Logic. The language λ! is Church-Rosser and enjoys both strong normalizability and subject reduction. The categorical model induces operational equivalences, for example a set of extensional equivalences. The paper also presents an untyped version of λ! and a type assignment for it, using formulas of Linear Logic as types. The type assignment inherits from λ! all the good computational properties and also enjoys the principal-type property.

9.
Minari  Pierluigi 《Studia Logica》1999,62(2):215-242
We introduce a certain extension of λ-calculus and show that it has the Church-Rosser property. The associated open-term extensional combinatory algebra is used as a basis to construct models for theories of Explicit Mathematics (formulated in the language of "types and names") with positive stratified comprehension. In such models, types are interpreted as collections of solutions (of terms) with respect to a set of numerals. Exploiting extensionality, we prove some consistency results for special ontological axioms which are refutable under elementary comprehension.

10.
This paper is a continuation of the investigation from [13]. The main theorem states that the general and the existential quantifiers are reducible, in the sense of [13], in some Grothendieck toposes. Using this result and Theorems 4.1 and 4.2 of [13], we get the downward Skolem-Löwenheim theorem for semantics in these toposes.

11.
Hoben Thomas 《Psychometrika》1989,54(3):523-530
An old problem in personnel psychology is to characterize distributions of test validity correlation coefficients. The proposed model views histograms of correlation coefficients as observations from a mixture distribution which, for a fixed sample size n, is a conditional mixture distribution h(r|n) = Σ_j π_j h(r; ρ_j, n), where R is the correlation coefficient, the ρ_j are population correlation coefficients and the π_j are the mixing weights. The associated marginal distribution of R is regarded as the parent distribution underlying histograms of empirical correlation coefficients. Maximum likelihood estimates of the parameters π_j and ρ_j can be obtained with an EM algorithm, and tests for the number of components t are achieved after the (one-component) density of R is replaced with a tractable modeling density h(r; ρ_j, n). Two illustrative examples are provided.
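The EM step can be sketched compactly if the exact conditional density h(r; ρ_j, n) is replaced by Fisher's z approximation, atanh(R) ≈ N(atanh(ρ), 1/(n − 3)), which turns the model into a Gaussian mixture with known variance. This is an illustrative simplification, not the paper's estimator; the component values and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60                          # per-study sample size
rho = np.array([0.1, 0.5])      # two population correlations
w = np.array([0.4, 0.6])        # mixing weights

# simulate observed correlations on Fisher's z scale
comp = rng.choice(2, size=2000, p=w)
z = rng.normal(np.arctanh(rho)[comp], 1 / np.sqrt(n - 3))

# EM for a two-component Gaussian mixture with known variance 1/(n-3)
mu, pi, var = np.array([0.0, 0.6]), np.array([0.5, 0.5]), 1 / (n - 3)
for _ in range(200):
    dens = pi * np.exp(-(z[:, None] - mu)**2 / (2 * var))  # unnormalized densities
    resp = dens / dens.sum(axis=1, keepdims=True)          # E-step: responsibilities
    pi = resp.mean(axis=0)                                 # M-step: weights
    mu = (resp * z[:, None]).sum(axis=0) / resp.sum(axis=0)  # M-step: means

rho_hat = np.tanh(mu)           # back-transform the component means
print(np.all(np.abs(np.sort(rho_hat) - rho) < 0.08))
```

With the variance fixed at 1/(n − 3), only the means and weights are updated, so the usual EM degeneracies of free-variance Gaussian mixtures cannot occur.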

12.
13.
Bimbó  Katalin 《Studia Logica》2000,66(2):285-296
Combinatory logic is known to be related to substructural logics. Algebraic considerations of the latter, in particular of two distinct implications (→, ←), led to the introduction of dual combinators in Dunn & Meyer 1997. Dual combinators are "mirror images" of the usual combinators and as such do not constitute an interesting subject of investigation by themselves. However, when combined with the usual combinators (e.g., in order to recover associativity in a sequent calculus), the whole system exhibits new features. A dual combinatory system with weak equality typically lacks the Church-Rosser property, and in general it is inconsistent. In many subsystems terms "unexpectedly" turn out to be weakly equivalent. The paper is a preliminary attempt to investigate some of these issues, as well as to briefly compare function application in symmetric λ-calculus (cf. Barbanera & Berardi 1996) and in dual combinatory logic.

14.
A system of modal logic with the abstraction operator λ is proposed and proved complete. In contrast with a previous one by Stalnaker and Thomason, this system does not require two categories of singular terms.

15.
The λ-continuum of inductive methods was derived from an assumption, called the λ-condition, which says that the probability of finding an individual having property x_j depends only on the number of observed individuals having property x_j and on the total number of observed individuals. So, according to that assumption, all individuals with properties different from x_j have equal weight with respect to that probability and, in particular, it does not matter whether any individual was observed having some property similar to x_j (the most complete proof of this result is presented in Carnap, 1980). The problem thus remained open to find some general condition, weaker than the λ-condition, which would allow for the derivation of probability functions that might be sensitive to similarity. Carnap himself suggested a weakening of the λ-condition which might allow for similarity-sensitive probability functions (Carnap, 1980, p. 45), but he did not find the family of probability functions derivable from that principle. The aim of this paper is to present the family of probability functions derivable from Carnap's suggestion and to show how it is derived. Section 1 presents the general problem of analogy by similarity in inductive logic; Section 2 outlines the notation and the conceptual background involved in the proof; Section 3 gives the proof; Section 4 discusses Carnap's principle and the result; Section 5 is a brief review of previously proposed solutions.

16.
Why just turquoise? Remarks on the evolution of color terms
Summary: The location of the foci of green and blue in the perceptual color solid indicates that there is space for a derived color term between these two hues. Diachronic and synchronic linguistic studies of color-term lexica show that a term for turquoise is likely to develop into a derived basic color term (in the Kay and McDaniel definition), at present in languages of industrialized countries. In addition to Zimmer's (1982) hypothesis that türkis (in German) is the result of a universal production system for color terms, cultural, social, and psychological factors influence the evolution of new basic color terms.

17.
Summary: In this study on Wilde's phenomenon (Wilde 1950), the two components of disparity, one of them processing displacement and the other apparent rotation, are analysed in terms of their dependence on the disparity of the end-lines of the pattern (δ) and on the percentage of magnification (M) of one of the monocular patterns relative to the other. It was found that the component of disparity for displacement, δ′, can be expressed as a linear regression equation δ′ = −a + bδ. The component of disparity for rotation, expressed as a percentage of magnification effective for rotation (M′), can be expressed as M′ = a − b₁δ + b₂M. It was concluded that the two components of disparity are processed through independent parallel channels, the processing of the component of disparity for displacement being the faster process and accounting for the larger part of the total disparity.

18.
David J. Pym 《Studia Logica》1995,54(2):199-230
The λΠ-calculus, a theory of first-order dependent function types in Curry-Howard-de Bruijn correspondence with a fragment of minimal first-order logic, is defined as a system of (linearized) natural deduction. In this paper, we present a Gentzen-style sequent calculus for the λΠ-calculus and prove the cut-elimination theorem. The cut-elimination result builds upon the existence of normal forms for the natural deduction system and can be considered analogous to a proof provided by Prawitz for first-order logic. The type-theoretic setting considered here elegantly illustrates the distinction between the processes of normalization in a natural deduction system and cut-elimination in a Gentzen-style sequent calculus. We consider an application of the cut-free calculus, via the subformula property, to proof-search in the λΠ-calculus. For this application, the normalization result for the natural deduction calculus alone is inadequate, a (cut-free) calculus with the subformula property being required. This paper was written whilst the author was affiliated to the University of Edinburgh, Scotland, U.K., and revised for publication whilst he was affiliated to the University of Birmingham, England, U.K. Presented by Daniele Mundici.

19.
Brogden's coefficient of selective efficiency and Clemans' lambda are more efficient than biserial r when the correlation is high. Some empirical sampling studies are reported comparing the standard errors of these statistics.

20.
A reduction rule is introduced as a transformation of proof figures in implicational classical logic. Proof figures are represented as typed terms in a λ-calculus with a new constant P of type ((α → β) → α) → α. It is shown that all terms with the same type are equivalent with respect to β-reduction augmented by this P-reduction rule; hence all proofs of the same implicational formula are equivalent. It is also shown that strong normalization fails for P-reduction. Weak normalization is shown for P-reduction combined with another reduction rule which simplifies occurrences of ((α → β) → α) → α into an atomic type. This work was partially supported by a Grant-in-Aid for General Scientific Research No. 05680276 of the Ministry of Education, Science and Culture, Japan, and by the Japan Society for the Promotion of Science. Presented by Hiroakira Ono.
