Similar literature
20 similar records found (search time: 31 ms)
1.
A coefficient of association is described for a contingency table containing data classified into two sets of ordered categories. Within each of the two sets, neither the number of categories nor the number of cases per category need be the same. The coefficient equals +1 for perfect positive association and has an expectation of 0 under chance association; in many cases it also has −1 as a lower limit. The limitations of Kendall's τa and τb and of Stuart's τc are discussed, as is the identity of these coefficients with the present one under certain conditions. A computational procedure for the coefficient is given.
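The abstract leaves the coefficient unnamed and omits its formula. As a hedged illustration only, Goodman and Kruskal's γ is a standard coefficient of exactly this kind for doubly ordered tables (pair-based, +1 for perfect positive association, −1 attainable as a lower limit); a minimal sketch:

```python
from itertools import combinations

def gamma(table):
    """Goodman-Kruskal gamma for a contingency table whose rows and
    columns are both ordered categories. table[i][j] is the count of
    cases in row category i and column category j."""
    # Expand the table into one (row, col) pair per case.
    cases = [(i, j) for i, row in enumerate(table)
                    for j, n in enumerate(row) for _ in range(n)]
    concordant = discordant = 0
    for (i1, j1), (i2, j2) in combinations(cases, 2):
        s = (i1 - i2) * (j1 - j2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    # Tied pairs (s == 0) are ignored, so the coefficient reaches +1
    # for perfect positive and -1 for perfect negative association.
    return (concordant - discordant) / (concordant + discordant)

print(gamma([[3, 0], [0, 3]]))   # 1.0  (perfect positive association)
print(gamma([[0, 3], [3, 0]]))   # -1.0 (perfect negative association)
```

Note that γ is undefined when every pair is tied; a production version would guard against a zero denominator.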

2.
Starting from I and Cl, i.e. from the intuitionistic and classical propositional calculi with the substitution rule postulated, and using the sign □ to add a new connective, the following are considered here: Grzegorczyk's logic Grz, the proof logic G, and the proof-intuitionistic logic, set up correspondingly by suitable calculi. For any calculus we consider the set of all formulae of the calculus and the lattice of all logics that are extensions of the logic of the calculus, i.e. sets of formulae containing its axioms and closed with respect to its rules of inference. In the logic G the sign □ is decoded as follows: □A = (A & □A). The result of placing □ in the formula A before each of its subformulae is denoted by TrA. Maps between these lattices are defined, by virtue of which a diagram is constructed. In this diagram some of the maps are isomorphisms, while the others are semilattice epimorphisms that do not commute with the lattice operation +. Besides, the diagram is commutative, and equalities relating the maps and their inverses hold. The latter implies in particular that any superintuitionistic logic is the superintuitionistic fragment of some extension of the proof logic.

3.
Monotonically convergent algorithms are described for maximizing six (constrained) functions of vectors x, or of matrices X with columns x1, ..., xr: h1(x) = Σk (x′Akx)(x′Ckx)⁻¹; H1(X) = Σk tr (X′AkX)(X′CkX)⁻¹; h̃1(X) = Σk Σl (xl′Akxl)(xl′Ckxl)⁻¹, with X constrained to be columnwise orthonormal; h2(x) = Σk (x′Akx)²(x′Ckx)⁻¹, subject to x′x = 1; H2(X) = Σk tr (X′AkX)(X′AkX)′(X′CkX)⁻¹, subject to X′X = I; and h̃2(X) = Σk Σl (xl′Akxl)²(xl′Ckxl)⁻¹, subject to X′X = I. In these functions the matrices Ck are assumed to be positive definite; the matrices Ak can be arbitrary square matrices. The general formulation of the functions and the algorithms allows application of the algorithms to various problems that arise in multivariate analysis. Several applications of the general algorithms are given: specifically, algorithms for reciprocal principal components analysis, binormamin rotation, generalized discriminant analysis, variants of generalized principal components analysis, simple structure rotation for one of the latter variants, and set component analysis. For most of these methods the algorithms appear to be new; for the others, the existing algorithms turn out to be special cases of the newly derived general algorithms. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the author. The author is obliged to Jos ten Berge for stimulating this research and for helpful comments on an earlier version of this paper.
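A toy instance of the simplest of these objectives: for a single k and r = 1, maximizing h1(x) = (x′Ax)(x′Cx)⁻¹ is a generalized Rayleigh-quotient problem. The sketch below is not the paper's algorithm; it merely grid-searches the unit circle in two dimensions (the matrices are hypothetical, with C positive definite as assumed above):

```python
import math

def rayleigh_ratio(A, C, x):
    """Evaluate (x'Ax)/(x'Cx) for 2x2 matrices A, C and a 2-vector x."""
    quad = lambda M, v: sum(v[i] * M[i][j] * v[j]
                            for i in range(2) for j in range(2))
    return quad(A, x) / quad(C, x)

def maximize_ratio(A, C, steps=100000):
    """Grid-search the unit circle for a maximizer of (x'Ax)/(x'Cx).
    The ratio is scale-invariant and even in x, so half the circle
    suffices; C must be positive definite so the denominator is > 0."""
    best_x, best_val = None, -math.inf
    for k in range(steps):
        t = math.pi * k / steps
        x = (math.cos(t), math.sin(t))
        val = rayleigh_ratio(A, C, x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# With C = I the maximum is the largest eigenvalue of A.
A = [[2.0, 0.0], [0.0, 5.0]]
C = [[1.0, 0.0], [0.0, 1.0]]
x, val = maximize_ratio(A, C)
print(round(val, 6))   # 5.0, attained near x = (0, 1)
```

In general the maximizer solves the generalized eigenproblem Ax = λCx; the paper's algorithms handle sums over k and matrix-valued X, which this toy does not.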

4.
The basic models of signal detection theory involve the parametric measure d′, generally interpreted as a detectability index. Given two observers, one might wish to know whether their detectability indices are equal or unequal. Gourevitch and Galanter (1967) proposed a large-sample statistical test of the hypothesis of equal d′ values. In this paper, their large two-sample test is extended to a K-sample detection test. If the null hypothesis d′1 = d′2 = ... = d′K is rejected, one can employ the post hoc confidence-interval procedure described in this paper to locate possible statistically significant sources of variance and differences. In addition, it is shown how one can use the Gourevitch and Galanter statistics to test d′ = 0 for a single individual. This paper was written while the author was associated with the Institute of Human Learning at the University of California at Berkeley.
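A hedged sketch of the two-observer case: d′ = z(H) − z(F), and the difference of two independent d′ estimates divided by its standard error gives a large-sample normal deviate. The variance formula below is the standard delta-method approximation, assumed here rather than quoted from Gourevitch and Galanter; the counts are hypothetical.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal

def d_prime(hits, signal_trials, fas, noise_trials):
    """d' = z(hit rate) - z(false-alarm rate)."""
    H, F = hits / signal_trials, fas / noise_trials
    return N.inv_cdf(H) - N.inv_cdf(F)

def var_d_prime(hits, signal_trials, fas, noise_trials):
    """Large-sample variance of d' (delta-method approximation)."""
    H, F = hits / signal_trials, fas / noise_trials
    return (H * (1 - H) / (signal_trials * N.pdf(N.inv_cdf(H)) ** 2)
            + F * (1 - F) / (noise_trials * N.pdf(N.inv_cdf(F)) ** 2))

def z_two_sample(obs1, obs2):
    """Normal deviate for H0: d'_1 == d'_2, two independent observers;
    each obs = (hits, signal_trials, false_alarms, noise_trials)."""
    diff = d_prime(*obs1) - d_prime(*obs2)
    return diff / (var_d_prime(*obs1) + var_d_prime(*obs2)) ** 0.5

# Identical counts give a test statistic of exactly zero.
print(z_two_sample((80, 100, 20, 100), (80, 100, 20, 100)))  # 0.0
```

The K-sample extension in the paper combines such statistics; extreme rates (H or F equal to 0 or 1) need a continuity correction before z can be taken.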

5.
Summary. An attempt was made to specify the relationship between perceived surface lightness and perceived illumination under stimulus conditions where different combinations of albedo and illuminance gave 10 approximately equal-luminance levels of a test field (TF) on three different (black-, gray- and white-appearing) backgrounds. Two types of category judgments, for TF lightness and overall illumination, were made on a total of 94 TFs by 5 Ss. The results indicated that under conditions where achromatic surface colors appear, the perceptual scission which produces two different perceptual dimensions (lightness and perceived illumination) from one sort of stimulation (luminance) was clearly observed. The relation between the two judgments was consistent with the lightness-illumination invariance hypothesis: as lightness judgments (A) changed from darker to lighter, illumination judgments (I) shifted from brighter to dimmer, the sum of A and I being kept nearly invariant for a given luminance of the TF; the psychological relationship between lightness and illumination, A + I, changed as a linear function of the photometric combination of albedo and illuminance, log A + log I. It was also found that the albedo of the background was an important factor determining the extent of perceptual scission between lightness and illumination.

6.
Latent trait models for binary responses to a set of test items are considered from the point of view of estimating latent trait parameters θ = (θ1, ..., θn) and item parameters β = (β1, ..., βk), where βj may be vector valued. With θ considered a random sample from a prior distribution with parameter φ, the estimation of (β, φ) is studied under the theory of the EM algorithm. An example and computational details are presented for the Rasch model. This work was supported by Contract No. N00014-81-K-0265, Modification No. P00002, from Personnel and Training Research Programs, Psychological Sciences Division, Office of Naval Research. The authors wish to thank an anonymous reviewer for several valuable suggestions.

7.
Deficient sustained attention is a symptom of hyperactivity that can be improved by stimulant medication. Recently, amphetamine has been shown to increase detections during a vigilance task in both normal and hyperactive boys. The present study applied signal detection analysis to the vigilance performance of 15 hyperactive and 14 normal boys divided into two age groups (6–9 and 10–12). A computerized continuous performance test was administered under amphetamine and placebo. Overall group comparisons indicated that perceptual sensitivity (d′) was higher for the normal boys and the older groups, and analysis of drug treatments showed that amphetamine significantly increased d′. Interactions between drugs and age groups demonstrated that amphetamine affected the younger boys to a significantly greater degree than the older children for both d′ and response bias (β). It is notable that the results were essentially parallel for normal and hyperactive children.

8.
Summary. An attempt was made to examine how the photometric equation luminance (L) = albedo (A) × illuminance (I) could be solved perceptually when a test field (TF) was seen not as figure but as ground. A gray disk with two black or white patches was used as the TF. Illuminance of the TF was varied over 2.3 log units, and TF albedo was varied from 2.5 to 8.0 in Munsell value. Albedos of the black- and white-appearing patches were 1.5 and 9.5 in Munsell value, respectively. Two types of category judgments, for apparent TF lightness (A) and apparent overall illumination (I), were made on a total of 40 TFs (5 illuminances × 4 TF albedos × 2 patch albedos). The results indicated that when the black patches were added to the TF, A was indistinguishable from I, whereas when the white patches were placed on the TF, A and I could be distinguished from each other. The Gelb effect was interpreted as a manifestation of such A–I scission. It was concluded, therefore, that as far as the Gelb effect is observed, the perceptual system can solve the equation L = A × I, in the sense that for a fixed L the product of A and I remains constant.

9.
In this paper, maximum-likelihood estimates are obtained for covariance matrices having the Guttman quasi-simplex structure under each of the following null hypotheses: (a) the covariance matrix Σ can be written as TΔT′ + Γ, where Δ and Γ are both diagonal matrices with unknown elements and T is a known lower triangular matrix; and (b) the covariance matrix Σ* is expressible as TΔ*T′ + σ²I, where σ² is an unknown scalar. The linear models from which these covariance structures arise are also stated, along with the underlying assumptions. Two likelihood-ratio tests are constructed, one for each of the above null hypotheses, against the alternative hypothesis that the population covariance matrix is simply positive definite and has no particular pattern. A numerical example is provided to illustrate the test procedure. Possible applications of the proposed test are also suggested. Adapted from portions of the author's dissertation under the same title submitted to the Department of Psychology, University of North Carolina, in partial fulfillment of the requirements for the Ph.D. degree. The author wishes to express his gratitude to his thesis chairman Dr. R. Darrell Bock and to his committee members Professors Samarendra Nath Roy, Lyle V. Jones, Thelma G. Thurstone, and Dorothy Adkins. Indebtedness is also acknowledged to Dr. Somesh Das Gupta, who was quite helpful during the initial stage of the study. Formerly at the Department of Psychology, Indiana University. The author is grateful to both Indiana University and the University of North Carolina for the support extended to him during his doctoral studies.

10.
This paper surveys the various forms of the Deduction Theorem for a broad range of relevant logics. The logics range from the basic system B of Routley–Meyer through to the system R of relevant implication, and the forms of the Deduction Theorem are characterized by the various formula representations of rules that are either unrestricted or restricted in certain ways. The formula representations cover the iterated form, A1 → .A2 → . ... .An → B; the conjunctive form, A1 & A2 & ... & An → B; the combined conjunctive and iterated form; enthymematic versions of these three forms; and the classical implicational form, A1 & A2 & ... & An ⊃ B. The concept of a general enthymeme is introduced, and the Deduction Theorem is shown to apply for rules essentially derived using Modus Ponens and Adjunction only, with logics containing either (A → B) & (B → C) → .A → C or A → B → .B → C → .A → C. I acknowledge help from anonymous referees for guidance in preparing Part II, and especially for the suggestion that Theorem 9 could be expanded to fully contraction-less logics.

11.
Conclusion: It follows from the theorems proved that if M = ⟨Q, ...⟩ (where Q = {0, q1, q2, ..., qs}) is a machine of the class MF, then there exist machines Mi such that Mi(1, c) = M(qi, c) and Qi = {0, 1, 2, ..., s+1} (i = 1, 2, ..., s). Thus, if the way in which a machine assigns to an initial content of memory c ∈ C a final one c′ ∈ C is regarded as the only essential property of the machine, then we can deal only with machines of the form M = ⟨{0, 1, 2, ..., s}, ...⟩ and the corresponding processes (t) (where t = 1, c, c ∈ C). Such an approach can simplify the problem of defining particular machines of the class MF, and of composing and simplifying them. Received 19 January 1970.

12.
The linear regression model y = x′β + ε is reanalyzed. Taking the modest position that x′β is an approximation of the best predictor of y, we derive the asymptotic distribution of b and R² under mild assumptions. The method of derivation yields an easy answer to the estimation of β from a data set which contains incomplete observations, where the incompleteness is random.
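When the incompleteness is random, the simplest consistent strategy is to estimate from the complete cases only. A minimal single-predictor sketch of that idea (hypothetical data, not the paper's derivation):

```python
def ols_complete_case(xs, ys):
    """Simple-regression slope and intercept computed from the complete
    pairs only; None marks a missing value. When data are missing at
    random, the complete cases still give consistent estimates."""
    pairs = [(x, y) for x, y in zip(xs, ys)
             if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    b = sxy / sxx                 # slope estimate
    return b, my - b * mx         # (slope, intercept)

xs = [0.0, 1.0, None, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 4.0, 5.0, None, 9.0]
b, a = ols_complete_case(xs, ys)
print(b, a)   # complete pairs lie exactly on y = 1 + 2x -> 2.0 1.0
```

The paper's contribution is the asymptotic distribution theory for such estimates, which this sketch does not attempt.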

13.
In an earlier paper [Psychometrika, 31, 1966, p. 147], Srivastava obtained a test for the hypothesis H0: Σ = θ0Σ0 + ... + θlΣl, where the Σi are known matrices, the θi are unknown constants, and Σ is the unknown (p × p) covariance matrix of a random variable x (with p components) having a p-variate normal distribution. The test therein was obtained under the condition that Σ0, Σ1, ..., Σl form a commutative linear associative algebra and that a certain vector, dependent on these, has non-negative elements. In this paper it is shown that this last condition is always satisfied in the special situation (of importance in structural analysis in psychometrics) where Σ0, Σ1, ..., Σl are the association matrices of a partially balanced association scheme. This research was partially supported by the U.S. Air Force under Grant No. AF33(615)-3231, monitored by the Aero Space Research Labs. Now at Colorado State University.

14.
In this paper we show that some standard topological constructions may be fruitfully used in the theory of closure spaces (see [5], [4]). These possibilities are exemplified by the classical theorem on the universality of the Alexandroff cube for T0-closure spaces. It turns out that the closure space of all filters in the lattice of all subsets forms a generalized Alexandroff cube that is universal for T0-closure spaces. By this theorem we obtain the following characterization of the consequence operator of classical logic: if X is a countable set and C: P(X) → P(X) is a closure operator on X, then C satisfies the compactness theorem iff the closure space ⟨X, C⟩ is homeomorphically embeddable in the closure space of the consequence operator of classical logic. We also prove that for every closure space X with a countable base, such that the cardinality of X is not greater than 2^ω, there exist a subset X′ of the irrationals and a subset X″ of the Cantor set such that X is both a continuous image of X′ and a continuous image of X″. We assume the reader is familiar with the notions in [5].

15.
In this note, we study four implicational logics B, BI, BB′ and BB′I. In [5], Martin and Meyer proved that a formula α is provable in BB′ if and only if α is provable in BB′I and α is not of the form β → β. Though this gave a positive solution to the P–W problem, their method was semantical and not easy to grasp. We give a syntactical proof of the syntactical relation between the BB′ and BB′I logics. It also includes a syntactical proof of Powers and Dwyer's theorem, which is proved semantically in [5]. Moreover, we establish the same relation between the B and BI logics as between the BB′ and BB′I logics. This relation seems to say that B logic is meaningful, and so we think that B logic is the weakest among meaningful logics. Therefore, by Theorem 1.1, our Gentzen-type system for BI logic may be regarded as the most basic among all meaningful logics. It should be mentioned here that the first syntactical proof of the P–W problem was given by Misao Nagayama [6]. Presented by Hiroakira Ono.

16.
The relationship between psychological stress and lymphocytic 5′-ectonucleotidase, an enzyme marker for lymphocyte differentiation, was studied. Lymphocytic 5′-ectonucleotidase was decreased significantly, by about twofold, in persons experiencing psychological stress, with a corresponding change in Total Mood Disturbance scores on the Profile of Mood States. Enzyme values were reversible in that they returned to normal once the stress had been reduced. Administration of high doses of ascorbate to severely depressed patients also normalized 5′-ectonucleotidase activities, implying that low enzyme values in stressed persons may be mediated by oxygen-radical damage. This finding was consistent with previous reports of heightened inflammatory responses in depressed patients. The primary cause of lowered 5′-ectonucleotidase during stress may be a breakdown in the homeostatic mechanisms of the hypothalamic-pituitary-adrenal axis and immune system, resulting in lymphoid-tissue resistance to corticosteroids. It is suggested that this lowering of lymphocyte 5′-ectonucleotidase may contribute to stress-mediated immune suppression by inhibiting lymphocyte maturation.

17.
My thesis is that some methodological ideas of the Poznań school, i.e. the principles of idealization and concretization (factualization) and the correspondence principle, can be represented rather successfully using the relations of theoretization and specialization of revised structuralism. Let ⟨n(i), t(j)⟩ (i = 1, ..., m; j = 1, ..., k) denote the conceptual apparatus of a theory T, and a class M = {⟨D, n(i), t(j)⟩} (i = 1, ..., m; j = 1, ..., k) the models of T. The n-components refer to the values of dependent variables and the t-components to the values of independent variables of the theory; the n- and t-components in turn represent appropriate concepts. Consider T* as a conceptual enrichment of T with concepts ⟨n(i*), t(j*)⟩ (i < i* or j < j*) and models M* = {⟨D*, n(i*), t(j*)⟩}. If the classes M and M* are suitably related, then the situation illustrates both the theoretization-relation of (revised) structuralism and the factualization-principle of the Poznań school. Assume now that the concepts n(i), t(j) of T are, for some i, j, operationalized using special assumptions generating appropriate empirical values n′ and t′ for these concepts. Let M′ denote the class {⟨D, ..., n′, ..., t′, ...⟩} formed by substituting n′ and t′ for the values of the concepts n(i), t(j) in the elements of M. If the classes M and M′ are related in a suitable way, then the situation is an example of both the specialization-relation of (revised) structuralism and the concretization-principle of the Poznań school. The correspondence principle in turn can be represented as a limiting case of the theoretization-relation of (revised) structuralism. Many thanks to my anonymous referees for critical and fruitful comments, and special thanks to Dr. Carol Norris for correcting the language of this paper.

18.
Predicate modal formulas with non-modalized quantifiers (call them Q-formulas) are considered as schemata of arithmetical formulas, where □ is interpreted as the provability predicate of some fixed correct extension T of arithmetic. A method is given for constructing, by Kripke-like countermodels of a certain type, arithmetical examples for Q-formulas that are (1) non-provable in T and (2) false. Assuming the means of T to be strong enough to solve the (undecidable) problem of derivability in QGL, the Q-fragment of the predicate version of the logic GL, we prove the recursive enumerability of the sets of Q-formulas all of whose arithmetical examples are (1) T-provable, (2) true. In particular, the first set is shown to be exactly QGL, and the second to be exactly the Q-fragment of the predicate version of Solovay's logic S.

19.
Evandro Agazzi, Erkenntnis, 1985, 22(1–3): 51–77
Until the middle of the present century it was a commonly accepted opinion that theory change in science was the expression of cumulative progress, consisting in the acquisition of new truths and the elimination of old errors. Logical empiricists developed this idea through a deductive model, saying that a theory T′ superseding a theory T must be able logically to explain whatever T explained, and something more as well. Popper too shared this model, but stressed that T′ explains the old known facts in its own new way. The further pursuit of this line quickly led to the thesis of the non-comparability or incommensurability of theories: if T and T′ are different, then the very concepts which have the same denomination in both actually have different meanings; in such a way any sentence whatever has different meanings in T and in T′ and cannot serve to compare them. Owing to this, the deductive model was abandoned as a tool for understanding theory change and scientific progress, and other models were proposed by people such as Lakatos, Kuhn, Feyerabend, Sneed and Stegmüller. The common feature of all these new positions may be seen in the claim that no possibility exists of interpreting theory change in terms of the cumulative acquisition of truth. It seems to us that the older and the newer positions are one-sided, and, in order to eliminate their respective shortcomings, we propose to interpret theory change in a new way.

The starting point consists in recognizing that every scientific discipline singles out its specific domain of objects by selecting a few specific predicates for its discourse. Some of these predicates must be operational (that is, directly bound to testing operations), and they determine the objects of the theory concerned. In the case of a transition from T to T′, we must consider whether or not the operational predicates remain unchanged, in the sense of being still related to the same operations.

If they do not change in their relation to operations, then T and T′ are comparable (and may sometimes appear as compatible, sometimes as incompatible). If the operational predicates are not all identical in T and T′, the two theories show a rather high degree of incommensurability, and this happens because they do not refer to the same objects. Theory change means in this case change of objects. But now we can see that even incommensurability is compatible with progress conceived as the accumulation of truth. Indeed, T and T′ remain true about their respective objects (T′ does not disprove T), and the global amount of truth acquired is increased.

In other words, scientific progress does not consist in a purely logical relationship between theories, and moreover it is not linear. Yet it exists and may even be interpreted as an accumulation of truth, provided we do not forget that every scientific theory is true only about its own specific objects.

It may be pointed out that the solution advocated here relies upon a limitation of the theory-ladenness of scientific concepts, which involves a reconsideration of their semantic status and a new approach to the question of theoretical concepts. First of all, the feature of being theoretical is attributed to a concept not absolutely but relatively, yet in a sense different from Sneed's: indeed every theory is basically characterized by its operational concepts, and the non-operational ones are said to be theoretical, this distinction clearly depending on every particular theory. For the operational concepts it happens that their mean-

20.
Five different ability estimators—maximum likelihood [MLE(θ)], weighted likelihood [WLE(θ)], Bayesian modal [BME(θ)], expected a posteriori [EAP(θ)] and the standardized number-right score [Z(θ)]—were used as scores for conventional, multiple-choice tests. The bias, standard error and reliability of the five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. The results indicated that ability estimates based on BME(θ), EAP(θ) or WLE(θ) were reasonably unbiased for the range of abilities corresponding to the difficulty of a test, and that their standard errors were relatively small. Also, they were as reliable as the old standby—the number-right score.
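EAP(θ), for example, is the posterior mean of θ given the response pattern. A sketch for a 2-parameter logistic model with a standard-normal prior and simple grid quadrature (the item parameters here are hypothetical, chosen only for illustration):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response to an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap(responses, items, grid_lo=-4.0, grid_hi=4.0, n=81):
    """Expected-a-posteriori ability estimate: posterior mean of theta
    on a quadrature grid under a standard-normal prior.
    responses: list of 0/1 scores; items: list of (a, b) pairs."""
    num = den = 0.0
    for k in range(n):
        theta = grid_lo + (grid_hi - grid_lo) * k / (n - 1)
        prior = math.exp(-0.5 * theta * theta)     # unnormalized N(0,1)
        like = 1.0
        for u, (a, b) in zip(responses, items):
            p = p_2pl(theta, a, b)
            like *= p if u == 1 else (1.0 - p)
        w = prior * like
        num += theta * w
        den += w
    return num / den

items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]   # hypothetical (a, b)
print(eap([1, 1, 1], items))   # positive: all-correct pulls theta up
print(eap([0, 0, 0], items))   # negative, mirror image of the above
```

Unlike MLE(θ), the posterior mean stays finite for perfect and zero scores, which is one reason the Bayesian estimators behave well in the comparison above.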

