Similar documents
1.
This paper seeks to meet the need for a general treatment of the problem of error in classification. Within an m-attribute classificatory system, an object's typical subclass is that subclass to which it is most often allocated under repeated experimentally independent applications of the classificatory criteria. In these terms, an error of classification is an atypical subclass allocation. This leads to definition of probabilities O of occasional subclass membership, probabilities T of typical subclass membership, and probabilities E of error or, more generally, of occasional subclass membership conditional upon typical subclass membership. In the relationship f: (O, T, E) the relative incidence of independent O, T, and E values is such that generally one can specify O values given T and E, but one cannot generally specify T and E values given O. Under the restrictions of homogeneity of E values for all members of a given typical subclass, mutual stochastic independence of errors of classification, and suitable conditions of replication, one can find particular systems O = f(T, E) which are solvable for T and E given O. A minimum of three replications of occasional classification is necessary for a solution of systems for marginal attributes, and a minimum of two replications is needed with any cross-classification. Although for such systems one can always specify T and E values given O values, the solution is unique for dichotomous systems only.

With grateful acknowledgement to the Rockefeller Foundation, and to the United States Department of Health, Education, and Welfare, Public Health Service, for N.I.M.H. Grant M-3950.
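Under the homogeneity and independence restrictions described above, the occasional-membership probabilities O follow from T and E by the law of total probability. A minimal sketch of that forward relationship (the function name, variable layout, and numbers below are my own illustration, not the paper's):

```python
def occasional_probs(T, E):
    """Forward relationship O = f(T, E): the probability of allocation to
    subclass j on a single occasion sums, over each typical subclass i, the
    typical-membership probability T[i] weighted by the conditional
    probability E[i][j] of an occasional allocation to j given typical
    membership in i."""
    return [sum(T[i] * E[i][j] for i in range(len(T)))
            for j in range(len(E[0]))]

# Dichotomous example (illustrative numbers): 70% typically in subclass 0,
# with 10% / 20% chances of an atypical (erroneous) allocation.
O = occasional_probs([0.7, 0.3], [[0.9, 0.1],
                                  [0.2, 0.8]])
```

As the abstract notes, the inverse direction, from O back to T and E, requires replicated classifications and is uniquely solvable only for dichotomous systems.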

2.
Abstract

The temperature and time dependence of the d.c. conductivity of undoped hydrogenated amorphous silicon is presented. Measurements of the electronic transport are reported, with particular emphasis on the effects of annealing and cooling the samples. Two regimes of behaviour are observed. When samples are rapidly cooled from 200°C, a non-equilibrium dark conductivity, higher than that corresponding to slow cooling, is observed below a temperature T_E ≈ 145°C. The electronic and atomic structure then slowly relax, and the time dependence of the excess conductivity is well described by a stretched exponential function. The second regime, above T_E, corresponds to a relaxation time short compared to experimental times, and there the conductivity is independent of the order in which annealing temperatures are visited. Thus the thermal equilibrium processes observed in undoped samples are qualitatively very similar to those recently reported in the literature for doped samples.
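The stretched exponential referred to here has the standard form Δσ(t) = Δσ₀ exp[−(t/τ)^β] with 0 < β < 1. A sketch of evaluating that decay law (the parameter values below are illustrative, not the paper's):

```python
import math

def excess_conductivity(t, delta_sigma0, tau, beta):
    """Stretched-exponential relaxation of the excess dark conductivity:
    delta_sigma(t) = delta_sigma0 * exp(-(t/tau)**beta), with 0 < beta < 1."""
    return delta_sigma0 * math.exp(-((t / tau) ** beta))

# At t = tau the excess has decayed to 1/e of its initial value,
# regardless of the stretching exponent beta.
sigma_at_tau = excess_conductivity(100.0, 1.0, 100.0, 0.5)
```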

3.
Eric Barnes 《Erkenntnis》1996,45(1):69-89
Predictivism holds that, where evidence E confirms theory T, E confirms T more strongly when E is predicted on the basis of T and subsequently confirmed than when E is known in advance of T's formulation and used, in some sense, in the formulation of T. Predictivism has lately enjoyed some strong supporting arguments from Maher (1988, 1990, 1993) and Kahn, Landsberg, and Stockman (1992). Despite the many virtues of the analyses these authors provide, it is my view that they (along with all other authors on this subject) have failed to understand a fundamental truth about predictivism: the existence of a scientist who predicted T prior to the establishment that E is true has epistemic import for T (once E is established) only in connection with information regarding the social milieu in which the T-predictor is located and information regarding how the T-predictor was located. The aim of this paper is to show that predictivism is ultimately a social phenomenon that requires a social level of analysis, a thesis I deem social predictivism.

For comments and criticisms I am indebted to Doug Ehring, Mark Heller, Jean Kazez, Patrick Maher, and Alastair Norcross. Special thanks are due to Wayne Woodward for help with the proof in Section 7.

4.
A widespread assumption in the contemporary discussion of probabilistic models of cognition, often attributed to the Bayesian program, is that inference is optimal when the observer's priors match the true priors in the world—the actual “statistics of the environment.” But in fact the idea of a “true” prior plays no role in traditional Bayesian philosophy, which regards probability as a quantification of belief, not an objective characteristic of the world. In this paper I discuss the significance of the traditional Bayesian epistemic view of probability and its mismatch with the more objectivist assumptions about probability that are widely held in contemporary cognitive science. I then introduce a novel mathematical framework, the observer lattice, that aims to clarify this issue while avoiding philosophically tendentious assumptions. The mathematical argument shows that even if we assume that “ground truth” probabilities actually do exist, there is no objective way to tell what they are. Different observers, conditioning on different information, will inevitably have different probability estimates, and there is no general procedure to determine which one is right. The argument sheds light on the use of probabilistic models in cognitive science, and in particular on what exactly it means for the mind to be “tuned” to its environment.

5.
Conclusion  The systems TN and TM show that necessity can be consistently construed as a predicate of syntactical objects, if the expressive/deductive power of the system is deliberately engineered to reflect the power of the original object-language operator. The system TN relies on salient limitations on the expressive power of the language LN through the construction of a quotational hierarchy, while the system TM relies on limiting the scope of the modal axiom schemas to the sublanguage L∞M+, which corresponds exactly with the restrictive hierarchy of LN. The fact that L∞M+ is identical to the image of the metalinguistic mapping C+ from the normal operator system into LM reveals that iterated operator modality is implicitly hierarchical, and that inconsistency is produced by applying the principles of the modal logic to formulas which have no natural analogues in the operator development. Thus the contradiction discovered by Montague can be diagnosed as the result of instantiating the axiom schemas with modally ungrounded formulas, and thereby adding radically new modal axioms to the predicate system.

The predicate treatment of necessity differs significantly from that of the operator in that the cumulative models for the predicate system are strictly first-order. Possible worlds are not used as model-theoretic primitives; rather, alternate models are appealed to in order to specify the extension of N, which is semantically construed as a first-order predicate. In this manner, the intensional aspects of modality are built into the mode of specifying the particular set of objects which the denotation function assigns to N, rather than into the specification of the basic truth conditions for modal formulas. Intensional phenomena are thereby localised to the special requirements for determining the extension of a particular predicate, and this does not constitute a structural modification of the first-order models, but rather limits the relevant class of models to those which possess an appropriate denotation function.

6.
Objective: Compensatory health beliefs (CHBs), defined as beliefs that healthy behaviours can compensate for unhealthy behaviours, may be one possible factor hindering people in adopting a healthier lifestyle. This study examined the contribution of CHBs to the prediction of adolescents’ physical activity within the theoretical framework of the Health Action Process Approach (HAPA).

Design: The study followed a prospective survey design with assessments at baseline (T1) and two weeks later (T2).

Method: Questionnaire data on physical activity, HAPA variables and CHBs were obtained twice from 430 adolescents of four different Swiss schools. Multilevel modelling was applied.

Results: CHBs added significantly to the prediction of intentions and change in intentions, in that higher CHBs were associated with lower intentions to be physically active at T2 and a reduction in intentions from T1 to T2. No effect of CHBs emerged for the prediction of self-reported levels of physical activity at T2 and change in physical activity from T1 to T2.

Conclusion: Findings emphasise the relevance of examining CHBs in the context of an established health behaviour change model and suggest that CHBs are of particular importance in the process of intention formation.

7.
John Bacon 《Synthese》1987,71(1):1-18
Conclusion  My aim has been to adapt Quine's criterion of the ontological commitment of theories couched in standard quantificational idiom to a much broader class of theories by focusing on the set-theoretic structure of the models of those theories. For standard first-order theories, the two criteria coincide on simple entities. Divergences appear as they are applied to higher-order theories and as composite entities are taken into account. In support of the extended criterion, I appeal to its fruits in treating the various examples considered above and to the healthy intuitions of the non-noneists among us. Don't O(m) and E(m) comprise just the things we should have thought existed according to a particular interpretation m of a language or a theory? Whatever the answer (and it will hardly be unanimous), I hope to have pointed the way towards a recognition of ontology as a worthwhile branch of model theory.

Earlier versions of parts of this paper were read at New York University, at the Australasian Association of Philosophy, and at the University of Sydney. I am very grateful to William Barrett, Keith Campbell, Gregory Currie, Kenneth Gemes, Toomas Karmo, and Stephen Read for comments and criticisms.

8.
Kevin Nelson 《Synthese》2009,166(1):91-111
Gott (Nature 363:315–319, 1993) considers the problem of obtaining a probabilistic prediction for the duration of a process, given the observation that the process is currently underway and began a time t ago. He uses a temporal Copernican principle according to which the observation time can be treated as a random variable with uniform probability density. A simple rule follows: with 95% probability, t/39 < T − t < 39t, where T is the unknown total duration of the process and hence T − t is its unknown future duration. Gott claims that this rule is of very general application. In response, I argue that we are usually only entitled to assume approximate temporal Copernicanism. That amounts to taking a probability distribution for the observation time that is, while not necessarily uniform, at least a smooth function. I work from that assumption to carry out Bayesian updating of the probability for process duration, as expressed by my Eq. 11. I find that for a wide range of conditions, processes that have already been underway a long time are likely to last a long time into the future—a qualitative conclusion that is intuitively plausible. Otherwise, however, too much depends on the specifics of various circumstances to permit any simple general rule. In particular, the simple rule proposed by Gott holds only under a very restricted set of conditions.
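Under Gott's uniform assumption the observed fraction r = t/T is uniform on (0, 1), which is where the famous 95% bounds t/39 and 39t come from. A sketch generalising that computation to an arbitrary confidence level (the function name is my own):

```python
def gott_future_interval(t, confidence=0.95):
    """Copernican prediction interval for the future duration T - t of a
    process observed a time t after it began: if r = t/T is uniform on
    (0, 1), then with the given confidence r lies in the central interval
    ((1-c)/2, (1+c)/2), so T - t = t*(1-r)/r lies between the bounds below."""
    lo_q = (1.0 - confidence) / 2.0   # e.g. 0.025 for 95%
    hi_q = (1.0 + confidence) / 2.0   # e.g. 0.975 for 95%
    return (t * lo_q / hi_q, t * hi_q / lo_q)
```

For confidence 0.95 this reproduces Gott's rule: a process observed 39 units after it began has a predicted future duration between 1 and 1521 units. The paper's point is that this inference is sound only under the strict uniformity assumption.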

9.
Eighty-one female students at a German university were asked to indicate in writing (a) how they would come to like, love, and be in love with someone, and (b) how in their case liking, loving, and being in love with someone would come to an end. The responses were analyzed using a comprehensive list of 117 determinants developed for this study, which were grouped into four causal categories: P (person), O (other), P×O (relational), and E (environmental) conditions. Regarding the rise of attraction, the most frequent determinant for liking and for being in love was the existence of positive attributes of O (69% and 63%); for love, it was the existence of positive feelings from O (29%). Regarding the decline of attraction, the most frequent determinant for liking was negative behavior on O's part (42%); for love, abuse of one's trust by O (25%); and for being in love, disillusionment with regard to O (44%). Further analyses (including ANOVAs) involved the mean frequencies for the four causal categories. Concerning the rise of attraction sentiments, P causes were predominant for love, and O causes were predominant for liking and for being in love; P×O causes were particularly infrequent for being in love. Concerning the decline of attraction sentiments, only for liking was one causal category predominant (O causes). E causes were hardly mentioned for either the rise or the decline of attraction. The findings are discussed in the context of both the more traditional research on “objective” determinants of attraction and, in particular, recent research on the subjective (common-sense or implicit) understanding of liking, love, and being in love.

10.

This paper introduces the logic QLETF, a quantified extension of the logic of evidence and truth LETF, together with a corresponding sound and complete first-order non-deterministic valuation semantics. LETF is a paraconsistent and paracomplete sentential logic that extends the logic of first-degree entailment (FDE) with a classicality operator ∘ and a non-classicality operator ∙, dual to each other: while ∘A entails that A behaves classically, ∙A follows from A’s violating some classically valid inferences. The semantics of QLETF combines structures that interpret negated predicates in terms of anti-extensions with first-order non-deterministic valuations, and completeness is obtained through a generalization of Henkin’s method. By providing sound and complete semantics for first-order extensions of FDE, K3, and LP, we show how these tools, which we call here the method of anti-extensions + valuations, can be naturally applied to a number of non-classical logics.


11.
The aim of this study was to compare the coordination dynamics of discrete and rhythmical reaching and grasping movements from a dynamical systems perspective. Previous research from this theoretical perspective had focused on rhythmical actions, and it is unclear to what extent discrete movements are amenable to a similar dynamical systems analysis. Six adult subjects performed prehension in two conditions: a discrete, non-continuous mode and a rhythmical, continuous mode. A 'scanning procedure' was implemented between pre- and post-tests in which the required time of final relative hand closure (Trfc) was systematically varied. It was shown that the error in the reaching and grasping pattern was least at an attractor region and systematically increased with deviation from the attractor. Results also indicated that there were no differences between condition or trial block for the group. However, there were several within-subject effects of interest. The validity of the scanning procedure was found to be questionable in the discrete condition, where four subjects showed differences in Trfc between pre- or post-test and the predicted Trfc of the scanning procedure. Four out of six subjects also had different preferred Trfc values for discrete and rhythmical movement, indicating that individual-specific models might need to be constructed for future dynamical modelling of discrete movement.

PsycINFO classification: 2330

12.
When target accuracy is defined as the probability that an individual will respond to an accuracy task within a fixed distance around the target, then the composite error measures, E and AE, are shown to be fairly strong indicators of target accuracy in a relative sense. When AE and E are compared, AE is shown to be an even stronger accuracy indicator than E for most reasonable accuracy requirements. This, plus the fact that AE has certain desirable properties in ANOVA procedures, suggests that AE is a good, composite measure of target accuracy and should be analyzed first to determine if target accuracy differences exist. Subsequent analyses of bias and/or variability are then recommended.
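For a one-dimensional accuracy task these composite measures are conventionally defined as AE (mean absolute deviation from the target) and E (total error, the root-mean-square deviation from the target, so that E² = CE² + VE²). A sketch under those standard motor-behaviour definitions (this is my illustration, not code from the paper):

```python
import math

def accuracy_measures(scores, target):
    """Return (AE, CE, VE, E) for a list of responses to a target value."""
    n = len(scores)
    # CE: constant error, the mean signed deviation from the target (bias)
    ce = sum(x - target for x in scores) / n
    # VE: variable error, the SD of the scores about their own mean
    mean = target + ce
    ve = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    # AE: absolute error, the mean unsigned deviation from the target
    ae = sum(abs(x - target) for x in scores) / n
    # E: total error, RMS deviation from the target; E**2 = CE**2 + VE**2
    e = math.sqrt(ce ** 2 + ve ** 2)
    return ae, ce, ve, e
```

The decomposition E² = CE² + VE² is what makes the abstract's recommendation natural: analyze a composite measure first, then follow up with separate analyses of bias (CE) and variability (VE).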

13.
A generalized solution of the orthogonal procrustes problem
A solution T of the least-squares problem AT = B + E, given A and B so that trace(E′E) = minimum and T′T = I, is presented. It is compared with a less general solution of the same problem which was given by Green [5]. The present solution, in contrast to Green's, is applicable to matrices A and B which are of less than full column rank. Some technical suggestions for the numerical computation of T and an illustrative example are given.

This paper is based on parts of a thesis submitted to the Graduate College of the University of Illinois in partial fulfillment of the requirements for a Ph.D. degree in Psychology. The work reported here was carried out while the author was employed by the Statistical Service Unit Research, U. of Illinois. It is a pleasure to express my appreciation to Prof. K. W. Dickman, director of this unit, for his continuous support and encouragement in this and other work. I also gratefully acknowledge my debt to Prof. L. Humphreys for suggesting the problem and to Prof. L. R. Tucker, who derived (1.7) and (1.8) in summation notation, suggested an iterative solution (not reported here), and who provided generous help and direction at all stages of the project.
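The least-squares orthogonal T is standardly computed from the singular value decomposition of A′B: if A′B = UΣV′, then T = UV′, a route that, like the solution discussed here, does not require A or B to have full column rank. A minimal numpy sketch of that standard computation:

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Return the T with T'T = I minimizing ||A T - B||_F:
    the orthogonal polar factor of A'B, obtained from its SVD."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Recover a known rotation: B is an exactly rotated copy of A.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # 90-degree rotation
T = orthogonal_procrustes(A, A @ R)
```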

14.
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e., by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that allows one to express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation ⊨ that models strict, probabilistically valid inferences, and a second relation that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred model semantics. A main objective of our approach is to ensure that complete proof systems exist for both entailment relations. This is achieved by allowing probability distributions in our semantic models that use non-standard probability values. A number of results are presented that show that in several important aspects the resulting logic behaves just like a logic based on real-valued probabilities alone.
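The cross-entropy minimization mentioned above has a simple closed form when the new information fixes the probabilities of a partition: minimising KL(q ‖ p) subject to q(B_i) = q_i just rescales p within each cell, which is exactly Jeffrey's rule. A sketch of that special case (the finite dictionary representation and names are my own, far simpler than the paper's first-order framework):

```python
def cross_entropy_update(p, partition, targets):
    """Minimise the KL divergence from prior p subject to the constraints
    q(B_i) = targets[i] for a partition {B_i} of the worlds in p.
    The minimiser rescales p within each cell (Jeffrey's rule):
    q(w) = p(w | B_i) * targets[i] for w in B_i."""
    q = {}
    for cell, target in zip(partition, targets):
        mass = sum(p[w] for w in cell)
        for w in cell:
            q[w] = p[w] * target / mass
    return q

# Shift the probability of the cell {a, b} from 0.7 to 0.5; within-cell
# ratios are preserved, as cross-entropy minimization requires.
q = cross_entropy_update({'a': 0.35, 'b': 0.35, 'c': 0.3},
                         [('a', 'b'), ('c',)], [0.5, 0.5])
```

Direct inference (“70% of As are Bs”, “a is an A”, so probability 0.7) is the degenerate case in which the statistical information fixes the subjective probability outright; the entailment relations of the paper generalize far beyond this sketch.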

15.
This approach does not define a probability measure by syntactical structures. It reveals a link between modal logic and mathematical probability theory. This is shown (1) by adding an operator (and two further connectives and constants) to a system of lower predicate calculus and (2) by regarding the models of that extended system. These models are models of the modal system S5 (without the Barcan formula), where a usual probability measure is defined on their set of possible worlds. Mathematical probability models can be seen as models of S5.

16.
17.
Occupational skin disease (OSD) is common, associated with poor prognosis, and poses a significant burden to the individual and society. We applied the theory of planned behaviour (TPB), the prototype-willingness model (PWM) and the health action process approach (HAPA) to the prediction and explanation of occupationally relevant skin protection behaviour in individuals with OSD. We used a longitudinal design: 150 individuals participating in a 3-week inpatient tertiary prevention programme completed measures assessing the constructs of the TPB, PWM and HAPA at admission (T0), at discharge (T1), and once the individual had returned to work and worked for 4 consecutive weeks (T2; n = 117). Intention was measured at T0 and skin protection behaviour at T2. Path analysis was used to assess the longitudinal associations of the models' constructs with intention and skin protection behaviour. TPB and PWM variables each accounted for 30% of the variance in behaviour, HAPA variables for 33%. While not all predictions were confirmed by the data, all three models are able to inform us about the formation of skin protection intention and behaviour in individuals with OSD. The findings are discussed in light of future interventions and research.

18.
Abstract

The dependence of the normal-state resistivity, the resistive superconducting transition (Tc, ΔTc), and the upper critical-field slope (dHc2/dT|T=Tc) on density has been investigated for several YBa2Cu3O9−x (x ≈ 2.1) samples. The resistivity decreases rapidly with increasing density, whereas Tc and ΔTc are rather insensitive to a change in density. dHc2/dT|T=Tc depends sensitively on the preparation conditions. The implications of these results, both for the evaluation of the parameters relevant to the understanding of the nature of superconductivity and for the technological applications of granular superconductors, are briefly discussed.

19.
Finding the greatest lower bound for the reliability of the total score on a test comprising n non-homogeneous items with dispersion matrix Σx is equivalent to maximizing the trace of a diagonal matrix ΣE with elements θi, subject to ΣE and ΣT = Σx − ΣE being non-negative definite. The cases n = 2 and n = 3 are solved explicitly. A computer search in the space of the θi is developed for the general case. When Guttman's λ4 (maximum split-half coefficient alpha) is not the g.l.b., the maximizing set of θi makes the rank of ΣT less than n − 1. Numerical examples of various bounds are given.

Present affiliation of the first author: St. Hild's College of Education, Durham City, England.

20.
Abstract

It is shown that in the high-Tc YBa2Cu3O~7 superconductor the critical temperature is a function of the orthorhombic distortion (b − a)/a of the unit cell. From the extrapolation of the (b − a)/a ratio against Tc, a maximum critical temperature of 66 K for the tetragonal phase of YBa2Cu3O~7 was predicted. From the correlation between the transition width δTc and the orthorhombic distortion, an upper limit for Tc in the orthorhombic phase of YBa2Cu3O~7 of 94.5 K was deduced.
