20 similar documents found (search time: 31 ms)
1.
Miklós Ferenczi 《Studia Logica》2007,87(1):1-11
It is known that every α-dimensional quasi-polyadic equality algebra (QPEA_α) can be considered as an α-dimensional cylindric algebra satisfying the merry-go-round properties. The converse of this proposition fails to be true. It is investigated in the paper how to get algebras in QPEA from algebras in CA. Instead of QPEA, the class of the finitary polyadic equality algebras (FPEA) is investigated; this class is definitionally equivalent to QPEA. It is shown, among others, that from every algebra in CA a β-dimensional algebra in QPEA_β can be obtained; moreover, the algebra obtained is representable in a sense.
Presented by Daniele Mundici
Supported by the OTKA grants T0351192, T43242.
2.
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default,” on the grounds that there is no information which would make one doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and the ordinary situations in which we find ourselves.
Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion,
but where it is possible that adding some further information to those very same premises could make one want to retract the
original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning.
Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged
existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions
we are enjoined to apply the default statement to the object. But further information may later tell us that the object is
in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning.
The informal notion has been seen as central to a number of areas of scholarly investigation, and we canvass some of them
before turning our attention to its role in AI. It is because ordinary people so cleverly and effortlessly use default reasoning
to solve interesting cognitive tasks that nonmonotonic formalisms were introduced into AI, and we argue that this is a form
of psychologism, despite the fact that it is not usually recognized as such in AI.
We close by mentioning some of the results from our empirical investigations that we believe should be incorporated into nonmonotonic
formalisms.
3.
Stephen A. Yachanin 《Current Psychology》1986,5(1):20-29
Two experiments investigated the influence of rule content and instructions on subjects’ ability to reason about conditional
rules. Experiment 1 had subjects determine whether two rules were “true or false” or whether two rules were being “violated.”
Subjects had direct experience with one rule and indirect experience with the other. Performance was superior with direct
experiential rules and with violation instructions. Experiment 2 was conducted to determine whether violation instructions
could facilitate correct selections with content for which subjects had no experience. Subjects were presented with a familiar
and unfamiliar rule. The results were consistent with those of Experiment 1. The possibility of a cognitive trade-off between
rule familiarity and instructions is discussed.
This research was conducted at Bowling Green State University as part of the author’s doctoral dissertation. An earlier version
was presented at the Annual Meeting of the Midwestern Psychological Association, Chicago, May 1983.
4.
Franz Huber 《Journal of Philosophical Logic》2007,36(5):511-538
This paper starts by indicating the analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel,
1945) as presented in Huber (submitted). There I argue contra Carnap (1962, Section 87) that Hempel felt the need for two concepts of confirmation: one aiming at plausible theories and another aiming
at informative theories. However, he also realized that these two concepts are conflicting, and he gave up the concept of
confirmation aiming at informative theories. The main part of the paper consists in working out the claim that one can have
Hempel’s cake and eat it too — in the sense that there is a logic of theory assessment that takes into account both of the
two conflicting aspects of plausibility and informativeness. According to the semantics of this logic, α is an acceptable theory for evidence β if and only if α is both sufficiently plausible given β and sufficiently informative about β. This is spelt out in terms of ranking functions (Spohn, 1988) and shown to represent the syntactically specified notion of an assessment relation. The paper then compares these acceptability
relations to explanatory and confirmatory consequence relations (Flach, 2000) as well as to nonmonotonic consequence relations (Kraus et al., 1990). It concludes by relating the plausibility-informativeness approach to Carnap’s positive relevance account, thereby shedding
new light on Carnap’s analysis as well as solving another problem of confirmation theory.
A precursor of this paper has appeared as “The Logic of Confirmation and Theory Assessment” in L. Běhounek & M. Bílková (eds.),
The Logica Yearbook 2004, Prague: Filosofia, 2005, 161–176.
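The ranking-function semantics mentioned in the abstract can be made concrete with a small numerical example. This is a generic illustration of Spohn-style ranking functions, not Huber's specific assessment relation: the three worlds, their ranks, and the helper names are made up here.

```python
# A ranking function assigns each world a grade of disbelief
# (0 = maximally plausible). A proposition's rank is the minimum over its
# worlds, and conditional ranks follow kappa(A | B) = kappa(A & B) - kappa(B).
kappa = {"w1": 0, "w2": 1, "w3": 2}  # hypothetical worlds and ranks

def rank(prop):
    # prop is a set of worlds; the empty proposition gets infinite rank.
    return min(kappa[w] for w in prop) if prop else float("inf")

def conditional_rank(A, B):
    # Degree of disbelief in A given B.
    return rank(A & B) - rank(B)

A = {"w2", "w3"}
B = {"w1", "w2"}
print(rank(A))                 # 1
print(conditional_rank(A, B))  # 1
```

Huber's acceptability condition ("sufficiently plausible given β and sufficiently informative about β") can then be read as threshold conditions on conditional ranks of this kind.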
5.
Daniel D. Hutto 《Philosophia》2009,37(4):629-652
It is possible to pursue philosophy with a clarificatory end in mind. Doing philosophy in this mode reduces neither to engaging in therapy nor to theorizing. This paper defends the possibility of this distinctive kind of philosophical activity and
gives an account of its product—non-theoretical insights—in an attempt to show that there exists a third, ‘live’ option for
understanding what philosophy has to offer. It responds to criticisms leveled at elucidatory philosophy by defenders of extreme
therapeutic readings and clearly demonstrates that in rejecting the latter one cannot assume Wittgenstein’s approach to philosophy
was theoretically based by default.
6.
Many statistics packages print skewness and kurtosis statistics with estimates of their standard errors. The function most
often used for the standard errors (e.g., in SPSS) assumes that the data are drawn from a normal distribution, an unlikely
situation. Some textbooks suggest that if the statistic is more than about 2 standard errors from the hypothesized value (i.e.,
an approximate value for the critical value from the t distribution for moderate or large sample sizes when α = 5%), the hypothesized value can be rejected. This is an inappropriate
practice unless the standard error estimate is accurate and the sampling distribution is approximately normal. We show distributions
where the traditional standard errors provided by the function underestimate the actual values, often being 5 times too small,
and distributions where the function overestimates the true values. Bootstrap standard errors and confidence intervals are
more accurate than the traditional approach, although still imperfect. The reasons for this are discussed. We recommend that
if you are using skewness and kurtosis statistics based on the 3rd and 4th moments, bootstrapping should be used to calculate standard errors and confidence intervals, rather than the traditional standard errors. Software for this article, written in the free statistical package R, provides these estimates.
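The bootstrap procedure the abstract recommends can be sketched in a few lines. This is a hedged illustration, not the article's R software: the `skewness` and `bootstrap_se` helpers, the exponential toy data, and the textbook √(6/n) approximation for the normal-theory standard error are all choices made for this sketch.

```python
import math
import random

def skewness(xs):
    # Moment-based sample skewness: m3 / m2**1.5.
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def bootstrap_se(xs, stat, reps=2000, seed=0):
    # Standard deviation of the statistic over resamples drawn with replacement.
    rng = random.Random(seed)
    n = len(xs)
    ests = [stat([xs[rng.randrange(n)] for _ in range(n)]) for _ in range(reps)]
    mean = sum(ests) / reps
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / (reps - 1))

rng = random.Random(1)
data = [rng.expovariate(1.0) for _ in range(200)]  # deliberately skewed sample
print("normal-theory SE approx:", math.sqrt(6 / len(data)))
print("bootstrap SE:", bootstrap_se(data, skewness))
```

For skewed data such as this exponential sample, the two estimates can differ noticeably, which is the abstract's point.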
7.
SAS and SPSS macros to calculate standardized Cronbach’s alpha using the upper bound of the phi coefficient for dichotomous items
Cronbach’s α is widely used in social science research to estimate the internal consistency reliability of a measurement
scale. However, when items are not strictly parallel, the Cronbach’s α coefficient provides a lower-bound estimate of true
reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach’s
α for a scale with dichotomous items can be improved by using the upper bound of coefficient ϕ. SAS and SPSS macros have been
developed in this article to obtain standardized Cronbach’s α via this method. The simulation analysis showed that Cronbach’s
α from upper-bound ϕ might be appropriate for estimating the real reliability when standardized Cronbach’s α is problematic.
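For orientation, standardized Cronbach's α is computed from the average inter-item correlation r̄ as kr̄ / (1 + (k − 1)r̄). The sketch below shows that baseline formula only; it does not implement the article's upper-bound-ϕ correction for dichotomous items, and the function names and toy scores are invented for this example.

```python
import math

def pearson(x, y):
    # Plain Pearson correlation between two equal-length score vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def standardized_alpha(items):
    # items: list of k score vectors, one per item, all the same length.
    k = len(items)
    rs = [pearson(items[i], items[j])
          for i in range(k) for j in range(i + 1, k)]
    r_bar = sum(rs) / len(rs)
    return k * r_bar / (1 + (k - 1) * r_bar)

scores = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 6]]  # made-up data
print(standardized_alpha(scores))
```

The article's correction would replace each inter-item ϕ with its upper bound before averaging; the surrounding machinery stays the same.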
8.
Based on a close study of benchmark examples in default reasoning, such as Nixon Diamond, Penguin Principle, etc., this paper
provides an in depth analysis of the basic features of default reasoning. We formalize default inferences based on Modus Ponens
for Default Implication, and mark the distinction between “local inferences” (to infer a conclusion from a subset of given
premises) and “global inferences” (to infer a conclusion from the entire set of given premises). These conceptual analyses
are captured by a formal semantics that is built upon the set-selection function technique. A minimal logic system M of default
reasoning that accommodates Modus Ponens for Default Implication and suitable for local inferences is proposed, and its soundness
is proved.
__________
Translated from Zhexue Yanjiu 哲学研究 (Philosophical Studies), 2003 (special issue) by Ye Feng
9.
10.
Selective Revision
We introduce a constructive model of selective belief revision in which it is possible to accept only a part of the input
information. A selective revision operator ∘ is defined by the equality K ∘ α = K * f(α), where * is an AGM revision operator
and f a function, typically with the property ⊢ α → f(α). Axiomatic characterizations are provided for three variants of selective
revision.
This revised version was published online in June 2006 with corrections to the Cover Date.
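The equality K ∘ α = K * f(α) can be illustrated with a toy model. This is a deliberate simplification, not the paper's AGM construction: beliefs are sets of literals, `agm_star` merely removes contradicted literals before adding the input, and the filter `f` keeps a subset of the input's literals (so the accepted part f(α) is indeed a consequence of α, matching ⊢ α → f(α)).

```python
# Toy belief states: sets of literals such as ("p", True).
def agm_star(K, A):
    # Crude stand-in for an AGM revision operator *: drop literals of K
    # that contradict the input A, then add A.
    conflicting = {(atom, not val) for (atom, val) in A}
    return (K - conflicting) | A

def selective_revise(K, A, f):
    # K o A = K * f(A): only the filtered part of the input is accepted.
    return agm_star(K, f(A))

K = {("p", True), ("q", True)}          # current beliefs
A = {("p", False), ("r", True)}         # incoming information
f = lambda A: {(a, v) for (a, v) in A if a == "r"}  # hypothetical filter

print(selective_revise(K, A, f))  # keeps p and q, adds r
```

Because f discards the part of the input about p, the agent retains its prior belief in p instead of revising by the full, partly untrusted input.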
11.
Das prädiskursive Einverständnis. Wissenschaftlicher Wahrheitsbegriff und prozedurale Rechtfertigung
Armin Grunwald 《Journal for General Philosophy of Science》1998,29(2):205-223
The pre-discursive agreement: theory of scientific truth and procedural justification. On the basis of the constructive philosophy of science, attention is focused on the pre-discursive elements of discursive theories of truth. Using a pragmatic approach, it is shown that a foundation of those pre-discursive elements, such as discourse rules or the basic terminology, is possible even though the discourse rules are not available at this level. Propositions which can be shown to be true in the presented theory always describe a know-how rather than a knowledge about the world. Finally, the relevance of the presented analysis for prototheories of scientific disciplines is investigated.
This revised version was published online in August 2006 with corrections to the Cover Date.
12.
Marie-Louise Käsermann Andreas Altorfer Klaus Foppa Stefan Jossen Heinrich Zimmermann 《Behavior research methods》2000,32(1):33-46
The drawbacks of traditional research into emotional processes have led us to develop a set of methodologies for investigating them in everyday face-to-face communication. The conceptual basis of these procedures is a model of the eliciting conditions of emotional processes as well as a conceptualization of the emotional processes themselves. On the basis of the assumption of conversation as a rule-governed process, one can describe its default temporal, formal, and functional features, for which we use the MAS EDIT and SEQ programs, and the minimal model of communicative exchange, respectively. Violations of these default rules can be identified as unexpected/temporally unpredictable events eliciting emotionalization. The nature of emotionalization is determined by the psychological principle of “standard and deviation.” Its investigation under natural conditions requires the following: a noninvasive method of data acquisition (including procedures for rejecting faulty or missing values), measurement (high-resolution recording of physiological, psychomotor, and vocal variables), and the (nonstatistical) construction of an inventory of “relevant effects” (contrastive and template analysis). Finally, we depict three routes of investigating time courses of activation changes as dependent and independent variables and as a target of modification and reflection.
13.
Ruth Manor 《Synthese》2006,153(2):171-186
The present paper offers a pragmatic solution to the Heap Paradox, based on the idea that vague predicates are “indexical” in the
sense that their denotation does not only depend on the context of their use, but it is a function of the context. The analysis
is based on the following three claims. The borderlines of vague terms are undetermined in the sense that though they may
be determined in some contexts, they may differ from one context to the next. Vagueness serves an important communicative
function, enabling speakers to identify entities as objects (as things we can talk about) in terms of some quantitative differences
between the “object” and its background in the context. Thus, in some contexts we can naturally partition the group of men
uniquely so as to distinguish the bald from the not-bald. Whether a man with a given hair number is among the bald in a given
context depends not only on his own hair number but also on the hair number of others in that context. This provides the background
for the claim that when we assert that John is bald, we presuppose that there is a unique demarcation to the bald in that
context. I consider the truth of the Paradox’s statements in contexts where the presupposition is true and in contexts where
it is false. The analysis yields that the contradiction is avoided because though each of the statements is often true, never
are all the sentences in the Paradox true together.
14.
István Aranyosi 《Axiomathes》2009,19(2):223-224
Tobias Hansson Wahlberg argues in a recent article that the truth of “Hesperus is Phosphorus” depends on the assumption that
the endurance theory of persistence is true. I argue that the premise Wahlberg’s conclusion is based upon leads to absurd
consequences; therefore, nothing recommends it. As a consequence, “Hesperus is Phosphorus” has to be true, if it is true,
regardless of which theory of persistence one is committed to.
15.
Maxwell J. Cresswell 《Studia Logica》2006,82(3):307-327
The possible-worlds semantics for modality says that a sentence is possibly true if it is true in some possible world. Given
classical propositional logic, one can easily prove that every consistent set of propositions can be embedded in a ‘maximal
consistent set’, which in a sense represents a possible world. However the construction depends on the fact that standard
modal logics are finitary, and it seems false that an infinite collection of sets of sentences each finite subset of which
is intuitively ‘possible’ in natural language has the property that the whole set is possible. The argument of the paper is
that the principles needed to show that natural language possibility sentences involve quantification over worlds are analogous
to those used in infinitary modal logic.
16.
KENNETH JUNGE 《Scandinavian journal of psychology》1985,26(1):285-287
It is asserted that there are two kinds of belief-feeling, i.e., feeling (confident) that something is true or real. The difference between α-feeling and β-feeling of belief is analogous to the difference between having a sensory impression and merely imagining the impression. An α-belief is a concomitant correlate of affect and it tends to be stronger than the corresponding β-belief.
17.
Knowledge-based programs (KBPs) are a powerful notion for expressing action policies in which branching conditions refer to
implicit knowledge and call for a deliberation task at execution time. However, branching conditions in KBPs cannot refer
to possibly erroneous beliefs or to graded belief, such as
“if my belief that φ holds is high then do some action α else perform some sensing action β”.
The purpose of this paper is to build a framework where such programs can be expressed. In this paper we focus on the execution
of such a program (a companion paper investigates issues relevant to the off-line evaluation and construction of such programs).
We define a simple graded version of doxastic logic KD45 as the basis for the definition of belief-based programs. Then we
study the way the agent’s belief state is maintained when executing such programs, which calls for revising belief states
by observations (possibly unreliable or imprecise) and progressing belief states by physical actions (which may have normal
as well as exceptional effects).
* A preliminary and shorter version of this paper appeared in the Proceedings of the 16th European Conference on Artificial Intelligence (ECAI-04), pp. 368–372 (Laverny and Lang 2004).
18.
In conceptual combinations such as peeled apples, two kinds of features are potentially accessible: phrase features and noun features. Phrase features are true only of the phrase (e.g., “white”), whereas noun features are true of both the phrase and the head noun (e.g., “round”). When people comprehend such combinations, phrase features are verified more quickly and more accurately than noun features. We examine relevance as an explanation for this phrase feature superiority. If relevance is the critical factor, then contexts that explicitly make noun features relevant and phrase features irrelevant should reverse the phrase feature superiority (i.e., they should make noun features easier to verify than phrase features). Consistent with the relevance hypothesis, brief contexts that made noun features relevant also made those noun features more accessible than phrase features, and vice versa. We conclude that the phrase feature superiority effect is attributable to the discourse strategy of assigning relevance to modifiers in combinations, unless a context indicates otherwise.
19.
Fixpoint semantics are provided for ambiguity blocking and propagating variants of Nute’s defeasible logic. The semantics
are based upon the well-founded semantics for logic programs. It is shown that the logics are sound with respect to their
counterpart semantics and complete for locally finite theories. Unlike some other nonmonotonic reasoning formalisms such as
Reiter’s default logic, the two defeasible logics are directly skeptical and so reject floating conclusions. For defeasible
theories with transitive priorities on defeasible rules, the logics are shown to satisfy versions of Cut and Cautious Monotony.
For theories with either conflict sets closed under strict rules or strict rules closed under transposition, a form of Consistency
Preservation is shown to hold. The differences between the two logics and other variants of defeasible logic—specifically
those presented by Billington, Antoniou, Governatori, and Maher—are discussed.
20.
Tianqun Pan 《Frontiers of Philosophy in China》2010,5(4):666-673
When a person performs a certain action, it signifies that he is causing a certain event to occur. The action therefore conveys a certain true sentence. Playing a game is a mutual activity in which the listener and the speaker undertake an exchange
through a linguistic dialogue or communicate through action. Because of the peculiar nature of the action, the actions in
games belong to an activity where the speaker speaks “true words” and the listener hears “true words.” A static game is a
process through which the participants are simultaneously “speaking” and “listening”; and a dynamic game is a process where
speaking and listening take place in turn. Each step of a dynamic game is a “speaking-listening” exchange. Through “listening”
and “speaking,” changes in the epistemic states of the participants occur. Of course, the degree of change depends on the
type of game being played. In a dynamic game, each participant proceeds through a process of induction, and thus forms new
epistemic states.