Similar Articles
20 similar articles found
1.
It is proposed that (a) research in counseling and counseling practice do not generally make explicit their roots in counseling theory, (b) this lack of connectedness to theory may represent a weakness in our theories for failing to be useful, (c) this fault may be a function of our current definitions of theory, and (d) we should focus upon philosophical premises at this time. The philosophical premise of syntony is used to illustrate how many of the commonly accepted assumptions which set our standards are not necessarily so, and how the broadening of such assumptions may encourage more activity in theory development. Counseling approaches which do not qualify as theory must at least qualify in the realm of rationale by making explicit their philosophical or value premises.

2.
We introduce a general framework for reasoning with prioritized propositional data by aggregation of distance functions. Our formalism is based on a possible world semantics, where conclusions are drawn according to the most ‘plausible’ worlds (interpretations), namely: the worlds that are as ‘close’ as possible to the set of premises, and, at the same time, are as ‘faithful’ as possible to the more reliable (or important) information in this set. This implies that the consequence relations that are induced by our framework are derived from a pre-defined metric on the space of interpretations, and inferences are determined by a ranking function applied to the premises. We study the basic properties of the entailment relations that are obtained by this framework, and relate our approach to other methods of maintaining incomplete and inconsistent information, most specifically in the contexts of (iterated) belief revision, consistent query answering in database systems, and integration of prioritized data sources.
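The distance-based semantics described here can be made concrete: rank every interpretation by a weighted sum of its distances to the premises, and draw conclusions from the closest worlds. A minimal propositional sketch, assuming Hamming distance, summation as the aggregation function, and integer weights for reliability (all illustrative choices; the paper's framework is parameterized over the metric and the aggregation):

```python
from itertools import product

def hamming(w1, w2):
    """Distance between two interpretations (dicts atom -> bool)."""
    return sum(w1[a] != w2[a] for a in w1)

def dist_to_premise(world, premise, atoms):
    """Distance from a world to a premise: the minimal Hamming distance
    to any model of the premise (0 if the world already satisfies it)."""
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    return min(hamming(world, m) for m in worlds if premise(m))

def most_plausible(premises, weights, atoms):
    """Rank worlds by the weighted sum of distances to the premises
    (higher weight = more reliable premise); return the closest worlds."""
    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]
    rank = lambda w: sum(wt * dist_to_premise(w, p, atoms)
                         for p, wt in zip(premises, weights))
    best = min(rank(w) for w in worlds)
    return [w for w in worlds if rank(w) == best]

# Inconsistent, prioritized premises: p (weight 2) versus not-p (weight 1).
atoms = ['p']
plausible = most_plausible([lambda w: w['p'], lambda w: not w['p']], [2, 1], atoms)
```

The more reliable premise wins: the only most plausible world satisfies `p`.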

3.
Gabriella Pigozzi, Synthese, 2006, 152(2): 285-298
The aggregation of individual judgments on logically interconnected propositions into a collective decision on the same propositions is called judgment aggregation. Literature in social choice and political theory has claimed that judgment aggregation raises serious concerns. For example, consider a set of premises and a conclusion where the latter is logically equivalent to the former. When majority voting is applied to some propositions (the premises) it may give a different outcome than majority voting applied to another set of propositions (the conclusion). This problem is known as the discursive dilemma (or paradox). The discursive dilemma is a serious problem since it is not clear whether a collective outcome exists in these cases, and if it does, what it is like. Moreover, the two suggested escape-routes from the paradox—the so-called premise-based procedure and the conclusion-based procedure—are not, as I will show, satisfactory methods for group decision-making. In this paper I introduce a new aggregation procedure inspired by an operator defined in artificial intelligence in order to merge belief bases. The result is that we do not need to worry about paradoxical outcomes, since these arise only when inconsistent collective judgments are not ruled out from the set of possible solutions.
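The discursive dilemma can be reproduced in a few lines: majority voting on the premises and majority voting on the logically equivalent conclusion yield opposite collective judgments, even though every individual judge is internally consistent. A sketch using the standard three-judge illustration (the judge profiles are the textbook example, not data from the paper):

```python
def majority(votes):
    """True iff a strict majority of the votes is True."""
    return sum(votes) > len(votes) / 2

# Judges vote on premises p, q and on the conclusion r, where r <=> (p and q).
# Each individual profile is logically consistent.
judges = [
    {'p': True,  'q': True,  'r': True},
    {'p': True,  'q': False, 'r': False},
    {'p': False, 'q': True,  'r': False},
]

p = majority([j['p'] for j in judges])                  # premises pass: 2 of 3
q = majority([j['q'] for j in judges])
premise_based = p and q                                 # derive r from majority premises
conclusion_based = majority([j['r'] for j in judges])   # vote on r directly
```

The premise-based procedure accepts the conclusion while the conclusion-based procedure rejects it, which is exactly the paradox the abstract describes.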

4.
How do logically naive individuals determine that an inference is invalid? In logic, there are two ways to proceed: (1) make an exhaustive search but fail to find a proof of the conclusion and (2) use the interpretation of the relevant sentences to construct a counterexample—that is, a possibility consistent with the premises but inconsistent with the conclusion. We report three experiments in which the strategies that individuals use to refute invalid inferences based on sentential connectives were examined. In Experiment 1, the participants’ task was to justify their evaluations, and the results showed that they used counterexamples more often than any other strategy. Experiment 2 showed that they were more likely to use counterexamples to refute invalid conclusions consistent with the premises than to refute invalid conclusions inconsistent with the premises. In Experiment 3, no reliable difference was detected in the results between participants who wrote justifications and participants who did not.
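The second logical strategy mentioned here, refutation by counterexample, is easy to make concrete for sentential connectives: search the truth assignments for one that satisfies every premise but falsifies the conclusion. A brute-force sketch (encoding sentences as Python lambdas is an illustrative choice, not anything from the paper):

```python
from itertools import product

def counterexample(premises, conclusion, atoms):
    """Return an assignment that makes every premise true and the
    conclusion false; finding one refutes the inference. Returns None
    when no counterexample exists, i.e., the inference is valid."""
    for vals in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, vals))
        if all(p(world) for p in premises) and not conclusion(world):
            return world
    return None

# Invalid inference (affirming the consequent): p -> q, q, therefore p.
cex = counterexample(
    premises=[lambda w: (not w['p']) or w['q'],  # p -> q
              lambda w: w['q']],                 # q
    conclusion=lambda w: w['p'],                 # therefore p?
    atoms=['p', 'q'],
)
```

The search finds the possibility in which `q` holds without `p`, which is consistent with the premises but inconsistent with the conclusion.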

5.
How do reasoners deal with inconsistencies? James (1907) believed that the rational solution is to revise your beliefs and to do so in a minimal way. We propose an alternative: You explain the origins of an inconsistency, which has the side effect of a revision to your beliefs. This hypothesis predicts that individuals should spontaneously create explanations of inconsistencies rather than refute one of the assertions and that they should rate explanations as more probable than refutations. A pilot study showed that participants spontaneously explain inconsistencies when they are asked what follows from inconsistent premises. In three subsequent experiments, participants were asked to compare explanations of inconsistencies against minimal refutations of the inconsistent premises. In Experiment 1, participants chose which conclusion was most probable; in Experiment 2 they rank ordered the conclusions based on their probability; and in Experiment 3 they estimated the mean probability of the conclusions' occurrence. In all three studies, participants rated explanations as more probable than refutations. The results imply that individuals create explanations to resolve an inconsistency and that these explanations lead to changes in belief. Changes in belief are therefore of secondary importance to the primary goal of explanation.

7.
The aim of the paper is to formulate rules of inference for the predicate 'is true' applied to sentences. A distinction is recognised between (ordinary) truth and definite truth and consequently between two notions of validity, depending on whether truth or definite truth is the property preserved in valid arguments. Appropriate sets of rules of inference governing the two predicates are devised. In each case the consequence relation is in harmony with the respective predicate. Particularly appealing is a set of ND rules for ordinary truth in which premises and assumptions play different roles, premises being taken to assert definite truth, assumptions to suppose truth. This set of rules can be said to capture everyday reasoning with truth. Also presented are formal characterisations, in the meta-language and in the object language, of paradoxical and 'truth teller'-like sentences.

8.
The Undecidability of Propositional Adaptive Logic
We investigate and classify the notion of final derivability of two basic inconsistency-adaptive logics. Specifically, the maximal complexity of the set of final consequences of decidable sets of premises formulated in the language of propositional logic is described. Our results show that taking the consequences of a decidable propositional theory is a complicated operation. The set of final consequences according to either the Reliability Calculus or the Minimal Abnormality Calculus of a decidable propositional premise set is in general undecidable, and can be -complete. These classifications are exact. For first order theories even finite sets of premises can generate such consequence sets in either calculus.

9.
In this article, autoregressive models and growth curve models are compared. Autoregressive models are useful because they allow for random change, permit scores to increase or decrease, and do not require strong assumptions about the level of measurement. Three previously presented designs for estimating stability are described: (a) time-series, (b) simplex, and (c) two-wave, one-factor methods. A two-wave, multiple-factor model also is presented, in which the variables are assumed to be caused by a set of latent variables. The factor structure does not change over time, and so the synchronous relationships are temporally invariant. The factors do not cause each other and have the same stability. The parameters of the model are the factor loading structure, each variable's reliability, and the stability of the factors. We apply the model to two data sets. For eight cognitive skill variables measured at four times, the 2-year stability is estimated to be .92 and the 6-year stability is .83. For nine personality variables, the 3-year stability is .68. We speculate that for many variables there are two components: one component that changes very slowly (the trait component) and another that changes very rapidly (the state component); thus, each variable is a mixture of trait and state. Circumstantial evidence supporting this view is presented.

10.
We suggest that single adults in contemporary American society are targets of stereotyping, prejudice, and discrimination, a phenomenon we will call singlism. Singlism is an outgrowth of a largely uncontested set of beliefs, the Ideology of Marriage and Family. Its premises include the assumptions that the sexual partnership is the one truly important peer relationship and that people who have such partnerships are happier and more fulfilled than those who do not. We use published claims about the greater happiness of married people to illustrate how the scientific enterprise seems to be influenced by the ideology. We propose that people who are single, particularly women who have always been single, fare better than the ideology would predict because they do have positive, enduring, and important interpersonal relationships. The persistence of singlism is especially puzzling considering that actual differences based on civil (marital) status seem to be qualified and small, the number of singles is growing, and sensitivity to other varieties of prejudice is acute. By way of explanation, we consider arguments from evolutionary psychology, attachment theory, a social problems perspective, the growth of the cult of the couple, and the appeal of an ideology that offers a simple and compelling worldview.

11.
Zegers' (1986) chance-corrected coefficients of association are derived by alternative methods. A different definition of chance correction is used. It is shown that our correction and that of Zegers are identical for large samples. Three possible assumptions for the derivation of metric coefficients are examined. The first, variable reflection, formulated by Zegers and ten Berge (1985), leads to coefficients that require chance correction. Two other assumptions, zero covariance and covariance reflection, are proposed, and it is shown that the latter two assumptions lead directly to coefficients of identity and proportionality that do not require chance correction (i.e., are identical to the Zegers, 1986, corrected coefficients). We are indebted to Robyn M. Dawes, Carnegie-Mellon University, for stimulating our interest in this project, and for helpful suggestions.

12.
We view a perceptual capacity as a nondeductive inference, represented as a function from a set of premises to a set of conclusions. The application of the function to a single premise to produce a single conclusion is called a "percept" or "instantaneous percept." We define a stable percept as a convergent sequence of instantaneous percepts. Assuming that the sets of premises and conclusions are metric spaces, we introduce a strategy for acquiring stable percepts, called directed convergence. We consider probabilistic inferences, where the premise and conclusion sets are spaces of probability measures, and in this context we study Bayesian probabilistic/recursive inference. In this type of Bayesian inference the premises are probability measures, and the prior as well as the posterior is updated nontrivially at each iteration. This type of Bayesian inference is distinguished from classical Bayesian statistical inference where the prior remains fixed, and the posterior evolves by conditioning on successively more punctual premises. We indicate how the directed convergence procedure may be implemented in the context of Bayesian probabilistic/recursive inference. We discuss how the L∞ metric can be used to give numerical control of this type of Bayesian directed convergence. Copyright 2001 Academic Press.
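The contrast drawn here can be illustrated with a toy discrete update loop in which the posterior of one iteration becomes the prior of the next, and each premise is itself a probability measure ("soft evidence") rather than a hard observation. This is a simplified sketch under those assumptions: pointwise multiplication is an illustrative combination rule, not the paper's measure-theoretic construction.

```python
def normalize(d):
    """Rescale a dict of nonnegative scores into a probability measure."""
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def recursive_update(prior, soft_evidence):
    """One recursive step: the premise is a probability measure over the
    same hypotheses, and the new prior is proportional to the pointwise
    product prior * evidence (an illustrative combination rule)."""
    return normalize({h: prior[h] * soft_evidence[h] for h in prior})

prior = {'H1': 0.5, 'H2': 0.5}
premises = [{'H1': 0.7, 'H2': 0.3},   # each premise is itself a measure
            {'H1': 0.6, 'H2': 0.4}]
for evidence in premises:
    prior = recursive_update(prior, evidence)  # the prior itself evolves
```

Unlike classical conditioning on hard data, the prior here is replaced at every iteration, which is the distinguishing feature the abstract emphasizes.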

13.
The contributions of Murray Sidman to the field of behavior analysis have helped to put the field on a progressive path. In this paper we describe three areas as examples, drawn from the larger set of his notable contributions: the analysis of stimulus equivalence in a way that has fostered a behavior-analytic approach to derived stimulus relations and symbolic meaning; the observation and measurement of individual behavior through time; and his stance against punitive applied methods. In each of these areas Sidman was a dedicated behaviorist, avoiding appeals to mentalistic or transcendental forces, opposing hypothetical mediational accounts, and taking a functional and contextual approach. Clarity of assumptions was at the heart of Sidman's effective scientific practices and there is no reason to think that those same assumptions will not carry us further, as evidence mounts in support of these views on psychological research and practice.

14.
Walter Bossert, Synthese, 2001, 129(3): 343-369
A generalized theory of revealed preference is formulated for choice situations where the consequences of choices from given menus are uncertain. In a nonprobabilistic framework, rational choice behavior can be defined by requiring the existence of a preference relation on the set of possible consequences and an extension rule for this relation to the power set of the set of consequences such that the chosen sets of possible outcomes are the best elements in the feasible set according to this extension rule. Rational choice is characterized under various assumptions on these relations.
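The rationality condition described here, chosen alternatives as the best elements of the feasible set under a preference relation, can be sketched directly. A minimal sketch in which the extension rule to sets of consequences is left implicit and menus are plain sets (the function names and the example relation are illustrative, not from the paper):

```python
def best_elements(menu, weakly_prefers):
    """Rational choice: the chosen alternatives are those weakly
    preferred to every alternative in the menu."""
    return {x for x in menu if all(weakly_prefers(x, y) for y in menu)}

# Illustrative complete preference: a lower number is weakly preferred.
chosen = best_elements({3, 1, 2}, lambda x, y: x <= y)
```

With an incomplete relation the chosen set can be empty, which is why characterizations of rational choice place assumptions on the underlying relations.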

15.
We present a formal analysis of the Cosmological Argument in its two main forms: that due to Aquinas, and the revised version of the Kalam Cosmological Argument more recently advocated by William Lane Craig. We formulate these two arguments in such a way that each conclusion follows in first-order logic from the corresponding assumptions. Our analysis shows that the conclusion which follows for Aquinas is considerably weaker than what his aims demand. With formalizations that are logically valid in hand, we reinterpret the natural language versions of the premises and conclusions in terms of concepts of causality consistent with (and used in) recent work in cosmology done by physicists. In brief: the Kalam argument commits the fallacy of equivocation in a way that seems beyond repair; two of the premises adopted by Aquinas seem dubious when the terms 'cause' and 'causality' are interpreted in the context of contemporary empirical science. Thus, while there are no problems with whether the conclusions follow logically from their assumptions, the Kalam argument is not viable, and the Aquinas argument does not imply a caused origination of the universe. The assumptions of the latter are at best less than obvious relative to recent work in the sciences. We conclude with mention of a new argument that makes some positive modifications to an alternative variation on Aquinas by Le Poidevin, which nonetheless seems rather weak.

16.
Everyday reasoning requires more evidence than raw data alone can provide. We explore the idea that people can go beyond this data by reasoning about how the data was sampled. This idea is investigated through an examination of premise non‐monotonicity, in which adding premises to a category‐based argument weakens rather than strengthens it. Relevance theories explain this phenomenon in terms of people's sensitivity to the relationships among premise items. We show that a Bayesian model of category‐based induction taking premise sampling assumptions and category similarity into account complements such theories and yields two important predictions: First, that sensitivity to premise relationships can be violated by inducing a weak sampling assumption; and second, that premise monotonicity should be restored as a result. We test these predictions with an experiment that manipulates people's assumptions in this regard, showing that people draw qualitatively different conclusions in each case.
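The role of sampling assumptions in Bayesian category-based induction can be sketched with the familiar "size principle": under strong sampling, examples are drawn from the hypothesis itself, so smaller, more specific hypotheses gain likelihood with every consistent example; under weak sampling, examples arrive independently of the hypothesis and the likelihood is flat over consistent hypotheses. A toy sketch (the hypotheses, priors, and items are invented for illustration, not taken from the experiment):

```python
def posterior(hypotheses, prior, data, strong=True):
    """Posterior over hypotheses (each a set of items) given examples.
    Strong sampling: per-example likelihood 1/|h| for consistent h.
    Weak sampling: constant likelihood for consistent h."""
    scores = {}
    for h, items in hypotheses.items():
        if all(d in items for d in data):               # h must cover the data
            scores[h] = prior[h] * ((1 / len(items)) ** len(data) if strong else 1.0)
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

hyps = {'narrow': {'sparrow', 'robin'},
        'broad':  {'sparrow', 'robin', 'eagle', 'ostrich'}}
prior = {'narrow': 0.5, 'broad': 0.5}
data = ['sparrow', 'robin', 'sparrow']

strong_post = posterior(hyps, prior, data, strong=True)   # favors 'narrow'
weak_post = posterior(hyps, prior, data, strong=False)    # stays indifferent
```

Switching off the size principle is exactly the kind of qualitative change in conclusions that manipulating the sampling assumption predicts.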

17.
Evidence of hierarchies in cognitive maps
Previous research suggested that the apparent hierarchical organization of landmarks in an environment will influence subjects’ judgments about spatial characteristics of that environment. We extended this previous work to a natural environment that has no predetermined, well-defined hierarchical structure. Using an algorithm that generates a hierarchy of landmarks from recall protocols, we constructed hypothesized clusterings of landmarks for a set of subjects familiar with the space. Then we tested these hypothesized clusters in a series of tasks, all of which required judgments about distances in the space. The results of these tests suggest that subjects do cluster landmarks on the basis of nonspatial attributes, and that the clusters have consequences for performance in various tasks that require access to spatial information.

18.
Steven D. Hales, Synthese, 1994, 101(2): 273-289
One of the most common views about self-deception ascribes contradictory beliefs to the self-deceiver. In this paper it is argued that this view (the contradiction strategy) is inconsistent with plausible common-sense principles of belief attribution. Other dubious assumptions made by contradiction strategists are also examined. It is concluded that the contradiction strategy is an inadequate account of self-deception. Two other well-known views — those of Robert Audi and Alfred Mele — are investigated and found wanting. A new theory of self-deception relying on an extension of Mark Johnston's subintentional mental tropisms is proposed and defended.

19.
Gerhard Brewka, Studia Logica, 2001, 67(2): 153-165
We show how Poole-systems, a simple approach to nonmonotonic reasoning, can be extended to take meta-information into account adequately. The meta-information is used to guide the choice of formulas accepted by the reasoner as premises. Existence of a consistent set of conclusions is guaranteed by a least fixpoint construction. The proposed formalism has useful applications in defeasible reasoning, knowledge base fusion and belief revision.
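The premise-selection step described here can be caricatured with a greedy pass over prioritized defaults, accepting each default that keeps the accepted set consistent. A minimal sketch only: the actual formalism handles arbitrary formulas and guarantees existence via a least fixpoint construction, whereas this toy restricts premises to propositional literals encoded as nonzero integers:

```python
def consistent(literals):
    """A set of literals is consistent iff it contains no pair x, -x
    (literals are nonzero ints; negation is arithmetic negation)."""
    return not any(-l in literals for l in literals)

def accepted_premises(facts, defaults):
    """Greedy sketch of meta-information-guided premise selection: scan
    the defaults from most to least reliable, accepting each one that
    keeps the premise set consistent. Returns one consistent scenario."""
    current = set(facts)
    for d in defaults:               # defaults assumed sorted by priority
        if consistent(current | {d}):
            current.add(d)
    return current

# Hard fact: 1. Defaults, best first: -1 (conflicts with the fact, so it
# is rejected), then 2 (accepted), then -2 (rejected).
scenario = accepted_premises(facts={1}, defaults=[-1, 2, -2])
```

The priority ordering decides which of the mutually inconsistent defaults survives, which is the role the abstract assigns to meta-information.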

20.
The present study investigated whether the assumptions of an ideal point response process, similar in spirit to Thurstone's work in the context of attitude measurement, can provide viable alternatives to the traditionally used dominance assumptions for personality item calibration and scoring. Item response theory methods were used to compare the fit of 2 ideal point and 2 dominance models with data from the 5th edition of the Sixteen Personality Factor Questionnaire (S. Conn & M. L. Rieke, 1994). The authors' results indicate that ideal point models can provide as good or better fit to personality items than do dominance models because they can fit monotonically increasing item response functions but do not require this property. Several implications of these findings for personality measurement and personnel selection are described.
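The key difference between the two model families can be shown with their item response functions: a dominance IRF rises monotonically in the trait level, while an ideal point (unfolding) IRF peaks where the person's trait level matches the item's location and falls off on both sides. A sketch with a logistic dominance curve and a Gaussian unfolding curve; these functional forms are illustrative, not the specific models fitted in the study:

```python
import math

def dominance_irf(theta, b):
    """Dominance response (2PL-style with unit discrimination): the
    probability of endorsement increases monotonically with theta."""
    return 1 / (1 + math.exp(-(theta - b)))

def ideal_point_irf(theta, b):
    """Ideal point response: endorsement is highest when theta is
    closest to the item location b. A Gaussian kernel is an
    illustrative unfolding choice."""
    return math.exp(-0.5 * (theta - b) ** 2)

# Evaluate both curves for an item located at b = 0 across the trait range.
probs_ip = [ideal_point_irf(t, 0.0) for t in (-2, 0, 2)]   # peaks in the middle
probs_dom = [dominance_irf(t, 0.0) for t in (-2, 0, 2)]    # keeps rising
```

Because the ideal point curve is non-monotone, the model can still fit monotonically increasing items but is not forced to, which is the flexibility the abstract credits for the fit results.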


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号