Similar Documents
1.
Zwart and Franssen’s impossibility theorem reveals a conflict between the possible-world-based content-definition and the possible-world-based likeness-definition of verisimilitude. In Sect. 2 we show that the possible-world-based content-definition violates four basic intuitions of Popper’s consequence-based content-account of verisimilitude, and therefore cannot be said to be in the spirit of Popper’s account, although this is the opinion of some prominent authors. In Sect. 3 we argue that in consequence-accounts, content-aspects and likeness-aspects of verisimilitude are not in conflict with each other, but in agreement. We explain this fact by pointing to the deep difference between possible-world-accounts and consequence-accounts, which lies not in the difference between syntactic (object-language) and semantic (meta-language) formulations, but in the difference between ‘disjunction-of-possible-worlds’ and ‘conjunction-of-parts’ representations of theories. Drawing on earlier work, we explain in Sect. 4 how the shortcomings of Popper’s original definition can be repaired by what we call the relevant element approach. We propose a quantitative likeness-definition of verisimilitude based on relevant elements which provably agrees with the qualitative relevant content-definition of verisimilitude on all pairs of comparable theories. We conclude the paper with a plea for consequence-accounts and a brief analysis of the problem of language-dependence (Sect. 6).

2.
The so-called Preface Paradox seems to show that one can rationally believe two logically incompatible propositions. We address this puzzle, relying on the notions of truthlikeness and approximate truth as studied within the post-Popperian research programme on verisimilitude. In particular, we show that adequately combining probability, approximate truth, and truthlikeness leads to an explanation of how rational belief is possible in the face of the Preface Paradox. We argue that our account is superior to other solutions of the paradox, including a recent one advanced by Hannes Leitgeb (Analysis 74.1).

3.
In this paper, we show that Arrow’s well-known impossibility theorem is instrumental in bringing the ongoing discussion about verisimilitude to a more general level of abstraction. After some preparatory technical steps, we show that Arrow’s requirements for voting procedures in social choice are also natural desiderata for a general verisimilitude definition that places content and likeness considerations on the same footing. Our main result states that no qualitative unifying procedure of a functional form can simultaneously satisfy the requirements of Unanimity, Independence of irrelevant alternatives and Non-dictatorship at the level of sentence variables. By giving a formal account of the incompatibility of the considerations of content and likeness, our impossibility result makes it possible to systematize the discussion about verisimilitude, and to understand it in more general terms.

4.
I. A. Kieseppä, Synthese, 1996, 107(3): 421–438
J. P. Zamora Bonilla's methodological approach to truthlikeness is evaluated critically. On a more general level, various senses in which the theory of truthlikeness could be seen as a theory concerned with methodology are distinguished, and it is argued that providing special sciences with methodological tools is unrealistic as an aim of the theory of verisimilitude. Rather, when developing this theory, one should rest content with the more modest aim of conceptual analysis, or of providing explications for the relational concept of being closer to the truth. In addition, some remarks are made on the difficulties which the similarity approach to truthlikeness has in realizing this aim, and which are caused by the important role that Hintikka's constituents have in it.

5.
Many studies have explored travelers’ perceptions of self-driving cars (or autonomous vehicles, AVs) and their potential impacts. However, medium-term modifications in activity patterns (such as increasing trip frequencies and changing destinations) have been less explored. Using 2017–2018 survey data collected in the US state of Georgia, this paper (1) measures (at a general level) how people expect their activity patterns to change in a hypothetical all-AV era; (2) identifies population segments having similar profiles of expected changes; and (3) further profiles each segment on the basis of attitudinal, sociodemographic, and geographic characteristics. In the survey, respondents were asked to express their expectations regarding 16 potential activity modifications induced by AVs. We first conducted an exploratory factor analysis (EFA) to reduce the dimensionality of the activity-change vector characterizing each individual, and estimated non-mean-centered (NMC) factor scores (which have been rarely used in applied psychology). The EFA solution identified four dimensions of activity change: distance, time flexibility, frequency, and long distance/leisure. Next, we clustered Georgians with respect to these four-dimensional expectation vectors. The cluster solution uncovered six segments: no change, change unlikely, more leisure/long distance, longer trips, more travel, and time flexibility & more leisure/long distance. Using NMC factor scores revealed considerably more inertia with respect to expectations for change than would have been apparent from the usual mean-centered scores. Finally, the various segments exhibit distinctive demographics and general attitudes. For example, those in the more leisure/long distance cluster tend to be higher income and are more likely to be Atlanta-region residents compared to other clusters, while those in the no change and change unlikely clusters tend to be older and are more likely to be rural residents.

6.
A recent controversy in the field of depth perception has highlighted an important aspect of model testing concerning a model's complexity, defined as the prior propensity of the model to fit arbitrary data sets. The present article introduces an index of complexity, called the mean minimum distance, defined as the average squared distance between an arbitrary data point and the prediction range of the model. It may also be expressed as a dimensionless quantity called the scaled mean minimum distance. For linear models, theoretical values for the scaled mean minimum distance and the variance of the scaled minimum distance can be readily obtained and compared against empirical estimates obtained from fits to random data. The approach is applied to resolving the question of the relative complexity of the Linear Integration model and the Fuzzy Logic of Perception model, both of which have been the subject of controversy in the field of depth perception. It is concluded that the two models are equally complex. Received: 10 October 1998 / Accepted: 22 December 1998
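For a linear model, the prediction range is the column space of the design matrix, so the mean minimum distance can be estimated by Monte Carlo as the average squared residual of random data projected onto that subspace. The following Python sketch is illustrative only; the function name, the standard-normal test data, and the sample sizes are assumptions, not the article's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_min_distance(X, n_samples=20000):
    """Monte Carlo estimate of the mean squared distance between a
    random data vector and the prediction range (column space) of a
    linear model with design matrix X."""
    n = X.shape[0]
    # Orthogonal projector onto the model's prediction subspace.
    Q, _ = np.linalg.qr(X)
    P = Q @ Q.T
    data = rng.standard_normal((n_samples, n))
    resid = data - data @ P  # component lying outside the prediction range
    return np.mean(np.sum(resid**2, axis=1))

# A linear model with n = 10 observations and k = 3 free parameters:
X = rng.standard_normal((10, 3))
d = mean_min_distance(X)
# For standard-normal data the expected squared residual is n - k = 7,
# so the estimate should land close to 7.
print(round(d, 1))
```

Dividing the estimate by the expected squared norm of the data would give a dimensionless quantity in the spirit of the scaled mean minimum distance.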

7.
Philosophers of science as divergent as the inductivist Carnap and the deductivist Popper share the notion that the (logical) content of a proposition is given by its consequence class. I claim that this notion of content is (a) unintuitive and (b) inappropriate for many of the formal needs of philosophers of science. The basic problem is that given this notion of content, for any arbitrary p and q, (p ∨ q) will count as part of the content of both p and q. In other words, any arbitrary p and q share some common content. This notion of content has disastrous effects on, for instance, Carnap's attempts to explicate the notion of confirmation in terms of probabilistic favorable relevance, and Popper's attempts to define verisimilitude. After briefly reviewing some of the problems of the traditional notion of content I present an alternative notion of (basic) content which (a) better fits our intuitions about content and (b) better serves the formal needs of philosophers of science.
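The claim that (p ∨ q) belongs to the consequence class of any p can be checked mechanically with a brute-force truth-table test. A minimal Python sketch (the helper names are mine, not from the paper):

```python
from itertools import product

def entails(premise, conclusion, atoms):
    """Brute-force propositional entailment: the conclusion holds in
    every truth-value assignment that makes the premise true."""
    assignments = (dict(zip(atoms, vals))
                   for vals in product([False, True], repeat=len(atoms)))
    return all(conclusion(v) for v in assignments if premise(v))

p = lambda v: v["p"]
q = lambda v: v["q"]
p_or_q = lambda v: v["p"] or v["q"]

# (p ∨ q) is a consequence of p alone and of q alone, so on the
# consequence-class notion of content it is content shared by p and q.
print(entails(p, p_or_q, ["p", "q"]), entails(q, p_or_q, ["p", "q"]))  # True True
```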

8.
If X is a set and L a lattice, then an L-subset, or fuzzy subset, of X is any map from X to L [11]. In this paper we extend some notions of recursion theory to fuzzy set theory; in particular, we define and examine the concept of almost decidability for L-subsets. Moreover, we examine the relationship between imprecision and decidability. Namely, we prove that there exist infinitely indeterminate L-subsets with no more precise decidable versions, and classical subsets whose unique shaded decidable versions are the L-subsets that are almost-everywhere indeterminate.
This research was supported by M. P. I. of Italy (60% and 40%, 1986).

9.
When someone hosts a party, when governments choose an aid program, or when assistive robots decide what meal to serve to a family, decision-makers must determine how to help even when their recipients have very different preferences. Which combination of people’s desires should a decision-maker serve? To provide a potential answer, we turned to psychology: What do people think is best when multiple people have different utilities over options? We developed a quantitative model of what people consider desirable behavior, characterizing participants’ preferences by inferring which combination of “metrics” (maximax, maxsum, maximin, or inequality aversion [IA]) best explained participants’ decisions in a drink-choosing task. We found that participants’ behavior was best described by the maximin metric, describing the desire to maximize the happiness of the worst-off person, though participant behavior was also consistent with maximizing group utility (the maxsum metric) and, to a lesser extent, the IA metric. Participant behavior was consistent across variation in the agents involved and tended to become more maxsum-oriented when participants were told they were players in the task (Experiment 1). In later experiments, participants maintained maximin behavior across multi-step tasks rather than shortsightedly focusing on the individual steps therein (Experiments 2 and 3). By repeatedly asking participants what choices they would hope for in an optimal, just decision-maker, and carefully disambiguating which quantitative metrics describe these nuanced choices, we help constrain the space of what behavior we desire in leaders, artificial intelligence systems helping decision-makers, and the assistive robots and decision-makers of the future.
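The candidate metrics are easy to state computationally. A minimal Python sketch (the drink options and utilities are invented, and the inequality-aversion form below is a simplified stand-in, not the exact IA model fitted in the paper):

```python
def maximax(u):
    return max(u)          # best-off person's utility

def maxsum(u):
    return sum(u)          # total group utility

def maximin(u):
    return min(u)          # worst-off person's utility

def inequality_aversion(u, alpha=1.0):
    # Simplified IA: total utility minus a penalty on the spread.
    return sum(u) - alpha * (max(u) - min(u))

def best_option(options, metric):
    """Pick the option whose utility vector scores highest under the metric."""
    return max(options, key=lambda o: metric(o[1]))[0]

# Utilities of three recipients for each drink (invented numbers):
options = [("tea", [3, 3, 3]), ("coffee", [9, 1, 2]), ("cake", [10, 0, 0])]
print(best_option(options, maximin),   # tea    (protects the worst-off)
      best_option(options, maxsum),    # coffee (largest total utility)
      best_option(options, maximax))   # cake   (largest single utility)
```

The three metrics pick three different options here, which is exactly the kind of disambiguating choice set the inference in the paper relies on.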

10.
Popper's definition of verisimilitude was criticized for its paradoxical consequences in the case of false theories. The aim of this paper is to show that the paradoxes disappear if the falsity content of a theory is defined with the help of dCn or Cn⁻¹.
To the memory of Jerzy Słupecki.
I am grateful to David Pearce, Gerhard Schurz, Peter Simons, Maciej Spasowski and Jan Zygmunt for their helpful discussions of the issues analysed in this paper. In particular, Zygmunt's and Spasowski's comments enabled me to correct several mistakes in earlier drafts.

11.
A theory or model of cause such as Cheng's power (p) allows people to predict the effectiveness of a cause in a different causal context from the one in which they observed its actions. Liljeholm and Cheng demonstrated that people could detect differences in the effectiveness of the cause when causal power varied across contexts of different outcome base rates, but that they did not detect similar changes when only the cause–outcome contingency, Δp, but not power, varied. However, their procedure allowed participants to simplify the causal scenarios and consider only a subsample of observations with a base rate of zero. This confounds p, Δp, and the probability of an outcome (O) given a cause (C), P(O|C). Furthermore, the contingencies that they used confounded p and P(O|C) in the overall sample. Following the work of Liljeholm and Cheng, we examined whether causal induction in a wider range of situations follows the principles suggested by Cheng. Experiments 1a and 1b compared the procedure used by Liljeholm and Cheng with one that did not allow the sample of observations to be simplified. Experiments 2a and 2b compared the same two procedures using contingencies that controlled for P(O|C). The results indicated that, if the possibility of converting all contexts to a zero base rate situation was avoided, people were sensitive to changes in P(O|C), p, and Δp when each of these was varied. This is inconsistent with Liljeholm and Cheng's conclusion that people detect only changes in p. These results question the idea that people naturally extract the metric or model of cause from their observation of stochastic events and then, reasonably exclusively, use this theory of a causal mechanism, or for that matter any simple normative theory, to generalize their experience to alternative contexts.
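For a generative cause, Cheng's power rescales the contingency Δp = P(O|C) − P(O|¬C) by the headroom the base rate leaves for the cause to act: p = Δp / (1 − P(O|¬C)). A minimal Python sketch of why the same Δp can carry different power in different base-rate contexts (the numbers are illustrative, not stimuli from the experiments):

```python
def delta_p(p_o_given_c, p_o_given_notc):
    """Cause-outcome contingency: Δp = P(O|C) - P(O|¬C)."""
    return p_o_given_c - p_o_given_notc

def causal_power(p_o_given_c, p_o_given_notc):
    """Cheng's generative causal power: Δp rescaled by the headroom
    1 - P(O|¬C) that the base rate leaves for the cause to act."""
    return delta_p(p_o_given_c, p_o_given_notc) / (1 - p_o_given_notc)

# The same Δp (0.25) in two contexts with different outcome base rates:
low_base = causal_power(0.25, 0.00)    # Δp = 0.25, power = 0.25
high_base = causal_power(0.75, 0.50)   # Δp = 0.25, power = 0.50
print(low_base, high_base)
```

Holding Δp fixed while varying the base rate, as above, is the kind of manipulation that lets power and contingency make different predictions.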

12.
In this paper, we discuss the weakness of current action languages for sensing actions with respect to modeling domains with multi-valued fluents. To address this problem, we propose a language with sensing actions and multi-valued fluents, called AMK, provide a transition-function-based semantics for the language, and demonstrate its use through several examples from the literature. We then define the entailment relationship between action theories and queries in AMK, denoted by ⊧AMK, and discuss some properties of AMK.

13.
We establish a connection between the geometric methods developed in the combinatorial theory of small cancellation and the propositional resolution calculus. We define a precise correspondence between resolution proofs in logic and diagrams in small cancellation theory, and as a consequence, we derive that a resolution proof is a 2-dimensional process. The isoperimetric function defined on diagrams corresponds to the length of resolution proofs.

14.
Correlation Weights in Multiple Regression
A general theory on the use of correlation weights in linear prediction has yet to be proposed. In this paper we take initial steps in developing such a theory by describing the conditions under which correlation weights perform well in population regression models. Using OLS weights as a comparison, we define cases in which the two weighting systems yield maximally correlated composites and when they yield minimally similar weights. We then derive the least squares weights (for any set of predictors) that yield the largest drop in R^2 (the coefficient of determination) when switching to correlation weights. Our findings suggest that two characteristics of a model/data combination are especially important in determining the effectiveness of correlation weights: (1) the condition number of the predictor correlation matrix, R_xx, and (2) the orientation of the correlation weights to the latent vectors of R_xx.
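The contrast between OLS weights and correlation weights can be sketched on simulated data. The correlation structure, effect sizes, and sample size below are arbitrary illustrations, not conditions derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate correlated predictors and a criterion (invented structure).
n = 5000
R_xx_true = np.array([[1.0, 0.5, 0.3],
                      [0.5, 1.0, 0.4],
                      [0.3, 0.4, 1.0]])
X = rng.standard_normal((n, 3)) @ np.linalg.cholesky(R_xx_true).T
y = X @ np.array([0.4, 0.3, 0.2]) + rng.standard_normal(n)

# Standardize, as is conventional when comparing weighting systems.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

ols_w = np.linalg.lstsq(Xs, ys, rcond=None)[0]  # least-squares weights
corr_w = Xs.T @ ys / n                          # zero-order correlation weights

def r2(w):
    """Squared correlation between the weighted composite and the criterion."""
    return np.corrcoef(Xs @ w, ys)[0, 1] ** 2

print(r2(ols_w) >= r2(corr_w))  # OLS can never do worse in-sample
print(np.corrcoef(Xs @ ols_w, Xs @ corr_w)[0, 1] > 0.9)  # composites highly correlated here
```

With a well-conditioned R_xx like this one, the two composites correlate very highly, which is the benign regime the paper characterizes; ill-conditioned predictor matrices are where the R^2 drop becomes large.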

15.
Mereotopology is that branch of the theory of regions concerned with topological properties such as connectedness. It is usually developed by considering the parthood relation that characterizes the, perhaps non-classical, mereology of Space (or Spacetime, or a substance filling Space or Spacetime) and then considering an extra primitive relation. My preferred choice of mereotopological primitive is interior parthood. This choice has the advantage that filters may be defined with respect to it, constructing “points”, as Peter Roeper has done (“Region-based topology”, Journal of Philosophical Logic, 26 (1997), 251–309). This paper generalizes Roeper’s result, relying only on mereotopological axioms, not requiring an underlying classical mereology, and not assuming the Axiom of Choice. I call the resulting mathematical system an approximate lattice, because although meets and joins are not assumed, they are approximated. Theorems are proven establishing the existence and uniqueness of representations of approximate lattices, in which their members, the regions, are represented by sets of “points” in a topological “space”.

16.
Zamora Bonilla, Jesus P., Synthese, 2000, 122(3): 321–335
I. A. Kieseppä's criticism of the methodological use of the theory of verisimilitude, and D. B. Resnik's arguments against the explanation of scientific method by appeal to scientific aims are critically considered. Since the notion of verisimilitude was introduced as an attempt to show that science can be seen as a rational enterprise in the pursuit of truth, defenders of the verisimilitude programme need to show that scientific norms can be interpreted (at least in principle) as rules that try to increase the degree of truthlikeness of scientific theories. This possibility is explored for several approaches to the problem of verisimilitude.

17.
Studying how individuals compare two given quantitative stimuli, say d1 and d2, is a fundamental problem. One very common way to address it is through ratio estimation, that is, to ask individuals not to give values to d1 and d2, but rather to give their estimate of the ratio p = d1/d2. Several psychophysical theories (the best known being Stevens’ power law) claim that this ratio cannot be known directly and that there are cognitive distortions in the apprehension of the different quantities. These theories result in the so-called separable representations [Luce, R. D. (2002). A psychophysical theory of intensity proportions, joint presentations, and matches. Psychological Review, 109, 520–532; Narens, L. (1996). A theory of ratio magnitude estimation. Journal of Mathematical Psychology, 40, 109–129], which include Stevens’ model as a special case. In this paper we propose a general statistical framework that allows for testing in a rigorous way whether the separable representation theory is grounded or not. We conclude in favor of it, but reject Stevens’ model. As a byproduct, we provide estimates of the psychophysical functions of interest.
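A separable representation says the reported ratio is W(ψ(d1)/ψ(d2)) for a subjective intensity function ψ and a distortion function W; Stevens’ power law falls out as the special case where both are power functions. A minimal Python sketch (the exponents and stimulus values are arbitrary illustrations):

```python
import math

def separable_ratio_estimate(d1, d2, psi, W):
    """Separable representation: the reported ratio is W(psi(d1) / psi(d2)),
    where psi maps stimuli to subjective intensities and W distorts
    the subjective ratio into the reported number."""
    return W(psi(d1) / psi(d2))

# Stevens' power law is the special case psi(x) = x**beta, W(x) = x**gamma,
# which collapses to (d1/d2)**(beta*gamma):
beta, gamma = 0.6, 0.9
psi = lambda x: x ** beta
W = lambda x: x ** gamma

est = separable_ratio_estimate(8.0, 2.0, psi, W)
print(math.isclose(est, (8.0 / 2.0) ** (beta * gamma)))  # True
```

Under Stevens’ special case only the product beta*gamma is identifiable from ratio reports, which is one reason the more general separable form is worth testing against it.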

18.
In high‐stakes contexts such as job interviews, people seek to be evaluated favorably by others, and they attempt to accomplish such favorable judgments particularly through self‐promotional behaviors. We examined the persuasiveness of job candidates’ self‐promotion by assessing applicants’ subjective hireability from the perspective of construal‐level theory. Construal‐level theory states that perceptions occur from different levels of psychological distance (i.e., distal vs. proximal). This distance is created by other dimensions of distance (e.g., spatial or social distance) and affects how individuals construe incoming information. From a large distance, people more readily process abstract information, whereas from a close distance, people more readily process concrete information. Specifically, construal compatibility occurs when abstract versus concrete features of a stimulus match the psychological distance experienced by message‐recipients. Construal compatibility (vs. incompatibility) makes evaluations (e.g., of messages) more favorable. To apply this principle to self‐promotion, we created self‐promotional videos of a job interview in which the applicant sat either far away from or close to the hiring manager (manipulating psychological distance); the applicant then used either direct or indirect self‐promotion (manipulating message construal level). The results showed that participants reported stronger intention to hire the applicant when distance matched (vs. did not match) the type of self‐promotion the applicant used.

19.
S. Brennan (1985, Leonardo, 18, 170–178) has developed a computer-implemented caricature generator based on a holistic theory of caricature. A face is represented by 37 lines, based on a fixed set of 169 points. Caricatures are produced by exaggerating all metric differences between a face and a norm. Anticaricatures can be created by reducing all the differences between a face and a norm. Caricatures of familiar faces were identified more quickly than veridical line drawings, which were identified more quickly than anticaricatures. There was no difference in identification accuracy for the three types of representation. The best likeness was considered to be a caricature. We discuss the implications of these results for how faces are mentally represented. The results are consistent with a holistic theory of encoding in which distinctive aspects of a face are represented by comparison with a norm. We suggest that this theory may be appropriate for classes of visual stimuli, other than faces, whose members share a configuration definable by a fixed set of points.
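The exaggeration rule behind Brennan-style caricature is a single affine operation on the landmark points. A toy Python sketch with 3 points standing in for the full set of 169 (the coordinates are invented for illustration):

```python
import numpy as np

def caricature(face, norm, k):
    """Exaggerate every metric difference between a face's landmark
    points and the norm by a factor k: k > 1 gives a caricature,
    0 < k < 1 an anticaricature, and k = 1 the veridical drawing."""
    return norm + k * (face - norm)

# Toy landmark sets: each row is an (x, y) point.
norm = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
face = np.array([[0.1, 0.0], [1.2, 0.1], [0.5, 1.3]])

# k = 1 reproduces the veridical face exactly.
assert np.allclose(caricature(face, norm, 1.0), face)

# k = 1.5 pushes each point further along its deviation from the norm;
# k = 0.5 pulls each point halfway back toward the norm.
print(caricature(face, norm, 1.5))
print(caricature(face, norm, 0.5))
```

Because the operation is linear in the point coordinates, caricature and anticaricature levels form a continuum through the veridical drawing, which is what makes the identification-speed comparison across the three representation types well defined.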

20.
In this article, I describe and systematize the different answers to the question ‘What is ubuntu?’ that I have been able to identify among South Africans of African descent (SAADs). I show that it is possible to distinguish between two clusters of answers. The answers of the first cluster all define ubuntu as a moral quality of a person, while the answers of the second cluster all define ubuntu as a phenomenon (for instance a philosophy, an ethic, African humanism, or a worldview) according to which persons are interconnected. The concept of a person is of central importance to all the answers of both clusters, which means that to understand these answers, it is decisive to raise the question of who counts as a person according to SAADs. I show that some SAADs define all Homo sapiens as persons, whereas others hold the view that only some Homo sapiens count as persons: only those who are black, only those who have been incorporated into personhood, or only those who behave in a morally acceptable manner.

