Similar Literature
20 similar records found.
1.
Jussi Haukioja. Ratio, 2006, 19(2): 176-190
The argument known as the ‘McKinsey Recipe’ tries to establish the incompatibility of semantic externalism (about natural kind concepts in particular) and a priori self‐knowledge about thoughts and concepts by deriving from the conjunction of these theses an absurd conclusion, such as that we could know a priori that water exists. One reply to this argument is to distinguish two different readings of ‘natural kind concept’: (i) a concept which in fact denotes a natural kind, and (ii) a concept which aims to denote a natural kind. Paul Boghossian has argued, using a Dry Earth scenario, that this response fails, claiming that the externalist cannot make sense of a concept aiming, but failing, to denote a natural kind. In this paper I argue that Boghossian’s argument is flawed. Borrowing machinery from two‐dimensional semantics, using the notion of ‘considering a possible world as actual’, I claim that we can give a determinate answer to Boghossian’s question: which concept would ‘water’ express on Dry Earth?

2.
Although many researchers use Wagenaar's framework for understanding the factors that people use to determine whether a process is random, the framework has never undergone empirical scrutiny. This paper takes Wagenaar's framework as a starting point and examines its three properties: independence of events, fixed alternatives, and equiprobability. We find strong evidence that independence of events is indeed used as a cue toward randomness. Equiprobability affects randomness judgments, but it appears to play only a limited role. Fixedness of alternatives is a complex construct that consists of multiple sub‐concepts. We find that each of these sub‐concepts influences randomness judgments, but that they exert forces in different directions: stability of outcome ratios increases randomness judgments, while knowledge of outcome ratios decreases them. Future directions for developing a functional framework for understanding perceptions of randomness are suggested.

3.
Arthur Fine. Synthese, 1982, 50(2): 279-294
This paper constructs two classes of models for the quantum correlation experiments used to test the Bell-type inequalities: synchronization models and prism models. Both classes employ deterministic hidden variables, satisfy the causal requirements of physical locality, and yield precisely the quantum mechanical statistics. In the synchronization models, the joint probabilities, for each emission, do not factor in the manner of stochastic independence, showing that such factorizability is not required for locality. In the prism models the observables are not random variables over a common space; hence these models throw into question the entire random-variables idiom of the literature. Both classes of models appear to be testable. Work on this paper was supported, in part, by National Science Foundation Grant SES 79-25917.

4.
Fabio G. Cozman. Synthese, 2012, 186(2): 577-600
This paper analyzes concepts of independence and assumptions of convexity in the theory of sets of probability distributions. The starting point is Kyburg and Pittarelli’s discussion of ‘convex Bayesianism’ (in particular their proposals concerning E-admissibility, independence, and convexity). The paper offers an organized review of the literature on independence for sets of probability distributions; new results on graphoid properties and on the justification of ‘strong independence’ (using exchangeability) are presented. Finally, the connection between Kyburg and Pittarelli’s results and recent developments on the axiomatization of non-binary preferences, and its impact on ‘complete’ independence, are described.

5.
The study presents a hypothesis on how randomness could be simulated by human subjects. Three sources of deviation from randomness are predicted: (1) the preferred application of overlearned production schemata for producing sequences of digits, (2) a wrong concept of randomness, and (3) the inability to monitor for redundancies of higher than first order. Deviations from computer-generated random sequences in the digit sequences produced by healthy subjects, patients with chronic frontal lobe damage, and patients with Parkinson's disease can be explained by the differential influence of these factors. Whereas incorrect concepts of randomness and limits on monitoring capacity distinguished all sequences produced by humans from actual random sequences, persistence on a single production strategy distinguished brain-damaged patients from controls. Random generation of digits appears to be a theoretically transparent and clinically useful test of executive function.
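As a hedged illustration of the kind of first-order monitoring this hypothesis refers to (the function and the chi-square criterion below are my own choices, not the scoring procedure used in the study), the following sketch computes an immediate-repetition rate and a digram-frequency statistic for a digit sequence and can be compared against a computer-generated baseline:

```python
from collections import Counter
import random

def redundancy_profile(seq, n_symbols=10):
    """First-order diagnostics for a digit sequence: the rate of immediate
    repetitions and a chi-square statistic of digram counts against the
    uniform expectation."""
    pairs = list(zip(seq, seq[1:]))
    repeats = sum(a == b for a, b in pairs)
    counts = Counter(pairs)
    expected = len(pairs) / (n_symbols ** 2)  # uniform expectation per digram
    chi_sq = sum((counts.get((i, j), 0) - expected) ** 2 / expected
                 for i in range(n_symbols) for j in range(n_symbols))
    return {"repeat_rate": repeats / len(pairs), "digram_chi_sq": chi_sq}

# A genuinely random source repeats the previous digit about 1 time in 10;
# human generators typically produce far fewer immediate repetitions.
computer_sequence = [random.randrange(10) for _ in range(1000)]
print(redundancy_profile(computer_sequence))
```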

6.
Sortal concepts, object individuation, and language
Cognitive science is an interdisciplinary enterprise. This review highlights how the philosophical notion of a 'sortal' (a concept that provides principles of individuation and principles of identity) has been introduced into cognitive developmental psychology. Although the notion 'sortal' originated in metaphysics, importing it into the cognitive sciences has bridged a gap between philosophical and psychological discussions of concepts and has generated a fruitful and productive research enterprise. As I review here, the sortal concept has inspired several lines of empirical work in the past decade, including the study of object individuation; object identification; the relationship between language and acquisition of kind concepts; the representational capacities of non-human primates; object-based attention and cognitive architecture; and the relationship between kind concepts and individual concepts.

7.
Psychologists traditionally have employed both statistical and process assumptions in models of human learning and performance. The corresponding tradition in the field of artificial intelligence is to minimize or eliminate the use of statistical assumptions. This article reviews some stochastic and nonstochastic models of human memory, probability learning, medical diagnosis, and concept identification. Some stochastic models are found to have a larger deterministic component than was previously realized; deterministic models applicable with random selection of stimuli can be represented in stochastic form. A policy of methodological determinism is recommended in which the model builder initially assumes that no random processes take place within the organism. If necessary, and as a last resort, such processes can be appended to an otherwise deterministic model for predictive convenience.

8.
In a number of studies, tendencies toward nonrepetition in judgments of randomness of visually presented sequences of events have been attributed to a biased concept of randomness. The present study proposed that such bias is due to "bottom-up" visual processes rather than a concept of randomness. Experiment 1 showed that judgments of randomness were less biased when repetitions were made less conspicuous by increasing the distance between adjacent items. Experiment 2 produced comparable results for increasing dissimilarity of categorically identical items. A third experiment showed that the bias in the judgment task was not related to a more direct measure of knowledge of random processes, the assignment of probabilities of repetition to imagined random sequences. The results supported the view that judgments of randomness are determined to a high degree by the conspicuousness of repetitions and are independent of the concept of randomness.

9.
We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments.

10.
The equiprobability bias (EB) is a tendency to believe that every process in which randomness is involved corresponds to a fair distribution, with equal probabilities for any possible outcome. The EB is known to affect both children and adults, and to increase with probability education. Because it results in probability errors resistant to pedagogical interventions, it has been described as a deep misconception about randomness: the erroneous belief that randomness implies uniformity. In the present paper, we show that the EB is actually not the result of a conceptual error about the definition of randomness. On the contrary, the mathematical theory of randomness does imply uniformity. However, the EB is still a bias, because people tend to assume uniformity even in the case of events that are not random. The pervasiveness of the EB reveals a paradox: The combination of random processes is not necessarily random. The link between the EB and this paradox is discussed, and suggestions are made regarding educational design to overcome difficulties encountered by students as a consequence of the EB.
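A concrete instance of the closing paradox, worked out here as my own illustration rather than taken from the paper: each die on its own is uniform, yet the sum of two fair dice, a combination of random processes, is far from equiprobable.

```python
from collections import Counter
from itertools import product

# Each die alone is uniform over 1..6, but their sum is not: 7 arises from
# 6 of the 36 equally likely combinations, while 2 and 12 arise from only 1 each.
sum_counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in sorted(sum_counts):
    print(total, sum_counts[total] / 36)
```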

11.
A characterization of stochastic independence in terms of association of random variables is given. The result is applied to yield a simple proof of the Sattath-Tversky inequality without continuity assumptions.

12.
This paper examines seven independence concepts based on a preference relation on the set of simple probability measures defined on a set of multiattribute consequences. Three of the independence relations involve gambles and the other four are based on riskless preferences over the n-tuples in the consequence set. The main theorems state conditions under which one or more of the risky independence relations can be derived from a riskless independence relation in conjunction with other conditions. The other conditions include a risky independence condition which differs from the one(s) to be derived, the assumption that the consequence set is a convex subset of a finite-dimensional Euclidean space, and the assumption that the individual's von Neumann-Morgenstern utility function on the consequence set is continuous.

13.
The conjunction fallacy occurs when people judge a conjunctive statement B‐and‐A to be more probable than a constituent B, in contrast to the law of probability that P(B ∧ A) cannot exceed P(B) or P(A). Researchers see this fallacy as demonstrating that people do not follow probability theory when judging conjunctive probability. This paper shows that the conjunction fallacy can be explained by the standard probability theory equation for conjunction if we assume random variation in the constituent probabilities used in that equation. The mathematical structure of this equation is such that random variation will be most likely to produce the fallacy when one constituent has high probability and the other low, when there is positive conditional support between the constituents, when there are two rather than three constituents, and when people rank probabilities rather than give numerical estimates. The conjunction fallacy has been found to occur most frequently in exactly these situations.
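The mechanism described above can be sketched in a few lines of simulation; the noise model (a truncated Gaussian) and the parameter values are my own illustrative assumptions, not the paper's fitted model.

```python
import random

def fallacy_rate(p_b, p_a_given_b, sd=0.1, trials=100_000):
    """Estimate how often a noisy product-rule conjunction P(B)P(A|B) is
    ranked above a separately assessed, equally noisy judgment of P(B)."""
    def noisy(p):
        return min(1.0, max(0.0, random.gauss(p, sd)))
    count = 0
    for _ in range(trials):
        judged_b = noisy(p_b)                          # direct judgment of P(B)
        judged_conj = noisy(p_b) * noisy(p_a_given_b)  # conjunction from noisy parts
        count += judged_conj > judged_b
    return count / trials

# An unlikely constituent with strong conditional support: the true conjunction
# (0.09) sits just below P(B) (0.10), so random variation frequently reverses the ranking.
print(fallacy_rate(p_b=0.1, p_a_given_b=0.9))
# A likely constituent: the gap is wider (0.72 vs. 0.80), so reversals are rarer.
print(fallacy_rate(p_b=0.8, p_a_given_b=0.9))
```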

14.
Accounts of subjective randomness suggest that people consider a stimulus random when they cannot detect any regularities characterizing the structure of that stimulus. We explored the possibility that the regularities people detect are shaped by the statistics of their natural environment. We did this by testing the hypothesis that people’s perception of randomness in two-dimensional binary arrays (images with two levels of intensity) is inversely related to the probability with which the array’s pattern would be encountered in nature. We estimated natural scene probabilities for small binary arrays by tabulating the frequencies with which each pattern of cell values appears. We then conducted an experiment in which we collected human randomness judgments. The results show an inverse relationship between people’s perceived randomness of an array pattern and the probability of the pattern appearing in nature.
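A minimal sketch of the tabulation idea described above, using stand-in data so that it runs; the patch size, the smoothing, and the surprisal scoring are assumptions of mine, not the authors' procedure.

```python
import numpy as np
from collections import Counter

def pattern_key(patch):
    """Flatten a small binary patch into a hashable tuple."""
    return tuple(patch.astype(int).ravel())

def tabulate_patterns(images, size=3):
    """Count every size x size binary pattern occurring in a set of 0/1 images."""
    counts = Counter()
    for img in images:
        for i in range(img.shape[0] - size + 1):
            for j in range(img.shape[1] - size + 1):
                counts[pattern_key(img[i:i + size, j:j + size])] += 1
    return counts

def surprisal(patch, counts, alpha=1.0):
    """Negative log probability of a pattern under the tabulated counts,
    with add-alpha smoothing for patterns never observed."""
    total = sum(counts.values())
    n_patterns = 2 ** patch.size
    p = (counts[pattern_key(patch)] + alpha) / (total + alpha * n_patterns)
    return -np.log(p)

# Stand-in "environment": thresholded noise, used here only to make the sketch
# runnable; the study tabulated patterns from natural images instead.
rng = np.random.default_rng(0)
images = [(rng.random((32, 32)) > 0.5).astype(int) for _ in range(10)]
counts = tabulate_patterns(images)
test_patch = (rng.random((3, 3)) > 0.5).astype(int)
print(surprisal(test_patch, counts))  # higher surprisal ~ rarer in the corpus
```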

15.
Numerous studies have found that likelihood judgment typically exhibits subadditivity in which judged probabilities of events are less than the sum of judged probabilities of constituent events. Whereas traditional accounts of subadditivity attribute this phenomenon to deterministic sources, this paper demonstrates both formally and empirically that subadditivity is systematically influenced by the stochastic variability of judged probabilities. First, making rather weak assumptions, we prove that regressive error (or variability) in mapping covert probability judgments to overt responses is sufficient to produce subadditive judgments. Experiments follow in which participants provided repeated probability estimates. The results support our model assumption that stochastic variability is regressive in probability estimation tasks and show the contribution of such variability to subadditivity. The theorems and the experiments focus on within-respondent variability, but most studies use between-respondent designs. Numerical simulations extend the work to contrast within- and between-respondent measures of subadditivity. Methodological implications of all the results are discussed, emphasizing the importance of taking stochastic variability into account when estimating the role of other factors (such as the availability bias) in producing subadditive judgments.
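To make the formal claim concrete, the sketch below shows one way regressive, noisy responding can turn covertly additive probabilities into overtly subadditive ones; the specific response function and its parameters are my own illustrative choices, not the paper's model.

```python
import random

def overt(p, regression=0.3, sd=0.05):
    """Map a covert probability to an overt response that is pulled toward 0.5
    (regressive) and perturbed by noise, clipped to [0, 1]."""
    r = (1 - regression) * p + regression * 0.5 + random.gauss(0, sd)
    return min(1.0, max(0.0, r))

def mean_subadditivity(parts, trials=10_000):
    """Average (sum of judged parts) - (judged whole) when the covert
    judgments are perfectly additive."""
    whole = sum(parts)
    total = 0.0
    for _ in range(trials):
        total += sum(overt(p) for p in parts) - overt(whole)
    return total / trials

# Covertly, 0.2 + 0.2 + 0.2 = 0.6 exactly; overtly, regression inflates the
# small constituents and deflates the larger whole, so the mean difference is positive.
print(mean_subadditivity([0.2, 0.2, 0.2]))
```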

16.
Lon P. Turner. Zygon, 2007, 42(1): 7-24
In contradistinction to the contemporary human sciences, recent theological accounts of the individual‐in‐relation continue to defend the concept of the singular continuous self. Consequently, theological anthropology and the human sciences seem to offer widely divergent accounts of the sense of self‐fragmentation that many believe pervades the modern world. There has been little constructive interdisciplinary conversation in this area. In this essay I address the damaging implications of this oversight and establish the necessary conditions for future dialogue. I have three primary objectives. First, I show how the notion of personal continuity acquires philosophical theological significance through its close association with the concept of personal particularity. Second, through a discussion of contemporary accounts of self‐multiplicity, I clarify the extent of theological anthropology's disagreement with the human sciences. Third, I draw upon narrative accounts of identity to suggest an alternative means of understanding the experiential continuity of personhood that maintains the tension between self‐plurality, unity, and particularity and thereby reconnects philosophical theological concerns with human‐scientific analyses of the human condition. Narrative approaches to personhood are ideally suited to this purpose, and, I suggest, offer an intriguing solution to understanding and resolving the problem of self‐fragmentation that has caused recent theological anthropology so much consternation.

17.
Birnbaum MH. Psychological Review, 2011, 118(4): 675-683; discussion 684-688
This article contrasts two approaches to analyzing transitivity of preference and other behavioral properties in choice data. The approach of Regenwetter, Dana, and Davis-Stober assumes that on each choice, a decision maker samples randomly from a mixture of preference orders to determine whether A is preferred to B. In contrast, Birnbaum and Gutierrez (2007) assumed that within each block of trials, the decision maker has a true set of preferences and that random errors generate variability of response. In this latter approach, preferences are allowed to differ between people; within-person, they might differ between repetition blocks. Both approaches allow mixtures of preferences, both assume a type of independence, and both yield statistical tests. They differ with respect to the locus of independence in the data. The approaches also differ in the criterion for assessing the success of the models. Regenwetter et al. fitted only marginal choice proportions and assumed that choices are independent, which means that a mixture cannot be identified from the data. Birnbaum and Gutierrez fitted choice combinations with replications; their approach allows estimation of the probabilities in the mixture. It is suggested that researchers should separate tests of the stochastic model from the test of transitivity. Evidence testing independence and stationarity assumptions is presented. Available data appear to fit the assumption that errors are independent better than they fit the assumption that choices are independent.
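A minimal sketch of the within-block "true and error" structure described above (the two-replication design, the mixture probability, and the error rate are my own assumptions, not Birnbaum's fitted models): because errors rather than choices are independent, replicated choice combinations identify the error rate and the mixture separately.

```python
import random

def simulate_blocks(p_true_a=0.7, error=0.1, n_blocks=100_000):
    """Each block fixes a true preference (A over B with probability p_true_a);
    two replications of the same choice are then made, each flipped
    independently with probability `error`."""
    patterns = {"AA": 0, "AB": 0, "BA": 0, "BB": 0}
    for _ in range(n_blocks):
        prefers_a = random.random() < p_true_a
        r1 = prefers_a != (random.random() < error)   # response flips on an error
        r2 = prefers_a != (random.random() < error)
        patterns[("A" if r1 else "B") + ("A" if r2 else "B")] += 1
    return {k: v / n_blocks for k, v in patterns.items()}

p = simulate_blocks()
# Within-block reversals depend only on the error rate: P(AB) + P(BA) = 2e(1 - e)
# = 0.18 here, whatever the mixture of true preferences, which is what lets the
# mixture probabilities be estimated from replicated choice combinations.
print(p, round(p["AB"] + p["BA"], 3))
```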

18.
A large number of reports have been published on stochastic independence between implicit and explicit measures of memory. This is often taken to imply that different memory systems mediate implicit and explicit memory performance. In these cases, stochastic independence is inferred from contingency analysis of overall success rates in two memory tasks when performance in one or both of the tasks is, to a large extent, mediated by factors other than memory. Typically, the difference between performance with studied and nonstudied items is not large in implicit memory tasks. It is argued that this must be taken into account when evaluating the contingency analysis. A method is presented for estimating the relevant joint and conditional probabilities, assuming that the aspects of performance in the two tasks that are related to memory are dependent to the maximum possible extent. The method is applied to a number of published studies, and it is shown that the difference between these estimated probabilities and those given by stochastic independence is too small to allow any conclusion to be drawn about memory systems from contingency analysis of data reported in these studies.

19.
A branch of probability theory that has been studied extensively in recent years, the theory of conditional expectation, provides just the concepts needed for mathematical derivation of the main results of the classical test theory with minimal assumptions and greatest economy in the proofs. The collection of all random variables with finite variance defined on a given probability space is a Hilbert space; the function that assigns to each random variable its conditional expectation is a linear operator; and the properties of the conditional expectation needed to derive the usual test-theory formulas are general properties of linear operators in Hilbert space. Accordingly, each of the test-theory formulas has a simple geometric interpretation that holds in all Hilbert spaces.
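As a worked illustration of the geometric point (these are the standard classical-test-theory identities restated, not formulas quoted from the paper, with τ written for the conditioning person variable): defining the true score as a conditional expectation makes the error orthogonal to the true score, and the familiar variance decomposition and reliability formula follow from properties of orthogonal projection in the Hilbert space of finite-variance random variables.

```latex
X = T + E, \qquad T = \mathbb{E}[X \mid \tau], \qquad \mathbb{E}[E \mid \tau] = 0 .
```

Since conditional expectation is an orthogonal projection, the error is uncorrelated with the true score, and hence

```latex
\operatorname{Cov}(T, E) = 0
\;\Longrightarrow\;
\operatorname{Var}(X) = \operatorname{Var}(T) + \operatorname{Var}(E),
\qquad
\rho_{XT}^{2} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)}
              = 1 - \frac{\operatorname{Var}(E)}{\operatorname{Var}(X)} .
```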

20.
Stefan Lukits. Synthese, 2014, 191(7): 1409-1431
Sometimes we receive evidence in a form that standard conditioning (or Jeffrey conditioning) cannot accommodate. The principle of maximum entropy (maxent) provides a unique solution for the posterior probability distribution, based on the intuition that the information gain consistent with the assumptions and the evidence should be minimal. Opponents of objective methods for determining these probabilities prominently cite van Fraassen’s Judy Benjamin case to undermine the generality of maxent. This article shows that an intuitive approach to Judy Benjamin’s case supports maxent. This is surprising because, on the usual independence assumptions, the anticipated result would instead support the opponents. The article also demonstrates that opponents improperly apply independence assumptions to the problem.
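A small numerical sketch of the kind of updating at issue; the three-cell prior and the 3:1 conditional constraint are illustrative stand-ins for the Judy Benjamin setup, not van Fraassen's exact figures. The posterior is the distribution that minimizes the information gain (Kullback-Leibler divergence) from the prior while satisfying the constraint, which is what the minimal-information-gain reading of maxent requires here.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative prior over three mutually exclusive cells.
prior = np.array([0.5, 0.25, 0.25])

def info_gain(q):
    """Kullback-Leibler divergence D(q || prior)."""
    q = np.asarray(q)
    return float(np.sum(q * np.log(q / prior)))

# Judy-Benjamin-style evidence: a constraint on a conditional probability,
# here q[1] / (q[1] + q[2]) = 3/4, i.e. q[1] = 3 * q[2].
constraints = [
    {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},
    {"type": "eq", "fun": lambda q: q[1] - 3.0 * q[2]},
]
bounds = [(1e-9, 1.0)] * 3

result = minimize(info_gain, x0=prior, bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x)  # minimum-information posterior consistent with the evidence
```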
