Similar articles
20 similar articles retrieved (search time: 31 ms).
1.
John L. Pollock, Synthese, 2011, 181(2): 317–352
In concrete applications of probability, statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability—nomic probability—there are principles of “probable probabilities” that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r, s, a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge) then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r, s, a). Numerous other defeasible inferences are licensed by similar principles of probable probabilities. This has the potential to greatly enhance the usefulness of probabilities in practical application.
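The abstract leaves the form of Y unspecified. As a hedged illustration only: if one treats Q and R as independent items of evidence relative to the background U (so that their likelihood ratios against U multiply), the resulting estimate of prob(P/Q&R) has the closed form \(Y(r,s,a)=\frac{rs(1-a)}{rs(1-a)+(1-r)(1-s)a}\). The sketch below computes that quantity; the formula and the function name are assumptions made for illustration, not a statement of Pollock's definition.

    # Illustrative sketch (assumed form, not quoted from the paper): estimate
    # prob(P/Q&R) from r = prob(P/Q), s = prob(P/R), a = prob(P/U) by
    # multiplying the likelihood ratios of Q and R against the background U,
    # i.e. by treating the two pieces of evidence as independent.
    def y_estimate(r: float, s: float, a: float) -> float:
        if a in (0.0, 1.0):
            return a  # degenerate background probability: nothing to update
        num = r * s * (1 - a)
        return num / (num + (1 - r) * (1 - s) * a)

    # Example: r = 0.7, s = 0.6, a = 0.5 gives about 0.78, higher than either input.
    print(y_estimate(0.7, 0.6, 0.5))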

2.
This study investigated 5-year-old Mandarin-speaking children’s comprehension of wh-questions, universal statements and free choice inferences. Previous research has found that Mandarin-speaking children assign a universal interpretation to sentences with a wh-word (e.g., shei ‘who’) followed by the adverbial quantifier dou ‘all’ (Zhou in Appl Psycholinguist 36:411–435, 2013). Children also compute free choice inferences in sentences that contain a modal verb in addition to a wh-word and dou (Zhou, in: Nakayama, Su, Huang (eds.) Studies in Chinese and Japanese language acquisition: in honour of Stephen Crain. John Benjamins Publishing Company, Amsterdam, pp 223–235, 2017). The present study used a Question-Statement Task to assess children’s interpretation of sentences containing shei + dou, both with and without the modal verb beiyunxu ‘was allowed to’, as well as the contrast between sentences with shei + dou, which are statements for adults, versus ones with dou + shei, which are wh-questions for adults. The 5-year-old Mandarin-speaking child participants exhibited adult-like linguistic knowledge of the semantics and pragmatics of wh-words, the adverbial quantifier dou, and the deontic modal verb beiyunxu.

3.
The duality between congruence lattices of semilattices, and algebraic subsets of an algebraic lattice, is extended to include semilattices with operators. For a set G of operators on a semilattice S, we have \(\mathrm{Con}(S,+,0,G) \cong^{d} \mathrm{S}_{p}(L,H)\), where L is the ideal lattice of S, and H is a corresponding set of adjoint maps on L. This duality is used to find some representations of lattices as congruence lattices of semilattices with operators. It is also shown that these congruence lattices satisfy the Jónsson–Kiefer property.

4.
The conventional randomized response design is unidimensional in the sense that it measures a single dimension of a sensitive attribute, like its prevalence, frequency, magnitude, or duration. This paper introduces a multidimensional design characterized by categorical questions that each measure a different aspect of the same sensitive attribute. The benefits of the multidimensional design are (i) a substantial gain in power and efficiency, and the potential to (ii) evaluate the goodness-of-fit of the model, and (iii) test hypotheses about evasive response biases in case of a misfit. The method is illustrated for a two-dimensional design measuring both the prevalence and the magnitude of social security fraud.
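As background to what the abstract calls the conventional unidimensional design, the sketch below shows prevalence estimation under Warner's classic randomized response scheme; it is not the paper's multidimensional model, and the function name and example numbers are illustrative.

    # Background sketch (not the paper's multidimensional model): Warner's
    # classic randomized response design for estimating the prevalence pi of
    # a sensitive attribute. Each respondent answers the sensitive statement
    # with probability p and its negation with probability 1 - p, so the
    # expected "yes" rate is lam = p*pi + (1 - p)*(1 - pi).
    def warner_prevalence(lam_hat: float, p: float) -> float:
        if p == 0.5:
            raise ValueError("p = 0.5 makes pi unidentifiable")
        return (lam_hat + p - 1) / (2 * p - 1)

    # Example: a 40% observed "yes" rate with p = 0.75 gives an estimated prevalence of 0.30.
    print(warner_prevalence(0.40, 0.75))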

5.
In this paper we give an analytic tableau calculus PL16 for a functionally complete extension of Shramko and Wansing’s logic. The calculus is based on signed formulas and a single set of tableau rules is involved in axiomatising each of the four entailment relations \(\models_t\), \(\models_f\), \(\models_i\), and \(\models\) under consideration—the differences only residing in initial assignments of signs to formulas. Proving that two sets of formulas are in one of the first three entailment relations will in general require developing four tableaux, while proving that they are in the \(\models\) relation may require six.

6.
The dual strategy model of reasoning proposed by Verschueren, Schaeken, and d’Ydewalle (Thinking & Reasoning, 11(3), 239–278, 2005a; Memory & Cognition, 33(1), 107–119, 2005b) suggests that people can use either a statistical or a counterexample-based strategy to make deductive inferences. Subsequent studies have supported this distinction and investigated some properties of the two strategies. In the following, we examine the further hypothesis that reasoners using statistical strategies should be more vulnerable to the effects of conclusion belief. In each of three studies, participants were given abstract problems used to determine strategy use and three different forms of syllogism with believable and unbelievable conclusions. Responses, response times, and feeling of rightness (FOR) measures were taken. The results show that participants using a statistical strategy were more prone to the effects of conclusion belief across all three forms of reasoning. In addition, statistical reasoners took less time to make inferences than did counterexample reasoners. Patterns of variation in response times and FOR ratings between believable and unbelievable conclusions were very similar for both strategies, indicating that both statistical and counterexample reasoners were aware of conflict between conclusion belief and premise-based reasoning.

7.
According to Hempel’s (Aspects of scientific explanation and other essays. The Free Press, New York, 1965) influential theory of explanation, explaining why some a is G consists in showing that the truth that a is G follows from a law-like generalization to the effect that all Fs are G together with the initial condition that a is F. While Hempel’s overall account is now widely considered to be deeply flawed, the idea that some generalizations play the explanatory role that the account predicts is still often endorsed by contemporary philosophers of science. This idea, however, conflicts with widely shared views in metaphysics according to which the generalization that all Fs are G is partially explained by the fact that a is G. I discuss two solutions to this conflict that have been proposed recently, argue that they are unsatisfactory, and offer an alternative.

8.
Causal reasoning is crucial to people’s decision making in probabilistic environments. It may rely directly on data about covariation between variables (correspondence) or on inferences based on reasonable constraints if larger causal models are constructed based on local relations (coherence). For causal chains an often assumed constraint is transitivity. For probabilistic causal relations, mismatches between such transitive inferences and direct empirical evidence may lead to distortions of empirical evidence. Previous work has shown that people may use the generative local causal relations A → B and B → C to infer a positive indirect relation between events A and C, despite data showing that these events are actually independent (von Sydow et al. in Proceedings of the thirty-first annual conference of the cognitive science society. Cognitive Science Society, Austin, 2009, Proceedings of the 32nd annual conference of the cognitive science society. Cognitive Science Society, Austin, 2010, Mem Cogn 44(3):469–487, 2016). Here we used a sequential learning scenario to investigate how transitive reasoning in intransitive situations with negatively related distal events may relate to betting behavior. In three experiments participants bet as if they were influenced by a transitivity assumption, even when the data strongly contradicted transitivity.
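To make the intransitivity concrete, the sketch below uses made-up numbers (not data from these experiments) in which A raises the probability of B and B raises the probability of C, yet A and C are exactly independent, so a transitive inference from the two local relations contradicts the joint distribution.

    # Hypothetical joint distribution (illustrative only): A -> B and B -> C are
    # both generative, yet A and C are statistically independent.
    P_A = 0.5
    P_B_given_A = {True: 0.8, False: 0.2}                    # P(B | A): A raises P(B)
    P_C_given_AB = {(True, True): 0.5, (True, False): 0.5,   # P(C | A, B)
                    (False, True): 0.9, (False, False): 0.4}

    def joint(a, b, c):
        pa = P_A if a else 1 - P_A
        pb = P_B_given_A[a] if b else 1 - P_B_given_A[a]
        pc = P_C_given_AB[(a, b)] if c else 1 - P_C_given_AB[(a, b)]
        return pa * pb * pc

    def p_C_given(var, value):
        """P(C | var = value), for var in {'A', 'B'}, by summing the joint."""
        sel = (lambda a, b: a == value) if var == 'A' else (lambda a, b: b == value)
        num = sum(joint(a, b, True) for a in (True, False) for b in (True, False) if sel(a, b))
        den = sum(joint(a, b, c) for a in (True, False) for b in (True, False)
                  for c in (True, False) if sel(a, b))
        return num / den

    print(p_C_given('A', True), p_C_given('A', False))  # 0.5  0.5  -> A and C independent
    print(p_C_given('B', True), p_C_given('B', False))  # 0.58 0.42 -> B raises P(C)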

9.
Research suggests that a feature-matching process underlies cue familiarity-detection when cued recall with graphemic cues fails. When a test cue (e.g., potchbork) overlaps in graphemic features with multiple unrecalled studied items (e.g., patchwork, pitchfork, pocketbook, pullcork), higher cue familiarity ratings are given during recall failure of all of the targets than when the cue overlaps in graphemic features with only one studied target and that target fails to be recalled (e.g., patchwork). The present study used semantic feature production norms (McRae et al., Behavior Research Methods, Instruments, & Computers, 37, 547–559, 2005) to examine whether the same holds true when the cues are semantic in nature (e.g., jaguar is used to cue cheetah). Indeed, test cues (e.g., cedar) that overlapped in semantic features (e.g., a_tree, has_bark, etc.) with four unretrieved studied items (e.g., birch, oak, pine, willow) received higher cue familiarity ratings during recall failure than test cues that overlapped in semantic features with only two (also unretrieved) studied items (e.g., birch, oak), which in turn received higher familiarity ratings during recall failure than cues that did not overlap in semantic features with any studied items. These findings suggest that the feature-matching theory of recognition during recall failure can accommodate recognition of semantic cues during recall failure, providing a potential mechanism for conceptually-based forms of cue recognition during target retrieval failure. They also provide converging evidence for the existence of the semantic features envisaged in feature-based models of semantic knowledge representation and for those more concretely specified by the production norms of McRae et al. (Behavior Research Methods, Instruments, & Computers, 37, 547–559, 2005).

10.
Several socio-cultural factors complicate mental health care in the ultra-Orthodox Jewish population. These include societal stigma, fear of the influence of secular ideas, the need for rabbinic approval of the method and provider, and the notion that excessive concern with the self is counter-productive to religious growth. Little is known about how the religious beliefs of this population might be employed in therapeutic contexts. One potential point of convergence is the Jewish philosophical tradition of introspection as a means toward personal, interpersonal, and spiritual growth. We reviewed Jewish religious-philosophical writings on introspection from antiquity (the Babylonian Talmud) to the Middle Ages (Duties of the Heart), the eighteenth century (Path of the Just), the early Hasidic movement (the Tanya), and modernity (Alei Shur, Halakhic Man). Analysis of these texts indicates that: (1) introspection can be a religiously acceptable reaction to existential distress; (2) introspection might promote alignment of religious beliefs with emotions, intellect and behavior; (3) some religious philosophers were concerned about the demotivating effects of excessive introspection and self-critique on religious devotion and emotional well-being; (4) certain religious forms of introspection are remarkably analogous to modern methods of psychiatry and psychology, particularly psychodynamic psychotherapy and cognitive-behavioral therapy. We conclude that homology between religious philosophy of emotion and secular methods of psychiatry and psychotherapy may inform the choice and method of mental health care, foster the therapist-patient relationship, and thereby enable therapeutic convergence.

11.
In this paper we shall introduce two types of contextual-hierarchical (from now on abbreviated by ‘ch’) approaches to the strengthened liar problem. These approaches, which we call the ‘standard’ and the ‘alternative’ ch-reconstructions of the strengthened liar problem, differ in their philosophical view regarding the nature of truth and the relation between the truth predicates \(Tr_n\) and \(Tr_{n+1}\) of different hierarchy-levels. The basic idea of the standard ch-reconstruction is that the \(Tr_{n+1}\)-schema should hold for all sentences of \(\mathcal{L}^{n}\). In contrast, the alternative ch-reconstruction, for which we shall argue in section four, is motivated by the idea that \(Tr_n\) and \(Tr_{n+1}\) are coherent in the sense that the same sentences of \(\mathcal{L}^{n}\) should be true according to \(Tr_n\) and \(Tr_{n+1}\). We show that instances of the standard ch-reconstruction can be obtained by iterating Kripke’s strong Kleene jump operator. Furthermore, we will demonstrate how instances of the alternative ch-reconstruction can be obtained by a slight modification of the iterated axiom system KF and of the iterated strong Kleene jump operator.
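Stated schematically (my rendering of the two conditions just described; the corner-quote naming device is assumed), the contrast is:

\[
\text{standard:}\quad Tr_{n+1}(\ulcorner\varphi\urcorner)\leftrightarrow\varphi
\qquad\qquad
\text{alternative:}\quad Tr_{n+1}(\ulcorner\varphi\urcorner)\leftrightarrow Tr_{n}(\ulcorner\varphi\urcorner)
\qquad\text{for all sentences }\varphi\text{ of }\mathcal{L}^{n}.
\]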

12.
This project examined the performance of classical and Bayesian estimators of four effect size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of \(ab/s_Y\) and \(ab(s_X)/s_Y\) in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
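As a hedged illustration of the two standardized measures named above, the sketch below computes \(ab/s_Y\) (partially standardized) and \(ab\,s_X/s_Y\) (completely standardized) for simulated single-mediator data, with a percentile bootstrap interval; the variable names, simulated data, and bootstrap settings are mine, not the paper's design.

    # Illustrative sketch (simulated data; not the paper's simulation study):
    # standardized indirect-effect sizes in a single-mediator model, with a
    # percentile bootstrap confidence interval.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)              # mediator model: M = a*X + e_M
    y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # outcome model: Y = b*M + c'*X + e_Y

    def std_indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                            # slope of M on X
        design = np.column_stack([m, x, np.ones_like(x)])
        b = np.linalg.lstsq(design, y, rcond=None)[0][0]      # slope of Y on M, controlling X
        ab = a * b
        return ab / y.std(ddof=1), ab * x.std(ddof=1) / y.std(ddof=1)

    boot = np.array([std_indirect(x[i], m[i], y[i])
                     for i in (rng.integers(0, n, n) for _ in range(2000))])
    print(std_indirect(x, m, y))                    # point estimates
    print(np.percentile(boot, [2.5, 97.5], axis=0)) # percentile bootstrap CIs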

13.
What is ?Curriculum Theory articulates the problematic of difference, diversity, and multiplicity in contemporary curriculum thought. More specifically, this essay argues that the conceptualization of difference that dominates the contemporary curriculum landscape is inadequate to either the task of ontological experimentation or the creation of non-representational ways for thinking a life. Despite the ostensible radicality ascribed to the curricular ideas of difference and multiplicity, What is ?Curriculum Theory argues that these ideas remain wedded to a structural or identitarian logic that derives difference from the a priori conditions of the possible. Further, this essay argues that the orthodox conceptualization of difference in contemporary curriculum studies is complicit with the capitalist commitment to quantitative multiplicity, or rather, the proliferation of ‘multiple consumer choices’. Following this problematic, the task of this paper is oriented to the conceptualization of difference adequate to the creation of a people yet-to-come, or rather, a people for which no prior image exists. To accomplish this, What is ?Curriculum Theory draws upon Deleuze’s Bergsonism in order to advance a conceptualization of difference that breaks from modes of dialectical negation and contradiction particular to the tyranny of representational thinking. Articulating an image of difference that no longer accords to the possible, this essay composes a thought experiment conceptualizing a pedagogical life in a manner that explicates the transversal relationship between the actual (what is) and the virtual (what is not-yet).

14.
Hoarding disorder is a new DSM-5 disorder that causes functional impairment and affects 2 to 6% of the population (Frost and Steketee 2014). The current study evaluated a multiple mediation model with 243 undergraduate women in which indecisiveness (VOCI; Thordarson et al. Behaviour Research and Therapy, 42(11), 1289–1314, 2004) and decisional procrastination (DPS; Mann 1982) mediated the relationship between dimensions of perfectionism (F-MPS-B; Burgess et al. 2016a) and hoarding behavior (SI-R; Frost et al. Behaviour Research and Therapy, 42(10), 1163–1182, 2004) and excessive acquiring (CAS; Frost et al. Annual Review of Clinical Psychology, 8, 219–242, 2012). Multiple mediational analyses indicated a significant indirect effect for decisional procrastination, but not indecisiveness, in mediating evaluative concerns (but not striving) to SI-R Total, SI-R Clutter, SI-R Excessive Acquisition, and both CAS subscales. Both mediators were significant pathways between evaluative concerns and SI-R Difficulty Discarding. These findings support a cognitive behavioral model of hoarding, suggesting that evaluative concerns produce problems in decision-making that influence acquisition, discarding, and clutter.

15.
Two experiments were conducted to investigate the psychological refractory period (PRP), a delay induced into the second of two reaction times (RT) when the interstimulus interval (ISI) is short. In Experiment 1, time and event uncertainty were factorially varied by providing or not providing S with foreknowledge of the ISI and the order in which the two events would occur, respectively. ISIs of 0, 50, 100, 200, and 400 msec were used. Time and event uncertainty produced independent degradation of both RTs. Also, the second RT (RT2) was delayed at the 50 msec ISI when both time and event certainty were present. Experiment 2 attempted to replicate this latter finding using ISIs of 0, 25, 50, 75, and 100 msec. Delays in RT2 were found for the middle three values of ISI. These results were interpreted as supporting a modified single channel theory of the PRP.

16.
We prove that for any recursively axiomatized consistent extension T of Peano Arithmetic, there exists a \(\Sigma_2\) provability predicate of T whose provability logic is precisely the modal logic \(\mathsf{K}\). For this purpose, we introduce a new bimodal logic \(\mathsf{GLK}\), and prove the Kripke completeness theorem and the uniform arithmetical completeness theorem for \(\mathsf{GLK}\).
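For reference only (standard textbook material, not a claim about the paper's new bimodal system \(\mathsf{GLK}\)): the modal logic \(\mathsf{K}\) named here is axiomatized by all propositional tautologies plus the distribution axiom, closed under modus ponens and necessitation, while the usual provability logic \(\mathsf{GL}\) adds Löb's axiom:

\[
\mathsf{K}:\ \Box(A\to B)\to(\Box A\to\Box B)
\qquad\qquad
\mathsf{GL}=\mathsf{K}+\Box(\Box A\to A)\to\Box A .
\]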

17.
18.
When participants switch between relevant stimulus dimensions in speeded classification tasks, task-switching cost is reduced by advance preparation. Previous studies in which speeded classification tasks were used have suggested that this effect results from attending to the relevant stimulus dimension. Because selective attention to the relevant stimulus dimension in same–different judgments is relatively poor (e.g., Santee & Egeth, 1980), it was predicted that advance task preparation for a shift in the relevant stimulus dimension would be compromised. This prediction was borne out in two experiments comparing dimension shifts (shape vs. fill) with task rule shifts (same? vs. different?) and shifts in the mapping of right–left keys to yes and no responses (yes–no vs. no–yes). The results indicate that advance attentional selection of the relevant dimension is an optional preparatory strategy in task switching, employed only in conditions enabling flexible refocusing of attention.

19.
The existence of multiple modes of explanation means that a crucial step in the process of generating explanations has to be selecting a particular mode. The present article identifies the key conceptual, as well as some pragmatic and epistemological, considerations that license the use of the formal mode of explanation, and thus that enter into the process of selecting and generating a formal explanation. Formal explanations explain the presence of certain properties in an instance of a kind by reference to the kind of thing it is (e.g. That has four legs because it is a dog). As such, this mode of explanation is intrinsically tied to kind representations and is applicable domain-generally. Although it is possible for formal explanation to apply domain-generally, for any given kind it is selective in its application, in that it can explain some, but not all, properties of the instances of a kind. It also appears that different types of properties can receive formal explanations across different domains. This article provides a sketch of a theory of the selectivity of formal explanation that results from the manner in which kinds of different types are distinguished. The present discussion also suggests how the mechanisms underlying formal explanations may contribute to the illusion of explanatory depth (Keil, Trends in Cognitive Sciences, 7, 368–373, 2003), the operation of the inherence heuristic (Cimpian & Salomon, Behavioral and Brain Sciences, 37, 461–480, 2014a; Behavioral and Brain Sciences, 37, 506–527, 2014b), and psychological essentialism (Gelman, 2003).

20.
According to one argument for Animalism about personal identity, animal, but not person, is a Wigginsian substance concept—a concept that tells us what we are essentially. Person supposedly fails to be a substance concept because it is a functional concept that answers the question “what do we do?” without telling us what we are. Since person is not a substance concept, it cannot provide the criteria for our coming into or going out of existence; animal, on the other hand, can provide such criteria. This argument has been defended by Eric Olson, among others. I argue that this line of reasoning fails to show Animalism to be superior to the Psychological Approach, for the following two reasons: (1) human animal, animal, and organism are all functional concepts, and (2) the distinction between what something is and what it does is illegitimate on the reading that the argument needs.
