Similar Documents
20 similar documents retrieved.
1.
Iverson, Lee, and Wagenmakers (2009) claimed that Killeen’s (2005) statistic p_rep overestimates the “true probability of replication.” We show that Iverson et al. confused the probability of replication of an observed direction of effect with a probability of coincidence—the probability that two future experiments will return the same sign. The theoretical analysis is punctuated with a simulation of the predictions of p_rep for a realistic random-effects world of representative parameters, when those are unknown a priori. We emphasize throughout that p_rep is intended to evaluate the probability of a replication outcome after observations, not to estimate a parameter. Hence, the usual conventional criteria (unbiasedness, minimum variance estimator) for judging estimators are not appropriate for probabilities such as p and p_rep.
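As a rough illustration of the distinction drawn above, the following sketch simulates a random-effects world and contrasts the two probabilities. All numerical values, and the choice to let the replication share the original study's true effect, are illustrative assumptions by the editor, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    mu, tau, sigma = 0.3, 0.2, 0.5      # assumed mean effect, heterogeneity, sampling error

    delta = rng.normal(mu, tau, n)      # true effects for the original experimental lines
    d_obs = rng.normal(delta, sigma)    # observed effects
    d_rep = rng.normal(delta, sigma)    # replications sharing the same true effect (simplifying choice)

    delta1 = rng.normal(mu, tau, n)     # two independent future experiments from the same world
    delta2 = rng.normal(mu, tau, n)
    d_new1 = rng.normal(delta1, sigma)
    d_new2 = rng.normal(delta2, sigma)

    p_replication = np.mean(np.sign(d_rep) == np.sign(d_obs))    # replication of an observed sign
    p_coincidence = np.mean(np.sign(d_new1) == np.sign(d_new2))  # two future studies agree in sign
    print(p_replication, p_coincidence)

The two printed proportions generally differ, which is the point at issue: one conditions on an observed outcome, the other does not.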

2.
Several authors have cautioned against using Fisher's z-transformation in random-effects meta-analysis of correlations, which seems to perform poorly in some situations, especially with substantial inter-study heterogeneity. Attributing this performance largely to the direct z-to-r transformation (DZRT) of Fisher z results (e.g. point estimate of mean correlation), in a previous paper Hafdahl (2009) proposed point and interval estimators of the mean Pearson r correlation that instead use an integral z-to-r transformation (IZRT). The present Monte Carlo study of these IZRT Fisher z estimators includes comparisons with their DZRT counterparts and with estimators based on Pearson r correlations. The IZRT point estimator was usually more accurate and efficient than its DZRT counterpart and comparable to the two Pearson r point estimators – better in some conditions but worse in others. Coverage probability for the IZRT confidence intervals (CIs) was often near nominal, much better than for the DZRT CIs, and comparable to coverage for the Pearson r CIs; every approach's CI fell markedly below nominal in some conditions. The IZRT estimators contradict warnings about Fisher z estimators' poor performance. Recommendations for practising research synthesists are offered, and an Appendix provides computing code to implement the IZRT as in the real-data example.
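For readers who want the two back-transformations side by side, here is a minimal numerical sketch of the general idea: the direct transformation applies tanh to the estimated mean Fisher z, while the integral transformation averages tanh over an assumed normal distribution of study-level Fisher z parameters. The exact estimators studied by Hafdahl may differ in detail; the function names and inputs below are the editor's.

    import numpy as np
    from scipy import integrate, stats

    def mean_r_dzrt(mu_z):
        # direct z-to-r: back-transform the estimated mean Fisher z itself
        return np.tanh(mu_z)

    def mean_r_izrt(mu_z, tau):
        # integral z-to-r: average tanh(z) over an assumed N(mu_z, tau^2)
        # distribution of study-level Fisher z parameters
        f = lambda z: np.tanh(z) * stats.norm.pdf(z, loc=mu_z, scale=tau)
        value, _ = integrate.quad(f, mu_z - 8 * tau, mu_z + 8 * tau)
        return value

    print(mean_r_dzrt(0.55), mean_r_izrt(0.55, 0.3))   # the integral value is the smaller of the two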

3.
Several authors have studied or used the following estimation strategy for meta-analysing correlations: obtain a point estimate or confidence interval for the mean Fisher z correlation, and transform this estimate to the Pearson r metric. Using the relationship between Fisher z and Pearson r random variables, I demonstrate the potential discrepancy induced by directly z-to-r transforming a mean correlation parameter. Point and interval estimators based on an alternative integral z-to-r transformation are proposed. Analytic expressions for the expectation and variance of certain meta-analytic point estimators are also provided, as are selected moments of correlation parameters; numerical examples are included. In an application of these analytic results, the proposed point estimator outperformed its usual direct z-to-r counterpart and compared favourably with an estimator based on Pearson r correlations. Practical implications, extensions of the proposed estimators, and uses for the analytic results are discussed.
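A second-order expansion makes the discrepancy explicit; this is an editor's sketch of the standard delta-method argument, not the paper's own expressions. With study-level Fisher z parameters ζ ~ N(μ, τ²) and ρ_μ = tanh μ,

    \[
      \mathbb{E}[\tanh\zeta] \;\approx\; \tanh\mu + \tfrac{1}{2}\,\tau^{2}\tanh''(\mu)
      \;=\; \rho_\mu - \tau^{2}\rho_\mu\bigl(1-\rho_\mu^{2}\bigr),
      \qquad \zeta \sim N(\mu,\tau^{2}),\ \rho_\mu = \tanh\mu,
    \]

so the directly transformed mean parameter, tanh μ = ρ_μ, exceeds the mean correlation E[tanh ζ] whenever μ > 0 and τ² > 0, and the gap grows with the heterogeneity τ².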

4.
This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson’s r is biased, except for limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
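For context, one commonly cited form of the Olkin–Pratt unbiased correction referred to above, together with its familiar series approximation, can be computed as follows. This is an editor's sketch for illustration; n denotes the sample size, and the exact formulas compared in the article may differ.

    import numpy as np
    from scipy.special import hyp2f1

    def olkin_pratt(r, n):
        # One commonly cited form of the Olkin-Pratt correction of Pearson r
        # (n is the sample size; n > 4 is assumed here)
        return r * hyp2f1(0.5, 0.5, (n - 2) / 2.0, 1.0 - r ** 2)

    def olkin_pratt_approx(r, n):
        # widely used series approximation to the same correction
        return r * (1.0 + (1.0 - r ** 2) / (2.0 * (n - 3.0)))

    print(olkin_pratt(0.5, 20), olkin_pratt_approx(0.5, 20))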

5.
In their useful logic for a computer network, Shramko and Wansing generalize the initial values of Belnap's 4-valued logic to the set 16, the power-set of Belnap's 4. This generalization results in a very specific algebraic structure — the trilattice SIXTEEN_3 with three orderings: information, truth, and falsity. In this paper, a slightly different way of generalizing is presented. As a base for further generalization the set 3 is chosen, whose initial values are a (incoming data is asserted), d (incoming data is denied), and u (incoming data is neither asserted nor denied, corresponding to the answer “don't know”). The power-set of 3, that is, the set 8, is then considered. It turns out that there are not three but four orderings naturally defined on the set 8, and they form the tetralattice EIGHT_4. Besides the three ordering relations mentioned above, there is an extra uncertainty ordering. Quite predictably, the logics generated by the a-order (truth order) and the d-order (falsity order) coincide with first-degree entailment. Finally, a logic with two kinds of operations (a-connectives and d-connectives) and a consequence relation defined via the a-ordering is considered. An adequate axiomatization for this logic is proposed.

6.
The intraclass correlation, ρ, is a parameter featured in much psychological research. Two commonly used estimators of ρ, the maximum likelihood and least squares estimators, are known to be negatively biased. Olkin and Pratt (1958) derived the minimum variance unbiased estimator of the intraclass correlation, but use of this estimator has apparently been impeded by the lack of a closed form solution. This note briefly reviews the unbiased estimator and gives a FORTRAN 77 subroutine to calculate it. The first author was supported by an All-University Fellowship from the University of Southern California.
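By way of illustration, the conventional one-way ANOVA (least-squares) estimator mentioned above can be computed as in the sketch below. This shows only the familiar negatively biased estimator, not the Olkin–Pratt unbiased estimator or the FORTRAN 77 subroutine from the note; the synthetic data are made up.

    import numpy as np

    def icc_anova(data):
        # Conventional one-way ANOVA estimator of the intraclass correlation for
        # a (n_groups x group_size) array; this is the familiar negatively biased
        # estimator, not Olkin and Pratt's unbiased estimator.
        g, k = data.shape
        group_means = data.mean(axis=1)
        grand_mean = data.mean()
        msb = k * np.sum((group_means - grand_mean) ** 2) / (g - 1)
        msw = np.sum((data - group_means[:, None]) ** 2) / (g * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(1)
    demo = rng.normal(0, 1, size=(10, 1)) + rng.normal(0, 1, size=(10, 5))  # synthetic groups
    print(icc_anova(demo))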

7.
A commonly used method of estimating population sensitivity is the so-called averaged d′ method. In this method, the arithmetic mean of a set of individual d′ is usually taken as a population sensitivity estimator. This practice ignores the fact that the individual d′ itself is an estimator with an inherent variance. For observations with different levels of precision, the arithmetic mean is not the best estimator of a population parameter. It may lead to an estimate with a large variation. Another fact, which is often ignored, is that the variance of individual d′ involves both between- and within-subject variations in a random effects model when population sensitivity and its level of precision are estimated. Failing to account for both components of variance leads to an underestimate of variation and an overestimate of precision for the estimator. In this paper a lognormal distribution rather than a normal distribution is assumed for individual sensitivity. An iterative weighting procedure is proposed for estimating population sensitivity on the log scale on the basis of a random effects model. An ordinary weighting procedure is proposed for estimating group sensitivity on the log scale on the basis of a fixed effects model. The levels of precision of population and group sensitivity estimators are also given. Numerical examples illustrate the estimation procedures.
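As a generic stand-in for the weighting idea described above (the paper's own iterative procedure is not reproduced here), the following sketch pools individual log d′ values with random-effects inverse-variance weights, using a DerSimonian–Laird-style moment estimate of the between-subject variance. All inputs are hypothetical.

    import numpy as np

    def pooled_log_dprime(log_d, var_log_d):
        # Random-effects pooling of individual log d' values with a
        # DerSimonian-Laird moment estimate of between-subject variance;
        # a generic sketch, not the procedure proposed in the paper.
        y = np.asarray(log_d, float)
        v = np.asarray(var_log_d, float)          # within-subject variances of log d'
        w_fixed = 1.0 / v
        mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
        q = np.sum(w_fixed * (y - mu_fixed) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) /
                   (np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)))
        w = 1.0 / (v + tau2)                      # random-effects weights
        mu = np.sum(w * y) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        return mu, se, tau2

    mu, se, tau2 = pooled_log_dprime([0.2, 0.5, 0.9, 0.4], [0.04, 0.06, 0.05, 0.03])
    print(np.exp(mu), se, tau2)   # pooled d' on the original scale, SE and tau^2 on the log scale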

8.
We consider two topological interpretations of the modal diamond—as the closure operator (C-semantics) and as the derived set operator (d-semantics). We call the logics arising from these interpretations C-logics and d-logics, respectively. We axiomatize a number of subclasses of the class of nodec spaces with respect to both semantics, and characterize exactly which of these classes are modally definable. It is demonstrated that the d-semantics is more expressive than the C-semantics. In particular, we show that the d-logics of the six classes of spaces considered in the paper are pairwise distinct, while the C-logics of some of them coincide. Mathematics Subject Classifications (2000): 03B45, 54G99.

9.
Jordi Valor Abad, Synthese (2008) 160(2): 183–202
Proponents of the explanatory gap (EG) claim that consciousness is a mystery. No one has ever given an account of how a physical thing could be identical to a phenomenal one. We fully understand the identity between water and H2O but the identity between pain and the firing of C-fibers is inconceivable. Mark Johnston [Journal of Philosophy (1997), 564–583] suggests that if water is constituted by H2O, not identical to it, then the explanatory gap becomes a pseudo-problem. This is because all “manifest kinds”—those identified in experience—are on a par in not being identical to their physical bases, so that the special problem of the inconceivability of ‘pain = the firing of C-fibers’ vanishes. Moreover, the substitute relation, constitution, raises no explanatory difficulties: pain can be constituted by its physical base, as can water. The thesis of this paper is that the EG does not disappear when we substitute constitution for identity. I examine four arguments for the EG, and show that none of them is undermined by the move from identity to constitution.

10.
There is a heated dispute among consequentialists concerning the following deontic principle:

(DC) O(a & b) → O(a) & O(b)
The principle states that for any acts (or any bearers of normative status) a and b, if it is obligatory for a specific agent to do the conjunctive (or compound) act a & b, then that agent is obligated to do a and is also obligated to do b—the deontic operator of obligation distributes over conjunction. Possibilists—those who believe that we should always pursue a “best” possible course of action available to us—accept the principle as true. Actualists—those who believe that certain future facts about the actual world can generate obligations incompatible with the best possible course of action available to us—reject the principle as false. And recent commentators on the dispute—some who endorse DC, others who reject it—have attempted to dig out and defend intermediary positions, suggesting that extreme versions of each view are unsatisfactory. I’m out to defend DC from the actualist attack. Here I briefly present the central actualist argument against DC. I then show that possibilism has all of the resources to explain the phenomena with which actualists are so concerned. Next, I try to diagnose the actualists’ malcontent: The relevance of certain subjunctive conditionals to consequentialist reasoning has been vastly overemphasized. Finally, I attempt to shed some light on the nature of consequentialist conditionals by incorporating possibilist insights into a semantics for subjunctive conditionals appropriate for consequentialist theorizing.
Jean-Paul Vessel

11.
Relational Services
Recent research projects have looked for social innovations, i.e., people creating solutions outside the mainstream patterns of production and consumption. An analysis of these innovations indicates the emergence of a particular kind of service configuration—defined here as relational services—which requires intensive interpersonal relations to operate. Based on a comparative analysis between standard and relational services, we propose to the Service Design discipline an interpretative framework able to reinforce its ability to deal with the interpersonal relational qualities in services, indicating how these qualities can be understood and favored by design activities, as well as the limits of this design intervention. Martin Buber’s conceptual framework is presented as the main interpretative basis. Buber describes two ways of interacting (“I-Thou” and “I-It”). Relational services are those most favoring “I-Thou” interpersonal encounters.
Ezio Manzini

12.
This article provides a formal definition for a sensitivity measure, d_g, between two multivariate stimuli. In recent attempts to assess perceptual representations using qualitative tests on response probabilities, the concept of a d′ between two multidimensional stimuli has played a central role. For example, Kadlec and Townsend (1992a, 1992b) proposed several tests based on multidimensional signal detection theory that allow conclusions concerning the perceptual and/or decisional interactions of stimulus dimensions. One proposition, referred to as the diagonal d′ test, relies on specific stimulus subsets of a feature-complete factorial identification task to infer perceptual separability. Also, Ashby and Townsend (1986), in a similar manner, attempted to relate perceptual independence to dimensional orthogonality in Tanner’s (1956) model, which also involves d′ between two multivariate signals. An analysis of the proposed d_g reveals shortcomings in the diagonal d′ test and also demonstrates that the assumptions behind equating perceptual independence to dimensional orthogonality are too weak. This d_g can be related to a common measure of statistical distance, Mahalanobis distance, in the special case of equal covariance matrices.
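In the equal-covariance special case mentioned at the end of the abstract, the distance in question reduces to the Mahalanobis distance between the two stimulus means. A minimal sketch, with made-up means and covariance:

    import numpy as np

    def mahalanobis_dprime(mu1, mu2, cov):
        # Sensitivity-style distance between two multivariate stimuli in the
        # equal-covariance case: d = sqrt((mu1 - mu2)' Sigma^{-1} (mu1 - mu2))
        diff = np.asarray(mu1, float) - np.asarray(mu2, float)
        return float(np.sqrt(diff @ np.linalg.solve(np.asarray(cov, float), diff)))

    mu_a, mu_b = [0.0, 0.0], [1.0, 0.5]          # hypothetical perceptual means
    sigma = [[1.0, 0.3], [0.3, 1.0]]             # hypothetical common covariance
    print(mahalanobis_dprime(mu_a, mu_b, sigma))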

13.
This paper challenges the view that arguments are (by definition, as it were) attempts to persuade or convince an audience to accept (or reject) a point of view by presenting reasons for (or against) that point of view. I maintain, first, that an arguer need not intend any effect beyond that of making it manifest to readers or hearers that there is a reason for doing some particular thing (e.g., for believing a certain proposition, or alternatively for rejecting it), and second that when an arguer is in fact trying to induce an effect above and beyond rendering a reason manifest, the effect intended—the use to which his or her argument is put—need not be that hearers “do” what the stated reasons are reasons for “doing.” Where the actual or intended effect of making a reason R for “doing X” manifest is something other than “doing X,” I call it an oblique—as opposed to a direct—effect of making that reason manifest. The core of the paper presents an overview or map of the main categories of effect which arguments can have, and the main sub-types within each category, calling attention to the points at which such effects can be indirect or oblique effects. The purpose of that typology is to make it clear (i) how oblique effects can come about and (ii) how important a role they can play in the conduct of argumentation.

14.
Modified motion detectors can be used to monitor locomotor activity and measure endogenous rhythms. Although these devices can help monitor insects in their home cages, the small size of the animals requires a very short wavelength detector. We modified a commercial microwave-based detection device, connected the detector’s output to the digital input of a computer, and validated the device by recording circadian and ultradian rhythms. Periplaneta americana were housed in individual cages, and their activity was monitored at 18°C and subsequently at 28°C in constant darkness. Time series were analyzed by a discrete Fourier transform and a chi-square periodogram. Q10 values and the circadian free-running period confirmed the data reported in the literature, validating the apparatus. Moreover, the spectral analysis and periodogram revealed the presence of ultradian rhythmicity in the range of 1–8 h.
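A minimal sketch of the discrete-Fourier-transform step described above, applied to a binned activity record to locate the dominant period; the chi-square periodogram is not reproduced, and the sampling parameters and synthetic data are illustrative.

    import numpy as np

    def dominant_period_hours(activity_counts, bin_minutes=6):
        # Power spectrum of a binned activity record; returns the period (in
        # hours) of the largest non-DC spectral peak.
        x = np.asarray(activity_counts, float)
        x = x - x.mean()
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=bin_minutes / 60.0)  # cycles per hour
        k = np.argmax(power[1:]) + 1                           # skip the DC bin
        return 1.0 / freqs[k]

    # synthetic example: a ~24 h rhythm sampled every 6 minutes for 10 days
    t = np.arange(0, 10 * 24 * 60, 6) / 60.0                   # time in hours
    demo = 5 + 3 * np.sin(2 * np.pi * t / 24.0) + np.random.default_rng(2).normal(0, 1, t.size)
    print(dominant_period_hours(demo))                         # ~24 for this synthetic record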

15.
The literature of bioethics suffers from two serious problems. (1) Most authors are unable to take seriously both the rights of the great apes and of severely disabled human infants. Rationalism—moral status rests on rational capacities—wrongly assigns a higher moral status to the great apes than to all severely disabled human infants with less rational capacities than the great apes. Anthropocentrism—moral status depends on membership in the human species—falsely grants all humans a higher moral status than the great apes. Animalism—moral status is dependent on the ability to suffer—mistakenly equates the moral status of humans and most animals. (2) The concept person is widely used for justificatory purposes, but it seems that it cannot play such a role. It seems that it is either redundant or unable to play any justificatory role. I argue that we can solve the second problem by understanding person as a thick evaluative concept. This then enables us to justify assigning a higher moral status to the great apes than to simple animals: the great apes are persons. To solve the first problem, I argue that certain severely disabled infants have a higher moral status than the great apes because they are dependent upon human relationships for their well-being. Only very limited abilities are required for such relationships, and the question who is capable of them must be based on thick evaluative concepts. Thus, it turns out that to make progress in bioethics we must assign thick evaluative concepts a central role.
Logi Gunnarsson

16.
This study examined the psychometric properties of the Teasing Questionnaire-Revised (TQ-R) in a non-clinical community sample of adults. The TQ-R, Brief Fear of Negative Evaluation Scale, Beck Depression Inventory-II, and UCLA Loneliness Scale were administered to 355 adults, aged 18–86 years. Confirmatory factor analysis showed the five-factor teasing model proposed by Storch et al. (Journal of Anxiety Disorders, 18, 665–679, 2004c) was not a good fit for these data. A three-factor model consisting of Academic, Social, and Appearance factors was found through exploratory analyses [termed the Teasing Questionnaire-Revised-Short Form (TQ-R-S)]. Internal consistency was good for the TQ-R-S Total Score and resultant TQ-R-S Academic, Social, and Appearance factors. TQ-R-S scores were directly correlated with current psychosocial functioning with correlations of a small to medium effect size. These results provide evidence that teasing during childhood is linked to later symptoms of depression, anxiety, and loneliness.

17.
Many studies have suggested that a word’s orthographic form must be processed before its meaning becomes available. Some interpret the (null) finding of equal facilitation after semantically transparent and opaque morphologically related primes in early stages of morphological processing as consistent with this view. Recent literature suggests that morphological facilitation tends to be greater after transparent than after opaque primes, however. To determine whether the degree of semantic transparency influences parsing into a stem and a suffix (morphological decomposition) in the forward masked priming variant of the lexical decision paradigm, we compared patterns of facilitation between semantically transparent (e.g., coolant-cool) and opaque (e.g., rampant-ramp) prime-target pairs. Form properties of the stem (frequency, neighborhood size, and prime-target letter overlap), as well as related-unrelated and transparent-opaque affixes, were matched. Morphological facilitation was significantly greater for semantically transparent pairs than for opaque pairs. Ratings of prime-target relatedness predicted the magnitude of facilitation. The results limit the scope of form-then-meaning models of word recognition and demonstrate that semantic similarity can influence even early stages of morphological processing. The research reported here was supported by National Institute of Child Health and Development Grant HD-01994 to Haskins Laboratories.

18.
In this paper a theory of finitistic and frequentistic approximations — in short: f-approximations — of probability measures P over a countably infinite outcome space N is developed. The family of subsets of N for which f-approximations converge to a frequency limit forms a pre-Dynkin system D. The limiting probability measure over D can always be extended to a probability measure, but this measure is not always σ-additive. We conclude that probability measures can be regarded as idealizations of limiting frequencies if and only if σ-additivity is not assumed as a necessary axiom for probabilities. We prove that σ-additive probability measures can be characterized in terms of so-called canonical and in terms of so-called full f-approximations. We also show that every non-σ-additive probability measure is f-approximable, though neither canonically nor fully f-approximable. Finally, we transfer our results to probability measures on open or closed formulas of first-order languages.

19.
20.
Chris Heathwood has recently put forward a novel and ingenious argument against the view that intrinsic value is analyzable in terms of fitting attitudes. According to Heathwood, this view holds water only if the related but distinct concept of welfare—intrinsic value for a person—can be analyzed in terms of fitting attitudes too. Moreover, he argues against such an analysis of welfare by appealing to the rationality of our bias towards the future. In this paper, I argue that so long as we keep the tenses and the intrinsic/extrinsic distinction right, the fitting-attitudes analysis of welfare can be shown to survive Heathwood’s criticism.
Jens Johansson

