Similar literature
20 similar records found (search time: 6 ms)
1.
2.
3.
A metamodel for the traditional regression concept, consisting of two general parts, is proposed. The first part contains the basic and general definition of the concept, together with its central assumptions. The phenomenon is essentially defined as a conditional trend, predicted within a linear and bivariate regression system for a specified population, as a function of the imperfectness of the correlation and of the extremeness of the selected score. The second part of the metamodel consists of special cases of the regression phenomenon, defined with respect to various psychometric/distributional aspects, to various "sources" that reduce the correlation coefficient, and to observed and true scores. Finally, the general features of the metamodel are elaborated, and correct and incorrect interpretations are pointed out.
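The conditional-trend definition lends itself to a simple numerical illustration: in a bivariate regression system with correlation r, the expected standardized score on the second variable is r times the selected standardized score, so the predicted regression toward the mean grows both with the imperfectness of the correlation and with the extremeness of the selection. A minimal sketch, not from the paper itself; the function name and example values are illustrative:

```python
def expected_regression(z_selected, r):
    """Expected regression toward the mean, in standard-score units,
    for a selected score z_selected under a bivariate correlation r."""
    z_predicted = r * z_selected      # E[z_y | z_x] = r * z_x
    return z_selected - z_predicted   # shortfall = (1 - r) * z_x

# A weaker correlation and a more extreme selection both increase the effect:
print(expected_regression(2.0, 0.5))   # prints 1.0
print(expected_regression(2.0, 0.9))   # prints a much smaller value
```

With a perfect correlation (r = 1) the predicted regression vanishes, which is why the phenomenon is tied to the imperfectness of the correlation.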

4.
5.
An experiment is described which demonstrated that relative judgments of the more probable of two statements are quicker if the statements are both probable rather than improbable. For judgments of the less probable, the reverse result was obtained. This phenomenon is discussed in relation to various theories of judgment and choice. A theory is presented which assumes that judgment involves a relation between a stimulus and a word, and which follows Thurstone's notion that stimuli differ in their discriminal dispersions. This new theory is shown to be consistent with recent results in psycholinguistics.

6.
7.
8.
9.
In a previous paper (Dixon, 1958b) one of the authors reported an experiment which suggested that apparent changes in the threshold for one eye occur as a function of the emotionality of stimulus material presented below threshold to the other eye. The following experiment describes an attempt to investigate further the validity of this conclusion. The results are consistent with those from the previous research.

10.
11.
Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.
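An index of merit of the kind described here can be sketched in a few lines. The replication probability below uses a rough normal approximation for a standardized effect d observed with n observations per group; the particular formula, function names, and utility function are illustrative assumptions, not the paper's exact formulation:

```python
import math
from statistics import NormalDist

def p_rep(d, n):
    # Rough probability that a replication reproduces the sign of an
    # observed standardized effect d (normal approximation, equal-n groups).
    se_d = math.sqrt(2.0 / n)                     # approximate SE of d
    return NormalDist().cdf(d / (se_d * math.sqrt(2)))

def expected_utility(d, n, utility, false_positive_cost=1.0):
    # Credit both replicability and magnitude; debit the weighted risk
    # that the effect fails to replicate.
    p = p_rep(d, n)
    return p * utility(d) - (1 - p) * false_positive_cost

# A large effect earns a higher index of merit than a small one at the same n:
big = expected_utility(0.8, 50, utility=lambda d: d)
small = expected_utility(0.2, 50, utility=lambda d: d)
```

Raising `false_positive_cost` an order of magnitude above the value of a true positive recovers the significance-test-like special case in which replicability dominates the index.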

12.
13.
Deborah G. Mayo. Synthese, 1983, 57(3): 297-340
Theories of statistical testing may be seen as attempts to provide systematic means for evaluating scientific conjectures on the basis of incomplete or inaccurate observational data. The Neyman-Pearson Theory of Testing (NPT) has purported to provide an objective means for testing statistical hypotheses corresponding to scientific claims. Despite their widespread use in science, methods of NPT have themselves been accused of failing to be objective; and the purported objectivity of scientific claims based upon NPT has been called into question. The purpose of this paper is first to clarify this question by examining the conceptions of (I) the function served by NPT in science, and (II) the requirements of an objective theory of statistics upon which attacks on NPT's objectivity are based. Our grounds for rejecting these conceptions suggest altered conceptions of (I) and (II) that might avoid such attacks. Second, we propose a reformulation of NPT, denoted by NPT*, based on these altered conceptions, and argue that it provides an objective theory of statistics. The crux of our argument is that by being able to objectively control error frequencies NPT* is able to objectively evaluate what has or has not been learned from the result of a statistical test.
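The control of error frequencies that the argument turns on can be made concrete for the simplest case, a one-sided z-test with known standard deviation. A minimal sketch; the scenario and numbers are illustrative, not from the paper:

```python
from statistics import NormalDist

def error_frequencies(mu0, mu1, sigma, n, alpha=0.05):
    """Type I and Type II error rates for a one-sided z-test of
    H0: mu = mu0 against the alternative mu = mu1 (with mu1 > mu0)."""
    se = sigma / n ** 0.5
    cutoff = mu0 + NormalDist().inv_cdf(1 - alpha) * se  # rejection threshold
    beta = NormalDist(mu1, se).cdf(cutoff)               # P(retain H0 | H1 true)
    return alpha, beta

# Increasing the sample size drives the Type II error frequency down
# while the Type I error frequency stays fixed at alpha:
_, beta_small = error_frequencies(0.0, 0.5, 1.0, 25)
_, beta_large = error_frequencies(0.0, 0.5, 1.0, 100)
```

It is this kind of pre-specifiable, design-level control over both error rates that the paper takes as the objective core of NPT*.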

14.
Metacontrast conditions were created by the onset of flanking lights designed to mask the prior blink of an otherwise steady center light. For pseudometacontrast trials, the center light did not blink in advance of the flanking lights. Responses were an immediate finger-key release to the first detectable change in the visual display followed by a statement of whether the center light had been doused. A signal-detection analysis was used to avoid both threshold and response-criterion problems. Verbal report was, on the average, more sensitive than finger latency in detecting the masked blinks. However, there were large and consistent individual differences; reaction time was the more sensitive for a few subjects. Data analysis revealed that each response system showed detection with the other system "partialed out." A model was offered in which verbal- and finger-response systems act in parallel, with uncorrelated variability between systems accounting for the subception effect.
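The signal-detection analysis used here separates sensitivity from response criterion; the standard index is d' = z(hit rate) - z(false-alarm rate). A short sketch with made-up hit and false-alarm rates (the numbers are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Criterion-free sensitivity index: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates for one subject: here the verbal report detects the
# masked blink more sensitively than the finger response, matching the
# average pattern reported (though a few subjects showed the reverse).
verbal = d_prime(0.80, 0.30)
finger = d_prime(0.70, 0.40)
```

Because d' is unaffected by where the subject places the response criterion, it sidesteps the threshold and response-bias problems the abstract mentions.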

15.
16.
17.
The relevance of the traditional regression concept to measurement of change and to causal research is investigated by applying a metamodel for the concept as a frame of reference in a logical analysis of these two research fields. It is concluded that the concept is considerably less important than ordinarily maintained in the methodological research literature. As for measurement of change, only a special error case of the concept often constitutes a biasing factor, while all special cases are of little interest for describing true changes. Concerning causal research, the regression phenomenon will probably seldom constitute any threat to internal validity. The unjustified attention given to the concept in psychological research is probably mainly due to the fact that central assumptions have been overlooked and that a thorough logical analysis of the relevance issue has not been attempted previously.

18.
19.
20.
Implicit learning and statistical learning: one phenomenon, two approaches
The domain-general learning mechanisms elicited in incidental learning situations are of potential interest in many research fields, including language acquisition, object knowledge formation, and motor learning. They have been the focus of studies on implicit learning for nearly 40 years. Stemming from a different research tradition, studies on statistical learning, carried out in the past 10 years following the seminal work of Saffran and collaborators, appear to be closely related, and the similarity between the two approaches is strengthened further by their recent evolution. However, implicit learning and statistical learning research favor different interpretations, focusing on the formation of chunks and on statistical computations, respectively. We examine these differing approaches and suggest that this divergence opens up a major theoretical challenge for future studies.
