Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
The concepts of differential prediction and multiple absolute prediction were developed in earlier papers [2, 3]. Methods for determining optimal distribution of testing time for each type of prediction are available [4, 5] and are appropriate for use provided that no altered time allotment approaches zero. In this article the methods developed in [4, 5] are extended to include cases where the altered time allotment for one or more tests may approach zero. The procedures developed are illustrated by numerical examples, after which the mathematical rationales are provided.

3.
P. S. Dwyer, Psychometrika, 1939, 4(2), 163–171
A method is indicated by which multiple factor analysis may be used in determining a number r, and then in selecting r predicting variables out of n variables so that each of the remaining n − r variables may be predicted almost as well from the r variables as it could be predicted from all the n − 1 variables.
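The selection goal can be sketched directly in code. This is not Dwyer's factor-analytic procedure; the greedy criterion, the function names, and the illustrative one-factor loadings are assumptions made purely for the example:

```python
import numpy as np

def smc(R, k, preds):
    """Squared multiple correlation of variable k regressed on the
    variables indexed by `preds`, computed as r' Rpp^{-1} r."""
    Rpp = R[np.ix_(preds, preds)]
    rkp = R[np.ix_([k], preds)].ravel()
    return float(rkp @ np.linalg.solve(Rpp, rkp))

def greedy_predictor_subset(R, r):
    """Greedily pick r variables so that the worst-predicted left-out
    variable retains as high an SMC as possible on the chosen set."""
    n = R.shape[0]
    chosen = []
    for _ in range(r):
        best, best_score = None, -np.inf
        for j in range(n):
            if j in chosen:
                continue
            trial = chosen + [j]
            rest = [k for k in range(n) if k not in trial]
            # criterion: the minimum SMC among the left-out variables
            score = min(smc(R, k, trial) for k in rest)
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

# Hypothetical one-factor correlation matrix: variable 0 loads highest,
# so it should be the single best predictor of the rest.
lam = np.array([0.9, 0.5, 0.5, 0.5])
R = np.outer(lam, lam)
np.fill_diagonal(R, 1.0)
print(greedy_predictor_subset(R, 1))
```

With a single predictor, the SMC of a left-out variable reduces to the squared bivariate correlation, so the variable with the strongest loading is chosen first.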

4.
5.
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
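The cost-optimal allocation formulas themselves are not reproduced here, but Yuen's two-group test that they serve can be sketched. The 20% trimming proportion and the helper names are assumptions; refer the statistic to a Student-t distribution with the returned degrees of freedom to obtain a p-value:

```python
import numpy as np

def yuen_statistic(x, y, trim=0.2):
    """Yuen's two-sample statistic on trimmed means, robust to unequal
    variances and heavy tails. Returns (t, df)."""
    def tstats(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = a.size
        g = int(np.floor(trim * n))
        h = n - 2 * g                        # effective (trimmed) sample size
        mean_t = a[g:n - g].mean()           # trimmed mean
        w = a.copy()
        w[:g] = a[g]                         # winsorize the lower tail
        w[n - g:] = a[n - g - 1]             # winsorize the upper tail
        d = ((w - w.mean()) ** 2).sum() / (h * (h - 1))
        return mean_t, d, h

    m1, d1, h1 = tstats(x)
    m2, d2, h2 = tstats(y)
    t = (m1 - m2) / np.sqrt(d1 + d2)
    # Welch-style degrees of freedom on the winsorized variances
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df
```

Two identical samples give t = 0 exactly; a pure location shift moves the trimmed means apart while leaving the winsorized variances unchanged.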

6.
A new method, with an application program in Matlab code, is proposed for testing item performance models on empirical databases. This method uses data intraclass correlation statistics as expected correlations to which one compares simple functions of correlations between model predictions and observed item performance. The method rests on a data population model whose validity for the considered data is suitably tested; it has been verified for three behavioural measure databases. Contrary to usual model selection criteria, this method provides an effective way of testing under-fitting and over-fitting, answering the usually neglected question "does this model suitably account for these data?"

7.
This paper describes a relationship between the variance-covariance matrix of test items and Woodbury's concept of the standard length of a test. An index of item-test relationship is described in standard length terms. The sum of these indices for the items in a test is equal to the square of Jackson's coefficient of sensitivity.

8.
It is demonstrated that the squared multiple correlation of a variable with the remaining variables in a set of variables is a function of the communalities and the squared canonical correlations between the observed variables and common factors. This equation is shown to imply a strict inequality between the squared multiple correlation and communality.
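The inequality can be illustrated numerically under a hypothetical one-factor model (the loadings below are made up): the SMC of each variable, computed as 1 − 1/(R⁻¹)ᵢᵢ, stays strictly below its communality λᵢ²:

```python
import numpy as np

# One-factor model: off-diagonal R = lam lam', unit variances on the diagonal.
# Loadings are illustrative, not taken from the paper.
lam = np.array([0.9, 0.8, 0.7, 0.6])
R = np.outer(lam, lam)
np.fill_diagonal(R, 1.0)

Rinv = np.linalg.inv(R)
smc = 1.0 - 1.0 / np.diag(Rinv)   # squared multiple correlation of each
                                  # variable with the remaining three
h2 = lam ** 2                     # communality of each variable
print(smc, h2)                    # every SMC falls below its communality
```

The inequality is strict here because every unique variance 1 − λᵢ² is positive; SMC approaches the communality only as uniqueness vanishes.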

9.
A simple proof that the squared multiple correlation of a variable with the remaining variables in the set of variables is a lower bound to the communality of that variable is presented.

10.
The cognitive reflection test (CRT) is a widely used measure of the propensity to engage in analytic or deliberative reasoning in lieu of gut feelings or intuitions. CRT problems are unique because they reliably cue intuitive but incorrect responses and, therefore, appear simple among those who do poorly. By virtue of being composed of so-called “trick problems” that, in theory, could be discovered as such, it is commonly held that the predictive validity of the CRT is undermined by prior experience with the task. Indeed, recent studies have shown that people who have had previous experience with the CRT score higher on the test. Naturally, however, it is not obvious that this actually undermines the predictive validity of the test. Across six studies with ~2,500 participants and 17 variables of interest (e.g., religious belief, bullshit receptivity, smartphone usage, susceptibility to heuristics and biases, and numeracy), we did not find a single case in which the predictive power of the CRT was significantly undermined by repeated exposure. This occurred despite the fact that we replicated the previously reported increase in accuracy among individuals who reported previous experience with the CRT. We speculate that the CRT remains robust after multiple exposures because less reflective (more intuitive) individuals fail to realize that being presented with apparently easy problems more than once confers information about the task’s actual difficulty.

11.
There are a number of methods of factoring the correlation matrix which require the calculation of a table of residual correlations after each factor has been extracted. This is perhaps the most laborious part of factoring. The method to be described here avoids the computation of residuals after each factor has been computed. Since the method turns on the selection of a set of constellations or clusters of test vectors, it will be called a multiple group method of factoring. The method can be used for extracting one factor at a time if that is desired, but it will be considered here for the more interesting case in which a number of constellations are selected from the correlation matrix at the start. The result of this method of factoring is a factor matrix F which satisfies the fundamental relation FF′ = R.
This study is one of a series of investigations in the development of multiple factor analysis and its application to the study of primary mental abilities. We wish to acknowledge the financial assistance from the Social Science Research Committee of The University of Chicago which has made possible the work of the Psychometric Laboratory.

12.
P. Horst, Psychometrika, 1948, 13(3), 125–134
A battery of pencil-and-paper tests is commonly used for predicting a single criterion. If the score on each test is the number of correct answers, the composite battery score would normally be the sum of the weighted test scores, where the weights are the raw score regression weights. Knowing the reliability of each test, it is possible to alter the lengths of the tests in a manner such that the weights will all be equal. The composite battery score would then simply be the total number of items answered correctly and scoring would be greatly simplified. Such simplification is particularly desirable where the volume of testing is large. Section I of the article outlines the procedure for altering the lengths of the tests, and Section II gives a proof of the method.

13.
This article proposes 2 new approaches to test a nonzero population correlation (ρ): the hypothesis-imposed univariate sampling bootstrap (HI) and the observed-imposed univariate sampling bootstrap (OI). The authors simulated correlated populations with various combinations of normal and skewed variates. With α = .05, N ≥ 10, and ρ ≤ 0.4, empirical Type I error rates of the parametric r and the conventional bivariate sampling bootstrap reached .168 and .081, respectively, whereas the largest error rates of the HI and the OI were .079 and .062. On the basis of these results, the authors suggest that the OI is preferable to parametric approaches in α control if the researcher believes the population is nonnormal and wishes to test for nonzero ρs of moderate size.
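A minimal sketch of the univariate-sampling idea underlying both bootstraps: resampling x and y independently destroys the pairing and so builds a null distribution with ρ = 0, whatever the shapes of the two marginals. The exact HI and OI constructions (how a nonzero hypothesized ρ is imposed) are not reproduced here, and the function name is an assumption:

```python
import numpy as np

def univariate_bootstrap_p(x, y, n_boot=2000, seed=1):
    """Two-sided p-value for r against a null of rho = 0, built by
    resampling the two marginals *independently* (univariate sampling)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r_obs = np.corrcoef(x, y)[0, 1]
    n = x.size
    null_r = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.choice(x, n)   # x and y drawn separately, with replacement:
        yb = rng.choice(y, n)   # the pairing (and hence rho) is destroyed
        null_r[i] = np.corrcoef(xb, yb)[0, 1]
    return r_obs, float(np.mean(np.abs(null_r) >= abs(r_obs)))
```

A strongly correlated sample should produce an observed r far out in the tails of this null distribution.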

14.
Observers completed perceptual categorization tasks that included separate base-rate/payoff manipulations, corresponding simultaneous base-rate/payoff manipulations, and conflicting simultaneous base-rate/payoff manipulations. Performance (1) was closer to optimal for 2:1 than for 3:1 base-rate/payoff ratios and when base rates as opposed to payoffs were manipulated, and (2) was more in line with the predictions from the flat-maxima hypothesis than from the independence assumption of the optimal classifier in corresponding and conflicting simultaneous base-rate/payoff conditions. A hybrid model that instantiated simultaneously the flat-maxima and the competition between reward and accuracy maximization (COBRA) hypotheses was applied to the data. The hybrid model was superior to a model that incorporated the independence assumption, suggesting that violations of the independence assumption are to be expected and are well captured by the flat-maxima hypothesis without requiring any additional assumptions. The parameters indicated that observers' reward-maximizing decision criterion rapidly approaches the optimal value and that more weight is placed on accuracy maximization in separate and corresponding simultaneous base-rate/payoff conditions than in conflicting simultaneous base-rate/payoff conditions.

15.
Decisions between multiple alternatives typically conform to Hick’s Law: Mean response time increases log-linearly with the number of choice alternatives. We recently demonstrated context effects in Hick’s Law, showing that patterns of response latency and choice accuracy were different for easy versus difficult blocks. The context effect explained previously observed discrepancies in error rate data and provided a new challenge for theoretical accounts of multialternative choice. In the present article, we propose a novel approach to modeling context effects that can be applied to any account that models the speed–accuracy trade-off. The core element of the approach is “optimality” in the way an experimental participant might define it: minimizing the total time spent in the experiment, without making too many errors. We show how this approach can be included in an existing Bayesian model of choice and highlight its ability to fit previous data as well as to predict novel empirical context effects. The model is shown to provide better quantitative fits than a more flexible heuristic account.
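Hick's Law itself is easy to state in code: mean RT = a + b·log₂(n), so each doubling of the number of alternatives adds the same increment. The intercept and slope below are illustrative values, not estimates from these experiments:

```python
import numpy as np

# Hick's Law: mean RT grows log-linearly in the number of alternatives n.
a, b = 0.20, 0.15                  # illustrative intercept (s) and slope (s/bit)
n = np.array([2, 4, 8, 16])
rt = a + b * np.log2(n)            # predicted mean response times
print(rt)                          # each doubling of n adds exactly b seconds
```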

16.
Formulas are derived for simplified computation of partial and multiple correlation coefficients, and generalized to n variables. Time required for computation is compared with other methods.
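The abstract does not give the formulas, but one standard shortcut computes both coefficients for any number of variables from a single inverse of the correlation matrix (the example matrix is made up for illustration):

```python
import numpy as np

R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
P = np.linalg.inv(R)

def partial_r(P, i, j):
    """Partial correlation of variables i and j, controlling for all others."""
    return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

def multiple_r2(P, i):
    """Squared multiple correlation of variable i on all the others."""
    return 1.0 - 1.0 / P[i, i]

print(partial_r(P, 0, 1), multiple_r2(P, 0))
```

With three variables these agree with the textbook bivariate formulas, e.g. r₀₁.₂ = (r₀₁ − r₀₂r₁₂)/√((1 − r₀₂²)(1 − r₁₂²)).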

17.
Under assumptions that will hold for the usual test situation, it is proved that test reliability and variance increase (a) as the average inter-item correlation increases, and (b) as the variance of the item difficulty distribution decreases. As the average item variance increases, the test variance will increase, but the test reliability will not be affected. (It is noted that as the average item variance increases, the average item difficulty approaches .50.) In this development, no account is taken of the effect of chance success, or the possible effect on student attitude of different item difficulty distributions. In order to maximize the reliability and variance of a test, the items should have high intercorrelations, all items should be of the same difficulty level, and the level should be as near to 50% as possible.
The desirability of determining this relationship has been indicated by previous writers. Work on the present paper arose out of some problems raised by Dr. Herbert S. Conrad in connection with an analysis of aptitude tests. On leave for Government war research from the Psychology Department, University of Chicago.
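Claim (a) can be illustrated with the standard relation between test length, average inter-item correlation, and reliability (standardized coefficient alpha, equivalently the Spearman-Brown form); the test length k and the r̄ values below are illustrative, not from the paper:

```python
def standardized_alpha(k, r_bar):
    """Reliability of a k-item test whose items all intercorrelate r_bar:
    alpha = k * r_bar / (1 + (k - 1) * r_bar)."""
    return k * r_bar / (1 + (k - 1) * r_bar)

# Reliability rises monotonically with the average inter-item correlation.
for r_bar in (0.1, 0.2, 0.3):
    print(r_bar, round(standardized_alpha(20, r_bar), 3))
```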

18.
Based on a classification of the dynamic forms of multiple sclerosis by prognostic and social criteria, and with regard to different degrees of deficit, the incidence of the five possible cerebrospinal fluid syndromes was examined for correlation in 345 cases. In moderate to severe grades of neurological disturbance and disease course, an immunoreactive cerebrospinal fluid syndrome, typically complete to incomplete, was found in about 40% of cases. The milder forms of the disease presented the whole spectrum of possible cerebrospinal fluid findings, among which syndromes with only minor deviations predominated. In the course of multiple sclerosis, changes in the cerebrospinal fluid constellation in the direction of regression were scarcely noted. An increase in pathological cerebrospinal fluid parameters appeared rather in the milder forms; in the severe progressive courses the syndromes proved comparatively constant.

19.
Solutions of the communality problem and of the problem of the meaning of common and unique factors have been shown previously to depend intimately on certain relations with ordinary multiple correlation. To make these basic propositions more accessible, simple proofs of some of them are provided here, avoiding any matrix algebra. New results are also obtained, with no extra work, that extend the previously known propositions to a more general class of coefficients than that of communalities. Revised from a paper written while on leave at the Center for Advanced Study in the Behavioral Sciences.

20.
Pigeons received equal variable-interval reinforcement during presentations of two line-orientation stimuli while five other orientations appeared in extinction. Component duration was 30 seconds for all orientations and the sequence was arranged so that each orientation preceded itself and each other orientation equally often. The duration of one component (0°) was shortened to 10 seconds and the other (90°) was lengthened to 50 seconds. All animals showed large increases in response rate in the shortened component and this increase was recoverable after an interpolated condition in which all components were again 30 seconds in duration. This effect was replicated in a second experiment in which component duration was changed from 150 seconds to 50 seconds and 250 seconds. An examination of local contrast effects during the first experiment showed that the shortened component produced local contrast during subsequent presentations of the lengthened component, just as would a component associated with more frequent reinforcement. When the presentation sequence was changed so that the lengthened component was always followed by the shortened component, response rates generally increased during the lengthened component. When the sequence was arranged so that the shortened component always preceded the longer component, response rate decreased in the former. These effects, as well as the increases in response rate following change in component length, seem not to be the product of local contrast effects among components.

