Similar Articles
 20 similar articles found (search time: 15 ms)
1.
The statistical and structural characteristics of 13 matrices of random numbers, in which both the cells and the entries were randomly chosen, are discussed. Each matrix was explored considering row means, standard deviations, and correlations as well as column means, standard deviations, and correlations. A study of the sequential arrangement of digits was performed by counting, in tables of random numbers, how many times each of the values 0 to 9 is followed by any other digit. Analyses indicate clear factor structures when the correlations of rows and of columns are factor analyzed and when sequential arrangements are examined, leading to the conclusion that for a given set of digits it is possible to assert both randomness and nonrandomness depending on how the data are examined.
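A minimal sketch of the sequential-arrangement tally described above, assuming the counts are organized as a 10 × 10 transition matrix (digit i followed by digit j); the per-row chi-square check is an illustrative addition, not the authors' procedure.

import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)
digits = rng.integers(0, 10, size=10_000)        # stand-in for a table of random digits

# counts[i, j] = number of times digit i is immediately followed by digit j
counts = np.zeros((10, 10), dtype=int)
for current, nxt in zip(digits[:-1], digits[1:]):
    counts[current, nxt] += 1

# Under randomness each row should be roughly uniform over the 10 following digits
for d in range(10):
    stat, p = chisquare(counts[d])
    print(f"digit {d}: chi-square = {stat:.1f}, p = {p:.3f}")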

2.
This research study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. This proof shows that the d(n) and a(n) values remain applicable for those skewed distributions when the mean and standard deviation take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
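A minimal sketch of estimating the coefficient of variation from the range, assuming a d(n) constant taken from Rhiel's tables for the relevant distribution and sample size; the value used below is the classic normal-theory constant for n = 10 and serves only as a placeholder, and the a(n) bias adjustment is noted but not applied.

import numpy as np

def cv_high_low(data, d_n):
    """CV estimated from the range: (range / d_n) / mean.

    d_n is the standardized mean range for the assumed distribution and sample size,
    taken from Rhiel's tables; Rhiel's a(n) bias adjustment would additionally scale
    the range estimator and is omitted from this sketch.
    """
    data = np.asarray(data, dtype=float)
    sigma_hat = (data.max() - data.min()) / d_n   # range estimator of the standard deviation
    return sigma_hat / data.mean()

sample = np.random.default_rng(1).gamma(shape=2.0, scale=3.0, size=10)  # a skewed sample
print(cv_high_low(sample, d_n=3.078))   # 3.078 is the normal-theory d(n) for n = 10 (placeholder)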

3.
Approximate methods of solving for discriminant functions have been tried on three sets of data. The principal illustration is the problem of finding a weighted sum of scores, on four psychological tests, so that men and women may be distinguished most clearly. The work starts from the complete solution, due to R. A. Fisher, where it is necessary to solve as many simultaneous equations, dependent on the standard deviations of the tests and their mutual correlations, as there are tests. It is proposed, by way of numerical simplification, that a set of equations be substituted in which a single quantity replaces all the correlations. A solution is obtained where the weights to be assigned to the tests are very simply expressed in terms of differences between the mean values of tests, the standard deviations of tests, and the said quantity. The difficulty remains of finding an estimate of the arbitrary constant that will give good discrimination. If an optimal solution is computed, a result is obtained which, in the three sets of data considered, is almost indistinguishable from that yielded by the complete solution. The calculation of this optimal common quantity is, however, itself so considerable that another estimate, previously suggested by R. W. B. Jackson, appears more profitable. This estimate is derived simply from the variability between the total scores for each subject and the variability of each test. Using this estimate, the discriminant functions can be rapidly calculated; the results compare very favorably, in the case of the data considered, with those from the complete solution. (The present work was done while the writer was employed by the Ontario Department of Health.)
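The "complete solution" referred to above is Fisher's linear discriminant, whose weights solve a system of simultaneous equations built from the tests' standard deviations and mutual correlations (i.e., the pooled covariance matrix). A minimal sketch, with hypothetical score matrices X1 and X2 for the two groups:

import numpy as np

def fisher_discriminant_weights(X1, X2):
    """Weights w solving S w = (mean_1 - mean_2), with S the pooled covariance matrix."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    S = ((n1 - 1) * np.cov(X1, rowvar=False) + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    return np.linalg.solve(S, m1 - m2)

rng = np.random.default_rng(2)
X_men, X_women = rng.normal(size=(60, 4)), rng.normal(loc=0.5, size=(40, 4))  # four tests
print(fisher_discriminant_weights(X_men, X_women))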

4.
Many statistics packages print skewness and kurtosis statistics with estimates of their standard errors. The function most often used for the standard errors (e.g., in SPSS) assumes that the data are drawn from a normal distribution, an unlikely situation. Some textbooks suggest that if the statistic is more than about 2 standard errors from the hypothesized value (i.e., an approximate critical value from the t distribution for moderate or large sample sizes when α = 5%), the hypothesized value can be rejected. This is an inappropriate practice unless the standard error estimate is accurate and the sampling distribution is approximately normal. We show distributions where the traditional standard errors provided by the function underestimate the actual values, often being 5 times too small, and distributions where the function overestimates the true values. Bootstrap standard errors and confidence intervals are more accurate than the traditional approach, although still imperfect. The reasons for this are discussed. We recommend that if you are using skewness and kurtosis statistics based on the 3rd and 4th moments, bootstrapping should be used to calculate standard errors and confidence intervals, rather than the traditional formulas. Software written in the freeware R for this article provides these estimates.
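The article's software is written in R; the sketch below shows the same bootstrap idea in Python (resample the data with replacement, recompute the statistic, and read the standard error and percentile interval off the bootstrap distribution). The sample data and number of replicates are illustrative.

import numpy as np
from scipy.stats import skew, kurtosis

def bootstrap_se_ci(x, stat, n_boot=5000, alpha=0.05, seed=0):
    """Bootstrap standard error and percentile confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    boots = np.array([stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)])
    se = boots.std(ddof=1)
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return se, (lo, hi)

x = np.random.default_rng(42).exponential(size=200)   # a skewed sample
print(bootstrap_se_ci(x, skew))                       # 3rd-moment skewness
print(bootstrap_se_ci(x, kurtosis))                   # 4th-moment (excess) kurtosis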

5.
Global memory models are evaluated by using data from recognition memory experiments. For recognition, each of the models gives a value of familiarity as the output from matching a test item against memory. The experiments provide ROC (receiver operating characteristic) curves that give information about the standard deviations of familiarity values for old and new test items in the models. The experimental results are consistent with normal distributions of familiarity (a prediction of the models). However, the results also show that the new-item familiarity standard deviation is about 0.8 that of the old-item familiarity standard deviation and independent of the strength of the old items (under the assumption of normality). The models are inconsistent with these results because they predict either nearly equal old and new standard deviations or increasing values of old standard deviation with strength. Thus, the data provide the basis for revision of current models or development of new models.
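Under the signal detection framing used above, the ratio of the new-item to old-item familiarity standard deviations can be read off the slope of the z-transformed ROC (z(hit rate) against z(false-alarm rate)). A minimal sketch with made-up cumulative hit and false-alarm rates across confidence criteria:

import numpy as np
from scipy.stats import norm, linregress

hits = np.array([0.95, 0.88, 0.75, 0.55, 0.30])   # hypothetical cumulative hit rates
fas  = np.array([0.80, 0.62, 0.42, 0.22, 0.08])   # hypothetical cumulative false-alarm rates

z_h, z_f = norm.ppf(hits), norm.ppf(fas)
slope = linregress(z_f, z_h).slope                # zROC slope ≈ sigma_new / sigma_old
print(f"zROC slope: {slope:.2f}")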

6.
The sampling properties of four item discrimination indices (biserial r, Cook's index B, the U–L 27 per cent index, and Delta P) were investigated when small samples drawn from actual test data, rather than constructed data, were employed. The empirical results indicated that the mean index values approximated the population values and that values of the standard deviations computed from large-sample formulas were good approximations to the standard deviations of the observed distributions based on samples of size 120 or less. Goodness-of-fit tests comparing the observed distributions with the corresponding distribution of the product-moment correlation coefficient based upon a bivariate normal population indicated that this correlational model was inappropriate for the data. The lack of adequate mathematical models for the sampling distributions of item discrimination indices suggests that one should avoid indices whose only real reason for existence was the simplification of computational procedures. (The research reported herein was performed pursuant to a contract (OE-2-10-071) with the United States Office of Education, Department of Health, Education and Welfare.)
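A minimal sketch of one of the four indices, the U–L 27 per cent index: the difference in the proportion answering an item correctly between the top and bottom 27% of examinees ranked on total score (variable names are hypothetical).

import numpy as np

def ul_27_index(item_correct, total_scores):
    """U-L 27% discrimination index: p(upper 27%) - p(lower 27%)."""
    item_correct = np.asarray(item_correct)      # 0/1 per examinee for one item
    total_scores = np.asarray(total_scores)
    k = max(1, int(round(0.27 * len(total_scores))))
    order = np.argsort(total_scores)
    lower, upper = order[:k], order[-k:]
    return item_correct[upper].mean() - item_correct[lower].mean()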

7.
Three methods for fitting the diffusion model (Ratcliff, 1978) to experimental data are examined. Sets of simulated data were generated with known parameter values, and from fits of the model, we found that the maximum likelihood method was better than the chi-square and weighted least squares methods by criteria of bias in the parameters relative to the parameter values used to generate the data and standard deviations in the parameter estimates. The standard deviations in the parameter values can be used as measures of the variability in parameter estimates from fits to experimental data. We introduced contaminant reaction times and variability into the other components of processing besides the decision process and found that the maximum likelihood and chi-square methods failed, sometimes dramatically. But the weighted least squares method was robust to these two factors. We then present results from modifications of the maximum likelihood and chi-square methods, in which these factors are explicitly modeled, and show that the parameter values of the diffusion model are recovered well. We argue that explicit modeling is an important method for addressing contaminants and variability in nondecision processes and that it can be applied in any theoretical approach to modeling reaction time.

8.
The use of hierarchical data (also called multilevel data or clustered data) is common in behavioural and psychological research when data of lower-level units (e.g., students, clients, repeated measures) are nested within clusters or higher-level units (e.g., classes, hospitals, individuals). Over the past 25 years we have seen great advances in methods for computing the sample sizes needed to obtain the desired statistical properties for such data in experimental evaluations. The present research provides closed-form and iterative formulas for sample size determination that can be used to ensure the desired width of confidence intervals for hierarchical data. Formulas are provided for a four-level hierarchical linear model that assumes slope variances and inclusion of covariates under both balanced and unbalanced designs. In addition, we address several mathematical properties relating to sample size determination for hierarchical data via the standard errors of experimental effect estimates. These include the relative impact of several indices (e.g., random intercept or slope variance at each level) on standard errors, asymptotic standard errors, minimum required values at the highest level, and generalized expressions of standard errors for designs with any-level randomization under any number of levels. In particular, information on the minimum required values will help researchers to minimize the risk of conducting experiments that are statistically unlikely to show the presence of an experimental effect.

9.
Explanations of grouping in immediate ordered recall
This article is about grouping in immediate ordered recall. The following findings are reported: (1) grouping a presentation improves recall, even when steps are taken to prevent rehearsal; (2) grouping primarily improves recall of the items adjoining the grouping, creating primacy and recency within groups; and (3) this primacy and recency are found even when single, isolated errors in recall are considered. These results suggest that the effects of grouping cannot be fully explained by rehearsal, chunking, or the number of directions in which an item can be transposed. It is suggested instead that (1) the auditory short-term store contains an unparsed and uncategorized representation that must be parsed and categorized just prior to recall, in a process of recovery; (2) items adjoining the boundary of a presentation are more easily recovered; and (3) grouping creates a boundary within the presentation. To support this explanation, a final experiment demonstrates an interaction between type of stimuli and serial position, with grouping most improving recall of adjoining phonemes.

10.
G. C. Baylis, J. Driver, P. McLeod. Perception, 1992, 21(2): 201-218
A relatively frequent error when reporting brief visual displays is to combine presented features incorrectly. It has been proposed that Gestalt grouping constrains such errors so that miscombined features tend to come from the same perceptual group. Three experiments examined whether this principle applies to grouping by motion and to grouping by proximity. Miscombinations of colour and form were more likely to consist of a colour and a form that had moved in the same direction than of features which had moved in opposite directions. Miscombinations were also more likely for adjacent items. The implications of these results for the mechanisms of feature integration are discussed.

11.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites as well as a partial least squares approach to SEM facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions even when the population distribution is multivariate normal. Results also show that the LS regression coefficients become more efficient when considering the sampling errors in the weights of composites than those that are conditional on weights.

12.
Tucker, L. R. Psychometrika, 1948, 13(4): 245-250
Outlined is the method used at present by the Educational Testing Service for computing intercorrelations from basic summations. This procedure is adapted to the use of high-speed calculators in performing the calculations, and much of its value lies in the complete system of checks that is a part of the method. Besides the correlations that are the object of the procedure, covariances, means, standard deviations, and the number of cases are also recorded on the completed form to be available for further statistical steps.
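The "basic summations" for a pair of variables are n, Σx, Σy, Σx², Σy², and Σxy; every statistic recorded on the form follows from them. A minimal sketch of that arithmetic (not ETS's original worksheet or its system of checks):

import math

def stats_from_summations(n, sx, sy, sxx, syy, sxy):
    """Means, standard deviations, covariance, and correlation from basic summations."""
    mean_x, mean_y = sx / n, sy / n
    cov   = (sxy - n * mean_x * mean_y) / (n - 1)
    var_x = (sxx - n * mean_x ** 2) / (n - 1)
    var_y = (syy - n * mean_y ** 2) / (n - 1)
    sd_x, sd_y = math.sqrt(var_x), math.sqrt(var_y)
    return mean_x, mean_y, sd_x, sd_y, cov, cov / (sd_x * sd_y)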

13.
This article uses Monte Carlo techniques to examine the effect of heterogeneity of variance in multilevel analyses in terms of relative bias, coverage probability, and root mean square error (RMSE). For all simulated data sets, the parameters were estimated using the restricted maximum-likelihood (REML) method both assuming homogeneity and incorporating heterogeneity into multilevel models. We find that (a) the estimates for the fixed parameters are unbiased, but the associated standard errors are frequently biased when heterogeneity is ignored; by contrast, the standard errors of the fixed effects are almost always accurate when heterogeneity is considered; (b) the estimates for the random parameters are slightly overestimated; (c) both the homogeneous and heterogeneous models produce standard errors of the variance component estimates that are underestimated; however, taking heterogeneity into account, the REML-estimations give correct estimates of the standard errors at the lowest level and lead to less underestimated standard errors at the highest level; and (d) from the RMSE point of view, REML accounting for heterogeneity outperforms REML assuming homogeneity; a considerable improvement has been particularly detected for the fixed parameters. Based on this, we conclude that the solution presented can be uniformly adopted. We illustrate the process using a real dataset.

14.
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable. The EFA model is specified for these underlying continuous variables rather than the observed ordinal variables. Although these underlying continuous variables are not observed directly, their correlations can be estimated from the ordinal variables. These correlations are referred to as polychoric correlations. This article is concerned with ordinary least squares (OLS) estimation of parameters in EFA with polychoric correlations. Standard errors and confidence intervals for rotated factor loadings and factor correlations are presented. OLS estimates and the associated standard error estimates and confidence intervals are illustrated using personality trait ratings from 228 college students. Statistical properties of the proposed procedure are explored using a Monte Carlo study. The empirical illustration and the Monte Carlo study showed that (a) OLS estimation of EFA is feasible with large models, (b) point estimates of rotated factor loadings are unbiased, (c) point estimates of factor correlations are slightly negatively biased with small samples, and (d) standard error estimates and confidence intervals perform satisfactorily at moderately large samples.

15.
Grouping and short-term memory: Different means and patterns of grouping
Two experiments concerning the beneficial effects of grouping on auditory short-term memory are described. In the first, temporal grouping was found to improve recall considerably, but non-temporal grouping had a much smaller effect. Temporal grouping reduced order errors more than other errors; it also changed the pattern of the order errors. Further, it altered the shape of the serial position curve of all errors. In the second experiment, irregular patterns of temporal grouping were found to be inferior to a regular pattern. The results are discussed in terms of the time available for processing previous items during the presentation of a sequence, and the form that this processing may take.

16.
In many situations, researchers collect multilevel (clustered or nested) data yet analyze the data either ignoring the clustering (disaggregation) or averaging the micro-level units within each cluster and analyzing the aggregated data at the macro level (aggregation). In this study we investigate the effects of ignoring the nested nature of data in confirmatory factor analysis (CFA). The bias incurred by ignoring clustering is examined in terms of model fit and standardized parameter estimates, which are usually of interest to researchers who use CFA. We find that the disaggregation approach increases model misfit, especially when the intraclass correlation (ICC) is high, whereas the aggregation approach results in accurate detection of model misfit in the macro level. Standardized parameter estimates from the disaggregation and aggregation approaches are deviated toward the values of the macro- and micro-level standardized parameter estimates, respectively. The degree of deviation depends on ICC and cluster size, particularly for the aggregation method. The standard errors of standardized parameter estimates from the disaggregation approach depend on the macro-level item communalities. Those from the aggregation approach underestimate the standard errors in multilevel CFA (MCFA), especially when ICC is low. Thus, we conclude that MCFA or an alternative approach should be used if possible.

17.
In the vast majority of psychological research utilizing multiple regression analysis, asymptotic probability values are reported. This paper demonstrates that asymptotic estimates of standard errors provided by multiple regression are not always accurate. A resampling permutation procedure is used to estimate the standard errors. In some cases the results differ substantially from the traditional least squares regression estimates.
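A minimal sketch of the resampling idea: re-estimate the regression on many resampled data sets and take the empirical standard deviation of each coefficient as its standard error. A simple case-resampling bootstrap is shown; the article's specific permutation procedure may differ.

import numpy as np

def resampled_coef_se(X, y, n_resamples=5000, seed=0):
    """Empirical standard errors of OLS coefficients via case resampling."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])             # add an intercept column
    coefs = np.empty((n_resamples, X1.shape[1]))
    for b in range(n_resamples):
        idx = rng.integers(0, n, size=n)
        coefs[b] = np.linalg.lstsq(X1[idx], y[idx], rcond=None)[0]
    return coefs.std(axis=0, ddof=1)                  # compare with the asymptotic OLS standard errors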

18.
Relationships between the results of factor analysis and component analysis are derived when oblique factors have independent clusters with equal variances of unique factors. The factor loadings are analytically shown to be smaller than the corresponding component loadings while the factor correlations are shown to be greater than the corresponding component correlations. The condition for the inequality of the factor/component contributions is derived in the case with different variances for unique factors. Further, the asymptotic standard errors of parameter estimates are obtained for a simplified model with the assumption of multivariate normality, which shows that the component loading estimate is more stable than the corresponding factor loading estimate.

19.
Range restriction corrections require the predictor standard deviation in the applicant pool of interest. Unfortunately, this information is frequently not available in applied contexts. The common strategy in this type of situation is to use national-norm standard deviation estimates. This study used data from 8,276 applicants applying to nine jobs in German governmental organizations to compare applicant pool standard deviations for two cognitive ability tests with national-norm standard deviation estimates, and standard deviations for the total group of governmental applicants. Results revealed that job- and organizational context-specific applicant pool standard deviations were on average about 10–12% smaller than estimates from national norms, and about 4–6% smaller than standard deviations for the total group of governmental applicants.
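Why the applicant-pool standard deviation matters: the usual Thorndike Case II correction for direct range restriction scales the restricted validity by the ratio of unrestricted to restricted predictor standard deviations, so an inflated national-norm SD inflates the corrected correlation. A minimal sketch with illustrative numbers:

import math

def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction."""
    u = sd_unrestricted / sd_restricted
    return r_restricted * u / math.sqrt(1 + r_restricted ** 2 * (u ** 2 - 1))

# Same observed validity corrected with an applicant-pool SD vs. a ~11% larger national-norm SD
print(correct_range_restriction(0.30, sd_unrestricted=9.0,  sd_restricted=6.0))
print(correct_range_restriction(0.30, sd_unrestricted=10.0, sd_restricted=6.0))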

20.
Formulas for the standard error of measurement of three measures of change—simple difference scores, residualized difference scores, and the measure introduced by Tucker, Damarin, and Messick—are derived. Equating these formulas by pairs yields additional explicit formulas which provide a practical guide for determining the relative error of the three measures in any pretest-posttest design. The functional relationship between the standard error of measurement and the correlation between pretest and posttest observed scores remains essentially the same for each of the three measures despite variations in other test parameters (reliability coefficients, standard deviations), even when pretest and posttest errors of measurement are correlated.
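For the simplest of the three measures, the simple difference score D = Y − X, the standard error of measurement takes the familiar classical-test-theory form below (a sketch assuming uncorrelated pretest and posttest errors of measurement; the article also treats the correlated-error case and the other two measures):

\mathrm{SE}_D \;=\; \sqrt{\mathrm{SE}_X^{2} + \mathrm{SE}_Y^{2}} \;=\; \sqrt{s_X^{2}\,(1 - r_{XX}) + s_Y^{2}\,(1 - r_{YY})}

where s_X and s_Y are the pretest and posttest standard deviations and r_{XX} and r_{YY} their reliability coefficients.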
