Similar Literature
20 similar articles found.
1.
A Monte Carlo approach is employed to determine whether certain variables produce systematic effects on the sampling variability of individual factor loadings. A number of sample correlation matrices were generated from a specified population, factored, and transformed to a least-squares fit to the population values. Influences of factor strength, communality, and loading size are discussed in relation to the statistics summarizing the results of the above procedures. Influences producing biased estimators of the population values are also discussed. This study was supported in part by NSF Grant GB 4230. Computing assistance was obtained from the Western Data Processing Center and Health Sciences Computing Facility, UCLA, sponsored by NIH Grant FR-3.
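The abstract does not include the simulation itself; a minimal sketch of the procedure it describes (sampling correlation matrices from a known population, extracting loadings, and rotating them to a least-squares, i.e. orthogonal Procrustes, fit to the population loadings) might look like this, with the population matrix, sample size, and replication count all illustrative rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def procrustes_fit(A, B):
    """Orthogonal rotation of loadings A to a least-squares fit to target B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)

# Illustrative two-factor population loading matrix for six variables.
L_pop = np.array([[.8, 0], [.7, 0], [.6, 0],
                  [0, .8], [0, .7], [0, .6]])
R_pop = L_pop @ L_pop.T
np.fill_diagonal(R_pop, 1.0)

n, reps = 200, 500
fits = []
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(6), R_pop, size=n)
    R = np.corrcoef(X, rowvar=False)          # sample correlation matrix
    w, V = np.linalg.eigh(R)
    top = np.argsort(w)[::-1][:2]             # two largest roots
    L_samp = V[:, top] * np.sqrt(w[top])      # principal-axis loadings
    fits.append(procrustes_fit(L_samp, L_pop))

fits = np.array(fits)
print("mean loading estimates:\n", fits.mean(axis=0))  # bias
print("loading SDs:\n", fits.std(axis=0))              # sampling variability
```

The means and standard deviations of the fitted loadings across replications are the kind of summary statistics the study relates to factor strength, communality, and loading size.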

2.
The method of oversampling data from a preselected range of a variable’s distribution is often applied by researchers who wish to study rare outcomes without substantially increasing sample size. Despite frequent use, however, it is not known whether this method introduces statistical bias due to disproportionate representation of a particular range of data. The present study employed simulated data sets to examine how oversampling introduces systematic bias in effect size estimates (of the relationship between oversampled predictor variables and the outcome variable), as compared with estimates based on a random sample. In general, results indicated that increased oversampling was associated with a decrease in the absolute value of effect size estimates. Critically, however, the actual magnitude of this decrease in effect size estimates was nominal. This finding thus provides the first evidence that the use of the oversampling method does not systematically bias results to a degree that would typically impact results in behavioral research. Examining the effect of sample size on oversampling yielded an additional important finding: For smaller samples, the use of oversampling may be necessary to avoid spuriously inflated effect sizes, which can arise when the number of predictor variables and rare outcomes is comparable.
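As a hedged illustration of the comparison the study makes (the true effect, cutoff, and sampling design below are invented, not the paper's), one can draw both a random sample and a sample that deliberately over-represents one range of the predictor, then compare the resulting effect-size estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative population with true r = .30 between predictor x and outcome y.
N = 100_000
x = rng.standard_normal(N)
y = 0.30 * x + np.sqrt(1 - 0.30**2) * rng.standard_normal(N)

n = 500
i = rng.choice(N, n, replace=False)            # random sample
r_random = np.corrcoef(x[i], y[i])[0, 1]

# Oversampled design: half the cases come from the top decile of x.
cut = np.quantile(x, 0.90)
tail, rest = np.flatnonzero(x > cut), np.flatnonzero(x <= cut)
j = np.concatenate([rng.choice(tail, n // 2, replace=False),
                    rng.choice(rest, n // 2, replace=False)])
r_over = np.corrcoef(x[j], y[j])[0, 1]
print(f"random r = {r_random:.3f}, oversampled r = {r_over:.3f}")
```

Repeating this over many replications gives the distribution of estimates whose bias the study's simulations quantify.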

3.
The performance of four rules for determining the number of components to retain (Kaiser's eigenvalue greater than unity, Cattell's SCREE, Bartlett's test, and Velicer's MAP) was investigated across four systematically varied factors (sample size, number of variables, number of components, and component saturation). Ten sample correlation matrices were generated from each of 48 known population correlation matrices representing the combinations of conditions. The performance of the SCREE and MAP rules was generally the best across all situations. Bartlett's test was generally adequate except when the number of variables was close to the sample size. Kaiser's rule tended to severely overestimate the number of components.
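Two of the four rules reduce cleanly to code. Here is a textbook-style sketch of Kaiser's eigenvalue-greater-than-unity rule and Velicer's minimum average partial (MAP) rule, taking a correlation matrix R as input; this is a standard rendering, not the study's own implementation:

```python
import numpy as np

def kaiser_rule(R):
    """Retain as many components as eigenvalues of R exceed 1."""
    return int(np.sum(np.linalg.eigvalsh(R) > 1.0))

def velicer_map(R):
    """Velicer's MAP: choose m minimizing the average squared partial
    correlation after partialling out the first m principal components."""
    p = R.shape[0]
    w, V = np.linalg.eigh(R)
    order = np.argsort(w)[::-1]
    L = V[:, order] * np.sqrt(w[order])       # component loadings
    avg_sq = []
    for m in range(p):                        # m = 0 leaves R untouched
        C = R - L[:, :m] @ L[:, :m].T         # partial covariances
        d = np.sqrt(np.diag(C))               # assumes a positive diagonal
        P = C / np.outer(d, d)                # partial correlations
        avg_sq.append(np.mean(P[~np.eye(p, dtype=bool)] ** 2))
    return int(np.argmin(avg_sq))
```

The SCREE test is a visual judgment on the ordered eigenvalues, and Bartlett's test is a sequence of chi-square tests on the residual roots, so neither reduces to a one-liner as neatly.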

4.
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes (N), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for N below 50. Simulations were carried out to estimate the minimum required N for different levels of loadings (λ), number of factors (f), and number of variables (p) and to examine the extent to which a small N solution can sustain the presence of small distortions such as interfactor correlations, model error, secondary loadings, unequal loadings, and unequal p/f. Factor recovery was assessed in terms of pattern congruence coefficients, factor score correlations, Heywood cases, and the gap size between eigenvalues. A subsampling study was also conducted on a psychological dataset of individuals who filled in a Big Five Inventory via the Internet. Results showed that when data are well conditioned (i.e., high λ, low f, high p), EFA can yield reliable results for N well below 50, even in the presence of small distortions. Such conditions may be uncommon but should certainly not be ruled out in behavioral research data.
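Pattern congruence, the first of the recovery criteria listed, is conventionally quantified with Tucker's coefficient; a minimal sketch follows (the ≥ .95 reading is a common convention in the methods literature, not a threshold stated in this abstract):

```python
import numpy as np

def tucker_congruence(x, y):
    """Tucker's coefficient of congruence between two loading vectors;
    values of about .95 or above are commonly read as good factor recovery."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2))
```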

5.
When multisource feedback instruments, for example, 360-degree feedback tools, are validated, multilevel structural equation models are the method of choice to quantify the amount of reliability as well as convergent and discriminant validity. A non-standard multilevel structural equation model that incorporates self-ratings (level-2 variables) and others’ ratings from different additional perspectives (level-1 variables), for example, peers and subordinates, has recently been presented. In a Monte Carlo simulation study, we determine the minimal required sample sizes for this model. Model parameters are accurately estimated even with the smallest simulated sample size of 100 self-ratings and two ratings of peers and of subordinates. The precise estimation of standard errors necessitates sample sizes of 400 self-ratings or at least four ratings of peers and subordinates. However, if sample sizes are smaller, mainly standard errors concerning common method factors are biased. Interestingly, there are trade-off effects between the sample sizes of self-ratings and others’ ratings in their effect on estimation bias. The degree of convergent and discriminant validity has no effect on the accuracy of model estimates. The χ2 test statistic does not follow the expected distribution. Therefore, we suggest using a corrected level-specific standardized root mean square residual to analyse model fit and conclude with further recommendations for applied organizational research.

6.
Norman Cliff, Psychometrika, 1970, 35(2), 163–178
Data are reported which show the statistical relation between the sample and population characteristic vectors of correlation matrices with squared multiple correlations as communality estimates. Sampling fluctuations were found to relate only to differences in the square roots of characteristic roots and to sample size. A principle for determining the number of factors to rotate and interpret after rotation is suggested. This study was supported by the National Science Foundation, Grant GB 4230. The author wishes to express his appreciation for the use of Western Data Processing Center and the Health Sciences Computing Facility, UCLA. He also thanks Dr. Roger Pennell for extremely valuable assistance in a number of phases of the study.
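The communality estimates used here, squared multiple correlations (SMCs), have a closed form in terms of the inverse correlation matrix, so the first step of the procedure can be sketched directly (function names are illustrative):

```python
import numpy as np

def smc(R):
    """Squared multiple correlation of each variable on the rest:
    SMC_i = 1 - 1 / (R^{-1})_{ii}. Assumes R is nonsingular."""
    return 1.0 - 1.0 / np.diag(np.linalg.inv(R))

def characteristic_roots_vectors(R):
    """Roots and vectors of R with SMCs replacing the unit diagonal."""
    R_h = R.copy()
    np.fill_diagonal(R_h, smc(R))
    w, V = np.linalg.eigh(R_h)
    order = np.argsort(w)[::-1]               # descending roots
    return w[order], V[:, order]
```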

7.
Although it is generally accepted that social risk factors predict delays in early cognitive and language development, there is less agreement about how to represent such associations statistically. Using data collected prospectively on 87 African American children during their first 4 years, this study examined 3 analytic methods for describing a child's level of social risk: (a) individual risk variables, (b) factor scores derived from those risk variables, and (c) a risk index computed by tallying the number of risk conditions present. Comparisons indicated that the individual-risk-variables approach provides better overall prediction of developmental outcomes at a particular age but is less useful in predicting developmental patterns. The risk-factor approach provides good prediction of developmental trajectories when sample sizes are moderate to large. Finally, the risk-index approach is useful for relating social risk to developmental patterns when a large number of risk variables are assessed with a small sample or when other constructs are of primary interest.
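The contrast among the three representations is easy to see in code. A sketch with invented indicator data follows; the first principal component stands in for the factor score here, which is an assumption rather than the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
risk = rng.integers(0, 2, size=(87, 6))    # 87 children, 6 binary risk conditions

# (a) individual risk variables: the columns of `risk`, used directly.

# (b) factor score, approximated by the first principal component.
Z = (risk - risk.mean(axis=0)) / risk.std(axis=0)   # assumes no constant column
w, V = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
factor_score = Z @ V[:, np.argmax(w)]

# (c) risk index: a simple tally of risk conditions present.
risk_index = risk.sum(axis=1)
```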

8.
This study used Monte Carlo simulation to examine how the classification accuracy index Entropy and its variants are affected by sample size, number of latent classes, class separation, number of indicators, and their combinations. The results showed that: (1) although Entropy values correlate highly with classification accuracy, they vary with the number of classes, sample size, and number of indicators, making it difficult to fix a single cutoff value; (2) other conditions being equal, the larger the sample size, the smaller the Entropy value and the poorer the classification accuracy; (3) the effect of class separation on classification accuracy is consistent across sample sizes and numbers of classes; (4) with small samples (N = 50–100), more indicators yield better Entropy results; (5) across all conditions, Entropy is more sensitive to classification error rates than its variants.
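The Entropy index being simulated has a standard closed form based on posterior class probabilities; a sketch of that usual definition follows (the paper's variants are not reproduced here):

```python
import numpy as np

def relative_entropy(P):
    """Entropy index for an N x K matrix of posterior class probabilities:
    E = 1 - sum_ik(-p_ik * ln p_ik) / (N * ln K); 1 means perfect separation."""
    N, K = P.shape
    safe = np.where(P > 0, P, 1.0)               # avoid log(0); term is 0 anyway
    terms = np.where(P > 0, -P * np.log(safe), 0.0)
    return 1.0 - terms.sum() / (N * np.log(K))
```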

9.
How do changes in choice-set size influence information search and subsequent decisions? Moreover, does information overload influence information processing with larger choice sets? We investigated these questions by letting people freely explore sets of gambles before choosing one of them, with the choice sets either increasing or decreasing in number for each participant (from two to 32 gambles). Set size influenced information search, with participants taking more samples overall, but sampling a smaller proportion of gambles and taking fewer samples per gamble, when set sizes were larger. The order of choice sets also influenced search, with participants sampling from more gambles and taking more samples overall if they started with smaller as opposed to larger choice sets. Inconsistent with information overload, information processing appeared consistent across set sizes and choice order conditions, reliably favoring gambles with higher sample means. Despite the lack of evidence for information overload, changes in information search did lead to systematic changes in choice: People who started with smaller choice sets were more likely to choose gambles with the highest expected values, but only for small set sizes. For large set sizes, the increase in total samples increased the likelihood of encountering rare events at the same time that the reduction in samples per gamble amplified the effect of these rare events when they occurred—what we call search-amplified risk. This led to riskier choices for individuals whose choices most closely followed the sample mean.
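A stripped-down version of the sampling paradigm, free sampling followed by choosing the gamble with the highest sample mean (the rule the processing data favored), can be sketched as follows; the gambles, search budget, and equal allocation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative two-outcome gambles: (high outcome, its probability, low outcome).
gambles = [(32, 0.10, 0), (4, 0.80, 0), (3, 1.00, 3)]

def draw(g, size):
    hi, p, lo = g
    return np.where(rng.random(size) < p, hi, lo)

def choose_by_sample_mean(gambles, budget):
    """Spread a fixed search budget evenly and pick the highest sample mean."""
    per = max(1, budget // len(gambles))
    return int(np.argmax([draw(g, per).mean() for g in gambles]))

# With few samples per gamble, a rare high outcome is usually missed, but
# when it does appear it dominates the small sample (search-amplified risk).
choices = [choose_by_sample_mean(gambles, 12) for _ in range(1000)]
print(np.bincount(choices, minlength=len(gambles)) / 1000)
```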

10.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators of each factor. Effects of practical significance due to sample size, the number of indicators per factor and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test. James Anderson is now at the J. L. Kellogg Graduate School of Management, Northwestern University. The authors gratefully acknowledge the comments and suggestions of Kenneth Land and the reviewers, and the assistance of A. Narayanan with the analysis. Support for this research was provided by the Graduate School of Business and the University Research Institute of the University of Texas at Austin.

11.
Asymptotic distributions of the estimators of communalities are derived for the maximum likelihood method in factor analysis. It is shown that the common practice of equating the asymptotic standard error of the communality estimate to the unique variance estimate is correct for standardized communality but not correct for unstandardized communality. In a Monte Carlo simulation the accuracy of the normal approximation to the distributions of the estimators is assessed when the sample size is 150 or 300. This study was carried out in part under the ISM Cooperative Research Program (90-ISM-CRP-9).

12.
In this paper, we study the effect of the elimination of items from a scale so that only those items that correlate highly are chosen. Using a simulation, we estimate the impact on Cronbach's alpha as a function of the total number of items in a scale, the number of items chosen, the true correlation among the items, and the sample size. The results suggest that a substantial effect can exist. Not surprisingly, the effect is larger when sample sizes are smaller, when a smaller fraction of the original items is retained, and when there is greater variation in the true item-total correlations of the measures.
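Both ingredients of this simulation are short formulas, so the selection mechanism is easy to reproduce. In the sketch below, "items that correlate highly" is operationalized as the highest corrected item-total correlations, which is one common convention and an assumption here; function names are hypothetical:

```python
import numpy as np

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def keep_top_items(X, m):
    """Retain the m items with the highest corrected item-total correlations."""
    r = [np.corrcoef(X[:, j], X.sum(axis=1) - X[:, j])[0, 1]
         for j in range(X.shape[1])]
    return X[:, np.argsort(r)[::-1][:m]]
```

Computing `cronbach_alpha(keep_top_items(X, m))` on the same sample used for selection capitalizes on chance, which is the inflation the simulation quantifies.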

13.
I examined Rorschach assessment of personality changes following psychotherapy. I conducted a comprehensive literature search to find all studies using the Rorschach method at least twice for the same participant in connection with psychotherapy. I conducted meta-analyses for 38 samples, and I performed regression analyses to identify moderating factors. Across all Rorschach scores, the total weighted sample effect size was r = .26, and nearly half the variables obtained effect sizes higher than .30. Several moderating factors were found. Most important, effect sizes increased with longer and more intensive therapy. More concern for interscorer reliability was associated with larger effect sizes, whereas a higher degree of scorer blinding was associated with smaller effect size magnitudes. Predicted levels of change based on the regression models indicated substantial increases in effect size with longer therapies. The data indicate that many elements in the Rorschach are valid indicators of change despite the poor reputation the method has acquired within psychotherapy research.
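The abstract does not state the pooling method behind the weighted effect size. One common convention for a sample-size-weighted mean correlation uses Fisher's z with inverse-variance weights, sketched here as an assumption rather than the paper's actual procedure:

```python
import numpy as np

def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation via Fisher's z transform."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    w = np.asarray(ns, dtype=float) - 3     # inverse variance of z is 1/(n - 3)
    return np.tanh(np.sum(w * z) / np.sum(w))
```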

14.
When sample information is combined, it is generally considered normative to weight information based on larger samples more heavily than information based on smaller samples. However, if samples appear likely to have been drawn from different subpopulations, it is reasonable to combine estimates of these subpopulation means (typically, the sample means) without weighting these estimates by sample size. This study investigated whether laypeople are influenced by the likelihood of samples coming from the same population when determining how to combine information. In two experiments we show that (1) implied binomial variability affected participants’ judgments of the likelihood that a sample was drawn from a given population, (2) participants' judgments were more affected by sample size when samples were implied to be drawn randomly from a general population, compared to when they were implied to be drawn from different subpopulations, and (3) people higher in numeracy gave more normative responses. We conclude that when determining how to weight and combine samples, laypeople use not only the provided data, but also information about likelihood and sampling processes that these data imply.
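The normative contrast at issue is simply weighted versus unweighted pooling of sample means; a two-sample arithmetic sketch (the numbers are invented):

```python
import numpy as np

means = np.array([0.60, 0.40])   # two sample means
ns    = np.array([100, 10])      # their sample sizes

# Same population: weight each estimate by its sample size.
pooled_weighted = np.sum(ns * means) / ns.sum()     # 0.582
# Distinct subpopulations: average the subpopulation estimates directly.
pooled_unweighted = means.mean()                    # 0.500
print(pooled_weighted, pooled_unweighted)
```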

15.
Principal covariate regression (PCOVR) is a method for regressing a set of criterion variables with respect to a set of predictor variables when the latter are many in number and/or collinear. This is done by extracting a limited number of components that simultaneously synthesize the predictor variables and predict the criterion ones. So far, no procedure has been offered for estimating statistical uncertainties of the obtained PCOVR parameter estimates. The present paper shows how this goal can be achieved, conditionally on the model specification, by means of the bootstrap approach. Four strategies for estimating bootstrap confidence intervals are derived and their statistical behaviour in terms of coverage is assessed by means of a simulation experiment. Such strategies are distinguished by the use of the varimax and quartimin procedures and by the use of Procrustes rotations of bootstrap solutions towards the sample solution. In general, the four strategies showed appropriate statistical behaviour, with coverage tending to the desired level for increasing sample sizes. The main exception involved strategies based on the quartimin procedure in cases characterized by complex underlying structures of the components. The appropriateness of the statistical behaviour was higher when the proper number of components was extracted.
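PCOVR itself is not reproduced here, but the percentile-bootstrap machinery the four strategies share can be sketched generically; the `estimator` callable is a placeholder standing in for a PCOVR fit followed by rotation of the bootstrap solution towards the sample solution:

```python
import numpy as np

rng = np.random.default_rng(4)

def percentile_bootstrap_ci(X, estimator, B=1000, level=0.95):
    """Element-wise percentile bootstrap confidence intervals for the
    (possibly matrix-valued) statistic estimator(X), resampling rows."""
    n = X.shape[0]
    boot = np.array([estimator(X[rng.integers(0, n, n)]) for _ in range(B)])
    a = (1 - level) / 2
    return np.quantile(boot, [a, 1 - a], axis=0)
```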

16.
An examination of the determinantal equation associated with Rao's canonical factors suggests that Guttman's best lower bound for the number of common factors corresponds to the number of positive canonical correlations when squared multiple correlations are used as the initial estimates of communality. When these initial communality estimates are used, solving Rao's determinantal equation (at the first stage) permits expressing several matrices as functions of factors that differ only in the scale of their columns; these matrices include the correlation matrix with units in the diagonal, the correlation matrix with squared multiple correlations as communality estimates, Guttman's image covariance matrix, and Guttman's anti-image covariance matrix. Further, the factor scores associated with these factors can be shown to be either identical or simply related by a scale change. Implications for practice are discussed, and a computing scheme which would lead to an exhaustive analysis of the data with several optional outputs is outlined.

17.
We investigated whether chimpanzees (Pan troglodytes) misperceived food portion sizes depending upon the context in which they were presented, something that often affects how much humans serve themselves and subsequently consume. Chimpanzees judged same-sized and smaller food portions to be larger in amount when presented on a small plate compared to an equal or larger food portion presented on a large plate and did so despite clearly being able to tell the difference in portions when plate size was identical. These results are consistent with data from the human literature in which people misperceive food portion sizes as a function of plate size. This misperception is attributed to the Delboeuf illusion which occurs when the size of a central item is misperceived on the basis of its surrounding context. These results demonstrate a cross-species shared visual misperception of portion size that affects choice behavior, here in a nonhuman species for which there is little experience with tests that involve choosing between food amounts on dinnerware. The biases resulting in this form of misperception of food portions appear to have a deep-rooted evolutionary history which we share with, at minimum, our closest living nonhuman relative, the chimpanzee.

18.
At least four approaches have been used to estimate communalities that will leave an observed correlation matrix R Gramian and with minimum rank. It has long been known that the square of the observed multiple-correlation coefficient is a lower bound to any communality of a variable of R. This lower bound actually provides a best possible estimate in several senses. Furthermore, under certain conditions basic to the Spearman-Thurstone common-factor theory, the bound must equal the communality in the limit as the number of observed variables increases. Otherwise, this type of theory cannot hold for R. This research was facilitated by a grant from the Lucius N. Littauer Foundation to the American Committee for Social Research in Israel in order to promote methodological work of the Israel Institute of Applied Social Research.
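In symbols, the bound discussed here can be written as follows (a standard statement of the result, supplied for clarity rather than quoted from the paper):

```latex
R_i^2 \;=\; 1 - \frac{1}{r^{ii}} \;\le\; h_i^2 ,
```

where r^{ii} is the i-th diagonal element of R^{-1}, R_i^2 is the squared multiple correlation of variable i on the remaining variables, and h_i^2 is its communality; under the stated Spearman-Thurstone conditions the inequality becomes an equality as the number of observed variables grows.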

19.
The Leiter-3 is a nonverbal assessment that evaluates cognitive abilities and has been adapted for use in Scandinavia. Generalizability of United States-based normative scoring for use with the Scandinavian population was evaluated. Leiter-3 scores from a sample of Scandinavian students were compared with scores obtained from the Leiter-3 standardization sample, controlling for confounding variables, across ages, using mixed-methods analysis. A Scandinavian-population-based sample was created from Leiter-3 standardization data, and norms constructed from it were used to generate standardized scores from the sample data. Results suggest that overall the Scandinavian test-takers score higher than American test-takers, but that differences between groups were minimized when controlling for factors that may influence cognitive performance. Creating Scandinavian-based scores was not effective at reducing gaps in performance, suggesting that differences in performance between the different populations may be attributable to factors other than those typically controlled for when constructing standardized tests. Implications of these results and recommendations for Leiter-3 adaptation are reviewed.

20.
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular text books are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although it should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
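The discrepancy described by the Monte Carlo results can be illustrated with a small sketch for simple regression, where the standardized coefficient equals the sample correlation and the textbook SE treats the sample standard deviations as fixed; the sample size, true coefficient, and replication count below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)

n, beta, B = 50, 0.5, 2000
x = rng.standard_normal(n)
y = beta * x + rng.standard_normal(n)

r = np.corrcoef(x, y)[0, 1]                   # standardized coefficient
se_textbook = np.sqrt((1 - r**2) / (n - 2))   # exact only when beta = 0

# Bootstrap SE with random predictors: resample (x, y) pairs.
idx = rng.integers(0, n, size=(B, n))
boot = [np.corrcoef(x[i], y[i])[0, 1] for i in idx]
print(f"textbook SE = {se_textbook:.4f}, bootstrap SE = {np.std(boot, ddof=1):.4f}")
```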
