Similar Articles
20 similar articles found (search time: 62 ms)
1.
Valid use of the traditional independent samples ANOVA procedure requires that the population variances are equal. Previous research has investigated whether variance homogeneity tests, such as Levene's test, are satisfactory as gatekeepers for identifying when to use or not to use the ANOVA procedure. This research focuses on a novel homogeneity of variance test that incorporates an equivalence testing approach. Instead of testing the null hypothesis that the variances are equal against an alternative hypothesis that the variances are not equal, the equivalence-based test evaluates the null hypothesis that the difference in the variances falls outside or on the border of a predetermined interval against an alternative hypothesis that the difference in the variances falls within the predetermined interval. Thus, with the equivalence-based procedure, the alternative hypothesis is aligned with the research hypothesis (variance equality). A simulation study demonstrated that the equivalence-based test of population variance homogeneity is a better gatekeeper for the ANOVA than traditional homogeneity of variance tests.
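The gatekeeping logic in this entry can be illustrated with a minimal sketch, not the authors' actual procedure: an equivalence decision via the confidence-interval-inclusion form of two one-sided tests (TOST), using a percentile bootstrap interval for the variance ratio. The equivalence bound `delta = 2.0` and all sample sizes are arbitrary illustrative choices.

```python
import random
import statistics

def bootstrap_ci(x, y, n_boot=1000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the variance ratio var(x)/var(y)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        bx = [rng.choice(x) for _ in x]
        by = [rng.choice(y) for _ in y]
        ratios.append(statistics.variance(bx) / statistics.variance(by))
    ratios.sort()
    return ratios[int(n_boot * alpha)], ratios[int(n_boot * (1 - alpha)) - 1]

def equivalent_variances(x, y, delta=2.0, alpha=0.05):
    """Equivalence decision via CI inclusion (the TOST idea): declare the
    variances equivalent only if the whole interval for the ratio sits
    inside (1/delta, delta). This flips the burden of proof relative to
    Levene-style tests, where failing to reject is taken as 'equal'."""
    lo, hi = bootstrap_ci(x, y, alpha=alpha)
    return lo > 1.0 / delta and hi < delta

rng = random.Random(7)
a = [rng.gauss(0, 1) for _ in range(500)]
b = [rng.gauss(0, 1) for _ in range(500)]  # same population variance as a
c = [rng.gauss(0, 3) for _ in range(500)]  # nine times the variance of a

print(equivalent_variances(a, b))
print(equivalent_variances(a, c))
```

Note that, unlike a traditional test, collecting more data here makes it *easier* to demonstrate equivalence when the variances really are close.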

2.
黎光明  张敏强 《心理科学》2013,36(1):203-209
方差分量估计是概化理论的必用技术,但受限于抽样,需要对其变异量进行探讨。采用Monte Carlo数据模拟技术,探讨非正态数据分布对四种方法估计概化理论方差分量变异量的影响。结果表明:(1)不同非正态数据分布下,各种估计方法的“性能”表现出差异性;(2)数据分布对方差分量变异量估计有影响,适合于非正态分布数据的方差分量变异量估计方法不一定适合于正态分布数据。  相似文献   

3.
In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely—but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, a phenomenon called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.
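The contrast between the two poles can be sketched with toy stand-ins (illustrative assumptions, not the paper's models): a prototype classifier that keeps only each category's mean, and an exemplar classifier that memorizes every training point (1-nearest neighbour).

```python
import math
import random

def prototype_fit(X, y):
    """High-bias model: represent each category by its mean (prototype)."""
    protos = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        protos[label] = [sum(c) / len(c) for c in zip(*pts)]
    return lambda x: min(protos, key=lambda l: math.dist(x, protos[l]))

def exemplar_fit(X, y):
    """High-variance model: 1-nearest-neighbour over all stored exemplars."""
    return lambda x: y[min(range(len(X)), key=lambda i: math.dist(x, X[i]))]

rng = random.Random(0)
def sample(mu, n):
    return [(rng.gauss(mu[0], 1.0), rng.gauss(mu[1], 1.0)) for _ in range(n)]

# Simple, unimodal categories: two Gaussian clusters, the regime where a
# prototype representation is adequate. For multimodal categories the
# exemplar model's flexibility would pay off instead.
train = sample((0, 0), 30) + sample((3, 3), 30)
labels = [0] * 30 + [1] * 30
test = sample((0, 0), 200) + sample((3, 3), 200)
truth = [0] * 200 + [1] * 200

for name, fit in [("prototype", prototype_fit), ("exemplar", exemplar_fit)]:
    clf = fit(train, labels)
    acc = sum(clf(x) == t for x, t in zip(test, truth)) / len(test)
    print(name, round(acc, 3))
```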

4.
Problem: The association between personality and traffic accident involvement has been extensively researched, but the literature is difficult to summarise, because different personality instruments and statistics have been used, and effect sizes differ strongly between studies. Method: A meta-analysis was undertaken of studies which had used personality measures that could be converted into Big Five dimensions, with traffic accidents as the dependent variable. Analysis: Outlier values were identified and removed. Analyses of the effects of common method variance, type of instrument, dissemination bias and restriction of variance were also undertaken. Results: Outlier problems exist in these data, which prohibit any certainty in the conclusions. Each of the five personality dimensions was a predictor of accident involvement, but the effects were small (r < .1), much weaker than in a previous meta-analysis. Effect sizes were dependent upon variance in the accident variable, and the true (population) effects could therefore be larger than the present estimates, something which could be ascertained by new studies using high-risk samples over longer time periods. Newer studies and those using Big Five instruments tended to have smaller effects. No effects of common method variance could be found. Conclusions: Tests of personality are weak predictors of traffic accident involvement compared to other variables, such as previous accidents. Research is needed into whether larger effects of personality can be found with methods other than self-reports.

5.
Abstract: It is often necessary to predict scores, or the variation in scores, of interest. Ishii and Watanabe (2001) investigated, in the context of psychological measurement, the Bayesian predictive distribution of a new subject's scores on existing tests and of existing subjects' scores on a new test. In this paper, the Bayesian posterior predictive distribution of a new subject's score on a new parallel test is considered, and the effects of the number of subjects, the number of tests, and the test reliability are investigated. It was found that, under the assumption that the (co)variance parameters are known, the predictive variance of a new subject's score on a new test equals the predictive variance of that subject's scores on the existing tests. It was also found that, when a new subject's scores on existing tests are not observed, the effect of the number of subjects is relatively large and the effect of the number of tests relatively small.

6.
For any given number of factors, Minimum Rank Factor Analysis (MRFA) yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the observed covariance matrix minus that diagonal matrix are positive semidefinite. As a result, it becomes possible to distinguish the explained common variance from the total common variance. The percentage of explained common variance is similar in meaning to the percentage of explained observed variance in Principal Component Analysis, but typically the former is much closer to 100 than the latter. So far, no statistical theory of MRFA has been developed. The present paper is a first step. It yields closed-form expressions for the asymptotic bias of the explained common variance, or, more precisely, of the unexplained common variance, under the assumption of multivariate normality. The asymptotic variance of this bias is derived, as is the asymptotic covariance matrix of the unique variances that define an MRFA solution. The presented asymptotic statistical inference is based on a recently developed perturbation theory of semidefinite programming. A numerical example is also offered to demonstrate the accuracy of the expressions. This work was supported, in part, by grant DMS-0073770 from the National Science Foundation.

7.
Common method variance (CMV) is systematic variance induced by similar measurement-method features shared across constructs; it can distort relations among constructs and produce common method bias. Over the past 60 years this problem has been raised repeatedly in social science research, yet whether it seriously threatens research validity remains unsettled. Empirical evidence shows that CMV is pervasive and that factors such as data source, measurement occasion, and questionnaire design can induce common method bias, leaving cross-sectional self-report survey research open to criticism. Some scholars, however, have responded in its defense from perspectives such as measurement error and the constraining role of non-method variance, arguing that excessive worry is unnecessary. A new measurement-centered perspective emphasizes that CMV is the product of an interaction between the measurement method and the measured construct, and that CMV risk should therefore be evaluated along both the method and the construct dimensions. Researchers are advised to adopt a balanced, unbiased attitude, accept the existence of CMV, correct prejudices against self-report measures, and focus on proactive prevention through improved research design.

8.
A general one-way analysis of variance components with unequal replication numbers is used to provide unbiased estimates of the true and error score variance of classical test theory. The inadequacy of the ANOVA theory is noted and the foundations for a Bayesian approach are detailed. The choice of prior distribution is discussed and a justification for the Tiao-Tan prior is found in the particular context of the "n-split" technique. The posterior distributions of reliability, error score variance, observed score variance and true score variance are presented with some extensions of the original work of Tiao and Tan. Special attention is given to simple approximations that are available in important cases and also to the problems that arise when the ANOVA estimate of true score variance is negative. Bayesian methods derived by Box and Tiao and by Lindley are studied numerically in relation to the problem of estimating true score. Each is found to be useful and the advantages and disadvantages of each are discussed and related to the classical test-theoretic methods. Finally, some general relationships between Bayesian inference and classical test theory are discussed. Supported in part by the National Institute of Child Health and Human Development under Research Grant 1 PO1 HDO1762. Reproduction, translation, use or disposal by or for the United States Government is permitted.

9.
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
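A minimal grid-based sketch of such a procedure follows. The logistic psychometric function on log intensity and all parameter values are illustrative assumptions, not taken from the article.

```python
import math
import random

def psychometric(log_x, threshold, slope=3.0, guess=0.5, lapse=0.02):
    """P(correct) as a function of log intensity; the function's form is
    assumed invariant on the log axis (logistic shape as a stand-in)."""
    p = 1.0 / (1.0 + math.exp(-slope * (log_x - threshold)))
    return guess + (1.0 - guess - lapse) * p

# Discrete grid of candidate log-thresholds with a flat prior.
grid = [i / 100.0 for i in range(-200, 201)]
posterior = [1.0] * len(grid)

rng = random.Random(3)
true_threshold = 0.4  # the simulated observer's actual log-threshold

for trial in range(200):
    # Place the trial at the current posterior mode (the placement rule).
    mode = grid[max(range(len(grid)), key=lambda i: posterior[i])]
    correct = rng.random() < psychometric(mode, true_threshold)
    # Bayes update: multiply in the Bernoulli likelihood at each grid point.
    for i, th in enumerate(grid):
        p = psychometric(mode, th)
        posterior[i] *= p if correct else (1.0 - p)
    total = sum(posterior)
    posterior = [w / total for w in posterior]

estimate = grid[max(range(len(grid)), key=lambda i: posterior[i])]
print(round(estimate, 2))
```

Because every trial is spent near the current best guess, the posterior sharpens quickly around the simulated observer's threshold.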

10.
The most consistently replicated findings on gender differences in cognitive performance suggest female superiority in visuomotor speed and language ability and male superiority on mechanical and visuospatial tasks. Generally, group strengths found in the early school years become more established at adolescence and remain stable through adulthood. The current study tested whether the patterns established in the early years remained among 30 adult subjects. We also used a series of exploratory analyses to determine whether observed gender differences were affected by the covariance present between all cognitive tests. Results suggest that although the patterns established in the early years remain stable through time for males, the established patterns for females are altered with age. Our findings are compelling in supporting a male advantage on visuospatial tasks among older adults. These findings are discussed in terms of common variance between test instruments as a possible source of difference. Our finding that the gender effect tended to increase when common variance was controlled suggests that this methodology may enhance the ability to detect domain-specific effects.

11.
Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
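The "variance explained as average R-squared" link can be checked numerically in the two-variable case, where the correlation matrix has closed-form eigenvalues 1 ± r. This is a sketch under the assumption of standardized variables; the data-generating numbers are illustrative.

```python
import math
import random

def mean(v):
    return sum(v) / len(v)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def standardize(v):
    m, n = mean(v), len(v)
    s = math.sqrt(sum((a - m) ** 2 for a in v) / (n - 1))
    return [(a - m) / s for a in v]

# Two standardized variables with correlation r: the 2x2 correlation
# matrix has eigenvalues 1 + r and 1 - r, so PC1 carries (1 + r) / 2
# of the total variance.
rng = random.Random(42)
z = [rng.gauss(0, 1) for _ in range(5000)]
x = [zi + 0.6 * rng.gauss(0, 1) for zi in z]
y = [zi + 0.6 * rng.gauss(0, 1) for zi in z]

r = pearson(x, y)
pc1_share = (1 + r) / 2

# The same quantity as an average R-squared: regress each standardized
# variable on the PC1 scores (for two variables, PC1 is proportional
# to zx + zy).
zx, zy = standardize(x), standardize(y)
pc1 = [a + b for a, b in zip(zx, zy)]
avg_r2 = (pearson(zx, pc1) ** 2 + pearson(zy, pc1) ** 2) / 2

print(round(pc1_share, 4), round(avg_r2, 4))
```

The two printed numbers agree to floating-point precision, which is exactly the equivalence the proposed motivation exploits.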

12.
A Bayesian procedure is given for estimation in unrestricted common factor analysis. A choice of the form of the prior distribution is justified. It is shown empirically that the procedure achieves its objective of avoiding inadmissible estimates of unique variances, and is reasonably insensitive to certain variations in the shape of the prior distribution.  相似文献   

13.
Klotzke  Konrad  Fox  Jean-Paul 《Psychometrika》2019,84(3):649-672

A multivariate generalization of the log-normal model for response times is proposed within an innovative Bayesian modeling framework. A novel Bayesian Covariance Structure Model (BCSM) is proposed, in which the inclusion of random-effect variables is avoided and their implied dependencies are modeled directly through an additive covariance structure. This makes it possible to jointly model complex dependencies due to, for instance, the test format (e.g., testlets, complex constructs), time limits, or features of digitally based assessments. A class of conjugate priors is proposed for the random-effect variance parameters in the BCSM framework. They support testing for the presence of random effects, reduce boundary effects by allowing non-positive (co)variance parameters, and support accurate estimation even for very small true variance parameters. The conjugate priors under the BCSM lead to efficient posterior computation. Bayes factors and the Bayesian Information Criterion are discussed for the purpose of model selection in the new framework. In two simulation studies, a satisfying performance of the MCMC algorithm and of the Bayes factor is shown. In comparison with parameter expansion through a half-Cauchy prior, estimates of variance parameters close to zero show no bias and undercoverage of credible intervals is avoided. An empirical example showcases the utility of the BCSM for response times to test the influence of item presentation formats on the test performance of students in a Latin square experimental design.


14.
Missing data are ubiquitous in psychological surveys and experiments, and they create a series of problems for estimating the variance components of unbalanced data in generalizability theory. Within the generalizability-theory framework, a program written in Matlab 7.0 was used to simulate missing data from a random two-facet crossed p×i×r design, and the performance of four approaches, a formula-based method, REML, a data-splitting method, and MCMC, in estimating the variance components was compared. Results showed: (1) the MCMC method clearly outperformed the other three methods in estimating variance components from missing p×i×r data; (2) items and raters are important factors influencing variance component estimation with missing data.

15.
黎光明  张敏强 《心理学报》2013,45(1):114-124
The bootstrap is a resampling-with-replacement method that can be used to estimate variance components and their variability in generalizability theory. Monte Carlo techniques were used to simulate data from four distributions: normal, binomial, multinomial, and skewed. Based on a p×i design, we examined whether the adjusted (bias-corrected) bootstrap improves on the unadjusted bootstrap in estimating variance components and their variability in generalizability theory for the four simulated distributions. Results showed that, across all four distributions, both overall and locally, and for both point estimates and variability estimates, the adjusted bootstrap outperformed the unadjusted bootstrap: the adjustment improved the estimation of variance components and their variability.
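A plain (uncorrected) percentile bootstrap for a p×i design can be sketched as follows. The ANOVA expected-mean-squares estimators are standard, but the design sizes and true variance values are illustrative assumptions, and the article's bias-corrected variant is not implemented here.

```python
import random
import statistics

def variance_components(data):
    """ANOVA (expected-mean-squares) estimates for a crossed p x i design
    with one observation per cell: returns (var_p, var_i, var_pi)."""
    n_p, n_i = len(data), len(data[0])
    grand = sum(map(sum, data)) / (n_p * n_i)
    row = [sum(r) / n_i for r in data]
    col = [sum(data[p][i] for p in range(n_p)) / n_p for i in range(n_i)]
    ms_p = n_i * sum((m - grand) ** 2 for m in row) / (n_p - 1)
    ms_i = n_p * sum((m - grand) ** 2 for m in col) / (n_i - 1)
    ms_res = sum((data[p][i] - row[p] - col[i] + grand) ** 2
                 for p in range(n_p) for i in range(n_i)) / ((n_p - 1) * (n_i - 1))
    return (ms_p - ms_res) / n_i, (ms_i - ms_res) / n_p, ms_res

rng = random.Random(11)
n_p, n_i = 100, 20
persons = [rng.gauss(0, 1.0) for _ in range(n_p)]   # true var_p  = 1.00
items = [rng.gauss(0, 0.5) for _ in range(n_i)]     # true var_i  = 0.25
data = [[persons[p] + items[i] + rng.gauss(0, 0.7)  # true var_pi = 0.49
         for i in range(n_i)] for p in range(n_p)]

est = variance_components(data)
print([round(v, 3) for v in est])

# Bootstrap the person facet to gauge the sampling variability of var_p.
# This captures only person-sampling uncertainty; the adjusted bootstrap
# in the article additionally corrects resampling-induced bias.
boot = []
for _ in range(300):
    rows = [data[rng.randrange(n_p)] for _ in range(n_p)]
    boot.append(variance_components(rows)[0])
print(round(statistics.stdev(boot), 3))
```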

16.
Missing data are very common in behavioural and psychological research. In this paper, we develop a Bayesian approach in the context of a general nonlinear structural equation model with missing continuous and ordinal categorical data. In the development, the missing data are treated as latent quantities, and provision for the incompleteness of the data is made by a hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm. We show by means of a simulation study that the Bayesian estimates are accurate. A Bayesian model comparison procedure based on the Bayes factor and path sampling is proposed. The required observations from the posterior distribution for computing the Bayes factor are simulated by the hybrid algorithm in Bayesian estimation. Our simulation results indicate that the correct model is selected more frequently when the incomplete records are used in the analysis than when they are ignored. The methodology is further illustrated with a real data set from a study concerned with an AIDS preventative intervention for Filipina sex workers.

17.
The use of one-way analysis of variance tables for obtaining unbiased estimates of true score variance and error score variance in the classical test theory model is discussed. Attention is paid to both balanced (equal numbers of observations on each person) and unbalanced designs, and estimates provided for both homoscedastic (common error variance for all persons) and heteroscedastic cases. It is noted that optimality properties (minimum variance) can be claimed for estimates derived from analysis of variance tables only in the balanced, homoscedastic case, and that there they are essentially a reflection of the symmetry inherent in the situation. Estimates which might be preferable in other cases are discussed. An example is given where a natural analysis of variance table leads to estimates which cannot be derived from the set of statistics which is sufficient under normality assumptions. Reference is made to Bayesian studies which shed light on the difficulties encountered. Work on this paper was carried out at the headquarters of the American College Testing Program, Iowa City, Iowa, while the author was on leave from the University College of Wales.
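For the balanced, homoscedastic case, the unbiased ANOVA estimates reduce to two mean squares. The following sketch simulates parallel measurements; the sample sizes and variance values are illustrative assumptions.

```python
import random

def true_error_variances(scores):
    """Balanced one-way ANOVA estimates for the classical model X = T + E,
    with k parallel measurements per person:
      error-score variance = MS_within
      true-score variance  = (MS_between - MS_within) / k
    The latter can come out negative in small samples, the difficulty
    the abstract flags for the Bayesian treatment."""
    n, k = len(scores), len(scores[0])
    person_means = [sum(row) / k for row in scores]
    grand = sum(person_means) / n
    ms_between = k * sum((m - grand) ** 2 for m in person_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(scores, person_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / k, ms_within

rng = random.Random(5)
n, k = 500, 4
scores = [[t + rng.gauss(0, 1.0) for _ in range(k)]              # error var 1.0
          for t in (rng.gauss(0, 2.0 ** 0.5) for _ in range(n))]  # true var 2.0

var_true, var_error = true_error_variances(scores)
reliability = var_true / (var_true + var_error)  # per single measurement
print(round(var_true, 2), round(var_error, 2), round(reliability, 2))
```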

18.
The spatial-temporal interference effect refers to illusions in which time perception is disturbed by spatial information, or space perception by temporal information. Some studies hold that the interference is asymmetric, with space always interfering more strongly with time; others hold that the strength of the mutual interference depends on experimental factors: in general space interferes more with time, but time can interfere with space to an equal or even greater degree. After reviewing the main claims of metaphor theory and magnitude theory, we focus on the Bayesian model's explanation of the interference effect, and propose three questions for future research: extending the range of phenomena the Bayesian model can explain, clarifying the neural mechanisms of spatial-temporal interference based on Bayesian inference, and exploring ways to regulate the interference.
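The Bayesian account rests on a conjugate Gaussian prior-likelihood combination. A minimal sketch (all numbers illustrative) of how a shared magnitude prior produces regression toward the mean, with a larger pull on the noisier dimension:

```python
def posterior_mean(measurement, noise_var, prior_mean, prior_var):
    """Conjugate Gaussian update: the percept is a precision-weighted
    compromise between the noisy sensory measurement and the prior."""
    w = prior_var / (prior_var + noise_var)
    return w * measurement + (1 - w) * prior_mean

prior_mean, prior_var = 1.0, 0.04   # hypothetical prior over durations (s)

# Central tendency: a short duration is over-estimated and a long one
# under-estimated, both dragged toward the prior mean.
print(posterior_mean(0.6, 0.02, prior_mean, prior_var))  # pulled up toward 1.0
print(posterior_mean(1.4, 0.02, prior_mean, prior_var))  # pulled down toward 1.0

# Asymmetry: if the "temporal" reading is noisier than the "spatial" one,
# the same prior pulls time harder than space, so cross-dimensional
# interference need not be symmetric.
print(posterior_mean(0.6, 0.08, prior_mean, prior_var))  # strong pull (noisy)
print(posterior_mean(0.6, 0.01, prior_mean, prior_var))  # weak pull (precise)
```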

19.
Estimating the Variability of Variance Components Based on Generalizability Theory
黎光明  张敏强 《心理学报》2009,41(9):889-901
Generalizability theory is widely applied in psychological and educational measurement practice, and variance component estimation is the key step in a generalizability analysis. Because variance component estimates are subject to sampling, their variability needs to be examined. Using Monte Carlo simulation, the influence of different methods on estimating the variability of variance components based on generalizability theory was examined under a normal distribution. Results showed that the jackknife method is inadvisable for estimating the variability of variance components; when the "divide-and-conquer" strategy of the bootstrap is not adopted, the traditional method and MCMC with informative priors show clear overall advantages in estimating both of the variability measures, standard errors and confidence intervals.

20.
A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号