Similar documents
Found 20 similar documents (search time: 46 ms)
1.
A procedure for generating multivariate nonnormal distributions is proposed. Our procedure generates average values of intercorrelations much closer to population parameters than competing procedures for skewed and/or heavy tailed distributions and for small sample sizes. Also, it eliminates the necessity of conducting a factorization procedure on the population correlation matrix that underlies the random deviates, and it is simpler to code in a programming language (e.g., FORTRAN). Numerical examples demonstrating the procedures are given. Monte Carlo results indicate our procedure yields excellent agreement between population parameters and average values of intercorrelation, skew, and kurtosis.

2.
A method for simulating non-normal distributions (cited by: 11; self-citations: 0; other citations: 11)
A method of introducing a controlled degree of skew and kurtosis for Monte Carlo studies was derived. The form of such a transformation of a standard normal deviate X ~ N(0, 1) is Y = a + bX + cX^2 + dX^3. Analytic and empirical validation of the method is demonstrated. This work was done while the author was at the University of Illinois at Champaign-Urbana.
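The cubic transformation above can be sketched numerically. The sketch below solves the standard Fleishman moment equations with `scipy.optimize.fsolve` to obtain the polynomial weights for a chosen skew and excess kurtosis; the target values (0.75, 0.80), the solver starting point, and the sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_coeffs(skew, exkurt):
    """Solve the Fleishman moment equations for (a, b, c, d) in
    Y = a + bX + cX^2 + dX^3 with X ~ N(0, 1), so that Y has mean 0,
    variance 1, and the requested skew and excess kurtosis (a = -c)."""
    def eqs(p):
        b, c, d = p
        return (
            b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,                       # variance = 1
            2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,                 # third moment
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - exkurt,  # fourth moment
        )
    b, c, d = fsolve(eqs, (0.9, 0.1, 0.1))   # illustrative starting point
    return -c, b, c, d

# Illustrative target: skew 0.75, excess kurtosis 0.80
a, b, c, d = fleishman_coeffs(0.75, 0.80)
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
y = a + b*x + c*x**2 + d*x**3   # non-normal deviates with roughly the target moments
```

Note that Fleishman's system can have multiple real solutions for a given (skew, kurtosis) pair (see entry 19 below), so the solution found depends on the starting point.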

3.
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust \(\chi^2\) and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

4.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.

5.
Several authors have cautioned against using Fisher's z‐transformation in random‐effects meta‐analysis of correlations, which seems to perform poorly in some situations, especially with substantial inter‐study heterogeneity. Attributing this performance largely to the direct z‐to‐r transformation (DZRT) of Fisher z results (e.g. point estimate of mean correlation), in a previous paper Hafdahl (2009) proposed point and interval estimators of the mean Pearson r correlation that instead use an integral z‐to‐r transformation (IZRT). The present Monte Carlo study of these IZRT Fisher z estimators includes comparisons with their DZRT counterparts and with estimators based on Pearson r correlations. The IZRT point estimator was usually more accurate and efficient than its DZRT counterpart and comparable to the two Pearson r point estimators – better in some conditions but worse in others. Coverage probability for the IZRT confidence intervals (CIs) was often near nominal, much better than for the DZRT CIs, and comparable to coverage for the Pearson r CIs; every approach's CI fell markedly below nominal in some conditions. The IZRT estimators contradict warnings about Fisher z estimators' poor performance. Recommendations for practising research synthesists are offered, and an Appendix provides computing code to implement the IZRT as in the real‐data example.
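The integral z-to-r transformation described above amounts to computing E[tanh(Z)] under a normal model for the Fisher z values, rather than applying tanh to the mean z directly. A minimal quadrature sketch, assuming a normal random-effects distribution for z; the grid width and point count are arbitrary choices, not Hafdahl's implementation:

```python
import numpy as np

def izrt_mean_r(mu_z, sd_z, n_grid=4001, half_width=8.0):
    """Integral z-to-r transformation (sketch): estimate the mean Pearson
    correlation as E[tanh(Z)] with Z ~ N(mu_z, sd_z**2), instead of the
    direct transformation tanh(mu_z).  Plain trapezoidal quadrature."""
    z = np.linspace(mu_z - half_width*sd_z, mu_z + half_width*sd_z, n_grid)
    dens = np.exp(-0.5*((z - mu_z)/sd_z)**2) / (sd_z*np.sqrt(2*np.pi))
    f = np.tanh(z) * dens
    dz = z[1] - z[0]
    return float(np.sum(0.5*(f[:-1] + f[1:])) * dz)   # trapezoid rule

r_small = izrt_mean_r(0.5, 0.01)   # little heterogeneity: close to tanh(0.5)
r_large = izrt_mean_r(0.5, 0.40)   # heterogeneity pulls E[tanh(Z)] below tanh(mu_z)
```

With negligible heterogeneity the IZRT and DZRT estimates coincide; as heterogeneity grows they diverge, which is the situation the abstract's comparisons target.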

6.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.

7.
Asymptotic distributions of the estimators of communalities are derived for the maximum likelihood method in factor analysis. It is shown that the common practice of equating the asymptotic standard error of the communality estimate to the unique variance estimate is correct for standardized communality but not correct for unstandardized communality. In a Monte Carlo simulation the accuracy of the normal approximation to the distributions of the estimators are assessed when the sample size is 150 or 300. This study was carried out in part under the ISM Cooperative Research Program (90-ISM-CRP-9).

8.
Recent studies have shown that many physiological and behavioral processes can be characterized by long-range correlations. The Hurst exponent H of fractal analysis and the fractional-differencing parameter d of the ARFIMA methodology are useful for capturing serial correlations. In this study, we report on different estimators of H and d implemented in R, a popular and freely available software package. By means of Monte Carlo simulations, we analyzed the performance of (1) the Geweke and Porter-Hudak estimator, (2) the approximate maximum likelihood algorithm, (3) the smoothed periodogram approach, (4) the Whittle estimator, (5) rescaled range analysis, (6) a modified periodogram, (7) Higuchi's method, and (8) detrended fluctuation analysis. The findings, confined to ARFIMA (0, d, 0) models and fractional Gaussian noise, identify the best estimators for persistent and antipersistent series. Two examples combining these results with the step-by-step procedure proposed by Delignières et al. (2006) demonstrate how this evaluation can be used as a guideline in a typical research situation.
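Of the estimators listed, detrended fluctuation analysis is simple enough to sketch directly. The version below is a minimal Python illustration with non-overlapping windows, linear detrending, and hypothetical scale choices; it is not the R implementation evaluated in the study.

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128, 256)):
    """Detrended fluctuation analysis (minimal sketch).  Returns the scaling
    exponent alpha: ~0.5 for white noise, >0.5 for persistent series,
    <0.5 for antipersistent series."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t))**2))
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return float(slope)

rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.standard_normal(4096))   # white noise: alpha near 0.5
```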

9.
方杰, 温忠麟. 《心理科学》 2018, (4): 962-967
This study compared the performance of the Bayesian method, the Monte Carlo method, and the parametric bootstrap method in 2-1-1 multilevel mediation analysis. The results showed that: (1) the Bayesian method with informative priors gave the most accurate point and interval estimates of the mediation effect; (2) the Bayesian method without prior information, the Monte Carlo method, and the bias-corrected and uncorrected parametric bootstrap methods performed comparably in both point and interval estimation, although the Monte Carlo method was slightly better than the other three methods on Type I error rate and interval width, while the bias-corrected bootstrap was slightly better on statistical power but worst on Type I error rate. The results suggest using the Bayesian method when prior information is available, and the Monte Carlo method when it is not.

10.
Arnau J, Bendayan R, Blanca MJ, Bono R. Psicothema 2012, 24(3): 449-454
This study aimed to evaluate the robustness of the linear mixed model, with the Kenward-Roger correction for degrees of freedom, when implemented in SAS PROC MIXED, using split-plot designs with small sample sizes. A Monte Carlo simulation design involving three groups and four repeated measures was used, assuming an unstructured covariance matrix to generate the data. The study variables were: sphericity, with epsilon values of 0.75 and 0.57; group sizes, equal or unequal; and shape of the distribution. As regards the latter, non-normal distributions were introduced, combining different values of kurtosis in each group. In the case of unbalanced designs, the effect of pairing (positive or negative) the degree of kurtosis with group size was also analysed. The results show that the Kenward-Roger procedure is liberal, particularly for the interaction effect, under certain conditions in which normality is violated. The relationship between the values of kurtosis in the groups and the pairing of kurtosis with group size are found to be relevant variables to take into account when applying this procedure.

11.
The study explores the robustness to violations of normality and sphericity of linear mixed models when they are used with the Kenward–Roger procedure (KR) in split‐plot designs in which the groups have different distributions and sample sizes are small. The focus is on examining the effect of skewness and kurtosis. To this end, a Monte Carlo simulation study was carried out, involving a split‐plot design with three levels of the between‐subjects grouping factor and four levels of the within‐subjects factor. The results show that: (1) the violation of the sphericity assumption did not affect KR robustness when the assumption of normality was not fulfilled; (2) the robustness of the KR procedure decreased as skewness in the distributions increased, there being no strong effect of kurtosis; and (3) the type of pairing between kurtosis and group size was shown to be a relevant variable to consider when using this procedure, especially when pairing is positive (i.e., when the largest group is associated with the largest value of the kurtosis coefficient and the smallest group with its smallest value). The KR procedure can be a good option for analysing repeated‐measures data when the groups have different distributions, provided the total sample sizes are 45 or larger and the data are not highly or extremely skewed.

12.
Generalized fiducial inference (GFI) has been proposed as an alternative to likelihood-based and Bayesian inference in mainstream statistics. Confidence intervals (CIs) can be constructed from a fiducial distribution on the parameter space in a fashion similar to those used with a Bayesian posterior distribution. However, no prior distribution needs to be specified, which renders GFI more suitable when no a priori information about model parameters is available. In the current paper, we apply GFI to a family of binary logistic item response theory models, which includes the two-parameter logistic (2PL), bifactor and exploratory item factor models as special cases. Asymptotic properties of the resulting fiducial distribution are discussed. Random draws from the fiducial distribution can be obtained by the proposed Markov chain Monte Carlo sampling algorithm. We investigate the finite-sample performance of our fiducial percentile CI and two commonly used Wald-type CIs associated with maximum likelihood (ML) estimation via Monte Carlo simulation. The use of GFI in high-dimensional exploratory item factor analysis was illustrated by the analysis of a set of the Eysenck Personality Questionnaire data.

13.
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z′ under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z′ interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code.
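For context, the default Fisher z′ interval that the methods above adjust can be sketched as follows; the sample correlation and sample size in the example are hypothetical.

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation via Fisher's z' transformation,
    assuming bivariate normality -- the baseline ("default") interval
    that adjusted methods for non-normal data start from."""
    z = math.atanh(r)                  # z' = 0.5 * ln((1 + r) / (1 - r))
    half = z_crit / math.sqrt(n - 3)   # asymptotic SE of z' is 1 / sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)

lo, hi = fisher_z_ci(0.45, 100)
# back-transforming makes the interval asymmetric around r = 0.45
```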

14.
This study analyzes the robustness of the linear mixed model (LMM) with the Kenward–Roger (KR) procedure to violations of normality and sphericity when used in split-plot designs with small sample sizes. Specifically, it explores the independent effect of skewness and kurtosis on KR robustness for the values of skewness and kurtosis coefficients that are most frequently found in psychological and educational research data. To this end, a Monte Carlo simulation study was designed, considering a split-plot design with three levels of the between-subjects grouping factor and four levels of the within-subjects factor. Robustness is assessed in terms of the probability of type I error. The results showed that (1) the robustness of the KR procedure does not differ as a function of the violation or satisfaction of the sphericity assumption when small samples are used; (2) the LMM with KR can be a good option for analyzing total sample sizes of 45 or larger when their distributions are normal, slightly or moderately skewed, and with different degrees of kurtosis violation; (3) the effect of skewness on the robustness of the LMM with KR is greater than the corresponding effect of kurtosis for common values; and (4) when data are not normal and the total sample size is 30, the procedure is not robust. Alternative analyses should be performed when the total sample size is 30.

15.
RMediation: An R package for mediation analysis confidence intervals (cited by: 1; self-citations: 0; other citations: 1)
This article describes the RMediation package, which offers various methods for building confidence intervals (CIs) for mediated effects. The mediated effect is the product of two regression coefficients. The distribution-of-the-product method has the best statistical performance of existing methods for building CIs for the mediated effect. RMediation produces CIs using methods based on the distribution of the product, Monte Carlo simulations, and an asymptotic normal distribution. Furthermore, RMediation generates percentiles, quantiles, and the plot of the distribution and CI for the mediated effect. An existing program, called PRODCLIN, published in Behavior Research Methods, has been widely cited and used by researchers to build accurate CIs. PRODCLIN has several limitations: the program is somewhat cumbersome to access and yields no result for several cases. RMediation, described herein, is based on the widely available R software, includes several capabilities not available in PRODCLIN, and provides accurate results that PRODCLIN could not.
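One of the interval types mentioned above, the Monte Carlo CI, is easy to sketch: draw the two path coefficients from their normal sampling distributions and take percentiles of the product. The estimates and standard errors below are made-up inputs, and independence of the two coefficients is assumed; this is a generic illustration, not RMediation's implementation.

```python
import numpy as np

def mc_ci_ab(a, se_a, b, se_b, conf=0.95, reps=200_000, seed=0):
    """Monte Carlo confidence interval for the mediated effect a*b:
    simulate the (assumed independent, normal) sampling distributions of
    the two coefficients and take percentiles of their product."""
    rng = np.random.default_rng(seed)
    prod = rng.normal(a, se_a, reps) * rng.normal(b, se_b, reps)
    lo, hi = np.percentile(prod, [100*(1 - conf)/2, 100*(1 + conf)/2])
    return float(lo), float(hi)

# Hypothetical estimates: a = 0.35 (SE 0.10), b = 0.45 (SE 0.12)
lo, hi = mc_ci_ab(a=0.35, se_a=0.10, b=0.45, se_b=0.12)
# the resulting interval brackets a*b = 0.1575 and is typically asymmetric
```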

16.
方杰, 张敏强. 《心理学报》 2012, 44(10): 1408-1420
Because the sampling distribution of the mediated effect ab is often non-normal, three classes of methods have recently been proposed that place no restrictions on the sampling distribution of ab and are suitable for small and medium samples: the distribution-of-the-product method, the nonparametric bootstrap, and Markov chain Monte Carlo (MCMC) methods. A simulation study compared the performance of the three classes of methods in mediation analysis. The results showed that: (1) the MCMC method with informative priors gave the most accurate point estimate of ab; (2) the MCMC method with informative priors had the highest statistical power, but at the cost of underestimating the Type I error rate, while the bias-corrected nonparametric percentile bootstrap had the next-highest power, at the cost of overestimating the Type I error rate; and (3) the MCMC method with informative priors gave the most accurate interval estimate of the mediation effect. The results suggest using the MCMC method with informative priors when prior information is available, and the bias-corrected nonparametric percentile bootstrap when it is not.

17.
Organizational research and practice involving ratings are rife with what the authors term ill-structured measurement designs (ISMDs): designs in which raters and ratees are neither fully crossed nor nested. This article explores the implications of ISMDs for estimating interrater reliability. The authors first provide a mock example that illustrates potential problems that ISMDs create for common reliability estimators (e.g., Pearson correlations, intraclass correlations). Next, the authors propose an alternative reliability estimator, G(q,k), that resolves problems with traditional estimators and is equally appropriate for crossed, nested, and ill-structured designs. By using Monte Carlo simulation, the authors evaluate the accuracy of traditional reliability estimators compared with that of G(q,k) for ratings arising from ISMDs. Regardless of condition, G(q,k) yielded estimates as precise or more precise than those of traditional estimators. The advantage of G(q,k) over the traditional estimators became more pronounced with increases in (a) the overlap between the sets of raters that rated each ratee and (b) the ratio of rater main effect variance to true score variance. Discussion focuses on implications of this work for organizational research and practice.

18.
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the higher power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, results also identified that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.

19.
Fleishman's power method is frequently used to simulate non-normal data with a desired skewness and kurtosis. Fleishman's method requires solving a system of nonlinear equations to find the third-order polynomial weights that transform a standard normal variable into a non-normal variable with desired moments. Most users of the power method seem unaware that Fleishman's equations have multiple solutions for typical combinations of skewness and kurtosis. Furthermore, researchers lack a simple method for exploring the multiple solutions of Fleishman's equations, so most applications only consider a single solution. In this paper, we propose novel methods for finding all real-valued solutions of Fleishman's equations. Additionally, we characterize the solutions in terms of differences in higher order moments. Our theoretical analysis of the power method reveals that there typically exist two solutions of Fleishman's equations that have noteworthy differences in higher order moments. Using simulated examples, we demonstrate that these differences can have remarkable effects on the shape of the non-normal distribution, as well as the sampling distributions of statistics calculated from the data. Some considerations for choosing a solution are discussed, and some recommendations for improved reporting standards are provided.

20.
Autocorrelation and partial autocorrelation, which provide a mathematical tool to understand repeating patterns in time series data, are often used to facilitate the identification of model orders of time series models (e.g., moving average and autoregressive models). Asymptotic methods for testing autocorrelation and partial autocorrelation such as the 1/T approximation method and the Bartlett's formula method may fail in finite samples and are vulnerable to non-normality. Resampling techniques such as the moving block bootstrap and the surrogate data method are competitive alternatives. In this study, we use a Monte Carlo simulation study and a real data example to compare asymptotic methods with the aforementioned resampling techniques. For each resampling technique, we consider both the percentile method and the bias-corrected and accelerated method for interval construction. Simulation results show that the surrogate data method with percentile intervals yields better performance than the other methods. An R package pautocorr is used to carry out tests evaluated in this study.
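The surrogate data method mentioned above can be illustrated with a phase-randomization surrogate, which preserves a series' amplitude spectrum (and hence, essentially, its autocorrelation) while randomizing other structure. This is a generic sketch, not the pautocorr implementation.

```python
import numpy as np

def phase_surrogate(x, rng):
    """Phase-randomized surrogate (a common surrogate-data scheme): keep the
    amplitude spectrum of x, draw new Fourier phases uniformly, and invert.
    DC and (for even length) Nyquist phases are fixed so the result is real."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0*np.pi, len(spec))
    phases[0] = 0.0                  # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist bin real for even n
    return np.fft.irfft(np.abs(spec) * np.exp(1j*phases), n)

rng = np.random.default_rng(2)
x = rng.standard_normal(512)
surr = phase_surrogate(x, rng)
# surr has the same amplitude spectrum as x but randomized phases
```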


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号