Similar documents (20 results)
1.
The paper takes up the problem of performing all pairwise comparisons among J independent groups based on 20% trimmed means. Currently, a method that stands out is the percentile-t bootstrap method, where the bootstrap is used to estimate the quantiles of a Studentized maximum modulus distribution when all pairs of population trimmed means are equal. However, a concern is that in simulations, the actual probability of one or more Type I errors can drop well below the nominal level when sample sizes are small. A practical issue is whether a method can be found that corrects this problem while maintaining the positive features of the percentile-t bootstrap. Three new methods are considered here, one of which achieves the desired goal. Another method, which takes advantage of theoretical results by Singh (1998), performs almost as well but is not recommended when the smallest sample size drops below 15. In some situations, however, it gives substantially shorter confidence intervals.
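The percentile-t idea for a trimmed mean can be sketched for a single group as follows. This is a generic illustration using Yuen's Winsorized-variance standard error, not the authors' multiple-comparison procedure, and all function names are ours:

```python
import numpy as np

def trimmed_mean(x, gamma=0.2):
    """Mean after trimming the lowest and highest 100*gamma percent."""
    x = np.sort(x)
    g = int(gamma * len(x))
    return x[g:len(x) - g].mean()

def trimmed_se(x, gamma=0.2):
    """SE of the trimmed mean via the Winsorized variance (Yuen, 1974)."""
    x = np.sort(x)
    n = len(x)
    g = int(gamma * n)
    w = x.copy()
    w[:g] = x[g]              # Winsorize: pull tails in to the trim points
    w[n - g:] = x[n - g - 1]
    return w.std(ddof=1) / ((1 - 2 * gamma) * np.sqrt(n))

def boot_t_ci(x, gamma=0.2, B=2000, alpha=0.05, seed=None):
    """Percentile-t bootstrap CI: estimate the quantiles of the
    Studentized trimmed mean from bootstrap resamples."""
    rng = np.random.default_rng(seed)
    tm, se = trimmed_mean(x, gamma), trimmed_se(x, gamma)
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=len(x), replace=True)
        t_star[b] = (trimmed_mean(xb, gamma) - tm) / trimmed_se(xb, gamma)
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return tm - q_hi * se, tm - q_lo * se
```

Studentizing inside the bootstrap, rather than bootstrapping the trimmed mean directly, is what gives the percentile-t method its second-order accuracy.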

2.
A local measure of association that allows both heteroscedasticity and a non‐linear association was developed during the 1990s. The basic goal is to measure the strength of the association between X and Y, given X, when Y=θ(X)+τ(X)ε for some unknown functions θ(X) and τ(X). Application of this method requires the estimation of the derivative of θ(X). The focus in this paper is on four alternatives to a very slight modification of the method used by Doksum et al. when estimating this derivative. The main result is that in simulations, a certain robust analogue of their method dominates in terms of mean squared error, even under normality. The bias of the method is found to be small but a little larger than the bias associated with the method used by Doksum et al. The method is based in part on bootstrap bagging followed by a lowess smooth.

3.
This paper describes a method for generating sample survival distributions from a hypothetical population, as would be required for running Monte Carlo simulations. The method is based on the concept of a quincunx. Cases are entered into a life table and allowed to drop out or die during each interval with probabilities that mirror the hypothetical population. By repeating this process many times and tracking the results, the researcher is able to study the sampling distribution of effect size indices and test statistics, and can generate empirical estimates of power and precision for planned studies. Unlike other methods that are commonly used for this purpose, the model proposed here allows the researcher to define a population in which the hazard rates and/or attrition rates vary substantially from one time point to the next, as may be the case in clinical trials or studies of processing times. The method requires less than 100 lines of code and runs some 10,000 simulations per hour on a microcomputer.
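A minimal version of such a life-table simulator (interval-specific death and dropout probabilities, many replications, sampling distribution of the cumulative survival estimate) might look like the sketch below. The names and the simple product-limit survival estimate are our illustrative choices, not the paper's code:

```python
import random

def one_sample_curve(n, hazard, attrition, rng):
    """Push n cases through a life table whose death (hazard) and dropout
    (attrition) probabilities can differ at every interval; return the
    estimated cumulative survival after each interval."""
    at_risk, surv, curve = n, 1.0, []
    for h, a in zip(hazard, attrition):
        if at_risk == 0:
            curve.append(surv)
            continue
        deaths = sum(rng.random() < h for _ in range(at_risk))
        surv *= 1 - deaths / at_risk
        at_risk -= deaths
        at_risk -= sum(rng.random() < a for _ in range(at_risk))  # dropouts
        curve.append(surv)
    return curve

def sampling_distribution(reps, n, hazard, attrition, seed=0):
    """Repeat the life-table experiment to approximate the sampling
    distribution of the final survival estimate."""
    rng = random.Random(seed)
    return [one_sample_curve(n, hazard, attrition, rng)[-1]
            for _ in range(reps)]
```

Because the hazard and attrition vectors are per-interval, rates that jump around over time (the case the paper emphasizes) need no special handling.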

4.
Two common methods for adjusting group comparisons for differences in the distribution of confounders, namely analysis of covariance (ANCOVA) and subset selection, are compared using real examples from neuropsychology, theory, and simulations. ANCOVA has potential pitfalls, but the blanket rejection of the method in some areas of empirical psychology is not justified. Assumptions of the methods are reviewed, with issues of selection bias, nonlinearity, and interaction emphasized. Advantages of ANCOVA include better power, improved ability to detect and estimate interactions, and the availability of extensions to deal with measurement error in the covariates. Forms of ANCOVA are advocated that relax the standard assumption of linearity between the outcome and covariates. Specifically, a version of ANCOVA that models the relationship between the covariate and the outcome through a cubic spline with fixed knots outperforms other methods in simulations.
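The spline-based ANCOVA the abstract favours can be illustrated with a fixed-knot truncated-power cubic basis. This sketch is our own construction, not the authors' code; it fits a group effect plus a spline in the covariate by least squares:

```python
import numpy as np

def spline_ancova(y, group, x, knots):
    """ANCOVA in which the covariate enters through a fixed-knot cubic
    spline (truncated power basis) rather than a straight line.
    group is a 0/1 indicator; returns the adjusted group effect."""
    cols = [np.ones_like(x), group.astype(float), x, x**2, x**3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # coefficient on the group dummy
```

The point of the basis is that a strongly non-linear covariate-outcome relationship is absorbed by the spline columns instead of biasing the group coefficient.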

5.
Dominance‐based ordinal multiple regression (DOR) is designed to answer ordinal questions about relationships among ordinal variables. Only one parameter per predictor is estimated, and the number of parameters is constant for any number of outcome levels. The majority of existing simulation evaluations of DOR use predictors that are continuous or ordinal with many categories, so the performance of the method is not well understood for ordinal variables with few categories. This research evaluates DOR in simulations using three‐category ordinal variables for the outcome and predictors, with a comparison to the cumulative logits proportional odds model (POC). Although ordinary least squares (OLS) regression is inapplicable for theoretical reasons, it was also included in the simulations because of its popularity in the social sciences. Most simulation outcomes indicated that DOR performs well for variables with few categories, and is preferable to the POC for smaller samples and when the proportional odds assumption is violated. Nevertheless, confidence interval coverage for DOR was not flawless and possibilities for improvement are suggested.

6.
Four misconceptions about the requirements for proper use of analysis of covariance (ANCOVA) are examined by means of Monte Carlo simulation. Conclusions are that ANCOVA does not require covariates to be measured without error, that ANCOVA can be used effectively to adjust for initial group differences that result from nonrandom assignment which is dependent on observed covariate scores, that ANCOVA does not provide unbiased estimates of true treatment effects where initial group differences are due to nonrandom assignment which is dependent on the true latent covariable if the covariate contains measurement error, and that ANCOVA requires no assumption concerning the equality of within-groups and between-groups regression. Where treatments actually influence covariate scores, the hypothesis tested by ANCOVA concerns a weighted combination of effects on covariate and dependent variables.

7.
A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies.
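The correct-then-bootstrap workflow can be sketched as follows. For brevity the sketch uses the simpler direct-selection (Thorndike Case II) correction rather than the indirect-restriction formula studied in the paper; k is the ratio of unrestricted to restricted standard deviations, and all names are illustrative:

```python
import numpy as np

def correct_r(r, k):
    """Thorndike Case II correction for direct range restriction;
    k = SD(unrestricted) / SD(restricted) on the selection variable."""
    return r * k / np.sqrt(1 + r**2 * (k**2 - 1))

def boot_se_corrected(x, y, k, B=1000, seed=0):
    """Bootstrap standard error of the corrected correlation:
    resample (x, y) pairs, re-correct, take the SD."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        r = np.corrcoef(x[idx], y[idx])[0, 1]
        reps[b] = correct_r(r, k)
    return reps.std(ddof=1)
```

An analytic (delta-method) standard error, as the paper proposes, would replace the resampling loop with a closed-form expression and avoid the computational cost.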

8.
Principal covariate regression (PCOVR) is a method for regressing a set of criterion variables with respect to a set of predictor variables when the latter are many in number and/or collinear. This is done by extracting a limited number of components that simultaneously synthesize the predictor variables and predict the criterion ones. So far, no procedure has been offered for estimating statistical uncertainties of the obtained PCOVR parameter estimates. The present paper shows how this goal can be achieved, conditionally on the model specification, by means of the bootstrap approach. Four strategies for estimating bootstrap confidence intervals are derived and their statistical behaviour in terms of coverage is assessed by means of a simulation experiment. Such strategies are distinguished by the use of the varimax and quartimin procedures and by the use of Procrustes rotations of bootstrap solutions towards the sample solution. In general, the four strategies showed appropriate statistical behaviour, with coverage tending to the desired level for increasing sample sizes. The main exception involved strategies based on the quartimin procedure in cases characterized by complex underlying structures of the components. The appropriateness of the statistical behaviour was higher when the proper number of components was extracted.

9.
Estimating the Variability of Variance Components Based on Generalizability Theory
黎光明, 张敏强. Acta Psychologica Sinica (《心理学报》), 2009, 41(9): 889-901.
Generalizability theory is widely applied in psychological and educational measurement, and variance component estimation is the key step in a generalizability analysis. Because variance component estimates are limited by sampling, their variability needs to be examined. Using Monte Carlo simulation, we examined how different methods affect the estimation of the variability of variance components under generalizability theory for normally distributed data. The results show that the jackknife method is inadvisable for estimating this variability; when the "divide-and-conquer" bootstrap strategy is not adopted, the traditional method and the MCMC method with informative priors show a clear overall advantage on both variability measures, the standard error and the confidence interval.

10.
Misunderstanding analysis of covariance
Despite numerous technical treatments in many venues, analysis of covariance (ANCOVA) remains a widely misused approach to dealing with substantive group differences on potential covariates, particularly in psychopathology research. Published articles reach unfounded conclusions, and some statistics texts neglect the issue. The problem with ANCOVA in such cases is reviewed. In many cases, there is no means of achieving the superficially appealing goal of "correcting" or "controlling for" real group differences on a potential covariate. In hopes of curtailing misuse of ANCOVA and promoting appropriate use, a nontechnical discussion is provided, emphasizing a substantive confound rarely articulated in textbooks and other general presentations, to complement the mathematical critiques already available. Some alternatives are discussed for contexts in which ANCOVA is inappropriate or questionable.

11.
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z′ under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z′ interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code.
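For reference, the default interval the paper is improving on, Fisher's z′ with the normal-theory standard error 1/√(n−3), is simply:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """Default 95% CI for a Pearson correlation: transform to Fisher's
    z' = atanh(r), use SE = 1 / sqrt(n - 3), transform back. Accurate
    under bivariate normality, which is exactly the assumption at issue."""
    z = math.atanh(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

The paper's methods keep this transform-back structure but replace the fixed standard error with one that reflects the sample's skewness and kurtosis.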

12.
Moderated Mediation Analysis Based on Structural Equation Modeling
方杰, 温忠麟. Journal of Psychological Science (《心理科学》), 2018, (2): 475-483.
A moderated mediation model is one in which the mediation process depends on a moderator variable. A widespread problem with current moderated mediation analyses is noted: most tests rely on multiple linear regression, which ignores measurement error, whereas moderated mediation analysis based on structural equation modeling (SEM) requires product indicators and therefore faces the problems of generating those indicators and of the non-normal distribution of product terms. After a brief introduction to the latent moderated structural equations (LMS) approach, it is recommended that SEM-based moderated mediation analysis be conducted with LMS combined with bias-corrected bootstrap confidence intervals. A step-by-step procedure for moderated mediation SEM analysis is summarized, with a worked example and the corresponding Mplus code. Future directions for LMS and moderated mediation models are discussed.

13.
黎光明, 张敏强. Acta Psychologica Sinica (《心理学报》), 2013, 45(1): 114-124.
The bootstrap is a resampling method with replacement that can be used to estimate variance components and their variability in generalizability theory. Monte Carlo methods were used to simulate data from four distributions: normal, binomial, multinomial, and skewed. For a p×i design, we examined whether the adjusted bootstrap improves on the unadjusted bootstrap in estimating variance components and their variability across the four simulated distributions. The results show that across all four distributions, both overall and locally, the adjusted bootstrap outperforms the unadjusted bootstrap for both point estimates and variability estimates, improving the estimation of variance components and their variability in generalizability theory.
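For a crossed p × i design, the ANOVA variance-component estimates and a person-level bootstrap of their variability can be sketched as below. This illustrates the general idea only; the adjusted estimators compared in the paper are not reproduced here, and the names are ours:

```python
import numpy as np

def vc_pxi(X):
    """ANOVA estimates of the person, item, and residual variance
    components for a crossed p x i design (persons = rows)."""
    n_p, n_i = X.shape
    gm = X.mean()
    ss_p = n_i * ((X.mean(axis=1) - gm) ** 2).sum()
    ss_i = n_p * ((X.mean(axis=0) - gm) ** 2).sum()
    ss_res = ((X - gm) ** 2).sum() - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    # Expected mean squares for the random model give these estimators
    return (ms_p - ms_res) / n_i, (ms_i - ms_res) / n_p, ms_res

def boot_vc_se(X, B=500, seed=0):
    """Bootstrap persons with replacement and return the standard error
    of each variance-component estimate."""
    rng = np.random.default_rng(seed)
    n_p = X.shape[0]
    ests = np.array([vc_pxi(X[rng.integers(0, n_p, n_p)])
                     for _ in range(B)])
    return ests.std(axis=0, ddof=1)
```

Resampling persons only is one of several bootstrap schemes for this design; which facet to resample (persons, items, or both) is itself a methodological choice in this literature.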

14.
This paper aims to improve the prediction accuracy of Tropical Cyclone Tracks (TCTs) over the South China Sea (SCS) at a 24 h lead time. The proposed model is a bagged ensemble of regularized extreme learning machines (ELMs). A method that recasts the lasso and elastic net problems in the ELM as a quadratic programming (QP) problem is proposed. The forecast error on the TCT data set is the distance between the actual and forecast positions. Compared with the stepwise regression method widely used for TCTs, the model improves accuracy by 8.26 km on a data set with 70/1680 testing/training records; on a smaller data set with 30/720 testing/training records, the improvement is 16.49 km. The results show that the bagged regularized ELM has generally better generalization capacity on the TCT data set.
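The core of such a model (a random hidden layer, a regularized linear solve for the output weights, and bagging over bootstrap resamples) can be sketched as below. For brevity the sketch uses a closed-form ridge penalty instead of the paper's QP-based lasso/elastic net, and all class and parameter names are ours:

```python
import numpy as np

class ELM:
    """Minimal regularized extreme learning machine: a random tanh
    hidden layer, then ridge-regularized least squares for the
    output weights (so training is a single linear solve)."""
    def __init__(self, n_hidden=50, lam=1e-2, seed=0):
        self.n_hidden, self.lam = n_hidden, lam
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        A = H.T @ H + self.lam * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def bagged_elm(X, y, n_models=10, seed=0, **kw):
    """Bagging: train each ELM on a bootstrap resample and
    average the member predictions."""
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for i in range(n_models):
        idx = rng.integers(0, n, n)
        models.append(ELM(seed=seed + i + 1, **kw).fit(X[idx], y[idx]))
    return lambda Xnew: np.mean([m.predict(Xnew) for m in models], axis=0)
```

Averaging the ensemble mainly reduces the variance contributed by the random hidden weights and the resampling, which is why bagging pairs naturally with ELMs.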

15.
Model selection is an important research problem in cognitive diagnostic assessment. In practice, the model that truly fits the data is unknown, and practitioners usually check model-data fit with fit indices. From the standpoint of measurement quality, beyond ensuring model-data fit, the reliability and validity of the diagnostic results also need to be evaluated. Given that previous research has mostly relied on information-based fit indices to judge model-data match, this study proposes combining fit indices with a reliability index for model selection and for evaluating model misspecification. The experimental factors were the true and fitted models (DINA, G-DINA, R-RUM), sample size, test length, and number of attributes. Under a five-factor (3×3×2×2×2) design, the correct-model selection rates of the mean and standard error of bootstrap interval estimates of attribute classification consistency reliability were compared with those of the common fit statistics -2LL, AIC, and BIC. The results show that -2LL performs well when the test is long, whereas AIC and BIC perform well when the sample is large; across the research conditions, the model selection rates of -2LL, AIC, and BIC are quite unstable, while the mean and standard error of the bootstrap-estimated attribute classification consistency reliability yield more stable selection rates and perform better overall.

16.
Recently, a multiple comparisons procedure was derived with the goal of determining whether it is reasonable to make a decision about which of J independent groups has the largest robust measure of location. This was done by testing hypotheses aimed at comparing the group with the largest estimate to the remaining J − 1 groups. It was demonstrated that for the goal of controlling the familywise error rate, meaning the probability of one or more Type I errors, well-known improvements on the Bonferroni method can perform poorly. A technique for dealing with this issue was suggested and found to perform well in simulations. However, when dealing with dependent groups, the method is unsatisfactory. This note suggests an alternative method that is designed for dependent groups.

17.
黎光明, 张敏强. Journal of Psychological Science (《心理科学》), 2013, 36(1): 203-209.
Variance component estimation is an essential technique in generalizability theory, but because the estimates are limited by sampling, their variability needs to be examined. Using Monte Carlo simulation, the effect of non-normal data distributions on four methods for estimating the variability of variance components in generalizability theory was examined. The results show: (1) under different non-normal distributions, the performance of the estimation methods differs; (2) the data distribution affects the estimation of the variability of variance components, and a method suited to non-normally distributed data is not necessarily suited to normally distributed data.

18.
Many robust regression estimators have been proposed that have a high, finite‐sample breakdown point, roughly meaning that a large proportion of points must be altered to drive the value of an estimator to infinity. But despite this, many of them can be inordinately influenced by two properly placed outliers. With one predictor, an estimator that appears to correct this problem to a fair degree, and simultaneously maintain good efficiency when standard assumptions are met, consists of checking for outliers using a projection‐type method, removing any that are found, and applying the Theil-Sen estimator to the data that remain. When dealing with multiple predictors, there are two generalizations of the Theil-Sen estimator that might be used, but nothing is known about how their small‐sample properties compare. Also, there are no results on testing the hypothesis of zero slopes, and there is no information about the effect on efficiency when outliers are removed. In terms of hypothesis testing, using the more obvious percentile bootstrap method in conjunction with a slight modification of Mahalanobis distance was found to avoid Type I error probabilities above the nominal level, but in some situations the actual Type I error probabilities can be substantially smaller than intended when the sample size is small. An alternative method is found to be more satisfactory.
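The Theil-Sen estimator at the core of the procedure is simple to state for one predictor; the projection-type outlier screen examined in the paper is omitted from this sketch:

```python
import numpy as np

def theil_sen(x, y):
    """Theil-Sen fit: slope = median of all pairwise slopes,
    intercept = median of the residuals y - slope * x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(n) for j in range(i + 1, n) if x[j] != x[i]]
    b1 = np.median(slopes)
    b0 = np.median(y - b1 * x)
    return b0, b1
```

Because the slope is a median over O(n²) pairwise slopes, a single wild point perturbs relatively few pairs and leaves the median essentially unchanged, which is the robustness the paper builds on.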

19.
In linear regression, the most appropriate standardized effect size for individual independent variables having an arbitrary metric remains open to debate, despite researchers typically reporting a standardized regression coefficient. Alternative standardized measures include the semipartial correlation, the improvement in the squared multiple correlation, and the squared partial correlation. No arguments based on either theoretical or statistical grounds for preferring one of these standardized measures have been mounted in the literature. Using a Monte Carlo simulation, the performance of interval estimators for these effect-size measures was compared in a 5-way factorial design. Formal statistical design methods assessed both the accuracy and robustness of the four interval estimators. The coverage probability of a large-sample confidence interval for the semipartial correlation coefficient derived from Aloe and Becker was highly accurate and robust in 98% of instances. It was better in small samples than the Yuan-Chan large-sample confidence interval for a standardized regression coefficient. It was also consistently better than both a bootstrap confidence interval for the improvement in the squared multiple correlation and a noncentral interval for the squared partial correlation.

20.
This paper focuses on the two‐parameter latent trait model for binary data. Although the prior distribution of the latent variable is usually assumed to be a standard normal distribution, that prior distribution can be estimated from the data as a discrete distribution using a combination of EM algorithms and other optimization methods. We assess with what precision we can estimate the prior from the data, using simulations and bootstrapping. A novel calibration method is given to check that near optimality is achieved for the bootstrap estimates. We find that there is sufficient information on the prior distribution to be informative, and that the bootstrap method is reliable. We illustrate the bootstrap method for two sets of real data.
