Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies.

2.
Chan W, Chan DW. Psychological Methods, 2004, 9(3): 369-385
The standard Pearson correlation coefficient is a biased estimator of the true population correlation, rho, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, rc, has been recommended, and a standard formula based on asymptotic results for estimating its standard error is also available. In the present study, the bootstrap standard-error estimate is proposed as an alternative. Monte Carlo simulation studies involving both normal and nonnormal data were conducted to examine the empirical performance of the proposed procedure under different levels of rho, selection ratio, sample size, and truncation types. Results indicated that, with normal data, the bootstrap standard-error estimate is more accurate than the traditional estimate, particularly with small sample size. With nonnormal data, performance of both estimates depends critically on the distribution type. Furthermore, the bootstrap bias-corrected and accelerated interval consistently provided the most accurate coverage probability for rho.
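As an illustration of the kind of procedure this entry evaluates, here is a minimal Python sketch (not the authors' code): the classic Thorndike Case 2 correction for direct range restriction, with a bootstrap standard error for the corrected correlation. The ratio `u` of unrestricted to restricted predictor standard deviations is assumed known; all function names are illustrative.

```python
import math
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def correct_for_restriction(r, u):
    """Thorndike Case 2: u = SD(x, unrestricted) / SD(x, restricted)."""
    return u * r / math.sqrt(u * u * r * r - r * r + 1.0)

def bootstrap_se(x, y, u, n_boot=1000, seed=1):
    """Bootstrap SE of the corrected correlation: SD over resampled rc values."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        r_b = pearson([x[i] for i in idx], [y[i] for i in idx])
        reps.append(correct_for_restriction(r_b, u))
    return statistics.stdev(reps)
```

With `u = 1` (no restriction) the correction leaves r unchanged; with `u > 1` it pushes the restricted-sample correlation back up toward the population value.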

3.
Spiess, Martin; Jordan, Pascal; Wendt, Mike. Psychometrika, 2019, 84(1): 212-235

In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a within-subjects task-switch experiment with 32 cells and 33 participants.


4.
Cross validation is a useful way of comparing predictive generalizability of theoretically plausible a priori models in structural equation modeling (SEM). A number of overall or local cross validation indices have been proposed for existing factor-based and component-based approaches to SEM, including covariance structure analysis and partial least squares path modeling. However, there is no such cross validation index available for generalized structured component analysis (GSCA) which is another component-based approach. We thus propose a cross validation index for GSCA, called Out-of-bag Prediction Error (OPE), which estimates the expected prediction error of a model over replications of so-called in-bag and out-of-bag samples constructed through the implementation of the bootstrap method. The calculation of this index is well-suited to the estimation procedure of GSCA, which uses the bootstrap method to obtain the standard errors or confidence intervals of parameter estimates. We empirically evaluate the performance of the proposed index through the analyses of both simulated and real data.
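GSCA itself is not reproduced here, but the in-bag/out-of-bag logic behind an OPE-style index can be sketched with ordinary simple regression standing in for the component model: each bootstrap resample is the in-bag set, the cases never drawn are out-of-bag, and the prediction error is averaged over the out-of-bag cases. All names are illustrative.

```python
import random
import statistics

def fit_slope_intercept(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (slope, intercept)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def oob_prediction_error(x, y, n_boot=500, seed=0):
    """Average squared error on out-of-bag cases over bootstrap replications."""
    rng = random.Random(seed)
    n = len(x)
    errors = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # in-bag indices (with replacement)
        in_bag = set(idx)
        oob = [i for i in range(n) if i not in in_bag]
        if not oob:                                   # extremely rare, but be safe
            continue
        b, a = fit_slope_intercept([x[i] for i in idx], [y[i] for i in idx])
        errors.append(statistics.fmean([(y[i] - (a + b * x[i])) ** 2 for i in oob]))
    return statistics.fmean(errors)
```

Because roughly 37% of cases are out-of-bag in each replication, the index estimates prediction error on data the fitted model never saw.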

5.
Estimating the Variability of Variance Components Based on Generalizability Theory
Li Guangming, Zhang Minqiang. Acta Psychologica Sinica, 2009, 41(9): 889-901
Generalizability theory is widely applied in psychological and educational measurement, and variance component estimation is the key step in any generalizability analysis. Because variance component estimates are subject to sampling, their variability needs to be examined. Using Monte Carlo simulation, this study compared the effects of different methods on estimating the variability of variance components in generalizability theory under normal distributions. Results showed that the jackknife method is not advisable for estimating this variability, and that, when the "divide-and-conquer" strategy of the bootstrap method is not adopted, the traditional method and the MCMC method with informative priors show clear overall advantages in estimating both variability measures, the standard error and the confidence interval.
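For readers unfamiliar with the quantities whose variability is at issue, here is a minimal sketch of ANOVA-based variance component estimation for a one-facet crossed p × i design, the starting point of a generalizability analysis (the jackknife/bootstrap/MCMC comparisons of the study are not reproduced):

```python
import statistics

def variance_components_pxi(scores):
    """One-facet crossed p x i design: scores[p][i] is person p's score on item i.
    Returns (var_person, var_item, var_residual) from the classic
    expected-mean-squares equations, truncating negative estimates at zero."""
    n_p, n_i = len(scores), len(scores[0])
    grand = statistics.fmean(v for row in scores for v in row)
    p_means = [statistics.fmean(row) for row in scores]
    i_means = [statistics.fmean(scores[p][i] for p in range(n_p)) for i in range(n_i)]
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_tot = sum((scores[p][i] - grand) ** 2 for p in range(n_p) for i in range(n_i))
    ss_res = ss_tot - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    return (max((ms_p - ms_res) / n_i, 0.0),   # E[MS_p] = var_res + n_i * var_p
            max((ms_i - ms_res) / n_p, 0.0),   # E[MS_i] = var_res + n_p * var_i
            ms_res)
```

Bootstrapping the variability of these components amounts to resampling persons (and/or items) and recomputing the three estimates on each resample.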

6.
Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias-corrected and accelerated bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user’s needs.
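One common way to compute such intervals, the Cousineau (2005) normalization with the Morey (2008) correction, which is closely related to the Loftus–Masson approach, can be sketched as follows. The critical t value is passed in by the caller rather than computed; everything else is plain arithmetic.

```python
import math
import statistics

def within_subject_cis(data, t_crit=2.0):
    """data[s][j]: score of subject s in condition j (fully within-subjects).
    Removes each subject's mean (adding back the grand mean), applies the
    Morey correction sqrt(J/(J-1)), and returns one (lower, upper) CI per
    condition. t_crit is the critical t for the desired confidence level."""
    n_sub, n_cond = len(data), len(data[0])
    grand = statistics.fmean(v for row in data for v in row)
    # Cousineau normalization: subtract subject mean, add grand mean
    norm = [[v - statistics.fmean(row) + grand for v in row] for row in data]
    correction = math.sqrt(n_cond / (n_cond - 1))
    cis = []
    for j in range(n_cond):
        col = [norm[s][j] for s in range(n_sub)]
        half = t_crit * correction * statistics.stdev(col) / math.sqrt(n_sub)
        m = statistics.fmean(col)
        cis.append((m - half, m + half))
    return cis
```

The normalization strips the between-subject variance that inflates ordinary intervals, so condition differences become visible in the error bars.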

7.
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate normality. For nonlinear PCA, however, standard options for establishing stability are not provided. The authors use the nonparametric bootstrap procedure to assess the stability of nonlinear PCA results, applied to empirical data. They use confidence intervals for the variable transformations and confidence ellipses for the eigenvalues, the component loadings, and the person scores. They discuss the balanced version of the bootstrap, bias estimation, and Procrustes rotation. To provide a benchmark, the same bootstrap procedure is applied to linear PCA on the same data. On the basis of the results, the authors advise using at least 1,000 bootstrap samples, using Procrustes rotation on the bootstrap results, examining the bootstrap distributions along with the confidence regions, and merging categories with small marginal frequencies to reduce the variance of the bootstrap results.

8.
A robust approach for the analysis of experiments with ordered treatment levels is presented as an alternative to existing approaches such as the parametric Abelson-Tukey test for monotone alternatives and the nonparametric Terpstra-Jonckheere test. The method integrates the familiar Spearman rank-order correlation with bootstrap routines to provide magnitude of association measures, p values, and confidence intervals for magnitude of association measures. The advantages of this method relative to five alternative approaches are pointed out.
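The core combination this entry describes, Spearman's rank-order correlation plus a percentile bootstrap confidence interval for the magnitude of association, can be sketched in pure Python (the ordered-alternatives test itself is not reproduced; midranks handle ties):

```python
import math
import random

def ranks(v):
    """Midranks: average ranks for tied values."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: Pearson correlation of the midranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

def spearman_boot_ci(x, y, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval for Spearman rho."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(spearman([x[i] for i in idx], [y[i] for i in idx]))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```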

9.
Confidence intervals for an effect size can provide the information about the magnitude of an effect and its precision as well as the binary decision about the existence of an effect. In this study, the performances of five different methods for constructing confidence intervals for ratio effect size measures of an indirect effect were compared in terms of power, coverage rates, Type I error rates, and widths of confidence intervals. The five methods include the percentile bootstrap method, the bias-corrected and accelerated (BCa) bootstrap method, the delta method, the Fieller method, and the Monte Carlo method. The results were discussed with respect to the adequacy of the distributional assumptions and the nature of the ratio quantity. The confidence intervals from the five methods showed similar results for samples of more than 500, whereas, for samples of less than 500, the confidence intervals were sufficiently narrow to convey the information about the population effect sizes only when the effect sizes of regression coefficients defining the indirect effect are large.
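Of the five methods compared, the Monte Carlo method is the simplest to sketch: draw the coefficients from their estimated sampling distributions and take percentiles of the derived quantity. The sketch below does this for the indirect effect a·b itself; the ratio measures in the entry work the same way once every coefficient entering the ratio is drawn. Independent normal sampling distributions are an assumption of the illustration.

```python
import random

def monte_carlo_ci(a_hat, se_a, b_hat, se_b, n_draws=20000, alpha=0.05, seed=3):
    """Monte Carlo (parametric simulation) interval for the indirect effect a*b,
    drawing a and b from independent normal sampling distributions."""
    rng = random.Random(seed)
    draws = sorted(rng.gauss(a_hat, se_a) * rng.gauss(b_hat, se_b)
                   for _ in range(n_draws))
    return (draws[int(alpha / 2 * n_draws)],
            draws[int((1 - alpha / 2) * n_draws) - 1])
```

Unlike the delta method, this interval is allowed to be asymmetric, which matters because the product of two normal variables is not itself normal.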

10.
Li Guangming, Zhang Minqiang. Journal of Psychological Science, 2013, 36(1): 203-209
Variance component estimation is an indispensable technique in generalizability theory, but because the estimates are subject to sampling, their variability needs to be examined. Using Monte Carlo simulation, this study explored how non-normal data distributions affect four methods of estimating the variability of variance components in generalizability theory. Results showed that: (1) the performance of the estimation methods differs across non-normal distributions; and (2) the data distribution affects the estimation of variance component variability, so a method suited to non-normally distributed data is not necessarily suited to normally distributed data.

11.
The psychometric function relates an observer’s performance to an independent variable, usually a physical quantity of an experimental stimulus. Even if a model is successfully fit to the data and its goodness of fit is acceptable, experimenters require an estimate of the variability of the parameters to assess whether differences across conditions are significant. Accurate estimates of variability are difficult to obtain, however, given the typically small size of psychophysical data sets: Traditional statistical techniques are only asymptotically correct and can be shown to be unreliable in some common situations. Here and in our companion paper (Wichmann & Hill, 2001), we suggest alternative statistical techniques based on Monte Carlo resampling methods. The present paper’s principal topic is the estimation of the variability of fitted parameters and derived quantities, such as thresholds and slopes. First, we outline the basic bootstrap procedure and argue in favor of the parametric, as opposed to the nonparametric, bootstrap. Second, we describe how the bootstrap bridging assumption, on which the validity of the procedure depends, can be tested. Third, we show how one’s choice of sampling scheme (the placement of sample points on the stimulus axis) strongly affects the reliability of bootstrap confidence intervals, and we make recommendations on how to sample the psychometric function efficiently. Fourth, we show that, under certain circumstances, the (arbitrary) choice of the distribution function can exert an unwanted influence on the size of the bootstrap confidence intervals obtained, and we make recommendations on how to avoid this influence. Finally, we introduce improved confidence intervals (bias corrected and accelerated) that improve on the parametric and percentile-based bootstrap confidence intervals previously used. Software implementing our methods is available.
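A condensed sketch of the parametric bootstrap the paper advocates, using a logistic psychometric function and a coarse grid-search fit in place of a proper optimizer (the grid ranges are assumptions for illustration, not the authors' software):

```python
import math
import random

def logistic(x, m, s):
    """Psychometric function: probability of success at stimulus level x."""
    return 1.0 / (1.0 + math.exp(-(x - m) / s))

def fit_threshold(levels, n_trials, n_correct, m_grid, s_grid):
    """Maximum-likelihood fit by exhaustive grid search; returns (threshold m, spread s)."""
    best, best_nll = None, float("inf")
    for m in m_grid:
        for s in s_grid:
            nll = 0.0
            for x, n, k in zip(levels, n_trials, n_correct):
                p = min(max(logistic(x, m, s), 1e-9), 1.0 - 1e-9)
                nll -= k * math.log(p) + (n - k) * math.log(1.0 - p)
            if nll < best_nll:
                best, best_nll = (m, s), nll
    return best

def parametric_boot_threshold_ci(levels, n_trials, n_correct,
                                 n_boot=200, alpha=0.05, seed=11):
    """Parametric bootstrap: simulate binomial data from the fitted function,
    refit each simulated data set, and take percentiles of the thresholds."""
    m_grid = [i / 10 for i in range(-20, 21)]   # assumed plausible ranges
    s_grid = [i / 10 for i in range(2, 21)]
    m_hat, s_hat = fit_threshold(levels, n_trials, n_correct, m_grid, s_grid)
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sim = [sum(rng.random() < logistic(x, m_hat, s_hat) for _ in range(n))
               for x, n in zip(levels, n_trials)]
        reps.append(fit_threshold(levels, n_trials, sim, m_grid, s_grid)[0])
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

Being parametric, the simulated data come from the fitted model rather than from resampling the raw trials, which is what makes small psychophysical data sets tractable.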

13.
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. The authors describe a nonparametric bootstrap methodology that can provide improved Type I error control. In addition, the authors indicate how researchers can set robust confidence intervals around a robust effect size parameter estimate. In an online supplement, the authors use several examples to illustrate the application of an SAS program to implement these statistical methods.

14.
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable. The EFA model is specified for these underlying continuous variables rather than the observed ordinal variables. Although these underlying continuous variables are not observed directly, their correlations can be estimated from the ordinal variables. These correlations are referred to as polychoric correlations. This article is concerned with ordinary least squares (OLS) estimation of parameters in EFA with polychoric correlations. Standard errors and confidence intervals for rotated factor loadings and factor correlations are presented. OLS estimates and the associated standard error estimates and confidence intervals are illustrated using personality trait ratings from 228 college students. Statistical properties of the proposed procedure are explored using a Monte Carlo study. The empirical illustration and the Monte Carlo study showed that (a) OLS estimation of EFA is feasible with large models, (b) point estimates of rotated factor loadings are unbiased, (c) point estimates of factor correlations are slightly negatively biased with small samples, and (d) standard error estimates and confidence intervals perform satisfactorily at moderately large samples.

15.
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods.

This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected and accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx.

Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
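The percentile bootstrap that emerged as a well-performing method can be sketched for the observed-variable case (the latent-variable models in the study would require an SEM fit at each resample; this OLS version only illustrates the resampling logic):

```python
import random
import statistics

def indirect_effect(x, m, y):
    """a*b from OLS: a = slope of m on x; b = partial slope of y on m given x."""
    mx, mm, my = (statistics.fmean(v) for v in (x, m, y))
    cx = [v - mx for v in x]
    cm = [v - mm for v in m]
    cy = [v - my for v in y]
    sxx = sum(v * v for v in cx)
    smm = sum(v * v for v in cm)
    sxm = sum(a * b for a, b in zip(cx, cm))
    sxy = sum(a * b for a, b in zip(cx, cy))
    smy = sum(a * b for a, b in zip(cm, cy))
    a = sxm / sxx
    # partial coefficient of m from the two-predictor normal equations
    b = (smy * sxx - sxy * sxm) / (smm * sxx - sxm * sxm)
    return a * b

def percentile_boot_ci(x, m, y, n_boot=2000, alpha=0.05, seed=5):
    """Percentile bootstrap CI for the indirect effect a*b (resample cases)."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(indirect_effect([x[i] for i in idx],
                                    [m[i] for i in idx],
                                    [y[i] for i in idx]))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

The inference rule is simply whether the interval excludes zero; no normality of a·b is assumed.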

16.
SIGNIFICANCE TESTS HAVE THEIR PLACE
Abstract— Null-hypothesis significance tests (NHST), properly used, tell us whether we have sufficient evidence to be confident of the sign of the population effect—but only if we abandon two-valued logic in favor of Kaiser's (1960) three-alternative hypothesis tests. Confidence intervals provide a useful addition to NHSTs, and can be used to provide the same sign-determination function as NHST. However, when so used, confidence intervals are subject to exactly the same Type I, II, and III error rates as NHST. In addition, NHSTs provide two pieces of information about our data—maximum probability of a Type III error and probability of a successful exact replication—that confidence intervals do not. The proposed alternative to NHST is just as susceptible to misinterpretation as is NHST. The problem of bias due to censoring of data collection or publication can be handled by providing archives for all methodologically sound data sets, but reserving interpretations and conclusions for statistically significant results.

17.
Confidence intervals for the parameters of psychometric functions
A Monte Carlo method for computing the bias and standard deviation of estimates of the parameters of a psychometric function such as the Weibull/Quick is described. The method, based on Efron's parametric bootstrap, can also be used to estimate confidence intervals for these parameters. The method's ability to predict bias, standard deviation, and confidence intervals is evaluated in two ways. First, its predictions are compared to the outcomes of Monte Carlo simulations of psychophysical experiments. Second, its predicted confidence intervals are compared with the actual variability of human observers in a psychophysical task. Computer programs implementing the method are available from the author.

18.
Earlier research has shown that bootstrap confidence intervals from principal component loadings give a good coverage of the population loadings. However, this only applies to complete data. When data are incomplete, missing data have to be handled before analysing the data. Multiple imputation may be used for this purpose. The question is how bootstrap confidence intervals for principal component loadings should be corrected for multiply imputed data. In this paper, several solutions are proposed. Simulations show that the proposed corrections for multiply imputed data give a good coverage of the population loadings in various situations.

19.
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of SE-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and hybrid intervals are explored using simulation studies involving different sample sizes, perfect and imperfect models, and normal and elliptical data. The bootstrap confidence intervals are also illustrated using a personality data set of 537 Chinese men. The results suggest that the bootstrap is an effective method for assigning confidence intervals at moderately large sample sizes.

20.
Objectives: The purpose of the present paper is to provide a primer on the understanding of meta-analysis.
Design and method: After presenting the rationale behind meta-analysis, the present paper defines statistical artifacts of sampling error and measurement.
Findings: Examples show that statistical artifacts influence the correlation coefficient. The paper also explains the notions of confidence intervals and credibility intervals and how correlations corrected for sampling error and measurement error are calculated.
Conclusions: The paper concludes by explaining the notion of second-order sampling error and moderator meta-analysis.
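The corrections such a primer typically covers can be sketched in a few lines: disattenuation of a correlation for measurement error, and a Hunter–Schmidt-style bare-bones aggregation with a credibility interval. The sampling-error variance uses the common average-sample-size simplification; this is an illustrative sketch, not the paper's own formulas.

```python
import math

def disattenuate(r, rxx, ryy):
    """Correct an observed correlation for measurement error in x and y,
    given the two reliabilities rxx and ryy."""
    return r / math.sqrt(rxx * ryy)

def bare_bones_meta(rs, ns):
    """Bare-bones meta-analysis of correlations rs with sample sizes ns:
    sample-size-weighted mean r, and a 95% credibility interval based on the
    variance remaining after sampling error is subtracted out."""
    k = len(rs)
    n_bar = sum(ns) / k
    n_tot = sum(ns)
    r_bar = sum(r * n for r, n in zip(rs, ns)) / n_tot
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / n_tot
    var_err = (1 - r_bar ** 2) ** 2 / (n_bar - 1)   # expected sampling-error variance
    var_rho = max(var_obs - var_err, 0.0)           # residual ("true") variance
    half = 1.96 * math.sqrt(var_rho)
    return r_bar, (r_bar - half, r_bar + half)
```

A credibility interval of near-zero width (all observed variance explained by sampling error) is the classic signal that no moderator search is needed.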

