Similar articles
 20 similar articles found (search time: 62 ms)
1.
2.
The standard Pearson correlation coefficient, r, is a biased estimator of the population correlation coefficient, ρ_XY, when predictor X and criterion Y are indirectly range-restricted by a third variable Z (or S). Two correction algorithms, Thorndike's (1949) Case III and Schmidt, Oh, and Le's (2006) Case IV, have been proposed to correct for the bias. However, to our knowledge, neither algorithm provides a procedure for estimating the associated standard error and confidence intervals. This paper suggests using the bootstrap procedure as an alternative. Two Monte Carlo simulations were conducted to systematically evaluate the empirical performance of the proposed bootstrap procedure. The results indicated that the bootstrap standard error and confidence intervals were generally accurate across simulation conditions (e.g., selection ratio, sample size). The proposed bootstrap procedure can therefore provide a useful alternative for estimating the standard error and confidence intervals of a correlation corrected for indirect range restriction.
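A minimal sketch of the idea: resample the restricted sample, apply the Case III correction to each resample, and read off a bootstrap standard error and percentile interval. The Case III formula is written in its commonly cited form with u = S_Z/s_z (unrestricted over restricted standard deviation of the selection variable); the function names, the 2,000 resamples, and the percentile interval are illustrative assumptions, not necessarily the authors' exact setup.

```python
import numpy as np

def case_iii_correction(x, y, z, sz_unrestricted):
    """Thorndike Case III correction for indirect range restriction on z (common form)."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    u2 = (sz_unrestricted / np.std(z, ddof=1)) ** 2          # squared SD ratio for z
    num = rxy + rxz * ryz * (u2 - 1.0)
    den = np.sqrt((1.0 + rxz**2 * (u2 - 1.0)) * (1.0 + ryz**2 * (u2 - 1.0)))
    return num / den

def bootstrap_se_ci(x, y, z, sz_unrestricted, n_boot=2000, conf=0.95, seed=0):
    """Bootstrap SE and percentile CI for the corrected correlation."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                           # resample cases with replacement
        reps[b] = case_iii_correction(x[idx], y[idx], z[idx], sz_unrestricted)
    tail = (1 - conf) / 2
    lo, hi = np.percentile(reps, [100 * tail, 100 * (1 - tail)])
    return reps.std(ddof=1), (lo, hi)
```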

3.
A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. Previous work examined bootstrap standard errors and confidence intervals for the corrected correlation with normal data. The present study proposes a large-sample (analytic) estimate of the standard error, and a corresponding confidence interval, for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed the analytic estimates. However, for certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the computer-intensive bootstrap procedure. We provide SAS code for the simulation studies.

4.
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods for testing indirect effects among latent variables or provided precise estimates of the relative effectiveness of different methods.

This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, the bias-corrected (BC) bootstrap, the bias-corrected and accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx.

Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as the methods that adequately control Type I error and have good coverage rates.
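For readers unfamiliar with the percentile (PC) bootstrap for an indirect effect, the sketch below shows the core resampling logic using ordinary regressions of observed variables. It is a simplified stand-in for the latent-variable models fitted with R and OpenMx in the study; all names, the resample count, and the observed-variable simplification are assumptions for illustration.

```python
import numpy as np

def indirect_effect(x, m, y):
    """alpha * beta from two regressions: m on x, and y on x and m."""
    a = np.polyfit(x, m, 1)[0]                         # slope of m on x
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]   # partial slope of y on m
    return a * b

def pc_bootstrap_ci(x, m, y, n_boot=2000, conf=0.95, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for r in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample cases with replacement
        reps[r] = indirect_effect(x[idx], m[idx], y[idx])
    tail = (1 - conf) / 2
    return tuple(np.percentile(reps, [100 * tail, 100 * (1 - tail)]))
```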

5.
Intraclass correlation and Cronbach's alpha are widely used to describe the reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large-sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing the sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds. An earlier version of this paper was submitted in partial fulfillment of the requirements for the M.S. in Biostatistics, and also summarized in a presentation at the meetings of the Eastern North American Region of the International Biometric Society in March 2001. Kistner's work was supported in part by NIEHS training grant ES07018-24 and NCI program project grant P01 CA47 982-04. She gratefully acknowledges the inspiration of A. Calandra's "Scoring formulas and probability considerations" (Psychometrika, 6, 1–9). Muller's work was supported in part by NCI program project grant P01 CA47 982-04.
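The exact weighted chi-square computation is not reproduced here, but a brute-force Monte Carlo stand-in conveys the idea: given an assumed Gaussian covariance matrix (compound symmetric or not), simulate samples, recompute coefficient alpha each time, and use the empirical distribution for probabilities or interval endpoints. Everything below, including names and the number of simulations, is illustrative rather than the paper's analytic method.

```python
import numpy as np

def cronbach_alpha(data):
    """Coefficient alpha for an (n persons x k items) score matrix."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def alpha_sampling_distribution(sigma, n, n_sim=10000, seed=0):
    """Monte Carlo approximation to the sampling distribution of alpha-hat
    for Gaussian data with covariance matrix sigma and sample size n."""
    rng = np.random.default_rng(seed)
    k = sigma.shape[0]
    draws = np.empty(n_sim)
    for s in range(n_sim):
        sample = rng.multivariate_normal(np.zeros(k), sigma, size=n)
        draws[s] = cronbach_alpha(sample)
    return draws   # e.g. np.percentile(draws, [2.5, 97.5]) for interval endpoints
```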

6.
In educational and psychological measurement when short test forms are used, the asymptotic normality of the maximum likelihood estimator of the person parameter of item response models does not hold. As a result, hypothesis tests or confidence intervals of the person parameter based on the normal distribution are likely to be problematic. Inferences based on the exact distribution, on the other hand, do not suffer from this limitation. However, the computation involved for the exact distribution approach is often prohibitively expensive. In this paper, we propose a general framework for constructing hypothesis tests and confidence intervals for IRT models within the exponential family based on exact distribution. In addition, an efficient branch and bound algorithm for calculating the exact p value is introduced. The type-I error rate and statistical power of the proposed exact test as well as the coverage rate and the lengths of the associated confidence interval are examined through a simulation. We also demonstrate its practical use by analyzing three real data sets.  相似文献   
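The branch-and-bound algorithm itself is not shown here, but for the Rasch model (an exponential-family model whose raw score is the sufficient statistic for the person parameter) the exact score distribution, and hence an exact p-value for H0: θ = θ0, can be obtained by a simple convolution over items. This brute-force sketch uses illustrative names and one simple two-tailed p-value convention, which may differ from the paper's definition.

```python
import numpy as np

def rasch_score_distribution(theta, difficulties):
    """Exact distribution of the raw score under the Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(theta - np.asarray(difficulties))))
    dist = np.array([1.0])                     # P(score = 0) before any item
    for pi in p:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1.0 - pi)          # item answered incorrectly
        new[1:] += dist * pi                   # item answered correctly
        dist = new
    return dist                                # dist[s] = P(score = s | theta)

def exact_p_value(observed_score, theta0, difficulties):
    """Two-tailed exact p-value for H0: theta = theta0 (doubled-tail convention)."""
    dist = rasch_score_distribution(theta0, difficulties)
    lower = dist[:observed_score + 1].sum()    # P(score <= observed)
    upper = dist[observed_score:].sum()        # P(score >= observed)
    return min(1.0, 2.0 * min(lower, upper))
```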

7.
This research concerns the estimation of polychoric correlations in the context of fitting structural equation models to observed ordinal variables by multistage estimation. The first main contribution of this research is to propose and evaluate a Monte Carlo estimator for the asymptotic covariance matrix (ACM) of the polychoric correlation estimates. In multistage estimation, the ACM plays a prominent role, as overall test statistics, derived fit indices, and parameter standard errors all depend on this quantity. The ACM, however, must itself be estimated. Established approaches to estimating the ACM use a sample-based version, which can yield poor estimates with small samples. A simulation study demonstrates that the proposed Monte Carlo estimator can be more efficient than its sample-based counterpart. This leads to better calibration for established test statistics, in particular with small samples. The second main contribution of this research is a further exploration of the consequences of violating the normality assumption for the underlying response variables. We show the consequences depend on the type of nonnormality, and the number and location of thresholds. The simulation study also demonstrates that overall test statistics have little power to detect the studied forms of nonnormality, regardless of the ACM estimator.

8.
A Monte Carlo experiment is conducted to investigate the performance of bootstrap methods in normal-theory maximum likelihood factor analysis, both when the distributional assumption is satisfied and when it is violated. The parameters and functions of parameters of interest include unrotated loadings, analytically rotated loadings, and unique variances. The results reveal that (a) bootstrap bias estimation sometimes performs poorly for factor loadings and nonstandardized unique variances; (b) bootstrap variance estimation performs well even when the distributional assumption is violated; (c) bootstrap confidence intervals based on the Studentized statistics are recommended; and (d) if the structural hypothesis about the population covariance matrix is taken into account, then the bootstrap distribution of the normal-theory likelihood ratio test statistic is close to the corresponding sampling distribution, with a slightly heavier right tail. This study was carried out in part under the ISM cooperative research program (91-ISM · CRP-85, 92-ISM · CRP-102). The authors would like to thank the editor and three reviewers for their helpful comments and suggestions, which improved the quality of this paper considerably.

9.
In an effort to find accurate alternatives to the usual confidence intervals based on normal approximations, this paper compares four methods of generating second-order accurate confidence intervals for non-standardized and standardized communalities in exploratory factor analysis under the normality assumption. The methods to generate the intervals employ, respectively, the Cornish–Fisher expansion and the approximate bootstrap confidence (ABC), and the bootstrap-t and the bias-corrected and accelerated bootstrap (BCa). The former two are analytical and the latter two are numerical. Explicit expressions of the asymptotic bias and skewness of the communality estimators, used in the analytical methods, are derived. A Monte Carlo experiment reveals that the performance of central intervals based on normal approximations is a consequence of imbalance of miscoverage on the left- and right-hand sides. The second-order accurate intervals do not require symmetry around the point estimates of the usual intervals and achieve better balance, even when the sample size is not large. The behaviours of the second-order accurate intervals were similar to each other, particularly for large sample sizes, and no method performed consistently better than the others.

10.
A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods to test indirect effects for a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, where the indirect effect is a fixed effect and a group-implemented treatment is hypothesized to impact a person-level outcome via a person-level mediator. Two methods—the bias-corrected parametric percentile bootstrap and the empirical-M test—had the best overall performance. Methods designed for nonnormal score distributions exhibited elevated Type I error rates and poorer confidence interval coverage under some conditions. Although preliminary, the findings suggest that new mediation analysis methods may provide for robust tests of indirect effects.

11.
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality, what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the population sampled has the covariance structure assumed. Commonly used covariance structure analysis software uses parametric methods for estimating parameters and standard errors. When the population sampled has the covariance structure assumed, but fails to have the distributional form assumed, the parameter estimates usually remain consistent, but the standard error estimates do not. This has motivated the introduction of a variety of nonparametric standard error estimates that are consistent when the population sampled fails to have the distributional form assumed. The only distributional assumption these require is that the covariance structure be correctly specified. As noted, even this assumption is not required for the infinitesimal jackknife. The relation between the infinitesimal jackknife and other nonparametric standard error estimators is discussed. An advantage of the infinitesimal jackknife over the jackknife and the bootstrap is that it requires only one analysis to produce standard error estimates rather than one for every jackknife or bootstrap sample.
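The covariance-structure case requires model derivatives, but a one-parameter analogue shows the mechanics: plug empirical influence values into the infinitesimal jackknife variance formula. Below this is done for the Pearson correlation, whose influence function is the standard z_x z_y − (r/2)(z_x² + z_y²); this is a sketch of the general idea, not the paper's estimator, which operates on structural parameter estimates.

```python
import numpy as np

def ij_se_correlation(x, y):
    """Infinitesimal jackknife standard error for the Pearson correlation."""
    n = len(x)
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    psi = zx * zy - 0.5 * r * (zx**2 + zy**2)   # empirical influence values
    return np.sqrt(np.sum(psi**2)) / n          # IJ variance = sum(psi^2) / n^2
```

Only a single pass over the data is needed, which is the advantage over the jackknife and the bootstrap noted in the abstract.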

12.
Cross validation is a useful way of comparing the predictive generalizability of theoretically plausible a priori models in structural equation modeling (SEM). A number of overall or local cross validation indices have been proposed for existing factor-based and component-based approaches to SEM, including covariance structure analysis and partial least squares path modeling. However, no such cross validation index is available for generalized structured component analysis (GSCA), another component-based approach. We thus propose a cross validation index for GSCA, called the Out-of-bag Prediction Error (OPE), which estimates the expected prediction error of a model over replications of so-called in-bag and out-of-bag samples constructed via the bootstrap method. The calculation of this index is well suited to the estimation procedure of GSCA, which uses the bootstrap method to obtain the standard errors or confidence intervals of parameter estimates. We empirically evaluate the performance of the proposed index through analyses of both simulated and real data.
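The OPE index is tied to GSCA's bootstrap machinery, but its core logic is generic: fit on an in-bag (bootstrap) sample, predict the out-of-bag cases, and average the prediction error over replications. The sketch below uses plain least-squares regression as a stand-in for a GSCA model; the function names, squared-error loss, and replication count are illustrative assumptions.

```python
import numpy as np

def out_of_bag_prediction_error(X, y, n_boot=200, seed=0):
    """Average squared prediction error on out-of-bag cases over bootstrap fits."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])        # add intercept to the design
    errors = []
    for _ in range(n_boot):
        in_bag = rng.integers(0, n, n)           # bootstrap (in-bag) indices
        out_bag = np.setdiff1d(np.arange(n), in_bag)
        if out_bag.size == 0:
            continue                             # rare: every case drawn in-bag
        beta = np.linalg.lstsq(Xd[in_bag], y[in_bag], rcond=None)[0]
        resid = y[out_bag] - Xd[out_bag] @ beta  # predict held-out cases
        errors.append(np.mean(resid**2))
    return float(np.mean(errors))
```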

13.
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate normality. For nonlinear PCA, however, standard options for establishing stability are not provided. The authors use the nonparametric bootstrap procedure to assess the stability of nonlinear PCA results, applied to empirical data. They use confidence intervals for the variable transformations and confidence ellipses for the eigenvalues, the component loadings, and the person scores. They discuss the balanced version of the bootstrap, bias estimation, and Procrustes rotation. To provide a benchmark, the same bootstrap procedure is applied to linear PCA on the same data. On the basis of the results, the authors advise using at least 1,000 bootstrap samples, using Procrustes rotation on the bootstrap results, examining the bootstrap distributions along with the confidence regions, and merging categories with small marginal frequencies to reduce the variance of the bootstrap results.
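The optimal-scaling transformations of nonlinear PCA are beyond a short sketch, but the bootstrap-plus-Procrustes logic the authors recommend can be illustrated with the linear PCA benchmark: each bootstrap loading matrix is rotated toward the original solution before confidence regions are formed. Orthogonal Procrustes is solved via SVD; all names and the 1,000 resamples follow the advice quoted above, but the implementation details are assumptions.

```python
import numpy as np

def pca_loadings(data, n_comp):
    """Component loadings: eigenvectors of R scaled by root eigenvalues."""
    R = np.corrcoef(data, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:n_comp]
    return vecs[:, order] * np.sqrt(vals[order])

def procrustes_rotate(loadings, target):
    """Orthogonal rotation of `loadings` toward `target` (minimum Frobenius distance)."""
    u, _, vt = np.linalg.svd(loadings.T @ target)
    return loadings @ (u @ vt)

def bootstrap_loadings(data, n_comp, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    target = pca_loadings(data, n_comp)
    reps = np.empty((n_boot,) + target.shape)
    for b in range(n_boot):
        idx = rng.integers(0, data.shape[0], data.shape[0])
        reps[b] = procrustes_rotate(pca_loadings(data[idx], n_comp), target)
    return target, reps   # use reps for percentile confidence regions per loading
```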

14.
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z′ under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family of distributions to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals for the correlation in non-normal data, at least as compared to no adjustment of the Fisher z′ interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code.
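For reference, the default Fisher z′ interval being adjusted, and the nonparametric percentile bootstrap it is compared against, can each be written in a few lines. The skewness- and kurtosis-based adjustment from the paper is not reproduced here; names and the resample count are illustrative.

```python
import numpy as np
from scipy import stats

def fisher_z_ci(r, n, conf=0.95):
    """Default Fisher z' confidence interval for a Pearson correlation."""
    half = stats.norm.ppf(0.5 + conf / 2) / np.sqrt(n - 3)
    z = np.arctanh(r)
    return float(np.tanh(z - half)), float(np.tanh(z + half))

def percentile_bootstrap_ci(x, y, conf=0.95, n_boot=2000, seed=0):
    """Nonparametric percentile bootstrap interval for the correlation."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    tail = (1 - conf) / 2
    return tuple(np.percentile(reps, [100 * tail, 100 * (1 - tail)]))
```

The Fisher interval needs only r and n, which is the same property that makes the paper's adjusted method attractive for meta-analysis when raw data are unavailable.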

15.
Several general correlation patterns that yield exact F tests in an ANOVA procedure with dependent observations are presented in this paper. The paper gives the most general correlation patterns one can assume in one-way and two-way layouts and still have valid F tests. Exact F tests are given for various designs, including the unbalanced ANOVA design, analysis of covariance, random effects models, and mixed models. Bartlett's test for homogeneity of variances is shown to be exact when the independence assumption is relaxed. An example is provided to illustrate how such general correlation structures can arise in an experimental design.

16.
In applied contexts, short time-series designs are suitable for evaluating a treatment effect. These designs present serious problems given the autocorrelation among the data and the small number of observations involved. This paper describes analytic procedures that have been applied to data from short time series, and an alternative: a new version of the generalized least squares method that simplifies estimation of the error covariance matrix. Using the results of a simulation study and assuming a stationary first-order autoregressive model, it is proposed that the original observations and the design matrix be transformed by means of the square root or Cholesky factor of the inverse of the covariance matrix. This provides a solution to the problem of estimating the parameters of the error covariance matrix. Finally, the results of the simulation study obtained using the proposed generalized least squares method are compared with those obtained with the ordinary least squares approach. The probability of Type I error associated with the proposed method is close to the nominal value for all values of ρ1 and n investigated, especially for positive values of ρ1. The proposed generalized least squares method corrects the effect of autocorrelation on the test's power.
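A minimal version of the proposed transformation, assuming a known (or previously estimated) first-order autocorrelation ρ1: build the AR(1) error covariance, take its Cholesky factor, and pre-multiply the design matrix and observations by the factor's inverse (done here with a solve), after which ordinary least squares applies. Estimation of ρ1 itself is not shown, and the names are illustrative.

```python
import numpy as np

def ar1_gls(X, y, rho):
    """GLS for a short time series with AR(1) errors via the Cholesky transform.
    X is the (n x p) design matrix (including an intercept column), y the series."""
    n = len(y)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    sigma = rho ** lags                       # AR(1) correlation matrix, unit variance
    L = np.linalg.cholesky(sigma)
    X_star = np.linalg.solve(L, X)            # equivalent to multiplying by inv(L)
    y_star = np.linalg.solve(L, y)
    beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta                               # OLS on the transformed data = GLS estimate
```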

17.
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. The authors describe a nonparametric bootstrap methodology that can provide improved Type I error control. In addition, the authors indicate how researchers can set robust confidence intervals around a robust effect size parameter estimate. In an online supplement, the authors use several examples to illustrate the application of an SAS program to implement these statistical methods.
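The two-independent-group instance of this framework is a Yuen-type trimmed-means test with Welch-style degrees of freedom, which recent versions of SciPy expose directly; a percentile bootstrap of the trimmed-mean difference, in the spirit of the bootstrap methodology described, is also sketched. The 20% trim, resample count, and names are illustrative assumptions, not the article's SAS implementation.

```python
import numpy as np
from scipy import stats

def robust_two_group_test(a, b, trim=0.2, n_boot=2000, seed=0):
    """Trimmed-means heteroscedastic test plus a percentile bootstrap CI."""
    # Yuen-type trimmed, unequal-variance t test (SciPy >= 1.7 supports `trim`).
    t_stat, p_val = stats.ttest_ind(a, b, equal_var=False, trim=trim)

    # Percentile bootstrap CI for the difference in trimmed means.
    rng = np.random.default_rng(seed)
    diffs = np.array([
        stats.trim_mean(rng.choice(a, len(a)), trim)
        - stats.trim_mean(rng.choice(b, len(b)), trim)
        for _ in range(n_boot)
    ])
    return t_stat, p_val, tuple(np.percentile(diffs, [2.5, 97.5]))
```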

18.
Commonly used formulae for standard error (SE) estimates in covariance structure analysis are derived under the assumption of a correctly specified model. In practice, a model is at best only an approximation to the real world. It is important to know whether the estimates of SEs as provided by standard software are consistent when a model is misspecified and, if not, to understand why. Bootstrap procedures provide nonparametric estimates of SEs that automatically account for distributional violations. It is also necessary to know whether bootstrap estimates of SEs are consistent. This paper studies the relationship between the bootstrap estimates of SEs and those based on asymptotics. Examples are used to illustrate various versions of asymptotic variance–covariance matrices and their validity. Conditions for the consistency of the bootstrap estimates of SEs are identified and discussed. Numerical examples are provided to illustrate the relationship of different estimates of SEs and covariance matrices.

19.
This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals is described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate, in a separate Monte Carlo simulation, the superiority of this estimate compared with previous derivations of the ASE. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming code for easy implementation.
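A sketch of a normal-theory large-sample standard error for coefficient alpha, using the covariance-matrix expression commonly cited in this literature (it reduces to 2k(1 − α)²/((k − 1)n) under compound symmetry). This is offered as an illustration under that assumption; readers should verify it against the paper's own derivation, and the symmetric normal-approximation interval shown is only one option.

```python
import numpy as np
from scipy import stats

def alpha_with_ase(data, conf=0.95):
    """Coefficient alpha, its large-sample standard error, and a normal-theory CI
    for an (n persons x k items) score matrix."""
    n, k = data.shape
    V = np.cov(data, rowvar=False)                 # item covariance matrix
    j = np.ones(k)
    jVj = j @ V @ j                                # variance of the total score
    alpha = k / (k - 1) * (1.0 - np.trace(V) / jVj)
    q = (2 * k**2 / (k - 1)**2) * (
        jVj * (np.trace(V @ V) + np.trace(V)**2) - 2 * np.trace(V) * (j @ V @ V @ j)
    ) / jVj**3
    ase = np.sqrt(q / n)
    z = stats.norm.ppf(0.5 + conf / 2)
    return alpha, ase, (alpha - z * ase, alpha + z * ase)
```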

20.
This paper is a presentation of an essential part of the sampling theory of the error variance and the standard error of measurement. An experimental assumption is that several equivalent tests with equal variances are available. These may be either final forms of the same test or obtained by dividing one test into several parts. The simple model of independent and normally distributed errors of measurement with zero mean is employed. No assumption is made about the form of the distributions of true and observed scores. This implies unrestricted freedom in defining the population. First, maximum-likelihood estimators of the error variance and the standard error of measurement are obtained, their sampling distributions given, and their properties investigated. Then unbiased estimators are defined and their distributions derived. The accuracy of estimation is given special consideration from various points of view. Next, rigorous statistical tests are developed to test hypotheses about error variances on the basis of one and two samples. Also the construction of confidence intervals is treated. Finally, Bartlett's test of homogeneity of variances is used to provide a multi-sample test of equality of error variances.
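In modern notation, the setup can be illustrated as follows: with scores from k equivalent forms for each of N examinees and independent normal errors with zero mean, the within-person mean square estimates the error variance without any assumption about the true-score distribution, and a chi-square interval follows. Dividing by N(k − 1) gives the unbiased estimator (the maximum-likelihood version divides by Nk); the names and confidence level are illustrative, and this is a sketch of the classical results rather than the paper's full development.

```python
import numpy as np
from scipy import stats

def error_variance_and_sem(scores, conf=0.95):
    """Unbiased error variance, SEM, and a chi-square CI for the error variance,
    from an (N x k) matrix of scores on k equivalent forms with independent
    normal errors of measurement."""
    N, k = scores.shape
    ss_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2)
    df = N * (k - 1)
    err_var = ss_within / df                    # unbiased estimator of error variance
    sem = np.sqrt(err_var)                      # standard error of measurement
    lo = ss_within / stats.chi2.ppf(0.5 + conf / 2, df)
    hi = ss_within / stats.chi2.ppf(0.5 - conf / 2, df)
    return err_var, sem, (lo, hi)
```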

