Similar Articles
20 similar articles were retrieved.
1.
Generalized orthogonal linear derivative (GOLD) estimates were proposed to correct a problem of correlated estimation errors in generalized local linear approximation (GLLA). This paper shows that GOLD estimates are related to GLLA estimates by the Gram–Schmidt orthogonalization process. Analytical work suggests that GLLA estimates are derivatives of an approximating polynomial and GOLD estimates are linear combinations of these derivatives. A series of simulation studies then further investigates and tests the analytical properties derived. The first study shows that when approximating or smoothing noisy data, GLLA outperforms GOLD, but when interpolating noisy data GOLD outperforms GLLA. The second study shows that when data are not noisy, GLLA always outperforms GOLD in terms of derivative estimation. Thus, GLLA is preferred when data can be smoothed or are not noisy, whereas GOLD is preferred when they cannot be. The last studies show situations where GOLD can produce biased estimates. Despite these possible shortcomings in producing accurate and unbiased estimates, GOLD may still provide adequate or improved model estimation because of its orthogonal error structure. However, GOLD should not be used purely for derivative estimation, because the error covariance structure is irrelevant in that case. Future research should attempt to find orthogonal polynomial derivative estimators that produce accurate and unbiased derivatives with an orthogonal error structure.
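For readers unfamiliar with GLLA, the following is a minimal Python sketch of the basic estimator (a time-delay embedding projected onto a polynomial basis of time offsets). The function name, default settings, and the GOLD remark in the comments are illustrative assumptions, not the authors' code.

```python
import numpy as np
from math import factorial

def glla_derivatives(x, embed=5, tau=1, dt=1.0, max_order=2):
    """GLLA-style derivative estimates for an equally spaced series (sketch).

    Each row of the time-delay embedded matrix is projected onto a polynomial
    basis in the centered time offsets; column j of the result estimates the
    j-th derivative at the middle of the embedding window.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (embed - 1) * tau
    X = np.column_stack([x[i * tau: i * tau + n] for i in range(embed)])
    offsets = (np.arange(embed) - (embed - 1) / 2.0) * tau * dt
    # Loading matrix: column j is offsets**j / j!
    L = np.column_stack([offsets ** j / factorial(j) for j in range(max_order + 1)])
    W = L @ np.linalg.inv(L.T @ L)      # least-squares projection weights
    # GOLD-type estimates would instead use a Gram-Schmidt orthogonalized basis,
    # yielding linear combinations of these columns.
    return X @ W                        # columns: level, 1st, 2nd, ... derivatives
```

As a quick check, the first-derivative column of `glla_derivatives(np.sin(t))` should approximate `np.cos(t)` at interior time points.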

2.
A Monte Carlo study was used to compare four approaches to growth curve analysis of subjects assessed repeatedly with the same set of dichotomous items: A two‐step procedure first estimating latent trait measures using MULTILOG and then using a hierarchical linear model to examine the changing trajectories with the estimated abilities as the outcome variable; a structural equation model using modified weighted least squares (WLSMV) estimation; and two approaches in the framework of multilevel item response models, including a hierarchical generalized linear model using Laplace estimation, and Bayesian analysis using Markov chain Monte Carlo (MCMC). These four methods have similar power in detecting the average linear slope across time. MCMC and Laplace estimates perform relatively better on the bias of the average linear slope and corresponding standard error, as well as the item location parameters. For the variance of the random intercept, and the covariance between the random intercept and slope, all estimates are biased in most conditions. For the random slope variance, only Laplace estimates are unbiased when there are eight time points.

3.
Discretized multivariate normal structural models are often estimated using multistage estimation procedures. The asymptotic properties of parameter estimates, standard errors, and tests of structural restrictions on thresholds and polychoric correlations are well known. It was not clear how to assess the overall discrepancy between the contingency table and the model for these estimators. It is shown that the overall discrepancy can be decomposed into a distributional discrepancy and a structural discrepancy. A test of the overall model specification is proposed, as well as a test of the distributional specification (i.e., discretized multivariate normality). Also, the small sample performance of overall, distributional, and structural tests, as well as of parameter estimates and standard errors is investigated under conditions of correct model specification and also under mild structural and/or distributional misspecification. It is found that relatively small samples are needed for parameter estimates, standard errors, and structural tests. Larger samples are needed for the distributional and overall tests. Furthermore, parameter estimates, standard errors, and structural tests are surprisingly robust to distributional misspecification. This research was supported by the Department of Universities, Research and Information Society (DURSI) of the Catalan Government, and by grants BSO2000-0661 and BSO2003-08507 of the Spanish Ministry of Science and Technology.

4.
Quantile maximum likelihood (QML) is an estimation technique, proposed by Heathcote, Brown, and Mewhort (2002), that provides robust and efficient estimates of distribution parameters, typically for response time data, in sample sizes as small as 40 observations. In view of the computational difficulty inherent in implementing QML, we provide open-source Fortran 90 code that calculates QML estimates for parameters of the ex-Gaussian distribution, as well as standard maximum likelihood estimates. We show that parameter estimates from QML are asymptotically unbiased and normally distributed. Our software provides asymptotically correct standard error and parameter intercorrelation estimates, as well as producing the outputs required for constructing quantile-quantile plots. The code is parallelizable and can easily be modified to estimate parameters from other distributions. Compiled binaries, as well as the source code, example analysis files, and a detailed manual, are available for free on the Internet.
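A rough Python sketch of the quantile-based likelihood idea (maximizing the multinomial likelihood of counts between sample quantiles under an ex-Gaussian model). This is an assumed re-expression for illustration, not the Fortran 90 implementation the paper distributes, and the starting values are ad hoc.

```python
import numpy as np
from scipy import stats, optimize

def qml_exgauss(rt, probs=np.arange(0.1, 1.0, 0.1)):
    """Quantile-based ML for ex-Gaussian parameters (mu, sigma, tau) -- a sketch."""
    rt = np.asarray(rt, dtype=float)
    q = np.quantile(rt, probs)                               # sample quantile estimates
    edges = np.concatenate(([rt.min() - 1.0], q, [rt.max() + 1.0]))
    counts, _ = np.histogram(rt, bins=edges)                 # observations per bin

    def nll(params):
        mu, sigma, tau = params
        if sigma <= 0 or tau <= 0:
            return np.inf
        K = tau / sigma                                      # scipy's exponnorm shape
        cdf = stats.exponnorm.cdf(q, K, loc=mu, scale=sigma)
        p = np.diff(np.concatenate(([0.0], cdf, [1.0])))     # model bin probabilities
        if np.any(p <= 0):
            return np.inf
        return -np.sum(counts * np.log(p))                   # multinomial log-likelihood

    s = rt.std(ddof=1)
    start = [rt.mean() - 0.5 * s, 0.5 * s, 0.5 * s]          # ad hoc starting values
    return optimize.minimize(nll, start, method="Nelder-Mead").x
```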

5.
In this article, we formulate a nonlinear structural equation model (SEM) that can accommodate covariates in the measurement equation and nonlinear terms of covariates and exogenous latent variables in the structural equation. The covariates can come from continuous or discrete distributions. A Bayesian approach is developed to analyze the proposed model. Markov chain Monte Carlo methods for obtaining Bayesian estimates and their standard error estimates, highest posterior density intervals, and a posterior predictive (PP) p value are developed. Results from two simulation studies are reported, revealing the empirical performance of the proposed Bayesian estimation in analyzing complex nonlinear SEMs and in analyzing nonlinear SEMs when the normality assumption for the exogenous latent variables is violated. The proposed methodology is further illustrated by a real example. A detailed interpretation of the interaction terms is presented.

6.
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable. The EFA model is specified for these underlying continuous variables rather than the observed ordinal variables. Although these underlying continuous variables are not observed directly, their correlations can be estimated from the ordinal variables. These correlations are referred to as polychoric correlations. This article is concerned with ordinary least squares (OLS) estimation of parameters in EFA with polychoric correlations. Standard errors and confidence intervals for rotated factor loadings and factor correlations are presented. OLS estimates and the associated standard error estimates and confidence intervals are illustrated using personality trait ratings from 228 college students. Statistical properties of the proposed procedure are explored using a Monte Carlo study. The empirical illustration and the Monte Carlo study showed that (a) OLS estimation of EFA is feasible with large models, (b) point estimates of rotated factor loadings are unbiased, (c) point estimates of factor correlations are slightly negatively biased with small samples, and (d) standard error estimates and confidence intervals perform satisfactorily at moderately large samples.
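As a sketch of how a single polychoric correlation can be obtained (two-step estimation: thresholds from the marginals, then the latent correlation by maximum likelihood over the contingency table), the Python below is a generic illustration rather than the OLS EFA procedure itself; the ±8 truncation of the infinite thresholds is a convenience assumption.

```python
import numpy as np
from scipy import stats, optimize

def polychoric(table):
    """Two-step polychoric correlation for a two-way table of ordinal items (sketch)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Step 1: thresholds from the marginal cumulative proportions (+-8 ~ infinity).
    a = np.concatenate(([-8.0], stats.norm.ppf(np.cumsum(table.sum(axis=1))[:-1] / n), [8.0]))
    b = np.concatenate(([-8.0], stats.norm.ppf(np.cumsum(table.sum(axis=0))[:-1] / n), [8.0]))

    def nll(rho):
        cov = [[1.0, rho], [rho, 1.0]]
        Phi2 = lambda x, y: stats.multivariate_normal.cdf([x, y], mean=[0.0, 0.0], cov=cov)
        ll = 0.0
        for i in range(table.shape[0]):
            for j in range(table.shape[1]):
                # Probability of the bivariate normal rectangle for cell (i, j).
                p = (Phi2(a[i + 1], b[j + 1]) - Phi2(a[i], b[j + 1])
                     - Phi2(a[i + 1], b[j]) + Phi2(a[i], b[j]))
                ll += table[i, j] * np.log(max(p, 1e-12))
        return -ll

    # Step 2: maximize the table's likelihood over the latent correlation alone.
    return optimize.minimize_scalar(nll, bounds=(-0.999, 0.999), method="bounded").x
```

A full EFA would assemble such pairwise correlations into a matrix before estimating and rotating the loadings.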

7.
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized method of moments (GMM) estimation techniques in multilevel modeling, the authors present a series of estimators along a robust-to-efficient continuum. This continuum depends on the assumptions that the analyst makes regarding the extent of the correlated effects. It is shown that the GMM approach provides an overarching framework that encompasses well-known estimators such as fixed and random effects estimators and also provides more options. These GMM estimators can be expressed as instrumental variable (IV) estimators, which enhances their interpretability. Moreover, by exploiting the hierarchical structure of the data, the current technique does not require additional variables, unlike traditional IV methods. Further, statistical tests are developed to compare the different estimators. A simulation study examines the finite sample properties of the estimators and tests and confirms the theoretical order of the estimators with respect to their robustness and efficiency. It further shows that not only are regression coefficients biased, but variance components may be severely underestimated in the presence of correlated effects. Empirical standard errors are employed as they are less sensitive to correlated effects when compared to model-based standard errors. An example using student achievement data shows that GMM estimators can be effectively used in a search for the most efficient among unbiased estimators. This research was supported by the National Academy of Education/Spencer Foundation and the National Science Foundation, grant number SES-0436274. We thank the editor, associate editor, and referees for detailed feedback that helped improve the paper.

8.
Many variables that are used in social and behavioural science research are ordinal categorical or polytomous variables. When more than one polytomous variable is involved in an analysis, observations are classified in a contingency table, and a commonly used statistic for describing the association between two variables is the polychoric correlation. This paper investigates the estimation of the polychoric correlation when the data set consists of misclassified observations. Two approaches for estimating the polychoric correlation have been developed. One assumes that the probabilities in relation to misclassification are known, and the other uses a double sampling scheme to obtain information on misclassification. A parameter estimation procedure is developed, and statistical properties for the estimates are discussed. The practicability and applicability of the proposed approaches are illustrated by analysing data sets that are based on real and generated data. Excel programmes with Visual Basic for Applications (VBA) have been developed to compute the estimate of the polychoric correlation and its standard error. The use of the structural equation modelling programme Mx to find parameter estimates in the double sampling scheme is discussed.

9.
A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies.

10.
In the literature on the measurement of change, reliable change is usually determined by means of a confidence interval around an observed value of a statistic that estimates the true change. In recent literature on the efficacy of psychotherapies, attention has been particularly directed at improving the estimation of the true change. Reliable Change Indices, incorporating the reliability-weighted measure of individual change, also known as Kelley's formula, have been proposed. According to current practice, these indices are defined as the ratio of such an estimator and an intuitively appealing criterion, and are then regarded as standard normally distributed statistics. However, because the authors fail to adopt an adequate standard error of the estimator, the statistical properties of their indices are unclear. In this article, it is shown that this can lead to paradoxical conclusions. The adjusted standard error is derived.
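For orientation, a small Python sketch of the conventional reliable-change quantities (a Jacobson-Truax-style index and a Kelley-type shrinkage of the observed change). The reliability used to shrink the change score is a placeholder assumption, and the adjusted standard error derived in the article is not reproduced here.

```python
import numpy as np

def reliable_change(x1, x2, sd_pre, r_xx, r_dd=None):
    """Conventional reliable-change quantities (sketch, not the article's adjusted index).

    x1, x2 : pre- and post-test scores
    sd_pre : standard deviation of pre-test scores
    r_xx   : reliability of the test
    r_dd   : reliability of the difference score (placeholder: defaults to r_xx)
    """
    se_meas = sd_pre * np.sqrt(1.0 - r_xx)        # standard error of measurement
    s_diff = np.sqrt(2.0) * se_meas               # SE of a difference of two scores
    rci = (x2 - x1) / s_diff                      # classical reliable change index
    r_dd = r_xx if r_dd is None else r_dd
    kelley_change = r_dd * (x2 - x1)              # change shrunk toward a zero mean change
    return rci, kelley_change
```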

11.
A new concept of weakly parallel tests, in contrast to strongly parallel tests in latent trait theory, is proposed. Some criticisms of the fundamental concepts in classical test theory, such as the reliability of a test and the standard error of estimation, are given.

12.
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration, which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration approach, a general pseudo maximum likelihood estimation method based on a conveniently decomposed form of the likelihood. It is both consistent and computationally efficient, and produces point estimates and estimated standard errors which are practically identical to those obtained by maximum likelihood. Simulations suggest that improved regression calibration, which is easy to implement in standard software, works well in a range of situations.
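A short Python sketch of ordinary regression calibration (the method the improved approach builds on), using two replicate measurements of an error-prone covariate in a simulated logistic model. The variance decomposition and simulated values are illustrative assumptions, and the paper's pseudo maximum likelihood refinement is not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                                     # true covariate (unobserved)
w1 = x + rng.normal(scale=0.7, size=n)                     # replicate 1 (error-prone)
w2 = x + rng.normal(scale=0.7, size=n)                     # replicate 2
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x))))

# Ordinary regression calibration: replace the unknown x by its best linear
# predictor given the observed replicates, then fit the GLM as usual.
w_bar = (w1 + w2) / 2.0
sigma_u2 = np.var(w1 - w2, ddof=1) / 2.0                   # measurement-error variance
lam = (np.var(w_bar, ddof=1) - sigma_u2 / 2.0) / np.var(w_bar, ddof=1)
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())

naive = sm.GLM(y, sm.add_constant(w_bar), family=sm.families.Binomial()).fit()
calib = sm.GLM(y, sm.add_constant(x_hat), family=sm.families.Binomial()).fit()
print(naive.params[1], calib.params[1])   # the calibrated slope is closer to the true 1.0
```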

13.
Psychophysical studies with infants or with patients often are unable to use pilot data, training, or large numbers of trials. To evaluate threshold estimates under these conditions, computer simulations of experiments with small numbers of trials were performed by using psychometric functions based on a model of two types of noise: stimulus-related noise (affecting slope) and extraneous noise (affecting upper asymptote). Threshold estimates were biased and imprecise when extraneous noise was high, as were the estimates of extraneous noise. Strategies were developed for rejecting data sets as too noisy for unbiased and precise threshold estimation; these strategies were most successful when extraneous noise was low for most of the data sets. An analysis of 1,026 data sets from visual function tests of infants and toddlers showed that extraneous noise is often considerable, that experimental paradigms can be developed that minimize extraneous noise, and that data analysis that does not consider the effects of extraneous noise may underestimate test-retest reliability and overestimate interocular differences.
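The two-noise model lends itself to a quick simulation. The Python sketch below uses assumed parameter values and a cumulative-normal psychometric function purely for illustration (not the authors' simulation code); it shows how fitting while ignoring the extraneous-noise (lapse) parameter can distort the threshold and slope estimates.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
levels = np.linspace(-2, 2, 7)                 # stimulus levels
n_trials = 10                                  # few trials per level, as in infant work
alpha, beta = 0.0, 1.0                         # true threshold and slope
guess, lapse = 0.5, 0.15                       # 2AFC guessing rate; extraneous noise

def p_correct(x, a, b, lam):
    # Lower asymptote fixed at the guessing rate; the lapse rate lowers the upper asymptote.
    return guess + (1 - guess - lam) * stats.norm.cdf((x - a) / b)

k = rng.binomial(n_trials, p_correct(levels, alpha, beta, lapse))   # simulated correct counts

def nll(params, lapse_fixed):
    a, b = params
    p = np.clip(p_correct(levels, a, np.abs(b) + 1e-6, lapse_fixed), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n_trials - k) * np.log(1 - p))

# Fitting with lapse forced to zero tends to bias threshold and slope when the
# true extraneous noise is substantial; fitting with the correct lapse does not.
fit_ignore = optimize.minimize(nll, [0.0, 1.0], args=(0.0,), method="Nelder-Mead")
fit_model = optimize.minimize(nll, [0.0, 1.0], args=(lapse,), method="Nelder-Mead")
print(fit_ignore.x, fit_model.x)
```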

14.
Many statistics packages print skewness and kurtosis statistics with estimates of their standard errors. The function most often used for the standard errors (e.g., in SPSS) assumes that the data are drawn from a normal distribution, an unlikely situation. Some textbooks suggest that if the statistic is more than about 2 standard errors from the hypothesized value (i.e., an approximate critical value from the t distribution for moderate or large sample sizes when α = 5%), the hypothesized value can be rejected. This is an inappropriate practice unless the standard error estimate is accurate and the sampling distribution is approximately normal. We show distributions where the traditional standard errors provided by the function underestimate the actual values, often being 5 times too small, and distributions where the function overestimates the true values. Bootstrap standard errors and confidence intervals are more accurate than the traditional approach, although still imperfect. The reasons for this are discussed. We recommend that if you are using skewness and kurtosis statistics based on the 3rd and 4th moments, bootstrapping should be used to calculate standard errors and confidence intervals rather than the traditional formulas. Software written in the freeware R accompanies this article and provides these estimates.
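A minimal Python sketch of the recommendation (normal-theory standard errors versus bootstrap standard errors and a percentile interval). The exponential sample and the number of resamples are arbitrary assumptions; the article's own R software is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.exponential(size=200)            # a clearly non-normal sample
n = len(x)

g1 = stats.skew(x)
g2 = stats.kurtosis(x)                   # excess kurtosis

# Traditional standard errors assume normal data (roughly sqrt(6/n) and
# sqrt(24/n) for large n) -- the practice the article warns against.
se_skew_normal = np.sqrt(6.0 / n)
se_kurt_normal = np.sqrt(24.0 / n)

# Bootstrap standard errors make no normality assumption.
B = 2000
boot = np.array([(stats.skew(s), stats.kurtosis(s))
                 for s in (rng.choice(x, size=n, replace=True) for _ in range(B))])
se_skew_boot, se_kurt_boot = boot.std(axis=0, ddof=1)
ci_skew = np.percentile(boot[:, 0], [2.5, 97.5])     # simple percentile interval

print(g1, se_skew_normal, se_skew_boot, ci_skew)
```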

15.
Chan W, Chan DW. Psychological Methods, 2004, 9(3): 369-385
The standard Pearson correlation coefficient is a biased estimator of the true population correlation, rho, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, rc, has been recommended, and a standard formula based on asymptotic results for estimating its standard error is also available. In the present study, the bootstrap standard-error estimate is proposed as an alternative. Monte Carlo simulation studies involving both normal and nonnormal data were conducted to examine the empirical performance of the proposed procedure under different levels of rho, selection ratio, sample size, and truncation types. Results indicated that, with normal data, the bootstrap standard-error estimate is more accurate than the traditional estimate, particularly with small sample size. With nonnormal data, performance of both estimates depends critically on the distribution type. Furthermore, the bootstrap bias-corrected and accelerated interval consistently provided the most accurate coverage probability for rho.
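A compact Python sketch of the bootstrap idea for the corrected correlation under direct range restriction (the standard correction formula plus resampling of the restricted sample). The function names and the percentile interval are illustrative choices; the study's simulation design is not reproduced.

```python
import numpy as np

def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Standard correction of a correlation for direct range restriction on the predictor."""
    u = sd_unrestricted / sd_restricted
    return r * u / np.sqrt(1.0 + r ** 2 * (u ** 2 - 1.0))

def bootstrap_rc(x, y, sd_unrestricted, B=2000, seed=0):
    """Bootstrap SE and percentile interval for the corrected correlation (sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    rc = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        xb, yb = x[idx], y[idx]
        r = np.corrcoef(xb, yb)[0, 1]
        rc[b] = correct_range_restriction(r, xb.std(ddof=1), sd_unrestricted)
    return rc.std(ddof=1), np.percentile(rc, [2.5, 97.5])
```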

16.
Two new tests for a model for the response times on pure speed tests by Rasch (1960) are proposed. The model is based on the assumption that the test response times are approximately gamma distributed, with known index parameters and unknown rate parameters. The rate parameters are decomposed into a subject ability parameter and a test difficulty parameter. By treating the ability as a gamma distributed random variable, maximum marginal likelihood (MML) estimators for the test difficulty parameters and the parameters of the ability distribution are easily derived. Also the model tests proposed here pertain to the framework of MML. Two tests or modification indices are proposed. The first one is focused on the assumption of local stochastic independence, the second one on the assumption of the test characteristic functions. The tests are based on Lagrange multiplier statistics, and can therefore be computed using the parameter estimates under the null model. Therefore, model violations for all items and pairs of items can be assessed as a by-product of one single estimation run. Power studies and applications to real data are included as numerical examples.

17.
Psychophysical studies with infants or with patients often are unable to use pilot data, training, or large numbers of trials. To evaluate threshold estimates under these conditions, computer simulations of experiments with small numbers of trials were performed by using psychometric functions based on a model of two types of noise: stimulus-related noise (affecting slope) and extraneous noise (affecting upper asymptote). Threshold estimates were biased and imprecise when extraneous noise was high, as were the estimates of extraneous noise. Strategies were developed for rejecting data sets as too noisy for unbiased and precise threshold estimation; these strategies were most successful when extraneous noise was low for most of the data sets. An analysis of 1,026 data sets from visual function tests of infants and toddlers showed that extraneous noise is often considerable, that experimental paradigms can be developed that minimize extraneous noise, and that data analysis that does not consider the effects of extraneous noise may underestimate test-retest reliability and overestimate interocular differences.

18.
The method of finding the maximum likelihood estimates of the parameters in a multivariate normal model with some of the component variables observable only in polytomous form is developed. The main stratagem used is a reparameterization which converts the corresponding log likelihood function to an easily handled one. The maximum likelihood estimates are found by a Fletcher-Powell algorithm, and their standard error estimates are obtained from the information matrix. When the dimension of the random vector observable only in polytomous form is large, obtaining the maximum likelihood estimates is computationally rather expensive. Therefore, a more efficient method, the partition maximum likelihood method, is proposed. These estimation methods are demonstrated by real and simulated data, and are compared by means of a simulation study.

19.
Ye Baojuan, Wen Zhonglin. Acta Psychologica Sinica, 2012, 44(12): 1687-1694
When deciding whether to combine the scores of a multidimensional test into a total score, test homogeneity should be taken into account; if homogeneity is too low, the composite total score carries little meaning. Homogeneity can be quantified by the homogeneity coefficient. The model used to compute the homogeneity coefficient is the bifactor model (containing both a general factor and local factors), which has received growing attention in recent years, and the homogeneity coefficient of a test is defined as the proportion of test score variance accounted for by the general factor. This paper uses the Delta method to derive a standard error formula for the homogeneity coefficient and, from it, a confidence interval. A simple program for computing the homogeneity coefficient and its confidence interval is provided. An example illustrates how to estimate the coefficient and its confidence interval, and a simulation comparing confidence intervals obtained by the Delta method and by the bootstrap method finds very little difference between the two.
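To make the definition concrete, here is a small Python sketch computing the coefficient from a hypothetical standardized bifactor solution (all loadings below are invented for illustration; the Delta-method standard error derived in the paper is not reproduced).

```python
import numpy as np

# Hypothetical standardized bifactor solution for a 9-item test: every item loads
# on the general factor; items 1-3, 4-6, and 7-9 each load on one local factor.
general = np.array([0.6, 0.5, 0.7, 0.6, 0.5, 0.6, 0.7, 0.5, 0.6])
local = [np.array([0.4, 0.3, 0.4]),
         np.array([0.3, 0.4, 0.3]),
         np.array([0.4, 0.3, 0.4])]
unique = 1 - general ** 2 - np.concatenate([g ** 2 for g in local])   # uniquenesses

# Homogeneity coefficient: share of total-score variance due to the general factor.
var_general = general.sum() ** 2
var_local = sum(g.sum() ** 2 for g in local)
var_total = var_general + var_local + unique.sum()
homogeneity = var_general / var_total
print(round(homogeneity, 3))
```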

20.
Spiess Martin, Jordan Pascal, Wendt Mike. Psychometrika, 2019, 84(1): 212-235

In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
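As a loose illustration only (not necessarily the authors' estimator, which additionally handles varying numbers of observations per cell under a weak assumption), the following Python sketch averages per-subject cell means and bootstraps over subjects to obtain a percentile interval for a linear contrast of cell means.

```python
import numpy as np

def cell_mean_estimates(data):
    """data: dict mapping subject id -> dict mapping cell label -> array of responses.
    Returns the cell labels and a (subjects x cells) matrix of per-subject cell means
    (a sketch; assumes every subject has at least one observation in every cell)."""
    cells = sorted({c for subj in data.values() for c in subj})
    per_subject = np.array([[np.mean(subj[c]) for c in cells] for subj in data.values()])
    return cells, per_subject

def bootstrap_contrast_ci(per_subject, contrast, B=2000, seed=0):
    """Percentile bootstrap over subjects for a linear contrast of the cell means."""
    rng = np.random.default_rng(seed)
    n = per_subject.shape[0]
    stat = lambda m: m.mean(axis=0) @ contrast
    boot = np.array([stat(per_subject[rng.integers(0, n, size=n)]) for _ in range(B)])
    return stat(per_subject), np.percentile(boot, [2.5, 97.5])
```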

