Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
Examining the influence of culture on personality and its unbiased assessment is the main subject of cross-cultural personality research. Recent large-scale studies exploring personality differences across cultures share substantial methodological and psychometric shortcomings that make it difficult to differentiate between method and trait variance. One prominent example is the implicit assumption of cross-cultural measurement invariance in personality questionnaires. In the rare instances where measurement invariance across cultures was tested, scalar measurement invariance, which is required for unbiased mean-level comparisons of personality traits, did not hold. In this article, we present an item sampling procedure, ant colony optimization, which can be used to select item sets that satisfy multiple psychometric requirements, including model fit, reliability, and measurement invariance. We constructed short scales of the IPIP-NEO-300 for a group of countries that are culturally similar (USA, Australia, Canada, and UK) as well as a group of countries with distinct cultures (USA, India, Singapore, and Sweden). In addition to examining factor mean differences across countries, we provide recommendations for cross-cultural research in general. From a methodological perspective, we demonstrate ant colony optimization's versatility and flexibility as an item sampling procedure for deriving measurement-invariant scales for cross-cultural research. © 2020 The Authors. European Journal of Personality published by John Wiley & Sons Ltd on behalf of European Association of Personality Psychology.
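To make the item sampling procedure concrete, the following is a minimal sketch of an ant colony optimization loop for picking a short scale from an item pool. The data, pool size, and fitness function (Cronbach's alpha on toy responses) are all stand-ins; the paper's actual criterion combines model fit, reliability, and measurement invariance, and this is not the authors' implementation.

    # Minimal ACO item-selection sketch (Python); everything here is illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_items, n_select, n_ants, n_iter = 30, 10, 20, 50
    # Toy responses: one common factor plus item-level noise.
    data = rng.normal(size=(500, n_items)) + rng.normal(size=(500, 1))

    def fitness(items):
        # Stand-in criterion: Cronbach's alpha of the selected item set.
        x = data[:, items]
        k = x.shape[1]
        return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                                / x.sum(axis=1).var(ddof=1))

    pheromone = np.ones(n_items)
    best_items, best_fit = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = pheromone / pheromone.sum()
            items = rng.choice(n_items, size=n_select, replace=False, p=p)
            f = fitness(items)
            if f > best_fit:
                best_items, best_fit = items, f
        pheromone *= 0.9                    # evaporation
        pheromone[best_items] += best_fit   # reinforce the best set so far
    print(sorted(best_items), round(best_fit, 3))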

2.
This article is the first to propose using a generalized linear mixed model (GLMM) to unify generalizability theory (GT) and item response theory (IRT), so that the estimates required by both GT and IRT are obtained in a single analysis. Simulation results showed that, compared with expected mean squares (EMS), the traditional method of estimating GT variance components, the GLMM yielded more accurate variance components, G coefficients, and Φ coefficients, and its item difficulty estimates were more precise than those of the traditional Rasch model. An empirical study illustrates the application of the GLMM to real psychometric data.
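A plausible reading of the unifying model (our sketch, not necessarily the authors' exact specification) is a logistic GLMM with crossed random effects for persons and items,

$$\operatorname{logit} P(X_{pi}=1) = \mu + \theta_p - \beta_i, \qquad \theta_p \sim N(0,\sigma_p^2), \quad \beta_i \sim N(0,\sigma_i^2),$$

so the random effects play the roles of Rasch ability and item difficulty, while the estimated variance components feed directly into GT coefficients, e.g. $\Phi = \sigma_p^2 / \bigl(\sigma_p^2 + (\sigma_i^2 + \sigma_e^2)/n_i\bigr)$ for an $n_i$-item $p \times i$ design.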

3.
This paper is a presentation of an essential part of the sampling theory of the error variance and the standard error of measurement. An experimental assumption is that several equivalent tests with equal variances are available. These may be either final forms of the same test or obtained by dividing one test into several parts. The simple model of independent and normally distributed errors of measurement with zero mean is employed. No assumption is made about the form of the distributions of true and observed scores. This implies unrestricted freedom in defining the population. First, maximum-likelihood estimators of the error variance and the standard error of measurement are obtained, their sampling distributions given, and their properties investigated. Then unbiased estimators are defined and their distributions derived. The accuracy of estimation is given special consideration from various points of view. Next, rigorous statistical tests are developed to test hypotheses about error variances on the basis of one and two samples. Also the construction of confidence intervals is treated. Finally, Bartlett's test of homogeneity of variances is used to provide a multi-sample test of equality of error variances.
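For concreteness, with $k$ equivalent forms given to $n$ persons, the standard estimators consistent with this setup (our illustration, not a transcription of the paper) are

$$\hat\sigma^2_{\mathrm{ML}} = \frac{1}{nk}\sum_{p=1}^{n}\sum_{f=1}^{k}\left(x_{pf}-\bar{x}_{p\cdot}\right)^2, \qquad \hat\sigma^2_{\mathrm{unbiased}} = \frac{1}{n(k-1)}\sum_{p=1}^{n}\sum_{f=1}^{k}\left(x_{pf}-\bar{x}_{p\cdot}\right)^2,$$

with the standard error of measurement estimated by the square root of either quantity.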

4.
Ogilvie and Creelman have recently attempted to develop maximum likelihood estimates of the parameters of signal-detection theory from the data of yes-no ROC curves. Their method involved the assumption of a logistic distribution rather than the normal distribution in order to make the mathematics more tractable. The present paper presents a method of obtaining maximum likelihood estimates of these parameters using the assumption of underlying normal distributions. This research was supported in part by grants from the National Institutes of Health, MH-10449-02, and from the National Science Foundation, NSF GS-1466.
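The normal-theory ML idea can be sketched in a few lines for rating-method ROC data; the counts, starting values, and optimizer below are our own illustrative choices, not the paper's.

    # Hedged sketch: ML estimation of equal-variance SDT parameters
    # (d' and ordered criteria) from rating data under normal distributions.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    signal = np.array([10, 20, 30, 60])   # made-up rating counts, signal trials
    noise  = np.array([50, 30, 25, 15])   # made-up rating counts, noise trials

    def neg_loglik(params):
        d = params[0]
        cuts = np.concatenate(([-np.inf], np.sort(params[1:]), [np.inf]))
        p_noise  = np.diff(norm.cdf(cuts))          # noise category probabilities
        p_signal = np.diff(norm.cdf(cuts, loc=d))   # signal category probabilities
        eps = 1e-12                                 # guard against log(0)
        return -(noise @ np.log(p_noise + eps) + signal @ np.log(p_signal + eps))

    res = minimize(neg_loglik, x0=[1.0, -0.5, 0.0, 0.5], method="Nelder-Mead")
    print("d' =", res.x[0].round(3), "criteria =", np.sort(res.x[1:]).round(3))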

5.
A Monte Carlo experiment is conducted to investigate the performance of bootstrap methods in normal-theory maximum likelihood factor analysis both when the distributional assumption is satisfied and when it is violated. The parameters and functions of them of interest include unrotated loadings, analytically rotated loadings, and unique variances. The results reveal that (a) bootstrap bias estimation sometimes performs poorly for factor loadings and nonstandardized unique variances; (b) bootstrap variance estimation performs well even when the distributional assumption is violated; (c) bootstrap confidence intervals based on Studentized statistics are recommended; (d) if a structural hypothesis about the population covariance matrix is taken into account, then the bootstrap distribution of the normal-theory likelihood ratio test statistic is close to the corresponding sampling distribution, with a slightly heavier right tail. This study was carried out in part under the ISM cooperative research program (91-ISM · CRP-85, 92-ISM · CRP-102). The authors would like to thank the editor and three reviewers for their helpful comments and suggestions, which improved the quality of this paper considerably.
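As a minimal illustration of the bootstrap variance-estimation idea for ML factor loadings, the following uses a one-factor toy model (sklearn's FactorAnalysis as a stand-in ML fitter; sign alignment is the simplest way around the sign indeterminacy). It is a sketch under our own assumptions, not the study's code.

    # Bootstrap standard errors for one-factor ML loadings (illustrative).
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    f = rng.normal(size=(300, 1))
    x = f @ np.array([[0.8, 0.7, 0.6, 0.5]]) + rng.normal(scale=0.6, size=(300, 4))

    def loadings(data):
        lam = FactorAnalysis(n_components=1).fit(data).components_.ravel()
        return lam if lam.sum() > 0 else -lam   # fix the sign indeterminacy

    boot = np.array([loadings(x[rng.integers(0, len(x), len(x))])
                     for _ in range(500)])
    print("loadings:    ", loadings(x).round(2))
    print("bootstrap SE:", boot.std(axis=0, ddof=1).round(3))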

6.
Variance component estimation is central to generalizability theory (GT) analysis. Using Monte Carlo simulation, this study examined how the distribution of psychological and educational measurement data affects variance component estimation under several GT methods. The data distributions were normal, binomial, and multinomial; the estimation methods were the traditional method, the jackknife, the bootstrap, and MCMC. The results showed that: (1) the traditional method estimated variance components relatively well for normally and multinomially distributed data but required correction for binomially distributed data; the jackknife estimated the variance components of all three distributions accurately; and the corrected bootstrap and MCMC with informative priors (MCMCinf) performed well for all three distributions; (2) the data distribution affects all four methods' estimation of GT variance components and constrains how well each method performs, so the methods should be chosen with the distribution in mind.
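For reference, the traditional (expected mean squares) estimates for a crossed p×i design can be computed directly. This is a generic textbook sketch on toy normal data, not the simulation code of the study.

    # EMS variance component estimation for a p x i design (illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    n_p, n_i = 100, 8
    x = (rng.normal(scale=1.0, size=(n_p, 1))       # person effect
         + rng.normal(scale=0.5, size=(1, n_i))     # item effect
         + rng.normal(scale=0.8, size=(n_p, n_i)))  # residual (pi,e)

    grand = x.mean()
    ms_p = n_i * ((x.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    ms_i = n_p * ((x.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
    ss_res = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    var_p = (ms_p - ms_res) / n_i            # person variance component
    var_i = (ms_i - ms_res) / n_p            # item variance component
    g_coef = var_p / (var_p + ms_res / n_i)  # relative G coefficient
    print(round(var_p, 3), round(var_i, 3), round(ms_res, 3), round(g_coef, 3))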

7.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites, as well as a partial least squares approach to SEM, facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions, even when the population distribution is multivariate normal. Results also show that LS regression coefficients that account for the sampling errors in the composite weights are more efficient than those conditional on the weights.
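One natural way to formalize the comparison (our reading of the signal-to-noise ratio, not necessarily the paper's exact definition) is

$$\mathrm{SNR}(\hat\beta) = \frac{\beta^2}{\operatorname{Var}(\hat\beta)},$$

so that, for the same true coefficient, the estimator with the smaller sampling variance, here LS regression on equally weighted composites, attains the larger SNR.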

8.
Methods for meta-analyzing single-case designs (SCDs) are needed to inform evidence-based practice in clinical and school settings and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using systematic direct observation, which typically take the form of rates or proportions. For studies that use such measures, one simple and intuitive way to quantify effect sizes is in terms of proportionate change from baseline, using an effect size known as the log response ratio. This paper describes methods for estimating log response ratios and combining the estimates using meta-analysis. The methods are based on a simple model for comparing two phases, where the level of the outcome is stable within each phase and the repeated outcome measurements are independent. Although auto-correlation will lead to biased estimates of the sampling variance of the effect size, meta-analysis of response ratios can be conducted with robust variance estimation procedures that remain valid even when sampling variance estimates are biased. The methods are demonstrated using data from a recent meta-analysis on group contingency interventions for student problem behavior.
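A minimal sketch of the effect size computation on toy data; the delta-method variance below assumes independent within-phase observations, matching the simple model described.

    # Log response ratio (LRR) for a two-phase single-case design.
    import numpy as np

    baseline  = np.array([12, 9, 11, 10, 13], dtype=float)
    treatment = np.array([5, 6, 4, 5, 3], dtype=float)

    def lrr(a, b):
        es = np.log(b.mean() / a.mean())   # proportionate change on the log scale
        v = (a.var(ddof=1) / (len(a) * a.mean() ** 2)      # delta-method
             + b.var(ddof=1) / (len(b) * b.mean() ** 2))   # sampling variance
        return es, v

    es, v = lrr(baseline, treatment)
    print(round(es, 3), round(v, 4))   # negative es = behavior decreased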

9.
The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects. Simulations show that the estimates are unbiased under most conditions. Confidence intervals based on a normal approximation or a simulated sampling distribution perform well when the random effects are normally distributed but less so when they are nonnormally distributed. These methods are further developed to address hypotheses of moderated mediation in the multilevel context. An example demonstrates the feasibility and usefulness of the proposed methods.
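The key quantity behind these formulas is the standard result for the mean of a product of correlated random paths: with level-1 slopes $a_j$ and $b_j$ varying across clusters,

$$E(a_j b_j) = \mu_a \mu_b + \sigma_{ab},$$

so the average indirect effect picks up the covariance between the random paths, and ignoring $\sigma_{ab}$ biases it (our summary of the published result, not a new derivation).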

10.
黎光明  张敏强 《心理科学》2013,36(1):203-209
Variance component estimation is an indispensable technique in generalizability theory, but because it is subject to sampling, the variability of the estimates needs to be examined. Using Monte Carlo simulation, this study explored how non-normal data distributions affect four methods for estimating the variability of generalizability theory variance components. The results showed that: (1) the methods' performance differed across the non-normal distributions; (2) the data distribution influences the estimation of variance component variability, and a method suited to non-normally distributed data is not necessarily suited to normally distributed data.

11.
黎光明  张敏强 《心理学报》2013,45(1):114-124
The bootstrap is a resampling method with replacement that can be used to estimate generalizability theory variance components and their variability. Monte Carlo techniques were used to simulate data from four distributions: normal, binomial, multinomial, and skewed. Based on a p×i design, this study examined whether the corrected bootstrap improves on the uncorrected bootstrap in estimating the variance components and their variability for the four simulated distributions. The results showed that across all four distributions, both overall and locally, and for both point estimates and variability estimates, the corrected bootstrap outperformed the uncorrected bootstrap; that is, the correction improves the estimation of generalizability theory variance components and their variability.
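For orientation, an uncorrected person-resampling bootstrap for the person variance component under a p×i design looks like the sketch below (toy data, no item effect). The correction studied in the paper adjusts such estimates for resampling bias and is not reproduced here.

    # Uncorrected bootstrap of a GT person variance component (illustrative).
    import numpy as np

    rng = np.random.default_rng(3)
    n_p, n_i, B = 80, 10, 1000
    x = rng.normal(size=(n_p, 1)) + rng.normal(scale=0.8, size=(n_p, n_i))

    def var_person(data):
        n, k = data.shape
        ms_p = k * data.mean(axis=1).var(ddof=1)
        # Within-person mean square as the residual term (no item effect here).
        ms_res = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (ms_p - ms_res) / k

    boot = np.array([var_person(x[rng.integers(0, n_p, n_p)]) for _ in range(B)])
    print(round(var_person(x), 3), round(boot.std(ddof=1), 3))  # estimate, bootstrap SE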

12.
孟祥斌 《心理科学》2016,39(3):727-734
In recent years, modeling item response time data has become one of the most active topics in psychological and educational measurement. To address the shortcomings of the lognormal and Box-Cox normal models for response times, this paper builds a log-linear response time model based on the skew-normal distribution within van der Linden's hierarchical modeling framework, and derives a Markov chain Monte Carlo (MCMC) algorithm for estimating its parameters. Both simulation studies and an empirical analysis show that, compared with the lognormal and Box-Cox normal models, the log skew-normal model fits better and is more flexible and broadly applicable.
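A plausible form of such a model (our sketch, following van der Linden's lognormal framework with the error distribution swapped for a skew normal):

$$\ln T_{pi} = \beta_i - \tau_p + \varepsilon_{pi}, \qquad \varepsilon_{pi} \sim \mathrm{SN}(0, \sigma_i^2, \lambda),$$

where $\tau_p$ is the person speed parameter, $\beta_i$ the item time intensity, and $\lambda$ the skewness parameter; $\lambda = 0$ recovers the lognormal model.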

13.
Criterion measures are frequently obtained by averaging ratings, but the number and kind of ratings available may differ from individual to individual. This raises issues as to the appropriateness of any single regression equation, about the relation of variance about regression to number and kind of criterion observations, and about the preferred estimate of regression parameters. It is shown that if criterion ratings all have the same true score, the regression equation for predicting the average is independent of the number and kind of criterion scores averaged. Two cases are distinguished, one where criterion measures are assumed to have the same true score, and the other where criterion measures have the same magnitude of error of measurement as well. It is further shown that the variance about regression is a function of the number and kind of criterion ratings averaged, generally decreasing as the number of measures averaged increases. Maximum likelihood estimates for the regression parameters are derived for the two cases, assuming a joint normal distribution for predictors and criterion average within each subpopulation of persons for whom the same type of criterion average is available.
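The driving fact can be stated in one line (our illustration): if the $k$ ratings averaged share true score $t$ and carry independent errors of variance $\sigma^2_e$, then

$$E(\bar y \mid x) = E(t \mid x), \qquad \operatorname{Var}(\bar y \mid x) = \operatorname{Var}(t \mid x) + \frac{\sigma^2_e}{k},$$

so the regression for predicting the average does not depend on $k$, while the variance about regression shrinks as $k$ grows.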

14.
Commonly used formulae for standard error (SE) estimates in covariance structure analysis are derived under the assumption of a correctly specified model. In practice, a model is at best only an approximation to the real world. It is important to know whether the estimates of SEs as provided by standard software are consistent when a model is misspecified, and to understand why if not. Bootstrap procedures provide nonparametric estimates of SEs that automatically account for distribution violation. It is also necessary to know whether bootstrap estimates of SEs are consistent. This paper studies the relationship between the bootstrap estimates of SEs and those based on asymptotics. Examples are used to illustrate various versions of asymptotic variance–covariance matrices and their validity. Conditions for the consistency of the bootstrap estimates of SEs are identified and discussed. Numerical examples are provided to illustrate the relationship of different estimates of SEs and covariance matrices.

15.
Relations between constructs are estimated based on correlations between measures of constructs corrected for measurement error. This process assumes that the true scores on a measure are linearly related to construct scores, an assumption that may not hold. We examined the extent to which differences in distribution shape reduce the correlation between true scores on a measure and scores on the underlying construct they are intended to measure. We found, via a series of Monte Carlo simulations, that when the actual construct distribution is normal, nonnormal distributions of true scores caused this correlation to drop by an average of only .02 across 15 conditions. When both construct and true score distributions assumed different combinations of nonnormal distributions, the average correlation was reduced by .05 across 375 conditions. We conclude that theory-based scales usually correlate highly with the constructs they are intended to measure. We show that, as a result, in most cases true score correlations only modestly underestimate correlations between different constructs. However, in cases in which the two constructs are redundant, this underestimation can lead to the false conclusion that the constructs are ‘correlated but distinct constructs,’ resulting in construct proliferation.

16.
Personality tests often consist of a set of dichotomous or Likert items. These response formats are known to be susceptible to an agreeing-response bias called acquiescence. The common assumption in balanced scales is that the sum of appropriately reversed responses should be reasonably free of acquiescence. However, inter-item correlation (or covariance) matrices can still be affected by the presence of variance due to acquiescence. To analyse these correlation matrices, we propose a method that is based on an unrestricted factor analysis and can be applied to multidimensional scales. This method obtains a factor solution in which acquiescence response variance is isolated in an independent factor. It is therefore possible, without the potentially confounding effect of acquiescence, to: (a) examine the dominant factors related to content latent variables; and (b) estimate participants’ factor scores on content latent variables. This method, which is illustrated by two empirical data examples, has proved to be useful for improving the simplicity of the factor structure. This research was partially supported by a grant from the Spanish Ministry of Science and Technology (SEJ2005-09170-C04-04/PSIC), and a grant from the Catalan Ministry of Universities, the Research and Information Society (2005SGR00017). The authors are obliged to the team of reviewers for helpful comments on an earlier version of this paper.

17.
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable estimates are then treated as observed measures of the true variables. This leads to a two-stage estimation procedure which constitutes an alternative to a joint model for the outcome variable and the responses given to the questionnaire. Simulation studies explore the effect of ignoring the true error structure and the performance of the proposed method. Two illustrative examples concern achievement data of university students. Particular attention is given to the Rasch model.

18.
Parameters of the two‐parameter logistic model are generally estimated via the expectation–maximization (EM) algorithm by the maximum‐likelihood (ML) method. In so doing, it is beneficial to estimate the common prior distribution of the latent ability from data. Full non‐parametric ML (FNPML) estimation allows estimation of the latent distribution with maximum flexibility, as the distribution is modelled non‐parametrically on a number of (freely moving) support points. It is generally assumed that EM estimation of the two‐parameter logistic model is not influenced by initial values, but studies on this topic are unavailable. Therefore, the present study investigates the sensitivity to initial values in FNPML estimation. In contrast to the common assumption, initial values are found to have notable influence: for a standard convergence criterion, item discrimination and difficulty parameter estimates as well as item characteristic curve (ICC) recovery were influenced by initial values. For more stringent criteria, item parameter estimates were mainly influenced by the initial latent distribution, whilst ICC recovery was unaffected. The reason for this might be a flat surface of the log‐likelihood function, which would necessitate setting a sufficiently tight convergence criterion for accurate recovery of item parameters.

19.
Many statistics packages print skewness and kurtosis statistics with estimates of their standard errors. The function most often used for the standard errors (e.g., in SPSS) assumes that the data are drawn from a normal distribution, an unlikely situation. Some textbooks suggest that if the statistic is more than about 2 standard errors from the hypothesized value (i.e., an approximate critical value from the t distribution for moderate or large sample sizes when α = 5%), the hypothesized value can be rejected. This is an inappropriate practice unless the standard error estimate is accurate and the sampling distribution is approximately normal. We show distributions where the traditional standard errors provided by the function underestimate the actual values, often being 5 times too small, and distributions where the function overestimates the true values. Bootstrap standard errors and confidence intervals are more accurate than the traditional approach, although still imperfect. The reasons for this are discussed. We recommend that if you are using skewness and kurtosis statistics based on the 3rd and 4th moments, bootstrapping should be used to calculate standard errors and confidence intervals rather than the traditional normal-theory formulas. Software in the freeware R accompanying this article provides these estimates.
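A minimal sketch of the recommended bootstrap approach for skewness, on toy data (for comparison, the normal-theory SE would be roughly sqrt(6/n) ≈ 0.17 here, which can be far from the mark for skewed samples like this one):

    # Bootstrap SE and percentile CI for skewness (illustrative).
    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(4)
    x = rng.exponential(size=200)   # clearly non-normal sample

    boot = np.array([skew(rng.choice(x, size=len(x), replace=True))
                     for _ in range(2000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(round(skew(x), 3), round(boot.std(ddof=1), 3), (round(lo, 3), round(hi, 3)))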

20.
Missing not at random (MNAR) modeling for non-ignorable missing responses usually assumes that the latent variable distribution is bivariate normal. This assumption is rarely verified and is often adopted as a default in practice. Recent studies of “complete” item responses (i.e., no missing data) have shown that ignoring a nonnormal distribution of a unidimensional latent variable, especially a skewed or bimodal one, can yield biased estimates and misleading conclusions. However, dealing with a nonnormal bivariate latent variable distribution in the presence of MNAR data has not been investigated. This article proposes extending the unidimensional empirical histogram and Davidian curve methods to deal simultaneously with a nonnormal latent variable distribution and MNAR data. A simulation study demonstrates the consequences of ignoring a bivariate nonnormal distribution for parameter estimates, followed by an empirical analysis of “don’t know” item responses. The results presented in this article show that examining the possibility of a bivariate nonnormal latent variable distribution should be routine for MNAR data, to minimize the impact of nonnormality on parameter estimates.
