Similar Literature
20 similar documents found (search time: 15 ms)
1.
Maximum likelihood estimation in confirmatory factor analysis requires large sample sizes, normally distributed item responses, and reliable indicators of each latent construct, but these ideals are rarely met. We examine alternative strategies for dealing with non-normal data, particularly when the sample size is small. In two simulation studies, we systematically varied: the degree of non-normality; the sample size, from 50 to 1000; the method of indicator formation (items versus parcels); the parcelling strategy, comparing parcels with uniformly positive skew and kurtosis against parcels with counterbalanced skew and kurtosis; and the estimation procedure, contrasting maximum likelihood and asymptotically distribution-free methods. We evaluated the convergence behaviour of solutions, the systematic bias and variability of parameter estimates, and goodness of fit.
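As an illustrative sketch of the two parcelling strategies compared above (my construction under simple assumptions, not the authors' simulation code), the following contrasts a parcel built from uniformly positively skewed items with one built from items whose skews counterbalance:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n = 200                                    # sample size (the studies varied 50-1000)
latent = rng.normal(size=n)                # common factor

# Six congeneric items: three with positively and three with negatively
# skewed unique parts, built from lognormal errors.
pos_items = np.column_stack([latent + rng.lognormal(0.0, 1.0, n) for _ in range(3)])
neg_items = np.column_stack([latent - rng.lognormal(0.0, 1.0, n) for _ in range(3)])

# Strategy A: a "uniform" parcel averages similarly (positively) skewed items.
uniform_parcel = pos_items.mean(axis=1)
# Strategy B: a "counterbalanced" parcel pairs oppositely skewed items.
balanced_parcel = (pos_items[:, 0] + neg_items[:, 0]) / 2

print("uniform parcel skew:  %+.2f" % skew(uniform_parcel))
print("balanced parcel skew: %+.2f" % skew(balanced_parcel))
```

Averaging oppositely skewed items cancels much of the non-normality that a uniform parcel retains, which is the intuition behind the counterbalancing strategy.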

2.
A. J. Swain, Psychometrika, 1975, 40(3): 315–335
A general class of estimation procedures for the factor model is considered. The procedures are shown to yield estimates possessing the same asymptotic sampling properties as those from estimation by maximum likelihood or generalized least squares, both of which are special members of the class. General expressions for the derivatives needed for Newton-Raphson determination of the estimates are derived. Numerical examples are given, and the effect of the choice of estimation procedure is discussed. The author wishes to thank Dr. W. N. Venables for his encouragement and helpful suggestions throughout the preparation of this paper, and a reviewer whose comments on an earlier version led to the basic approach used in Appendix B to the asymptotic theory.
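A minimal sketch of the Newton-Raphson machinery referred to above (my construction, not Swain's derivations), applied to the ML discrepancy for a one-factor model with a single free loading and the uniquenesses fixed for illustration:

```python
import numpy as np

p, psi = 3, 0.55                          # three indicators, fixed uniqueness
S = np.array([[1.00, 0.45, 0.45],
              [0.45, 1.00, 0.45],
              [0.45, 0.45, 1.00]])        # illustrative sample covariance

def F(lam):
    """ML discrepancy: log|Sigma| + tr(S Sigma^-1) - log|S| - p."""
    Sigma = lam**2 * np.ones((p, p)) + psi * np.eye(p)
    return (np.linalg.slogdet(Sigma)[1]
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - p)

lam, h = 0.6, 1e-5       # start value near the optimum; step for numeric derivatives
for _ in range(50):
    g = (F(lam + h) - F(lam - h)) / (2 * h)              # first derivative
    H = (F(lam + h) - 2 * F(lam) + F(lam - h)) / h**2    # second derivative
    step = g / H
    lam -= step                                          # Newton-Raphson update
    if abs(step) < 1e-9:
        break

print("estimated loading: %.4f (sqrt(.45) = %.4f)" % (lam, np.sqrt(0.45)))
```

In the general class considered, different estimation procedures correspond to different discrepancy functions, while the Newton-Raphson update takes the same form.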

3.
Four studies examined dyadic collaboration on quantitative estimation tasks. In accord with the tenets of "naïve realism," dyad members failed to give due weight to a partner's estimates, especially those greatly divergent from their own. The requirement to reach joint estimates through discussion increased accuracy more than reaching agreement through a mere exchange of numerical "bids." However, even the latter procedure increased accuracy, relative to that of individual estimates (Study 1). Accuracy feedback neither increased weight given to partner's subsequent estimates nor produced improved accuracy (Study 2). Long-term dance partners, who shared a positive estimation bias, failed to improve accuracy when estimating their performance scores (Study 3). Having dyad members ask questions about the bases of partner's estimates produced greater yielding and accuracy increases than having them explain their own estimates (Study 4). The latter two studies provided additional direct and indirect evidence for the role of naïve realism.

4.
There are two general methods of cross-validation: (a) empirical estimation, and (b) formula estimation. In choosing a specific cross-validation procedure, one should consider both costs (e.g., inefficient use of available data in estimating regression parameters) and benefits (e.g., accuracy in estimating population cross-validity). Empirical cross-validation methods involve significant costs, since they are typically laborious and wasteful of data, but under conditions represented in Monte Carlo studies, they are generally not more accurate than formula estimates. Consideration of costs and benefits suggests that empirical estimation methods are typically not worth the cost, except in a limited number of cases in which Monte Carlo sampling assumptions are not met in the derivation sample. Designs which use multiple samples to estimate the cross-validity of a single regression equation are clearly preferable to single-sample designs; the latter are never expected to be more accurate than formula estimates and thus are never worth the cost. Multi-equation designs are more accurate than single equation designs, but they appear to estimate the wrong parameter, and thus are difficult to interpret.
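A hedged sketch of the two approaches (the simulated data and design are my assumptions; Browne's 1975 formula, as commonly cited in this literature, is used as one formula estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 4
beta = np.array([0.5, 0.3, 0.2, 0.0])
X = rng.normal(size=(n, k))
y = X @ beta + rng.normal(size=n)

# Empirical estimate: fit on half the data, correlate predictions on the rest.
half = n // 2
bhat = np.linalg.lstsq(X[:half], y[:half], rcond=None)[0]
empirical_cv = np.corrcoef(X[half:] @ bhat, y[half:])[0, 1]

# Formula estimate: full-sample R^2, Wherry-adjusted, then Browne's formula.
bfull = np.linalg.lstsq(X, y, rcond=None)[0]
r2 = np.corrcoef(X @ bfull, y)[0, 1] ** 2
rho2 = max(0.0, 1 - (1 - r2) * (n - 1) / (n - k - 1))        # Wherry adjustment
browne_cv2 = ((n - k - 3) * rho2**2 + rho2) / ((n - 2*k - 2) * rho2 + k)

print("empirical cross-validity: %.3f" % empirical_cv)
print("formula cross-validity:   %.3f" % np.sqrt(browne_cv2))
```

The formula route uses all n observations for estimation and costs nothing beyond the arithmetic, which is the cost-benefit contrast the abstract draws.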

5.
We investigated how perspective-taking might be used to overcome bias and improve advice-based judgments. Decision makers often tend to underweight the opinions of others relative to their own, and thus fail to exploit the wisdom of others. We tested the idea that decision makers taking the perspective of another person engage a less egocentric mode of processing of advisory opinions and thereby improve their accuracy. In Studies 1–2, participants gave their initial opinions and then considered a sample of advisory opinions in two conditions. In one condition (self-perspective), they were asked to give their best advice-based estimates. In the second (other-perspective), they were asked to give advice-based estimates from the perspective of another judge. The dependent variables were the participants' accuracy and indices that traced their judgment policy. In the self-perspective condition participants adhered to their initial opinions, whereas in the other-perspective condition they were far less egocentric, weighted the available opinions more equally and produced more accurate estimates. In Study 3, initial estimates were not elicited, yet the data patterns were consistent with these conclusions. All the studies suggest that switching perspectives allows decision makers to generate advice-based judgments that are superior to those they would otherwise have produced. We discuss the merits of perspective-taking as a procedure for correcting bias, suggesting that it is theoretically justifiable, practicable, and effective.

6.
For mixed models generally, it is well known that modeling data with few clusters will result in biased estimates, particularly of the variance components and fixed effect standard errors. In linear mixed models, small sample bias is typically addressed through restricted maximum likelihood estimation (REML) and a Kenward-Roger correction. Yet with binary outcomes, there is no direct analog of either procedure. With a larger number of clusters, estimation methods for binary outcomes that approximate the likelihood to circumvent the lack of a closed-form solution, such as adaptive Gaussian quadrature and the Laplace approximation, have been shown to yield less biased estimates than linearization estimation methods that instead linearly approximate the model. However, adaptive Gaussian quadrature and the Laplace approximation are approximating the full likelihood rather than the restricted likelihood; the full likelihood is known to yield biased estimates with few clusters. On the other hand, linearization methods linearly approximate the model, which allows for restricted maximum likelihood and the Kenward-Roger correction to be applied. Thus, the following question arises: Which is preferable, a better approximation of a biased function or a worse approximation of an unbiased function? We address this question with a simulation and an illustrative empirical analysis.
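A minimal data-generating sketch of the setting studied (parameter values are my assumptions): binary outcomes from a random-intercept logistic model in the few-clusters regime where variance-component estimates are prone to bias:

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, cluster_size = 10, 20       # "few clusters" regime
tau = 0.5                               # true random-intercept SD
beta0, beta1 = -0.5, 0.8                # fixed intercept and slope

u = rng.normal(0, tau, size=n_clusters)             # cluster random effects
x = rng.normal(size=(n_clusters, cluster_size))     # lower-level covariate
eta = beta0 + beta1 * x + u[:, None]                # linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))         # binary responses
```

In practice such data would be fit both ways, for example with lme4's glmer (Laplace or adaptive quadrature via its nAGQ argument) and with a linearization method that permits REML and a Kenward-Roger correction, and the recovery of tau and the betas compared across replications.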

7.
Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to great sampling error when the design has a small number of time points, as is typically the situation in single-case designs. Thus, a given observed autocorrelation may greatly over- or underestimate the corresponding population parameter. This article presents Bayesian estimates of the autocorrelation that greatly reduce the role of sampling error, as compared to past estimators. Simpler empirical Bayes estimates are presented first, in order to illustrate the fundamental notions of autocorrelation sampling error and shrinkage, followed by fully Bayesian estimates, and the difference between the two is explained. Scripts to do the analyses are available as supplemental materials. The analyses are illustrated using two examples from the single-case design literature. Bayesian estimation warrants wider use, not only in debates about the size of autocorrelations, but also in statistical methods that require an independent estimate of the autocorrelation to analyze the data.
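A minimal empirical-Bayes sketch of the shrinkage idea (my construction; the article's own scripts are in its supplemental materials):

```python
import numpy as np

def lag1_autocorr(y):
    y = y - y.mean()
    return (y[:-1] * y[1:]).sum() / (y ** 2).sum()

rng = np.random.default_rng(3)
# Five short single-case series (T = 12), true autocorrelation 0.3.
series = []
for _ in range(5):
    e = rng.normal(size=12)
    y = np.empty(12); y[0] = e[0]
    for t in range(1, 12):
        y[t] = 0.3 * y[t - 1] + e[t]
    series.append(y)

r = np.array([lag1_autocorr(y) for y in series])
V = 1.0 / np.array([len(y) for y in series])   # rough sampling variance of r, ~1/T
tau2 = max(0.0, r.var(ddof=1) - V.mean())      # between-case variance estimate
shrink = tau2 / (tau2 + V)                     # weight on each observed r
eb = shrink * r + (1 - shrink) * r.mean()      # empirical Bayes estimates

print("observed:", np.round(r, 2))
print("shrunken:", np.round(eb, 2))
```

Each short series' noisy estimate is pulled toward the across-case mean in proportion to its sampling variance, which is what reduces the role of sampling error.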

8.
When sample information is combined, it is generally considered normative to weight information based on larger samples more heavily than information based on smaller samples. However, if samples appear likely to have been drawn from different subpopulations, it is reasonable to combine estimates of these subpopulation means (typically, the sample means) without weighting these estimates by sample size. This study investigated whether laypeople are influenced by the likelihood of samples coming from the same population when determining how to combine information. In two experiments we show that (1) implied binomial variability affected participants' judgments of the likelihood that a sample was drawn from a given population, (2) participants' judgments were more affected by sample size when samples were implied to be drawn randomly from a general population, compared to when they were implied to be drawn from different subpopulations, and (3) people higher in numeracy gave more normative responses. We conclude that when determining how to weight and combine samples, laypeople use not only the provided data, but also information about likelihood and sampling processes that these data imply.
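A worked sketch of the two combination rules at issue (the numbers are illustrative, not the study's stimuli):

```python
import numpy as np

sizes = np.array([10, 100])          # two samples, one much larger
props = np.array([0.80, 0.60])       # observed proportions in each sample

weighted = (sizes * props).sum() / sizes.sum()   # normative if same population
unweighted = props.mean()                        # reasonable if different subpopulations

print("size-weighted estimate: %.3f" % weighted)    # 0.618, dominated by n = 100
print("unweighted estimate:    %.3f" % unweighted)  # 0.700
```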

9.
10.
The calibration of the one-parameter logistic ability-based guessing (1PL-AG) model in item response theory (IRT) with a modest sample size remains a challenge, owing to implausible estimates and the difficulty of obtaining standard errors of the estimates. This article proposes an alternative Bayesian modal estimation (BME) method, the Bayesian Expectation-Maximization-Maximization (BEMM) method, which is developed by combining an augmented variable formulation of the 1PL-AG model and a mixture model conceptualization of the three-parameter logistic model (3PLM). A simulation comparing BEMM with marginal maximum likelihood estimation (MMLE) and Markov chain Monte Carlo (MCMC) in JAGS shows that BEMM produces stable and accurate estimates at modest sample sizes. A real data example and the MATLAB code for BEMM are also provided.

11.
In this paper, the constrained maximum likelihood estimation of a two-level covariance structure model with unbalanced designs is considered. The two-level model is reformulated as a single-level model by treating the group level latent random vectors as hypothetical missing data. Then, the popular EM algorithm is extended to obtain the constrained maximum likelihood estimates. For general nonlinear constraints, the multiplier method is used at the M-step to find the constrained minimum of the conditional expectation. An accelerated EM gradient procedure is derived to handle linear constraints. The empirical performance of the proposed EM-type algorithms is illustrated by some artificial and real examples. This research was supported by a Hong Kong UCG Earmarked Grant, CUHK 4026/97H. We are greatly indebted to D.E. Morisky and J.A. Stein for the use of their AIDS data in our example. We also thank the Editor, two anonymous reviewers, W.Y. Poon and H.T. Zhu for constructive suggestions and comments in improving the paper. The assistance of Michael K.H. Leung and Esther L.S. Tam is gratefully acknowledged.

12.
Peer Network Structure and Smoking Behavior Among Middle School Students
Social network theory and analysis techniques were used to examine the relationship between peer network structure and smoking behavior among middle school students. Participants were 1,091 students in grades 7 through 12 from two middle schools in Beijing, each of whom was asked to nominate up to 10 friends and to report their own smoking behavior and that of their friends. The NEGOPY social network analysis software was then used to analyze the students' peer network structures. The results showed three structural positions in students' peer networks: group members, liaisons, and isolates. Group members outnumbered liaisons and isolates, although the proportion of group members declined in higher grades. In addition, the smoking rate of group members was significantly lower than that of isolates and liaisons, and the relationship between peer network structure and smoking behavior remained clearly evident even after controlling for gender, age, and school type.

13.
A Bayes estimation procedure is introduced that allows the nature and strength of prior beliefs to be easily specified and modal posterior estimates to be obtained as easily as maximum likelihood estimates. The procedure is based on constructing posterior distributions that are formally identical to likelihoods, but are based on sampled data as well as artificial data reflecting prior information. Improvements in performance of modal Bayes procedures relative to maximum likelihood estimation are illustrated for Rasch-type models. Improvements range from modest to dramatic, depending on the model and the number of items being considered. This research was supported by ONR Contract #00014-86-K0087. We wish to thank Sheng-Hui Chu and Dzung-Ji Lii for providing intelligent and energetic programming support for this article. We also thank one of the reviewers for pointing out several interesting and useful perspectives.
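A minimal sketch of the idea (my construction, not the article's procedure in detail): express a prior on a Rasch ability as artificial item responses appended to the data, so the posterior is formally a likelihood, and locate its mode by Newton-Raphson:

```python
import numpy as np

b = np.array([-1.0, 0.0, 1.0])        # known item difficulties
x = np.array([1, 1, 1])               # perfect score: the ML estimate diverges

# Augment with artificial data reflecting the prior: one "correct" and one
# "incorrect" response to a hypothetical item of difficulty 0.
b_aug = np.concatenate([b, [0.0, 0.0]])
x_aug = np.concatenate([x, [1, 0]])

theta = 0.0
for _ in range(50):
    p = 1 / (1 + np.exp(-(theta - b_aug)))      # Rasch response probabilities
    grad = np.sum(x_aug - p)                    # d logL / d theta
    hess = -np.sum(p * (1 - p))                 # d^2 logL / d theta^2
    step = grad / hess
    theta -= step                               # Newton-Raphson toward the mode
    if abs(step) < 1e-10:
        break

print("modal Bayes (augmented-data) estimate: %.3f" % theta)
```

With a perfect raw score the ordinary ML estimate is infinite; the artificial responses keep the modal estimate finite, illustrating the kind of improvement the abstract describes.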

14.
The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for a sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously, so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
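A simplified sketch of the sequential logic (pilot sample, then one-at-a-time checks); the normal-theory standard-error check below is my stand-in for the authors' risk-function stopping rule, whose implementation is in the MBESS package in R:

```python
import numpy as np

rng = np.random.default_rng(4)
draw = lambda: rng.gamma(shape=4.0, scale=2.0)   # unknown population (true CV = 0.5)

tol, pilot_n = 0.02, 30                          # tolerable SE; planned pilot size
data = [draw() for _ in range(pilot_n)]

while True:
    x = np.asarray(data)
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    cv = s / m
    # Rough large-sample SE of the CV, used here only as an illustrative check.
    se_cv = cv * np.sqrt(1 / (2 * (n - 1)) + cv**2 / n)
    if se_cv <= tol:
        break                                    # stopping rule satisfied
    data.append(draw())                          # otherwise: one more observation

print("final n = %d, CV = %.3f (SE ~ %.3f)" % (n, cv, se_cv))
```

Because sampling stops as soon as the criterion is met, the study cost is not higher than necessary for the tolerable sampling error.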

15.
This study investigated self-reported problems in a sample of help-seeking Vietnam veterans, comparing the veteran's own view with clinician and spouse perspectives, with the aim of examining convergence in reports across different informants. Veterans with PTSD (N = 459) were asked to list and rate their five most serious problems. Spouses and treating clinicians completed the same questionnaire in relation to the veteran. Rates of endorsement for each problem area, and levels of agreement between raters, were calculated. Veterans, spouses, and clinicians were all likely to rate anger as a high priority, with veterans also likely to nominate anxiety and depression. Spouses were likely to nominate more observable behavioural problems such as interpersonal difficulties and avoidance, while clinicians were likely to nominate indications of psychopathology, such as anxiety, depression, and intrusive thoughts. Agreement across raters was generally high, although interpretation of agreement levels was complex.

16.
We seek to understand how the climate of courtship predicts people's appraisals of the behavior of close friends and family members. To that end, we employ the relational turbulence model to examine the associations among intimacy, relational uncertainty, interference and facilitation from partners, and perceived network involvement. We conducted a cross-sectional study in which 260 participants reported their perceptions of how much network members help and hinder their courtships. As we hypothesized, people perceived the least helpfulness and the most hindrance from network members at moderate levels of intimacy. Relationship uncertainty mediated the concave curvilinear association between intimacy and perceived helpfulness from network members, but interference from partners mediated the convex curvilinear association between intimacy and perceived hindrance from network members. We discuss how our findings (a) contribute to the literature on perceived network involvement, (b) illuminate nuances in perceived hindrance from network members, (c) extend the relational turbulence model, and (d) suggest the utility of educating people about how the climate of courtship may color their views of network members.

17.
Factorial experimental designs have many potential advantages for behavioral scientists. For example, such designs may be useful in building more potent interventions by helping investigators to screen several candidate intervention components simultaneously and to decide which are likely to offer greater benefit before evaluating the intervention as a whole. However, sample size and power considerations may challenge investigators attempting to apply such designs, especially when the population of interest is multilevel (e.g., when students are nested within schools, or when employees are nested within organizations). In this article, we examine the feasibility of factorial experimental designs with multiple factors in a multilevel, clustered setting (i.e., of multilevel, multifactor experiments). We conduct Monte Carlo simulations to demonstrate how design elements (such as the number of clusters, the number of lower-level units, and the intraclass correlation) affect power. Our results suggest that multilevel, multifactor experiments are feasible for factor-screening purposes because of the economical properties of complete and fractional factorial experimental designs. We also discuss resources for sample size planning and power estimation for multilevel factorial experiments. These results are discussed from a resource management perspective, in which the goal is to choose a design that maximizes the scientific benefit using the resources available for an investigation.
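A Monte Carlo sketch in the spirit of the simulations described (the design sizes, effect sizes, and cluster-means analysis are my assumptions, not the article's code): estimated power for one factor's main effect in a cluster-randomized 2x2 factorial at a given intraclass correlation:

```python
import numpy as np
from scipy import stats

def mc_power(n_clusters=40, cluster_size=10, icc=0.05,
             effect_a=0.3, effect_b=0.2, alpha=0.05, reps=2000, seed=5):
    rng = np.random.default_rng(seed)
    tau = np.sqrt(icc)                     # between-cluster SD (total variance 1)
    sig = np.sqrt(1 - icc)                 # within-cluster SD
    # Balanced assignment of clusters to the four cells of the 2x2 design.
    a = np.tile([-0.5, -0.5, 0.5, 0.5], n_clusters // 4)
    b = np.tile([-0.5, 0.5, -0.5, 0.5], n_clusters // 4)
    X = np.column_stack([np.ones(n_clusters), a, b])
    df = n_clusters - X.shape[1]
    hits = 0
    for _ in range(reps):
        # Cluster means carry the random effect plus averaged within-cluster noise.
        ybar = (effect_a * a + effect_b * b
                + rng.normal(0, tau, n_clusters)
                + rng.normal(0, sig / np.sqrt(cluster_size), n_clusters))
        beta, res, *_ = np.linalg.lstsq(X, ybar, rcond=None)
        mse = res[0] / df
        se_a = np.sqrt(mse * np.linalg.inv(X.T @ X)[1, 1])
        t = beta[1] / se_a                 # test of factor A's main effect
        hits += (2 * stats.t.sf(abs(t), df) < alpha)
    return hits / reps

print("estimated power for factor A: %.2f" % mc_power())
```

Varying n_clusters, cluster_size, and icc in such a loop yields the kind of sensitivity analysis the article reports.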

18.
Structural equation modeling is a well-known technique for studying relationships among multivariate data. In practice, high dimensional nonnormal data with small to medium sample sizes are very common, and large sample theory, on which almost all modeling statistics are based, cannot be invoked for model evaluation with test statistics. The most natural method for nonnormal data, the asymptotically distribution free procedure, is not defined when the sample size is less than the number of nonduplicated elements in the sample covariance. Since normal theory maximum likelihood estimation remains defined for intermediate to small sample sizes, it may be invoked, but with the probable consequence of distorted performance in model evaluation. This article studies the small sample behavior of several test statistics that are based on the maximum likelihood estimator but are designed to perform better with nonnormal data. We aim to identify statistics that work reasonably well for a range of small sample sizes and distribution conditions. Monte Carlo results indicate that Yuan and Bentler's recently proposed F-statistic performs satisfactorily.

19.
We show that power and sample size tables developed by Cohen (1988, pp. 289–354, 381–389) produce incorrect estimates for factorial designs: power is underestimated, and sample size is overestimated. The source of this bias is shrinkage in the implied value of the noncentrality parameter, λ, caused by using Cohen's adjustment to n for factorial designs (pp. 365 and 396). The adjustment was intended to compensate for differences in the actual versus presumed (by the tables) error degrees of freedom; however, more accurate estimates are obtained if the tables are used without adjustment. The problems with Cohen's procedure were discovered while testing subroutines in DATASIM 1.2 for computing power and sample size in completely randomized, randomized-blocks, and split-plot factorial designs. The subroutines give the user the ability to generate power and sample size tables that are as easy to use as Cohen's, but that eliminate the conservative bias of his tables. We also implemented several improvements relative to "manual" use of Cohen's tables: (1) since the user can control the specific values of 1 − β, n, and f used on the rows and columns of the table, interpolation is never required; (2) exact as opposed to approximate solutions for the noncentral F distribution are employed; (3) solutions for factorial designs, including those with repeated measures factors, take into account the actual error degrees of freedom for the effect being tested; and (4) provision is made for the computation of power for applications involving the doubly noncentral F distribution.
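A minimal sketch of the unadjusted computation (my code, not DATASIM): exact power for a factorial-design effect from the noncentral F distribution, with λ = f²N and the actual error degrees of freedom for the effect being tested:

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, df_effect, df_error, n_total, alpha=0.05):
    lam = f_effect**2 * n_total                       # noncentrality parameter
    crit = f_dist.ppf(1 - alpha, df_effect, df_error) # central-F critical value
    return 1 - ncf.cdf(crit, df_effect, df_error, lam)

# Example: main effect of a 2-level factor in a 2x3 between-subjects design,
# 10 subjects per cell (N = 60, df_error = 60 - 6 = 54), medium effect f = .25.
print("power = %.3f" % anova_power(0.25, 1, 54, 60))
```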

20.
Two recent studies (C. R. Agnew, T. J. Loving, & S. M. Drigotas, 2001; T. K. MacDonald & M. Ross, 1999) investigated the relative ability of outsiders' (network members) and daters' perceptions of the daters' romance to predict relationship fate. Careful analysis of these studies suggests that the types of network members asked and what is asked significantly impact the prognostic ability of outsiders' perceptions. The current research replicates and extends this literature and highlights the challenges posed when collecting outsiders' perspectives of their friends' relationships. Daters and 2 friends (1 female, 1 male) were asked to provide their perceptions of the dating relationship on 2 indexes: a direct prediction of the likelihood that the relationship would last 6 months and an overall qualitative assessment of the dater's commitment. Results highlight the need for researchers to carefully attend to the instruments and samples employed when obtaining multiple perspectives of the same dating relationship.

