Similar Literature
20 similar documents found.
1.
Missing data, such as item responses in multilevel data, are ubiquitous in educational research settings. Researchers in the item response theory (IRT) context have shown that ignoring such missing data can create problems in the estimation of the IRT model parameters. Consequently, several imputation methods for dealing with missing item data have been proposed and shown to be effective when applied with traditional IRT models. Additionally, a nonimputation direct likelihood analysis has been shown to be an effective tool for handling missing observations in clustered data settings. This study investigates the performance of six simple imputation methods, which have been found to be useful in other IRT contexts, versus a direct likelihood analysis, in multilevel data from educational settings. Multilevel item response data were simulated on the basis of two empirical data sets, and some of the item scores were deleted such that they were either missing completely at random (MCAR) or missing at random (MAR). An explanatory IRT model was used for modeling the complete, incomplete, and imputed data sets. We showed that direct likelihood analysis of the incomplete data sets produced unbiased parameter estimates that were comparable to those from a complete data analysis. Multiple-imputation versions of the two-way mean and corrected item mean substitution methods displayed varying degrees of effectiveness in imputing data that in turn could produce unbiased parameter estimates. The simple random imputation, adjusted random imputation, item mean substitution, and regression imputation methods seemed to be less effective in imputing missing item scores in multilevel data settings.
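As an illustration of the simplest of these methods, here is a minimal sketch of two-way mean substitution with an optional stochastic term (the usual basis for its multiple-imputation variant). The function name and the persons-by-items NaN-matrix convention are mine, not the study's:

```python
import numpy as np

def two_way_imputation(X, stochastic=False, rng=None):
    """Impute missing item scores (NaN) in a persons-by-items matrix X using
    two-way mean substitution: person mean + item mean - grand mean. With
    stochastic=True, a normal residual is added to each imputed value."""
    X = np.asarray(X, float)
    rng = rng or np.random.default_rng()
    person_mean = np.nanmean(X, axis=1, keepdims=True)
    item_mean = np.nanmean(X, axis=0, keepdims=True)
    grand_mean = np.nanmean(X)
    imputed = person_mean + item_mean - grand_mean
    if stochastic:
        resid_sd = np.nanstd(X - imputed)  # spread of the observed residuals
        imputed = imputed + rng.normal(0.0, resid_sd, size=X.shape)
    return np.where(np.isnan(X), imputed, X)
```

For multiple imputation, the stochastic version would be run several times and the resulting parameter estimates pooled in the usual way.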

2.
A survey of residual analysis in behavior‐analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic‐polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test.
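A sketch of the proposed test under the reading given above: the effect size of a cubic fit is taken as R², the null sampling distribution is built from random standardized residuals of matching length, and the experimental values are compared to the null medians with a sign test. Function names and defaults are illustrative:

```python
import numpy as np
from scipy.stats import binomtest

def cubic_r2(resid):
    """Effect size (R^2) of a cubic polynomial fitted to one residual series."""
    resid = np.asarray(resid, float)
    x = np.arange(len(resid))
    fitted = np.polyval(np.polyfit(x, resid, deg=3), x)
    return 1.0 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)

def null_median_r2(n, reps=5000, rng=None):
    """Median R^2 of cubic fits to random standardized residuals of length n.
    (In practice these medians would be tabled once per n, not resimulated.)"""
    rng = rng or np.random.default_rng(1)
    return np.median([cubic_r2(rng.standard_normal(n)) for _ in range(reps)])

def residual_trend_sign_test(residual_sets):
    """Sign test: do the experimental effect sizes sit systematically above
    the medians expected for random residuals of the same lengths?"""
    above = sum(cubic_r2(r) > null_median_r2(len(r)) for r in residual_sets)
    return binomtest(above, n=len(residual_sets), p=0.5, alternative="greater")
```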

3.
Several studies aimed at testing the validity of Holland's hexagonal and Roe's circular models of interests showed results on the basis of which the null hypothesis of random arrangement can be rejected, and the investigators concluded that the tested models were supported. None of these studies, however, tested each model in its entirety. The present study is based on the assumption that the rejection of the null hypothesis of chance is not rigorous enough. Reanalysis of 13 data sets of published studies, using a more rigorous method, reveals that although the random null hypothesis can in fact be rejected in 11 data sets, the hexagonal-circular model was supported by only 2 data sets and was rejected by 11 data sets. The hierarchical model for the structure of vocational interests (I. Gati, Journal of Vocational Behavior, 1979, 15, 90–106) was submitted to an identical test and was supported by 6 out of 10 data sets, including 4 data sets that rejected the hexagonal-circular model. The predictions of each model that tend to be disconfirmed by empirical data were identified. The implications of the findings for the structure of interests and occupational choice are discussed.
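To make the "rejecting chance is not rigorous enough" point concrete, here is a minimal sketch of a whole-model test in the same spirit: count how many of a model's pairwise ordering predictions about the correlations actually hold, and compare that count to a relabeling null. The prediction-set encoding and the permutation null are my assumptions for illustration, not necessarily the study's exact procedure:

```python
import numpy as np

def prediction_hit_rate(R, predictions):
    """Proportion of a model's order predictions that hold in a correlation
    matrix R. Each prediction (i, j, k, l) asserts R[i, j] > R[k, l]."""
    return np.mean([R[i, j] > R[k, l] for (i, j, k, l) in predictions])

def permutation_p(R, predictions, reps=2000, rng=None):
    """Null hypothesis: variable labels are arbitrary. Relabel at random
    and recount how often the shuffled matrix does at least as well."""
    rng = rng or np.random.default_rng(0)
    n = R.shape[0]
    observed = prediction_hit_rate(R, predictions)
    hits = sum(
        prediction_hit_rate(R[np.ix_(perm, perm)], predictions) >= observed
        for perm in (rng.permutation(n) for _ in range(reps))
    )
    return (hits + 1) / (reps + 1)
```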

4.
To identify faking, bifactor models were applied to Big Five personality data in three studies of laboratory and applicant samples using within-subjects designs. The models were applied to homogeneous data sets from separate honest, instructed-faking, and applicant conditions, and to simulated applicant data sets containing random individual responses from the honest and faking conditions. Factor scores on the general factor of a bifactor model were found to be most highly related to response condition in both types of data sets. Across studies, domain factor scores from the faking conditions were less affected by faking in the measurement of the Big Five domains than were summated scale scores. We conclude that bifactor models are efficacious in assessing the Big Five domains while controlling for faking.
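For reference, the generic bifactor measurement equation behind these analyses (a sketch in standard notation, not the authors' exact specification) is

```latex
x_{ij} \;=\; \lambda^{G}_{j}\,\eta^{G}_{i} \;+\; \lambda^{S(j)}_{j}\,\eta^{S(j)}_{i} \;+\; \varepsilon_{ij},
```

where every item j loads on the general factor eta^G, which here absorbs the response-condition (faking) variance shared across items, and on exactly one of the five orthogonal domain factors eta^{S(j)}; domain factor scores are thereby purged of the variance the general factor captures.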

5.
The nonlinear random coefficient model has become increasingly popular as a method for describing individual differences in longitudinal research. Although promising, it is not utilized as often as it might be because software options are still somewhat limited. In this article we show that a specialized version of the model can be fit to data using SEM software. The specialization is to a model in which the parameters that enter the function linearly are random, whereas those that enter nonlinearly are common to all individuals. Although this kind of function is not as general as the fully nonlinear model, it is still applicable to many different data sets. Two examples are presented to show how the models can be estimated using popular SEM computer programs.
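A concrete instance of such a conditionally linear function (my example, not necessarily the article's) is an exponential trajectory whose level and amplitude vary randomly across persons while the rate is fixed:

```latex
y_{it} \;=\; \beta_{0i} \;+\; \beta_{1i}\, e^{-\gamma t} \;+\; \varepsilon_{it},
\qquad (\beta_{0i}, \beta_{1i})' \sim N(\mu_\beta, \Sigma_\beta).
```

Because beta_{0i} and beta_{1i} enter linearly, they can be treated as latent factors with loadings 1 and e^{-gamma t}, which is what makes the model expressible in SEM software.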

6.
This study quantified the effects of 5 factors postulated to influence performance ratings: the ratee's general level of performance, the ratee's performance on a specific dimension, the rater's idiosyncratic rating tendencies, the rater's organizational perspective, and random measurement error. Two large data sets, consisting of managers (n = 2,350 and n = 2,142) who received developmental ratings on 3 performance dimensions from 7 raters (2 bosses, 2 peers, 2 subordinates, and self), were used. Results indicated that idiosyncratic rater effects (62% and 53%) accounted for over half of the rating variance in both data sets. The combined effects of general and dimensional ratee performance (21% and 25%) were less than half the size of the idiosyncratic rater effects. Small perspective-related effects were found in boss and subordinate ratings but not in peer ratings. Average random error effects in the 2 data sets were 11% and 18%.

7.
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or hierarchical approach in which the variance-covariance matrix of the random effects is assumed to be positive definite with nonzero values for the variances. When the number of fixed effects and random effects is unknown, the predominant approach to model building is a step-up method in which one starts with a limited model (e.g., few fixed and random intercepts) and then additional fixed effects and random effects are added based on statistical tests. A model building approach that has received less attention in psychology and education is a top-down method. In the top-down method, the initial model has a single random intercept but is loaded with fixed effects (also known as an “overelaborate” model). Based on the overelaborate fixed effects model, the need for additional random effects is determined. There has been little if any examination of the ability of these methods to identify a true population model (i.e., identifying the model that generated the data). The purpose of this article is to examine the performance of the step-up and top-down model building approaches for exploratory longitudinal data analysis. Student achievement data sets from the Chicago longitudinal study serve as the populations in the simulations.
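A minimal sketch of one step-up comparison with statsmodels linear mixed models; the data frame and column names (y, time, id) are assumptions:

```python
import statsmodels.formula.api as smf
from scipy.stats import chi2

# df is assumed to have columns: y (outcome), time, and id (cluster/person).
def step_up(df):
    """One step-up comparison: random intercept vs. random intercept plus
    random slope, both fit by ML so the likelihood-ratio test is meaningful."""
    m0 = smf.mixedlm("y ~ time", df, groups=df["id"]).fit(reml=False)
    m1 = smf.mixedlm("y ~ time", df, groups=df["id"],
                     re_formula="~time").fit(reml=False)
    lr = 2 * (m1.llf - m0.llf)
    # Testing a variance at its boundary: the naive chi^2(2) p-value is
    # conservative; a 50:50 mixture of chi^2(1) and chi^2(2) is common.
    p_mix = 0.5 * chi2.sf(lr, 1) + 0.5 * chi2.sf(lr, 2)
    return m0, m1, lr, p_mix
```

The top-down method reverses the order of operations: the fixed-effects part is overelaborated first, then comparisons like the one above decide which random effects to retain.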

8.
9.
The partial derivatives of the squared error loss function for the metric unfolding problem have a unique geometry which can be exploited to produce unfolding methods with very desirable properties. This paper details a simple unidimensional unfolding method which uses the geometry of the partial derivatives to find conditional global minima; i.e., one set of points is held fixed and the global minimum is found for the other set. The two sets are then interchanged. The procedure is very robust. It converges to a minimum very quickly from a random or non-random starting configuration and is particularly useful for the analysis of large data sets with missing entries. This paper benefits from many conversations with and suggestions from Howard Rosenthal.
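The alternating structure is easy to sketch. The paper derives each conditional global minimum from the geometry of the partial derivatives; the grid search below is a crude stand-in for that step, kept only to make the fix-one-set/solve-the-other loop concrete (all names and defaults are illustrative):

```python
import numpy as np

def unfold_1d(delta, iters=50, rng=None):
    """Alternating unidimensional unfolding. delta[i, j] holds the observed
    distance between row point i and column point j; NaNs mark missing
    entries. Each half-step moves one set while the other is held fixed."""
    delta = np.asarray(delta, float)
    rng = rng or np.random.default_rng(0)
    n, m = delta.shape
    x = rng.standard_normal(n)          # row coordinates
    y = rng.standard_normal(m)          # column coordinates
    grid = np.linspace(-3, 3, 601)      # crude stand-in for the exact
                                        # conditional global minimum

    def best_position(targets, fixed):
        ok = ~np.isnan(targets)
        err = (np.abs(grid[:, None] - fixed[None, ok]) - targets[ok]) ** 2
        return grid[err.sum(axis=1).argmin()]

    for _ in range(iters):
        x = np.array([best_position(delta[i, :], y) for i in range(n)])
        y = np.array([best_position(delta[:, j], x) for j in range(m)])
    return x, y
```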

10.
The log-linear model for contingency tables expresses the logarithm of a cell frequency as an additive function of main effects, interactions, etc., in a way formally identical with an analysis of variance model. Exact statistical tests are developed to test hypotheses that specific effects or sets of effects are zero, yielding procedures for exploring relationships among qualitative variables which are suitable for small samples. The tests are analogous to Fisher's exact test for a 2 × 2 contingency table. Given a hypothesis, the exact probability of the obtained table is determined, conditional on fixed marginals or other functions of the cell frequencies. The sum of the probabilities of the obtained table and of all less probable ones is the exact probability to be considered in testing the null hypothesis. Procedures for obtaining exact probabilities are explained in detail, with examples given.
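For the 2 × 2 case this logic reduces to Fisher's exact test, which makes a compact illustration: condition on the margins, enumerate the admissible tables, and sum the probabilities of all tables no more probable than the one observed.

```python
from scipy.stats import hypergeom

def exact_p_2x2(a, b, c, d):
    """Two-sided exact p for the table [[a, b], [c, d]]: sum the hypergeometric
    probabilities of all same-margin tables no more probable than the observed."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    dist = hypergeom(n, col1, row1)     # cell (1,1) count is hypergeometric
    p_obs = dist.pmf(a)
    support = range(max(0, row1 + col1 - n), min(row1, col1) + 1)
    return sum(dist.pmf(k) for k in support if dist.pmf(k) <= p_obs + 1e-12)
```

scipy.stats.fisher_exact gives the same two-sided p-value for 2 × 2 tables; the paper's contribution is extending this conditioning argument to hypotheses about specific effects in larger log-linear models.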

11.
12.
Existing test statistics for assessing whether incomplete data represent a missing completely at random sample from a single population are based on a normal likelihood rationale and effectively test for homogeneity of means and covariances across missing data patterns. The likelihood approach cannot be implemented adequately if a pattern of missing data contains very few subjects. A generalized least squares rationale is used to develop parallel tests that are expected to be more stable in small samples. Three factors were varied for a simulation: number of variables, percent missing completely at random, and sample size. One thousand data sets were simulated for each condition. The generalized least squares test of homogeneity of means performed close to an ideal Type I error rate for most of the conditions. The generalized least squares test of homogeneity of covariance matrices and a combined test performed quite well also. Preliminary results on this research were presented at the 1999 Western Psychological Association convention, Irvine, CA, and in the UCLA Statistics Preprint No. 265 (http://www.stat.ucla.edu). The assistance of Ke-Hai Yuan and several anonymous reviewers is gratefully acknowledged.
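The flavor of the homogeneity-of-means test can be sketched as follows. A proper implementation estimates the mean and covariance by maximum likelihood (EM) or, as in the paper, sets up the comparison by generalized least squares; the available-case estimates below are a deliberate simplification, and the whole function is illustrative rather than the paper's statistic:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def mcar_mean_homogeneity(X):
    """Little-style chi-square for homogeneity of means across missing-data
    patterns. X: cases-by-variables array with NaNs marking missing values."""
    X = np.asarray(X, float)
    p = X.shape[1]
    mu = np.nanmean(X, axis=0)
    sigma = pd.DataFrame(X).cov(min_periods=2).to_numpy()  # pairwise cov
    patterns = pd.DataFrame(np.isnan(X)).groupby(list(range(p))).groups
    stat, dof = 0.0, 0
    for miss, rows in patterns.items():
        obs = ~np.array(miss)                  # observed variables in pattern
        if obs.sum() == 0:
            continue
        xbar = np.nanmean(X[np.array(rows)][:, obs], axis=0)
        diff = xbar - mu[obs]
        stat += len(rows) * diff @ np.linalg.pinv(sigma[np.ix_(obs, obs)]) @ diff
        dof += int(obs.sum())
    dof -= p
    return stat, dof, chi2.sf(stat, dof)
```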

13.
A Monte Carlo study was carried out in order to investigate the ability of ALSCAL to recover true structure inherent in simulated proximity measures when portions of the data are missing. All sets of simulated proximity measures were based on 30 stimuli and three dimensions, and selection of missing elements was done randomly. Properties of the simulated data varied according to (a) the number of individuals, (b) the level of random error, (c) the proportion of missing data, and (d) whether the same entries or different entries were deleted for each individual. Results showed that very accurate recovery of true distances, stimulus coordinates, and weight vectors could be achieved with as much as 60% missing data as long as sample size was sufficiently large and the level of random error was low.

14.
Williams (1974) has proposed a model that attempts to estimate true hypothesis behavior from inconsistent response patterns during sets of blank trials. This model includes an assumption of random responding during such blank-trial sets. Several kinds of data, however, suggest that inconsistent response patterns are produced by systematic processes. These patterns, therefore, may not contribute to a simple estimate of true hypothesis behavior.

15.
Many variables that are used in social and behavioural science research are ordinal categorical or polytomous variables. When more than one polytomous variable is involved in an analysis, observations are classified in a contingency table, and a commonly used statistic for describing the association between two variables is the polychoric correlation. This paper investigates the estimation of the polychoric correlation when the data set consists of misclassified observations. Two approaches for estimating the polychoric correlation have been developed. One assumes that the probabilities in relation to misclassification are known, and the other uses a double sampling scheme to obtain information on misclassification. A parameter estimation procedure is developed, and statistical properties of the estimates are discussed. The practicability and applicability of the proposed approaches are illustrated by analysing data sets that are based on real and generated data. Excel programmes with Visual Basic for Applications (VBA) have been developed to compute the estimate of the polychoric correlation and its standard error. The use of the structural equation modelling programme Mx to find parameter estimates in the double sampling scheme is discussed.
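As a baseline, here is a sketch of the standard two-step polychoric estimator without any misclassification adjustment; the paper's two approaches modify the cell probabilities in the step-2 likelihood using known misclassification probabilities or double-sampling information. Names and numerical shortcuts (finite stand-ins for infinite thresholds) are mine:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def polychoric(table):
    """Two-step polychoric correlation for an r-by-c contingency table."""
    table = np.asarray(table, float)
    n = table.sum()
    # Step 1: thresholds from the cumulative marginal proportions.
    a = norm.ppf(np.cumsum(table.sum(axis=1))[:-1] / n)
    b = norm.ppf(np.cumsum(table.sum(axis=0))[:-1] / n)
    a = np.concatenate(([-8.0], a, [8.0]))   # finite stand-ins for +/- infinity
    b = np.concatenate(([-8.0], b, [8.0]))

    def cell_prob(rho, i, j):
        """Bivariate-normal mass of cell (i, j) by inclusion-exclusion."""
        F = multivariate_normal([0, 0], [[1, rho], [rho, 1]]).cdf
        return (F([a[i + 1], b[j + 1]]) - F([a[i], b[j + 1]])
                - F([a[i + 1], b[j]]) + F([a[i], b[j]]))

    def negloglik(rho):
        # Step 2: multinomial log-likelihood of the observed cell counts.
        return -sum(table[i, j] * np.log(max(cell_prob(rho, i, j), 1e-12))
                    for i in range(table.shape[0])
                    for j in range(table.shape[1]))

    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x
```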

16.
Let each of several (generally interdependent) random vectors, taken separately, be influenced by a particular set of external factors. Under what kind of joint dependence of these vectors on the union of these factor sets can one say that each vector is selectively influenced by “its own” factor set? The answer proposed and elaborated in this paper is: One can say this if and only if one can find a factor-independent random vector given whose value the vectors in question are conditionally independent, with their conditional distributions selectively influenced by the corresponding factor sets. Equivalently, the random vectors should be representable as deterministic functions of “their” factor sets and of some mutually independent and factor-independent random variables, some of which may be shared by several of the functions.
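Stated compactly (notation mine, not the paper's): random vectors A and B, influenced by factor sets alpha and beta respectively, are selectively influenced by them if and only if there exist a random entity R whose distribution does not depend on (alpha, beta) and measurable functions f, g such that

```latex
A = f(\alpha, R), \qquad B = g(\beta, R).
```

Conditionally on R = r, the vectors are then independent, with conditional distributions depending only on "their own" factor sets, which is the equivalence stated in the abstract.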

17.
The previously unknown asymptotic distribution of Cook's distance in polytomous logistic regression is established as a linear combination of independent chi‐square random variables with one degree of freedom. An exhaustive approach to the analysis of influential covariates is developed and a new measure for the accuracy of predictions based on such a distribution is provided. Two examples with real data sets (one with continuous covariates and the other with both qualitative and quantitative covariates) are presented to illustrate the approach developed.
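In symbols, the limiting law established for Cook's distance (CD) has the form

```latex
CD \;\xrightarrow{\;d\;}\; \sum_{j=1}^{k} \lambda_j\, \chi^2_{1,j},
```

where the chi-square variables are independent with one degree of freedom each; the abstract characterizes the limit only as such a linear combination, so the weights lambda_j should be read as model-determined constants.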

18.
A marginalization model for the multidimensional unfolding analysis of ranking data is presented. A subject samples one of a number of random points that are multivariate normally distributed. The subject perceives the distances from the point to all the stimulus points fixed in the same multidimensional space. The distances are error perturbed in this perception process. He/she produces a ranking dependent on these error-perturbed distances. The marginal probability of a ranking is obtained according to this ranking model and by integrating out the subject (ideal point) parameters, assuming the above distribution. One advantage of the model is that the individual differences are captured using the posterior probabilities of subject points. Three sets of ranking data are analyzed by the model.
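The marginalization step can be written as follows (notation is generic, not the paper's): with stimulus points z_1, ..., z_J fixed in the space, an ideal point x, and an observed ranking pi,

```latex
P(\pi) \;=\; \int P\bigl(\pi \mid \lVert x - z_1 \rVert, \dots, \lVert x - z_J \rVert\bigr)\,
\phi(x;\, \mu, \Sigma)\, dx,
```

where the inner term is the probability that the error-perturbed distances produce ranking pi and phi is the multivariate normal density of ideal points; individual differences are then recovered from the posterior distribution of x given each subject's ranking.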

19.
PINDIS, as recently presented by Lingoes and Borg [1978], not only marks the latest development within the scope of individual differences scaling but may also be of benefit in some closely related topics, such as target analysis. Decisions on whether the various models available from PINDIS fit fallible data are relatively arbitrary, however, since a statistical theory of the fit measures is lacking. Using Monte Carlo simulation, expected fit measures as well as some related statistics were therefore obtained by scaling sets of 4(4)24 random configurations (i.e., 4 to 24 configurations in steps of 4) of 5(5)30 objects in 2, 3, and 4 dimensions (individual differences case) and by fitting one random configuration to a fixed random target for 5(5)30 objects (5 to 30 in steps of 5) in 2, 3, and 4 dimensions (target analysis case). Applications are presented.

20.
Discrete choice experiments—selecting the best and/or worst from a set of options—are increasingly used to provide more efficient and valid measurement of attitudes or preferences than conventional methods such as Likert scales. Discrete choice data have traditionally been analyzed with random utility models that have good measurement properties but provide limited insight into cognitive processes. We extend a well‐established cognitive model, which has successfully explained both choices and response times for simple decision tasks, to complex, multi‐attribute discrete choice data. The fits, and parameters, of the extended model for two sets of choice data (involving patient preferences for dermatology appointments, and consumer attitudes toward mobile phones) agree with those of standard choice models. The extended model also accounts for choice and response time data in a perceptual judgment task designed in a manner analogous to best–worst discrete choice experiments. We conclude that several research fields might benefit from discrete choice experiments, and that the particular accumulator‐based models of decision making used in response time research can also provide process‐level instantiations for random utility models.
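A toy illustration of the accumulator idea, based on the linear ballistic accumulator commonly used in this literature: each option races toward a threshold, and the first accumulator to finish is read as "best". How "worst" is modeled differs across specifications (a second, reversed race is common); reading it off the slowest accumulator here is purely illustrative, as are all parameter values:

```python
import numpy as np

def lba_best_worst(v, b=1.0, A=0.5, t0=0.2, rng=None):
    """One best-worst trial under a linear ballistic accumulator race.
    v: mean drift per option (attractiveness); b: threshold; A: start-point
    range; t0: non-decision time. Returns (best, worst, response_time)."""
    rng = rng or np.random.default_rng()
    v = np.asarray(v, float)
    drifts = np.maximum(rng.normal(v, 1.0), 1e-6)   # trial-to-trial drift noise
    starts = rng.uniform(0.0, A, size=v.size)
    times = (b - starts) / drifts                   # time to reach threshold
    order = np.argsort(times)
    return order[0], order[-1], t0 + times[order[0]]

# e.g., five phone profiles with the third the most attractive:
best, worst, rt = lba_best_worst([0.8, 1.0, 1.6, 0.9, 0.7])
```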
