Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Recent research has seen intraindividual variability become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. Intraindividual variability, as measured by individual standard deviations (ISDs), has shown unique prediction of several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD as compared with the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool.
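The reliability problem described above can be illustrated with a short simulation. This is not the paper's design; it is a minimal sketch, assuming normally distributed scores and hypothetical distributions for the true person means and true within-person SDs, that compares the split-half reliability of the intraindividual mean with that of the ISD.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_occasions = 200, 20

# Hypothetical population: each subject has a true mean and a true within-person SD.
true_mean = rng.normal(50, 10, n_subjects)
true_sd = rng.uniform(2, 8, n_subjects)

# Simulate repeated measurements, then split the occasions into two parallel halves.
scores = true_mean[:, None] + true_sd[:, None] * rng.standard_normal((n_subjects, n_occasions))
half1, half2 = scores[:, ::2], scores[:, 1::2]

# Split-half reliability: correlate each person-level statistic across the two halves.
r_mean = np.corrcoef(half1.mean(axis=1), half2.mean(axis=1))[0, 1]
r_isd = np.corrcoef(half1.std(axis=1, ddof=1), half2.std(axis=1, ddof=1))[0, 1]
print(r_mean, r_isd)  # the mean is typically far more reliable than the ISD
```

With only ten occasions per half, the ISD's split-half correlation falls well below the mean's, echoing the abstract's point that ISDs demand many measurement occasions.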

2.
Using Monte Carlo simulation, this study examined how the classification-accuracy index Entropy and its variants are affected by sample size, the number of latent classes, class separation, the number of indicators, and combinations thereof. The results show that: (1) although Entropy correlates highly with classification accuracy, its value varies with the number of classes, sample size, and number of indicators, so a single critical value is difficult to establish; (2) with other conditions held constant, larger samples yield smaller Entropy values and poorer classification accuracy; (3) the effect of class separation on classification accuracy is consistent across sample sizes and numbers of classes; (4) with small samples (N = 50-100), more indicators yield better Entropy results; and (5) across all conditions, Entropy is more sensitive to the misclassification rate than its variants.
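For readers unfamiliar with the index, the relative Entropy of a K-class solution is computed from the matrix of posterior class probabilities. A minimal sketch of the standard formula, using toy posterior matrices rather than the simulation conditions above:

```python
import numpy as np

def relative_entropy(post):
    """Relative Entropy index for a latent class solution.
    post: (N, K) array of posterior class probabilities (rows sum to 1).
    Returns a value in [0, 1]; 1 means perfectly crisp classification."""
    n, k = post.shape
    p = np.clip(post, 1e-12, 1.0)  # guard log(0)
    return 1.0 - (-(p * np.log(p)).sum()) / (n * np.log(k))

# Toy data: crisp vs. fuzzy class assignments for three cases, two classes.
crisp = np.array([[0.99, 0.01], [0.02, 0.98], [0.97, 0.03]])
fuzzy = np.array([[0.60, 0.40], [0.45, 0.55], [0.50, 0.50]])
print(relative_entropy(crisp), relative_entropy(fuzzy))
```

Crisp assignments drive the index toward 1; near-uniform posteriors drive it toward 0, which is why the abstract links its value to classification accuracy.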

3.
It has been suggested that hierarchical regression analysis provides an unambiguous conclusion with regard to the existence of moderator effects (Arnold & Evans, 1979). This paper examines the impact of correlated error among the dependent and independent variables in order to explore whether or not artificial interaction terms can be generated. A Monte Carlo study was performed to investigate the effects of correlated error on noninteraction and interaction models. The results are clear-cut. Artifactual interaction cannot be created; true interactions can be attenuated. Some practical suggestions are provided for drawing inferences from hierarchical regression analysis.

4.
Preliminary tests of equality of variances used before a test of location are no longer widely recommended by statisticians, although they persist in some textbooks and software packages. The present study extends the findings of previous studies and provides further reasons for discontinuing the use of preliminary tests. The study found Type I error rates of a two-stage procedure, consisting of a preliminary Levene test on samples of different sizes with unequal variances, followed by either a Student pooled-variances t test or a Welch separate-variances t test. Simulations disclosed that the two-stage procedure fails to protect the significance level and usually makes the situation worse. Earlier studies have shown that preliminary tests often adversely affect the size of the test, and also that the Welch test is superior to the t test when variances are unequal. The present simulations reveal that changes in Type I error rates are greater when sample sizes are smaller, when the difference in variances is slight rather than extreme, and when the significance level is more stringent. Furthermore, the validity of the Welch test deteriorates if it is used only on those occasions where a preliminary test indicates it is needed. Optimum protection is assured by using a separate-variances test unconditionally whenever sample sizes are unequal.
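The two-stage procedure is easy to reproduce. The sketch below is a minimal illustration with hypothetical sample sizes and SDs (not the paper's conditions); it estimates the Type I error rate of Levene-then-t against an unconditional Welch test when the smaller group has the larger variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n1, n2, sd1, sd2 = 10, 30, 2.0, 1.0   # unequal n and unequal variances (hypothetical)
alpha, reps = 0.05, 4000

rej_two_stage = rej_welch = 0
for _ in range(reps):
    # Same population mean in both groups, so every rejection is a Type I error.
    x = rng.normal(0.0, sd1, n1)
    y = rng.normal(0.0, sd2, n2)
    # Stage 1: Levene test; Stage 2: pooled t if "equal variances", else Welch.
    _, p_lev = stats.levene(x, y)
    if p_lev > alpha:
        _, p = stats.ttest_ind(x, y, equal_var=True)
    else:
        _, p = stats.ttest_ind(x, y, equal_var=False)
    rej_two_stage += p < alpha
    rej_welch += stats.ttest_ind(x, y, equal_var=False)[1] < alpha

print(rej_two_stage / reps, rej_welch / reps)
```

In this configuration the unconditional Welch test holds close to the nominal 5% level, while the two-stage rate drifts above it, consistent with the abstract's conclusion.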

5.
In a recent study, Emery found that MONANOVA is "superior" to regression as a method of approximating data, whether the measurement scale of the criterion variable is interval or ordinal. However, three major features of that comparison account for these results. With comparison methods that are fairer to both analytical procedures, the performances of regression and of MONANOVA were found to be close even when the measurement scale of the criterion variable is ordinal.

6.
A non-parametric procedure for Cattell's scree test is proposed, using the bootstrap method. Bentler and Yuan developed parametric tests for the linear trend of scree eigenvalues in principal component analysis. The proposed method is for cases where parametric assumptions are not realistic. We define the break in the scree trend in several ways, based on linear slopes defined with two or three consecutive eigenvalues, or all eigenvalues after the k largest. The resulting scree test statistics are evaluated under various data conditions, among which Gorsuch and Nelson's bootstrap CNG performs best and is reasonably consistent and efficient under leptokurtic and skewed conditions. We also examine the bias-corrected and accelerated bootstrap method for these statistics, and the bias correction is found to be too unstable to be useful. Using seven published data sets which Bentler and Yuan analysed, we compare the bootstrap approach to the scree test with the parametric linear trend test.

7.
Monte Carlo procedures are used to study the sampling distribution of the Hoyt reliability coefficient. Estimates of mean, variance, and skewness are made for the case of the Bower-Trabasso concept identification model. Given the Bower-Trabasso assumptions, the Hoyt coefficient of a particular concept identification experiment is shown to be statistically unlikely.

8.
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random effects (e.g., τ00, τ11) and standard errors of the corresponding growth parameter estimates (e.g., SEβ0, SEβ1). Overestimates of the standard errors led to lower statistical power in tests of the growth parameters. An unstructured Σ matrix under the mixed model framework generally led to underestimates of standard errors of the growth parameter estimates. Underestimates of the standard errors led to inflation of the Type I error rate in tests of the growth parameters. Implications of the compensatory relationship between the random effects of the growth parameters and the longitudinal error structure for model specification were discussed.

9.
The conditional power (CP) of the randomization test (RT) was investigated in a simulation study in which three different single-case effect size (ES) measures were used as the test statistics: the mean difference (MD), the percentage of nonoverlapping data (PND), and the nonoverlap of all pairs (NAP). Furthermore, we studied the effect of the experimental design on the RT's CP for three different single-case designs with rapid treatment alternation: the completely randomized design (CRD), the randomized block design (RBD), and the restricted randomized alternation design (RRAD). As a third goal, we evaluated the CP of the RT for three types of simulated data: data generated from a standard normal distribution, data generated from a uniform distribution, and data generated from a first-order autoregressive Gaussian process. The results showed that the MD and NAP perform very similarly in terms of CP, whereas the PND performs substantially worse. Furthermore, the RRAD yielded marginally higher power in the RT, followed by the CRD and then the RBD. Finally, the power of the RT was almost unaffected by the type of the simulated data. On the basis of the results of the simulation study, we recommend at least 20 measurement occasions for single-case designs with a randomized treatment order that are to be evaluated with an RT using a 5% significance level. Furthermore, we do not recommend use of the PND, because of its low power in the RT.
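The two nonoverlap statistics compared above are simple to compute. A minimal sketch of the standard definitions, with toy data (the example values are illustrative only):

```python
import numpy as np

def nap(baseline, treatment):
    """Nonoverlap of all pairs: share of (baseline, treatment) pairs in which
    the treatment value exceeds the baseline value, with ties counting half."""
    b = np.asarray(baseline, float)[:, None]
    t = np.asarray(treatment, float)[None, :]
    return ((t > b).sum() + 0.5 * (t == b).sum()) / (b.size * t.size)

def pnd(baseline, treatment):
    """Percentage of nonoverlapping data: share of treatment points that
    exceed the single highest baseline point."""
    return np.mean(np.asarray(treatment) > np.max(baseline))

base, treat = [3, 5, 4, 6], [7, 6, 8, 9]
print(nap(base, treat), pnd(base, treat))  # 0.96875 0.75
```

PND's dependence on a single extreme baseline point is one reason it behaves worse than NAP as a test statistic: one unusual baseline observation can erase overlap information that NAP retains.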

10.
A small Monte Carlo study examined the performance of a form of taxometric analysis (the MAXCOV procedure) with fuzzy data sets. These combine taxonic (categorical) and nontaxonic (continuous) features, containing a subset of cases with intermediate degrees of category membership. Fuzzy data sets tended to yield taxonic findings on plot inspection and two popular consistency tests, even when the degree of fuzziness, i.e., the proportion of intermediate cases, was large. These results suggest that fuzzy categories represent a source of pseudotaxonic inferences, if category membership is understood in the usual binary, "either-or" fashion. This in turn implies that dichotomous causes cannot be confidently inferred when taxometric analyses yield apparently taxonic findings.

11.
12.
While conducting intervention research, researchers and practitioners are often interested in how the intervention functions not only at the group level, but also at the individual level. One way to examine individual treatment effects is through multiple-baseline studies analyzed with multilevel modeling. This analysis allows for the construction of confidence intervals, which are strongly recommended in the reporting guidelines of the American Psychological Association. The purpose of this study was to examine the accuracy of confidence intervals of individual treatment effects obtained from multilevel modeling of multiple-baseline data. Monte Carlo methods were used to examine performance across conditions varying in the number of participants, the number of observations per participant, and the dependency of errors. The accuracy of the confidence intervals depended on the method used, with the greatest accuracy being obtained when multilevel modeling was coupled with the Kenward-Roger method of estimating degrees of freedom.

13.
The magnetic properties of a Mn-doped armchair ZnO nanotube have been studied using Monte Carlo simulation. The variation of zero-field-cooled and field-cooled magnetisation with reduced temperatures for different values of the dilution x (where x is the Mn concentration: Zn1-xMnxO) are given. The freezing temperatures and magnetisation vs. crystal field are calculated for different dilutions x. Finally, the hysteresis loops for different dilutions and temperatures are given for a fixed reduced temperature and crystal field. Superparamagnetic behaviour is observed for small values of x and low temperatures.

14.
While the effect of selection in predictive validity studies has long been recognized and discussed in psychometric studies, little consideration has been given to this problem in the context of latent variable models. In a recent paper, Muthén & Hsu (1993) proposed and compared estimators of predictive validity of a multifactorial test. Both selectivity and measurement error were considered in the estimation of predictive validity. The purpose of the present paper is to expand on Muthén & Hsu (1993) by examining and comparing the sampling behaviour of three estimators for predictive validity, LQL (listwise, quasi-likelihood estimator), FQL (full, quasi-likelihood estimator) and FS (factor score estimator), using a Monte Carlo approach. Effects of selection procedures, selection ratios and sample sizes on the sampling behaviours of the estimators are also investigated. The results show that FQL and FS are the two preferred estimators and each has different strengths and weaknesses. A real data application is presented to illustrate the practical implementation of the estimators.

15.
16.
We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.

17.
Two analytical procedures for identifying young children as categorizers, the Monte Carlo Simulation and the Probability Estimate Model, were compared. Using a sequential touching method, children aged 12, 18, 24, and 30 months were given seven object sets representing different levels of categorical classification. From their touching performance, the probability that children were categorizing was then determined independently using Monte Carlo Simulation and the Probability Estimate Model. The two analytical procedures resulted in different percentages of children being classified as categorizers. Results using the Monte Carlo Simulation were more consistent with group-level analyses than results using the Probability Estimate Model. These findings recommend using the Monte Carlo Simulation for determining individual categorizer classification.
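A Monte Carlo classification of this kind can be sketched as follows: compare a child's observed tendency to touch same-category objects consecutively against the distribution produced by random touching. The run-length statistic and toy sequence below are illustrative assumptions, not the published procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_run_length(seq):
    """Mean length of runs of consecutive touches to the same category."""
    runs, length = [], 1
    for a, b in zip(seq, seq[1:]):
        if a == b:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)

def p_value_by_simulation(observed_seq, n_categories=2, n_sims=5000):
    """Monte Carlo p-value: how often does random touching produce a mean
    run length at least as large as the observed one?"""
    obs = mean_run_length(observed_seq)
    n = len(observed_seq)
    sims = [mean_run_length(list(rng.integers(n_categories, size=n)))
            for _ in range(n_sims)]
    return np.mean([s >= obs for s in sims])

# A child touching category A four times, then category B four times (toy data).
print(p_value_by_simulation([0, 0, 0, 0, 1, 1, 1, 1]))
```

A small p-value indicates that the observed clustering of touches is unlikely under random touching, so the child would be classified as a categorizer at that threshold.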

18.
Our aphasic subject appeared to demonstrate disorders related to phonological sequential constraint rules. In particular, he seemed to have difficulty in distinguishing between accidental and systematic gaps. Although he had little trouble repeating actual English lexical items, he was unable in the majority of cases to repeat possible but non-existent ones (as well as impossible ones). A reading test was administered in order to control for errors related to the modality of the repetition test. In order for us to ensure that his disorder was related to his aphasia, three normal subjects performed both the repetition test and the reading test. The aphasic's results were significantly different from theirs. We conclude that the aphasic's lexical redundancy system was impaired and that the phonological lexical redundancy rules of generative phonology form a functional neuropsychological unit, the proper function of which can be selectively disrupted by brain damage.

19.
The article reports the findings from a Monte Carlo investigation examining the impact of faking on the criterion-related validity of Conscientiousness for predicting supervisory ratings of job performance. Based on a review of faking literature, 6 parameters were manipulated in order to model 4,500 distinct faking conditions (5 [magnitude] x 5 [proportion] x 4 [variability] x 3 [faking-Conscientiousness relationship] x 3 [faking-performance relationship] x 5 [selection ratio]). Overall, the results indicated that validity change is significantly affected by all 6 faking parameters, with the relationship between faking and performance, the proportion of fakers in the sample, and the magnitude of faking having the strongest effect on validity change. Additionally, the association between several of the parameters and changes in criterion-related validity was conditional on the faking-performance relationship. The results are discussed in terms of their practical and theoretical implications for using personality testing for employee selection.
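The mechanism by which faking attenuates criterion-related validity can be sketched with a toy model. The proportion of fakers, the 1-SD magnitude, and the assumption that faking is unrelated to performance are all illustrative choices, not the paper's 4,500 conditions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_validity = 20000, 0.25

# Honest Conscientiousness scores and job performance correlated at 0.25.
true_c = rng.standard_normal(n)
perf = true_validity * true_c + np.sqrt(1 - true_validity**2) * rng.standard_normal(n)

# Hypothetical faking model: 30% of applicants inflate their score by 1 SD,
# with faking unrelated to performance.
fakers = rng.random(n) < 0.30
observed = true_c + np.where(fakers, 1.0, 0.0)

r_honest = np.corrcoef(true_c, perf)[0, 1]
r_faked = np.corrcoef(observed, perf)[0, 1]
print(round(r_honest, 3), round(r_faked, 3))
```

Because faking adds variance to the observed scores without adding performance-related covariance, the observed validity shrinks; tying faking to performance (one of the paper's manipulated parameters) would change the size and even the direction of that shift.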

20.
Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus has not been reached on the most appropriate method. One method that has been proposed involves multilevel modeling. For this study, we used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach for the meta-analysis of single-case data. Specifically, we examined the fixed effects (e.g., the overall average treatment effect) and the variance components (e.g., the between-person within-study variance in the treatment effect) in a three-level multilevel model (repeated observations nested within individuals, nested within studies). More specifically, bias of the point estimates, confidence interval coverage rates, and interval widths were examined as a function of the number of primary studies per meta-analysis, the modal number of participants per primary study, the modal series length per primary study, the level of autocorrelation, and the variances of the error terms. The degree to which the findings of this study are supportive of using Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach to meta-analyzing single-case data depends on the particular parameter of interest. Estimates of the average treatment effect tended to be unbiased and produced confidence intervals that tended to overcover, but did come close to the nominal level as Level-3 sample size increased. Conversely, estimates of the variance in the treatment effect tended to be biased, and the confidence intervals for those estimates were inaccurate.
