Similar Articles
20 similar articles found (search time: 31 ms)
1.
This article derives a standard normal-based power method polynomial transformation for Monte Carlo simulation studies, approximating distributions, and fitting distributions to data based on the method of percentiles. The proposed method is used primarily when (1) conventional (or L) moment-based estimators such as skew (or L-skew) and kurtosis (or L-kurtosis) are unknown or (2) data are unavailable but percentiles are known (e.g., standardized test score reports). The proposed transformation also has the advantage that solutions to the polynomial coefficients are available in simple closed form, which obviates the need to solve equations numerically. A procedure is also described for simulating power method distributions with specified medians, inter-decile ranges, left-right tail-weight ratios (skew function), tail-weight factors (kurtosis function), and Spearman correlations. The Monte Carlo results presented in this study indicate that the estimators based on the method of percentiles are substantially superior to their corresponding conventional product-moment estimators in terms of relative bias. It is also shown that the percentile power method can be modified to generate nonnormal distributions with specified Pearson correlations. An illustration shows the applicability of the percentile power method to publicly available statistics from the Idaho state educational assessment.
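As a rough sketch of the transformation itself (not the article's percentile-based coefficient solutions), a third-order power method polynomial maps a standard normal deviate through a cubic; the coefficients below are illustrative placeholders for the closed-form values the article derives.

```python
import numpy as np

# Illustrative third-order power method transform: Y = c0 + c1*Z + c2*Z**2 + c3*Z**3.
# These coefficients are placeholders; the article obtains them in closed form
# from percentile-based quantities (median, inter-decile range, left-right
# tail-weight ratio, tail-weight factor).
c0, c1, c2, c3 = 0.0, 0.90, 0.10, 0.05

rng = np.random.default_rng(42)
z = rng.standard_normal(100_000)            # standard normal base variates
y = c0 + c1 * z + c2 * z**2 + c3 * z**3     # nonnormal power method variates

print(np.percentile(y, [10, 50, 90]))       # inspect the resulting percentiles
```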

2.
A nonparametric test of dispersion with paired replicates data is described which involves jackknifing logarithmic transformations of the ratio of variance estimates for the pre- and post-treatment populations. Results from a Monte Carlo simulation show that the test performs well under H0 and has good power properties. Examples are given of applying the procedure to psychiatric data. A referee called our attention to valuable references related to our work and suggested the inclusion of competing jackknife procedures in our studies.
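A minimal sketch of the core idea, assuming equal-length pre- and post-treatment vectors and the standard Tukey pseudo-value jackknife; the authors' exact statistic may differ in detail.

```python
import numpy as np

def jackknife_log_var_ratio(pre, post):
    """Jackknife the log of the ratio of pre/post variance estimates for
    paired replicates; returns the jackknife estimate and a t-like statistic.
    A sketch of the general construction, not the authors' exact test."""
    n = len(pre)
    full = np.log(np.var(pre, ddof=1) / np.var(post, ddof=1))
    loo = np.array([np.log(np.var(np.delete(pre, i), ddof=1) /
                           np.var(np.delete(post, i), ddof=1))
                    for i in range(n)])
    pseudo = n * full - (n - 1) * loo        # Tukey pseudo-values
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return est, est / se

pre = np.array([12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 11.9, 10.7])
post = np.array([10.9, 10.1, 10.6, 10.3, 11.2, 9.9, 10.8, 10.4])
print(jackknife_log_var_ratio(pre, post))
```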

3.
While conducting intervention research, researchers and practitioners are often interested in how the intervention functions not only at the group level, but also at the individual level. One way to examine individual treatment effects is through multiple-baseline studies analyzed with multilevel modeling. This analysis allows for the construction of confidence intervals, which are strongly recommended in the reporting guidelines of the American Psychological Association. The purpose of this study was to examine the accuracy of confidence intervals of individual treatment effects obtained from multilevel modeling of multiple-baseline data. Monte Carlo methods were used to examine performance across conditions varying in the number of participants, the number of observations per participant, and the dependency of errors. The accuracy of the confidence intervals depended on the method used, with the greatest accuracy being obtained when multilevel modeling was coupled with the Kenward-Roger method of estimating degrees of freedom.

4.
Serlin RC. Psychological Methods, 2000, 5(2), 230-240
Monte Carlo studies provide the information needed to help researchers select appropriate analytical procedures under design conditions in which the underlying assumptions of the procedures are not met. In Monte Carlo studies, the 2 errors that one could commit involve (a) concluding that a statistical procedure is robust when it is not or (b) concluding that it is not robust when it is. In previous attempts to apply standard statistical design principles to Monte Carlo studies, the less severe of these errors has been wrongly designated the Type I error. In this article, a method is presented for controlling the appropriate Type I error rate; the determination of the number of iterations required in a Monte Carlo study to achieve desired power is described; and a confidence interval for a test's true Type I error rate is derived. A robustness criterion is also proposed that is a compromise between W. G. Cochran's (1952) and J. V. Bradley's (1978) criteria.
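The confidence interval for a test's true Type I error rate follows from treating the rejection count across iterations as binomial; a minimal sketch with made-up counts (the article derives its own interval and robustness criterion):

```python
import numpy as np
from scipy import stats

n_iter, n_rej = 10_000, 540   # hypothetical Monte Carlo iterations and rejections
alpha_hat = n_rej / n_iter    # empirical Type I error rate

# Normal-approximation 95% CI for the true Type I error rate.
z = stats.norm.ppf(0.975)
half = z * np.sqrt(alpha_hat * (1 - alpha_hat) / n_iter)
print(f"alpha_hat = {alpha_hat:.4f}, "
      f"95% CI = ({alpha_hat - half:.4f}, {alpha_hat + half:.4f})")
```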

5.
Beyond the typical design factors that impact a study’s power (e.g., participant sample size), planning longitudinal research involves additional considerations such as assessment frequency and participant retention. Because this type of research relies so strongly on individual commitment, investigators must be judicious in determining how much information is necessary to study the phenomena in question; collecting too little information will render the data less useful, but requiring excessive participant investment will likely lower participation rates. We conducted a simulation study to empirically examine statistical power and the trade-off between assessment quality (as a function of instrument length) and assessment frequency across a number of sample sizes with intermittently missing data or attrition. Results indicated that reductions in power resulting from shorter, less reliable measurements can be at least somewhat offset by increasing assessment frequency. Because study planning involves a number of factors competing for finite resources, equations were derived to find the balance points between pairs of design characteristics affecting statistical power. These equations allow researchers to calculate the amount that a particular design factor (e.g., assessment frequency) would need to increase to result in the same improvement in power as increasing an alternative factor (e.g., measurement reliability). Applications for the equations are discussed.

6.
Fang Jie & Wen Zhonglin. Psychological Science (心理科学), 2018, (4), 962-967
This study compared the performance of the Bayesian method, the Monte Carlo method, and the parametric bootstrap method in 2-1-1 multilevel mediation analysis. The results showed that (1) the Bayesian method with informative priors produced the most accurate point and interval estimates of the mediation effect; (2) the Bayesian method without prior information, the Monte Carlo method, and the bias-corrected and uncorrected parametric bootstrap methods performed comparably on point and interval estimation of the mediation effect, although the Monte Carlo method performed slightly better than the other three on Type I error rate and interval width, while the bias-corrected bootstrap method performed slightly better on statistical power but worst on Type I error rate. The results suggest using the Bayesian method when prior information is available, and the Monte Carlo method when it is not.
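For context, the Monte Carlo method compared here is typically implemented by simulating the indirect effect a*b from normal approximations to the sampling distributions of the path estimates; a minimal sketch with made-up estimates and standard errors:

```python
import numpy as np

a_hat, se_a = 0.40, 0.10   # hypothetical path a estimate and standard error
b_hat, se_b = 0.30, 0.08   # hypothetical path b estimate and standard error

rng = np.random.default_rng(1)
a = rng.normal(a_hat, se_a, 20_000)          # draws for path a
b = rng.normal(b_hat, se_b, 20_000)          # draws for path b
lo, hi = np.percentile(a * b, [2.5, 97.5])   # percentile interval for a*b
print(f"95% Monte Carlo CI for the indirect effect: ({lo:.3f}, {hi:.3f})")
```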

7.
This paper proposes test statistics based on the likelihood ratio principle for testing equality of proportions in correlated data with additional incomplete samples. Powers of these tests are compared through Monte Carlo simulation with those of tests proposed recently by Ekbohm (based on an unbiased estimator) and Campbell (based on a Pearson chi-squared-type statistic). Even though tests based on the maximum likelihood principle are theoretically expected to be superior to others, at least asymptotically, results from our simulations show that the gain in power is only slight.

8.
A classic data-analytic problem is the statistical evaluation of the distributional form of interval-scale scores. The investigator may need to know whether the scores originate from a single Gaussian distribution, from a mixture of Gaussian distributions, or from a different probability distribution. The relative merits of extant goodness-of-fit metrics are discussed. Monte Carlo power analyses are provided for several of the more powerful goodness-of-fit metrics.

9.
Structural equation modeling (SEM) is one of the most important statistical tools in psychology, management, sociology, and related fields. However, many studies that use SEM neglect to analyze and report the statistical power of the method, which to some extent weakens the evidential value of their results. There are three main approaches to power analysis for SEM: the Satorra-Saris method, the MacCallum method, and the Monte Carlo method. The Satorra-Saris method suits situations where the alternative model is clearly specified, the object of the test is relatively simple, and the test is based on the χ2 distribution; the MacCallum method suits χ2-based tests of model fit when the alternative model is not specified; and the Monte Carlo method suits situations where the object of the test is relatively complex and the test relies on simulation or resampling. In practice, researchers should first determine the purpose of the test, the testing method, and whether a clearly specified alternative model exists, and choose the analysis method accordingly.
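The Monte Carlo approach boils down to simulating data under the alternative, testing repeatedly, and taking the rejection rate as the power estimate. A self-contained sketch of that loop follows; a two-sample t-test stands in for the SEM fit, which in practice would be performed by an SEM package inside the loop.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_iter, n, effect = 2_000, 50, 0.5   # iterations, group size, true effect (made up)

rejections = 0
for _ in range(n_iter):
    x = rng.normal(0.0, 1.0, n)               # data simulated under the alternative
    y = rng.normal(effect, 1.0, n)
    if stats.ttest_ind(x, y).pvalue < 0.05:   # the test of interest
        rejections += 1

print(f"Estimated power: {rejections / n_iter:.3f}")
```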

10.
This study documents how the use of A. I. Huffcutt & W. A. Arthur's (1995) sample adjusted meta-analytic deviancy (SAMD) statistic for identifying outliers in correlational meta-analyses results in inaccuracies in mean r. Monte Carlo simulations found that use of the SAMD resulted in the overidentification of small relative to large correlations as outliers. Furthermore, this tendency to overidentify small correlations was found to increase as the magnitude of the population correlation increased and resulted in mean rs that overestimated the population correlation. The implications for meta-analysts are discussed, and 2 possible solutions are offered.

11.
Although many nonlinear models of cognition have been proposed in the past 50 years, there has been little consideration of corresponding statistical techniques for their analysis. In analyses with nonlinear models, unmodeled variability from the selection of items or participants may lead to asymptotically biased estimation. This asymptotic bias, in turn, renders inference problematic. We show, for example, that a signal detection analysis of recognition memory data leads to asymptotic underestimation of sensitivity. To eliminate asymptotic bias, we advocate hierarchical models in which participant variability, item variability, and measurement error are modeled simultaneously. By accounting for multiple sources of variability, hierarchical models yield consistent and accurate estimates of participant and item effects in recognition memory. This article is written in tutorial format; we provide an introduction to Bayesian statistics, hierarchical modeling, and Markov chain Monte Carlo computational techniques.
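The MCMC machinery the tutorial introduces can be illustrated with a toy random-walk Metropolis sampler for the mean of normal data under a flat prior; this is only a sketch of the sampler, not the article's hierarchical recognition-memory model.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, 50)              # toy data with true mean 1.0

def log_post(mu):
    # Flat prior on mu with a unit-variance normal likelihood: the log
    # posterior equals the log likelihood up to an additive constant.
    return -0.5 * np.sum((data - mu) ** 2)

chain, mu = [], 0.0
for _ in range(5_000):
    prop = mu + rng.normal(0.0, 0.5)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                            # Metropolis accept step
    chain.append(mu)

print(np.mean(chain[1_000:]))                # posterior mean after burn-in
```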

12.
The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were extracted from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the higher power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, results also identified that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.

13.
Monte Carlo techniques were used to evaluate the performance of an on-line paired-comparisons data collection procedure that makes use of a common computer sorting algorithm. The results revealed that the sorting method can reduce the number of trials per subject substantially even when a considerable amount of random error is present. While a complete paired-comparisons design requires N(N−1)/2 trials (where N is the number of objects), the sorting procedure requires a theoretical minimum of N log2(N) trials. The savings in the number of trials consequently increase with N. Furthermore, the negative effect of random error on the final ordering of the data from the sorting method is small and decreases with the number of stimuli. The data from a small empirical study reinforce the Monte Carlo observations. It is recommended that the sorting method be used in place of the complete paired-comparisons procedure whenever a substantial number of stimuli are included in the design.
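The trial-count arithmetic is easy to check directly: the complete design needs N(N−1)/2 comparisons, while the sorting approach needs on the order of N log2(N). A quick sketch of the comparison:

```python
import math

# Complete paired comparisons, N(N-1)/2, versus the theoretical minimum
# for the sorting procedure, N*log2(N).
for n in (8, 16, 32, 64):
    complete = n * (n - 1) // 2
    sorting = n * math.log2(n)
    print(f"N={n:3d}: complete={complete:5d}  sorting~{sorting:6.0f}")
```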

14.
Implementing large-scale empirical studies can be very expensive. Therefore, it is useful to optimize study designs without losing statistical power. In this paper, we show how study designs can be improved without changing statistical power by defining power equivalence, a relation between structural equation models (SEMs) that holds true if two SEMs have the same power on a likelihood ratio test to detect a given effect. We show systematic operations of SEMs that maintain power, and give an algorithm that efficiently reduces SEMs to power-equivalent models with a minimal number of observed parameters. In this way, optimal study designs can be found without reducing statistical power. Furthermore, the algorithm can be used to drastically increase the speed of power computations when using Monte Carlo simulations or approximation methods.

15.
Determining a priori power for univariate repeated measures (RM) ANOVA designs with two or more within-subjects factors that have different correlational patterns between the factors is currently difficult due to the unavailability of accurate methods to estimate the error variances used in power calculations. The main objective of this study was to determine the effect of the correlation between the levels in one RM factor on the power of the other RM factor. Monte Carlo simulation procedures were used to estimate power for the A, B, and AB tests of a 2×3, a 2×6, a 2×9, a 3×3, a 3×6, and a 3×9 design under varying experimental conditions of effect size (small, medium, and large), average correlation (.4 and .8), alpha (.01 and .05), and sample size (n = 5, 10, 15, 20, 25, and 30). Results indicated that the greater the magnitude of the differences between the average correlation among the levels of Factor A and the average correlation in the AB matrix, the lower the power for Factor B (and vice versa). Equations for estimating the error variance of each test of the two-way model were constructed by examining power and mean square error trends across different correlation matrices. Support for the accuracy of these formulae is given, thus allowing for direct analytic power calculations in future studies.

16.
This research concerns the estimation of polychoric correlations in the context of fitting structural equation models to observed ordinal variables by multistage estimation. The first main contribution of this research is to propose and evaluate a Monte Carlo estimator for the asymptotic covariance matrix (ACM) of the polychoric correlation estimates. In multistage estimation, the ACM plays a prominent role, as overall test statistics, derived fit indices, and parameter standard errors all depend on this quantity. The ACM, however, must itself be estimated. Established approaches to estimating the ACM use a sample-based version, which can yield poor estimates with small samples. A simulation study demonstrates that the proposed Monte Carlo estimator can be more efficient than its sample-based counterpart. This leads to better calibration for established test statistics, in particular with small samples. The second main contribution of this research is a further exploration of the consequences of violating the normality assumption for the underlying response variables. We show the consequences depend on the type of nonnormality, and the number and location of thresholds. The simulation study also demonstrates that overall test statistics have little power to detect the studied forms of nonnormality, regardless of the ACM estimator.

17.
In a variety of measurement situations, the researcher may wish to compare the reliabilities of several instruments administered to the same sample of subjects. This paper presents eleven statistical procedures which test the equality of m coefficient alphas when the sample alpha coefficients are dependent. Several of the procedures are derived in detail, and numerical examples are given for two. Since all of the procedures depend on approximate asymptotic results, Monte Carlo methods are used to assess the accuracy of the procedures for sample sizes of 50, 100, and 200. Both control of Type I error and power are evaluated by computer simulation. Two of the procedures are unable to control Type I errors satisfactorily. The remaining nine procedures perform properly, but three are somewhat superior in power and Type I error control. A more detailed version of this paper is also available.

18.
Multilevel structural equation modeling (MSEM) has been proposed as a valuable tool for estimating mediation in multilevel data and has known advantages over traditional multilevel modeling, including conflated and unconflated techniques (CMM and UMM). Recent methodological research has focused on comparing the three methods for 2-1-1 designs, but with regard to 1-1-1 mediation designs, there are significant gaps in the published literature that prevent applied researchers from making educated decisions about which model to employ in their own research design. A Monte Carlo study was performed to compare MSEM, UMM, and CMM on relative bias, confidence interval coverage, Type I error, and power in a 1-1-1 model with random slopes under varying data conditions. Recommendations for applied researchers are discussed, and an empirical example provides context for the three methods.

19.
Multinomial processing tree models can provide measures of underlying cognitive processes. In this paper, the Chechile [Chechile, R. A. (2004). New multinomial models for the Chechile-Meyer task. Journal of Mathematical Psychology, 48, 364-384] 6P model is described and applied to several applications involving clinical populations. The model provides separate measures of storage and retrieval. Monte Carlo studies were conducted to examine the relative accuracy of two methods for obtaining an overall condition estimate for the 6P model, i.e., averaging estimates found for individuals versus pooling the multinomial frequency data before estimating the model parameters. The sampling studies showed that pooling the frequencies resulted in more accurate parameter estimates. However, psychological assessment in clinical psychology requires precise measurement on an individual basis. In order to recover information about individuals from pooled frequency information, a modified jackknife method was advanced. The jackknife method is based on a contrast between the overall pooled frequency information and the pooled frequency without the observations from a single individual. Another series of Monte Carlo simulations demonstrates that the new jackknife method resulted in better recovery of the correct individual parameter values relative to estimates based only on the data from the individual. Finally, the 6P model was used to examine the data from two previously reported studies with clinical populations. One application addressed the effect of alcohol-induced amnesia, and the other dealt with Korsakoff amnesia. In both cases the pattern of storage and retrieval measurements clarified the underlying storage and retrieval differences between the clinical group and the control group.
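The modified jackknife's contrast can be sketched with the standard pseudo-value construction, assuming a parameter estimated once from all pooled frequencies and once from the pool with individual i removed; the 6P model's actual estimator is more involved.

```python
def individual_pseudo_value(theta_pooled, theta_without_i, n):
    """Jackknife-style pseudo-value: contrast the estimate from the full
    n-person pool with the estimate from the pool excluding individual i.
    A sketch of the idea, not the 6P model's exact estimator."""
    return n * theta_pooled - (n - 1) * theta_without_i

# Hypothetical storage-parameter estimates for a 10-person pool:
theta_all = 0.72        # from all 10 individuals' pooled frequencies
theta_minus_i = 0.70    # from the pool excluding one individual
print(individual_pseudo_value(theta_all, theta_minus_i, 10))   # ~0.90
```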

20.
This article proposes a power-difference account of quantitative attribution judgments about outcomes produced jointly by two causes, holding that the difference between the quantitative attributions to the two causes depends mainly on the relative difference between assessments of the two causes' powers. Two experiments found that undergraduate participants could generally apportion the contribution to a given outcome between two competing causes in proportion to the relative difference in their assessed powers, and that the difference between the contribution assessments of the two causes increased as the relative difference between the power assessments increased. These findings support the power-difference account of such quantitative attribution judgments.
