Similar Literature
20 similar documents found.
1.
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
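The quadratic form behind fungibility is easy to demonstrate. The sketch below is an illustrative construction, not the article's algorithm (which also controls the smallest eigenvalue): with three predictors and fixed standardized coefficients b, fixing two off-diagonal correlations leaves a linear equation in the third, so every choice of the fixed pair yields a different predictor correlation matrix with exactly the same R-squared.

```python
import numpy as np

def fungible_rxx(b, r2, r12, r13):
    """Build a 3x3 predictor correlation matrix with fixed off-diagonals
    r12 and r13, solving for r23 so that b' Rxx b equals the target R^2."""
    b1, b2, b3 = b
    # b'Rb = sum(b_i^2) + 2*(b1*b2*r12 + b1*b3*r13 + b2*b3*r23) = R^2
    r23 = (r2 - (b1**2 + b2**2 + b3**2)
           - 2 * (b1 * b2 * r12 + b1 * b3 * r13)) / (2 * b2 * b3)
    return np.array([[1.0, r12, r13],
                     [r12, 1.0, r23],
                     [r13, r23, 1.0]])

b = np.array([0.2, 0.3, 0.4])
target_r2 = 0.35

# Two different fixed pairs (r12, r13) give two distinct matrices ...
R_a = fungible_rxx(b, target_r2, 0.10, 0.10)
R_b = fungible_rxx(b, target_r2, 0.20, 0.00)

# ... yet both reproduce the target R^2 exactly.
print(round(float(b @ R_a @ b), 6), round(float(b @ R_b @ b), 6))  # 0.35 0.35
# Definiteness is not guaranteed by construction; check eigenvalues separately.
print(np.all(np.linalg.eigvalsh(R_a) > 0), np.all(np.linalg.eigvalsh(R_b) > 0))
```

Because the quadratic form constrains only one degree of freedom, the remaining off-diagonals can be varied freely, which is what makes PD, PSD, and ID solutions all attainable.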

2.
This study used Monte Carlo simulation to examine how the classification accuracy index Entropy and its variants are affected by sample size, number of latent classes, class separation, number of indicators, and their combinations. The results showed that: (1) although Entropy is highly correlated with classification accuracy, its value varies with the number of classes, sample size, and number of indicators, so a single cutoff value is hard to establish; (2) other conditions being equal, the larger the sample, the smaller the Entropy value and the poorer the classification accuracy; (3) the effect of class separation on classification accuracy was consistent across sample sizes and numbers of classes; (4) with small samples (N = 50-100), more indicators yielded better Entropy results; (5) across all conditions, Entropy was more sensitive to the classification error rate than its variants.

3.
Relapse is the recovery of a previously suppressed response. Animal models have been useful in examining the mechanisms underlying relapse (e.g., reinstatement, renewal, reacquisition, resurgence). However, there are several challenges to analyzing relapse data using traditional approaches. For example, null hypothesis significance testing is commonly used to determine whether relapse has occurred. However, this method requires several a priori assumptions about the data, as well as a large sample size for between‐subjects comparisons or repeated testing for within‐subjects comparisons. Monte Carlo methods may represent an improved analytic technique, because these methods require no prior assumptions, permit smaller sample sizes, and can be tailored to account for all of the data from an experiment instead of some limited set. In the present study, we conducted reanalyses of three studies of relapse (Berry, Sweeney, & Odum, 2014; Galizio et al., 2018; Odum & Shahan, 2004) using Monte Carlo techniques to determine if relapse occurred and if there were differences in rate of response based on relevant independent variables (such as group membership or schedule of reinforcement). These reanalyses supported the previous findings. Finally, we provide general recommendations for using Monte Carlo methods in studies of relapse.
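One simple variant of such a Monte Carlo test can be sketched as follows (the resampling scheme, response rates, and cutoff here are illustrative assumptions, not the procedure of the reanalyzed studies): resample the suppressed baseline rates many times and count how often a resampled mean reaches the observed relapse-test mean.

```python
import random

def monte_carlo_p(baseline, observed, n_draws=10000, seed=1):
    """Monte Carlo p-value: the probability that a mean of len(baseline)
    rates resampled from baseline is at least the observed test-phase mean."""
    rng = random.Random(seed)
    k = len(baseline)
    hits = 0
    for _ in range(n_draws):
        sample = [rng.choice(baseline) for _ in range(k)]
        if sum(sample) / k >= observed:
            hits += 1
    return hits / n_draws

# Responses per minute across suppression (extinction) sessions ...
baseline = [1.2, 0.8, 0.5, 0.9, 0.4, 0.7]
# ... versus the mean rate observed in the relapse test.
relapse_mean = 2.6

p = monte_carlo_p(baseline, relapse_mean)
print(p < 0.05)  # a small p suggests the test rate exceeds baseline variability
```

No distributional assumptions are made, and the same logic applies at the single-subject level, which is why such tests suit small-N relapse designs.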

4.
An algorithm for assessing additivity conjunctively via both axiomatic conjoint analysis and numerical conjoint scaling is described. The algorithm first assesses the degree of individual differences among sets of rankings of stimuli, and subsequently examines either individual or averaged data for violations of axioms necessary for an additive model. The axioms are examined at a more detailed level than has been previously done. Violations of the axioms are broken down into different types. Finally, a nonmetric scaling of the data can be done based on either or both of two different badness-of-fit scaling measures. The advantages of combining all of these features into one algorithm for improving the diagnostic value of axiomatic conjoint measurement in evaluating additivity are discussed.

5.
沐守宽, 周伟. 《心理科学进展》 (Advances in Psychological Science), 2011, 19(7): 1083-1090
Missing data are ubiquitous in psychological research and affect statistical inference. Maximum likelihood estimation (MLE) and Bayesian multiple imputation (MI) are two important classes of methods for handling missing data. The expectation-maximization (EM) algorithm is a powerful way to obtain MLEs, while Markov chain Monte Carlo (MCMC) methods implement MI relatively easily and can handle missing data in complex situations. Software suitable for implementing these two classes of methods is discussed in light of research needs.

6.
This paper provides an empirical comparison of two methods of attribute valuation: the analytic hierarchy process (AHP) and conjoint analysis. Variants within each approach are also examined. The results of two empirical studies indicate that the methods differ in their predictive and convergent validity. Within the AHP methods no significant difference in predictive validity was found. Within the conjoint methods, the ranking method significantly outperformed the rating method. The difference in predictive validity between the AHP and conjoint methods was significant in the second study but not in the first study, suggesting superior performance of the AHP over conjoint analysis in complex problems. Copyright© 1998 John Wiley & Sons, Ltd.

7.
Discounting is the process by which outcomes lose value. Much of discounting research has focused on differences in the degree of discounting across various groups. This research has relied heavily on conventional null hypothesis significance tests that are familiar to psychologists, such as t‐tests and ANOVAs. As discounting research questions have become more complex by simultaneously focusing on within‐subject and between‐group differences, conventional statistical testing is often not appropriate for the obtained data. Generalized estimating equations (GEE) are one type of mixed‐effects model designed to handle autocorrelated data, such as within‐subject repeated‐measures data, and are therefore more appropriate for discounting data. To determine if GEE provides similar results as conventional statistical tests, we compared the techniques across 2,000 simulated data sets. The data sets were created using a Monte Carlo method based on an existing data set. Across the simulated data sets, the GEE and the conventional statistical tests generally provided similar patterns of results. As the GEE and more conventional statistical tests provide the same pattern of results, we suggest researchers use the GEE because it was designed to handle data that has the structure that is typical of discounting data.

8.
Exploring how people represent natural categories is a key step toward developing a better understanding of how people learn, form memories, and make decisions. Much research on categorization has focused on artificial categories that are created in the laboratory, since studying natural categories defined on high-dimensional stimuli such as images is methodologically challenging. Recent work has produced methods for identifying these representations from observed behavior, such as reverse correlation (RC). We compare RC against an alternative method for inferring the structure of natural categories called Markov chain Monte Carlo with People (MCMCP). Based on an algorithm used in computer science and statistics, MCMCP provides a way to sample from the set of stimuli associated with a natural category. We apply MCMCP and RC to the problem of recovering natural categories that correspond to two kinds of facial affect (happy and sad) from realistic images of faces. Our results show that MCMCP requires fewer trials to obtain a higher quality estimate of people's mental representations of these two categories.

9.
方杰, 张敏强. 《心理学报》 (Acta Psychologica Sinica), 2012, 44(10): 1408-1420
Because the sampling distribution of the mediation effect ab is often non-normal, three classes of methods have been proposed in recent years that impose no restrictions on the sampling distribution of ab and are suitable for small and medium samples: the distribution-of-the-product method, the nonparametric bootstrap, and Markov chain Monte Carlo (MCMC) methods. A simulation study compared the performance of the three classes of methods in mediation analysis. The results showed that: (1) the MCMC method with informative priors gave the most accurate point estimates of ab; (2) the MCMC method with informative priors had the highest statistical power, but at the cost of underestimating the Type I error rate, while the bias-corrected nonparametric percentile bootstrap had the next highest power, at the cost of overestimating the Type I error rate; (3) the MCMC method with informative priors gave the most accurate interval estimates of the mediation effect. The results suggest using the MCMC method with informative priors when prior information is available, and the bias-corrected nonparametric percentile bootstrap when it is not.

10.
马泽威, 全鹏. 《心理科学》 (Journal of Psychological Science), 2015, (2): 379-382
This study examined the mediating role of depression between core self-evaluation and suicidal ideation in adolescents. A sample of 502 high school students completed the measures. The 95% confidence intervals for the mediation effect obtained with the bias-corrected bootstrap method and the MCMC method with informative priors were [-.030, -.011] and [-.024, -.014], respectively, indicating a significant mediation effect. The effect sizes κ² and R²med were .124 and .104; the 95% confidence intervals for these effect sizes, constructed from 5,000 bias-corrected bootstrap resamples, were [.070, .178] and [.063, .156], and the two indices jointly indicated a medium effect. The results suggest that depression partially mediates the relation between core self-evaluation and suicidal ideation, with a medium effect size.

11.
The article reports the findings from a Monte Carlo investigation examining the impact of faking on the criterion-related validity of Conscientiousness for predicting supervisory ratings of job performance. Based on a review of faking literature, 6 parameters were manipulated in order to model 4,500 distinct faking conditions (5 [magnitude] x 5 [proportion] x 4 [variability] x 3 [faking-Conscientiousness relationship] x 3 [faking-performance relationship] x 5 [selection ratio]). Overall, the results indicated that validity change is significantly affected by all 6 faking parameters, with the relationship between faking and performance, the proportion of fakers in the sample, and the magnitude of faking having the strongest effect on validity change. Additionally, the association between several of the parameters and changes in criterion-related validity was conditional on the faking-performance relationship. The results are discussed in terms of their practical and theoretical implications for using personality testing for employee selection.

12.
Synthetic data are used to examine how well axiomatic and numerical conjoint measurement methods, individually and comparatively, recover simple polynomial generators in three dimensions. The study illustrates extensions of numerical conjoint measurement (NCM) to identify and model distributive and dual-distributive, in addition to the usual additive, data structures. It was found that while minimum STRESS was the criterion of fit, another statistic, predictive capability, provided a better diagnosis of the known generating model. That NCM methods were able to better identify generating models conflicts with Krantz and Tversky's assertion that, in general, the direct axiom tests provide a more powerful diagnostic test between alternative composition rules than does evaluation of numerical correspondence. For all methods, dual-distributive models are most difficult to recover, while consistent with past studies, the additive model is the most robust of the fitted models.
Douglas Emery is now at the Krannert Graduate School of Management, Purdue University, West Lafayette, IN, on leave from the University of Calgary.

13.
Three Classes of Interval Estimation Methods for Mediation Effects
Because the estimator of the mediation effect ab is generally not normally distributed, asymmetric confidence intervals are needed for mediation analysis. Three classes of methods for obtaining asymmetric confidence intervals are reviewed in detail: the distribution-of-the-product method (the M method and the empirical M method), bootstrap methods (the bias-corrected and uncorrected nonparametric percentile bootstrap, and the bias-corrected and uncorrected parametric percentile residual bootstrap), and Markov chain Monte Carlo (MCMC) methods. A comparison of the three classes in single-level (simple and multiple) and multilevel mediation analysis shows that they perform similarly; relative to the distribution-of-the-product method, the bias-corrected percentile bootstrap performs better, while the MCMC method with informative priors reduces the mean squared error more effectively. Directions for extending research on asymmetric confidence intervals for mediation effects are discussed.
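As a concrete illustration of the simplest member of the bootstrap family above, the following sketch computes an uncorrected nonparametric percentile bootstrap interval for ab (the simulated data, sample size, and replication count are illustrative assumptions, and the bias correction is omitted for brevity):

```python
import random

def ols_ab(x, m, y):
    """a from the regression M ~ X; b = partial slope of M in Y ~ M + X,
    both via closed-form OLS; returns the indirect effect a*b."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    smy = sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
    a = sxm / sxx
    b = (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)
    return a * b

def percentile_ci(x, m, y, reps=2000, seed=7):
    """Uncorrected 95% percentile bootstrap interval for ab."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(ols_ab([x[i] for i in idx],
                            [m[i] for i in idx],
                            [y[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * reps)], stats[int(0.975 * reps) - 1]

# Simulated mediation data: X -> M (a = 0.5) -> Y (b = 0.6), true ab = 0.3.
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.6 * mi + 0.1 * xi + rng.gauss(0, 1) for xi, mi in zip(x, m)]

lo, hi = percentile_ci(x, m, y)
print(lo > 0)  # an interval excluding zero indicates a significant indirect effect
```

The interval is read off the resampled distribution of ab itself, which is what allows it to be asymmetric when the distribution of ab is skewed.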

14.
Morris water maze data are most commonly analyzed using repeated measures analysis of variance in which daily test sessions are analyzed as an unordered categorical variable. This approach, however, may lack power, relies heavily on post hoc tests of daily performance that can complicate interpretation, and does not target the nonlinear trends evidenced in learning data. The present project used Monte Carlo simulation to compare the relative strengths of the traditional approach with both linear and nonlinear mixed effects modeling that identifies the learning function for each animal and condition. Both trend-based mixed effects modeling approaches showed much greater sensitivity to identifying real effects, and the nonlinear approach provided uniformly better fits of learning trends. The common practice of removing a rat from the maze after 90 s, however, proved more problematic for the nonlinear approach and produced an underestimate of y-axis intercepts.

15.
We used a discrete choice conjoint experiment to model the bullying prevention recommendations of 845 students from grades 5 to 8 (aged 9-14). Students made choices between experimentally varied combinations of 14 four-level prevention program attributes. Latent class analysis yielded three segments. The high impact segment (27.1%) recommended uniforms, mandatory recess activities, four playground supervisors, surveillance cameras, and 4-day suspensions when students bully. The moderate impact segment (49.5%) recommended discretionary uniforms and recess activities, four playground supervisors, and 3-day suspensions. Involvement as a bully or bully-victim was associated with membership in a low impact segment (23.4%) that rejected uniforms and surveillance cameras. They recommended fewer anti-bullying activities, discretionary recess activities, fewer playground supervisors, and 2-day suspensions. Simulations predicted most students would recommend a program maximizing student involvement combining prevention with moderate consequences. The simulated introduction of mandatory uniforms, surveillance cameras, and long suspensions reduced overall support for a comprehensive program, particularly among students involved as bullies or bully-victims.

16.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators of each factor. Effects of practical significance due to sample size, the number of indicators per factor and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test.
James Anderson is now at the J. L. Kellogg Graduate School of Management, Northwestern University. The authors gratefully acknowledge the comments and suggestions of Kenneth Land and the reviewers, and the assistance of A. Narayanan with the analysis. Support for this research was provided by the Graduate School of Business and the University Research Institute of the University of Texas at Austin.

17.
Fleishman's power method is frequently used to simulate non-normal data with a desired skewness and kurtosis. Fleishman's method requires solving a system of nonlinear equations to find the third-order polynomial weights that transform a standard normal variable into a non-normal variable with desired moments. Most users of the power method seem unaware that Fleishman's equations have multiple solutions for typical combinations of skewness and kurtosis. Furthermore, researchers lack a simple method for exploring the multiple solutions of Fleishman's equations, so most applications only consider a single solution. In this paper, we propose novel methods for finding all real-valued solutions of Fleishman's equations. Additionally, we characterize the solutions in terms of differences in higher order moments. Our theoretical analysis of the power method reveals that there typically exists two solutions of Fleishman's equations that have noteworthy differences in higher order moments. Using simulated examples, we demonstrate that these differences can have remarkable effects on the shape of the non-normal distribution, as well as the sampling distributions of statistics calculated from the data. Some considerations for choosing a solution are discussed, and some recommendations for improved reporting standards are provided.
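Fleishman's system can be written out directly. The sketch below uses a plain Newton iteration with a finite-difference Jacobian to find one real solution for a given skewness and excess kurtosis; it is not the authors' method for enumerating all solutions, and the starting point, moment targets, and tolerances are illustrative assumptions.

```python
import numpy as np

def fleishman_eqs(w, skew, exkurt):
    """Residuals of Fleishman's moment equations for Y = a + b*Z + c*Z^2 + d*Z^3,
    with a = -c so that E[Y] = 0; Z is standard normal."""
    b, c, d = w
    f1 = b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1                # Var(Y) = 1
    f2 = 2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew          # skewness
    f3 = 24*(b*d + c**2*(1 + b**2 + 28*b*d)
             + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - exkurt  # excess kurtosis
    return np.array([f1, f2, f3])

def solve_fleishman(skew, exkurt, w0=(1.0, 0.0, 0.0), tol=1e-10, h=1e-7):
    """Newton's method with a forward-difference Jacobian; converges to the
    solution nearest the starting point w0 (one of possibly several)."""
    w = np.array(w0, dtype=float)
    for _ in range(100):
        f = fleishman_eqs(w, skew, exkurt)
        if np.max(np.abs(f)) < tol:
            break
        jac = np.empty((3, 3))
        for j in range(3):
            wp = w.copy()
            wp[j] += h
            jac[:, j] = (fleishman_eqs(wp, skew, exkurt) - f) / h
        w = w - np.linalg.solve(jac, f)
    return w

b, c, d = solve_fleishman(skew=0.5, exkurt=0.5)
print(np.max(np.abs(fleishman_eqs((b, c, d), 0.5, 0.5))))  # residuals near zero
```

Restarting the iteration from a different initial point can converge to another real solution with the same first four moments but different higher-order moments, which is the multiplicity the article documents.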

18.
A Monte Carlo investigation of recovery of structure by ALSCAL
A Monte Carlo study was carried out to investigate the ability of ALSCAL to recover true structure inherent in simulated proximity measures. The nature of the simulated data varied according to (a) number of stimuli, (b) number of individuals, (c) number of dimensions, and (d) level of random error. Four aspects of recovery were studied: (a) SSTRESS, (b) recovery of true distances, (c) recovery of stimulus dimensions, and (d) recovery of individual weights. Results indicated that all four measures were rather strongly affected by random error. Also, SSTRESS improved with fewer stimuli in more dimensions, but the other three indices behaved in the opposite fashion. Most importantly, it was found that the number of individuals, over the range studied, did not have a substantial effect on any of the four measures of recovery. Practical implications and suggestions for further research are discussed.
The authors wish to thank Drs. Forrest W. Young, Paul D. Isaac and Thomas E. Nygren, who provided many helpful comments during this project.

19.
As a new measurement paradigm of the 21st century, cognitive diagnosis has attracted growing attention both in China and abroad. This paper implemented parameter estimation for the R-RUM with an MCMC algorithm and examined its performance using Monte Carlo simulation. The results showed that: (1) the parameter estimation method for the R-RUM is feasible and achieves high accuracy; (2) Q-matrix complexity and the level of the model parameters strongly affect estimation accuracy: as r*jk increases and the Q-matrix becomes more complex, the accuracy of the item and person parameter estimates declines; (3) under certain conditions, the R-RUM shows some robustness.

20.
温聪聪, 朱红. 《心理科学进展》 (Advances in Psychological Science), 2021, 29(10): 1773-1782
Traditional latent transition analysis is a single-level analysis, but it can equally be viewed as a two-level analysis. Taking this two-level perspective within a single-level framework, Muthén and Asparouhov proposed random intercept latent transition analysis (RI-LTA), in which transitions across time points are analyzed at level 1 and time-invariant between-person differences at level 2. Separating individuals' own transitions from their initial differences avoids overestimating the probability of remaining in the initial class. Longitudinal survey data on the 2016 entering cohort of undergraduates at a research university are used to demonstrate the RI-LTA procedure. The method's chief advantage is that introducing a random intercept avoids overestimating the probability of staying in one's current class. Future research could use Monte Carlo simulation studies to examine the applicability of the RI-LTA model, and could draw on multilevel analysis to explore implementing multilevel RI-LTA in statistical software.
