Similar Articles
20 similar articles found (search time: 0 ms)
1.
The magnetic properties of a Mn-doped armchair ZnO nanotube have been studied using Monte Carlo simulation. The variation of zero-field-cooled and field-cooled magnetisation with reduced temperature is given for different values of the dilution x (where x is the Mn concentration in Zn1−xMnxO). The freezing temperatures and the magnetisation vs. crystal field are calculated for different dilutions x. Finally, the hysteresis loops for different dilutions and temperatures are given at a fixed reduced temperature and crystal field. Superparamagnetic behaviour is observed for small values of x and low temperatures.
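The abstract does not give the Hamiltonian or geometry, so the sketch below is only a generic illustration of the technique: single-site Metropolis updates of a site-diluted spin model with a crystal-field term, on a flat periodic square lattice of spin-1 ions. The real study concerns Mn (S = 5/2) ions on an armchair ZnO nanotube, so the lattice, spin magnitude, and couplings here are placeholder assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(L=16, x=0.10, T=0.5, J=1.0, D=0.0, steps=20000):
    """Minimal Metropolis sketch of a site-diluted, spin-1 (Blume-Capel-style)
    model with crystal field D on an L x L periodic sheet, standing in for the
    rolled-up nanotube. A fraction x of sites carries a magnetic ion."""
    occupied = rng.random((L, L)) < x                    # Mn sites
    states = np.array([-1, 0, 1])
    spins = np.where(occupied, rng.choice(states, size=(L, L)), 0)
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        if not occupied[i, j]:
            continue
        new = rng.choice(states)
        nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        # H = -J * sum_<ij> S_i S_j + D * sum_i S_i^2
        dE = -J * (new - spins[i, j]) * nn + D * (new**2 - spins[i, j]**2)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    n_mag = occupied.sum()
    return abs(spins.sum()) / n_mag if n_mag else 0.0    # magnetisation per magnetic ion

print(simulate(T=0.5), simulate(T=3.0))
```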

2.
Exploring how people represent natural categories is a key step toward developing a better understanding of how people learn, form memories, and make decisions. Much research on categorization has focused on artificial categories that are created in the laboratory, since studying natural categories defined on high-dimensional stimuli such as images is methodologically challenging. Recent work has produced methods for identifying these representations from observed behavior, such as reverse correlation (RC). We compare RC against an alternative method for inferring the structure of natural categories called Markov chain Monte Carlo with People (MCMCP). Based on an algorithm used in computer science and statistics, MCMCP provides a way to sample from the set of stimuli associated with a natural category. We apply MCMCP and RC to the problem of recovering natural categories that correspond to two kinds of facial affect (happy and sad) from realistic images of faces. Our results show that MCMCP requires fewer trials to obtain a higher quality estimate of people's mental representations of these two categories.
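As a rough illustration of the MCMCP logic, the sketch below replaces the human participant with a simulated chooser. Under the common assumption that people pick between the current stimulus and a symmetric proposal with probability proportional to category membership (a Barker/Luce acceptance rule), the chain's stationary distribution is the category distribution itself. The one-dimensional stimulus space and the Gaussian "happy" category are illustrative assumptions, not the face-image space used in the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def category_prob(stimulus):
    # Stand-in for the participant's category (e.g. "happy") over a 1-D
    # stimulus dimension; in the real task the space is a face-image space.
    return norm.pdf(stimulus, loc=2.0, scale=0.7)

def mcmcp(n_trials=5000, proposal_sd=0.5, start=0.0):
    """Each 'trial' presents the current stimulus and a symmetric proposal;
    the simulated participant picks the proposal with Barker probability
    p_prop / (p_prop + p_curr), which leaves the category distribution
    invariant."""
    x = start
    chain = []
    for _ in range(n_trials):
        prop = x + rng.normal(0.0, proposal_sd)
        p_curr, p_prop = category_prob(x), category_prob(prop)
        if rng.random() < p_prop / (p_prop + p_curr):
            x = prop
        chain.append(x)
    return np.array(chain)

samples = mcmcp()
print(samples[1000:].mean(), samples[1000:].std())  # should approach 2.0 and 0.7
```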

3.
The existence of a discrete class of people vulnerable to schizophrenia spectrum disorders is the most replicated finding of taxometric research. Evidence for such a “taxon” has been obtained with diverse measures of schizotypy in clinical, high-risk, and normal samples. However, recent demonstrations that skewed indicators of a latent dimension can yield a spuriously taxonic pattern of results may call some of these findings into question. Normal adults (N = 1073) completed measures of positive (perceptual aberration, magical ideation) and negative (physical and social anhedonia) components of schizotypy. Taxometric curves resembled those obtained previously, but when a simulation procedure took skew into account, dimensional models of schizotypy received stronger support than taxonic models for most schizotypy components, with findings for magical thinking inconclusive. A re-evaluation of previous taxonic conclusions regarding the latent structure of schizotypy is indicated.

4.
The article reports the findings from a Monte Carlo investigation examining the impact of faking on the criterion-related validity of Conscientiousness for predicting supervisory ratings of job performance. Based on a review of faking literature, 6 parameters were manipulated in order to model 4,500 distinct faking conditions (5 [magnitude] x 5 [proportion] x 4 [variability] x 3 [faking-Conscientiousness relationship] x 3 [faking-performance relationship] x 5 [selection ratio]). Overall, the results indicated that validity change is significantly affected by all 6 faking parameters, with the relationship between faking and performance, the proportion of fakers in the sample, and the magnitude of faking having the strongest effect on validity change. Additionally, the association between several of the parameters and changes in criterion-related validity was conditional on the faking-performance relationship. The results are discussed in terms of their practical and theoretical implications for using personality testing for employee selection.
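A stripped-down version of one cell of such a design can be simulated directly. The sketch below is only an illustration: the validity, faking magnitude, proportion of fakers, and selection ratio are arbitrary values, not the article's manipulated parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def validity_under_faking(n=10_000, rho=0.25, prop_fakers=0.3,
                          faking_mean=1.0, faking_sd=0.5, selection_ratio=0.3):
    """Criterion-related validity of an observed Conscientiousness score in the
    full applicant pool and among those hired top-down, when a proportion of
    applicants inflate their scores."""
    true_c = rng.normal(size=n)
    performance = rho * true_c + np.sqrt(1 - rho**2) * rng.normal(size=n)
    faker = rng.random(n) < prop_fakers
    observed = true_c + faker * rng.normal(faking_mean, faking_sd, size=n)
    r_pool = np.corrcoef(observed, performance)[0, 1]
    cut = np.quantile(observed, 1 - selection_ratio)
    sel = observed >= cut
    r_selected = np.corrcoef(observed[sel], performance[sel])[0, 1]
    return r_pool, r_selected

print(validity_under_faking())
```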

5.
It has been suggested that hierarchical regression analysis provides an unambiguous conclusion with regard to the existence of moderator effects (Arnold & Evans, 1979). This paper examines the impact of correlated error among the dependent and independent variables in order to explore whether or not artificial interaction terms can be generated. A Monte Carlo study was performed to investigate the effects of correlated error on noninteraction and interaction models. The results are clear-cut. Artifactual interaction cannot be created; true interactions can be attenuated. Some practical suggestions are provided for drawing inferences from hierarchical regression analysis.
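The core question of such a simulation, whether correlated measurement error in X, Z, and Y can manufacture a spurious X×Z term in moderated (hierarchical) regression, can be sketched as follows; the error structure and effect sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def r2(y, X):
    """R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def spurious_interaction_trial(n=200, err_corr=0.5):
    # True model has NO interaction: Y depends additively on X and Z.
    x, z = rng.normal(size=n), rng.normal(size=n)
    y = 0.5 * x + 0.5 * z + rng.normal(size=n)
    # Shared (correlated) error contaminates all three observed scores.
    shared = rng.normal(size=n)
    xo = x + err_corr * shared + rng.normal(scale=0.5, size=n)
    zo = z + err_corr * shared + rng.normal(scale=0.5, size=n)
    yo = y + err_corr * shared + rng.normal(scale=0.5, size=n)
    # Hierarchical step: does adding the product term raise R^2?
    base = r2(yo, np.column_stack([xo, zo]))
    full = r2(yo, np.column_stack([xo, zo, xo * zo]))
    return full - base

gains = [spurious_interaction_trial() for _ in range(500)]
print(np.mean(gains))   # hovers near zero: no artifactual interaction
```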

6.
Recent research has seen intraindividual variability become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. Intraindividual variability, as measured by individual standard deviations (ISDs), has shown unique prediction to several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD as compared with the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool.
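One way to see why ISD reliability suffers with few measurement occasions is a split-half simulation: give each simulated person a true within-person standard deviation, compute ISDs separately from odd and even occasions, and correlate the two halves across people. The population values below are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def isd_splithalf_reliability(n_people=200, n_occasions=8):
    """Split-half reliability of individual standard deviations (ISDs)
    compared with individual means, for a given number of occasions."""
    true_mean = rng.normal(0.0, 1.0, size=n_people)
    true_sd = rng.uniform(0.5, 1.5, size=n_people)   # between-person spread in variability
    data = true_mean[:, None] + true_sd[:, None] * rng.normal(size=(n_people, n_occasions))
    odd, even = data[:, ::2], data[:, 1::2]
    r_isd = np.corrcoef(odd.std(axis=1, ddof=1), even.std(axis=1, ddof=1))[0, 1]
    r_mean = np.corrcoef(odd.mean(axis=1), even.mean(axis=1))[0, 1]
    return r_isd, r_mean

for k in (6, 12, 30):
    print(k, isd_splithalf_reliability(n_occasions=k))
```

With few occasions the ISD half-correlation typically falls well below that of the mean, in line with the reliability problems described above.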

7.
This study used Monte Carlo simulation to examine how the classification accuracy index Entropy and its variants are affected by sample size, number of latent classes, class separation, number of indicators, and their combinations. The results show that: (1) although the Entropy value is highly correlated with classification accuracy, it changes with the number of classes, the sample size, and the number of indicators, so it is difficult to fix a single cut-off value; (2) with other conditions held constant, the larger the sample size, the smaller the Entropy value and the poorer the classification accuracy; (3) the effect of class separation on classification accuracy is consistent across sample sizes and numbers of classes; (4) with small samples (N = 50–100), Entropy performs better as the number of indicators increases; (5) under all conditions examined, Entropy is more sensitive to the classification error rate than its variants.
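For reference, the relative Entropy index discussed here is conventionally defined from the estimated posterior class probabilities $\hat{p}_{ik}$ of case $i$ in class $k$; this is the standard mixture-modeling definition, with $E_K = 1$ indicating perfect classification:

```latex
E_K \;=\; 1 \;-\; \frac{\sum_{i=1}^{N}\sum_{k=1}^{K}\bigl(-\hat{p}_{ik}\,\ln \hat{p}_{ik}\bigr)}{N \ln K}
```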

8.
Levenson's Self-Report Psychopathy scale (Levenson, Kiehl, & Fitzpatrick, 1995) was administered to 1,972 male and female federal prison inmates, the results of which were subjected to taxometric analysis. We employed 4 taxometric procedures in this study: mean above minus below a cut (Meehl & Yonce, 1994), maximum slope (Grove & Meehl, 1993), maximum eigenvalue (Waller & Meehl, 1998), and latent-mode factor analysis (Waller & Meehl, 1998). The results showed consistent support for a dimensional interpretation of the latent structure of psychopathy, corroborating previous research conducted on the Psychopathy Checklist (e.g., Psychopathy Checklist-Revised; Hare, 2003) and Psychopathic Personality Inventory (Lilienfeld & Andrews, 1996) and denoting that psychopathy is a dimensional construct (degree of psychopathic characteristics) rather than a qualitatively distinct category of behavior (psychopath).

9.
While conducting intervention research, researchers and practitioners are often interested in how the intervention functions not only at the group level, but also at the individual level. One way to examine individual treatment effects is through multiple-baseline studies analyzed with multilevel modeling. This analysis allows for the construction of confidence intervals, which are strongly recommended in the reporting guidelines of the American Psychological Association. The purpose of this study was to examine the accuracy of confidence intervals of individual treatment effects obtained from multilevel modeling of multiple-baseline data. Monte Carlo methods were used to examine performance across conditions varying in the number of participants, the number of observations per participant, and the dependency of errors. The accuracy of the confidence intervals depended on the method used, with the greatest accuracy being obtained when multilevel modeling was coupled with the Kenward-Roger method of estimating degrees of freedom.
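A common two-level formulation for such multiple-baseline data (a sketch; the study's exact specification may differ) codes each observation $t$ of participant $i$ with a phase dummy $D_{ti}$ (0 = baseline, 1 = treatment), so that the individual treatment effects are the $\beta_{1i}$:

```latex
y_{ti} = \beta_{0i} + \beta_{1i} D_{ti} + e_{ti}, \qquad
\beta_{0i} = \gamma_{00} + u_{0i}, \qquad
\beta_{1i} = \gamma_{10} + u_{1i}
```

The errors $e_{ti}$ may be allowed to follow a first-order autoregressive process, and the Kenward-Roger adjustment is applied to the degrees of freedom used for the intervals around the $\beta_{1i}$.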

10.
We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non‐normality (normal, moderately, and extremely non‐normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.
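The data-generating side of such a simulation, dichotomous indicators produced by thresholding a continuous latent response under a one-factor model, can be sketched as follows; the loadings and thresholds are illustrative, and the PIV, ULS, and DWLS estimators themselves would come from an SEM package rather than this snippet.

```python
import numpy as np

rng = np.random.default_rng(5)

def generate_dichotomous_cfa(n=500, loadings=(0.7, 0.7, 0.6, 0.6, 0.5, 0.5),
                             thresholds=0.0):
    """Generate binary indicators from a one-factor model: the latent response
    y* = lambda * eta + eps is dichotomised at a threshold (probit link)."""
    loadings = np.asarray(loadings)
    eta = rng.normal(size=n)                        # common factor
    resid_sd = np.sqrt(1 - loadings**2)             # keep y* standardised
    y_star = eta[:, None] * loadings + rng.normal(size=(n, len(loadings))) * resid_sd
    return (y_star > thresholds).astype(int)

y = generate_dichotomous_cfa()
print(y.mean(axis=0))   # endorsement rates, around 0.5 with a zero threshold
```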

11.
Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus has not been reached on the most appropriate method. One method that has been proposed involves multilevel modeling. For this study, we used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach for the meta-analysis of single-case data. Specifically, we examined the fixed effects (e.g., the overall average treatment effect) and the variance components (e.g., the between-person within-study variance in the treatment effect) in a three-level multilevel model (repeated observations nested within individuals, nested within studies). More specifically, bias of the point estimates, confidence interval coverage rates, and interval widths were examined as a function of the number of primary studies per meta-analysis, the modal number of participants per primary study, the modal series length per primary study, the level of autocorrelation, and the variances of the error terms. The degree to which the findings of this study are supportive of using Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach to meta-analyzing single-case data depends on the particular parameter of interest. Estimates of the average treatment effect tended to be unbiased and produced confidence intervals that tended to overcover, but did come close to the nominal level as Level-3 sample size increased. Conversely, estimates of the variance in the treatment effect tended to be biased, and the confidence intervals for those estimates were inaccurate.
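The three-level structure referred to here nests observations $t$ within participants $i$ within studies $j$, with the treatment effect free to vary at both upper levels (a sketch of the general setup; the authors' parameterisation may differ):

```latex
y_{tij} = \beta_{0ij} + \beta_{1ij} D_{tij} + e_{tij}, \qquad
\beta_{1ij} = \theta_{1j} + u_{1ij}, \qquad
\theta_{1j} = \gamma_{1} + v_{1j}
```

Here $\gamma_{1}$ is the overall average treatment effect, $\mathrm{Var}(u_{1ij})$ the between-person within-study variance in the effect, and $\mathrm{Var}(v_{1j})$ the between-study variance.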

12.
Researchers in the field of conjoint analysis know that index-of-fit values worsen as the judgmental error of evaluation increases. This simulation study provides guidelines on goodness of fit based on the distribution of the index-of-fit for different conjoint analysis designs. The study design included the following factors: number of profiles, number of attributes, algorithm used, and judgmental model used. Critical values are provided for deciding the statistical significance of conjoint analysis results. Using these cumulative distributions, the power of the test used to reject the null hypothesis of random ranking is calculated. The test is found to be quite powerful except for the case of very small residual degrees of freedom.
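The simulation logic for generating critical values, computing the fit index for many randomly generated rankings and reading off its upper quantiles, can be sketched as follows. Kendall's tau is used here as a stand-in fit index; the paper's specific index-of-fit and estimation algorithms are not reproduced.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(6)

def null_fit_distribution(n_profiles=16, n_sims=5000):
    """Distribution of Kendall's tau between a fixed 'predicted' ranking and
    purely random respondent rankings: a Monte Carlo null for judging whether
    an observed conjoint fit beats chance."""
    predicted = np.arange(n_profiles)
    taus = np.empty(n_sims)
    for s in range(n_sims):
        random_rank = rng.permutation(n_profiles)
        taus[s], _ = kendalltau(predicted, random_rank)
    return taus

taus = null_fit_distribution()
print(np.quantile(taus, [0.95, 0.99]))   # critical values at alpha = .05 and .01
```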

13.
14.
15.
Monte Carlo procedures are used to study the sampling distribution of the Hoyt reliability coefficient. Estimates of mean, variance, and skewness are made for the case of the Bower-Trabasso concept identification model. Given the Bower-Trabasso assumptions, the Hoyt coefficient of a particular concept identification experiment is shown to be statistically unlikely.
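For context, the Hoyt coefficient is obtained from a persons × items (here, persons × trials) analysis of variance and is algebraically equivalent to coefficient alpha:

```latex
r_{\mathrm{Hoyt}} \;=\; \frac{MS_{\mathrm{persons}} - MS_{\mathrm{residual}}}{MS_{\mathrm{persons}}}
```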

16.
Several authors have cautioned against using Fisher's z‐transformation in random‐effects meta‐analysis of correlations, which seems to perform poorly in some situations, especially with substantial inter‐study heterogeneity. Attributing this performance largely to the direct z‐to‐r transformation (DZRT) of Fisher z results (e.g. point estimate of mean correlation), in a previous paper Hafdahl (2009) proposed point and interval estimators of the mean Pearson r correlation that instead use an integral z‐to‐r transformation (IZRT). The present Monte Carlo study of these IZRT Fisher z estimators includes comparisons with their DZRT counterparts and with estimators based on Pearson r correlations. The IZRT point estimator was usually more accurate and efficient than its DZRT counterpart and comparable to the two Pearson r point estimators – better in some conditions but worse in others. Coverage probability for the IZRT confidence intervals (CIs) was often near nominal, much better than for the DZRT CIs, and comparable to coverage for the Pearson r CIs; every approach's CI fell markedly below nominal in some conditions. The IZRT estimators contradict warnings about Fisher z estimators' poor performance. Recommendations for practising research synthesists are offered, and an Appendix provides computing code to implement the IZRT as in the real‐data example.
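The contrast drawn here can be illustrated numerically: the direct transformation back-transforms the point estimate, r = tanh(mu_z), whereas the integral transformation averages tanh(z) over the estimated random-effects distribution of the study-level z values. The normal random-effects form assumed below is a sketch of the general idea, not Hafdahl's exact estimator.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def dzrt(mu_z):
    """Direct z-to-r transformation: back-transform the point estimate."""
    return np.tanh(mu_z)

def izrt(mu_z, tau2):
    """Integral z-to-r transformation: expected r under a normal
    random-effects distribution for the study-level Fisher z values."""
    integrand = lambda z: np.tanh(z) * norm.pdf(z, loc=mu_z, scale=np.sqrt(tau2))
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

# With nonzero heterogeneity the integral transform pulls the estimate toward zero.
print(dzrt(0.8), izrt(0.8, 0.3))
```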

17.
18.
We study the coalescence of neighbouring voids along close-packed directions in recent computer simulation studies of void-lattice formation. The stability against coalescence of a developing void lattice is found to depend very much on the detailed geometry of the local void distribution. The possibility of void coalescence as an artifact, caused by the incorrect assumption of homogeneous void nucleation in these simulations, is suggested and discussed.

19.
Statistical analyses investigating latent structure can be divided into those that estimate structural model parameters and those that detect the structural model type. The most basic distinction among structure types is between categorical (discrete) and dimensional (continuous) models. It is a common, and potentially misleading, practice to apply some method for estimating a latent structural model such as factor analysis without first verifying that the latent structure type assumed by that method applies to the data. The taxometric method was developed specifically to distinguish between dimensional and 2-class models. This study evaluated the taxometric method as a means of identifying categorical structures in general. We assessed the ability of the taxometric method to distinguish between dimensional (1-class) and categorical (2-5 classes) latent structures and to estimate the number of classes in categorical datasets. Based on 50,000 Monte Carlo datasets (10,000 per structure type), and using the comparison curve fit index averaged across 3 taxometric procedures (Mean Above Minus Below A Cut, Maximum Covariance, and Latent Mode Factor Analysis) as the criterion for latent structure, the taxometric method was found superior to finite mixture modeling for distinguishing between dimensional and categorical models. A multistep iterative process of applying taxometric procedures to the data often failed to identify the number of classes in the categorical datasets accurately, however. It is concluded that the taxometric method may be an effective approach to distinguishing between dimensional and categorical structure but that other latent modeling procedures may be more effective for specifying the model.
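Of the procedures named, MAMBAC (mean above minus below a cut) is the simplest to sketch: sort cases on one indicator, slide a cut along it, and at each cut take the mean of a second indicator above the cut minus the mean below it; peaked curves suggest a taxon, flat or dish-shaped curves a dimension. The two-indicator toy data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def mambac_curve(input_ind, output_ind, n_cuts=50, trim=25):
    """MAMBAC: mean of the output indicator above minus below successive cuts
    placed along the (sorted) input indicator."""
    order = np.argsort(input_ind)
    out = output_ind[order]
    cuts = np.linspace(trim, len(out) - trim, n_cuts).astype(int)
    return np.array([out[c:].mean() - out[:c].mean() for c in cuts])

# Toy taxonic data: a mixture of two latent classes separated on both indicators.
n, base_rate = 600, 0.3
member = rng.random(n) < base_rate
x = rng.normal(loc=np.where(member, 2.0, 0.0))
y = rng.normal(loc=np.where(member, 2.0, 0.0))

curve = mambac_curve(x, y)
print(curve.round(2))   # taxonic structure tends to produce a peaked curve
```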

20.
A growing number of child cognition researchers are using an object-manipulation, sequential-touching paradigm to explore young children’s conceptual abilities. Within this paradigm, it is essential to distinguish children’s intracategory touching sequences from those expected by chance. The sequential-touching approach is congruent with a permutation testing model of statistical inference and is best modeled by sampled permutations as derived from Monte Carlo procedures. In this article, we introduce a computer program for generating Monte Carlo sequential-touching simulations. TouchStat permits users to determine their own specifications to simulate sequential touching to category exemplars across a wide range of task parameters. We also present Monte Carlo chance probabilities for the standard two-category, four-exemplar task, with sequences up to 30 touches long. Finally, we consider broader applications of the TouchStat program.
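The chance model behind such tables can be reproduced directly: simulate random touch sequences over the eight exemplars (two categories of four) and tabulate how often the mean within-category run length reaches a given value. This is a generic sketch of the Monte Carlo logic, not the TouchStat program itself.

```python
import numpy as np

rng = np.random.default_rng(8)

def mean_run_length(categories):
    """Mean length of consecutive same-category touches in one sequence."""
    runs, length = [], 1
    for prev, cur in zip(categories[:-1], categories[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return np.mean(runs)

def chance_probability(observed_mean_run, n_touches=20, n_sims=20_000):
    """P(mean run length >= observed) when each touch goes to one of 8
    exemplars (4 per category) independently and uniformly at random."""
    exceed = 0
    for _ in range(n_sims):
        touches = rng.integers(8, size=n_touches)
        cats = touches // 4            # exemplars 0-3 -> category A, 4-7 -> category B
        if mean_run_length(cats) >= observed_mean_run:
            exceed += 1
    return exceed / n_sims

print(chance_probability(2.0))
```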
