Similar Articles
20 similar articles found
1.
A growing number of child cognition researchers are using an object-manipulation, sequential-touching paradigm to explore young children’s conceptual abilities. Within this paradigm, it is essential to distinguish children’s intracategory touching sequences from those expected by chance. The sequential-touching approach is congruent with a permutation testing model of statistical inference and is best modeled by sampled permutations as derived from Monte Carlo procedures. In this article, we introduce a computer program for generating Monte Carlo sequential-touching simulations. TouchStat permits users to determine their own specifications to simulate sequential touching to category exemplars across a wide range of task parameters. We also present Monte Carlo chance probabilities for the standard two-category, four-exemplar task, with sequences up to 30 touches long. Finally, we consider broader applications of the TouchStat program.
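The core Monte Carlo logic behind this kind of chance tabulation is simple to sketch. The following Python sketch is not TouchStat's actual code; the uniform-random touching null model and all names are assumptions for illustration:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def max_intracategory_run(touches):
    """Length of the longest run of touches to the same category."""
    return max(len(list(g)) for _, g in itertools.groupby(touches))

def chance_probability(n_touches, run_criterion, n_sims=100_000):
    """Monte Carlo chance probability of an intracategory run of at least
    `run_criterion` in the two-category, four-exemplar task, assuming each
    touch selects one of the 8 objects uniformly at random."""
    exemplar_categories = np.repeat([0, 1], 4)   # 4 exemplars per category
    hits = 0
    for _ in range(n_sims):
        seq = rng.choice(exemplar_categories, size=n_touches)
        if max_intracategory_run(seq) >= run_criterion:
            hits += 1
    return hits / n_sims

# e.g. chance probability of a run of 5+ same-category touches in 20 touches
print(chance_probability(n_touches=20, run_criterion=5))
```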

2.
3.
4.
5.
A feature-integration account of sequential effects in the Simon task
Recent studies have shown that the effects of irrelevant spatial stimulus-response (S-R) correspondence (i.e., the Simon effect) occur only after trials in which the stimulus and response locations corresponded. This has been attributed to the gating of irrelevant information or the suppression of an automatic S-R route after experiencing a noncorresponding trial, which challenges the widespread assumption of direct, intentionally unmediated links between spatial stimulus and response codes. However, trial sequences in a Simon task are likely to produce effects of stimulus- and response-feature integration that may mimic the sequential dependencies of Simon effects. Four experiments confirmed that Simon effects are eliminated if the preceding trial involved a noncorresponding S-R pair. However, this was true even when the preceding response did not depend on the preceding stimulus or if the preceding trial required no response at all. These findings rule out gating/suppression accounts that attribute sequential dependencies to response selection difficulties. Moreover, they are consistent with a feature-integration approach and demonstrate that accounting for the sequential dependencies of Simon effects does not require the assumption of information gating or response suppression.

6.
The article reports the findings from a Monte Carlo investigation examining the impact of faking on the criterion-related validity of Conscientiousness for predicting supervisory ratings of job performance. Based on a review of faking literature, 6 parameters were manipulated in order to model 4,500 distinct faking conditions (5 [magnitude] x 5 [proportion] x 4 [variability] x 3 [faking-Conscientiousness relationship] x 3 [faking-performance relationship] x 5 [selection ratio]). Overall, the results indicated that validity change is significantly affected by all 6 faking parameters, with the relationship between faking and performance, the proportion of fakers in the sample, and the magnitude of faking having the strongest effect on validity change. Additionally, the association between several of the parameters and changes in criterion-related validity was conditional on the faking-performance relationship. The results are discussed in terms of their practical and theoretical implications for using personality testing for employee selection.
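One cell of such a design can be sketched in a few lines. Everything below (parameter values, the additive faking model, the latent validity of .30) is an illustrative assumption, not the paper's exact data-generating model:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
true_validity = 0.30          # assumed Conscientiousness-performance correlation

# Honest Conscientiousness and a criterion correlated .30 with it.
consc = rng.standard_normal(N)
perf = true_validity * consc + np.sqrt(1 - true_validity**2) * rng.standard_normal(N)

# One faking condition: 25% of applicants inflate scores by ~1 SD (with noise).
prop_fakers, magnitude, variability = 0.25, 1.0, 0.3
fakers = rng.random(N) < prop_fakers
observed = consc + fakers * rng.normal(magnitude, variability, N)

# Top-down selection at a 30% selection ratio on the (possibly faked) scores.
selection_ratio = 0.30
cutoff = np.quantile(observed, 1 - selection_ratio)
selected = observed >= cutoff

honest_r = np.corrcoef(consc, perf)[0, 1]
faked_r = np.corrcoef(observed[selected], perf[selected])[0, 1]
print(f"validity without faking:              {honest_r:.3f}")
print(f"validity among selectees with faking: {faked_r:.3f}")
```

Sweeping the five assumed parameters over grids and repeating each cell many times is what produces the 4,500-condition design the abstract describes.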

7.
A non-parametric procedure for Cattell's scree test is proposed, using the bootstrap method. Bentler and Yuan developed parametric tests for the linear trend of scree eigenvalues in principal component analysis. The proposed method is for cases where parametric assumptions are not realistic. We define the break in the scree trend in several ways, based on linear slopes defined with two or three consecutive eigenvalues, or all eigenvalues after the k largest. The resulting scree test statistics are evaluated under various data conditions, among which Gorsuch and Nelson's bootstrap CNG performs best and is reasonably consistent and efficient under leptokurtic and skewed conditions. We also examine the bias-corrected and accelerated bootstrap method for these statistics, and the bias correction is found to be too unstable to be useful. Using seven published data sets which Bentler and Yuan analysed, we compare the bootstrap approach to the scree test with the parametric linear trend test.
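The general recipe is easy to sketch: resample rows, recompute eigenvalues, and examine the distribution of a slope-based break statistic. The statistic below (difference of adjacent eigenvalue slopes) is a simplified stand-in for the several definitions the paper evaluates, and the toy data are our own:

```python
import numpy as np

rng = np.random.default_rng(11)

def scree_slope_break(data, k):
    """Difference between the eigenvalue slope over (k-1, k) and over
    (k, k+1): one simple way to define the 'break' in the scree trend."""
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    return (eig[k - 2] - eig[k - 1]) - (eig[k - 1] - eig[k])

def bootstrap_break_ci(data, k, n_boot=2000):
    """Non-parametric bootstrap percentile interval for the break at k."""
    n = data.shape[0]
    stats = [scree_slope_break(data[rng.integers(0, n, n)], k)
             for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])

# Toy data: 3 strong components underlying 8 observed variables.
loadings = rng.normal(size=(8, 3))
scores = rng.standard_normal((300, 3))
data = scores @ loadings.T + rng.standard_normal((300, 8))
print(bootstrap_break_ci(data, k=3))
```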

8.
The effect of repeating relevant (colour) and irrelevant (word) stimulus information is investigated in two Stroop tasks. Thomas (1977) observed that the Stroop effect is reduced when the irrelevant word is repeated from trial n-1 to trial n. A similar effect was observed in the Simon task (Notebaert, Soetens, & Melis, 2001; Notebaert & Soetens, 2003a). MacLeod (1991) interprets this effect as sustained suppression and relates it to negative priming. In this paper we investigate whether the reduced Stroop effect for word repetitions is indeed related to the negative priming effect. In Experiment 1, with a response-stimulus interval (RSI) of 50 ms, the Stroop effect is not influenced by the word sequence and there is no negative priming effect. In Experiment 2, with an RSI of 200 ms, the Stroop effect is reduced for word repetitions but there is still no negative priming effect. This does not support the sustained-suppression hypothesis. The reduced Stroop effect for word repetitions is explained in terms of response priming.

9.
Recent research has seen intraindividual variability become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. Intraindividual variability, as measured by individual standard deviations (ISDs), has shown unique prediction of several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD as compared with the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool.
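The core of such a simulation is straightforward to sketch: generate parallel sets of occasions from known person-level parameters and correlate the statistic across sets. The occasion count and the uniform distribution of true ISDs below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_occasions = 200, 14

# Each person has a true intraindividual mean and a true intraindividual SD.
true_means = rng.normal(0, 1, n_persons)
true_sds = rng.uniform(0.5, 1.5, n_persons)

def parallel_form():
    """One simulated set of occasions per person."""
    return true_means[:, None] + true_sds[:, None] * rng.standard_normal(
        (n_persons, n_occasions))

a, b = parallel_form(), parallel_form()

# Parallel-forms reliability: correlate each statistic across the two sets.
r_mean = np.corrcoef(a.mean(axis=1), b.mean(axis=1))[0, 1]
r_isd = np.corrcoef(a.std(axis=1, ddof=1), b.std(axis=1, ddof=1))[0, 1]
print(f"reliability of intraindividual mean: {r_mean:.2f}")
print(f"reliability of ISD:                  {r_isd:.2f}")
```

With these settings the ISD's reliability falls well below that of the mean, illustrating the abstract's central point about sampling error in variability estimates.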

10.
Monte Carlo resampling methods to obtain probability values for chi-squared and likelihood-ratio test statistics for multiway contingency tables are presented. A resampling algorithm provides random arrangements of cell frequencies in a multiway contingency table, given fixed marginal frequency totals. Probability values are obtained from the proportion of resampled test statistic values equal to or greater than the observed test statistic value.
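For a two-way table the idea reduces to shuffling one margin's category labels, which fixes both sets of marginal totals. The paper's algorithm handles multiway tables; this two-way Python sketch is our own simplification:

```python
import numpy as np

rng = np.random.default_rng(5)

def chi_squared(table):
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def resampled_p(table, n_resamples=10_000):
    """Monte Carlo p value: shuffle one margin's labels, producing random
    tables whose row and column totals match the observed table."""
    rows = np.repeat(np.arange(table.shape[0]), table.sum(axis=1))
    cols = np.repeat(np.arange(table.shape[1]), table.sum(axis=0))
    observed = chi_squared(table)
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(cols)
        resampled = np.zeros_like(table)
        np.add.at(resampled, (rows, cols), 1)   # rebuild the cross-tab
        count += chi_squared(resampled) >= observed
    return count / n_resamples

table = np.array([[12, 5, 3], [6, 9, 10]])
print(resampled_p(table))
```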

11.
12.
Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus has not been reached on the most appropriate method. One method that has been proposed involves multilevel modeling. For this study, we used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach for the meta-analysis of single-case data. Specifically, we examined the fixed effects (e.g., the overall average treatment effect) and the variance components (e.g., the between-person within-study variance in the treatment effect) in a three-level multilevel model (repeated observations nested within individuals, nested within studies). More specifically, bias of the point estimates, confidence interval coverage rates, and interval widths were examined as a function of the number of primary studies per meta-analysis, the modal number of participants per primary study, the modal series length per primary study, the level of autocorrelation, and the variances of the error terms. The degree to which the findings of this study are supportive of using Van den Noortgate and Onghena's (2008) raw-data multilevel modeling approach to meta-analyzing single-case data depends on the particular parameter of interest. Estimates of the average treatment effect tended to be unbiased and produced confidence intervals that tended to overcover, but came close to the nominal level as Level-3 sample size increased. Conversely, estimates of the variance in the treatment effect tended to be biased, and the confidence intervals for those estimates were inaccurate.
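The data-generating side of such a study can be sketched as below. This is a deliberately stripped-down version: no autocorrelation, equal phase lengths, and a naive per-person effect estimate in place of a fitted three-level model; all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(13)

def simulate_meta(n_studies=10, n_persons=5, series_len=20,
                  gamma=1.0, tau_study=0.2, tau_person=0.5, sigma=1.0):
    """Raw single-case data under a three-level model: observations nested
    in persons nested in studies.  The treatment effect for person j in
    study k is gamma + u_k + v_jk (variances tau_study, tau_person)."""
    cases = []
    for k in range(n_studies):
        u_k = rng.normal(0, np.sqrt(tau_study))
        for j in range(n_persons):
            v_jk = rng.normal(0, np.sqrt(tau_person))
            phase = np.repeat([0, 1], series_len // 2)   # baseline, treatment
            y = (gamma + u_k + v_jk) * phase + rng.normal(0, sigma, series_len)
            cases.append((phase, y))
    return cases

# Crude check of the overall average treatment effect (no multilevel fit):
effects = [y[phase == 1].mean() - y[phase == 0].mean()
           for phase, y in simulate_meta()]
print(f"mean estimated effect: {np.mean(effects):.2f} (true = 1.0)")
```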

13.
Monte Carlo procedures are used to study the sampling distribution of the Hoyt reliability coefficient. Estimates of mean, variance, and skewness are made for the case of the Bower-Trabasso concept identification model. Given the Bower-Trabasso assumptions, the Hoyt coefficient of a particular concept identification experiment is shown to be statistically unlikely.
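Hoyt's coefficient comes from a two-way persons x items ANOVA: 1 - MS_residual / MS_persons, which is algebraically equivalent to Cronbach's alpha. A sketch of the computation, using random toy data rather than data generated from the Bower-Trabasso model (so the coefficient here hovers near zero); repeating it over many simulated data sets yields the sampling distribution the paper studies:

```python
import numpy as np

def hoyt_reliability(X):
    """Hoyt's ANOVA-based reliability for a persons x items score matrix."""
    n, k = X.shape
    grand = X.mean()
    ss_persons = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_items = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_resid = ((X - grand) ** 2).sum() - ss_persons - ss_items
    ms_persons = ss_persons / (n - 1)
    ms_resid = ss_resid / ((n - 1) * (k - 1))
    return 1 - ms_resid / ms_persons

rng = np.random.default_rng(17)
# Toy binary data: trial-by-trial correctness for 50 subjects x 12 trials.
X = (rng.random((50, 12)) < 0.6).astype(float)
print(f"Hoyt reliability: {hoyt_reliability(X):.3f}")
```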

14.
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
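The quadratic form in question is b'Rb = R² for fixed standardized coefficients b. The article's algorithm is supplied as R code; as a rough illustration of the constraint itself, the Python sketch below uses a naive rescaling construction of our own (not Waller's method) to produce unit-diagonal PD matrices satisfying the form exactly:

```python
import numpy as np

rng = np.random.default_rng(23)

def fungible_R(b, r_squared, max_tries=1000):
    """Random unit-diagonal matrices R with b' R b = r_squared.  Draw a
    random correlation matrix, then scale its off-diagonal part so the
    quadratic form hits the target; keep only PD solutions."""
    p = len(b)
    target = r_squared - np.sum(b ** 2)   # off-diagonal contribution needed
    for _ in range(max_tries):
        R0 = np.corrcoef(rng.normal(size=(p, p + 2)))  # random correlations
        off = R0 - np.eye(p)
        c = b @ off @ b
        if abs(c) < 1e-10:
            continue
        R = np.eye(p) + (target / c) * off
        if np.abs(R).max() <= 1 and np.linalg.eigvalsh(R).min() > 1e-10:
            return R
    raise RuntimeError("no PD solution found")

b = np.array([0.3, 0.2, 0.4])
R = fungible_R(b, r_squared=0.40)
print(np.round(R, 3), f"\nb'Rb = {b @ R @ b:.3f}")
```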

15.
16.
We present word frequencies based on subtitles of British television programmes. We show that the SUBTLEX-UK word frequencies explain more of the variance in the lexical decision times of the British Lexicon Project than the word frequencies based on the British National Corpus and the SUBTLEX-US frequencies. In addition to the word form frequencies, we also present measures of contextual diversity, part-of-speech-specific word frequencies, word frequencies in children's programmes, and word bigram frequencies, giving researchers of British English access to the full range of norms recently made available for other languages. Finally, we introduce a new measure of word frequency, the Zipf scale, which we hope will stop the current misunderstandings of the word frequency effect.
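The Zipf value is the base-10 logarithm of a word's frequency per billion words, giving a scale that runs from roughly 1 (very rare) to 7 (very frequent). A minimal sketch with made-up counts and corpus size, omitting the smoothing used for the published norms:

```python
import numpy as np

def zipf_scale(count, corpus_size):
    """Zipf value = log10(frequency per billion words)."""
    return np.log10(count / corpus_size * 1e9)

# Hypothetical counts against a hypothetical 200-million-word subtitle corpus.
for word, count in [("the", 12_000_000), ("table", 20_000), ("obelisk", 40)]:
    print(f"{word:8s} Zipf = {zipf_scale(count, 200e6):.2f}")
```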

17.
Researchers in the field of conjoint analysis know that index-of-fit values worsen as the judgmental error of evaluation increases. This simulation study provides guidelines on goodness of fit based on the distribution of index-of-fit values for different conjoint analysis designs. The study design included the following factors: number of profiles, number of attributes, algorithm used, and judgmental model used. Critical values are provided for deciding the statistical significance of conjoint analysis results. Using these cumulative distributions, the power of the test used to reject the null hypothesis of random ranking is calculated. The test is found to be quite powerful except for the case of very small residual degrees of freedom. The authors thank the editor, the three reviewers, and Ellen Foxman for helpful comments on the paper. Sanjay Mishra was a doctoral student at Washington State University at the time this research was completed; he is currently in the Department of Marketing at the University of Kansas.
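The logic of tabulating such critical values can be sketched with Kendall's tau standing in as the index of fit; the paper's exact index and estimation algorithms differ, so treat this only as the Monte Carlo shell:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(29)

def critical_tau(n_profiles, n_sims=10_000, alpha=0.05):
    """Monte Carlo critical value of Kendall's tau under the null
    hypothesis of purely random ranking of the profiles."""
    reference = np.arange(n_profiles)        # model-predicted preference order
    taus = np.empty(n_sims)
    for i in range(n_sims):
        taus[i] = kendalltau(reference, rng.permutation(n_profiles))[0]
    return np.quantile(taus, 1 - alpha)

# A ranking of 16 profiles must achieve roughly this tau to beat chance:
print(f"critical tau (alpha = .05, 16 profiles): {critical_tau(16):.3f}")
```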

18.
When using latent class analysis to explore multivariate categorical data, an important question is how many classes are appropriate for the data. An obvious candidate to answer the question is the likelihood ratio test of c0 against c1 classes. In this paper this test is investigated by Monte Carlo methods; the results confirm that the usually assumed null distribution is inappropriate.
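The phenomenon is easy to reproduce in a related setting. The sketch below uses a Gaussian mixture as a stand-in for a latent class model (scikit-learn's GaussianMixture replaces an LCA fitting routine, which is an assumption of convenience): simulating under the c0 = 1 model and refitting both models shows the LRT's null distribution departing from the naive chi-squared reference:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(31)

def lrt_statistic(X, c0=1, c1=2):
    """2 * (logL_c1 - logL_c0) for mixtures with c0 and c1 components."""
    ll0 = GaussianMixture(n_components=c0, random_state=0).fit(X).score(X)
    ll1 = GaussianMixture(n_components=c1, random_state=0).fit(X).score(X)
    return 2 * len(X) * (ll1 - ll0)

# Null distribution: simulate repeatedly from the 1-class model, refit both.
null_stats = [lrt_statistic(rng.standard_normal((300, 1))) for _ in range(200)]
print(f"simulated 95th percentile: {np.percentile(null_stats, 95):.2f}")
print("naive chi-squared(3) reference: 7.81; the usual null is inappropriate")
```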

19.
While conducting intervention research, researchers and practitioners are often interested in how the intervention functions not only at the group level, but also at the individual level. One way to examine individual treatment effects is through multiple-baseline studies analyzed with multilevel modeling. This analysis allows for the construction of confidence intervals, which are strongly recommended in the reporting guidelines of the American Psychological Association. The purpose of this study was to examine the accuracy of confidence intervals of individual treatment effects obtained from multilevel modeling of multiple-baseline data. Monte Carlo methods were used to examine performance across conditions varying in the number of participants, the number of observations per participant, and the dependency of errors. The accuracy of the confidence intervals depended on the method used, with the greatest accuracy being obtained when multilevel modeling was coupled with the Kenward-Roger method of estimating degrees of freedom.
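The coverage-checking logic itself is easy to sketch. The version below evaluates a plain per-participant t interval that ignores the error dependency (the multilevel/Kenward-Roger machinery the paper compares is not reproduced here); all parameter values are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(37)

def coverage(n_sims=2000, n_obs=20, effect=1.0, rho=0.3):
    """Proportion of nominal 95% intervals for one participant's treatment
    effect containing the true value, under AR(1) errors with parameter rho."""
    phase = np.repeat([0.0, 1.0], n_obs // 2)   # baseline, then treatment
    half = n_obs // 2
    hits = 0
    for _ in range(n_sims):
        e = np.empty(n_obs)
        e[0] = rng.standard_normal()
        for t in range(1, n_obs):                # AR(1), unit variance
            e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        y = effect * phase + e
        diff = y[phase == 1].mean() - y[phase == 0].mean()
        se = np.sqrt(y[phase == 1].var(ddof=1) / half
                     + y[phase == 0].var(ddof=1) / half)
        tcrit = stats.t.ppf(0.975, df=n_obs - 2)
        hits += abs(diff - effect) <= tcrit * se
    return hits / n_sims

print(f"coverage with rho = 0.3: {coverage():.3f}  (nominal .95)")
```

Running this shows coverage drifting away from .95 as rho grows, which is exactly the kind of inaccuracy the study quantifies.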

20.
A simulation study of the limitations of exploratory factor analysis in test construction
Liu Hongyun (刘红云) & Meng Qingmao (孟庆茂). Psychological Science (心理科学), 2002, 25(2), 177-179
Using a simulation approach, this study generated data that fit a confirmatory factor analysis model well in order to examine the limitations of exploratory factor analysis in test construction. The results show that exploratory factor analysis, being a purely data-driven statistical method, reaches conclusions inconsistent with the theoretical assumptions when the correlations among factors are high. The paper also offers a preliminary discussion of some constraints on convergent validity in testing and, drawing on concrete cases, explains the substance of the moderate-correlation restriction.
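The phenomenon is simple to demonstrate: generate data from a clean two-correlated-factor structure and inspect the varimax-rotated EFA loadings. Loading values and sample sizes below are illustrative, and the rotation option requires scikit-learn 0.24 or later:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(41)

def efa_loadings(factor_corr, n=1000):
    """Data from a clean two-factor structure (3 items per factor);
    returns varimax-rotated EFA loadings."""
    cov_f = np.array([[1.0, factor_corr], [factor_corr, 1.0]])
    F = rng.multivariate_normal([0, 0], cov_f, size=n)
    L = np.array([[0.7, 0.0], [0.7, 0.0], [0.7, 0.0],
                  [0.0, 0.7], [0.0, 0.7], [0.0, 0.7]])
    X = F @ L.T + 0.5 * rng.standard_normal((n, 6))
    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
    return fa.components_.T

# Weakly correlated factors: EFA recovers the design.
# Strongly correlated factors: loadings blur across factors.
for r in (0.2, 0.8):
    print(f"factor correlation = {r}\n{np.round(efa_loadings(r), 2)}\n")
```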
