Similar Documents
20 similar documents found.
1.
Operating instructions for a series of analysis of variance programs for one-, two-, and three-treatment experimental designs are described. The emphasis is on versatility, speed, accuracy, and sufficiency of output. The on-line aspect of FOCAL allows extensive transformations of raw data. Procedures and terminology conform to Kirk (1968) to provide information for pooling error terms for various models, mean comparisons, and trend analysis. Patches are given for data tape input.

2.
We discuss the statistical testing of three relevant hypotheses involving Cronbach's alpha: one where alpha equals a particular criterion; a second testing the equality of two alpha coefficients for independent samples; and a third testing the equality of two alpha coefficients for dependent samples. For each of these hypotheses, various statistical tests have been proposed. Over the years, these tests have depended on progressively fewer assumptions. We propose a new approach to testing the three hypotheses that relies on even fewer assumptions, is especially suited for discrete item scores, and can be applied easily to tests containing large numbers of items. The new approach uses marginal modelling. We compared the Type I error rate and the power of the marginal modelling approach to several of the available tests in a simulation study using realistic conditions. We found that the marginal modelling approach had the most accurate Type I error rates, whereas the power was similar across the statistical tests.
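The marginal-modelling machinery itself is not given in the abstract, but the quantity under test is easy to reproduce. Below is a minimal Python sketch for the first hypothesis (alpha equals a criterion), assuming a respondents-by-items score matrix, a made-up criterion of 0.80, and a simple percentile bootstrap rather than the authors' approach; all function names and numbers are illustrative.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def bootstrap_alpha_ci(scores, n_boot=5000, level=0.95, seed=1):
    """Percentile bootstrap CI for alpha; resamples respondents with replacement."""
    rng = np.random.default_rng(seed)
    n = scores.shape[0]
    boots = [cronbach_alpha(scores[rng.integers(0, n, n)]) for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Toy data: 200 respondents, 6 discrete item scores in 0-4.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
items = np.clip(np.round(2 + ability + rng.normal(scale=1.0, size=(200, 6))), 0, 4)

alpha = cronbach_alpha(items)
lo, hi = bootstrap_alpha_ci(items)
criterion = 0.80  # hypothetical criterion for H0: alpha = 0.80
print(f"alpha = {alpha:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}], criterion inside CI: {lo <= criterion <= hi}")
```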

3.
During serial self-paced choice response tasks, mean reaction times (RTs) for responses made in order to correct errors are faster than mean RTs for other correct responses. Experiment 1 showed that subjects can accurately correct errors in a four-choice task by making the response which they should have made, even though they are given no indication that an error has occurred. Experiment 2 showed that subjects correct their errors faster and more accurately when they use this correction procedure than when they make a common response to all errors. The implication that subjects can correct errors because they know what response they should have made allows some comments on the constraints which must be met by the various models that have been proposed to explain error correction.

4.
Three Ss scanned matrices of letters for 40 sessions in a test of Neisser’s claim that feature tests in high-speed searches operate independently and in parallel. In the multiple-target condition (MTC), the matrix contained any one of four target letters, while in the four single-target conditions (STC), the S knew which particular target was embedded in the list. In contrast to previous studies, the error rates for individual target letters in the MTC were analyzed separately rather than being pooled. Two Ss made more errors on the hardest target when searched for in the MTC than in the STC. This difference would be masked by pooling error rates. The third S’s scanning rate in the MTC was not as rapid as in the STC. Neither a sequential nor a strictly parallel feature processing model can account for these data.

5.
In this paper, we describe a general-purpose data simulator, Datasim, which is useful for anyone conducting computer-based laboratory assignments in statistics. Simulations illustrating sampling distributions, the central limit theorem, Type I and Type II decision errors, the power of a test, the effects of violating assumptions, and the distinction between orthogonal and non-orthogonal contrasts are discussed. Simulations illustrating other statistical concepts—partial correlation, regression to the mean, heteroscedasticity, the partitioning of error terms in split-plot designs, and so on—can be developed easily. Simulations can be assigned as laboratory exercises, or the instructor can execute the simulations during class, integrate the results into an ongoing lecture, and use the results to initiate class discussion of the relevant statistical concepts.
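Datasim itself is a dedicated simulation program, but the kind of laboratory exercise described here is easy to approximate. A rough Python analogue of one such exercise, assuming a two-sample t-test design with an illustrative effect size of 0.5 and 30 observations per group, estimates the empirical Type I error rate under a true null and the power under the assumed effect:

```python
import numpy as np
from scipy import stats

def rejection_rate(n_per_group, effect=0.0, alpha=0.05, n_sims=10_000, seed=0):
    """Proportion of two-sample t tests rejecting H0 across simulated experiments."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        g1 = rng.normal(0.0, 1.0, n_per_group)
        g2 = rng.normal(effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(g1, g2)
        rejections += p < alpha
    return rejections / n_sims

print("Type I error (true null):", rejection_rate(30, effect=0.0))
print("Power (effect size 0.5): ", rejection_rate(30, effect=0.5))
```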

6.
The assumptions of the model for factor analysis do not exclude a class of indeterminate covariances between factors and error variables (Grayson, 2003). The construction of all factors of the model for factor analysis is generalized to incorporate indeterminate factor-error covariances. A necessary and sufficient condition is given for indeterminate factor-error covariances to be arbitrarily small, for mean square convergence of the regression predictor of factor scores, and for the existence of a unique determinate factor and error variable. The determinate factor and error variable are uncorrelated and satisfy the defining assumptions of factor analysis. Several examples are given to illustrate the results.
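For orientation, the standard common factor model referred to here can be written as below; the last assumption, zero covariance between factors and error variables, is exactly the one whose indeterminacy Grayson (2003) and this paper examine. This is the textbook formulation, not notation taken from the paper.

```latex
% Common factor model for p observed variables and m common factors.
\[
  x = \Lambda \xi + \varepsilon, \qquad
  \operatorname{Cov}(\xi) = \Phi, \qquad
  \operatorname{Cov}(\varepsilon) = \Psi \ (\text{diagonal}), \qquad
  \operatorname{Cov}(\xi, \varepsilon) = 0 ,
\]
\[
  \Sigma = \operatorname{Cov}(x) = \Lambda \Phi \Lambda^{\top} + \Psi .
\]
```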

7.
Count data are commonly assumed to have a Poisson distribution, especially when there is no diagnostic procedure for checking this assumption. However, count data rarely fit the restrictive assumptions of the Poisson distribution, and violating these assumptions commonly results in overdispersion, which invalidates the Poisson model. Undetected overdispersion can lead to seriously misleading inferences, so its detection is essential. In this study, different overdispersion diagnostic tests are evaluated in simulation studies. In Experiment 1, the Type I error rate is compared with the nominal level under different sample size and lambda conditions; the χ2/df test performs remarkably well. In Experiments 2 and 3, statistical power is compared under different sample size, lambda, and overdispersion conditions; the χ2 and LR tests provide the highest statistical power.
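The simplest of these diagnostics can be sketched directly. Assuming an intercept-only Poisson model (so the fitted mean is just the sample mean), the Pearson dispersion statistic below compares the Pearson chi-square divided by its degrees of freedom against 1 and uses the chi-square distribution for an approximate p-value; this is a generic illustration, not the exact test battery evaluated in the study, and the toy data parameters are invented.

```python
import numpy as np
from scipy import stats

def poisson_dispersion_test(counts):
    """Pearson chi-square overdispersion check for an intercept-only Poisson model."""
    counts = np.asarray(counts, dtype=float)
    n = counts.size
    mu = counts.mean()                      # fitted mean under the intercept-only model
    pearson_chi2 = np.sum((counts - mu) ** 2 / mu)
    df = n - 1
    dispersion = pearson_chi2 / df          # close to 1 if the Poisson assumption holds
    p_value = stats.chi2.sf(pearson_chi2, df)
    return dispersion, p_value

rng = np.random.default_rng(42)
poisson_counts = rng.poisson(lam=3.0, size=500)
overdispersed = rng.negative_binomial(n=2, p=2 / (2 + 3.0), size=500)  # mean 3, variance 7.5

for label, y in [("Poisson", poisson_counts), ("negative binomial", overdispersed)]:
    d, p = poisson_dispersion_test(y)
    print(f"{label:17s} dispersion = {d:.2f}, p = {p:.4f}")
```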

8.
Subjects made magnitude estimates of the average loudness of pairs of 1,000-Hz tones varying in sound pressure. A test of fit of an averaging model employing an analysis of variance suggested that the judgments were internally consistent. However, estimates of the parameters of a two-stage model based on the assumption that power transformations were imposed in both input and output implied a nonlinear output function, inconsistent with the averaging model. Additional analyses employing a nonmetric scaling solution also suggested that output was nonlinear, indicating that this implication was not an artifact of the strong assumptions of the two-stage model. Large differences were found among the output functions of individual subjects, and it was suggested that these may have inflated the error term in the analysis of variance, reducing its power to detect violations of the additive model. Similar analyses were performed on data from judgments of average grayness collected by Weiss (1972).

9.
The Conners' Continuous Performance Test (CPT) is a neuropsychological task that has repeatedly been shown to differentiate ADHD from normal groups. Several variables may be derived from the Conners' CPT, including errors of omission and commission, mean hit reaction time (RT), mean hit RT standard error, d', and β. What each CPT parameter actually assesses has largely been based upon clinical assumptions and the face validity of each measure (e.g., omission errors measure inattention, commission errors measure impulsivity). This study attempts to examine relations between various CPT variables and phenotypic behaviors so as to better understand the various CPT variables. An epidemiological sample of 817 children was administered the Conners' CPT. Diagnostic interviews were conducted with parents to determine ADHD symptom profiles for all children. Children diagnosed with ADHD had more variable RTs, made more errors of commission and omission, and demonstrated poorer perceptual sensitivity than nondiagnosed children. Regarding specific symptoms, generalized estimating equations (GEE) and ANCOVAs were conducted to determine specific relationships between the 18 DSM-IV ADHD symptoms and 6 CPT parameters. CPT performance measures demonstrated significant relationships to ADHD symptoms but did not demonstrate symptom domain specificity according to a priori assumptions. Overall performance on the two signal detection measures, d' and β, was highly related to all ADHD symptoms across symptom domains. Further, increased variability in RTs over time was related to most ADHD symptoms. Finally, it appears that at least one CPT variable, mean hit RT, is minimally related to ADHD symptoms as a whole, but does demonstrate some specificity in its link with symptoms of hyperactivity.
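The two signal detection indices mentioned here are computed from hit and false-alarm rates under the usual equal-variance Gaussian model. A minimal sketch follows; the counts are hypothetical, and a standard log-linear correction (add 0.5 to each count, 1 to each trial total) is applied so that rates of 0 or 1 do not break the z transform.

```python
from math import exp
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """d' and beta from raw counts, with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                  # perceptual sensitivity
    criterion = -(z_hit + z_fa) / 2
    beta = exp(d_prime * criterion)         # likelihood-ratio response bias at the criterion
    return d_prime, beta

# Hypothetical CPT-style counts: responses to targets vs. non-targets.
d_prime, beta = sdt_indices(hits=310, misses=14, false_alarms=10, correct_rejections=26)
print(f"d' = {d_prime:.2f}, beta = {beta:.2f}")
```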

10.
The existence of tradeoffs between speed and accuracy is an important interpretative problem in choice reaction time (RT) experiments. A recently suggested solution to this problem is the use of complete speed-accuracy tradeoff functions as the primary dependent variable in choice RT experiments instead of a single mean RT and error rate. This paper reviews and compares existing procedures for generating empirical speed-accuracy tradeoff functions for use as dependent variables in choice RT experiments. Two major types of tradeoff function are identified, and their experimental designs and computational procedures are discussed and evaluated. Systematic disparities are demonstrated between the two tradeoff functions in both empirical and computer-simulated data. Although all existing procedures for generating speed-accuracy tradeoff functions involve empirically untested assumptions, one procedure requires less stringent assumptions and is less sensitive to sources of experimental and statistical error. This procedure involves plotting accuracy against RT over a set of experimental conditions in which subjects’ criteria for speed vs. accuracy are systematically varied.
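The recommended procedure reduces to a simple aggregation: one (mean RT, accuracy) point per speed-stress condition, then accuracy plotted against RT. A sketch under assumed trial-level data is below; the three instruction conditions, column names, and all numbers are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
mean_rt = {"speed": 350, "neutral": 450, "accuracy": 550}       # ms, illustrative
p_correct = {"speed": 0.80, "neutral": 0.90, "accuracy": 0.96}  # illustrative

rows = []
for cond in mean_rt:
    for _ in range(200):
        rows.append({
            "condition": cond,
            "rt_ms": rng.normal(mean_rt[cond], 60),
            "correct": rng.random() < p_correct[cond],
        })
trials = pd.DataFrame(rows)

# One (mean RT, accuracy) point per condition defines the empirical tradeoff function.
tradeoff = trials.groupby("condition").agg(mean_rt=("rt_ms", "mean"),
                                           accuracy=("correct", "mean"))
print(tradeoff.sort_values("mean_rt"))
```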

11.
A model for response latency in recognition memory is described. It is a strength model incorporating the notion of multiple observations, with the additional assumptions that the variance of the strength distributions increases with set size and that the observer attempts to keep his error rate at a constant level over set size. It is shown that the model can, without recourse to particular parameter values, predict a near-linear RT set-size function and, since it is a TSD model in its decision aspects, can account for errors and hence error latencies in the recognition task. After the model is described, two experiments are performed which test the prediction that correct mean latency is generally shorter than incorrect mean latency. The prediction is confirmed, and this feature is discussed in general, the model being compared with that of Juola, Fischler, Wood, and Atkinson (1971) in this respect. Some possible modifications to the latter model are also considered.

12.
Several studies of choice behavior (risk taking) in achievement-oriented situations are reanalyzed. The usual ways of pooling all choices over trials and subjects conceal the series of subjects' decisions and the dynamics inherent in these decisions. A basic strategy of subjects in an achievement-oriented choice situation seems to be to start with an easy task, to choose a more difficult one after each success, and mostly to stay at the same difficulty level after a failure. A computer model incorporating these simple assumptions generates preference functions over the order of difficulty levels that are indistinguishable from those found in empirical studies. It is concluded that the study of choice behavior in achievement-oriented situations should be based on the analysis of the series of single decisions made by one subject. For this we need models that allow the prediction of such decisions and of action-controlling cognitions and emotions.
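The strategy described here — start easy, step up after a success, stay put after a failure — is simple enough to state as a toy simulation. The difficulty levels, success probabilities, and trial count below are all assumptions made for illustration; this is not the authors' computer model.

```python
import numpy as np

def simulate_choices(n_trials=50, n_levels=10, seed=3):
    """Toy strategy: start easy, step up after success, stay after failure."""
    rng = np.random.default_rng(seed)
    # Assumed success probabilities: easy levels are solved often, hard ones rarely.
    p_success = np.linspace(0.95, 0.15, n_levels)
    level = 0                      # start with the easiest task
    chosen = []
    for _ in range(n_trials):
        chosen.append(level)
        if rng.random() < p_success[level] and level < n_levels - 1:
            level += 1             # choose a more difficult task after success
        # after failure: stay at the same difficulty level
    return np.bincount(chosen, minlength=n_levels) / n_trials

preferences = simulate_choices()
for lvl, share in enumerate(preferences, start=1):
    print(f"difficulty {lvl:2d}: chosen on {share:.0%} of trials")
```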

13.
One approach to the analysis of repeated measures data allows researchers to model the covariance structure of the data rather than presume a certain structure, as is the case with conventional univariate and multivariate test statistics. This mixed-model approach was evaluated for testing all possible pairwise differences among repeated measures marginal means in a between-subjects × within-subjects design. Specifically, the authors investigated Type I error and power rates for a number of simultaneous and stepwise multiple comparison procedures using SAS (1999) PROC MIXED in unbalanced designs when normality and covariance homogeneity assumptions did not hold. J. P. Shaffer's (1986) sequentially rejective step-down and Y. Hochberg's (1988) sequentially acceptive step-up Bonferroni procedures, based on an unstructured covariance structure, had superior Type I error control and power to detect true pairwise differences across the investigated conditions.
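Of the procedures compared, Hochberg's (1988) step-up Bonferroni procedure is simple to sketch on its own: order the p-values and, working from the largest downward, reject every hypothesis at and below the first p-value that clears its stepwise threshold. The p-values below are invented, and the mixed-model fitting itself (PROC MIXED with an unstructured covariance) is not reproduced here.

```python
import numpy as np

def hochberg_step_up(p_values, alpha=0.05):
    """Hochberg's step-up procedure: boolean rejection decision per hypothesis."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                 # indices of ascending p-values
    reject = np.zeros(m, dtype=bool)
    # Threshold for the i-th smallest p-value is alpha / (m - i + 1); step up from the largest.
    for rank in range(m, 0, -1):
        if p[order[rank - 1]] <= alpha / (m - rank + 1):
            reject[order[:rank]] = True   # reject this hypothesis and all with smaller p
            break
    return reject

pairwise_p = [0.001, 0.010, 0.018, 0.041, 0.120, 0.300]  # hypothetical pairwise p-values
print(hochberg_step_up(pairwise_p))
```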

14.
Mediation analysis requires that a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.
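The two biases can be made concrete with a small simulation of the single-mediator model. The sketch below generates data from an assumed true model with a = b = 0.5, attenuates the mediator to a chosen reliability, omits a confounder of the M-to-Y relation when fitting, and compares the estimated a*b with the true value of 0.25. Every numeric value here is an illustrative assumption, not a result from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
a, b, c_prime = 0.5, 0.5, 0.2      # assumed true paths; true mediated effect a*b = 0.25
reliability = 0.7                   # assumed reliability of the observed mediator
conf_m, conf_y = 0.4, 0.4           # effect of the omitted confounder on M and on Y

x = rng.normal(size=n)              # randomized treatment, perfectly reliable
u = rng.normal(size=n)              # confounder of the M -> Y relation (omitted when fitting)
m_true = a * x + conf_m * u + rng.normal(size=n)
y = b * m_true + c_prime * x + conf_y * u + rng.normal(size=n)

# Observed mediator = true mediator + measurement error scaled to the chosen reliability.
err_sd = np.sqrt(m_true.var() * (1 - reliability) / reliability)
m_obs = m_true + rng.normal(scale=err_sd, size=n)

def ols(design, outcome):
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs

a_hat = ols(np.column_stack([np.ones(n), x]), m_obs)[1]
b_hat = ols(np.column_stack([np.ones(n), m_obs, x]), y)[1]
print(f"true a*b = {a * b:.3f}, estimated a*b = {a_hat * b_hat:.3f}")
```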

15.
Olivier Roy & Eric Pacuit, Synthese, 2013, 190(5): 891–908
In this paper we study substantive assumptions in social interaction. By substantive assumptions we mean contingent assumptions about what the players know and believe about each other’s choices and information. We first explain why substantive assumptions are fundamental for the analysis of games and, more generally, social interaction. Then we show that they can be compared formally, and that there exist contexts where no substantive assumptions are being made. Finally we show that the questions raised in this paper are related to a number of issues concerning “large” structures in epistemic game theory.

16.
Failure of Engineering Artifacts: A Life Cycle Approach
Failure is a central notion both in the ethics of engineering and in engineering practice. Engineers devote considerable resources to assuring that their products will not fail, and considerable progress has been made in the development of tools and methods for understanding and avoiding failure. Engineering ethics, on the other hand, is concerned with the moral and social aspects related to the causes and consequences of technological failures. But what is meant by failure, and what does it mean that a failure has occurred? The subject of this paper is how engineers use and define this notion. Although a traditional definition of failure can be identified that is shared by a large part of the engineering community, the literature shows that engineers are also willing to consider as failures events and circumstances that are at odds with this traditional definition. These cases violate one or more of three assumptions made by the traditional approach to failure. An alternative approach, inspired by the notion of product life cycle, is proposed which dispenses with these assumptions. Besides being able to address the traditional cases of failure, it can deal successfully with the problematic cases. The adoption of a life cycle perspective allows the introduction of a clearer notion of failure and a classification of failure phenomena that takes into account the roles of stakeholders involved in the various stages of a product life cycle.

17.
A concept learning model was developed and tested in two conjunctive attribute identification tasks. The model includes assumptions about the focus of attention, decision making, and memory for stimulus information and prior decisions. Predictions are made about how S changes his hypothesis following an error. Procedures in both tasks allowed inference of the subject's current hypothesis. The hypothesis selections and error statistics were in the majority of cases accurately predicted by the model. Deviations from predictions on hypothesis sampling occurred for naive Ss but not for trained Ss who were required to state a hypothesis on each trial.

18.
Serlin, R. C., Psychological Methods, 2000, 5(2): 230–240
Monte Carlo studies provide the information needed to help researchers select appropriate analytical procedures under design conditions in which the underlying assumptions of the procedures are not met. In Monte Carlo studies, the 2 errors that one could commit involve (a) concluding that a statistical procedure is robust when it is not or (b) concluding that it is not robust when it is. In previous attempts to apply standard statistical design principles to Monte Carlo studies, the less severe of these errors has been wrongly designated the Type I error. In this article, a method is presented for controlling the appropriate Type I error rate; the determination of the number of iterations required in a Monte Carlo study to achieve desired power is described; and a confidence interval for a test's true Type I error rate is derived. A robustness criterion is also proposed that is a compromise between W. G. Cochran's (1952) and J. V. Bradley's (1978) criteria.
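Two of the ingredients mentioned here reduce to standard binomial calculations: a confidence interval for a test's true Type I error rate from an observed rejection count, and (as a simpler stand-in for the article's power-based derivation) the number of iterations needed to estimate that rate to a target margin of error. The sketch below uses the normal approximation; the nominal level, margin, and counts are illustrative, and the article's own robustness criterion is not reproduced.

```python
from math import ceil, sqrt
from scipy.stats import norm

def type1_ci(n_rejections, n_iterations, level=0.95):
    """Normal-approximation CI for the true Type I error rate from a Monte Carlo run."""
    p_hat = n_rejections / n_iterations
    z = norm.ppf((1 + level) / 2)
    half_width = z * sqrt(p_hat * (1 - p_hat) / n_iterations)
    return p_hat - half_width, p_hat + half_width

def iterations_needed(alpha=0.05, margin=0.005, level=0.95):
    """Iterations so the estimated rejection rate falls within `margin` of the true rate."""
    z = norm.ppf((1 + level) / 2)
    return ceil(z ** 2 * alpha * (1 - alpha) / margin ** 2)

print("95% CI for true alpha:", type1_ci(n_rejections=560, n_iterations=10_000))
print("Iterations for +/-0.005 around alpha = 0.05:", iterations_needed())
```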

19.
The purpose of this study was to evaluate a modified test of equivalence for conducting normative comparisons when distribution shapes are non‐normal and variances are unequal. A Monte Carlo study was used to compare the empirical Type I error rates and power of the proposed Schuirmann–Yuen test of equivalence, which utilizes trimmed means, with that of the previously recommended Schuirmann and Schuirmann–Welch tests of equivalence when the assumptions of normality and variance homogeneity are satisfied, as well as when they are not satisfied. The empirical Type I error rates of the Schuirmann–Yuen were much closer to the nominal α level than those of the Schuirmann or Schuirmann–Welch tests, and the power of the Schuirmann–Yuen was substantially greater than that of the Schuirmann or Schuirmann–Welch tests when distributions were skewed or outliers were present. The Schuirmann–Yuen test is recommended for assessing clinical significance with normative comparisons.
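The building blocks of the recommended test can be sketched directly: Yuen's trimmed-means t statistic (trimmed means, winsorized variances, Welch-type degrees of freedom) plugged into Schuirmann's two one-sided tests. The trimming proportion, equivalence bounds, and toy data below are assumptions for illustration, and the exact formulation in the article may differ in detail.

```python
import numpy as np
from scipy import stats

def yuen_components(x, trim=0.2):
    """Trimmed mean, squared-SE contribution, and effective sample size (Yuen, 1974)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    g = int(np.floor(trim * n))
    h = n - 2 * g                                   # effective sample size after trimming
    trimmed_mean = x[g:n - g].mean()
    winsorized = np.concatenate([np.full(g, x[g]), x[g:n - g], np.full(g, x[n - g - 1])])
    ss_w = np.sum((winsorized - winsorized.mean()) ** 2)
    d = ss_w / (h * (h - 1))                        # squared SE contribution of this group
    return trimmed_mean, d, h

def schuirmann_yuen_tost(x, y, lower, upper, trim=0.2, alpha=0.05):
    """Two one-sided Yuen tests of equivalence for the difference in trimmed means."""
    m1, d1, h1 = yuen_components(x, trim)
    m2, d2, h2 = yuen_components(y, trim)
    diff, se = m1 - m2, np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    p_lower = stats.t.sf((diff - lower) / se, df)   # H0: diff <= lower bound
    p_upper = stats.t.cdf((diff - upper) / se, df)  # H0: diff >= upper bound
    return diff, max(p_lower, p_upper) < alpha      # equivalent only if both tests reject

rng = np.random.default_rng(5)
clinical = rng.lognormal(mean=0.0, sigma=0.6, size=60)    # skewed toy data
normative = rng.lognormal(mean=0.05, sigma=0.6, size=80)
print(schuirmann_yuen_tost(clinical, normative, lower=-0.5, upper=0.5))
```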

20.
The assumptions about the self made in the professional and managerial discourses of guidance are examined. It is suggested that these assumptions obstruct the capacity of guidance workers to explain their own practices, mystifying the social, economic and cultural processes of which they are part. This is important for two reasons. First, unquestioned assumptions can result in claims for guidance being made which are misleading. Second, if those assumptions are part of the professional formation of guidance workers, there may well need to be changes made to training programmes. Drawing on contemporary debates over identity, modernity and postmodernity, the case is made for a more explicit and informed debate about the self in guidance.
