Similar Literature
20 similar documents retrieved.
1.
A general procedure is provided for comparing correlation coefficients between optimal linear composites. The procedure allows computationally efficient significance tests on independent or dependent multiple correlations, partial correlations, and canonical correlations, with or without the assumption of multivariate normality. Evidence from some Monte Carlo studies on the effectiveness of the methods is also provided. This research was supported in part by an operating grant (#67-4640) to the first author from the Natural Sciences and Engineering Research Council of Canada. The authors would also like to acknowledge the helpful comments and encouragement of Alexander Shapiro, Stanley Nash, and Ingram Olkin.
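The paper's general procedure for composites is not reproduced in the abstract; as a rough illustration of the kind of comparison involved, the sketch below shows the classical Fisher z test for the difference between two independent correlation coefficients. The function name and the toy numbers are assumptions for the example, not the authors' method.

```python
# Minimal sketch: Fisher z test for the difference between two independent
# correlations. A classical baseline, not the paper's general procedure for
# correlations between optimal linear composites.
import numpy as np
from scipy import stats

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-sided test of H0: rho1 == rho2 for correlations from independent samples."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher z-transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))    # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Toy usage (hypothetical numbers)
z, p = compare_independent_correlations(r1=0.62, n1=120, r2=0.45, n2=150)
print(f"z = {z:.3f}, p = {p:.4f}")
```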

2.
The problem of testing two correlated proportions with incomplete data is considered by means of Monte Carlo simulation studies. A test proposed in this paper, which can be regarded as a generalization of McNemar's test, is recommended in all cases with incomplete data and samples that are not too small.
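For context, the ordinary McNemar test for complete paired binary data, which the paper generalizes, can be sketched as follows; this is the common continuity-corrected chi-square version, not the generalized test proposed in the paper, and the counts are hypothetical.

```python
# Minimal sketch of the ordinary McNemar test for paired binary data
# (complete cases only); the paper's generalization to incomplete data
# is not reproduced here.
from scipy import stats

def mcnemar_chi2(b, c):
    """b, c = counts of the two discordant cells; continuity-corrected chi-square."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = stats.chi2.sf(chi2, df=1)
    return chi2, p

chi2, p = mcnemar_chi2(b=15, c=6)   # hypothetical discordant counts
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```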

3.
Many theories have been put forward on how people become synchronized or co-regulate each other in daily interactions. These theories are often tested by observing a dyad and coding the presence of multiple target behaviours in small time intervals. The sequencing and co-occurrence of the partners' behaviours across time are then quantified by means of association measures (e.g., the kappa coefficient, the Jaccard similarity index, proportion of agreement). We demonstrate that the association values obtained are not easy to interpret, because they depend on the marginal frequencies and the amount of auto-dependency in the data. Moreover, often no inferential framework is available to test the significance of the association. Even if a significance test exists (e.g., for the kappa coefficient), auto-dependencies are not taken into account, which, as we will show, can seriously inflate the Type I error rate. We compare the effectiveness of a model-based and a permutation-based framework for significance testing. Results of two simulation studies show that within both frameworks test variants exist that successfully account for auto-dependency: the Type I error rate is under control, while power is good.
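As a rough sketch of the permutation-based framework mentioned above, the example below permutes one partner's coded stream and recomputes Cohen's kappa. Note that this naive, unrestricted shuffling destroys auto-dependency, which is exactly the problem the paper addresses, so it is only a baseline illustration; the data and loop counts are hypothetical.

```python
# Naive permutation test for Cohen's kappa between two binary behaviour
# streams. Unrestricted shuffling destroys auto-dependency; block- or
# model-based variants are needed to keep the Type I error rate under control.
import numpy as np

def cohens_kappa(a, b):
    po = np.mean(a == b)                                                 # observed agreement
    pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))   # chance agreement
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=200)          # hypothetical coded stream, partner A
b = (a + (rng.random(200) < 0.3)) % 2     # partner B, partly dependent on A

observed = cohens_kappa(a, b)
null = np.array([cohens_kappa(a, rng.permutation(b)) for _ in range(2000)])
p = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"kappa = {observed:.3f}, permutation p = {p:.4f}")
```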

4.
Extended redundancy analysis (ERA) combines linear regression with dimension reduction to explore the directional relationships between multiple sets of predictors and outcome variables in a parsimonious manner. It aims to extract a component from each set of predictors in such a way that it accounts for the maximum variance of the outcome variables. In this article, we extend ERA into the Bayesian framework, called Bayesian ERA (BERA). The advantages of BERA are threefold. First, BERA enables statistical inference based on samples drawn from the joint posterior distribution of parameters obtained from a Markov chain Monte Carlo algorithm. As such, it does not require any resampling method, which ordinary (frequentist) ERA needs in order to test the statistical significance of parameter estimates. Second, it formally incorporates relevant information from previous research into analyses by specifying informative power prior distributions. Third, BERA handles missing data by implementing multiple imputation using a Markov chain Monte Carlo algorithm, avoiding the potential bias of parameter estimates due to missing data. We assess the performance of BERA through simulation studies and apply BERA to real data regarding academic achievement.

5.
Children aged 3, 4, 5, 6, 8, and 10 years were randomly divided into three training conditions: a strategy modeling condition, a strategy modeling with overt self-verbalization condition, and a control condition. The subjects in the two modeling conditions were given training on four cognitive tasks: a signal task, a match-to-standard task, a paired-associates task, and a twenty-questions task. A 6 (age) × 2 (sex) × 3 (treatment) × 2 (trial) analysis of variance was performed on each of the dependent variables associated with each of the four tasks. The results of these analyses indicate that both modeling conditions facilitated performance on the signal and match-to-standard tasks for all six age groups. However, the two modeling procedures facilitated performance on the paired-associates and twenty-questions tasks only in the three older age groups. Since the two modeling procedures did not differ in effectiveness, it was suggested that strategy modeling without overt self-verbalization is the more practical and efficient procedure for facilitating cognitive performance in normal children.

6.
The effectiveness of two hypothesized change mechanisms in cognitive therapy was investigated: logical analysis and empirical hypothesis testing. Thirty-eight spider phobics, as determined by performance on a behavioral avoidance test, were randomly assigned to either one of these two conditions or to a no-treatment control condition. Subjects participated in three group sessions. Outcome phobia questionnaire data suggested that both mechanisms produced desirable changes in a short period of time, with stronger evidence that logical analysis was superior to the control. Outcomes from the behavioral avoidance test and self-efficacy ratings failed to reach statistical significance, but the trends were in the direction of positive change. Results are discussed in terms of the tripartite response desynchrony hypothesis. Suggestions for future process research in cognitive therapy are provided. William O'Donohue, Ph.D., is an assistant professor of psychology at Northern Illinois University. Jeff Szymanski is a graduate student in clinical psychology at Northern Illinois University. The authors would like to thank Christine Casselles, Melissa McKelvie, Thomas M. Brown, Jill C. Rudman, Bonnie Schrieber, Amy Ray, Anne Valle, Lisa Herold, Jacqueline Ryan, Heather Barta, and Angela Leek for their assistance in this project. Moreover, the authors are grateful to Sol Feldman and Jane Fisher for their comments on an earlier version of this paper.

7.
8.
In the context of covariance structure analysis, a unified approach to the asymptotic theory of alternative test criteria for testing parametric restrictions is provided. The discussion develops within a general framework that distinguishes whether or not the fitting function is asymptotically optimal, and allows the null and alternative hypotheses to be only approximations of the true model. Also, the equivalent of the information matrix, and the asymptotic covariance matrix of the vector of summary statistics, are allowed to be singular. When the fitting function is not asymptotically optimal, test statistics that have asymptotically a chi-square distribution are developed as a natural generalization of more classical ones. Issues relevant for power analysis, and the asymptotic theory of a test-related statistic, are also investigated. This research has been supported by the U.S.-Spanish Joint Committee for Cultural and Educational Cooperation, grant number V-B.854020. The author wishes to express his gratitude to P. M. Bentler, who provided very helpful suggestions and research facilities, with a stimulating working environment, at the University of California, Los Angeles, where this work was undertaken. Thanks are also due to W. E. Saris, who provided very valuable comments on earlier versions of this paper. Finally, the editor's and reviewers' suggestions, which led to substantial improvements of this article, are also gratefully acknowledged.

9.
The importance of appropriate test selection for a given research endeavor cannot be over-emphasized. Using samples drawn from eleven populations (differing in shape, peakedness, and density in the tails), this study investigates the small-sample empirical powers of nine k-sample tests against ordered location alternatives under completely randomized designs. The results are then intended to aid the researcher in the selection of a particular procedure appropriate for a given endeavor. To highlight this, an industrial psychology application involving work productivity is presented. Research was supported in part by the Scholastic Assistance Program, Baruch College. The author wishes to thank Professors Matthew Goldstein, Shulamith Gross, David Levine, and Edward Wolf for their helpful comments when writing this paper. In addition, the author wishes to thank the referees and editor for their useful suggestions for improving the paper.
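The nine k-sample tests evaluated in the paper are not identified in the abstract. As one familiar example of a test against ordered location alternatives, the sketch below computes a Jonckheere-Terpstra statistic with a permutation p-value; the group means and sizes are hypothetical.

```python
# Minimal sketch of a Jonckheere-Terpstra test against an ordered location
# alternative, with a permutation p-value. This is one common example only;
# the nine tests compared in the paper are not named in the abstract.
import numpy as np

def jt_statistic(groups):
    """Sum of Mann-Whitney precedence counts over all ordered pairs of groups."""
    J = 0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            xi, xj = np.asarray(groups[i]), np.asarray(groups[j])
            J += np.sum(xi[:, None] < xj[None, :])   # pairs supporting group j > group i
    return J

rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, size=15) for m in (0.0, 0.3, 0.6)]  # hypothetical ordered shift

observed = jt_statistic(groups)
pooled = np.concatenate(groups)
sizes = [len(g) for g in groups]
null = []
for _ in range(2000):
    perm = rng.permutation(pooled)
    null.append(jt_statistic(np.split(perm, np.cumsum(sizes)[:-1])))
p = (1 + np.sum(np.array(null) >= observed)) / (1 + len(null))
print(f"J = {observed}, permutation p = {p:.4f}")
```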

10.
Language acquisition depends on the ability to detect and track the distributional properties of speech. Successful acquisition also necessitates detecting changes in those properties, which can occur when the learner encounters different speakers, topics, dialects, or languages. When encountering multiple speech streams with different underlying statistics but overlapping features, how do infants keep track of the properties of each speech stream separately? In four experiments, we tested whether 8‐month‐old monolingual infants (N = 144) can track the underlying statistics of two artificial speech streams that share a portion of their syllables. We first presented each stream individually. We then presented the two speech streams in sequence, without contextual cues signaling the different speech streams, and subsequently added pitch and accent cues to help learners track each stream separately. The results reveal that monolingual infants experience difficulty tracking the statistical regularities in two speech streams presented sequentially, even when provided with contextual cues intended to facilitate separation of the speech streams. We discuss the implications of our findings for understanding how infants learn and separate the input when confronted with multiple statistical structures.

11.
Many researchers face the problem of missing data in longitudinal research. High-risk samples in particular are characterized by missing data, which can complicate analyses and the interpretation of results. In the current study, our aim was to find the best method to deal with the missing data in a specific study with many missing values on the outcome variable. Therefore, different techniques to handle missing data were evaluated, and a solution to efficiently handle substantial amounts of missing data was provided. A simulation study was conducted to determine the best method to deal with the missing data. Results revealed that multiple imputation (MI) using predictive mean matching was the best method with respect to lowest bias and smallest confidence interval (CI) while maintaining power. Listwise deletion and last observation carried backward also performed acceptably with respect to bias; however, CIs were much larger and the sample size was almost halved using these methods. Longitudinal research in high-risk samples could benefit from using MI in future research to handle missing data. The paper ends with a checklist for handling missing data.
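The abstract does not specify the software used. As a minimal sketch of the idea behind predictive mean matching (PMM), a single imputation pass could look like the following; the variable names, the simple linear model, and the donor pool size k are assumptions, and full MI would repeat this with parameter draws and pool results across m imputed data sets.

```python
# Minimal sketch of one predictive-mean-matching (PMM) pass: regress y on x
# with complete cases, predict for everyone, and fill each missing y with the
# observed y of a randomly chosen "donor" whose prediction is closest. Proper
# MI repeats this with parameter draws and pools results across imputations.
import numpy as np

def pmm_single_pass(x, y, k=5, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    obs = ~np.isnan(y)
    beta = np.polyfit(x[obs], y[obs], deg=1)          # simple linear model on complete cases
    yhat = np.polyval(beta, x)                        # predictions for all cases
    y_imp = y.copy()
    for i in np.flatnonzero(~obs):
        d = np.abs(yhat[obs] - yhat[i])               # distance to observed cases
        donors = np.argsort(d)[:k]                    # k nearest donors
        y_imp[i] = y[obs][rng.choice(donors)]         # borrow an observed value
    return y_imp

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(scale=0.8, size=100)
y[rng.choice(100, size=25, replace=False)] = np.nan   # hypothetical 25% missingness
print(np.isnan(pmm_single_pass(x, y, rng=rng)).sum())  # 0 missing values remain
```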

12.
The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-placement in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by the Gini coefficient and the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income differences as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.

13.
Taking the significance test of the difference between means as an example, this paper introduces the basic principles and methods for estimating statistical power and effect size after a hypothesis test has been carried out on experimental data.
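As an illustration of the two quantities discussed (the paper's own worked examples are not reproduced here), the sketch below computes Cohen's d and a post hoc power estimate for an independent-samples mean comparison; the data are hypothetical and statsmodels' TTestIndPower is just one way to obtain the power figure.

```python
# Minimal sketch: Cohen's d and post hoc power for an independent-samples
# mean comparison. Numbers are hypothetical.
import numpy as np
from statsmodels.stats.power import TTestIndPower

def cohens_d(x1, x2):
    n1, n2 = len(x1), len(x2)
    sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(x1) - np.mean(x2)) / sp            # pooled-SD standardized difference

rng = np.random.default_rng(3)
x1, x2 = rng.normal(0.4, 1, 40), rng.normal(0.0, 1, 40)
d = cohens_d(x1, x2)
power = TTestIndPower().power(effect_size=abs(d), nobs1=len(x1),
                              alpha=0.05, ratio=len(x2) / len(x1))
print(f"d = {d:.2f}, post hoc power = {power:.2f}")
```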

14.
Huitema and McKean (Psychological Bulletin, 110, 291–304, 1991) recently showed, in a Monte-Carlo study, that five conventional estimators of first-order autocorrelation perform poorly for small (< 50) sample sizes. They suggested a modified estimator and a test for autocorrelation. We examine an estimator not considered by Huitema and McKean: the C-statistic (Young, Annals of Mathematical Statistics, 12, 293–300, 1941). A Monte-Carlo study of the small sample properties of the C-statistic shows that it performs as well or better than the modified estimator suggested by Huitema and McKean (1991). The C-statistic is also shown to be closely related to the d-statistic of the widely used Durbin-Watson test.
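As a quick illustration of the quantities involved, the sketch below computes the C-statistic as it is commonly defined from Young (1941) and the Durbin-Watson d for a single series; consult the original papers for the exact small-sample test procedure, and treat the formulas here as an assumption-labeled paraphrase rather than the paper's derivation.

```python
# Sketch of the C-statistic (Young, 1941), as commonly defined, and the
# Durbin-Watson d for one series. Roughly, C ~= 1 - d/2 for a mean-centered
# series; see the original sources for the exact small-sample test.
import numpy as np

def c_statistic(x):
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    return 1.0 - np.sum(np.diff(x) ** 2) / (2.0 * np.sum(dev ** 2))

def durbin_watson(e):
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(4)
x = np.zeros(30)
for t in range(1, 30):                       # hypothetical AR(1) series, phi = 0.5
    x[t] = 0.5 * x[t - 1] + rng.normal()
print(f"C = {c_statistic(x):.3f}, d = {durbin_watson(x - x.mean()):.3f}")
```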

15.
When large numbers of statistical tests are computed, such as in broad investigations of personality and behavior, the number of significant findings required before the total can be confidently considered beyond chance is typically unknown. Employing modern software, specially written code, and new procedures, the present article uses three sets of personality data to demonstrate how approximate randomization tests can evaluate (a) the number of significant correlations between a single variable and a large number of other variables, (b) the number of significant correlations between two large sets of variables, and (c) the average size of a large number of effects. Randomization tests can free researchers to fully explore large data sets and potentially have even wider applicability.
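As a minimal sketch of case (a) above, one can shuffle the single target variable, count how many of its correlations with the other variables reach significance on each shuffle, and compare the observed count with that null distribution of counts. The data, alpha level, and number of shuffles below are assumptions, not the article's code.

```python
# Sketch of an approximate randomization test for the *number* of significant
# correlations between one target variable and many others: shuffle the
# target, recount significant correlations, and compare the observed count
# with the null distribution of counts.
import numpy as np
from scipy import stats

def count_significant(target, others, alpha=0.05):
    return sum(stats.pearsonr(target, others[:, j])[1] < alpha
               for j in range(others.shape[1]))

rng = np.random.default_rng(5)
n, m = 120, 40
others = rng.normal(size=(n, m))
target = 0.25 * others[:, 0] + rng.normal(size=n)       # weak real signal in one column

observed = count_significant(target, others)
null = np.array([count_significant(rng.permutation(target), others)
                 for _ in range(1000)])
p = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"observed significant correlations: {observed}, randomization p = {p:.3f}")
```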

16.
Employing the autogenous-reactive model of obsessions (Behaviour Research and Therapy 41 (2003) 11-29), this study sought to test a hypothesized continuum in which reactive obsessions fall between autogenous obsessions and worry with respect to several thought characteristics concerning content appraisal, perceived form, and thought triggers. Nonclinical undergraduate students (n=435) were administered an online packet of questionnaires designed to examine the three different types of thoughts. Main data analyses included only those displaying moderate levels of obsessions or worries (n=252). According to the most distressing thought, three different groups were formed and compared: autogenous obsession (n=34), reactive obsession (n=76), and worry (n=142). Results revealed that (a) relative to worry, autogenous obsessions were perceived as more bizarre, more unacceptable, more unrealistic, and less likely to occur; (b) autogenous obsessions were more likely to take the form of impulses, urges, or images, whereas worry was more likely to take the form of doubts, apprehensions, or thoughts; and (c) worry was more characterized by awareness and identifiability of thought triggers, with reactive obsessions falling in between across these comparisons. Moreover, reactive obsessions, relative to autogenous obsessions, were more strongly associated with both severity of worry and use of worrying as a thought control strategy. Our data suggest that the reactive subtype represents more worry-like obsessions compared to the autogenous subtype.

17.
18.
While effect size estimates, post hoc power estimates, and a priori sample size determination are becoming a routine part of univariate analyses involving measured variables (e.g., ANOVA), such measures and methods have not been articulated for analyses involving latent means. The current article presents standardized effect size measures for latent mean differences inferred from both structured means modeling and MIMIC approaches to hypothesis testing about differences among means on a single latent construct. These measures are then related to post hoc power analysis, a priori sample size determination, and a relevant measure of construct reliability. I wish to convey my appreciation to the reviewers and Associate Editor, whose suggestions extended and strengthened the article's content immensely, and to Ralph Mueller of The George Washington University for enhancing the clarity of its presentation.

19.
Eysenck proposed that psychopathy is at the extreme end of the Psychoticism (P) personality dimension (Eysenck & Eysenck, 1976). This study examined (i) whether psychopathy-relevant P items of the EPQ-R can form psychometrically valid facets that map onto the conceptualization of the two-, three-, or four-factor models of psychopathy using confirmatory factor analysis (N = 577) in a normal population; and (ii) whether those P-facets have criterion-related validity in associations with self-reported primary and secondary psychopathy, impulsivity (subsample N = 306), and measures of trait empathy and aggression (subsample N = 212). The four-factor model incorporating affective, interpersonal, impulsive, and antisocial facets of P was superior to the two-factor model; however, the three-factor conceptualization excluding the antisocial P-facet was the best fit. The facets show the predicted divergent associations with primary and secondary self-reported psychopathy and trait measures. Findings are discussed in light of Eysenck's P-psychopathy continuity hypothesis and the applicability of facet approaches to the prediction of psychopathic and antisocial tendencies.

20.