Similar Articles
20 similar articles found
1.
Structural equation models (SEMs) are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators for SEMs, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a sandwich-type standard error estimator for independent data to multivariate time series data. One required element of this estimator is the asymptotic covariance matrix of concurrent and lagged correlations among manifest variables, whose closed-form expression has not been presented in the literature. The performance of the adapted sandwich-type standard error estimator is evaluated using a simulation study and further illustrated using an empirical example.
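The abstract does not reproduce the estimator itself, so the following is only a minimal sketch of the generic sandwich form A^-1 B A^-1, with a Bartlett-weighted (Newey-West-style) "meat" standing in for the serial-dependence correction; all function and variable names here are hypothetical, not the authors' notation.

import numpy as np

def hac_meat(scores, max_lag):
    """Lag-weighted 'meat' B from per-time-point score contributions
    (an n x p array), using Bartlett kernel weights so that serial
    dependence between nearby time points is accounted for."""
    n = scores.shape[0]
    meat = scores.T @ scores / n
    for lag in range(1, max_lag + 1):
        w = 1.0 - lag / (max_lag + 1.0)            # Bartlett weight
        gamma = scores[lag:].T @ scores[:-lag] / n  # lagged cross-products
        meat += w * (gamma + gamma.T)
    return meat

def sandwich_se(bread, scores, max_lag=5):
    """Sandwich standard errors sqrt(diag(A^-1 B A^-1 / n))."""
    n = scores.shape[0]
    a_inv = np.linalg.inv(bread)                   # A: Hessian-type 'bread'
    cov = a_inv @ hac_meat(scores, max_lag) @ a_inv / n
    return np.sqrt(np.diag(cov))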

2.
In applications of covariance structure modeling in which an initial model does not fit sample data well, it has become common practice to modify that model to improve its fit. Because this process is data driven, it is inherently susceptible to capitalization on chance characteristics of the data, thus raising the question of whether model modifications generalize to other samples or to the population. This issue is discussed in detail and is explored empirically through sampling studies using 2 large sets of data. Results demonstrate that over repeated samples, model modifications may be very inconsistent and cross-validation results may behave erratically. These findings lead to skepticism about generalizability of models resulting from data-driven modifications of an initial model. The use of alternative a priori models is recommended as a preferred strategy.

3.
蒋浩 (Jiang Hao), 《心理科学进展》 (Advances in Psychological Science), 2018, 26(9): 1624-1631
Task switching is a widely used paradigm for studying executive function. Task switching typically comes with a switch cost: responses on switch trials are slower and more error-prone than on repeat trials. The switch cost may reflect reconfiguration of task sets (reconfiguration theory) or interference between tasks (interference theory). Compared with the task-cuing paradigm, the voluntary task switching paradigm has greater ecological validity; beyond the traditional switch-cost measure, it introduces new indices such as the task-choice proportion and the voluntary switch rate, and its findings tend to support reconfiguration theory. Recent studies, however, suggest that interference may also contribute to voluntary task switching. Future work should further refine the experimental paradigms in order to integrate the two theories.
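As a concrete reading of the switch-cost measure defined above — a hypothetical sketch with made-up numbers, not code from the review:

import numpy as np

rt_repeat = np.array([0.61, 0.58, 0.64])   # illustrative mean RTs (s), repeat trials
rt_switch = np.array([0.74, 0.70, 0.77])   # illustrative mean RTs (s), switch trials
switch_cost = rt_switch.mean() - rt_repeat.mean()   # positive under both theories
print(f"switch cost = {switch_cost * 1000:.0f} ms")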

4.
This study investigates an account of atypical error patterns within the framework of an interactive spreading activation model. Martin and Saffran (1992) described a patient, NC, whose error pattern was unusual for the occurrence of higher rates of form-related than meaning-related word substitutions in naming and the production of semantic errors in repetition. They proposed that NC's error pattern could be accounted for by a pathologically rapid decay of primed nodes in the semantic-lexical-phonological network that shifts the probabilities of error outcome in lexical retrieval. In the present study, Martin and Saffran's account was tested and supported in a series of simulations that reproduce essential features of NC's lexical error pattern in naming and repetition. Also described here are the results of a longitudinal study of NC's naming and repetition, which revealed a shift in relative lexical error rates toward a qualitatively normal pattern. This change in error pattern was simulated by assuming that recovery reflects resolution of the rapid decay rate toward normal levels. The patient data and computational studies are discussed in terms of their significance for the understanding of aphasic impairments and their implications for models of lexical retrieval.

5.
Two-choice classification RTs were collected for eight conditions designed to vary the number of comparisons necessary between one or two visual patterns in perception and one or two in short-term memory (STM). Overall RT data supported both a serial self-terminating and parallel self-terminating model with distributed search times, while rejecting corresponding exhaustive models. Precise predictions for the parallel model proved difficult to derive; however, the serial model predicted the fine detail of the data surprisingly well. RTs suggested that Ss searched through all stimuli in memory first and that stimuli in both memory and perception were searched from right to left. Comparison times between identical stimuli were estimated to be longer than comparison times between different stimuli. Error rates increased with the number of hypothesized comparisons; predicted error rates, based on independence of rates within stages, also increased but failed to predict the empirical error rates very well.

6.
Missing data are a pervasive problem in many psychological applications in the real world. In this article we study the impact of dropout on the operational characteristics of several approaches that can be easily implemented with commercially available software. These approaches include the covariance pattern model based on an unstructured covariance matrix (CPM-U) and on the true covariance matrix (CPM-T), multiple imputation-based generalized estimating equations (MI-GEE), and weighted generalized estimating equations (WGEE). Under the missing at random mechanism, the MI-GEE approach was always robust. The CPM-T and CPM-U methods were also able to control the error rates provided that certain minimum sample size requirements were met, whereas the WGEE was more prone to inflated error rates. In contrast, under the missing not at random mechanism, all evaluated approaches were generally invalid. Our results also indicate that the CPM methods were more powerful than the MI-GEE and WGEE methods and their superiority was often substantial. Furthermore, we note that little or no power was sacrificed by using the CPM-U method in place of CPM-T, although both methods have less power in situations where some participants have incomplete data. Some aspects of the CPM-U and MI-GEE methods are illustrated using real data from 2 previously published data sets. The first data set comes from a randomized study of AIDS patients with advanced immune suppression; the second, from a cohort of patients with schizotypal personality disorder enrolled in a prevention program for psychosis.

7.
For decisions between many alternatives, the benchmark result is Hick's Law: that response time increases log-linearly with the number of choice alternatives. Even when Hick's Law is observed for response times, divergent results have been observed for error rates: sometimes error rates increase with the number of choice alternatives, and sometimes they are constant. We provide evidence from two experiments that error rates are mostly independent of the number of choice alternatives, unless context effects induce participants to trade speed for accuracy across conditions. Error rate data have previously been used to discriminate between competing theoretical accounts of Hick's Law, and our results question the validity of those conclusions. We show that a previously dismissed optimal observer model might provide a parsimonious account of both response time and error rate data. The model suggests that people approximate Bayesian inference in multi-alternative choice, except for some perceptual limitations.
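Hick's Law as stated above is a log-linear relation, often written RT = a + b * log2(N + 1); a minimal least-squares fit of that form, with purely illustrative numbers, might look like this:

import numpy as np

n_alternatives = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # set sizes N
mean_rt = np.array([0.45, 0.58, 0.66, 0.71, 0.75])       # mean RTs (s), made up

# Design matrix for RT = a + b * log2(N + 1); solve by ordinary least squares.
X = np.column_stack([np.ones_like(n_alternatives), np.log2(n_alternatives + 1.0)])
(a, b), *_ = np.linalg.lstsq(X, mean_rt, rcond=None)
print(f"intercept a = {a:.3f} s, slope b = {b:.3f} s per bit")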

8.
It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between the latent variables and dichotomous observed variables, which may be responses to tests or questionnaires. It is shown that the multilevel model with measurement error in the observed predictor variables can be estimated in a Bayesian framework using Gibbs sampling. In this article, handling measurement error via the normal ogive model is compared with alternative approaches using the classical true score model. Examples using real data are given. This paper is part of the dissertation by Fox (2001) that won the 2002 Psychometric Society Dissertation Award.
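A minimal sketch of the normal ogive response function named above, P(y = 1 | theta) = Phi(a*theta - b), used here only to simulate dichotomous indicators of a latent predictor; the Gibbs sampler itself is beyond a few lines, and all names below are hypothetical.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def normal_ogive_prob(theta, a, b):
    """Two-parameter normal ogive: P(y = 1 | theta) = Phi(a * theta - b)."""
    return norm.cdf(a * theta - b)

theta = rng.standard_normal(200)          # latent predictor values
a, b = 1.2, 0.3                           # discrimination, difficulty
y = rng.random(200) < normal_ogive_prob(theta, a, b)   # dichotomous responses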

9.
A model for response latency in recognition memory is described which is a strength model incorporating the notion of multiple observations, with the additional assumptions that the variances of the strength distributions increase with set size and that the observer attempts to keep his error rate at a constant level over set size. It is shown that the model can, without recourse to particular parameter values, predict a near-linear RT set-size function and, since it is a TSD model in its decision aspects, can account for errors and hence error latencies in the recognition task. After the model is described, two experiments are performed which test the prediction that correct mean latency is generally shorter than incorrect mean latency. The prediction is confirmed and this feature is discussed in general, the model being compared with that of Juola, Fischler, Wood, and Atkinson (1971) in this respect. Some possible modifications to the latter model are also considered.

10.
In learning environments, understanding the longitudinal path of learning is one of the main goals. Cognitive diagnostic models (CDMs) for measurement, combined with a transition model for mastery, may be beneficial for providing fine-grained information about students' knowledge profiles over time. An efficient algorithm to estimate model parameters would augment the practicality of this combination. In this study, the Expectation–Maximization (EM) algorithm is presented for the estimation of student learning trajectories, using the GDINA (generalized deterministic inputs, noisy, "and" gate) model and some of its submodels for the measurement component and a first-order Markov model for learning transitions. A simulation study is conducted to investigate the efficiency of the algorithm in estimation accuracy of student and model parameters under several factors: sample size, number of attributes, number of time points in a test, and complexity of the measurement model. Attribute- and vector-level agreement rates as well as the root mean square error rates of the model parameters are investigated. In addition, the computer run times for converging are recorded. The results show that for a majority of the conditions, the accuracy rates of the parameters are quite promising in conjunction with relatively short computation times. Only for conditions with relatively low sample sizes and high numbers of attributes does the computation time increase while the parameter recovery rate declines. An application using spatial reasoning data is given. Based on the Bayesian information criterion (BIC), the model fit analysis shows that the DINA (deterministic inputs, noisy, "and" gate) model is preferable to the GDINA with these data.
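The abstract names its ingredients without writing them out; as a toy sketch for a single binary attribute (my simplification, not the authors' parameterization), the first-order Markov transition and the DINA response rule could look like this:

import numpy as np

rng = np.random.default_rng(2)

# Transition matrix for one binary attribute: rows = state at time t
# (0 = non-mastery, 1 = mastery), columns = state at time t + 1.
P = np.array([[0.7, 0.3],      # non-master: stays / learns
              [0.1, 0.9]])     # master: forgets / retains

def simulate_mastery(p0, P, T):
    """Simulate one student's mastery states over T time points."""
    state = rng.choice(2, p=p0)
    states = [state]
    for _ in range(T - 1):
        state = rng.choice(2, p=P[state])
        states.append(state)
    return states

def dina_correct_prob(mastered, slip=0.1, guess=0.2):
    """DINA: answer correctly with probability 1 - slip if all required
    attributes are mastered, otherwise with the guessing probability."""
    return 1.0 - slip if mastered else guess

states = simulate_mastery(np.array([0.6, 0.4]), P, T=4)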

11.
We examined the factor structure of the Schizotypal Personality Questionnaire (SPQ; Raine, 1991), using confirmatory factor analysis in 3 experiments, with an aim to better understand the construct of schizotypy. In Experiment 1 we tested the fit of 2-, 3-, and 4-factor models on SPQ data from a normal sample. The paranoid 4-factor model fit the data best but not adequately. Given the strong conceptual basis for the Raine 3-factor model, we attempted to improve its fit by making 3 modifications to the Raine model. These modifications produced a well-fitting model. In Experiment 2 the good fit of this modified 3-factor model to SPQ scores was replicated in an independent normal sample. In Experiment 3, the modified 3-factor model was successfully extended to include the 3 Chapman schizotypy scales. Together these 3 experiments indicate that the 3-factor model of the SPQ, albeit with some slight modifications, is a good model for schizotypy structure that is not restricted to 1 measure of schizotypal personality traits.

12.
Mexican American adolescents have higher rates of externalizing problems than their peers from other ethnic and racial groups. To begin the process of understanding factors related to externalizing problems in this population, this study used the social development model (SDM) and prospective data across the transition to junior high school from 750 diverse Mexican American families. In addition, the authors examined whether familism values provided a protective effect for relations within the model. Results showed that the SDM worked well for this sample. As expected, association with deviant peers was the primary predictor of externalizing behaviors. There was support for a protective effect in that adolescents with higher familism values had slower rates of increase in association with deviant peers from 5th to 7th grades than those with lower familism values. Future research needs to determine whether additional culturally appropriate modifications of the SDM would increase its usefulness for Mexican American adolescents.

13.
In sparse tables for categorical data, well-known goodness-of-fit statistics are not chi-square distributed. A consequence is that model selection becomes a problem. It has been suggested that a way out of this problem is the use of the parametric bootstrap. In this paper, the parametric bootstrap goodness-of-fit test is studied by means of an extensive simulation study; the Type I error rates and power of this test are studied under several conditions of sparseness. In the presence of sparseness, models were used that were likely to violate the regularity conditions. Besides bootstrapping the goodness-of-fit statistics usually used (full-information statistics), corrected versions of these statistics and a limited-information statistic are bootstrapped. These bootstrap tests were also compared to an asymptotic test using limited information. Results indicate that bootstrapping the usual statistics fails because these tests are too liberal, and that bootstrapping or asymptotically testing the limited-information statistic works better with respect to Type I error and outperforms the other statistics by far in terms of statistical power. The properties of all tests are illustrated using categorical Markov models.
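A generic recipe for the parametric bootstrap goodness-of-fit test studied above — a sketch only; the fit/simulate/statistic hooks are hypothetical placeholders for whatever model and statistic (e.g., G2 or X2 for a categorical Markov model) are in play:

def parametric_bootstrap_p(data, fit, simulate, statistic, B=500):
    """Parametric bootstrap p-value for a goodness-of-fit statistic.

    fit(data)              -> parameter estimates theta_hat
    simulate(theta)        -> one data set drawn from the fitted model
    statistic(data, theta) -> goodness-of-fit value (e.g., G^2)
    """
    theta_hat = fit(data)
    t_obs = statistic(data, theta_hat)
    exceed = 0
    for _ in range(B):
        boot = simulate(theta_hat)               # data under the fitted model
        if statistic(boot, fit(boot)) >= t_obs:  # refit, then compare
            exceed += 1
    return (exceed + 1) / (B + 1)                # add-one p-value estimate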

14.
Categorical moderators are often included in mixed-effects meta-analysis to explain heterogeneity in effect sizes. An assumption in tests of categorical moderator effects is that of a constant between-study variance across all levels of the moderator. Although it rarely receives serious thought, there can be statistical ramifications to upholding this assumption. We propose that researchers should instead default to assuming unequal between-study variances when analysing categorical moderators. To achieve this, we suggest using a mixed-effects location-scale model (MELSM) to allow group-specific estimates for the between-study variance. In two extensive simulation studies, we show that in terms of Type I error and statistical power, little is lost by using the MELSM for moderator tests, but there can be serious costs when an equal-variance mixed-effects model (MEM) is used. Most notably, in scenarios with balanced sample sizes or equal between-study variance, the Type I error and power rates are nearly identical between the MEM and the MELSM. On the other hand, with imbalanced sample sizes and unequal variances, the Type I error rate under the MEM can be grossly inflated or overly conservative, whereas the MELSM does comparatively well in controlling the Type I error across the majority of cases. A notable exception where the MELSM did not clearly outperform the MEM was in the case of few studies (e.g., 5). With respect to power, the MELSM had similar or higher power than the MEM in conditions where the latter produced non-inflated Type I error rates. Together, our results support the idea that assuming unequal between-study variances is preferred as a default strategy when testing categorical moderators.
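The MELSM proper gives the between-study variance its own (typically log-linear) submodel; as a crude stand-in that captures the abstract's point, one can at least estimate tau^2 separately within each moderator level rather than pooling. A sketch under that simplification, with hypothetical names and a DerSimonian-Laird estimator in place of the authors' likelihood-based machinery:

import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird (method-of-moments) between-study variance for
    observed effects y with known sampling variances v (numpy arrays)."""
    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_bar) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

def groupwise_tau2(y, v, groups):
    """Unequal-variance analogue: one tau^2 per moderator level."""
    return {g: dl_tau2(y[groups == g], v[groups == g])
            for g in np.unique(groups)}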

15.
Three Ss scanned matrices of letters for 40 sessions in a test of Neisser’s claim that feature tests in high-speed searches operate independently and in parallel. In the multiple-target condition (MTC), the matrix contained any one of four target letters, while in the four single-target conditions (STC), the S knew which particular target was embedded in the list. In contrast to previous studies, the error rates for individual target letters in the MTC were analyzed separately rather than being pooled. Two Ss made more errors on the hardest target when searched for in the MTC than in the STC. This difference would be masked by pooling error rates. The third S’s scanning rate in the MTC was not as rapid as in the STC. Neither a sequential nor a strictly parallel feature processing model can account for these data.

16.
The homogeneity-of-error-variance assumption of the traditional mediated moderation (meMO) model is frequently violated, and applied research lacks indices for measuring the size of meMO effects. For single-level data, this article draws on the ideas of two-level modeling to propose a two-level mediated moderation (2meMO) model that accommodates heteroscedastic errors, and provides effect sizes for the total moderation effect, the direct moderation effect, and the mediated moderation effect in meMO analysis. A Monte Carlo simulation study compares the performance of the meMO and 2meMO models in estimating parameters and effect sizes, and a real-data example illustrates the application of the 2meMO model together with the computation and interpretation of the effect sizes.
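The abstract does not reproduce the model equations; for orientation, a common formulation of mediated moderation (the notation below is an assumption, not necessarily the authors' exact specification) is

\begin{aligned}
M &= a_0 + a_1 X + a_2 W + a_3 XW + e_M,\\
Y &= b_0 + b_1 X + b_2 W + b_3 XW + b_4 M + e_Y,
\end{aligned}

where the mediated moderation effect is the product a_3 b_4, the direct moderation effect is b_3, and the total moderation effect is b_3 + a_3 b_4; the 2meMO model additionally relaxes the homogeneity assumption on the error variances of e_M and e_Y.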

17.
In behavioral science research, establishing the measurement invariance of an instrument is a prerequisite for comparing groups. Multi-group confirmatory factor analysis (multi-group CFA) is currently the most widely used method for testing measurement invariance, but its constraint of exact equality across groups is overly strict and often limits practical application. The Bayesian approximate measurement invariance approach exploits the desirable properties of Bayesian thinking to relax the strict cross-group constraints of traditional multi-group CFA, thereby avoiding the problems of the traditional method and offering high practical value. This article details the rationale and advantages of the Bayesian approximate measurement invariance approach and demonstrates, with a worked example, the concrete analysis procedure in the Mplus software.
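The mechanism alluded to above is usually implemented (in the BSEM tradition of Muthén and Asparouhov) by replacing exact cross-group equality constraints with zero-mean, small-variance priors on the cross-group differences — sketched here under that assumption, not as the article's exact specification:

\lambda_{jg} - \lambda_{jg'} \sim N(0,\ \sigma^2_{\text{small}}), \qquad
\nu_{jg} - \nu_{jg'} \sim N(0,\ \sigma^2_{\text{small}}),

with, for example, \sigma^2_{\text{small}} = 0.01, so loadings \lambda and intercepts \nu are allowed to differ slightly across groups g and g' instead of being forced to be exactly equal.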

18.
A multi-group factor model is suitable for data originating from different strata. However, it often requires a relatively large sample size to avoid numerical issues such as non-convergence and non-positive definite covariance matrices. An alternative is to pool the data from the different groups and fit a single-group factor model to the pooled data using maximum likelihood. In this paper, properties of pseudo-maximum likelihood (PML) estimators for pooled data are studied. The pooled data are assumed to be normally distributed from a single group. The resulting asymptotic efficiency of the PML estimators of factor loadings is compared with that of the multi-group maximum likelihood estimators. The effect of pooling is investigated through a two-group factor model. The variances of factor loadings for the pooled data are underestimated under the normal theory when error variances in the smaller group are larger. Underestimation is due to dependence between the pooled factors and pooled error terms. Small-sample properties of the PML estimators are also investigated using a Monte Carlo study.

19.
Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high-quality analyses of cross-national research questions.

20.
Adapting Edgington's [J. Psychol. 90 (1975) 57] randomly determined intervention start-point model, Levin and Wampold [Sch. Psychol. Quart. 14 (1999) 59] proposed a set of nonparametric randomization tests for analyzing the data from single-case designs. In the present study, the performance of Levin and Wampold's four basic tests (independent start-point general and comparative effectiveness, simultaneous start-point general and comparative effectiveness) was examined with respect to their Type I error rates and statistical power. Of Levin and Wampold's four tests, all except the independent start-point comparative effectiveness test maintained their empirical Type I error rates and had acceptable power at larger sample-size and effect-size combinations. The one-tailed comparative intervention effectiveness test for the independent start-point model was found to be too liberal, in that it did not maintain its Type I error rate. Although a two-tailed application of that test was found to be conservative at longer series lengths, it had acceptable power at larger sample-size and effect-size combinations. The results support the utility of a versatile new class of single-case designs that permit both within- and between-unit statistical assessments of intervention effectiveness.
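A minimal sketch of the Edgington-type logic behind these tests, assuming a simple phase-mean-shift statistic for a single case in an AB design; the function and argument names below are hypothetical, and the real procedures cover more designs and statistics:

import numpy as np

def randomization_test(series, actual_start, possible_starts):
    """Randomization test with a randomly determined intervention start
    point: the p-value is the share of a priori eligible start points
    whose statistic is at least as extreme as the observed one."""
    series = np.asarray(series, dtype=float)
    def mean_shift(k):                     # B-phase mean minus A-phase mean
        return series[k:].mean() - series[:k].mean()
    t_obs = mean_shift(actual_start)
    dist = np.array([mean_shift(k) for k in possible_starts])
    return np.mean(dist >= t_obs)          # one-tailed p-value

# Example: 12 observations; the intervention could have started at any of
# points 4-9 and actually started at point 6 (all numbers illustrative).
data = [3, 4, 3, 5, 4, 3, 7, 8, 7, 9, 8, 8]
p = randomization_test(data, actual_start=6, possible_starts=range(4, 10))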
