The orienting of attention has been found to be influenced by the previous cueing status in a spatial-cueing paradigm, but the explanation for this sequence effect remains uncertain. This study separated the involuntary and voluntary components of arrow cueing by manipulating the predicted target locations. For example, a left arrow cue could indicate that the target was more likely to appear at the upper location. Three trial types could thus repeat or switch across trials: cued (targets appeared in the direction of the arrow), predicted (targets appeared at the location predicted by the arrow), and unrelated (targets appeared at one of the other two locations, neither cued nor predicted). RTs on cued trials were significantly facilitated when the preceding trial was also cued; no comparable effect was observed for predicted trials. These results suggest that significant sequence effects arise only in the involuntary component of arrow cueing, supporting the feature-integration hypothesis for the sequence effect of symbolic cueing.
The maximum likelihood classification rule is a standard method for classifying examinee attribute profiles in cognitive diagnosis models (CDMs). Its asymptotic behaviour is well understood when the model is assumed to be correct, but it has not been explored for misspecified latent class models. This paper investigates the asymptotic behaviour of a two-stage maximum likelihood classifier under a misspecified CDM. The analysis is conducted in a general restricted latent class model framework that encompasses all types of CDMs. Sufficient conditions are proposed under which consistent classification can be obtained using a misspecified model, and the inconsistency of classification under certain misspecification scenarios is also discussed. Simulation studies and a real data application illustrate these results. Our findings provide guidelines as to when a misspecified simple model or a general model can be used to obtain good classification results.
In this study, the Perceived Perfectionism from God Scale (PPGS) was developed with Latter-day Saints (Mormons) across two samples. Sample 1 (N = 421) was used for EFA to select items for the Perceived Standards from God (5 items) and Perceived Discrepancy from God (5 items) subscales. Sample 2 (N = 420) was used for CFA and cross-validated the two-factor oblique model as well as a bifactor model. Perceived Standards from God scores had Cronbach's alphas ranging from .73 to .78, and Perceived Discrepancy from God scores had Cronbach's alphas ranging from .82 to .84. Standards from God scores were positively correlated with positive affect, whereas Discrepancy from God scores were positively correlated with negative affect, shame, and guilt. Moreover, the two PPGS subscale scores accounted for significant incremental variance in predicting associated variables over and above the corresponding personal perfectionism scores.