Related Articles
20 related articles found.
1.
Configural Frequency Analysis (CFA) is a method for searching for types and antitypes in cross-classifications of categorical variables. Types are patterns of variable categories that occur more often than expected by chance; antitypes are patterns of variable categories that occur less often than expected by chance. Four applications of CFA are reviewed: First Order CFA, Prediction CFA, Axial Symmetry CFA, and one model of Longitudinal CFA. Characteristics of CFA are discussed, focusing on the constraints posed and on applicability. CFA is also compared with other methods for the multivariate analysis of categorical data, in particular log-linear modelling. Computational issues are discussed.
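As a concrete illustration of the search for types and antitypes, here is a minimal first-order CFA sketch in Python. It assumes a two-variable cross-classification and uses per-cell binomial tests against independence-based expectations with a Bonferroni-adjusted alpha; the counts are invented, and the details (test choice, alpha protection) vary across CFA variants.

```python
import numpy as np
from scipy.stats import binom

# Observed cross-classification of two categorical variables (counts invented).
observed = np.array([[30, 5],
                     [10, 55]])
n = observed.sum()

# First-order CFA base model: independence of the variables, so the expected
# probability of each cell is the product of its marginal proportions.
row_p = observed.sum(axis=1) / n
col_p = observed.sum(axis=0) / n
expected_p = np.outer(row_p, col_p)

# Binomial test per cell: a type occurs more often, an antitype less often,
# than the base model predicts (alpha Bonferroni-adjusted over all cells).
alpha = 0.05 / observed.size
for (i, j), obs in np.ndenumerate(observed):
    p = expected_p[i, j]
    p_upper = binom.sf(obs - 1, n, p)   # P(X >= obs): evidence for a type
    p_lower = binom.cdf(obs, n, p)      # P(X <= obs): evidence for an antitype
    if p_upper < alpha:
        print(f"cell ({i},{j}): type      (p = {p_upper:.2e})")
    elif p_lower < alpha:
        print(f"cell ({i},{j}): antitype  (p = {p_lower:.2e})")
    else:
        print(f"cell ({i},{j}): neither")
```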

2.
This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting alpha, and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify those temporal patterns that stand out as more frequent (CFA types) or less frequent (CFA antitypes) than expected with reference to a base model. A base model that has been used frequently in CFA applications, prediction CFA, and a new base model, auto-association CFA, are discussed for analysis of cross-classifications of longitudinal data. The former base model takes the associations among predictors and among criteria into account. The latter takes the auto-associations among repeatedly observed variables into account. Application examples of each are given using data from a longitudinal study of domestic violence. It is demonstrated that CFA results are not redundant with results from log-linear modeling or multinomial regression and that, of these approaches, CFA shows particular utility when conducting person-oriented research.

3.
Configural frequency analysis (CFA) tests whether certain individual patterns in different variables are observed more frequently in a sample than expected by chance. In normative CFA, these patterns are derived from the subject's specific position in relation to sample characteristics such as the median or the mean. In ipsative CFA, patterns are defined within an individual reference system, e.g. relative to the subject's median of different variable scores. Normative CFA examines dimensionality of scales and is comparable to factor analysis in this respect. Ipsative CFA rather yields information about location of scores in different variables, in a similar way to ANOVA or Friedman testing. However, both normative and ipsative CFA may supply information not obtainable by means of the aforementioned methods. This is illustrated in a reanalysis of data in four scales of an anxiety inventory. © 1997 John Wiley & Sons, Ltd.
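The difference between the two reference systems can be made concrete with a small sketch (a generic illustration, not the paper's exact procedure): normative patterns code each score against the sample median of its variable, while ipsative patterns code each score against the subject's own median across variables.

```python
import numpy as np

def normative_patterns(X):
    """Each subject's pattern: above (1) or not above (0) the sample
    median, computed per variable (normative reference system)."""
    return (X > np.median(X, axis=0)).astype(int)

def ipsative_patterns(X):
    """Each subject's pattern: above (1) or not above (0) that subject's
    own median across variables (ipsative reference system)."""
    return (X > np.median(X, axis=1, keepdims=True)).astype(int)

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 4))      # 6 subjects, 4 scales (invented data)
print(normative_patterns(X))
print(ipsative_patterns(X))
```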

4.
After discussing diverse concepts of types or syndromes, the definition of types according to configural frequency analysis (CFA) is given. A type, in this theory, is assumed to be a configuration of categories belonging to different attributes. This configuration should occur with a probability higher than the conditional probability computed for the given univariate marginal frequencies under the null hypothesis of independence of the attributes. Types are identified by simultaneous conditional binomial tests and interpreted by means of an interaction structure analysis in a multivariate contingency table. Two further versions of CFA are explained: prediction CFA makes it possible to predict certain configurations from others, while c-sample CFA makes it possible to discriminate between populations by means of configurations. The procedures are illustrated by an example concerning the responses of patients to lumbar punctures.

5.
In cross-national studies, mean levels of self-reported phenomena are often not congruent with more objective criteria. One prominent explanation for such findings is that people make self-report judgements in relation to culture-specific standards (often called the reference group effect), thereby undermining the cross-cultural comparability of the judgements. We employed a simple method called anchoring vignettes in order to test whether people from 21 different countries have varying standards for Conscientiousness, a Big Five personality trait that has repeatedly shown unexpected nation-level relationships with external criteria. Participants rated their own Conscientiousness and that of 30 hypothetical persons portrayed in short vignettes. The latter type of ratings was expected to reveal individual differences in standards of Conscientiousness. The vignettes were rated relatively similarly in all countries, suggesting no substantial culture-related differences in standards for Conscientiousness. Controlling for the small differences in standards did not substantially change the rankings of countries on mean self-ratings or the predictive validities of these rankings for objective criteria. These findings are not consistent with mean self-rated Conscientiousness scores being influenced by culture-specific standards. The technique of anchoring vignettes can be used in various types of studies to assess the potentially confounding effects of reference levels. Copyright © 2011 John Wiley & Sons, Ltd.
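The anchoring-vignette logic can be sketched with the standard nonparametric recoding (a generic illustration of the technique, not necessarily the authors' exact analysis): a self-rating is re-expressed by its position relative to the respondent's own ordered vignette ratings, so that culture-specific standards cancel out of the comparison.

```python
def vignette_recode(self_rating, vignette_ratings):
    """Recode a self-rating relative to a respondent's own ordered
    vignette ratings (nonparametric anchoring-vignette correction).
    Returns an integer on a 1..2J+1 scale for J vignettes: ties with a
    vignette map to even values, gaps between vignettes to odd values."""
    z = sorted(vignette_ratings)
    score = 1
    for j, zj in enumerate(z, start=1):
        if self_rating == zj:
            return 2 * j          # exactly at vignette j
        if self_rating > zj:
            score = 2 * j + 1     # above vignette j
    return score

# A respondent who rates three vignettes 2, 3, 5 and themselves 4
# lands between the second and third vignette: recoded value 5.
print(vignette_recode(4, [2, 3, 5]))  # -> 5
```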

6.
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To mend such a gap between statistical theory and empirical work, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9–20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well.
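A conceptual sketch of the ridge GLS idea for a one-factor model follows. The polychoric correlations and their asymptotic covariance matrix Γ are taken as given (estimating them is itself involved), and the ridge constant, starting values, and the use of an identity Γ in the demo are all illustrative simplifications, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import minimize

def fit_one_factor_ridge_gls(r_mat, gamma, ridge=0.1):
    """GLS fit of a one-factor model to a (polychoric) correlation matrix,
    with a ridge-perturbed weight matrix W = Gamma + ridge * I.
    r_mat: p x p correlation matrix; gamma: asymptotic covariance of the
    p(p-1)/2 lower-triangle correlations (assumed available)."""
    p = r_mat.shape[0]
    il = np.tril_indices(p, k=-1)
    r = r_mat[il]                         # vectorized unique correlations
    w_inv = np.linalg.inv(gamma + ridge * np.eye(len(r)))

    def discrepancy(lam):
        rho = np.outer(lam, lam)[il]      # model-implied correlations
        d = r - rho
        return d @ w_inv @ d

    res = minimize(discrepancy, x0=np.full(p, 0.5), method="BFGS")
    return res.x                          # estimated loadings

# Tiny illustration with an invented 3-variable correlation matrix and a
# crude identity Gamma: this reduces to ridge-weighted least squares.
R = np.array([[1.00, 0.42, 0.48],
              [0.42, 1.00, 0.56],
              [0.48, 0.56, 1.00]])
print(fit_one_factor_ridge_gls(R, np.eye(3)))   # approx [0.6, 0.7, 0.8]
```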

7.
Multitrait-Multimethod (MTMM) matrices are often analyzed by means of confirmatory factor analysis (CFA). However, fitting MTMM models often leads to improper solutions or non-convergence. In an attempt to overcome these problems, various alternative CFA models have been proposed, but none of them has completely solved the problem of improper solutions. In the present paper, an approach is proposed in which improper solutions are ruled out altogether and convergence is guaranteed. The approach is based on constrained variants of components analysis (CA). Besides not giving improper solutions, these methods have the advantage that they provide component scores which can later be used to relate the components to external variables. The new methods are illustrated by means of simulated data as well as empirical data sets. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the first author. The authors are obliged to three anonymous reviewers and an associate editor for constructive suggestions on the first version of this paper.

8.
Quite a few studies in the behavioral sciences result in hierarchical time profile data, with a number of time profiles being measured for each person under study. Associated research questions often focus on individual differences in profile repertoire, that is, differences between persons in the number and the nature of profile shapes that show up for each person. In this paper, we introduce a new method, called KSC-N, that parsimoniously captures such differences while neatly disentangling variability in shape and amplitude. KSC-N induces a few person clusters from the data and derives for each person cluster the types of profile shape that occur most for the persons in that cluster. An algorithm for fitting KSC-N is proposed and evaluated in a simulation study. Finally, the new method is applied to emotional intensity profile data.
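The shape/amplitude disentangling at the heart of KSC-N can be illustrated generically: rescale each time profile to unit norm so that amplitude drops out, then cluster the remaining shapes. The sketch below uses plain k-means and invented profiles; it is a stand-in for, not an implementation of, the KSC-N algorithm.

```python
import numpy as np

def cluster_shapes(profiles, k, n_iter=50):
    """Toy shape clustering: divide each time profile by its Euclidean
    norm (removing amplitude differences), then run plain k-means on
    the resulting shapes."""
    shapes = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
    centers = shapes[:: max(1, len(shapes) // k)][:k]   # spread-out init
    for _ in range(n_iter):
        dists = ((shapes[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centers = np.array([shapes[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

# Two shape families at very different amplitudes: after normalization,
# the sine profiles cluster together and the linear profiles together.
t = np.linspace(0, 1, 10)
profiles = np.array([np.sin(np.pi * t), 5 * np.sin(np.pi * t),
                     t, 4 * t])
print(cluster_shapes(profiles, k=2))   # e.g. [0 0 1 1]
```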

9.
Occasionally, people are called upon to estimate probabilities after an event has occurred. In hindsight, was this an outcome we could have expected? Could things easily have turned out differently? One strategy for performing post hoc probability judgements would be to mentally turn the clock back and reconstruct one's expectations before the event. But if asked about the probability of an alternative, counterfactual outcome, a simpler strategy is available, based on this outcome's perceived closeness to what actually happened. The article presents five studies exploring the relationship between counterfactual closeness and counterfactual probability. The first study indicates that post hoc probabilities typically refer to the counterfactual rather than the factual outcome. Studies 2-5 show that physical, temporal, or conceptual proximity play a decisive role for post hoc probability assessments of counterfactual events. When margins are narrow, the probabilities of, for instance, winning a match (when losing), and of losing (when actually winning) may even be rated higher than the corresponding probabilities of what really happened. Closeness is also more often referred to, and rated to be a better reason for believing there is a “good chance” of the counterfactual rather than of the factual result occurring. Finally, the closeness of the alternative outcome in success and failure stories is shown to be significantly correlated to its rated probability.

10.
Ordinal data occur frequently in the social sciences. When applying principal component analysis (PCA), however, those data are often treated as numeric, implying linear relationships between the variables at hand; alternatively, non-linear PCA is applied where the obtained quantifications are sometimes hard to interpret. Non-linear PCA for categorical data, also called optimal scoring/scaling, constructs new variables by assigning numerical values to categories such that the proportion of variance in those new variables that is explained by a predefined number of principal components (PCs) is maximized. We propose a penalized version of non-linear PCA for ordinal variables that is a smoothed intermediate between standard PCA on category labels and non-linear PCA as used so far. The new approach is by no means limited to monotonic effects and offers both better interpretability of the non-linear transformation of the category labels and better performance on validation data than unpenalized non-linear PCA and/or standard linear PCA. In particular, an application of penalized optimal scaling to ordinal data as given with the International Classification of Functioning, Disability and Health (ICF) is provided.
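A toy version of penalized optimal scaling might look as follows: category quantifications are chosen to maximize the variance captured by the first principal component, with a roughness penalty on each variable's quantifications so that large penalties push the transformations towards linearity. The objective, optimizer, and data are illustrative simplifications of the paper's method, not its actual algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_optimal_scaling(X_cat, n_cats, lam=1.0):
    """Toy penalized non-linear PCA for ordinal data.
    X_cat: (n, p) array of integer category codes 0..n_cats-1.
    Each variable's categories receive numeric quantifications; we
    maximize the leading eigenvalue of the correlation matrix of the
    quantified data, penalizing squared second differences of the
    quantifications (large lam -> essentially linear scoring)."""
    n, p = X_cat.shape

    def objective(q_flat):
        Q = q_flat.reshape(p, n_cats)
        Xq = np.column_stack([Q[j, X_cat[:, j]] for j in range(p)])
        if np.any(Xq.std(axis=0) < 1e-8):      # guard against degeneracy
            return 1e6
        top_eig = np.linalg.eigvalsh(np.corrcoef(Xq, rowvar=False))[-1]
        roughness = np.sum(np.diff(Q, n=2, axis=1) ** 2)
        return -top_eig + lam * roughness

    q0 = np.tile(np.arange(n_cats, dtype=float), (p, 1)).ravel()  # start linear
    res = minimize(objective, q0, method="Nelder-Mead")
    return res.x.reshape(p, n_cats)

# Invented ordinal data: 3 variables, 4 categories each.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(100, 3))
print(penalized_optimal_scaling(X, n_cats=4))
```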

11.
Research on compatibility of displays and controls has been a staple of basic and applied experimental psychology since the work by Paul Fitts and colleagues in the 1950s. Compatibility is often defined in terms of natural response tendencies, and many behavioral studies have been conducted examining various determinants of compatibility effects. Some compatibility phenomena are universal because of constant properties of the physical environments in which people live. Others, often called population stereotypes (Loveless, 1962), are specific to particular cultural groups due to experience with unique display-control relations. Determining which compatibility phenomena are universal and which are limited to certain populations is necessary for knowing how widely various compatibility principles can be expected to hold for performance. In this article we examine the universal and cultural aspects of display-control compatibility with an emphasis on implications for understanding human performance in general and for applying the knowledge to design of interfaces that will be maximally compatible with the characteristics of the intended users.

12.
Recently, a number of model selection heuristics (i.e. DIFFIT, CORCONDIA, the numerical convex hull based heuristic) have been proposed for choosing among Parafac and/or Tucker3 solutions of different complexity for a given three-way three-mode data set. Such heuristics are often validated by means of extensive simulation studies. However, these simulation studies are unrealistic in that it is assumed that the variance in real three-way data can be split into two parts: structural variance, due to a true underlying Parafac or Tucker3 model of low complexity, and random noise. In this paper, we start from the much more reasonable assumption that the variance in any real three-way data set is due to three different sources: (1) a strong Parafac or Tucker3 structure of low complexity, accounting for a considerable amount of variance, (2) a weak Tucker3 structure, capturing less prominent data aspects, and (3) random noise. As such, Parafac and Tucker3 simulation studies are run in which the data are generated by adding a weak Tucker3 structure to a strong Parafac or Tucker3 one and perturbing the resulting data with random noise. The design of these studies is based on the reanalysis of real data sets. In these studies, the performance of the numerical convex hull based model selection method is evaluated with respect to its capability of discriminating strong from weak underlying structures. The results show that in about two-thirds of the simulated cases, the hull heuristic yields a model of the same complexity as the strong underlying structure and thus succeeds in disentangling strong and weak underlying structures. In the vast majority of the remaining third, this heuristic selects a solution that combines the strong structure and (part of) the weak structure.
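The numerical convex hull heuristic referred to here can be sketched as a scree-ratio rule: among models ordered by complexity (and assumed below to already lie on the upper convex hull of the fit-versus-complexity plot), select the one where the slope of fit improvement drops most sharply. The numbers are invented.

```python
def chull_select(complexities, fits):
    """Numerical convex hull (CHull) model selection sketch.
    complexities: increasing model complexities; fits: corresponding fit
    values (higher = better), assumed to lie on the upper convex hull.
    Returns the index of the model with the largest ratio of the slope
    before it to the slope after it (the 'elbow')."""
    best_i, best_ratio = None, -1.0
    for i in range(1, len(fits) - 1):
        slope_before = (fits[i] - fits[i - 1]) / (complexities[i] - complexities[i - 1])
        slope_after = (fits[i + 1] - fits[i]) / (complexities[i + 1] - complexities[i])
        ratio = slope_before / slope_after if slope_after > 0 else float("inf")
        if ratio > best_ratio:
            best_i, best_ratio = i, ratio
    return best_i

# Fit flattens sharply after the third model -> CHull picks index 2.
print(chull_select([10, 20, 30, 40, 50], [0.50, 0.70, 0.82, 0.84, 0.85]))
```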

13.
Many studies show a developmental advantage for transitive sentences with familiar verbs over those with novel verbs. It might be that once familiar verbs become entrenched in particular constructions, they would be more difficult to understand (than would novel verbs) in non-prototypical constructions. We provide support for this hypothesis by investigating German children using a forced-choice pointing paradigm with reversed agent-patient roles. We tested active transitive verbs in Study 1. The 2-year-olds were better with familiar than novel verbs, while the 2½-year-olds pointed correctly for both. In Study 2, we tested passives: 2½-year-olds were significantly below chance for familiar verbs and at chance for novel verbs, supporting the hypothesis that the entrenchment of the familiar verbs in the active transitive voice was interfering with interpreting them in the passive voice construction. The 3½-year-olds were also at chance for novel verbs but above chance with familiar verbs. We interpret this as reflecting a lessening of the verb-in-construction entrenchment as the child develops knowledge that particular verbs can occur in a range of constructions. The 4½-year-olds were above chance for both familiar and novel verbs. We discuss our findings in terms of the relative entrenchment of lexical and syntactic information and the interference between them.

14.
Violations of utility are often attributed to people's differential reactions to risk versus certainty or uncertainty, or more generally to the way that people perceive outcomes and consequences. However, a core feature of utility is additivity, and violations may also occur because of averaging effects. Averaging is pervasive in intuitive riskless judgement throughout many domains, as shown with Anderson's Information Integration approach. The present study extends these findings to judgement under risk. Five- to 10-year-old children showed a disordinal violation of utility because they averaged the part worths of duplex gambles rather than add them, as adults do, and as normatively prescribed. Thus adults realized that two prizes are better than one, but children preferred a high chance to win one prize to the same gamble plus an additional small chance to win a second prize. This result suggests that an additive operator may not be a natural component of the intuitive psychological concept of expected value that emerges in childhood. The implications of a developmental perspective for the study of judgement and decision are discussed. Copyright © 2000 John Wiley & Sons, Ltd.
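The averaging-versus-adding contrast for duplex gambles is easy to make concrete with invented numbers: adding part worths makes the two-prize gamble better, while averaging them makes it worse, which is the disordinal reversal described above.

```python
# Duplex gamble: independent chances at two prizes (toy numbers).
# Gamble A: 0.8 chance of a $10 prize.
# Gamble B: the same, plus a 0.2 chance of a second $10 prize.
part_worth_A = [0.8 * 10]
part_worth_B = [0.8 * 10, 0.2 * 10]

# Normative (additive) expected value: two prizes beat one.
print(sum(part_worth_A), sum(part_worth_B))          # 8.0  10.0

# Averaging the part worths, as the children in the study did,
# reverses the preference: the small extra chance drags B down.
print(sum(part_worth_A) / 1, sum(part_worth_B) / 2)  # 8.0  5.0
```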

15.
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types—continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
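The P-spline building block of such a model can be sketched in isolation: a B-spline basis with a second-order difference penalty on the coefficients, fitted here by penalized least squares rather than the paper's Bayesian MCMC. Data, knot count, and smoothing parameter are illustrative; BSpline.design_matrix requires SciPy 1.8 or later.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """Penalized B-spline (P-spline) fit by penalized least squares:
    minimize ||y - B beta||^2 + lam * ||D2 beta||^2, where B is a
    B-spline design matrix and D2 takes second differences of the
    coefficients (an Eilers & Marx-style roughness penalty)."""
    # Knot vector with repeated boundary knots for the chosen degree.
    inner = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]
    B = BSpline.design_matrix(x, t, degree).toarray()
    D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-difference matrix
    beta = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
    return B @ beta

# Noisy nonlinear toy data: the penalized smooth recovers the curve.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
y_hat = pspline_fit(x, y)
```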

16.
Implicit learning is often assumed to be an effortless process. However, some artificial grammar learning and sequence learning studies using dual tasks seem to suggest that attention is essential for implicit learning to occur. This discrepancy probably results from the specific type of secondary task that is used. Different secondary tasks may engage attentional resources differently and therefore may bias performance on the primary task in different ways. Here, we used a random number generation (RNG) task, which may allow for a closer monitoring of a participant's engagement in a secondary task than the popular secondary task in sequence learning studies: tone counting (TC). In the first two experiments, we investigated the interference associated with performing RNG concurrently with a serial reaction time (SRT) task. In a third experiment, we compared the effects of RNG and TC. In all three experiments, we directly evaluated participants' knowledge of the sequence with a subsequent sequence generation task. Sequence learning was consistently observed in all experiments, but was impaired under dual-task conditions. Most importantly, our data suggest that RNG is more demanding and impairs learning to a greater extent than TC. Nevertheless, we failed to observe effects of the secondary task in subsequent sequence generation. Our studies indicate that RNG is a promising task to explore the involvement of attention in the SRT task.

17.
Subgroup analyses allow us to examine the influence of a categorical moderator on the effect size in meta-analysis. We conducted a simulation study using a dichotomous moderator, and compared the impact of pooled versus separate estimates of the residual between-studies variance on the statistical performance of the Q_B(P) and Q_B(S) tests for subgroup analyses assuming a mixed-effects model. Our results suggested that similar performance can be expected as long as there are at least 20 studies and these are approximately balanced across categories. Conversely, when subgroups were unbalanced, the practical consequences of having heterogeneous residual between-studies variances were more evident, with both tests leading to the wrong statistical conclusion more often than in the conditions with balanced subgroups. A pooled estimate should be preferred for most scenarios, unless the residual between-studies variances are clearly different and there are enough studies in each category to obtain precise separate estimates.
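The pooled-variance variant of the between-subgroups test can be sketched as follows, with τ² taken as given (in practice it is estimated, e.g. by a DerSimonian-Laird-type estimator, which is omitted here) and invented data.

```python
import numpy as np
from scipy.stats import chi2

def q_between(effects, variances, groups, tau2):
    """Mixed-effects subgroup test with a pooled residual between-studies
    variance tau2: weighted subgroup means are compared against the
    weighted grand mean; Q_B is referred to a chi2(#groups - 1)
    distribution under the null of no moderator effect."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / (np.asarray(variances, dtype=float) + tau2)
    g = np.asarray(groups)
    grand_mean = np.sum(w * y) / np.sum(w)
    q_b = 0.0
    for level in np.unique(g):
        m = g == level
        mean_g = np.sum(w[m] * y[m]) / np.sum(w[m])
        q_b += np.sum(w[m]) * (mean_g - grand_mean) ** 2
    df = len(np.unique(g)) - 1
    return q_b, chi2.sf(q_b, df)

# Toy example: 6 studies in 2 subgroups, pooled tau2 = 0.02.
effects = [0.30, 0.25, 0.35, 0.05, 0.10, 0.00]
variances = [0.010, 0.020, 0.015, 0.010, 0.020, 0.015]
print(q_between(effects, variances, [0, 0, 0, 1, 1, 1], tau2=0.02))
```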

18.
This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
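The same point, that the mechanics of CFA are a short, inspectable computation, can be made outside Excel. Here is a minimal sketch in Python (not the article's spreadsheet) fitting a one-factor confirmatory factor model by minimizing the standard ML discrepancy function; the covariance matrix is invented.

```python
import numpy as np
from scipy.optimize import minimize

def one_factor_cfa(S, n_obs):
    """Fit a one-factor CFA to a sample covariance matrix S by minimizing
    the ML discrepancy F = log|Sigma| + tr(S Sigma^-1) - log|S| - p.
    Parameters: p loadings and p residual variances (log-parameterized
    to keep them positive)."""
    p = S.shape[0]

    def F(theta):
        lam, psi = theta[:p], np.exp(theta[p:])
        Sigma = np.outer(lam, lam) + np.diag(psi)
        sign, logdet = np.linalg.slogdet(Sigma)
        if sign <= 0:
            return 1e6
        return (logdet + np.trace(S @ np.linalg.inv(Sigma))
                - np.linalg.slogdet(S)[1] - p)

    res = minimize(F, np.r_[np.full(p, 0.5), np.zeros(p)], method="BFGS")
    lam, psi = res.x[:p], np.exp(res.x[p:])
    chi_sq = (n_obs - 1) * res.fun        # likelihood-ratio test statistic
    return lam, psi, chi_sq

# Invented covariance matrix for 4 indicators of one factor.
S = np.array([[1.00, 0.45, 0.40, 0.35],
              [0.45, 1.00, 0.42, 0.38],
              [0.40, 0.42, 1.00, 0.36],
              [0.35, 0.38, 0.36, 1.00]])
print(one_factor_cfa(S, n_obs=300))
```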

19.
This study investigated how emotion changes within persons across different episodes of romantic relationship conflict. Presumably, changes in different types of emotion are linked to changes in the types of underlying adaptive concerns people have during conflict, which in turn are linked to changes in the types of emotion that one's partner is perceived to express. Over the span of 8 weeks, 105 college students in romantic relationships completed between 2 and 5 online assessments of a recent relationship conflict. Hierarchical linear modeling was used to distinguish within-person effects from between-person effects. Results confirmed expected differences between types of emotion and types of underlying concern, indicated that most effects occur at the within-person level, and identified mediating pathways.

20.
In personality and attitude measurement, the presence of acquiescent responding can have an impact on the whole process of item calibration and test scoring, and this can occur even when sensible procedures for controlling acquiescence are used. This paper considers a bidimensional (content plus acquiescence) factor-analytic model to be the correct model, and assesses the effects of fitting unidimensional models to theoretically unidimensional scales when acquiescence is in fact operating. The analysis considers two types of scales: non-balanced and fully balanced. The effects are analysed at both the calibration and the scoring stages, and are of two types: bias in the item/respondent parameter estimates and model/person misfit. The results obtained theoretically are checked and assessed by means of simulation. The results and predictions are then assessed in an empirical study based on two personality scales. The implications of the results for applied personality research are discussed.
