Similar Literature
20 similar documents found (search time: 46 ms)
1.
This article reviews the premises of configural frequency analysis (CFA), including methods of choosing significance tests and base models, as well as protecting alpha, and discusses why CFA is a useful approach when conducting longitudinal person-oriented research. CFA operates at the manifest variable level. Longitudinal CFA seeks to identify those temporal patterns that stand out as more frequent (CFA types) or less frequent (CFA antitypes) than expected with reference to a base model. A base model that has been used frequently in CFA applications, prediction CFA, and a new base model, auto-association CFA, are discussed for analysis of cross-classifications of longitudinal data. The former base model takes the associations among predictors and among criteria into account. The latter takes the auto-associations among repeatedly observed variables into account. Application examples of each are given using data from a longitudinal study of domestic violence. It is demonstrated that CFA results are not redundant with results from log-linear modeling or multinomial regression and that, of these approaches, CFA shows particular utility when conducting person-oriented research.

2.
Mediation is a process that links a predictor and a criterion via a mediator variable. Mediation can be full or partial. This well-established definition operates at the level of variables even if they are categorical. In this article, two new approaches to the analysis of mediation are proposed. Both of these approaches focus on the analysis of categorical variables. The first involves mediation analysis at the level of configurations instead of variables. Thus, mediation can be incorporated into the arsenal of methods of analysis for person-oriented research. Second, it is proposed that Configural Frequency Analysis (CFA) can be used for both exploration and confirmation of mediation relationships among categorical variables. The implications of using CFA are first that mediation hypotheses can be tested at the level of individual configurations instead of variables. Second, this approach leaves the door open for different types of mediation processes to exist within the same set. Using a data example, it is illustrated that aggregate-level analysis can overlook mediation processes that operate at the level of individual configurations.
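The variable-level definition of mediation that this abstract contrasts with the configural approach can be made concrete. The following Python fragment is an illustrative sketch, not part of the article: it simulates continuous data with hypothetical path coefficients (a = 0.5, b = 0.4, direct effect c' = 0.2) and recovers the indirect effect as the product of the a and b paths.

```python
import numpy as np

# simulated data; all path coefficients below are hypothetical
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # predictor
m = 0.5 * x + rng.normal(size=n)              # mediator (a = 0.5)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # criterion (b = 0.4, c' = 0.2)

def ols(X, z):
    """Least-squares coefficients of z on the columns of X (all variables centered)."""
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, z - z.mean(), rcond=None)
    return beta

a = ols(x[:, None], m)[0]                      # path: predictor -> mediator
b, c_direct = ols(np.column_stack([m, x]), y)  # mediator -> criterion, and direct effect
indirect = a * b                               # indirect effect; full mediation would imply c' = 0
```

With these hypothetical coefficients the indirect effect is about 0.5 × 0.4 = 0.2; the article's configural point is that such an aggregate estimate can mask mediation that holds only for specific configurations.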

3.
Recently, a question was raised as to whether the multidimensionality of some professional licensing exams is due to the administration of subtests measuring conceptually distinct skills or, alternatively, strategic preparation on the part of groups of examinees attempting to cope with the demands of multiple hurdle certification systems. This article illustrates a way to investigate this issue with optimal appropriateness measurement (OAM) methods and confirmatory factor analysis (CFA). Specifically, using the former paper-and-pencil American Institute of Certified Public Accountants (AICPA) Uniform Examination as an example, OAM methods were used to identify examinees that appeared unmotivated on 2 of the 4 AICPA exam subtests. Dimensionality was studied by using CFA to compare the fit of single- and 4-factor models before and after removing flagged examinees. The results indicated that the 4-factor model provided better fit than a unidimensional model even after removing nearly 30% of respondents, thus weakening the claim that multidimensionality could be attributed solely to strategic preparation.

4.
The overarching purpose of this article is to present a nonmathematical introduction to the application of confirmatory factor analysis (CFA) within the framework of structural equation modeling as it applies to psychological assessment instruments. In the interest of clarity and ease of understanding, I model exploratory factor analysis (EFA) structure in addition to first- and second-order CFA structures. All factor analytic structures are based on the same measuring instrument, the Beck Depression Inventory-II (BDI-II; Beck, Steer, & Brown, 1996). Following a "walk" through the general process of CFA modeling, I identify several common misconceptions and improper application practices with respect to both EFA and CFA and tender caveats with a view to preventing further proliferation of these pervasive practices.

5.
Confirmatory factor analysis (CFA) is widely used for examining hypothesized relations among ordinal variables (e.g., Likert-type items). A theoretically appropriate method fits the CFA model to polychoric correlations using either weighted least squares (WLS) or robust WLS. Importantly, this approach assumes that a continuous, normal latent process determines each observed variable. The extent to which violations of this assumption undermine CFA estimation is not well-known. In this article, the authors empirically study this issue using a computer simulation study. The results suggest that estimation of polychoric correlations is robust to modest violations of underlying normality. Further, WLS performed adequately only at the largest sample size but led to substantial estimation difficulties with smaller samples. Finally, robust WLS performed well across all conditions.
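The polychoric correlation that this approach builds on assumes a latent bivariate-normal process behind each pair of ordinal items. The fragment below is an illustrative sketch (not the article's simulation design): it dichotomizes simulated latent normal data and recovers the latent correlation by maximizing the likelihood of the resulting 2×2 table over a grid; all data-generating values are hypothetical.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(5)
n, rho_true = 2000, 0.5
# latent bivariate-normal process, dichotomized into observed binary items
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], size=n)
u = (latent[:, 0] > 0).astype(int)
v = (latent[:, 1] > 0).astype(int)

# thresholds estimated from the observed margins
t1, t2 = norm.ppf(1 - u.mean()), norm.ppf(1 - v.mean())
counts = np.array([[np.sum((u == i) & (v == j)) for j in (0, 1)] for i in (0, 1)])

def loglik(rho):
    """Log-likelihood of the 2x2 table under a latent bivariate normal with correlation rho."""
    p00 = multivariate_normal.cdf([t1, t2], mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    p = np.array([[p00, norm.cdf(t1) - p00],
                  [norm.cdf(t2) - p00, 1 - norm.cdf(t1) - norm.cdf(t2) + p00]])
    return float(np.sum(counts * np.log(p)))

grid = np.arange(-0.95, 0.96, 0.01)
rho_hat = grid[np.argmax([loglik(r) for r in grid])]
```

A grid search keeps the sketch transparent; production software maximizes this likelihood numerically and handles more than two categories.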

6.
The concept of Granger causality can be used to examine putative causal relations between two series of scores. Based on regression models, it is asked whether one series can be considered the cause for the second series. In this article, we propose extending the pool of methods available for testing hypotheses that are compatible with Granger causation by adopting a configural perspective. This perspective allows researchers to assume that effects exist for specific categories only or for specific sectors of the data space, but not for other categories or sectors. Configural Frequency Analysis (CFA) is proposed as the method of analysis from a configural perspective. CFA base models are derived for the exploratory analysis of Granger causation. These models are specified so that they parallel the regression models used for variable-oriented analysis of hypotheses of Granger causation. An example from the development of aggression in adolescence is used. The example shows that only one pattern of change in aggressive impulses over time Granger-causes change in physical aggression against peers.
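The variable-oriented regression logic that the configural base models parallel can be sketched as a nested-model F test: does adding a lag of x improve prediction of y beyond y's own lag? The Python fragment below is an illustrative sketch with simulated series and hypothetical coefficients, not the article's configural method.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):                          # y depends on lagged x, so x Granger-causes y
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

def rss(X, z):
    """Residual sum of squares of the least-squares fit of z on X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return float(((z - X @ beta) ** 2).sum())

ones = np.ones(T - 1)
z = y[1:]
restricted = np.column_stack([ones, y[:-1]])          # y_t ~ y_{t-1}
full = np.column_stack([ones, y[:-1], x[:-1]])        # y_t ~ y_{t-1} + x_{t-1}
rss_r, rss_f = rss(restricted, z), rss(full, z)
df = len(z) - full.shape[1]
F = (rss_r - rss_f) / (rss_f / df)             # one restriction; large F favors Granger causation
```

The configural alternative described above replaces this single whole-sample test with cell-wise tests, so an effect confined to one sector of the data space is not averaged away.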

7.
Configural frequency analysis (CFA) is a widely used method of explorative data analysis. It tries to detect patterns in the data that occur significantly more or significantly less often than expected by chance. Patterns which occur more often than expected by chance are called CFA types, while those which occur less often than expected by chance are called CFA antitypes. The patterns detected are used to generate knowledge about the mechanisms underlying the data. We investigate the ability of CFA to detect adequate types and antitypes in a number of simulation studies. The basic idea of these studies is to predefine sets of types and antitypes and a mechanism which uses them to create a simulated data set. This simulated data set is then analysed with CFA and the detected types and antitypes are compared to the predefined ones. The predefined types and antitypes together with the method to generate the data are called a data generation model. The results of the simulation studies show that CFA can be used in quite different research contexts to detect structural dependencies in observed data. In addition, we can learn from these simulation studies how much data is necessary to enable CFA to reconstruct the predefined types and antitypes with sufficient accuracy. For one of the data generation models investigated, implicitly underlying knowledge space theory, it was shown that zero‐order CFA can be used to reconstruct the predefined types (which can be interpreted in this context as knowledge states) with sufficient accuracy. Theoretical considerations show that first‐order CFA cannot be used for this data generation model. Thus, it is wrong to consider first‐order CFA, as is done in many publications, as the standard or even only method of CFA.
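The core test that such simulation studies exercise — comparing each configuration's observed frequency against a base model — can be sketched compactly. The fragment below is an illustrative first-order CFA (independence base model) on two simulated binary variables with Bonferroni alpha protection; the data-generating mechanism and cutoffs are hypothetical, not taken from the article.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
n = 1000
a = rng.integers(0, 2, size=n)
# b tends to agree with a, inflating the concordant configurations
b = np.where(rng.random(n) < 0.7, a, rng.integers(0, 2, size=n))

pa, pb = a.mean(), b.mean()
cells = [(i, j) for i in (0, 1) for j in (0, 1)]
results = {}
for i, j in cells:
    o = int(np.sum((a == i) & (b == j)))
    # first-order base model: independence of the two variables
    p = (pa if i else 1 - pa) * (pb if j else 1 - pb)
    e = n * p
    # two-sided binomial test, Bonferroni-adjusted over the 4 configurations
    p_two = min(1.0, 2 * min(binom.cdf(o, n, p), binom.sf(o - 1, n, p)))
    if o > e and p_two * len(cells) < 0.05:
        results[(i, j)] = "type"
    elif o < e and p_two * len(cells) < 0.05:
        results[(i, j)] = "antitype"
    else:
        results[(i, j)] = "neither"
```

With this mechanism the concordant cells should emerge as types and the discordant cells as antitypes; a zero-order base model, as discussed above, would instead test against uniform cell probabilities.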

8.
Using a confirmatory factor analytic (CFA) model as a paradigmatic basis for all comparisons, this article reviews and contrasts important features related to 3 of the most widely-used structural equation modeling (SEM) computer programs: AMOS 4.0 (Arbuckle, 1999), EQS 6 (Bentler, 2000), and LISREL 8 (Joreskog & Sorbom, 1996b). Comparisons focus on (a) key aspects of the programs that bear on the specification and testing of CFA models-preliminary analysis of data, and model specification, estimation, assessment, and misspecification; and (b) other important issues that include treatment of incomplete, nonnormally-distributed, or categorically-scaled data. It is expected that this comparative review will provide readers with at least a flavor of the approach taken by each program with respect to both the application of SEM within the framework of a CFA model, and the critically important issues, previously noted, related to data under study.

9.
After discussing diverse concepts of types or syndromes the definition of types, according to configural frequency analysis (CFA), is given. A type, in this theory, is assumed to be a configuration of categories belonging to different attributes. This configuration should occur with a probability which is higher than the conditional probability for given univariate marginal frequencies. The conditional probability is computed under the null hypothesis of independence of the attributes. Types are identified by simultaneous conditional binomial tests and interpreted by means of an interaction structure analysis in a multivariate contingency table. Two further versions of CFA are explained. By prediction CFA it is possible to predict certain configurations by other ones while by c-sample CFA it is possible to discriminate between populations by means of configurations. The procedures are illustrated by an example concerning the responses of patients to lumbar punctures.

10.

CFAs of multidimensional constructs often fail to meet standards of good measurement (e.g., goodness-of-fit, measurement invariance, and well-differentiated factors). Exploratory structural equation modeling (ESEM) represents a compromise between exploratory factor analysis's (EFA) flexibility and CFA/SEM's rigor and parsimony, but lacks parsimony (particularly in large models) and might confound constructs that need to be kept separate. In Set-ESEM, two or more a priori sets of constructs are modeled within a single model such that cross-loadings are permissible within the same set of factors (as in Full-ESEM) but are constrained to be zero for factors in different sets (as in CFA). The different sets can reflect the same set of constructs on multiple occasions, and/or different constructs measured within the same wave. Hence, Set-ESEM represents a middle ground between the flexibility of traditional ESEM (hereafter referred to as Full-ESEM) and the rigor and parsimony of CFA/SEM. Thus, the purposes of this article are to provide an overview tutorial on Set-ESEM, juxtapose it with Full-ESEM, and to illustrate its application with simulated data and diverse "real" data applications with accessible, heuristic explanations of best practice.

11.
Structural equation modeling: reviewing the basics and moving forward
This tutorial begins with an overview of structural equation modeling (SEM) that includes the purpose and goals of the statistical analysis as well as terminology unique to this technique. I will focus on confirmatory factor analysis (CFA), a special type of SEM. After a general introduction, CFA is differentiated from exploratory factor analysis (EFA), and the advantages of CFA techniques are discussed. Following a brief overview, the process of modeling will be discussed and illustrated with an example using data from a HIV risk behavior evaluation of homeless adults (Stein & Nyamathi, 2000). Techniques for analysis of nonnormally distributed data as well as strategies for model modification are shown. The empirical example examines the structure of drug and alcohol use problem scales. Although these scales are not specific personality constructs, the concepts illustrated in this article directly correspond to those found when analyzing personality scales and inventories. Computer program syntax and output for the empirical example from a popular SEM program (EQS 6.1; Bentler, 2001) are included.

12.
Creating single-subject (SS) graphs is challenging for many researchers and practitioners because it is a complex task with many steps. Although several authors have introduced guidelines for creating SS graphs, many users continue to experience frustration. The purpose of this article is to minimize these frustrations by providing a field-tested task analysis for creating SS graphs using Microsoft® Office Excel. Results from the field test are presented and the task analysis, which includes steps for creating a variety of SS graphs, is provided. The article includes various illustrations, a list of prerequisite skills, tips, and troubleshooting items.

13.
14.
Configural frequency analysis (CFA) tests whether certain individual patterns in different variables are observed more frequently in a sample than expected by chance. In normative CFA, these patterns are derived from the subject's specific position in relation to sample characteristics such as the median or the mean. In ipsative CFA, patterns are defined within an individual reference system, e.g. relative to the subject's median of different variable scores. Normative CFA examines dimensionality of scales and is comparable to factor analysis in this respect. Ipsative CFA rather yields information about location of scores in different variables, in a similar way to ANOVA or Friedman testing. However, both normative and ipsative CFA may supply information not obtainable by means of the aforementioned methods. This is illustrated in a reanalysis of data in four scales of an anxiety inventory. © 1997 John Wiley & Sons, Ltd.
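The two reference systems described above are easy to make concrete. In the sketch below (illustrative, not the article's reanalysis; the data are simulated), each score is dichotomized once against the sample median of its scale (normative) and once against the subject's own median across scales (ipsative).

```python
import numpy as np

rng = np.random.default_rng(3)
scores = rng.normal(size=(6, 4))   # 6 subjects x 4 scales, simulated

# normative pattern: above/below the sample median of each scale
scale_median = np.median(scores, axis=0)
normative = (scores > scale_median).astype(int)

# ipsative pattern: above/below the subject's own median across the scales
person_median = np.median(scores, axis=1, keepdims=True)
ipsative = (scores > person_median).astype(int)
```

Each row of `normative` or `ipsative` is one subject's configuration, which CFA would then test for type or antitype status against a base model.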

15.
The squared multiple correlation coefficient has been widely employed to assess the goodness-of-fit of linear regression models in many applications. Although there are numerous published sources that present inferential issues and computing algorithms for multinormal correlation models, the statistical procedure for testing substantive significance by specifying the nonzero-effect null hypothesis has received little attention. This article emphasizes the importance of determining whether the squared multiple correlation coefficient is small or large in comparison with some prescribed standard and develops corresponding Excel worksheets that facilitate the implementation of various aspects of the suggested significance tests. In view of the extensive accessibility of Microsoft Excel software and the ultimate convenience of general-purpose statistical packages, the associated computer routines for interval estimation, power calculation, and sample-size determination are also provided for completeness. The statistical methods and available programs of multiple correlation analysis described in this article purport to enhance pedagogical presentation in academic curricula and practical application in psychological research.
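The nonzero-effect null hypothesis described above (is R² larger than some prescribed standard, rather than larger than zero?) can be illustrated without the exact sampling distribution. The Python sketch below is not the article's worksheet method: it approximates the null distribution of R² under H0: ρ² = 0.10 by Monte Carlo simulation, with all sample sizes and effect sizes hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 60, 3
rho0_sq = 0.10                      # prescribed standard under the nonzero-effect null

def r_squared(X, y):
    """Sample R^2 from an OLS fit with intercept."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1.0 - resid.var() / y.var()

def simulate_r2(true_rho_sq):
    """One sample R^2 from a multinormal model with the given population rho^2."""
    X = rng.normal(size=(n, p))
    b = np.sqrt(true_rho_sq / (1 - true_rho_sq) / p)   # equal slopes giving that rho^2
    y = X @ np.full(p, b) + rng.normal(size=n)
    return r_squared(X, y)

# "observed" study with a genuinely large effect (population rho^2 = 0.40)
r2_obs = simulate_r2(0.40)
# Monte Carlo null distribution under H0: rho^2 = rho0_sq
null = np.array([simulate_r2(rho0_sq) for _ in range(2000)])
p_value = float(np.mean(null >= r2_obs))
```

The article instead uses the exact distribution theory for multinormal correlation models; the simulation is only meant to show what "testing against a prescribed standard" means operationally.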

16.
Confirmatory factor analysis (CFA) is often used to verify measurement models derived from classical test theory: the parallel, tau-equivalent, and congeneric test models. In this application, CFA is traditionally applied to the observed covariance or correlation matrix, ignoring the observed mean structure. But CFA is easily extended to allow nonzero observed and latent means. The use of CFA with nonzero latent means in testing six measurement models derived from classical test theory is discussed. Three of these models have not been addressed previously in the context of CFA. The implications of the six models for observed mean and covariance structures are fully described. Three examples of the use of CFA in testing these models are presented. Some advantages and limitations in using CFA with nonzero latent means to verify classical measurement models are discussed.

17.
In recent years, researchers and practitioners in the behavioral sciences have profited from a growing literature on delay discounting. The purpose of this article is to provide readers with a brief tutorial on how to use Microsoft Office Excel 2010 and Excel for Mac 2011 to analyze discounting data to yield parameters for both the hyperbolic discounting model and area under the curve. This tutorial is intended to encourage the quantitative analysis of behavior in both research and applied settings by readers with relatively little formal training in nonlinear regression.
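The two quantities the tutorial targets — the hyperbolic discounting parameter k and area under the curve — can equally be computed outside Excel. The following Python sketch is illustrative only (the delays and indifference points are hypothetical, noiseless data): it fits the hyperbolic model V = A/(1 + kD) with amount normalized to A = 1 and computes the normalized AUC by the trapezoidal rule.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical delays (days) and indifference points (fraction of the delayed amount)
delays = np.array([0.0, 7.0, 30.0, 90.0, 180.0, 365.0])
values = 1.0 / (1.0 + 0.02 * delays)           # generated from k = 0.02 for illustration

def hyperbolic(D, k):
    """Hyperbolic discounting model with amount normalized to A = 1."""
    return 1.0 / (1.0 + k * D)

popt, _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
k_hat = popt[0]

# area under the discounting curve on normalized delays (near 0 = steep discounting)
xn = delays / delays.max()
auc = float(np.sum((xn[1:] - xn[:-1]) * (values[1:] + values[:-1]) / 2))
```

AUC is model-free, so it complements k when the hyperbolic form fits poorly.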

18.
Over 10 years have passed since the publication of Carr and Burkholder's (1998) technical article on how to construct single‐subject graphs using Microsoft Excel. Over the course of the past decade, the Excel program has undergone a series of revisions that make the Carr and Burkholder paper somewhat difficult to follow with newer versions. The present article provides task analyses for constructing various types of commonly used single‐subject design graphs in Microsoft Excel 2007. The task analyses were evaluated using a between‐subjects design that compared the graphing skills of 22 behavior‐analytic graduate students using Excel 2007 and either the Carr and Burkholder or newly developed task analyses. Results indicate that the new task analyses yielded more accurate and faster graph construction than the Carr and Burkholder instructions.

19.
Creativity Research Journal, 2013, 25(4): 333-346
This article describes the empirical use of the Creative Product Semantic Scale (CPSS) to evaluate 3 creative products by 128 student participants in 2 "folk high schools" in western Norway. First, the factor structure of the model was examined and tested through exploratory principal components factor analysis and confirmatory factor analysis (CFA). Then, multivariate analysis of variance was used to confirm that the CPSS could detect differences perceived in the levels of the factors Novelty, Resolution, and Elaboration and Synthesis in the 3 products. In CFA, as hypothesized, a solution with 3 factors provided a better fit to the data for each of the 3 creative products than an alternative 2-factor solution. The results of this study established the usefulness of the CPSS to detect differences perceived by the participants among the 3 chairs along all 3 dimensions.

20.
If measurement invariance does not hold over 2 or more measurement occasions, differences in observed scores are not directly interpretable. Golembiewski, Billingsley, and Yeager (1976) identified 2 types of psychometric differences over time as beta change and gamma change. Gamma change is a fundamental change in thinking about the nature of a construct over time. Beta change can be described as respondents' change in calibration of the response scale over time. Recently, researchers have had considerable success establishing measurement invariance using confirmatory factor analytic (CFA) techniques. However, the use of item response theory (IRT) techniques for assessing item parameter drift can provide additional useful information regarding the psychometric equivalence of a measure over time that is not attainable with traditional CFA techniques. This article marries the terminology commonly used in CFA and IRT techniques and illustrates real advantages for identifying beta change over time with IRT methods rather than typical CFA methods, utilizing a longitudinal assessment of job satisfaction as an example.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号