Similar Documents
20 similar documents found.
1.
Over the past decade, Mokken scale analysis (MSA) has rapidly grown in popularity among researchers from many different research areas. This tutorial provides researchers with a set of techniques and a procedure for their application, such that the construction of scales that have superior measurement properties is further optimized, taking full advantage of the properties of MSA. First, we define the conceptual context of MSA, discuss the two item response theory (IRT) models that constitute the basis of MSA, and discuss how these models differ from other IRT models. Second, we discuss dos and don'ts for MSA; the don'ts include misunderstandings we have frequently encountered with researchers in our three decades of experience with real‐data MSA. Third, we discuss a methodology for MSA on real data that consist of a sample of persons who have provided scores on a set of items that, depending on the composition of the item set, constitute the basis for one or more scales, and we use the methodology to analyse an example real‐data set.
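The scalability coefficients at the heart of MSA are easy to illustrate. Below is a minimal numpy sketch, on simulated dichotomous data, of Loevinger's H for item pairs, items, and the total scale; it is illustrative only (for real analyses, dedicated software such as the R package mokken is the standard tool).

```python
"""Minimal sketch of Loevinger's H scalability coefficients for
dichotomous (0/1) items, the core statistic of Mokken scale analysis.
The data matrix is simulated from a single latent trait."""
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=(500, 1))
difficulty = np.linspace(-1.5, 1.5, 6)
X = (theta + rng.normal(size=(500, 6)) > difficulty).astype(int)

p = X.mean(axis=0)                            # item popularities
cov = np.cov(X, rowvar=False, ddof=0)         # observed covariances
# Maximum covariance attainable given the marginals (Guttman pattern):
cov_max = np.minimum.outer(p, p) - np.outer(p, p)

n_items = X.shape[1]
pairs = [(i, j) for i in range(n_items) for j in range(i + 1, n_items)]

H_ij = {(i, j): cov[i, j] / cov_max[i, j] for i, j in pairs}
H_i = np.array([
    sum(cov[i, j] for j in range(n_items) if j != i) /
    sum(cov_max[i, j] for j in range(n_items) if j != i)
    for i in range(n_items)
])
H = (sum(cov[i, j] for i, j in pairs) /
     sum(cov_max[i, j] for i, j in pairs))

print("item H:", np.round(H_i, 2))   # rule of thumb: all Hi >= .30
print("total H: %.2f" % H)           # .3-.4 weak, .4-.5 medium, >= .5 strong
```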

2.
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to identify outliers in MSA. This adaptation involves choices with respect to the algorithm's objective function, selection of items from samples without outliers, and scalability criteria to be used in the forward search algorithm. The application of the adapted forward search algorithm for MSA is demonstrated using real data. Recommendations are given for its use in practical scale analysis.
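To make the forward-search logic concrete, the sketch below runs a simple forward search on simulated data. The objective function used here (per-person Guttman error counts, computed once from the full sample) is our own simplification for illustration, not the authors' exact algorithm, which refits its objective at every step.

```python
"""Illustrative sketch of a forward search for person outliers in MSA.
Simplifying assumption: a fixed per-person Guttman-error objective."""
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(size=(300, 1))
X = (theta + rng.normal(size=(300, 8)) > np.linspace(-1, 1, 8)).astype(int)
X[:10] = rng.integers(0, 2, size=(10, 8))      # plant 10 random responders

def guttman_errors(X):
    """Per-person count of (easier item failed, harder item passed) pairs,
    with item difficulty order estimated from the full sample."""
    order = np.argsort(-X.mean(axis=0))         # easiest item first
    Xs = X[:, order]
    n = Xs.shape[1]
    return np.array([
        sum(1 for i in range(n) for j in range(i + 1, n)
            if Xs[person, i] == 0 and Xs[person, j] == 1)
        for person in range(Xs.shape[0])
    ])

errors = guttman_errors(X)
in_set = np.zeros(len(X), dtype=bool)
in_set[np.argsort(errors)[:50]] = True          # initial "clean" subset

trace = []
while not in_set.all():
    # Add the most conforming remaining person; monitor the objective.
    candidates = np.where(~in_set)[0]
    nxt = candidates[np.argmin(errors[candidates])]
    in_set[nxt] = True
    trace.append((in_set.sum(), errors[nxt]))

# A sharp upward jump near the end of the search flags the outliers.
for size, err in trace[-12:]:
    print(f"subset size {size}: entering person has {err} Guttman errors")
```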

3.
The hospital anxiety and depression scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken scale analysis and factor analysis and (b) the scale properties of the HADS. Mokken scale analysis and factor analysis suggested that three dimensions adequately capture the structure of the HADS. Of the three corresponding scales, two scales of five items each were found to be structurally sound and reliable. These scales covered the two key attributes of anxiety and (anhedonic) depression. The findings suggest that the HADS may be reduced to a 10-item questionnaire comprising two 5-item scales measuring anxiety and depressive symptoms.

4.
The Thought Suppression Inventory (TSI; Rassin, European Journal of Personality 17: 285-298, 2003) was designed to measure thought intrusion, thought suppression, and successful thought suppression. Given the importance of distinguishing between these three aspects of thought control, the aim of this study was to scrutinize the dimensionality of the TSI. In a sample of 333 Dutch senior citizens, we examined (1) the dimensionality of the TSI using various procedures, namely PAF, Mokken scale analysis (MSA), and CFA, and (2) the scale properties of the TSI. PAF favored a two-factor solution; however, MSA and CFA suggested that three dimensions most adequately capture the structure of the TSI. Although all scales obtained at least medium scalability coefficients, several items were identified that are psychometrically unsound and may benefit from rewording or replacement. The findings suggest that the TSI is a three-dimensional questionnaire, as originally proposed by Rassin (European Journal of Personality 17: 285-298, 2003), measuring thought intrusion, thought suppression, and successful thought suppression.

5.
The empirical support for linkage analysis is steadily increasing, but the question remains as to what method of linking is the most effective. We compared a more theory‐based, dimensional behavioural approach with a rather pragmatic, multivariate behavioural approach with regard to their accuracy in linking serial sexual assaults in a UK sample of serial sexual assaults (n = 90) and one‐off sexual assaults (n = 129). Their respective linkage accuracy was assessed by (1) using seven dimensions derived by non‐parametric Mokken scale analysis (MSA) as predictors in discriminant function analysis (DFA) and (2) using 46 crime scene characteristics simultaneously in a naive Bayesian classifier (NBC). The dimensional scales predicted 28.9% of the series correctly, whereas the NBC correctly identified 34.5% of the series. However, a subsequent inclusion of non‐serial offences in the target group decreased the number of correct links in the dimensional approach (MSA–DFA: 8.9%; NBC: 32.2%). Receiver operating characteristic analysis was used as a more objective comparison of the two methods under both conditions, confirming that each achieved good accuracies (AUCs = .74–.89), but the NBC performed significantly better than the dimensional approach. The consequences for the practical implementation in behavioural case linkage are discussed. Copyright © 2012 John Wiley & Sons, Ltd.
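The multivariate approach lends itself to a compact illustration. The following sketch trains a naive Bayesian classifier on simulated binary crime-scene characteristics and scores it with ROC analysis; the data, feature probabilities, and sample sizes are invented stand-ins for the paper's UK samples.

```python
"""Hedged sketch of linkage via a naive Bayesian classifier over binary
crime-scene characteristics, evaluated with ROC analysis."""
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_series, crimes_per_series, n_features = 30, 3, 46
# Each series has its own behavioural "profile" of feature probabilities.
profiles = rng.uniform(0.05, 0.95, size=(n_series, n_features))
y = np.repeat(np.arange(n_series), crimes_per_series)
X = (rng.uniform(size=(len(y), n_features)) < profiles[y]).astype(int)

clf = BernoulliNB(alpha=1.0)                      # Laplace smoothing
proba = cross_val_predict(clf, X, y, cv=3, method="predict_proba")

top1 = (proba.argmax(axis=1) == y).mean()         # correctly linked crimes
auc = roc_auc_score(y, proba, multi_class="ovr")  # one-vs-rest AUC
print(f"correctly linked: {top1:.1%}, mean OvR AUC: {auc:.2f}")
```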

6.
A nonparametric item response theory model—the Mokken scale analysis (a stochastic elaboration of the deterministic Guttman scale)—and a computer program that performs this analysis are described. Three procedures of scaling are distinguished: a search procedure, an evaluation of the whole set of items, and an extension of an existing scale. All procedures provide a coefficient of scalability for all items that meet the criteria of the Mokken model and an item coefficient of scalability for every item. Four different types of reliability coefficient are computed both for the entire set of items and for the scalable items. A robustness test of the resulting scale can be performed to assess whether the scale is invariant across different subgroups or samples. This robustness test serves as a goodness-of-fit test for the established scale. The program is written in FORTRAN 77. Two versions are available: an SPSS-X procedure program (which can be used with the SPSS-X mainframe package) and a stand-alone program suitable for both mainframe and microcomputers.
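The search procedure can be sketched as a greedy bottom-up item selection. The following is a simplified numpy re-creation (not the original FORTRAN 77 program): seed the scale with the strongest item pair, then extend it while every item's scalability stays above a lower bound c.

```python
"""Sketch of a Mokken-style search procedure: greedy scale construction
keeping every item's Hi above a conventional bound c = .30."""
import numpy as np

def scalability(X, items):
    """Total H and per-item Hi for the dichotomous items in `items`."""
    Xs = X[:, items]
    p = Xs.mean(axis=0)
    cov = np.cov(Xs, rowvar=False, ddof=0)
    cov_max = np.minimum.outer(p, p) - np.outer(p, p)
    off = ~np.eye(len(items), dtype=bool)
    H = cov[off].sum() / cov_max[off].sum()
    H_i = (cov * off).sum(axis=1) / (cov_max * off).sum(axis=1)
    return H, H_i

def search_scale(X, c=0.30):
    n = X.shape[1]
    # Seed with the item pair having the largest Hij.
    best, scale = -np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            H, _ = scalability(X, [i, j])
            if H > best:
                best, scale = H, [i, j]
    # Greedy extension: admit the candidate maximizing total H while
    # keeping every item's Hi above the bound c.
    while True:
        candidates = []
        for k in set(range(n)) - set(scale):
            H, H_i = scalability(X, scale + [k])
            if (H_i >= c).all():
                candidates.append((H, k))
        if not candidates:
            return scale
        scale.append(max(candidates)[1])

rng = np.random.default_rng(3)
theta = rng.normal(size=(400, 1))
X = (theta + rng.normal(size=(400, 10)) > np.linspace(-1, 1, 10)).astype(int)
X[:, 9] = rng.integers(0, 2, 400)            # one unscalable noise item
print("selected items:", sorted(search_scale(X)))   # noise item left out
```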

7.
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate normality. For nonlinear PCA, however, standard options for establishing stability are not provided. The authors use the nonparametric bootstrap procedure to assess the stability of nonlinear PCA results, applied to empirical data. They use confidence intervals for the variable transformations and confidence ellipses for the eigenvalues, the component loadings, and the person scores. They discuss the balanced version of the bootstrap, bias estimation, and Procrustes rotation. To provide a benchmark, the same bootstrap procedure is applied to linear PCA on the same data. On the basis of the results, the authors advise using at least 1,000 bootstrap samples, using Procrustes rotation on the bootstrap results, examining the bootstrap distributions along with the confidence regions, and merging categories with small marginal frequencies to reduce the variance of the bootstrap results.
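The benchmark part of the procedure, a nonparametric bootstrap of linear PCA loadings with Procrustes rotation toward the original solution, can be sketched as follows; nonlinear PCA itself is not re-implemented here, and the data are simulated.

```python
"""Minimal numpy sketch: bootstrap PCA loadings, Procrustes-rotate each
bootstrap solution to the sample solution, read off percentile CIs."""
import numpy as np

def pca_loadings(X, n_comp):
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    idx = np.argsort(eigval)[::-1][:n_comp]
    return eigvec[:, idx] * np.sqrt(eigval[idx])   # component loadings

def procrustes_rotate(A, target):
    """Orthogonal rotation of A that best matches target (least squares)."""
    U, _, Vt = np.linalg.svd(A.T @ target)
    return A @ (U @ Vt)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))   # correlated data
L0 = pca_loadings(X, n_comp=2)

B = 1000                                      # >= 1,000 samples, as advised
boot = np.empty((B,) + L0.shape)
for b in range(B):
    Xb = X[rng.integers(0, len(X), len(X))]   # resample persons w/ replacement
    boot[b] = procrustes_rotate(pca_loadings(Xb, 2), L0)

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CI, loading of variable 0 on component 1: [%.2f, %.2f]"
      % (lo[0, 0], hi[0, 0]))
```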

8.
The standard methods for decomposition and analysis of evoked potentials are bandpass filtering, identification of peak amplitudes and latencies, and principal component analysis (PCA). We discuss the limitations of these and other approaches and introduce wavelet packet analysis. Then we propose the "single-channel wavelet packet model," a new approach in which a unique decomposition is achieved using prior time-frequency information and differences in the responses of the components to changes in experimental conditions. Orthogonal sets of wavelet packets allow a parsimonious time-frequency representation of the components. The method allows energy in some wavelet packets to be shared among two or more components, so the components are not necessarily orthogonal. The single-channel wavelet packet model and PCA both require constraints to achieve a unique decomposition. In PCA, however, the constraints are defined by mathematical convenience and may be unrealistic. In the single-channel wavelet packet model, the constraints are based on prior scientific knowledge. We give an application of the method to auditory evoked potentials recorded from cats. The good frequency resolution of wavelet packets allows us to separate superimposed components in these data. Our present approach yields estimates of component waveforms and the effects of experimental conditions on the amplitude of the components. We discuss future extensions that will provide confidence intervals and p values, allow for latency changes, and represent multichannel data.
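The decomposition step is straightforward to demonstrate with PyWavelets. The sketch below computes a wavelet packet decomposition of a simulated single-channel signal with two superimposed components; it shows only the time-frequency decomposition, not the authors' full single-channel wavelet packet model, and the sampling rate and signal are our own stand-ins.

```python
"""Hedged illustration: wavelet packet decomposition of a simulated
single-channel evoked-potential-like signal using PyWavelets."""
import numpy as np
import pywt

fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.512, 1 / fs)
# Two superimposed "components" at different frequencies plus noise.
signal = (np.exp(-((t - 0.10) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)
          + np.exp(-((t - 0.25) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * t)
          + 0.2 * np.random.default_rng(5).normal(size=t.size))

wp = pywt.WaveletPacket(data=signal, wavelet="db4", mode="symmetric",
                        maxlevel=4)
# Energy per terminal node: frequency ordering separates the two
# superimposed components even though they overlap in time.
for node in wp.get_level(4, order="freq"):
    energy = np.sum(node.data ** 2)
    if energy > 1.0:                        # report only energetic bands
        print(f"node {node.path}: energy {energy:.1f}")
```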

9.
This research note responds to the question of whether a convenience sample of undergraduate students may be successfully utilized in concept development and in scale construction, and in what way the results are comparable to the findings of a representative national sample. The results of a Mokken analysis in both samples support the hypothesis that convenience samples have utility in concept development and in developing measures that can also be used in representative samples.

10.
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal variables and that it can handle and discover nonlinear relationships between variables. Also, nonlinear PCA can deal with variables at their appropriate measurement level; for example, it can treat Likert-type scales ordinally instead of numerically. Every observed value of a variable can be referred to as a category. While performing PCA, nonlinear PCA converts every category to a numeric value, in accordance with the variable's analysis level, using optimal quantification. The authors discuss how optimal quantification is carried out, what analysis levels are, which decisions have to be made when applying nonlinear PCA, and how the results can be interpreted. The strengths and limitations of the method are discussed. An example applying nonlinear PCA to empirical data using the program CATPCA (J. J. Meulman, W. J. Heiser, & SPSS, 2004) is provided.
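The alternating logic of optimal quantification can be caricatured in a few lines: alternate a low-rank reconstruction of the currently quantified data with a monotone requantification of each ordinal variable. The sketch below is a didactic toy under our own simplifying assumptions (isotonic regression as the monotone step), not the CATPCA algorithm.

```python
"""Toy sketch of the optimal-quantification idea behind nonlinear PCA:
alternate (a) a rank-1 PCA reconstruction with (b) monotone
requantification of each ordinal variable. Didactic simplification."""
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(6)
latent = rng.normal(size=(300, 1))
# Ordinal (Likert-type) items: a nonlinear response cut into 5 categories.
raw = np.tanh(latent + 0.3 * rng.normal(size=(300, 4)))
X_ord = np.digitize(raw, np.quantile(raw, [0.2, 0.4, 0.6, 0.8]))

def standardize(Z):
    return (Z - Z.mean(0)) / Z.std(0)

Q = standardize(X_ord.astype(float))          # initial quantification
for _ in range(50):
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    recon = U[:, :1] * s[0] @ Vt[:1]           # rank-1 PCA reconstruction
    for j in range(Q.shape[1]):
        # Monotone requantification: isotonic regression of the model
        # column on the ordinal categories of variable j.
        Q[:, j] = IsotonicRegression(increasing="auto").fit_transform(
            X_ord[:, j], recon[:, j])
    Q = standardize(Q)

# Category quantifications should be monotone in the category order.
for cat in np.unique(X_ord[:, 0]):
    print(f"variable 0, category {cat}: {Q[X_ord[:, 0] == cat, 0].mean():.2f}")
```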

11.
Token systems are widely used in clinical settings, necessitating the development of methods to evaluate the reinforcing value of these systems. In the current paper, we replicated the use of a multiple-schedule reinforcer assessment (MSA; Smaby, MacDonald, Ahearn, & Dube, 2007) to evaluate the components of a token economy system for 4 learners with autism. Token systems had reinforcing value similar to primary reinforcers for 2 of the 4 learners, but resulted in lower rates of responding than primary reinforcers for the other 2 learners. Differentiated responding across learners may warrant variation in clinical recommendations on the use of tokens. The results of this study support formal assessment of token system effectiveness, and the MSA procedure provides an efficient method by which to conduct such assessments.

12.
Parallel analysis (PA) is an often-recommended approach for assessment of the dimensionality of a variable set. PA is known in different variants, which may yield different dimensionality indications. In this article, the authors considered the most appropriate PA procedure to assess the number of common factors underlying ordered polytomously scored variables. They proposed minimum rank factor analysis (MRFA) as an extraction method, rather than the currently applied principal component analysis (PCA) and principal axes factoring. A simulation study, based on data with major and minor factors, showed that all procedures consistently point to the number of major common factors. A polychoric-based PA slightly outperformed a Pearson-based PA, but convergence problems may hamper its empirical application. In empirical practice, PA-MRFA with a 95% threshold based on polychoric correlations or, in case of nonconvergence, Pearson correlations with mean thresholds appears to be a good choice for identification of the number of common factors. PA-MRFA is a common-factor-based method and performed best in the simulation experiment. PA based on PCA with a 95% threshold is second best, as this method showed good performance in the empirically relevant conditions of the simulation experiment.
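For readers unfamiliar with PA, the PCA-based variant with a 95% threshold is simple to sketch. MRFA extraction and polychoric correlations are not re-implemented here; the data are simulated with two major factors.

```python
"""Minimal sketch of parallel analysis (PA-PCA variant, 95% threshold):
compare real eigenvalues with the 95th percentile of eigenvalues from
random data of the same size."""
import numpy as np

rng = np.random.default_rng(7)
# Simulated data: 2 major factors, 300 cases, 10 variables.
F = rng.normal(size=(300, 2))
X = F @ rng.normal(size=(2, 10)) + rng.normal(size=(300, 10))

real_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

n_rep = 500
rand_eig = np.empty((n_rep, X.shape[1]))
for r in range(n_rep):
    R = rng.normal(size=X.shape)            # same n and p, independent vars
    rand_eig[r] = np.sort(
        np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]

threshold = np.percentile(rand_eig, 95, axis=0)    # 95% threshold per root
n_factors = int(np.argmin(real_eig > threshold))   # first root below threshold
print("retained dimensions:", n_factors)           # expect 2
```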

13.
In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix independently and concurrently, thus destroying the entire correlational structure of the data. This strategy is considered appropriate for assessing the significance of the PCA solution as a whole, but is not suitable for assessing the significance of the contribution of single variables. Alternatively, we propose a strategy involving permutation of one variable at a time, while keeping the other variables fixed. We compare the two approaches in a simulation study, considering proportions of Type I and Type II error. We use two corrections for multiple testing: the Bonferroni correction and controlling the False Discovery Rate (FDR). To assess the significance of the variance accounted for by the variables, permuting one variable at a time, combined with FDR correction, yields the most favorable results. This optimal strategy is applied to an empirical data set, and results are compared with bootstrap confidence intervals.
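The recommended strategy translates directly into code: permute one variable at a time while keeping the others fixed, use that variable's variance accounted for by the first components as the test statistic, and apply a Benjamini-Hochberg FDR correction. A small simulated example (the statistic and permutation count are our own illustrative choices):

```python
"""Sketch: permutation test of single-variable contributions to PCA,
one variable permuted at a time, with Benjamini-Hochberg FDR control."""
import numpy as np

def vaf(X, j, n_comp=2):
    """Variance of variable j accounted for by the first n_comp components."""
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    idx = np.argsort(eigval)[::-1][:n_comp]
    loadings = eigvec[:, idx] * np.sqrt(eigval[idx])
    return np.sum(loadings[j] ** 2)

rng = np.random.default_rng(8)
F = rng.normal(size=(200, 2))
X = np.column_stack([F @ rng.normal(size=(2, 5)), rng.normal(size=(200, 2))])
n_vars, n_perm = X.shape[1], 499

p_values = np.empty(n_vars)
for j in range(n_vars):
    observed = vaf(X, j)
    count = 0
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # only variable j is permuted
        count += vaf(Xp, j) >= observed
    p_values[j] = (count + 1) / (n_perm + 1)

# Benjamini-Hochberg FDR at q = .05.
q = 0.05
order = np.argsort(p_values)
crit = q * np.arange(1, n_vars + 1) / n_vars
passed = p_values[order] <= crit
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
significant = np.zeros(n_vars, dtype=bool)
significant[order[:k]] = True
print("significant contributions:", significant)  # 2 noise vars not flagged
```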

14.
Many studies yield multivariate multiblock data, that is, multiple data blocks that all involve the same set of variables (e.g., the scores of different groups of subjects on the same set of variables). The question then arises whether the same processes underlie the different data blocks. To explore the structure of such multivariate multiblock data, component analysis can be very useful. Specifically, 2 approaches are often applied: principal component analysis (PCA) on each data block separately and different variants of simultaneous component analysis (SCA) on all data blocks simultaneously. The PCA approach yields a different loading matrix for each data block and is thus not useful for discovering structural similarities. The SCA approach may fail to yield insight into structural differences, since the obtained loading matrix is identical for all data blocks. We introduce a new generic modeling strategy, called clusterwise SCA, that comprises the separate PCA approach and SCA as special cases. The key idea behind clusterwise SCA is that the data blocks form a few clusters, where data blocks that belong to the same cluster are modeled with SCA and thus have the same structure, and different clusters have different underlying structures. In this article, we use the SCA variant that imposes equal average cross-products constraints (ECP). An algorithm for fitting clusterwise SCA-ECP solutions is proposed and evaluated in a simulation study. Finally, the usefulness of clusterwise SCA is illustrated by empirical examples from eating disorder research and social psychology.
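The key alternating idea, reassigning data blocks to clusters and refitting a shared component structure per cluster, can be sketched in a k-means-like loop. The toy below uses plain PCA loadings per cluster rather than the SCA-ECP constraints of the article, so it illustrates the logic only.

```python
"""Toy k-means-style sketch of clusterwise simultaneous component
analysis: blocks in the same cluster share one loading structure."""
import numpy as np

rng = np.random.default_rng(9)
n_blocks, n_vars, n_comp, n_clusters = 12, 8, 2, 2
true_V = [rng.normal(size=(n_comp, n_vars)) for _ in range(n_clusters)]
blocks = [rng.normal(size=(40, n_comp)) @ true_V[b % n_clusters]
          + 0.3 * rng.normal(size=(40, n_vars)) for b in range(n_blocks)]

def fit_loadings(stacked, n_comp):
    """Shared loadings for one cluster: leading right singular vectors."""
    _, _, Vt = np.linalg.svd(stacked - stacked.mean(0), full_matrices=False)
    return Vt[:n_comp]

def sse(block, V):
    """Residual sum of squares of a block after projecting on V's row space."""
    Z = block - block.mean(0)
    return np.sum((Z - Z @ V.T @ V) ** 2)

labels = rng.integers(0, n_clusters, n_blocks)        # random start
for _ in range(20):
    V = []
    for c in range(n_clusters):
        members = [b for b, l in zip(blocks, labels) if l == c]
        if not members:                                # revive empty clusters
            members = [blocks[rng.integers(n_blocks)]]
        V.append(fit_loadings(np.vstack(members), n_comp))
    new = np.array([np.argmin([sse(b, Vc) for Vc in V]) for b in blocks])
    if (new == labels).all():
        break
    labels = new

print("recovered partition:", labels)   # even vs. odd blocks, up to relabeling
```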

15.
The application of psychological measures often results in item response data that arguably are consistent with both unidimensional (a single common factor) and multidimensional latent structures (typically caused by parcels of items that tap similar content domains). As such, structural ambiguity leads to seemingly endless "confirmatory" factor analytic studies in which the research question is whether scale scores can be interpreted as reflecting variation on a single trait. An alternative to the more commonly observed unidimensional, correlated traits, or second-order representations of a measure's latent structure is a bifactor model. Bifactor structures, however, are not well understood in the personality assessment community and thus rarely are applied. To address this, herein we (a) describe issues that arise in conceptualizing and modeling multidimensionality, (b) describe exploratory (including Schmid-Leiman [Schmid & Leiman, 1957] and target bifactor rotations) and confirmatory bifactor modeling, (c) differentiate between bifactor and second-order models, and (d) suggest contexts where bifactor analysis is particularly valuable (e.g., for evaluating the plausibility of subscales, determining the extent to which scores reflect a single variable even when the data are multidimensional, and evaluating the feasibility of applying a unidimensional item response theory (IRT) measurement model). We emphasize that the determination of dimensionality is a related but distinct question from either determining the extent to which scores reflect a single individual difference variable or determining the effect of multidimensionality on IRT item parameter estimates. Indeed, we suggest that in many contexts, multidimensional data can yield interpretable scale scores and be appropriately fitted to unidimensional IRT models.
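The Schmid-Leiman transformation mentioned under (b) is a short matrix identity, worked through below on hypothetical loading matrices: general-factor loadings are L1 @ L2, and the residualized group-factor loadings are L1 scaled column-wise by sqrt(1 - L2^2). The numbers are invented for illustration.

```python
"""Worked numerical sketch of the Schmid-Leiman transformation: a
higher-order solution re-expressed as a bifactor-like structure."""
import numpy as np

# Hypothetical first-order pattern: 9 items, 3 group factors.
L1 = np.zeros((9, 3))
L1[0:3, 0] = [0.7, 0.6, 0.8]
L1[3:6, 1] = [0.5, 0.7, 0.6]
L1[6:9, 2] = [0.8, 0.6, 0.7]
# Hypothetical loadings of the 3 first-order factors on the 2nd-order factor.
L2 = np.array([0.8, 0.7, 0.6])

general = L1 @ L2                       # loadings on the general factor
group = L1 * np.sqrt(1 - L2 ** 2)       # residualized group-factor loadings

print("general:", np.round(general, 2))
print("group:\n", np.round(group, 2))
# ECV: share of common variance explained by the general factor.
ecv = np.sum(general ** 2) / (np.sum(general ** 2) + np.sum(group ** 2))
print(f"explained common variance (general): {ecv:.2f}")
```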

16.
Statistical inference (including interval estimation and model selection) is increasingly used in the analysis of behavioral data. As with many other fields, statistical approaches for these analyses traditionally use classical (i.e., frequentist) methods. Interpreting classical intervals and p‐values correctly can be burdensome and counterintuitive. By contrast, Bayesian methods treat data, parameters, and hypotheses as random quantities and use rules of conditional probability to produce direct probabilistic statements about models and parameters given observed study data. In this work, we reanalyze two data sets using Bayesian procedures. We precede the analyses with an overview of the Bayesian paradigm. The first study reanalyzes data from a recent study of controls, heavy smokers, and individuals with alcohol and/or cocaine substance use disorder, and focuses on Bayesian hypothesis testing for covariates and interval estimation for discounting rates among various substance use disorder profiles. The second example analyzes hypothetical environmental delay‐discounting data. This example focuses on using historical data to establish prior distributions for parameters while allowing subjective expert opinion to govern the prior distribution on model preference. We review the subjective nature of specifying Bayesian prior distributions but also review established methods to standardize the generation of priors and remove subjective influence while still taking advantage of the interpretive advantages of Bayesian analyses. We present the Bayesian approach as an alternative paradigm for statistical inference and discuss its strengths and weaknesses.
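The flavor of a direct probabilistic statement about a parameter is easy to convey with a grid approximation. The sketch below estimates a hyperbolic delay-discounting rate k from hypothetical indifference points; the data, prior, and noise level are invented for illustration and are far simpler than the paper's models.

```python
"""Minimal grid-approximation sketch of Bayesian estimation for a
hyperbolic delay-discounting rate k (Mazur's model V = A / (1 + kD))."""
import numpy as np

# Hypothetical indifference points: value of $100 at each delay (days).
delays = np.array([1, 7, 30, 90, 180, 365])
values = np.array([95, 83, 60, 42, 30, 19], dtype=float)

def hyperbolic(k, A=100.0):
    return A / (1 + k * delays)

k_grid = np.linspace(1e-4, 0.2, 2000)
sigma = 5.0                                  # assumed measurement noise (SD)

# Log-prior: weakly informative lognormal on k (a subjective choice).
log_prior = -0.5 * ((np.log(k_grid) - np.log(0.02)) / 1.5) ** 2 - np.log(k_grid)
# Gaussian log-likelihood of the observed indifference points.
log_lik = np.array([-0.5 * np.sum((values - hyperbolic(k)) ** 2) / sigma ** 2
                    for k in k_grid])
post = np.exp(log_prior + log_lik - (log_prior + log_lik).max())
post /= np.trapz(post, k_grid)               # normalize the posterior

mean_k = np.trapz(k_grid * post, k_grid)
cdf = np.cumsum(post) / np.sum(post)
ci = np.interp([0.025, 0.975], cdf, k_grid)  # 95% credible interval
print(f"posterior mean k = {mean_k:.3f}, 95% CrI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```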

17.
In this paper we discuss the use of a recent dimension reduction technique called Locally Linear Embedding, introduced by Roweis and Saul, for performing an exploratory latent structure analysis. The coordinate variables from the locally linear embedding describing the manifold on which the data reside serve as the latent variable scores. We propose the use of semiparametric penalized spline methods for reconstruction of the manifold equations that approximate the data space. We also discuss a cross-validation strategy that can guide in selecting an appropriate number of latent variables. Synthetic as well as real data sets are used to illustrate the proposed approach. A nonlinear latent structure representation of a data set also serves as a data visualization tool.
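Obtaining latent variable scores from a locally linear embedding is a one-call affair in scikit-learn, as the sketch below shows on a synthetic S-curve; the semiparametric spline reconstruction and the cross-validation strategy of the paper are not shown.

```python
"""Hedged sketch: latent variable scores via Locally Linear Embedding
(scikit-learn) on the classic synthetic S-curve manifold."""
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

X, color = make_s_curve(n_samples=1000, random_state=0)  # 3-D manifold data

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
scores = lle.fit_transform(X)               # coordinates = latent scores

print("embedding shape:", scores.shape)
print("reconstruction error:", lle.reconstruction_error_)
```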

18.
Gait data are typically collected in multivariate form, so some multivariate analysis is often used to understand interrelationships between observed data. Principal Component Analysis (PCA), a data reduction technique for correlated multivariate data, has been widely applied by gait analysts to investigate patterns of association in gait waveform data (e.g., interrelationships between joint angle waveforms from different subjects and/or joints). Despite its widespread use in gait analysis, PCA is for two-mode data, whereas gait data are often collected in higher-mode form. In this paper, we present the benefits of analyzing gait data via Parallel Factor Analysis (Parafac), which is a component analysis model designed for three- or higher-mode data. Using three-mode joint angle waveform data (subjects×time×joints), we demonstrate Parafac's ability to (a) determine interpretable components revealing the primary interrelationships between lower-limb joints in healthy gait and (b) identify interpretable components revealing the fundamental differences between normal and perturbed subjects' gait patterns across multiple joints. Our results offer evidence of the complex interconnections that exist between lower-limb joints and limb segments in both normal and abnormal gaits, confirming the need for the simultaneous analysis of multi-joint gait waveform data (especially when studying perturbed gait patterns).
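A Parafac decomposition of three-mode (subjects × time × joints) data can be sketched with TensorLy; the simulated tensor below stands in for real gait waveforms, and the rank and factor structure are our own illustrative choices.

```python
"""Hedged sketch: Parafac (CP) decomposition of a simulated three-mode
subjects x time x joints tensor using TensorLy."""
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(10)
n_subj, n_time, n_joints, rank = 20, 101, 3, 2
t = np.linspace(0, 1, n_time)
# Two underlying waveform "components" shared across subjects and joints.
time_factors = np.stack([np.sin(2 * np.pi * t), np.cos(4 * np.pi * t)], axis=1)
subj = rng.normal(size=(n_subj, rank))
joint = rng.normal(size=(n_joints, rank))
data = np.einsum("ir,tr,jr->itj", subj, time_factors, joint)
data += 0.1 * rng.normal(size=data.shape)   # measurement noise

cp = parafac(tl.tensor(data), rank=rank, n_iter_max=200)
weights, (subj_f, time_f, joint_f) = cp     # one factor matrix per mode
print("factor shapes:", subj_f.shape, time_f.shape, joint_f.shape)
```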

19.
The current study reports validation results for the Psychopathic Personality Inventory (PPI) and its subscales, and for a newly developed PPI-Short Form (PPI-SF) in forensic and non-forensic populations. We also provide criterion reference scores for the PPI and the PPI-SF. In Study 1, we used PPI data from 1,065 participants and supplementary PCL-R data from a subsample of 91 forensic offenders. Mokken scale analysis was used to construct the PPI-SF. In Study 2, PPI-SF and PCL-R data were collected from 60 participants. The study yielded promising but preliminary support for the construct validity of the PPI and the PPI-SF. The PPI-SF is of interest for risk assessment because of its (a) strong relationship with the PCL-R total score and (b) subscales known for their predictive value for violence and criminal recidivism.

20.
Using a Chinese character recognition (literacy) test for students in compulsory education, this study investigated the dimensionality of character-recognition ability by combining exploratory structural equation modeling (ESEM) with two nonparametric item response theory methods, Mokken scale analysis and DETECT. The ESEM results showed that a unidimensional model of character recognition outperformed multidimensional models; the multidimensional solutions mainly reflected a difficulty dimension, namely the effect of character frequency. Mokken scale analysis indicated that both the grade 1-2 and grade 3-9 tests were better characterized as unidimensional scales. DETECT analysis showed that the D values of both tests were close to zero, indicating that character-recognition ability is unidimensional. Taken together, the three methods support the unidimensionality of character-recognition ability.
