Similar Literature
20 similar documents found.
1.
This article presents GazeAlyze, a software package, written as a MATLAB (MathWorks Inc., Natick, MA) toolbox, developed for the analysis of eye movement data. GazeAlyze was developed for the batch processing of multiple data files and was designed as a framework with extendable modules. GazeAlyze encompasses the main functions of the entire processing queue of eye movement data to static visual stimuli. This includes detecting and filtering artifacts, detecting events, generating regions of interest, generating spreadsheets for further statistical analysis, and providing methods for the visualization of results, such as path plots and fixation heat maps. All functions can be controlled through graphical user interfaces. GazeAlyze includes functions for correcting eye movement data for the displacement of the head relative to the camera after calibration in fixed head mounts. The preprocessing and event detection methods in GazeAlyze are based on the software ILAB 3.6.8 (Gitelman, Behav Res Methods Instrum Comput, 34(4), 605-612, 2002). GazeAlyze is distributed free of charge under the terms of the GNU General Public License and allows code modifications to be made so that the program's performance can be adjusted according to a user's scientific requirements.
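As an illustration of the kind of output described above, the following minimal Python sketch builds a fixation heat map from raw gaze samples by binning them into a 2-D histogram and smoothing with a Gaussian kernel. It is not GazeAlyze code (GazeAlyze is a MATLAB toolbox); the screen size, bin size, kernel width, and simulated gaze samples are assumptions made only for the sketch.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_heatmap(x, y, screen_w=1024, screen_h=768, bin_px=8, sigma_px=24):
        """Accumulate gaze samples into a 2-D histogram and smooth it with a Gaussian."""
        nx, ny = screen_w // bin_px, screen_h // bin_px
        hist, _, _ = np.histogram2d(x, y, bins=[nx, ny],
                                    range=[[0, screen_w], [0, screen_h]])
        # The kernel width is given in pixels and converted to histogram bins.
        return gaussian_filter(hist.T, sigma=sigma_px / bin_px)

    # Usage: simulated gaze samples clustered around two screen locations.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(300, 30, 2500), rng.normal(700, 30, 2500)])
    y = np.concatenate([rng.normal(400, 30, 2500), rng.normal(200, 30, 2500)])
    print(fixation_heatmap(x, y).shape)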

2.
Item response theory (IRT) models are the central tools in modern measurement and advanced psychometrics. We offer a MATLAB IRT modeling (IRTm) toolbox that is freely available and that follows an explicit design-matrix approach, giving the end user control and flexibility in building models that go beyond standard ones, such as the Rasch model (Rasch, 1960) and the two-parameter logistic model. As such, IRTm allows for a large variety of unidimensional IRT models for binary responses, the incorporation of additional person and item information, and deviations from common model assumptions. An exclusive key feature of the toolbox is the inclusion of copula IRT models to handle local item dependencies. Two appendixes for this report, containing example code and information on the general copula IRT in IRTm, may be downloaded from brm.psychonomic-journals.org/content/supplemental.
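For readers unfamiliar with the models named above, the sketch below evaluates the two-parameter logistic item response function in Python; the Rasch model is the special case with the discrimination fixed at 1. This is an illustration of the model family only, not IRTm code (IRTm is a MATLAB toolbox), and the item parameters are made-up examples.

    import numpy as np

    def two_pl(theta, a, b):
        """Two-parameter logistic IRT model: probability of a keyed response
        given ability theta, discrimination a, and difficulty b."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)
    print(two_pl(theta, a=1.0, b=0.0))   # Rasch-type item (discrimination fixed at 1)
    print(two_pl(theta, a=2.0, b=0.5))   # more discriminating, harder item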

3.
4.
5.
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters, and the adjusted Bayesian information criterion is used to select the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented.
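The distance association structure is specific to the paper, but its basic building block, clustering the row profiles of a contingency table with an EM algorithm for a latent class (multinomial mixture) model, can be sketched as follows. This simplified Python sketch fits unconstrained class-specific response probabilities rather than the paper's distance-based parameterization; the number of classes K, the fixed iteration count, and the random initialization are assumptions for the sketch.

    import numpy as np

    def latent_class_em(N, K, n_iter=200, seed=0):
        """EM for a K-class multinomial mixture over the rows of a contingency
        table N (rows = predictor profiles, columns = response categories)."""
        rng = np.random.default_rng(seed)
        n_rows, n_cols = N.shape
        pi = np.full(K, 1.0 / K)                      # class sizes
        theta = rng.dirichlet(np.ones(n_cols), K)     # class-specific response probabilities
        for _ in range(n_iter):
            # E-step: posterior class memberships for each row profile.
            log_r = np.log(pi) + N @ np.log(theta).T
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update class sizes and response probabilities.
            pi = r.mean(axis=0)
            theta = (r.T @ N) + 1e-9
            theta /= theta.sum(axis=1, keepdims=True)
        return pi, theta, r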

6.
A reparameterization of a latent class model is presented to simultaneously classify and scale nominal and ordered categorical choice data. Latent class-specific probabilities are constrained to be equal to the preference probabilities from a probabilistic ideal-point or vector model that yields a graphical, multidimensional representation of the classification results. In addition, background variables can be incorporated as an aid to interpreting the latent class-specific response probabilities. The analyses of synthetic and real data sets illustrate the proposed method. The authors thank Yoshio Takane, the editor, and the referees for their valuable suggestions. Authors are listed in reverse alphabetical order.

7.
8.
Mediation analysis and categorical variables: The final frontier
Many scholars are interested in understanding the process by which an independent variable affects a dependent variable, perhaps in part directly and perhaps in part indirectly, through the activation of a mediator. Researchers are adept at testing for mediation when all the variables are continuous, but a definitive answer has been lacking as to how to analyze the data when the mediator or the dependent variable is categorical. This paper describes the problems that arise as well as the potential solutions. In the end, a solution is recommended that is both optimal in its statistical qualities and practical and easily implemented: compute zMediation.
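A minimal Python sketch of the zMediation idea as it is commonly presented: standardize the two path coefficients (X to M, and M to Y controlling for X, each estimated with whatever regression model suits the variable types) and combine them into a single test statistic. The exact formula should be checked against Iacobucci (2012); the estimates in the usage line are hypothetical numbers, not results from the paper.

    from math import sqrt
    from scipy.stats import norm

    def z_mediation(a, se_a, b, se_b):
        """Combine the standardized path coefficients z_a = a/se_a (X -> M) and
        z_b = b/se_b (M -> Y controlling for X). Each path may come from OLS or
        logistic regression, which is what makes the approach usable with
        categorical mediators or outcomes."""
        z_a, z_b = a / se_a, b / se_b
        z_med = (z_a * z_b) / sqrt(z_a**2 + z_b**2 + 1.0)
        p_value = 2.0 * (1.0 - norm.cdf(abs(z_med)))   # two-sided test against N(0, 1)
        return z_med, p_value

    print(z_mediation(a=0.80, se_a=0.20, b=0.55, se_b=0.15))   # hypothetical estimates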

9.
Iacobucci (2012) provides a conceptually appealing, readily implemented measure to assess mediation for a far wider range of data type combinations than traditional OLS-based analyses permit. Here, we consider potential applications and extensions along several lines, particularly in terms of random utility models, simulation-based estimation, and potential nonlinearities, as well as some methodological and cultural impediments.

10.
The cognitive neurosciences combine behavioral experiments with the acquisition of physiological data from different modalities, such as electroencephalography, magnetoencephalography, transcranial magnetic stimulation, and functional magnetic resonance imaging, all of which require excellent timing. A simple framework is proposed in which uni- and multimodal experiments can be conducted with minimal adjustments when one switches between modalities. The framework allows the beginner to quickly become productive and the expert to be flexible and not constrained by the tool, by building on existing software such as MATLAB and the Psychophysics Toolbox, which already serve a large community. The framework allows running standard experiments but also supports and facilitates exciting new possibilities for real-time neuroimaging and state-dependent stimulation.

11.
Methods are proposed for the treatment of item non-response in attitudinal scales and in large-scale assessments under the pairwise likelihood (PL) estimation framework and under a missing at random (MAR) mechanism. Under a full information likelihood estimation framework and MAR, ignoring the missing data mechanism does not lead to biased estimates. However, this is not the case for pseudo-likelihood approaches such as the PL. We develop and study the performance of three strategies for incorporating missing values into confirmatory factor analysis under the PL framework: the complete-pairs (CP), the available-cases (AC), and the doubly robust (DR) approaches. The CP and AC approaches require only a model for the observed data, and standard errors are easy to compute. Doubly robust versions of PL estimation require a predictive model for the missing responses given the observed ones and are computationally more demanding than the AC and CP. A simulation study is used to compare the proposed methods. The proposed methods are employed to analyze the UK data on numeracy and literacy collected as part of the OECD Survey of Adult Skills.
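To make the complete-pairs idea concrete, the following simplified Python sketch evaluates a pairwise log-likelihood for a one-factor model with continuous indicators, summing bivariate normal log-densities over all variable pairs and, for each pair, using only the rows where both variables are observed. The paper itself works with categorical indicators and also develops the AC and DR variants, which are not shown; the loadings and residual variances passed in here would in practice be the quantities being estimated.

    import numpy as np
    from scipy.stats import multivariate_normal

    def pairwise_loglik_cp(Y, loadings, resid_var):
        """Complete-pairs pairwise log-likelihood for a one-factor model with
        continuous indicators; Y may contain np.nan for missing responses."""
        lam = np.asarray(loadings, dtype=float)
        sigma = np.outer(lam, lam) + np.diag(resid_var)   # model-implied covariance
        total = 0.0
        p = Y.shape[1]
        for i in range(p):
            for j in range(i + 1, p):
                rows = ~np.isnan(Y[:, i]) & ~np.isnan(Y[:, j])   # complete pairs only
                if rows.any():
                    sub = Y[rows][:, [i, j]]
                    cov = sigma[np.ix_([i, j], [i, j])]
                    total += multivariate_normal(mean=[0, 0], cov=cov).logpdf(sub).sum()
        return total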

12.
13.
14.
Parallel analysis has been well documented to be an effective and accurate method for determining the number of factors to retain in exploratory factor analysis. The O'Connor (2000) procedure for parallel analysis has many benefits and is widely applied, yet it has a few shortcomings in dealing with missing data and ordinal variables. To address these technical issues, we adapted and modified the O'Connor procedure to provide an alternative method that better approximates the ordinal data by factoring in the frequency distributions of the variables (e.g., the number of response categories and the frequency of each response category per variable). The theoretical and practical differences between the modified procedure and the O'Connor procedure are discussed. The SAS syntax for implementing this modified procedure is also provided.
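The paper's implementation is SAS syntax; as a language-agnostic illustration of the core idea (comparing observed eigenvalues against eigenvalues from random data that preserve each ordinal variable's category frequencies), a minimal Python sketch might look like the following. The number of simulations, the 95th-percentile retention criterion, and the use of Pearson correlations are assumptions for the sketch, not choices taken from the paper.

    import numpy as np

    def parallel_analysis_ordinal(X, n_sims=500, percentile=95, seed=0):
        """Parallel analysis that respects each ordinal variable's observed
        category frequencies: simulated data are drawn independently per
        variable from the empirical category distribution."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
        sim_eig = np.empty((n_sims, p))
        for s in range(n_sims):
            sim = np.column_stack([rng.choice(X[:, j], size=n, replace=True)
                                   for j in range(p)])
            sim_eig[s] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
        threshold = np.percentile(sim_eig, percentile, axis=0)
        # Retain the factors whose observed eigenvalues exceed the simulated threshold.
        return int(np.sum(obs_eig > threshold)), obs_eig, threshold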

15.
Relations are examined between latent trait and latent class models for item response data. Conditions are given for the two-latent-class and two-parameter normal ogive models to agree, and relations between their item parameters are presented. Generalizations are then made to continuous models with more than one latent trait and discrete models with more than two latent classes, and methods are presented for relating latent class models to factor models for dichotomized variables. Results are illustrated using data from the Law School Admission Test, previously analyzed by several authors.

16.
Statistical analyses investigating latent structure can be divided into those that estimate structural model parameters and those that detect the structural model type. The most basic distinction among structure types is between categorical (discrete) and dimensional (continuous) models. It is a common, and potentially misleading, practice to apply some method for estimating a latent structural model such as factor analysis without first verifying that the latent structure type assumed by that method applies to the data. The taxometric method was developed specifically to distinguish between dimensional and 2-class models. This study evaluated the taxometric method as a means of identifying categorical structures in general. We assessed the ability of the taxometric method to distinguish between dimensional (1-class) and categorical (2-5 classes) latent structures and to estimate the number of classes in categorical datasets. Based on 50,000 Monte Carlo datasets (10,000 per structure type), and using the comparison curve fit index averaged across 3 taxometric procedures (Mean Above Minus Below A Cut, Maximum Covariance, and Latent Mode Factor Analysis) as the criterion for latent structure, the taxometric method was found superior to finite mixture modeling for distinguishing between dimensional and categorical models. A multistep iterative process of applying taxometric procedures to the data often failed to identify the number of classes in the categorical datasets accurately, however. It is concluded that the taxometric method may be an effective approach to distinguishing between dimensional and categorical structure but that other latent modeling procedures may be more effective for specifying the model.
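Of the three taxometric procedures named above, Maximum Covariance (MAXCOV) is the easiest to sketch: cases are ordered on one "input" indicator, sliced into successive windows, and the covariance of two "output" indicators is computed within each window; a peaked curve is consistent with a latent taxon, a flat curve with a dimension. The Python sketch below is a bare-bones illustration (the number of windows and the simulated two-class data are assumptions), not the comparison curve fit index computation used in the study.

    import numpy as np

    def maxcov_curve(x, y, z, n_windows=10):
        """MAXCOV curve: order cases on the input indicator x, slice them into
        successive windows, and compute cov(y, z) within each window."""
        order = np.argsort(x)
        windows = np.array_split(order, n_windows)
        return np.array([np.cov(y[w], z[w])[0, 1] for w in windows])

    # Simulated two-class (taxonic) data: the within-window covariance of y and z
    # is inflated in windows that mix members of both classes.
    rng = np.random.default_rng(1)
    cls = rng.random(2000) < 0.5
    x, y, z = np.where(cls, 2.0, 0.0) + rng.normal(size=(3, 2000))
    print(np.round(maxcov_curve(x, y, z), 3))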

17.
This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments, as well as data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions of standard methods. VQone is distributed free of charge under the terms of the GNU General Public License and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

18.
Dual scaling is a set of related techniques for the analysis of a wide assortment of categorical data types, including contingency tables and multiple-choice, rank order, and paired comparison data. When applied to a contingency table, dual scaling also goes by the name "correspondence analysis," and when applied to multiple-choice data in which there are more than 2 items, "optimal scaling" and "multiple correspondence analysis." The aim of this article is to explain in nontechnical terms what dual scaling offers to an analysis of contingency table and multiple-choice data.
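In the contingency-table case mentioned above, correspondence analysis can be computed from the singular value decomposition of the table's standardized residuals. The Python sketch below is a minimal illustration of that computation (the toy table and the two retained dimensions are assumptions), not a substitute for a full dual scaling treatment of the other data types.

    import numpy as np

    def correspondence_analysis(table, n_dims=2):
        """Correspondence analysis of a two-way contingency table via the SVD of
        the standardized residuals; returns principal coordinates for the row
        and column categories and the principal inertias."""
        P = table / table.sum()
        r, c = P.sum(axis=1), P.sum(axis=0)
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
        U, sv, Vt = np.linalg.svd(S, full_matrices=False)
        row_coords = (U * sv) / np.sqrt(r)[:, None]
        col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
        return row_coords[:, :n_dims], col_coords[:, :n_dims], sv[:n_dims] ** 2

    # Toy 3x3 contingency table of counts:
    N = np.array([[30.0, 10, 5], [10, 40, 10], [5, 10, 30]])
    rows, cols, inertia = correspondence_analysis(N)
    print(np.round(rows, 3), np.round(inertia, 3))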

19.
Previous work on a general class of multidimensional latent variable models for analysing ordinal manifest variables is extended here to allow for direct covariate effects on the manifest ordinal variables and covariate effects on the latent variables. A full maximum likelihood estimation method is used to estimate all the model parameters simultaneously. Goodness-of-fit statistics and standard errors are discussed. Two examples from the 1996 British Social Attitudes Survey are used to illustrate the methodology.

20.
This paper proposes a method to assess the local influence of minor perturbations for a structural equation model with continuous and ordinal categorical variables. The key idea is to treat the latent variables as hypothetical missing data and then apply Cook's approach to the conditional expectation of the complete-data log-likelihood function in the corresponding EM algorithm for deriving the normal curvature and the conformal normal curvature. Building blocks for achieving the diagnostic measures are computed via observations generated by the Gibbs sampler. It is shown that the proposed methodology is relatively simple to implement, computationally efficient, and feasible for a wide variety of perturbation schemes. Two illustrative real examples are presented.
