Similar Documents
20 similar documents found (search time: 15 ms)
1.
The field of multidimensional scaling is dominated by models that lack inherent parameters. Correcting parameters have been introduced, e.g. INDSCAL, to increase power of prediction. Although a nonparametric model with correcting parameters may exhibit a very good fit to data, a parametric model is intrinsically superior. The general parametric model proposed here yields measures of both absolute and relative subjective differences (dissimilarity) in addition to similarity. It is basically unidimensional. Rules for combining values of attributes into a single multidimensional value may be applied either to the input or to the output of the model. One of the resulting functions is a generalization of the Eisler-Ekman similarity function. A special case of another function is identical to the Minkowski class of distance functions (including INDSCAL). The model is not limited to pairwise relations. It yields unitary measures for any number of objects.
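The Minkowski class of distance functions mentioned above takes a simple form. A minimal illustration in Python (not the article's proposed model, just the standard formula):

```python
import numpy as np

def minkowski(x, y, r):
    """Minkowski distance of order r between attribute vectors x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(np.abs(x - y) ** r) ** (1.0 / r)

# r = 1 gives the city-block metric, r = 2 the Euclidean metric.
d1 = minkowski([0, 0], [3, 4], 1)  # 7.0
d2 = minkowski([0, 0], [3, 4], 2)  # 5.0
```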

2.
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.
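MPTinR's core task, maximum-likelihood fitting of an MPT model to categorical counts, can be sketched in Python (MPTinR itself is an R package; the one-high-threshold model and the counts below are illustrative assumptions, not from the article):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical old/new recognition counts (not from the article).
counts_old = np.array([75, 25])   # hits, misses
counts_new = np.array([30, 70])   # false alarms, correct rejections

def negloglik(params):
    """Multinomial negative log-likelihood of a one-high-threshold MPT model."""
    d, g = params                    # d: detection, g: guessing "old"
    p_hit = d + (1 - d) * g          # old item: detected, or guessed "old"
    p_fa = g                         # new item: never detected, guessed "old"
    p = np.array([p_hit, 1 - p_hit, p_fa, 1 - p_fa])
    n = np.concatenate([counts_old, counts_new])
    return -np.sum(n * np.log(np.clip(p, 1e-12, 1.0)))

res = minimize(negloglik, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
d_hat, g_hat = res.x  # saturated model, so MLEs match the analytic values
```

For these counts the analytic solutions are g = 0.30 (the false-alarm rate) and d = 0.45/0.70, which the numerical fit recovers.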

3.
A class of four simultaneous component models for the exploratory analysis of multivariate time series collected from more than one subject simultaneously is discussed. In each of the models, the multivariate time series of each subject is decomposed into a few series of component scores and a loading matrix. The component scores series reveal the latent data structure in the course of time. The interpretation of the components is based on the loading matrix. The simultaneous component models model not only intraindividual variability, but interindividual variability as well. The four models can be ordered hierarchically from weakly to severely constrained, thus allowing for large to small interindividual differences in the model. The use of the models is illustrated by an empirical example. This research has been made possible by funding from the Netherlands Organization of Scientific Research (NWO) to the first author. The authors are obliged to Tom A.B. Snijders, Jos M.F. ten Berge and three anonymous reviewers for comments on an earlier version of this paper, and to Kim Shifren for providing us with her data set, which was collected at Syracuse University.

4.
A multivariate reduced-rank growth curve model is proposed that extends the univariate reduced-rank growth curve model to the multivariate case, in which several response variables are measured over multiple time points. The proposed model allows us to investigate the relationships among a number of response variables in a more parsimonious way than the traditional growth curve model. In addition, the method is more flexible than the traditional growth curve model. For example, response variables do not have to be measured at the same time points, nor at the same number of time points. It is also possible to apply various kinds of basis function matrices with different ranks across response variables. It is not necessary to specify an entire set of basis functions in advance. Examples are given for illustration. The work reported in this paper was supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the second author. We thank Jennifer Stephan for her helpful comments on an earlier version of this paper. We also thank Patrick Curran and Terry Duncan for kindly letting us use the NLSY and substance use data, respectively. The substance use data were provided by Grant DA09548 from the National Institute on Drug Abuse.

5.
Traditional process models of old-new recognition have not addressed differences in accuracy and response time between individual stimuli. Two new process models of recognition are presented and applied to response time and accuracy data from 3 old-new recognition experiments. The 1st model is derived from a feature-sampling account of the time course of categorization, whereas the 2nd model is a generalization of a random-walk model of categorization. In the experiments, a new technique was used, which yielded reliable individual-stimulus data through repeated presentation of structurally equivalent items. The results from the experiments showed reliable differences in accuracy and response times between stimuli. The random-walk model provided the better account of the results from the 3 experiments. The implications of the results for process models of recognition are discussed.

6.
In the experiments reported here, individuals with experience in a multivariate prediction setting showed considerable moderation of subsequent univariate predictions, compared to those without such experience. We show that such moderation of prediction does not result from an abstract rule of regression to the mean; rather, it can be explained by the named error model. According to this model, missing predictors are treated as an error term, with their unknown values replaced by central tendencies. Experiment 1 demonstrates the phenomenon of moderation following multivariate experience and explores its generalization to novel predictors. Moderation occurs even for a perfectly valid predictor, contrary to normative application of a regression strategy. Experiment 2 shows that the phenomenon depends on lack of correlation among the multivariate predictors. This accords with the named error model, which asserts that if missing predictors are perceived to be correlated with the available predictor, their unknown values are replaced by extreme values rather than by central tendencies. Experiment 3 shows that mere exposure to additional predictors has no effect; experience in which multiple predictors are used to make numerical predictions seems to be necessary in order to obtain subsequent moderation. In Experiment 4, feedback is introduced. Moderation of prediction results even without prior multivariate experience. However, multivariate experience produces the moderation effect much more quickly.

7.
This article uses a general latent variable framework to study a series of models for nonignorable missingness due to dropout. Nonignorable missing data modeling acknowledges that missingness may depend not only on covariates and observed outcomes at previous time points as with the standard missing at random assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework with the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling. A new selection model not only allows an influence of the outcomes on missingness but allows this influence to vary across classes. Model selection is discussed. The missing data models are applied to longitudinal data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study, the largest antidepressant clinical trial in the United States to date. Despite the importance of this trial, STAR*D growth model analyses using nonignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficacy in the presence of dropout.

8.
A previous article was concerned with simultaneous linear prediction [1]. There one was given a set of predictor tests or items and one predicted a set of predictands (also tests or items, or perhaps criteria). We proposed a simultaneous prediction which was a certain weighted sum of the predictors. In the present article the constraint that the prediction be a weighted sum is relaxed. We seek a general function of the predictors which will maximize the quantity chosen for measuring prediction efficiency. This quantity is the same as the one used in linear prediction, and we justify this approach by showing it is the appropriate one when there is only one predictand. In order to solve the problem we restrict consideration to a vector of predictors having only a finite number of possible values, i.e., it possesses discrete probability distribution weights. This can be applied in the case of dichotomous items for instance. It may also be used in continuous distributions as an approximation, by first dividing the original range of values into a finite number of intervals. Then one attributes to the interval the weight corresponding to the probability mass it underlies in the original distribution. This work was initiated at Stanford University under contract 2-10-065 with U. S. Office of Education and was partly revised at the Université de Montréal. I wish to express my gratitude to Professor Herbert Solomon, Stanford University, for his unfailing assistance at all stages of my work and especially for bringing to my attention the problem of nonlinear prediction in the present context.

9.
Probabilistic models have recently received much attention as accounts of human cognition. However, most research in which probabilistic models have been used has been focused on formulating the abstract problems behind cognitive tasks and their optimal solutions, rather than on mechanisms that could implement these solutions. Exemplar models are a successful class of psychological process models in which an inventory of stored examples is used to solve problems such as identification, categorization, and function learning. We show that exemplar models can be used to perform a sophisticated form of Monte Carlo approximation known as importance sampling and thus provide a way to perform approximate Bayesian inference. Simulations of Bayesian inference in speech perception, generalization along a single dimension, making predictions about everyday events, concept learning, and reconstruction from memory show that exemplar models can often account for human performance with only a few exemplars, for both simple and relatively complex prior distributions. These results suggest that exemplar models provide a possible mechanism for implementing at least some forms of Bayesian inference.
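The core idea, stored exemplars treated as samples from the prior and reweighted by the likelihood of the observed stimulus, amounts to self-normalized importance sampling. A minimal sketch on a hypothetical Gaussian example (all numbers are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Exemplars stored in memory, treated as samples from the prior N(0, 1).
exemplars = rng.normal(0.0, 1.0, size=2000)

# Observed stimulus x; each stored hypothesis h gets weight N(x | h, sigma^2).
x, sigma = 1.0, 0.5
weights = np.exp(-(x - exemplars) ** 2 / (2 * sigma ** 2))
weights /= weights.sum()  # self-normalize

# Importance-sampling estimate of the posterior mean E[h | x].
post_mean = np.sum(weights * exemplars)

# Conjugate analytic value: x * tau^2 / (tau^2 + sigma^2) = 1 / 1.25 = 0.8,
# so post_mean should land close to 0.8.
```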

10.
This paper generalizes the p* model for dichotomous social network data (Wasserman & Pattison, 1996) to the polytomous case. The generalization is achieved by transforming valued social networks into three-way binary arrays. This data transformation requires a modification of the Hammersley-Clifford theorem that underpins the p* class of models. We demonstrate that, provided that certain (non-observed) data patterns are excluded from consideration, a suitable version of the theorem can be developed. We also show that the approach amounts to a model for multiple logits derived from a pseudo-likelihood function. Estimation within this model is analogous to the separate fitting of multinomial baseline logits, except that the Hammersley-Clifford theorem requires the equating of certain parameters across logits. The paper describes how to convert a valued network into a data array suitable for fitting the model and provides some illustrative empirical examples. This research was supported by grants from the Australian Research Council, the National Science Foundation (#SBR96-30754), and the National Institutes of Health (#PHS-1RO1-39829-01).

11.
Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into account and to assess parameter correlations. The model is estimated using Bayesian methods with weakly informative hyperprior distribution and a Gibbs sampler based on two steps of data augmentation. Estimation, model checks, and hypotheses tests are discussed. The new method is illustrated using a real data set, and its performance is evaluated in a simulation study.

12.
A new class of parametric models that generalize the multivariate probit model and the errors-in-variables model is developed to model and analyze ordinal data. A general model structure is assumed to accommodate the information that is obtained via surrogate variables. A hybrid Gibbs sampler is developed to estimate the model parameters. To obtain a rapidly converged algorithm, the parameter expansion technique is applied to the correlation structure of the multivariate probit models. The proposed model and method of analysis are demonstrated with real data examples and simulation studies.

13.
This paper aims to improve the prediction accuracy of Tropical Cyclone Tracks (TCTs) over the South China Sea (SCS) with a 24 h lead time. The model proposed in this paper is a regularized extreme learning machine (ELM) ensemble using bagging. To solve the lasso and elastic net problems in the ELM, a method is proposed that recasts the original problem as a quadratic programming (QP) problem. The forecast error on the TCT data set is the distance between the real position and the forecast position. Compared with the stepwise regression method widely used for TCTs, our model obtains an 8.26 km accuracy improvement on a dataset with 70/1680 testing/training records; on a smaller dataset with 30/720 testing/training records, the improvement is 16.49 km. Results show that regularized ELM bagging has generally better generalization capacity on the TCT data set.
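The paper's lasso/elastic-net QP formulation is its own contribution, but the overall recipe, a random hidden layer with regularized output weights plus bootstrap aggregation, can be sketched with ridge regularization on toy data. Everything below (the sin regression target, layer sizes, ridge strength) is an illustrative assumption, not the article's model:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_elm(X, y, n_hidden=50, ridge=1e-2):
    """One extreme learning machine: random hidden layer, ridge output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def bagged_elm(X, y, n_models=10):
    """Fit an ensemble of ELMs, each on a bootstrap resample of the data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))
        models.append(fit_elm(X[idx], y[idx]))
    return models

def predict_bagged(models, X):
    return np.mean([predict_elm(m, X) for m in models], axis=0)

# Toy regression: y = sin(x) on [0, 3].
X = np.linspace(0, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
models = bagged_elm(X, y)
rmse = np.sqrt(np.mean((predict_bagged(models, X) - y) ** 2))
```

Averaging the bootstrap-trained ELMs is what stabilizes the otherwise random hidden layers, which is the motivation for bagging in the paper.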

14.
Consider a set of data consisting of measurements of n objects with respect to p variables, displayed in an n × p matrix. A monotone transformation of the values in each column, represented as a linear combination of integrated basis splines, is assumed determined by a linear combination of a new set of values characterizing each row object. Two different models are used: one, an Eckart-Young decomposition model, and the other, a multivariate normal model. Examples for artificial and real data are presented. The results indicate that both methods are helpful in choosing dimensionality and that the Eckart-Young model is also helpful in displaying the relationships among the objects and the variables. Also, results suggest that the resulting transformations are themselves illuminating.

15.
The paper proposes a full information maximum likelihood estimation method for modelling multivariate longitudinal ordinal variables. Two latent variable models are proposed that account for dependencies among items within time and between time. One model fits item‐specific random effects which account for the between time points correlations and the second model uses a common factor. The relationships between the time‐dependent latent variables are modelled with a non‐stationary autoregressive model. The proposed models are fitted to a real data set.

16.
We propose a generalization of the speed–accuracy response model (SARM) introduced by Maris and van der Maas (Psychometrika 77:615–633, 2012). In these models, the scores that result from a scoring rule that incorporates both the speed and accuracy of item responses are modeled. Our generalization is similar to that of the one-parameter logistic (or Rasch) model to the two-parameter logistic (or Birnbaum) model in item response theory. An expectation–maximization (EM) algorithm for estimating model parameters and standard errors was developed. Furthermore, methods to assess model fit are provided in the form of generalized residuals for item score functions and saddlepoint approximations to the density of the sum score. The presented methods were evaluated in a small simulation study, the results of which indicated good parameter recovery and reasonable type I error rates for the residuals. Finally, the methods were applied to two real data sets. It was found that the two-parameter SARM showed improved fit compared to the one-parameter SARM in both data sets.

17.
Erickson and Kruschke (2002b) have shown that human subjects generalize category knowledge in a rule-like fashion when exposed to a rule-plus-exception categorization task. This result has remained a challenge to exemplar models of category learning. We show that these models can account for such performance, if they are augmented with exemplar-specific specificity or exemplar-specific attention. This result, however, is only achieved if the choice rule that converts evidence for competing categories into probabilities is sensitive to small differences between evidence values close to 0. Exemplar-specific attention provided the best overall approximation of the data. Exemplar-specific specificity provided a slightly worse approximation, but it better predicted the observed rule-like generalization pattern.

18.
The purpose of this article is to formalize the generalization criterion method for model comparison. The method has the potential to provide powerful comparisons of complex and nonnested models that may also differ in terms of numbers of parameters. The generalization criterion differs from the better known cross-validation criterion in the following critical procedure. Although both employ a calibration stage to estimate parameters, cross-validation employs a replication sample from the same design for the validation stage, whereas generalization employs a new design for the critical stage. Two examples of the generalization criterion method are presented that demonstrate its usefulness for selecting a model based on sound scientific principles out of a set that also contains models lacking sound scientific principles that are either overly complex or oversimplified. The main advantage of the generalization criterion is its reliance on extrapolations to new conditions. After all, accurate a priori predictions to new conditions are the hallmark of a good scientific theory. Copyright 2000 Academic Press.

19.
General recognition theory (GRT) is a multivariate generalization of signal detection theory. Past versions of GRT were static and lacked a process interpretation. This article presents a stochastic version of GRT that models moment-by-moment fluctuations in the output of perceptual channels via a multivariate diffusion process. A decision stage then computes a linear or quadratic function of the outputs from the perceptual channels, which drives a univariate diffusion process that determines the subject's response. Conditions are established under which the stochastic and static versions of GRT make identical accuracy predictions. These equivalence relations show that traditional estimates of perceptual noise may often be corrupted by decisional influences. Copyright 2000 Academic Press.
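The univariate diffusion process at the decision stage can be simulated directly. A minimal Euler-Maruyama sketch with illustrative parameters (an assumption for exposition, not the article's fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_trial(drift, bound, dt=0.005, sigma=1.0, max_t=10.0):
    """One univariate diffusion: accumulate noisy evidence until +/-bound is hit.

    Returns (hit_upper_bound, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= bound, t

results = [diffusion_trial(drift=1.0, bound=1.0) for _ in range(300)]
accuracy = np.mean([correct for correct, _ in results])
mean_rt = np.mean([t for _, t in results])
# For symmetric bounds, the analytic probability of hitting the upper bound
# is 1 / (1 + exp(-2 * drift * bound / sigma**2)) ~= 0.881, so the simulated
# accuracy should fall near that value (up to sampling noise).
```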

20.
Given a set of points on the plane and an assignment of values to them, an optimal linear partition is a division of the set into two subsets which are separated by a straight line and maximally contrast with each other in the values assigned to their points. We present a method for inspecting and rating all linear partitions of a finite set, and a package of three functions in the R language for executing the computations. One function is for finding the optimal linear partitions and corresponding separating lines, another for graphically representing the results, and a third for testing how well the data comply with the linear separability condition. We illustrate the method on possible data from a psychophysical experiment (concerning the size–weight illusion) and compare its performance with that of linear discriminant analysis and multiple logistic regression, adapted to dividing linearly a set of points on the plane.
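The enumeration such a method performs can be sketched in Python (the article's package is in R): for points in general position, every subset separable by a straight line is a prefix of the points ordered by projection onto some direction, and it suffices to try directions perpendicular to lines through point pairs, slightly perturbed to break ties. The data below are hypothetical:

```python
import numpy as np
from itertools import combinations

def best_linear_partition(points, values):
    """Rate all linear partitions; return the max contrast and its two subsets."""
    pts = np.asarray(points, float)
    vals = np.asarray(values, float)
    best = (-np.inf, None)
    for i, j in combinations(range(len(pts)), 2):
        d = pts[j] - pts[i]
        for eps in (-1e-9, 1e-9):            # perturb to break ties on the line
            normal = np.array([-d[1], d[0]]) + eps * d
            order = np.argsort(pts @ normal)
            for k in range(1, len(pts)):     # prefix of size k vs the rest
                a, b = order[:k], order[k:]
                contrast = abs(vals[a].mean() - vals[b].mean())
                if contrast > best[0]:
                    best = (contrast, (sorted(a.tolist()), sorted(b.tolist())))
    return best

# Toy data: the left pair has low values, the right pair high values,
# so the optimal separating line should split left from right.
pts = [(0, 0), (0, 1), (3, 0), (3, 1)]
vals = [1.0, 1.2, 5.0, 5.1]
contrast, (left, right) = best_linear_partition(pts, vals)
```

Here the optimal partition is {0, 1} versus {2, 3} with contrast |1.1 − 5.05| = 3.95.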


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号