Similar Literature
20 similar documents were retrieved.
1.
Recently, several authors have proposed the use of random graph theory to evaluate the adequacy of cluster analysis results. One such statistic is the minimum number of lines (edges) V needed to connect a random graph. Erdös and Rényi derived asymptotic distributions of V. Schultz and Hubert showed in a Monte Carlo study that the asymptotic approximations are poor for small sample sizes n typically used in data analysis applications. In this paper the exact probability distribution of V is given and the distributions for some values of n are tabulated and compared with existing Monte Carlo approximations.
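As an illustration of the statistic V discussed above, the following minimal Monte Carlo sketch (Python) adds uniformly random edges to an n-vertex graph until it is connected and records how many were needed. The choice of n and the number of replications is illustrative, not taken from the paper.

```python
import random

def edges_until_connected(n, rng=random):
    """Add distinct edges uniformly at random until the graph on n vertices
    is connected; return the number of edges V that were needed."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(edges)
    components, v = n, 0
    for i, j in edges:
        v += 1
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == 1:
                return v
    return v

n, reps = 10, 20_000          # small n, as in the paper's motivation (values assumed)
samples = [edges_until_connected(n) for _ in range(reps)]
print("mean V:", sum(samples) / reps)
```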

2.
The present paper deals with the extension of two well-known static discrete choice theories to the dynamic situation in which individuals make choices at several points in (continuous) time. A dynamic version of Luce's Axiom, “independence from irrelevant alternatives”, is proposed and some of its implications are derived. In the static case Yellott (J. Math. Psych. 1977, 15, 109–146) and others have demonstrated that an independent random utility model generated from the extreme value distribution exp(−e^(−ax−b)) becomes equivalent to Luce's Axiom. Yellott also introduced an axiom called “invariance under uniform expansions of the choice set”, and he proved that within the class of random utility models with independent identically distributed utilities (apart from a location shift) this axiom is equivalent to Luce's Axiom. These results are extended to the dynamic situation and it is shown that if the utility processes are expressed by so-called extremal processes the corresponding choice model is Markovian. A nonstationary generalization is proposed which is of substantial interest in applications where the parameters of the choice process are influenced by previous choice experience or by time-varying exogenous variables. In particular, it is demonstrated that the nonstationary model is Markovian if and only if the joint choice probabilities at two points in time have a particular form. Thus, the paper provides a rationale for applying a specific class of Markov models as the point of departure when modelling mobility processes that involve individual discrete decisions over time.
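The equivalence between independent extreme value (Gumbel) utilities and Luce's Axiom mentioned above can be checked numerically. The sketch below is a hedged illustration: the three location parameters are arbitrary choices of ours, and the Luce probabilities are computed in their ratio (softmax) form.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.5, 1.0, 2.0])                 # location parameters of three alternatives (assumed)
n = 500_000
u = v + rng.gumbel(size=(n, 3))               # independent standard Gumbel noise on each utility
empirical = np.bincount(u.argmax(axis=1), minlength=3) / n
luce = np.exp(v) / np.exp(v).sum()            # Luce / softmax choice probabilities
print(np.round(empirical, 3), np.round(luce, 3))
```

The two printed vectors should agree to Monte Carlo error, which is the static equivalence the abstract refers to.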

3.
The paper provides conceptual clarifications for the issues related to the dependence of jointly distributed systems of random entities on external factors. This includes the theory of selective influence as proposed in Dzhafarov [(2003a). Selective influence through conditional independence. Psychometrika, 68, 7-26] and generalized versions of the notions of probabilistic causality [Suppes, P., & Zanotti, M. (1981). When are probabilistic explanations possible? Synthese, 48, 191-199] and dimensionality in the latent variable models [Levine, M. V. (2003). Dimension in latent variable models. Journal of Mathematical Psychology, 47, 450-466]. One of the basic observations is that any system of random entities whose joint distribution depends on a factor set can be represented by functions of two arguments: a single factor-independent source of randomness and the factor set itself. In the case of random variables (i.e., real-valued random entities endowed with Borel sigma-algebras) the single source of randomness can be chosen to be any random variable with a continuous distribution (e.g., uniformly distributed between 0 and 1).

4.
Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.
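TreeBUGS itself is an R package, and the sketch below does not reproduce its interface. It is only a minimal illustration (in Python) of what an MPT model does at the aggregate level: a simple one-high-threshold recognition model whose two parameters are mapped to category probabilities and fitted by maximum likelihood. The model choice and the counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# one-high-threshold recognition model (illustrative only, not the TreeBUGS interface)
def category_probs(D, g):
    # old items: hit = D + (1 - D) * g, miss = (1 - D) * (1 - g); new items: FA = g, CR = 1 - g
    return np.array([D + (1 - D) * g, (1 - D) * (1 - g)]), np.array([g, 1 - g])

counts_old, counts_new = np.array([80, 20]), np.array([30, 70])   # hypothetical response counts

def neg_log_lik(params):
    D, g = expit(params)                      # unconstrained parameters mapped into (0, 1)
    p_old, p_new = category_probs(D, g)
    return -(counts_old @ np.log(p_old) + counts_new @ np.log(p_new))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print("D, g =", np.round(expit(fit.x), 3))
```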

5.
A new approach for evaluating spatial statistical models based on the (random) number 0 ≤ N(i, n) ≤ n of points whose nearest neighbor is i in an ensemble of n + 1 points is discussed. The second moment of N(i, n) offers a measure of the centrality of the ensemble. The asymptotic distribution of N(i, n) and the expected degree of centrality for several spatial and nonspatial point processes is described. The use of centrality as a diagnostic statistic for multidimensional scaling is explored.
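A hedged Monte Carlo sketch of the quantity described above: for uniform points in the plane, it counts how many points have point i as their nearest neighbour, N(i, n), and estimates the second moment. The dimension, n, and replication count are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, dim = 20, 5_000, 2                  # n + 1 points in `dim` dimensions (assumed values)
second_moments = []
for _ in range(reps):
    pts = rng.uniform(size=(n + 1, dim))
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                    # nearest neighbour of every point
    counts = np.bincount(nn, minlength=n + 1)  # N(i, n) for each point i
    second_moments.append(np.mean(counts.astype(float) ** 2))
print("E[N(i,n)^2] ≈", np.mean(second_moments))
```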

6.
In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model the 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is proved that the item parameters and the abilities are identified if a difficulty parameter and a guessing parameter are fixed at zero. The second specification assumes that the abilities are mutually independent and identically distributed according to a distribution known up to the scale parameter. It is shown that the item parameters and the scale parameter are identified if a guessing parameter is fixed at zero. The third specification corresponds to a semi-parametric 1PL-G model, where the distribution G generating the abilities is a parameter of interest. It is not only shown that, after fixing a difficulty parameter and a guessing parameter at zero, the item parameters are identified, but also that under those restrictions the distribution G is not identified. It is finally shown that, after introducing two identification restrictions, either on the distribution G or on the item parameters, the distribution G and the item parameters are identified provided an infinite quantity of items is available.
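The 1PL-G response function itself (a 3PL model with all discriminations fixed at 1) can be written down directly, as in the sketch below. The item parameters b and c are illustrative; nothing about the identification results is implemented here.

```python
import numpy as np

def p_correct(theta, b, c):
    """1PL-G response function: guessing floor c plus a 1PL (Rasch-type) component."""
    return c + (1.0 - c) / (1.0 + np.exp(-(theta - b)))

theta = np.linspace(-3, 3, 7)
print(np.round(p_correct(theta, b=0.5, c=0.2), 3))   # illustrative item with b = 0.5, c = 0.2
```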

7.
In this note we investigate the condition that the distribution of the maximum of a set of random variables does not depend on which variable attains the maximum. This problem arises in random utility theory. When the random variables are independent, the property implies that all the marginal distributions must be Double Exponential (with distribution function exp(−e^(−x)) in standard form). When dependence is allowed the property characterizes a much broader class consisting of arbitrary functions of arbitrary homogeneous functions of the variables e^(−x_i), a result stated without proof in D. J. Strauss (Journal of Mathematical Psychology, 1979, 20, 35–52). These are the distributions such that the maximum has the same distribution (apart from a location shift) as the marginals, provided the marginals are the same.
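A quick simulation check of the independent case described above: the maximum of n i.i.d. standard double exponential (Gumbel) variables is again Gumbel, shifted by log n. The sample sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 5, 200_000
m = rng.gumbel(size=(reps, n)).max(axis=1)
# a standard Gumbel has mean equal to the Euler-Mascheroni constant (~0.5772)
print("mean of max:", m.mean().round(3),
      "predicted:", round(np.log(n) + np.euler_gamma, 3))
```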

8.
Multinomial random variables are used across many disciplines to model categorical outcomes. Under this framework, investigators often use a likelihood ratio test to determine goodness-of-fit. If the permissible parameter space of such models is defined by inequality constraints, then the maximum likelihood estimator may lie on the boundary of the parameter space. Under this condition, the asymptotic distribution of the likelihood ratio test is no longer a simple χ² distribution. This article summarizes recent developments in the constrained inference literature as they pertain to the testing of multinomial random variables, and extends existing results by considering the case of jointly independent multinomial random variables of varying categorical size. This article provides an application of this methodology to axiomatic measurement theory as a means of evaluating properly operationalized measurement axioms. This article generalizes Iverson and Falmagne’s [Iverson, G. J. & Falmagne, J. C. (1985). Statistical issues in measurement. Mathematical Social Sciences, 10, 131-153] seminal work on the empirical evaluation of measurement axioms and provides a classical counterpart to Myung, Karabatsos, and Iverson’s [Myung, J. I., Karabatsos, G. & Iverson, G. J. (2005). A Bayesian approach to testing decision making axioms. Journal of Mathematical Psychology, 49, 205-225] Bayesian methodology on the same topic.
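A hedged, stripped-down illustration of the boundary problem the abstract addresses, using a single binomial parameter rather than the general multinomial setting: testing p = 0.5 against p ≥ 0.5 yields an LRT statistic whose null distribution is a 50:50 mixture of a point mass at zero and χ²(1), so the usual χ²(1) critical value would be conservative. All numerical choices are ours.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n, reps = 200, 50_000
x = rng.binomial(n, 0.5, size=reps)
p_hat = np.clip(x / n, 1e-12, 1 - 1e-12)
p_con = np.maximum(p_hat, 0.5)                      # MLE under the constraint p >= 0.5
lrt = 2 * (x * np.log(p_con / 0.5) + (n - x) * np.log((1 - p_con) / 0.5))
crit = chi2.ppf(0.90, df=1)                         # chi-bar-square 5% critical value: P(chi2_1 > c) = 0.10
print("rejection rate:", (lrt > crit).mean().round(3), "(target 0.05)")
print("P(LRT = 0):", (lrt == 0).mean().round(3), "(roughly 0.5 under the mixture)")
```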

9.
This paper is concerned with removing the influence of non-normality in the classical t-statistic for contrasting means. Using higher-order expansion to quantify the effect of non-normality, four corrected statistics are provided. Two aim to correct the mean bias and two to correct the overall distribution. The classical t-statistic is also robust against non-normality when the observed variables satisfy certain structures. A special case is when the marginal distributions of the contrast are independent and identically distributed.

10.
In the behavioral and social sciences, quasi-experimental and observational studies are used due to the difficulty of achieving random assignment. However, the estimation of differences between groups in observational studies frequently suffers from bias due to differences in the distributions of covariates. To estimate average treatment effects when the treatment variable is binary, Rosenbaum and Rubin (1983a) proposed adjustment methods for pretreatment variables using the propensity score. However, these studies were interested only in estimating the average causal effect and/or marginal means. In the behavioral and social sciences, a general estimation method is required to estimate parameters in multiple group structural equation modeling where the differences of covariates are adjusted. We show that a Horvitz–Thompson-type estimator, the propensity score weighted M estimator (PWME), is consistent even when estimated propensity scores are used, and that its asymptotic variance is smaller than that obtained with the true propensity scores. Furthermore, we show that the asymptotic distribution of the propensity score weighted statistic under a null hypothesis is a weighted sum of independent χ²(1) variables. We show the method can compare latent variable means with covariates adjusted using propensity scores, which was not feasible by previous methods. We also apply the proposed method for correlated longitudinal binary responses with informative dropout using data from the Longitudinal Study of Aging (LSOA). The results of a simulation study indicate that the proposed estimation method is more robust than the maximum likelihood (ML) estimation method, in that PWME does not require the knowledge of the relationships among dependent variables and covariates.
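The Horvitz–Thompson-style weighting idea underlying the PWME can be shown on a much simpler target, the average treatment effect with a single confounder. The sketch below is only that simpler illustration (logistic-regression propensity scores, inverse-probability weighting); it is not the paper's SEM estimator, and the data-generating values are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5_000
x = rng.normal(size=n)                              # a single confounder
p_true = 1 / (1 + np.exp(-x))                       # true propensity depends on x
t = rng.binomial(1, p_true)                         # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)                # true treatment effect = 2.0

e = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]  # estimated propensity
ate_ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))                 # Horvitz-Thompson-type estimate
print("naive difference:", round(float(y[t == 1].mean() - y[t == 0].mean()), 2),
      "IPW estimate:", round(float(ate_ipw), 2))
```

The naive group difference is biased by the confounder, while the weighted estimate recovers a value close to 2.0, which is the behaviour the abstract's consistency result formalizes in a far more general setting.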

11.
Probabilistic independence among multiple random variables (e.g., among the outputs of multiple spatial-frequency channels) has been invoked to explain two effects found with many kinds of stimuli: increments in detection performance due to “probability summation” and decrements in detection and identification performance due to “extrinsic uncertainty.” Quantitative predictions of such effects, however, depend on the precise assumptions. Here we calculate predictions from multidimensional signal-detection theory assuming any of several different probability distributions characterizing the random variables (including two-state, Gaussian, exponential, and double-exponential distributions) and either of two rules for combining the multiple random variables into a single decision variable (taking the maximum or summing them). In general, the probability distributions predicting shallower ROC curves predict greater increments due to summation but smaller decrements due to extrinsic uncertainty. Some probability distributions yield steep-enough ROC curves to actually predict decrements due to summation in blocked-summation experiments. Probability distribution matters much less for intermixed-summation than for blocked-summation predictions. Of the two combination rules, the sum-of-outputs rule usually predicts both greater increments due to summation and greater decrements due to extrinsic uncertainty. Put another way, of the two combination rules, the sum-of-outputs rule usually predicts better performance on the compound stimulus under any condition but worse performance on simple stimuli under intermixed conditions.
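A small simulation in the spirit of the comparison above: two Gaussian channels, a compound stimulus (signal in both channels) versus a simple one (signal in one channel), and the maximum-of-outputs versus sum-of-outputs decision rules, with the criterion set for a 10% false-alarm rate. The distributional and signal-strength choices are illustrative, not the paper's full analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, d = 200_000, 1.0                    # trials and per-channel signal strength (assumed)
noise = rng.normal(size=(n_trials, 2))        # channel outputs on noise-only trials
compound = noise + d                          # signal of strength d in both channels
simple = noise + np.array([d, 0.0])           # signal in channel 0 only
for rule, f in (("max", lambda z: z.max(axis=1)), ("sum", lambda z: z.sum(axis=1))):
    crit = np.quantile(f(noise), 0.90)        # criterion fixed at a 10% false-alarm rate
    print(rule,
          "hit(compound) =", round(float((f(compound) > crit).mean()), 3),
          "hit(simple) =", round(float((f(simple) > crit).mean()), 3))
```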

12.
13.
The ACE and ADE models have been heavily exploited in twin studies to identify the genetic and environmental components in phenotypes. However, the validity of the likelihood ratio test (LRT) of the existence of a variance component, a key step in the use of such models, has been doubted because the true values of the parameters lie on the boundary of the parameter space of the alternative model for such tests, violating a regularity condition required for a LRT (e.g., Carey in Behav. Genet. 35:653–665, 2005; Visscher in Twin Res. Hum. Genet. 9:490–495, 2006). Dominicus, Skrondal, Gjessing, Pedersen, and Palmgren (Behav. Genet. 36:331–340, 2006) solve the problem of testing univariate components in ACDE models. The present paper resolves the issue of LRTs in bivariate ACDE models by exploiting the theoretical frameworks of inequality constrained LRTs based on cone approximations. Our derivation shows that the asymptotic sampling distribution of the test statistic for testing a single bivariate component in an ACE or ADE model is a mixture of χ² distributions with degrees of freedom (dfs) ranging from 0 to 3, and that for testing both the A and C (or D) components is one of dfs ranging from 0 to 6. These correct distributions are stochastically smaller than the χ² distributions in traditional LRTs and therefore LRTs based on these distributions are more powerful than those used naively. Formulas for calculating the weights are derived and the sampling distributions are confirmed by simulation studies. Several invariance properties for normal data (at most) missing by person are also proved. Potential generalizations of this work are also discussed.

14.
The maximum and minimum of a sample from a probability distribution are extremely important random variables in many areas of psychological theory, methodology, and statistics. For instance, the behavior of the mean of the maximum or minimum processing time, as a function of the number of component random processing times (n), has been studied extensively in an effort to identify the underlying processing architecture (e.g., Townsend & Ashby, 1983; Colonius & Vorberg, 1994). Little is known concerning how measures of variability of the maximum or minimum change with n. Here, a new measure of random variability, the quantile spread, is introduced, which possesses sufficient strength to define distributional orderings and derive a number of results concerning variability of the maximum and the minimum statistics. The quantile spread ordering may be useful in many venues. Several interesting open problems are pointed out. This work was supported by an NIH Grant R01 MH57717 to the first author. Some of the collaboration took place during the year 2000 while J.T. Townsend was a Fellow at the Hanse Institute for Advanced Study (HWK), sponsored by H. Colonius at Oldenburg University.
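One way to read the quantile spread is QS_p(X) = Q_X(1 − p) − Q_X(p). For the maximum of n i.i.d. variables, F_max(x) = F(x)^n, so Q_max(u) = Q(u^{1/n}), which makes the spread easy to evaluate. The sketch below does this for a standard normal parent; the choice of parent and of p = 0.1 is ours, not the paper's.

```python
from scipy.stats import norm

def quantile_spread_max(q_fn, n, p=0.1):
    """Quantile spread of the maximum of n i.i.d. variables with quantile function q_fn.
    Since F_max(x) = F(x)**n, the maximum's quantile function is Q(u**(1/n))."""
    return q_fn((1 - p) ** (1 / n)) - q_fn(p ** (1 / n))

for n in (1, 2, 4, 8, 16):
    print(n, round(quantile_spread_max(norm.ppf, n), 3))   # standard normal parent (assumed)
```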

15.
We present an extension of the secretary problem in which the decision maker (DM) sequentially observes up to n applicants whose values are random variables X1, X2, …, Xn drawn i.i.d. from a uniform distribution on [0,1]. The DM must select exactly one applicant, cannot recall released applicants, and receives a payoff of xt, the realization of Xt, for selecting the tth applicant. For each encountered applicant, the DM only learns whether the applicant is the best so far. We prove that the optimal policy dictates skipping the first √n − 1 applicants, and then selecting the next encountered applicant whose value is a maximum.
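A hedged simulation of the stated policy: skip the first √n − 1 applicants (rounded to an integer here, which is our discretization), then take the first later applicant who is the best so far, falling back to the last applicant if none is (an assumed tie-breaking rule, since exactly one applicant must be chosen).

```python
import numpy as np

rng = np.random.default_rng(8)

def play(n):
    x = rng.uniform(size=n)
    skip = int(round(np.sqrt(n))) - 1          # discretization of sqrt(n) - 1 (assumed)
    best = x[:skip].max() if skip > 0 else -np.inf
    for t in range(skip, n):
        if x[t] > best:                        # "best so far" is all the DM ever observes
            return x[t]
    return x[-1]                               # forced to take the last applicant (assumed rule)

n, reps = 100, 50_000
print("mean payoff:", round(float(np.mean([play(n) for _ in range(reps)])), 3))
```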

16.
In order to make the parallel analysis criterion for determining the number of factors easy to use, regression equations for predicting the logarithms of the latent roots of random correlation matrices, with squared multiple correlations on the diagonal, are presented. The correlation matrices were derived from distributions of normally distributed random numbers. The independent variables are log (N–1) and log {[n(n–1)/2]–[(i–1)n]}, where N is the number of observations; n, the number of variables; and i, the ordinal position of the eigenvalue. The results were excellent, with multiple correlation coefficients ranging from .9948 to .9992. This research was supported by the Office of Naval Research under Contract N00014-67-A-0305-0012, Lloyd G. Humphreys, principal investigator, and by the Department of Computer Science of which Richard G. Montanelli, Jr., is a member.
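The Monte Carlo side of the procedure can be sketched directly: generate random normal data, form the correlation matrix with squared multiple correlations (SMCs) on the diagonal, and record the ordered latent roots; the paper's regression equations summarize the logarithms of such roots across many conditions. The N, n, and replication count below are illustrative, and only the positive mean roots are logged.

```python
import numpy as np

rng = np.random.default_rng(9)
N, n, reps = 100, 10, 500                        # observations, variables, replications (assumed)
eigs = np.zeros((reps, n))
for r in range(reps):
    data = rng.normal(size=(N, n))
    R = np.corrcoef(data, rowvar=False)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # squared multiple correlations
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)
    eigs[r] = np.sort(np.linalg.eigvalsh(R_reduced))[::-1]

mean_roots = eigs.mean(axis=0)                   # reference roots for comparison with observed data
print(np.round(mean_roots, 3))
print(np.round(np.log(mean_roots[mean_roots > 0]), 3))   # logs of the positive mean roots
```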

17.
Yellott (1978) has shown that there are Thurstone models with probability distributions of different types that are equivalent for complete experiments with three alternatives. This note generalizes and extends his findings by showing that for any number of alternatives n, there exists a pair of Thurstone models with probability distributions of different types that are equivalent for complete experiments with n alternatives, but which are not equivalent for complete experiments with n + 1 alternatives.

18.
A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated. This yielded a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test.
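A hedged sketch of the proposed test: build a sampling distribution of effect sizes (here R² of a cubic polynomial fit) from random residual sets of size n, then compare the experimental effect sizes to the median of that distribution with a sign test. The residual sets below are simulated stand-ins with a weak injected trend, not data from the cited example.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(7)

def cubic_effect_size(resid):
    """R^2 of a cubic polynomial fitted to residuals as a function of position."""
    x = np.arange(len(resid))
    fit = np.polyval(np.polyfit(x, resid, 3), x)
    ss_res = np.sum((resid - fit) ** 2)
    ss_tot = np.sum((resid - resid.mean()) ** 2)
    return 1 - ss_res / ss_tot

n = 12                                   # size of each residual set (assumed)
null_es = np.array([cubic_effect_size(rng.normal(size=n)) for _ in range(20_000)])
null_median = np.median(null_es)         # reference median under pure randomness

# hypothetical experimental residual sets, one per subject/condition, with a weak cubic trend
experimental = [rng.normal(size=n) + 0.8 * np.linspace(-1, 1, n) ** 3 for _ in range(15)]
exp_es = np.array([cubic_effect_size(r) for r in experimental])
above = int((exp_es > null_median).sum())
print(binomtest(above, n=len(exp_es), p=0.5))   # sign test against the null median
```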

19.
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good as, and in some cases better than, CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
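QMPE itself is a Fortran program, and the sketch below does not use its interface. It only illustrates the continuous maximum likelihood route for one of the distributions QMPE handles, the ex-Gaussian, via SciPy's exponentially modified normal (exponnorm), whose shape parameter is K = τ/σ; the "true" parameter values are invented.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(10)
mu, sigma, tau = 0.4, 0.05, 0.2                   # assumed ex-Gaussian parameters (in seconds)
rt = rng.normal(mu, sigma, size=500) + rng.exponential(tau, size=500)

K, loc, scale = exponnorm.fit(rt)                 # continuous ML fit; shape K = tau / sigma
print("mu ~", round(loc, 3), "sigma ~", round(scale, 3), "tau ~", round(K * scale, 3))
```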

20.
This paper introduces a two-parameter family of distributions for modelling random variables on the (0,1) interval by applying the cumulative distribution function of one ‘parent’ distribution to the quantile function of another. Family members have explicit probability density functions, cumulative distribution functions and quantiles in a location parameter and a dispersion parameter. They capture a wide variety of shapes that the beta and Kumaraswamy distributions cannot. They are amenable to likelihood inference, and enable a wide variety of quantile regression models, with predictors for both the location and dispersion parameters. We demonstrate their applicability to psychological research problems and their utility in modelling real data.
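The construction can be illustrated with one member of the family: push a uniform variate through the logistic CDF applied to a location–scale shift of the normal quantile function, which yields a two-parameter distribution on (0, 1). The logistic–normal pairing and the μ, σ values below are illustrative choices, not the paper's preferred member.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit

def cdf_quantile_sample(mu, sigma, size, rng):
    """Draw from the (0, 1) distribution obtained by pushing U through
    logistic_cdf(mu + sigma * normal_quantile(U)) -- one member of the family."""
    u = rng.uniform(size=size)
    return expit(mu + sigma * norm.ppf(u))

rng = np.random.default_rng(11)
x = cdf_quantile_sample(mu=0.5, sigma=1.2, size=100_000, rng=rng)   # illustrative location/dispersion
print("mean:", x.mean().round(3), "median:", np.median(x).round(3),
      "predicted median:", round(float(expit(0.5)), 3))             # the median is logistic_cdf(mu)
```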

