Similar articles
 20 similar articles found (search time: 27 ms)
1.
Gradable adjectives denote a function that takes an object and returns a measure of the degree to which the object possesses some gradable property [Kennedy, C. (1999). Projecting the adjective: The syntax and semantics of gradability and comparison. New York: Garland]. Scales, ordered sets of degrees, have begun to be studied systematically in semantics [Kennedy, C. (to appear). Vagueness and grammar: the semantics of relative and absolute gradable predicates. Linguistics and Philosophy; Kennedy, C. and McNally, L. (2005). Scale structure, degree modification, and the semantics of gradable predicates. Language, 81, 345-381; Rotstein, C., and Winter, Y. (2004). Total adjectives vs. partial adjectives: scale structure and higher order modifiers. Natural Language Semantics, 12, 259-288.]. We report four experiments designed to investigate the processing of absolute adjectives with a maximum standard (e.g., clean) and their minimum standard antonyms (dirty). The central hypothesis is that the denotation of an absolute adjective introduces a 'standard value' on a scale as part of the normal comprehension of a sentence containing the adjective (the "Obligatory Scale" hypothesis). In line with the predictions of Kennedy and McNally (2005) and Rotstein and Winter (2004), maximum standard adjectives and minimum standard adjectives systematically differ from each other when they are combined with minimizing modifiers like slightly, as indicated by speeded acceptability judgments. An eye movement recording study shows that, as predicted by the Obligatory Scale hypothesis, the penalty due to combining slightly with a maximum standard adjective can be observed during the processing of the sentence; the penalty is not the result of some after-the-fact inferencing mechanism. 
Further, a type of 'quantificational variability effect' may be observed when a quantificational adverb (mostly) is combined with a minimum standard adjective in sentences like "The dishes are mostly dirty", which may receive either a degree interpretation (e.g., 80% dirty) or a quantity interpretation (e.g., 80% of the dishes are dirty). The quantificational variability results provide suggestive support for the Obligatory Scale hypothesis by showing that the standard of a scalar adjective influences the preferred interpretation of other constituents in the sentence.

2.
This paper presents a procedure to test factorial invariance in multilevel confirmatory factor analysis. When the group membership is at level 2, multilevel factorial invariance can be tested by a simple extension of the standard procedure. However, level-1 group membership raises problems that cannot be appropriately handled by the standard procedure, because the dependency between members of different level-1 groups is not appropriately taken into account. The procedure presented in this article provides a solution to this problem. This paper also shows that Muthén's maximum likelihood (MUML) estimation is a viable alternative to maximum likelihood estimation for testing multilevel factorial invariance across level-1 groups. Testing multilevel factorial invariance across level-2 groups and across level-1 groups is illustrated using empirical examples. SAS macro and Mplus syntax are provided.

3.
Rubin and Thayer recently presented equations to implement maximum likelihood (ML) estimation in factor analysis via the EM algorithm. They present an example to demonstrate the efficacy of the algorithm, and propose that their recovery of multiple local maxima of the ML function “certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.” It is shown here, in contrast, that these second derivatives verify that Rubin and Thayer did not find multiple local maxima as claimed. The only known maximum remains the one found by Jöreskog over a decade earlier. The standard errors obtained from the second derivatives and the Fisher information matrix thus remain appropriate where ML assumptions are met. The advantages of the EM algorithm over other algorithms for ML factor analysis remain to be demonstrated.
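The role of second derivatives of the log likelihood as measures of estimation precision can be sketched on a toy maximum likelihood problem (a Poisson rate with made-up counts, not the factor analysis model at issue): the standard error obtained from a numerically evaluated second derivative matches the analytic result.

```python
import math

def poisson_loglik(lam, data):
    # Poisson log-likelihood, dropping the constant sum(log(x!)) term
    return sum(x * math.log(lam) - lam for x in data)

def se_from_second_derivative(loglik, theta_hat, data, h=1e-4):
    # central-difference second derivative at the ML estimate;
    # SE = sqrt(-1 / d2l) when theta_hat is an interior maximum
    d2l = (loglik(theta_hat + h, data)
           - 2 * loglik(theta_hat, data)
           + loglik(theta_hat - h, data)) / h ** 2
    return math.sqrt(-1.0 / d2l)

data = [2, 3, 1, 4, 2, 3, 5, 2, 0, 3]   # hypothetical counts
lam_hat = sum(data) / len(data)          # ML estimate of the Poisson rate
se = se_from_second_derivative(poisson_loglik, lam_hat, data)
print(lam_hat, round(se, 3))             # analytic SE is sqrt(lam_hat / n)
```

For this sample the analytic standard error sqrt(2.5 / 10) = 0.5 agrees with the numerical one, which is the sense in which the curvature of the log likelihood at its maximum quantifies precision.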

4.
Mathematical models of cognition often contain unknown parameters whose values are estimated from the data. A question that generally receives little attention is how informative such estimates are. In a maximum likelihood framework, standard errors provide a measure of informativeness. Here, a standard error is interpreted as the standard deviation of the distribution of parameter estimates over multiple samples. A drawback to this interpretation is that the assumptions that are required for the maximum likelihood framework are very difficult to test and are not always met. However, at least in the cognitive science community, it appears to be not well known that standard error calculation also yields interpretable intervals outside the typical maximum likelihood framework. We describe and motivate this procedure and, in combination with graphical methods, apply it to two recent models of categorization: ALCOVE (Kruschke, 1992) and the exemplar-based random walk model (Nosofsky & Palmeri, 1997). The applications reveal aspects of these models that were not hitherto known and bring a mix of bad and good news concerning estimation of these models.

6.
A definition of essential independence is proposed for sequences of polytomous items. For items satisfying the reasonable assumption that the expected amount of credit awarded increases with examinee ability, we develop a theory of essential unidimensionality which closely parallels that of Stout. Essentially unidimensional item sequences can be shown to have a unique (up to change of scale) dominant underlying trait, which can be consistently estimated by a monotone transformation of the sum of the item scores. In more general polytomous-response latent trait models (with or without ordered responses), an M-estimator based upon maximum likelihood may be shown to be consistent for the dominant trait under essentially unidimensional violations of local independence and a variety of monotonicity/identifiability conditions. A rigorous proof of this fact is given, and the standard error of the estimator is explored. These results suggest that ability estimation methods that rely on the summation form of the log likelihood under local independence should generally be robust under essential independence, but standard errors may vary greatly from what is usually expected, depending on the degree of departure from local independence. An index of departure from local independence is also proposed. This work was supported in part by Office of Naval Research Grant N00014-87-K-0277 and National Science Foundation Grant NSF-DMS-88-02556. The author is grateful to William F. Stout for many helpful comments, and to an anonymous reviewer for raising the questions addressed in section 2. A preliminary version of section 6 appeared in the author's Ph.D. thesis.

7.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm, which in standard form produces weighted least squares estimates, can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates, but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly, and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. Da01070 from the U. S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
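For readers unfamiliar with it, the Gauss-Newton iteration can be illustrated on a one-parameter nonlinear least squares problem, a toy exponential-decay fit rather than the covariance structure models of the abstract; the data and starting value are invented.

```python
import math

t = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 0.61, 0.37, 0.22]   # roughly exp(-0.5 * t), hypothetical observations

def gauss_newton(b, iters=20):
    # fit y ~ exp(-b * t) by minimizing the sum of squared residuals
    for _ in range(iters):
        # residuals r_i = y_i - f(t_i; b) and Jacobian J_i = df/db = -t_i * exp(-b * t_i)
        r = [yi - math.exp(-b * ti) for ti, yi in zip(t, y)]
        J = [-ti * math.exp(-b * ti) for ti in t]
        # Gauss-Newton step (scalar case): b <- b + (J'J)^{-1} J'r
        b = b + sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)
    return b

b_hat = gauss_newton(0.1)
print(round(b_hat, 3))
```

The step uses only first derivatives of the model function (the J'J approximation to the Hessian), which is what makes the method cheap per iteration; the iteratively reweighted variant discussed in the abstract replaces the unit weights implicit here with weights that are updated each step.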

8.
Sentences that exhibit sensitivity to order (e.g. John and Mary arrived at school in that order and Mary and John arrived at school in that order) present a challenge for the standard formulation of plural logic. In response, some authors have advocated new versions of plural logic based on fine-grained notions of plural reference, such as serial reference [Hewitt 2012] and articulated reference [Ben-Yami 2013]. The aim of this article is to show that sensitivity to order should be accounted for without altering the standard formulation of plural logic. In particular, sensitivity to order does not call for a fine-grained notion of plural reference. We point out that the phenomenon in question is quite broad and that current proposals are not equipped to deal with the full range of cases in which order plays a role. Then we develop an alternative and unified account, which locates the phenomenon not in the way in which plural terms can refer, but in the meaning of special expressions such as in that order and respectively.

9.
The maximum likelihood classification rule is a standard method to classify examinee attribute profiles in cognitive diagnosis models (CDMs). Its asymptotic behaviour is well understood when the model is assumed to be correct, but has not been explored in the case of misspecified latent class models. This paper investigates the asymptotic behaviour of a two-stage maximum likelihood classifier under a misspecified CDM. The analysis is conducted in a general restricted latent class model framework addressing all types of CDMs. Sufficient conditions are proposed under which a consistent classification can be obtained by using a misspecified model. Discussions are also provided on the inconsistency of classification under certain model misspecification scenarios. Simulation studies and a real data application are conducted to illustrate these results. Our findings can provide some guidelines as to when a misspecified simple model or a general model can be used to provide a good classification result.
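Setting the two-stage and misspecification issues aside, the basic maximum likelihood classification rule itself can be sketched for a toy two-class model; the class labels and item-success probabilities below are made up for illustration.

```python
import math

# hypothetical item-success probabilities for two latent classes
class_probs = {
    "master":     [0.9, 0.8, 0.85, 0.9],
    "non-master": [0.2, 0.3, 0.25, 0.2],
}

def ml_classify(responses):
    # maximum likelihood classification: assign the 0/1 response vector
    # to the class under which its Bernoulli likelihood is largest
    def loglik(p):
        return sum(x * math.log(p_i) + (1 - x) * math.log(1 - p_i)
                   for x, p_i in zip(responses, p))
    return max(class_probs, key=lambda c: loglik(class_probs[c]))

print(ml_classify([1, 1, 1, 0]))
print(ml_classify([0, 0, 1, 0]))
```

With a misspecified model, the `class_probs` used by the classifier differ from the data-generating ones; the paper's question is when the argmax above still recovers the true class asymptotically.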

10.
Bayesian estimation and testing of structural equation models
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, for example, output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model. We thank David Spiegelhalter for suggesting to the first author, at a 1994 workshop in Wiesbaden, that the Gibbs sampler be applied to structural equation models. We thank Ulf Böckenholt, Chris Meek, Marijtje van Duijn, Clark Glymour, Ivo Molenaar, Steve Klepper, Thomas Richardson, Teddy Seidenfeld, and Tom Snijders for helpful discussions, mathematical advice, and critiques of earlier drafts of this paper.
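As a minimal illustration of the Gibbs sampler itself, here it is applied to a standard bivariate normal rather than a SEM; the correlation value and seed are arbitrary. Each full conditional is a univariate normal, so each sweep alternately redraws one coordinate given the other.

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=1):
    # Gibbs sampling for a standard bivariate normal with correlation rho:
    # the full conditionals are x | y ~ N(rho * y, 1 - rho^2) and symmetrically for y
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5
    x, y = 0.0, 0.0
    draws = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        if i >= burn_in:          # discard the warm-up sweeps
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(x for x, _ in draws) / len(draws)
corr_num = sum(x * y for x, y in draws) / len(draws)   # E[xy] = rho here
print(round(mean_x, 2), round(corr_num, 2))
```

The sample moments recover the target's mean (0) and correlation (0.8) regardless of the starting point, which mirrors the abstract's point that the Gibbs sample comes from the correct posterior whatever the shape of the likelihood surface.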

11.
Although multistage testing (MST) retains the advantages of adaptive testing while allowing test developers to assemble each module and panel under specified constraints, local item dependence (LID) among items, which can arise when potentially relevant factors are overlooked during test assembly, can harm MST results. To investigate the harm that LID does to MST, this study first introduces MST, LID, and related concepts. A simulation study then examines the problem; the results show that the presence of LID reduces the precision of examinees' ability estimates, although the estimation bias remains small, and that the harm is not limited to any particular routing rule. To mitigate this harm, a testlet response model was used as the analysis model during MST administration; the results show that this approach removes part of the harm but its effect is limited. These findings indicate, on the one hand, that the harm LID does to the precision of ability estimation in MST deserves attention and, on the other, that methods for eliminating LID-induced harm in MST merit further study.

12.
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good as, and in some cases better than, CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
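The kind of CML failure the abstract refers to can be sketched numerically for the shifted lognormal: the profile log-likelihood grows without bound as the shift parameter approaches the smallest observation, so the "maximum" is a degenerate boundary solution. The response times below are invented.

```python
import math

def profile_loglik_shifted_lognormal(theta, data):
    # profile log-likelihood of a shifted lognormal at shift theta:
    # mu and sigma^2 are replaced by their ML estimates given theta
    logs = [math.log(x - theta) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma2 = sum((l - mu) ** 2 for l in logs) / n
    # additive constants (including -n/2 * log(2*pi)) dropped; they do not depend on theta
    return -0.5 * n * math.log(sigma2) - sum(logs) - 0.5 * n

data = [1.2, 1.5, 1.9, 2.4, 3.0, 4.1]   # hypothetical response times (seconds)
interior = profile_loglik_shifted_lognormal(0.0, data)
near_min = profile_loglik_shifted_lognormal(1.2 - 1e-12, data)
print(interior < near_min)
```

Pushing the shift ever closer to min(x) makes the likelihood arbitrarily large, which is why continuous maximum likelihood is unreliable for parameter-dependent lower bounds and why quantile-based alternatives such as QMP were proposed.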

14.
Group-level variance estimates of zero often arise when fitting multilevel or hierarchical linear models, especially when the number of groups is small. For situations where zero variances are implausible a priori, we propose a maximum penalized likelihood approach to avoid such boundary estimates. This approach is equivalent to estimating variance parameters by their posterior mode, given a weakly informative prior distribution. By choosing the penalty from the log-gamma family with shape parameter greater than 1, we ensure that the estimated variance will be positive. We suggest a default log-gamma(2,λ) penalty with λ→0, which ensures that the maximum penalized likelihood estimate is approximately one standard error from zero when the maximum likelihood estimate is zero, thus remaining consistent with the data while being nondegenerate. We also show that the maximum penalized likelihood estimator with this default penalty is a good approximation to the posterior median obtained under a noninformative prior. Our default method provides better estimates of model parameters and standard errors than the maximum likelihood or the restricted maximum likelihood estimators. The log-gamma family can also be used to convey substantive prior information. In either case—pure penalization or prior information—our recommended procedure gives nondegenerate estimates and in the limit coincides with maximum likelihood as the number of groups increases.
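The boundary-avoidance effect can be sketched in a stripped-down setting: a one-way random-effects model with a known sampling variance for each group mean, invented data, and a grid search over the group-level SD tau. The unpenalized profile likelihood peaks at tau = 0, while adding log(tau), the log-gamma(2, λ) penalty in the λ→0 limit, moves the maximum off the boundary.

```python
import math

# toy one-way random-effects setting: nearly homogeneous group means (hypothetical)
group_means = [0.1, -0.2, 0.15, -0.05]
s2 = 0.5   # known within-group sampling variance of each group mean

def profile_loglik(tau):
    # marginal log-likelihood of the group means, mu fixed at the grand mean;
    # each mean is N(mu, tau^2 + s2)
    mu = sum(group_means) / len(group_means)
    v = tau ** 2 + s2
    return sum(-0.5 * math.log(v) - (m - mu) ** 2 / (2 * v) for m in group_means)

def argmax_on_grid(f, grid):
    return max(grid, key=f)

grid = [i / 1000 for i in range(1, 3001)]   # tau in (0, 3]
tau_ml = argmax_on_grid(profile_loglik, grid + [0.0])
# log-gamma(2, rate -> 0) penalty on tau contributes log(tau), which is -inf at tau = 0
tau_pml = argmax_on_grid(lambda t: profile_loglik(t) + math.log(t), grid)
print(tau_ml, tau_pml)
```

Here the observed spread of the group means is smaller than their sampling variance, so maximum likelihood collapses to tau = 0; the penalized estimate stays strictly positive while remaining consistent with the data.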

15.
Eric Maris, Psychometrika, 1995, 60(4): 523-547
In this paper, some psychometric models will be presented that belong to the larger class of latent response models (LRMs). First, LRMs are introduced by means of an application in the field of componential item response theory (Embretson, 1980, 1984). Second, a general definition of LRMs (not specific for the psychometric subclass) is given. Third, some more psychometric LRMs, and examples of how they can be applied, are presented. Fourth, a method for obtaining maximum likelihood (ML) and some maximum a posteriori (MAP) estimates of the parameters of LRMs is presented. This method is then applied to the conjunctive Rasch model. Fifth and last, an application of the conjunctive Rasch model is presented. This model was applied to responses to typical verbal ability items (open synonym items). This paper presents theoretical and empirical results of a research project supported by the Research Council [Onderzoeksraad] of the University of Leuven (grant number 89-9) to Paul De Boeck and Luc Delbeke.

16.
FOIL Axiomatized     
In an earlier paper, [5], I gave semantics and tableau rules for a simple first-order intensional logic called FOIL, in which both objects and intensions are explicitly present and can be quantified over. Intensions, being non-rigid, are represented in FOIL as (partial) functions from states to objects. Scoping machinery, predicate abstraction, is present to disambiguate sentences like that asserting the necessary identity of the morning and the evening star, which is true in one sense and not true in another. In this paper I address the problem of axiomatizing FOIL. I begin with an interesting sublogic with predicate abstraction and equality but no quantifiers. In [2] this sublogic was shown to be undecidable if the underlying modal logic was at least K4, though it is decidable in other cases. The axiomatization given is shown to be complete for standard logics without a symmetry condition. The general situation is not known. After this an axiomatization for the full FOIL is given, which is straightforward after one makes a change in the point of view. This paper is a version of the invited talk given by the author at the conference Trends in Logic III, dedicated to the memory of A. Mostowski, H. Rasiowa and C. Rauszer, and held in Warsaw and Ruciane-Nida from 23rd to 25th September 2005.

17.
Eiko Isoda, Studia Logica, 1997, 58(3): 395-401
Kripke bundle [3] and C-set semantics [1], [2] are known as semantics which generalize standard Kripke semantics. In [3] and in [1], [2] it is shown that Kripke bundle and C-set semantics are stronger than standard Kripke semantics. It is also true that C-set semantics for superintuitionistic logics is stronger than Kripke bundle semantics [5]. In this paper, we show that Q-S4.1 is not Kripke bundle complete via C-set models. As a corollary we can give a simple proof showing that C-set semantics for modal logics is stronger than Kripke bundle semantics.

18.
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
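A bare-bones version of M-estimation with Huber-type weights can be shown on a simple regression fit by iteratively reweighted least squares, rather than the two-level moderation model of the article; the data, the outlier, and the tuning constant c = 1.345 are illustrative.

```python
def huber_weight(r, c=1.345):
    # full weight for small standardized residuals, downweighted beyond c
    return 1.0 if abs(r) <= c else c / abs(r)

def wls(x, y, w):
    # weighted least squares for intercept a and slope b
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - b * mx, b

def huber_regression(x, y, iters=50):
    w = [1.0] * len(x)
    for _ in range(iters):
        a, b = wls(x, y, w)
        resid = [yi - a - b * xi for xi, yi in zip(x, y)]
        # robust scale: median absolute deviation with consistency factor 1.4826
        s = 1.4826 * sorted(abs(r) for r in resid)[len(resid) // 2]
        w = [huber_weight(r / s) for r in resid]
    return a, b

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0.1, 1.0, 2.1, 2.9, 4.2, 5.0, 6.1, 20.0]   # last point is a gross outlier
a, b = huber_regression(x, y)
print(round(a, 2), round(b, 2))
```

Ordinary least squares is dragged toward the outlier, while the Huber-weighted fit recovers a slope near 1, which is the practical sense in which M-estimators are robust to heavy-tailed errors.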

19.
Estimates about uncertain quantities can be expressed in terms of lower limits (more than X, minimum X), or upper limits (less than Y, maximum Y). It has been shown that lower limit statements generally occur much more often than upper limit statements (Halberg & Teigen, 2009). However, in a conversational context, preferences for upper and lower limit statements will be moderated by the concerns of the interlocutors. We report three studies asking speakers and listeners about their preferences for lower and upper limit statements, in the domains of distances, durations, and prices. It appears that travellers prefer information about maximum distances and maximum durations, and buyers (but not sellers) prefer to be told about maximum prices and maximum delivery times. Mistaken maxima are at the same time regarded as more “wrong” than mistaken minima. However, this preference for “worst case” information is not necessarily shared by providers of information (advisors), who are also concerned about being blamed if wrong.

20.
In the Italian public school system, local dialects are explicitly discouraged and children are pressured to master standard Italian. In this study, 95 southern Italian children were given a series of tasks to determine their level of dialect production and their attitudes toward their local dialect. Production of dialect decreases sharply from the first to the third grade, but then tends to stabilize, with a slight increase in dialect use by fourth and fifth grade boys. Hence the schools have not been entirely successful in eradicating dialect. However, attitude measures indicate that by the third grade children prefer Italian over the dialect at close to the 100% level. The schools have placed many children in a conflict situation, in which they have learned negative attitudes toward their own code but cannot completely master standard Italian. Sex differences may be related to a tendency to view dialect as more masculine. Implications of this study for bidialectical school programs in Italy and the United States are discussed.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号