Similar Documents (20 results)
1.
This study tests between two modern theories of decision making. Rank- and sign-dependent utility (RSDU) models, including cumulative prospect theory (CPT), imply stochastic dominance and two cumulative independence conditions. Configural weight models, with parameters estimated in previous research, predict systematic violations of these properties for certain choices. Experimental data systematically violate all three properties, contrary to RSDU but consistent with configural weight models. This study also tests whether violations of stochastic dominance can be explained by violations of transitivity. Violations of transitivity may be evidence of a dominance detecting mechanism. Although some transitivity violations were observed, most choice triads violated stochastic dominance without violating transitivity. Judged differences between gambles were not consistent with the CPT model. Data were not consistent with the editing principles of cancellation and combination. The main findings are interpreted in terms of coalescing, the principle that equal outcomes can be combined in a gamble by adding their probabilities. RSDU models imply coalescing but configural weight models violate it, allowing configural weighting to explain violations of stochastic dominance and cumulative independence.
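Coalescing and first-order stochastic dominance are mechanical enough to spell out in code. The sketch below only illustrates the two definitions; the function names and the split-form gamble pair are illustrative (in the spirit of the choices used in this literature), not the study's actual stimuli.

```python
def coalesce(gamble):
    """Combine equal outcomes in a gamble by adding their probabilities."""
    out = {}
    for x, p in gamble:
        out[x] = out.get(x, 0.0) + p
    return sorted(out.items())

def fo_dominates(g, h):
    """True if g first-order stochastically dominates h: its CDF is weakly
    lower everywhere on the pooled support and strictly lower somewhere."""
    support = sorted({x for x, _ in g} | {x for x, _ in h})
    cdf = lambda gam, t: sum(p for x, p in gam if x <= t)
    return (all(cdf(g, t) <= cdf(h, t) + 1e-12 for t in support)
            and any(cdf(g, t) < cdf(h, t) - 1e-12 for t in support))

# A split-form pair: G+ splits its lowest branch, G- splits its highest.
g_plus = [(12, 0.05), (14, 0.05), (96, 0.90)]
g_minus = [(12, 0.10), (90, 0.05), (96, 0.85)]
print(fo_dominates(g_plus, g_minus))                   # True: G+ dominates G-
print(coalesce([(12, 0.10), (96, 0.05), (96, 0.85)]))  # -> [(12, 0.1), (96, 0.9)], up to float rounding
```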

2.
Discretized multivariate normal structural models are often estimated using multistage estimation procedures. The asymptotic properties of parameter estimates, standard errors, and tests of structural restrictions on thresholds and polychoric correlations are well known. It was not clear how to assess the overall discrepancy between the contingency table and the model for these estimators. It is shown that the overall discrepancy can be decomposed into a distributional discrepancy and a structural discrepancy. A test of the overall model specification is proposed, as well as a test of the distributional specification (i.e., discretized multivariate normality). Also, the small sample performance of overall, distributional, and structural tests, as well as of parameter estimates and standard errors is investigated under conditions of correct model specification and also under mild structural and/or distributional misspecification. It is found that relatively small samples are needed for parameter estimates, standard errors, and structural tests. Larger samples are needed for the distributional and overall tests. Furthermore, parameter estimates, standard errors, and structural tests are surprisingly robust to distributional misspecification. This research was supported by the Department of Universities, Research and Information Society (DURSI) of the Catalan Government, and by grants BSO2000-0661 and BSO2003-08507 of the Spanish Ministry of Science and Technology.

3.
The attention literature distinguishes two general mechanisms by which attention can benefit performance: gain (or resource) models and orienting (or switching) models. In gain models, processing efficiency is a function of a spatial distribution of capacity or resources; in orienting models, an attentional spotlight must be aligned with the stimulus location, and processing efficiency is a function of when this occurs. Although they involve different processing mechanisms, these models are difficult to distinguish empirically. We compared performance with abrupt-onset and no-onset Gabor patch stimuli in a cued detection task in which we obtained distributions of reaction time (RT) and accuracy as a function of stimulus contrast. In comparison to abrupt-onset stimuli, RTs to miscued no-onset stimuli were increased and accuracy was reduced. Modeling the data with the integrated system model of Philip L. Smith and Roger Ratcliff (2009) provided evidence for reallocation of processing resources during the course of a trial, consistent with an orienting account. Our results support a view of attention in which processing efficiency depends on a dynamic spatiotemporal distribution of resources that has both gain and orienting properties.

4.
In intervention studies having multiple outcomes, researchers often use a series of univariate tests (e.g., ANOVAs) to assess group mean differences. Previous research found that this approach properly controls Type I error and generally provides greater power compared to MANOVA, especially under realistic effect size and correlation combinations. However, when group differences are assessed for a specific outcome, these procedures are strictly univariate and do not consider the outcome correlations, which may be problematic with missing outcome data. Linear mixed or multivariate multilevel models (MVMMs), implemented with maximum likelihood estimation, present an alternative analysis option where outcome correlations are taken into account when specific group mean differences are estimated. In this study, we use simulation methods to compare the performance of separate independent samples t tests estimated with ordinary least squares and analogous t tests from MVMMs to assess two-group mean differences with multiple outcomes under small sample and missingness conditions. Study results indicated that a MVMM implemented with restricted maximum likelihood estimation combined with the Kenward–Roger correction had the best performance. Therefore, for intervention studies with small N and normally distributed multivariate outcomes, the Kenward–Roger procedure is recommended over traditional methods and conventional MVMM analyses, particularly with incomplete data.
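The contrast between the two analysis routes can be sketched on toy data: separate per-outcome t tests versus one mixed model over the stacked outcomes, where subjects with a missing outcome still contribute their observed one. All variable names are invented. Note that Python's statsmodels offers ML/REML estimation but not the Kenward–Roger small-sample correction the study recommends (in R, see lme4 with pbkrtest for that), so this shows the model structure only.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 30
wide = pd.DataFrame({"subject": np.arange(n), "group": np.repeat([0, 1], n // 2)})
# two correlated outcomes with a small group effect on each
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
wide["y1"] = z[:, 0] + 0.5 * wide["group"]
wide["y2"] = z[:, 1] + 0.5 * wide["group"]
wide.loc[rng.random(n) < 0.2, "y2"] = np.nan      # MCAR missingness on y2

# Route 1: separate univariate t tests, listwise deletion per outcome
for y in ["y1", "y2"]:
    a = wide.loc[wide.group == 0, y].dropna()
    b = wide.loc[wide.group == 1, y].dropna()
    print(y, stats.ttest_ind(a, b))

# Route 2: one mixed model over the stacked (long-format) outcomes,
# subject as the clustering unit, so outcome correlation is used
long = wide.melt(id_vars=["subject", "group"], value_vars=["y1", "y2"],
                 var_name="outcome", value_name="y").dropna()
mvmm = smf.mixedlm("y ~ group * outcome", long, groups="subject").fit(reml=True)
print(mvmm.summary())
```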

5.
Almost all models of response time (RT) use a stochastic accumulation process. To account for the benchmark RT phenomena, researchers have found it necessary to include between-trial variability in the starting point and/or the rate of accumulation, both in linear (R. Ratcliff & J. N. Rouder, 1998) and nonlinear (M. Usher & J. L. McClelland, 2001) models. The authors show that a ballistic (deterministic within-trial) model using a simplified version of M. Usher and J. L. McClelland's (2001) nonlinear accumulation process with between-trial variability in accumulation rate and starting point is capable of accounting for the benchmark behavioral phenomena. The authors successfully fit their model to R. Ratcliff and J. N. Rouder's (1998) data, which exhibit many of the benchmark phenomena.  相似文献   
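A minimal sketch of the ballistic idea: within a trial the accumulation is deterministic (here a leaky race, dx/dt = v - kx, Euler-integrated), and all variability comes from trial-to-trial draws of rate and starting point. The parameter values, and the omission of lateral inhibition from the Usher–McClelland dynamics, are illustrative simplifications, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(v_mean=(1.5, 1.2), v_sd=0.3, z_max=0.3, k=1.0,
          threshold=0.8, dt=0.001, t_max=3.0):
    v = rng.normal(v_mean, v_sd)          # between-trial rate variability
    x = rng.uniform(0.0, z_max, size=2)   # between-trial start-point variability
    t = 0.0
    while t < t_max:
        x = x + (v - k * x) * dt          # deterministic (ballistic) leaky growth
        t += dt
        if x.max() >= threshold:
            return t, int(np.argmax(x))   # decision time, winning accumulator
    return t_max, int(np.argmax(x))       # timed-out trial

sims = [trial() for _ in range(2000)]
rts = np.array([s[0] for s in sims])
choices = np.array([s[1] for s in sims])
print("P(accumulator 0 wins):", (choices == 0).mean())
print("RT quantiles (s):", np.quantile(rts, [0.1, 0.5, 0.9]).round(3))
```

Despite having no within-trial noise, the race still produces right-skewed RT distributions and a speed–accuracy relationship, which is the paper's central point.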

6.
Sophisticated senator and legislative onion. Whether or not you have ever heard of these things, we all have some intuition that one of them makes much less sense than the other. In this paper, we introduce a large dataset of human judgments about novel adjective‐noun phrases. We use these data to test an approach to semantic deviance based on phrase representations derived with compositional distributional semantic methods, that is, methods that derive word meanings from contextual information, and approximate phrase meanings by combining word meanings. We present several simple measures extracted from distributional representations of words and phrases, and we show that they have a significant impact on predicting the acceptability of novel adjective‐noun phrases even when a number of alternative measures classically employed in studies of compound processing and bigram plausibility are taken into account. Our results show that the extent to which an attributive adjective alters the distributional representation of the noun is the most significant factor in modeling the distinction between acceptable and deviant phrases. Our study extends current applications of compositional distributional semantic methods to linguistically and cognitively interesting problems, and it offers a new, quantitatively precise approach to the challenge of predicting when humans will find novel linguistic expressions acceptable and when they will not.
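The key measure, how much the adjective shifts the noun's representation, is easy to illustrate. The vectors below are made up; the study's vectors come from corpus co-occurrence statistics, and its composition functions are more sophisticated than the simple addition used here.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(2)
dim = 50
noun = rng.random(dim)
adj_related = 0.5 * noun + 0.2 * rng.random(dim)   # adjective compatible with the noun
adj_unrelated = rng.random(dim)                    # adjective from an alien domain

for label, adj in [("plausible pair", adj_related),
                   ("anomalous pair", adj_unrelated)]:
    phrase = noun + adj                            # simple additive composition
    print(f"{label}: cos(noun, phrase) = {cosine(noun, phrase):.3f}")
# Prediction: deviant phrases shift the noun's representation more,
# i.e., show a lower noun-phrase cosine.
```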

7.
The multinomial (Dirichlet) model, derived from de Finetti's concept of exchangeability, is proposed as a general Bayesian framework to test axioms on data, in particular, deterministic axioms characterizing theories of choice or measurement. For testing, the proposed framework does not require a deterministic axiom to be cast in a probabilistic form (e.g., casting deterministic transitivity as weak stochastic transitivity). The generality of this framework is demonstrated through empirical tests of 16 different axioms, including transitivity, consequence monotonicity, segregation, additivity of joint receipt, stochastic dominance, coalescing, restricted branch independence, double cancellation, triple cancellation, and the Thomsen condition. The model generalizes many previously proposed methods of axiom testing under measurement error, is analytically tractable, and provides a Bayesian framework for the random relation approach to probabilistic measurement (J. Math. Psychol. 40 (1996) 219). A hierarchical and nonparametric generalization of the model is discussed.
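The mechanics in miniature: put a conjugate prior on the choice probabilities, sample from the posterior, and estimate the posterior probability that the parameters fall in an axiom-defined region. For brevity, this sketch checks weak stochastic transitivity on three invented binary-choice counts; note the paper's point is precisely that its framework can test the deterministic axiom directly, without this probabilistic recasting, so only the posterior Monte Carlo machinery is shown.

```python
import numpy as np

rng = np.random.default_rng(3)
# invented counts: (times the row option was chosen, total trials)
data = {"A>B": (18, 20), "B>C": (15, 20), "A>C": (11, 20)}

draws = 100_000
post = {k: rng.beta(1 + w, 1 + n - w, size=draws)   # Beta posterior, uniform prior
        for k, (w, n) in data.items()}

# axiom region: (p_AB >= .5 and p_BC >= .5) implies p_AC >= .5
holds = ~((post["A>B"] >= 0.5) & (post["B>C"] >= 0.5)) | (post["A>C"] >= 0.5)
print("posterior P(axiom region) ~", holds.mean())
```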

8.
Both the speed and accuracy of responding are important measures of performance. A well-known interpretive difficulty is that participants may differ in their strategy, trading speed for accuracy, with no change in underlying competence. Another difficulty arises when participants respond slowly and inaccurately (rather than quickly but inaccurately), e.g., due to a lapse of attention. We introduce an approach that combines response time and accuracy information and addresses both situations. The modeling framework assumes two latent competing processes. The first, the error-free process, always produces correct responses. The second, the guessing process, results in all observed errors and some of the correct responses (but does so via non-specific processes, e.g., guessing in compliance with instructions to respond on each trial). Inferential summaries of the speed of the error-free process provide a principled assessment of cognitive performance reducing the influences of both fast and slow guesses. Likelihood analysis is discussed for the basic model and extensions. The approach is applied to a data set on response times in a working memory test. The authors wish to thank Roger Ratcliff, Christopher Chabris, and three anonymous referees for their helpful comments, and Aureliu Lavric for providing the data analyzed in this paper.
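A minimal mixture likelihood in the spirit of this framework: each trial comes from the error-free process (always correct) with probability 1 − λ, or from the guessing process (correct with probability g) with probability λ. The lognormal finishing-time densities and all parameter values are illustrative choices, not the paper's specification.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_log_lik(params, rt, correct):
    lam, g, mu_f, sig_f, mu_q, sig_q = params   # lam = P(guess), g = P(correct | guess)
    if not (0 < lam < 1 and 0 <= g <= 1 and sig_f > 0 and sig_q > 0):
        return np.inf
    f = stats.lognorm.pdf(rt, sig_f, scale=np.exp(mu_f))   # error-free RT density
    q = stats.lognorm.pdf(rt, sig_q, scale=np.exp(mu_q))   # guessing RT density
    lik = np.where(correct,
                   (1 - lam) * f + lam * g * q,   # correct: either process
                   lam * (1 - g) * q)             # error: guessing only
    return -np.sum(np.log(np.maximum(lik, 1e-300)))

# toy data and a maximum-likelihood fit
rng = np.random.default_rng(4)
n = 500
is_guess = rng.random(n) < 0.2
rt = np.where(is_guess,
              rng.lognormal(-0.2, 0.6, n),   # guesses: slower, more variable
              rng.lognormal(-0.7, 0.3, n))   # error-free: faster, tighter
correct = np.where(is_guess, rng.random(n) < 0.5, True)

res = minimize(neg_log_lik, x0=[0.3, 0.5, -0.5, 0.4, 0.0, 0.5],
               args=(rt, correct), method="Nelder-Mead")
print("estimated (lam, g, mu_f, sig_f, mu_q, sig_q):", res.x.round(2))
```

The summary of interest is then the fitted error-free distribution (mu_f, sig_f), which is insulated from both fast and slow guesses.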

9.
Because reaction time (RT) tasks are generally repetitive and temporally regular, participants may use timing strategies that affect response speed and accuracy. This hypothesis was tested in 3 serial choice RT experiments in which participants were presented with stimuli that sometimes arrived earlier or later than normal. RTs increased and errors decreased when stimuli came earlier than normal, and RTs decreased and errors increased when stimuli came later than normal. The results were consistent with an elaboration of R. Ratcliff's diffusion model (R. Ratcliff, 1978; R. Ratcliff & J. N. Rouder, 1998; R. Ratcliff, T. Van Zandt, & G. McKoon, 1999), supplemented by a hypothesis developed by D. Laming (1979a, 1979b), according to which participants initiate stimulus sampling before the onset of the stimulus at a time governed by an internal timekeeper. The success of this model suggests that timing is used in the service of decision making.

10.
The diffusion model (Ratcliff, 1978) allows for the statistical separation of different components of a speeded binary decision process (decision threshold, bias, information uptake, and motor response). These components are represented by different parameters of the model. Two experiments were conducted to test the interpretational validity of the parameters. Using a color discrimination task, we investigated whether experimental manipulations of specific aspects of the decision process had specific effects on the corresponding parameters in a diffusion model data analysis (see Ratcliff, 2002; Ratcliff & Rouder, 1998; Ratcliff, Thapar, & McKoon, 2001, 2003). In support of the model, we found that (1) decision thresholds were higher when we induced accuracy motivation, (2) drift rates (i.e., information uptake) were lower when stimuli were harder to discriminate, (3) the motor components were increased when a more difficult form of response was required, and (4) the process was biased toward rewarded responses.
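The parameter-to-manipulation mapping is easy to see in an Euler–Maruyama simulation of the underlying Wiener diffusion: drift v (information uptake), boundary separation a (threshold), relative start point z (bias), and a non-decision time t0 for encoding and motor stages. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def diffuse(v=0.3, a=1.0, z=0.5, t0=0.3, s=1.0, dt=0.001, t_max=5.0):
    x, t = z * a, 0.0
    while 0.0 < x < a and t < t_max:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t0 + t, x >= a              # RT and upper-boundary response

for label, kw in [("easy stimulus (high drift)", dict(v=0.6)),
                  ("hard stimulus (low drift)", dict(v=0.1)),
                  ("accuracy instructions (wide boundary)", dict(v=0.3, a=1.6))]:
    sims = [diffuse(**kw) for _ in range(1000)]
    rt = np.array([r for r, _ in sims])
    up = np.array([u for _, u in sims])
    print(f"{label}: P(upper) = {up.mean():.2f}, median RT = {np.median(rt):.3f} s")
```

Lower drift slows responses and lowers accuracy; a wider boundary raises accuracy at the cost of speed, mirroring findings (1) and (2) above.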

11.
Subitizing: Magical numbers or mere superstition?
It is widely believed that humans are endowed with a specialized numerical process, called subitizing, which enables them to apprehend rapidly and accurately the numerosity of small sets of objects. A major part of the evidence for this process is a purported discontinuity in the mean response time (RT) versus numerosity curves at about 4 elements, when subjects enumerate up to 7 or more elements in a visual display. In this article, RT data collected in a speeded enumeration experiment are subjected to a variety of statistical analyses, including several tests on the RT distributions. None of these tests reveals a significant discontinuity as numerosity increases. The data do suggest a strong stochastic dominance in RT by display numerosity, indicating that the mental effort required to enumerate does increase with each additional element in the display, both within and beyond the putative subitizing range.
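The dominance claim can be checked directly on empirical CDFs: for every numerosity n, F_n(t) should lie above F_{n+1}(t) at every t, with no crossover. The data below are simulated stand-ins, built so each added element contributes a positive time cost; the check itself is the point.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials = 400
base = rng.normal(300, 40, size=n_trials)        # ms; non-enumeration time
steps = rng.exponential(90, size=(n_trials, 7))  # per-element enumeration costs
rts = {n: base + steps[:, :n].sum(axis=1) for n in range(1, 8)}

grid = np.linspace(200, 2000, 200)
ecdf = {n: (rts[n][:, None] <= grid).mean(axis=0) for n in rts}
for n in range(1, 7):
    print(f"F_{n}(t) >= F_{n+1}(t) on the whole grid:",
          bool(np.all(ecdf[n] >= ecdf[n + 1])))
# Dominance at every numerosity step, including 1-4, is what the article
# reports: effort grows per element with no special break at 4.
```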

12.
Conflict and the Stochastic-Dominance Principle of Decision Making
One of the key principles underlying rational models of decision making is the idea that the decision maker should never choose an action that is stochastically dominated by another action. In the study reported in this article, violations of stochastic dominance frequently occurred when the payoffs produced by two actions were negatively correlated (in conflict), but no violations occurred when the payoffs were positively correlated (no conflict). This finding is contrary to models which assume that choice probability depends on the utility of each action, and the utility for an action depends solely on its own payoffs and probabilities. This article also reports, for the first time ever, the distribution of response times observed in a risky decision task. Both the violations of stochastic dominance and the response time distributions are explained in terms of a dynamic theory of decision making called multiattribute decision field theory.
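The setup in miniature, with invented payoffs: two actions defined over the same equally likely states, where A first-order dominates B marginally in both cases. What differs is only whether the payoffs co-vary positively or negatively across states. Because utility-based models evaluate each action from its own marginal payoff distribution, they assign identical choice probabilities to both cases and cannot produce the observed asymmetry.

```python
import numpy as np

p = np.full(4, 0.25)                              # four equally likely states
B = np.array([1, 2, 4, 5])
cases = {"no conflict": np.array([2, 3, 5, 6]),   # A's payoffs rise with B's
         "conflict":    np.array([6, 5, 3, 2])}   # same marginal, reversed pairing

def fo_dominates(a, b):
    """First-order stochastic dominance of a over b (marginals only)."""
    grid = np.union1d(a, b)
    cdf = lambda x, t: p[x <= t].sum()
    return all(cdf(a, t) <= cdf(b, t) for t in grid)

for label, A in cases.items():
    r = np.corrcoef(A, B)[0, 1]
    print(f"{label}: corr(A, B) = {r:+.0f}, A dominates B: {fo_dominates(A, B)}")
```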

13.
Testing Critical Properties of Decision Making on the Internet

14.
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement invariance based on stochastic processes of casewise derivatives of the likelihood function. These tests can be viewed as generalizations of the Lagrange multiplier test, and they are especially useful for: (i) identifying subgroups of individuals that violate measurement invariance along a continuous auxiliary variable without prespecified thresholds, and (ii) identifying specific parameters impacted by measurement invariance violations. The tests are presented and illustrated in detail, including an application to a study of stereotype threat and simulations examining the tests’ abilities in controlled conditions.
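The core of such a score-based test can be shown with a one-parameter toy model (a normal mean): order cases by the auxiliary variable, cumulate the casewise score contributions of the fitted parameter, scale, and compare the maximum excursion to a Brownian-bridge critical value (1.358 is the asymptotic 5% value for sup|B(t)|). Everything here is a simplified illustration, not the paper's factor-model machinery.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
age = np.sort(rng.uniform(20, 70, n))         # auxiliary ordering variable
y = rng.normal(0, 1, n) + 0.4 * (age > 50)    # mean shifts for older cases

mu_hat = y.mean()                             # ML fit ignoring age
scores = (y - mu_hat) / y.var()               # casewise d(loglik)/d(mu)
info = 1.0 / y.var()                          # Fisher information per case
process = np.cumsum(scores) / np.sqrt(n * info)   # scaled cumulative score
stat = np.abs(process).max()
print(f"sup |process| = {stat:.3f}  (asymptotic 5% critical value ~ 1.358)")
# A large excursion flags the violation, and its peak location indicates
# roughly where along the auxiliary variable the parameter shifts --
# no prespecified grouping threshold required.
```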

15.
Successive durations of binocular rivalry are sequentially independent, random variables. To explore the underlying control process, we perturbed the cycle during a 30-sec viewing period by immediately forcing an eye to return to dominance whenever it became suppressed. During this period of forced dominance, that eye's individual dominance durations were unusually brief, but immediately following the period of forced dominance that eye's suppression durations were unusually long. However, no long-term change in the sequential pattern of rivalry occurred, and the stochastic independence of successive durations was maintained during and following the period of forced dominance. The same pattern of results was obtained with even longer periods of forced dominance. These results are consistent with the existence of a short-term adaptation, or fatigue, process responsible for transitions from dominance to suppression.

16.
This note concerns two issues left unresolved in our study of lexicographic‐order preservation and stochastic dominance in settings where preferences are represented by utility vectors, ordered lexicographically, and judgements emerge as matrices that premultiply utility vectors in expected utility sums. First, a generalization of the ‘Conjecture Σ’, which implied transitivity of a stochastic dominance relation under non‐vacuous resolution‐level information, is proved. Second, this paper comments on using resolution‐level information in higher as well as in first degree stochastic dominance analysis. Copyright © 1999 John Wiley & Sons, Ltd.

17.
Using diffusion models to understand clinical disorders
Sequential sampling models provide an alternative to traditional analyses of reaction times and accuracy in two-choice tasks. These models are reviewed with particular focus on the diffusion model (Ratcliff, 1978) and how its application can aid research on clinical disorders. The advantages of a diffusion model analysis over traditional comparisons are shown through simulations and a simple lexical decision experiment. Application of the diffusion model to a clinically relevant topic is demonstrated through an analysis of data from nonclinical participants with high- and low-trait anxiety in a recognition memory task. The model showed that after committing an error, participants with high-trait anxiety responded more cautiously by increasing their boundary separation, whereas participants with low-trait anxiety did not. The article concludes with suggestions for ways to improve and broaden the application of these models to studies of clinical disorders.

18.
The distribution of an ordinal response can be modelled as a grouping of an underlying quantitative variable whose mean is a linear function of explanatory variables. Possible distributional assumptions about the underlying quantitative response are compared. An iteratively reweighted least squares algorithm for parameter estimation in these models is described in detail and variances and tests of hypotheses are given. Two data sets are analysed to illustrate the methods.
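This is the standard cumulative-link (ordered) regression setup: a latent continuous response cut at estimated thresholds. A sketch on invented data using statsmodels' OrderedModel, comparing two distributional assumptions for the latent variable; note statsmodels fits by generic likelihood maximization, so this shows the model, not the paper's iteratively reweighted least squares algorithm.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(8)
n = 500
x = rng.normal(size=n)
latent = 0.8 * x + rng.normal(size=n)    # underlying quantitative response
y = pd.Series(pd.cut(latent, [-np.inf, -1, 0, 1, np.inf],
                     labels=["low", "mid-low", "mid-high", "high"],
                     ordered=True))      # observed grouping into 4 categories

for distr in ["probit", "logit"]:        # alternative latent distributions
    fit = OrderedModel(y, x[:, None], distr=distr).fit(method="bfgs", disp=False)
    print(distr, "slope =", fit.params[0].round(3), " AIC =", round(fit.aic, 1))
```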

19.
Lexical ambiguity—the phenomenon of a single word having multiple, distinguishable senses—is pervasive in language. Both the degree of ambiguity of a word (roughly, its number of senses) and the relatedness of those senses have been found to have widespread effects on language acquisition and processing. Recently, distributional approaches to semantics, in which a word's meaning is determined by its contexts, have led to successful research quantifying the degree of ambiguity, but these measures have not distinguished between the ambiguity of words with multiple related senses versus multiple unrelated meanings. In this work, we present the first assessment of whether distributional meaning representations can capture the ambiguity structure of a word, including both the number and relatedness of senses. On a very large sample of English words, we find that some, but not all, distributional semantic representations that we test exhibit detectable differences between sets of monosemes (unambiguous words; N = 964), polysemes (with multiple related senses; N = 4,096), and homonyms (with multiple unrelated senses; N = 355). Our findings begin to answer open questions from earlier work regarding whether distributional semantic representations of words, which successfully capture various semantic relationships, also reflect fine-grained aspects of meaning structure that influence human behavior. Our findings emphasize the importance of measuring whether proposed lexical representations capture such distinctions: In addition to standard benchmarks that test the similarity structure of distributional semantic models, we need to also consider whether they have cognitively plausible ambiguity structure.

20.
In multi‐attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of and adapt such methods to deal with weight intervals, weights fitting independent normal probability distributions or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology dealing with incomplete information on weights, the stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one. Copyright © 2012 John Wiley & Sons, Ltd.
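The SMAA side of this comparison reduces to a simple Monte Carlo loop: sample weight vectors from the feasible weight region and record how often each alternative comes out best (its rank-1 acceptability index). The sketch below samples the whole simplex; interval, normal, or fuzzy weight information would restrict or reshape the sampling. The score matrix is invented.

```python
import numpy as np

rng = np.random.default_rng(9)
scores = np.array([[0.7, 0.2, 0.9],    # alternatives (rows) x criteria (columns)
                   [0.5, 0.8, 0.4],
                   [0.6, 0.6, 0.6]])
n_alt, n_crit = scores.shape

draws = 100_000
w = rng.dirichlet(np.ones(n_crit), size=draws)   # uniform over the weight simplex
best = np.argmax(w @ scores.T, axis=1)           # winner under each sampled weight
acceptability = np.bincount(best, minlength=n_alt) / draws
for i, a in enumerate(acceptability):
    print(f"alternative {i}: rank-1 acceptability = {a:.3f}")
```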
