Similar Documents (20 results)
1.
Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes different choices in nearly identical situations, or why the magnitude of these inconsistencies varies in different situations. To illustrate the advantage of probabilistic theories, three probabilistic theories of decision making under risk are compared with their deterministic counterparts. The probabilistic theories are (a) a probabilistic version of a simple choice heuristic, (b) a probabilistic version of cumulative prospect theory, and (c) decision field theory. When the theories are tested with data from three experimental studies, the superiority of the probabilistic models over their deterministic counterparts in predicting people's decisions under risk becomes evident. When the probabilistic theories are tested against each other, decision field theory provides the best account of the observed behavior.

2.
Previous work comparing pricing decisions by buyers and sellers has primarily focused on the endowment effect, the phenomenon that selling prices exceed buying prices. Here, we examine whether pricing decisions by buyers and sellers also vary in sensitivity to differences between objects' expected values (EVs). Both a loss‐aversion account (which posits that losses are weighted more heavily than gains) and a loss‐attention account (which posits increased attention to a task when it involves possible losses) predict that pricing decisions by sellers should exhibit higher sensitivity. The latter, however, additionally predicts that this pattern should only emerge under certain conditions. In studies 1 and 2, we reanalyzed two published datasets in which participants priced monetary lotteries as sellers or buyers. It emerged that sellers showed greater EV sensitivity (defined as the rank correlation between the set price for each lottery and its EV) except in a condition with an extended deliberation time of 15 seconds. In study 3, the buyer–seller difference in EV sensitivity was replicated even when the pricing task was presented repeatedly, while in study 4, it was eliminated when buying and selling trials were randomly mixed. The reduction of the "seller's sense" in long deliberation and mixed trials settings supports an attentional resource‐based account of the differences between sellers and buyers in their EV sensitivity. Copyright © 2016 John Wiley & Sons, Ltd.
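The abstract defines EV sensitivity operationally as the rank correlation between each lottery's set price and its EV. A minimal sketch of that measure (Spearman's rho, computed from scratch; the lotteries and prices below are hypothetical, not data from the studies):

```python
def ranks(xs):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def ev_sensitivity(prices, evs):
    """Spearman rank correlation between set prices and lottery EVs."""
    rp, re = ranks(prices), ranks(evs)
    n = len(prices)
    mp, me = sum(rp) / n, sum(re) / n
    num = sum((a - mp) * (b - me) for a, b in zip(rp, re))
    den = (sum((a - mp) ** 2 for a in rp)
           * sum((b - me) ** 2 for b in re)) ** 0.5
    return num / den

# Hypothetical lotteries: EVs and one participant's prices.
evs = [1.0, 2.5, 4.0, 6.0, 9.0]
seller_prices = [1.2, 2.0, 4.5, 5.5, 8.0]  # tracks the EV ordering exactly
buyer_prices = [2.0, 1.5, 3.0, 2.8, 4.0]   # noisier ordering
print(ev_sensitivity(seller_prices, evs))
print(ev_sensitivity(buyer_prices, evs))
```

Because the measure is rank-based, it rewards getting the ordering of lotteries right rather than the absolute price level, which is why buyers and sellers can be compared despite the endowment-effect gap in their price levels.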

3.
Brian M. Scott. Synthese, 1996, 109(2), 281-289.
Recently Samuel Richmond, generalizing Nelson Goodman, has proposed a measure of the simplicity of a theory that takes into account not only the polymorphicity of its models but also their internal homogeneity. By this measure a theory is simple if small subsets of its models exhibit only a few distinct (i.e., non-isomorphic) structures. Richmond shows that his measure, unlike that given by Goodman's theory of simplicity of predicates, orders the order relations in an intuitively satisfactory manner. In this note I formalize his presentation and suggest an improvement designed to overcome certain technical difficulties.

4.
Humans perceive and reproduce short intervals of time (e.g. 1-60 s) relatively accurately, and are capable of timing multiple overlapping intervals if these intervals are presented in different modalities [e.g., Rousseau, L., & Rousseau, R. (1996). Stop-reaction time and the internal clock. Perception and Psychophysics, 58(3), 434-448]. Tracking multiple intervals can be explained either by assuming multiple internal clocks or by strategic arithmetic using a single clock. The underlying timescale (linear or nonlinear) qualitatively influences the predictions derived from these accounts, as assuming a nonlinear timescale introduces systematic errors in added or subtracted intervals. Here, we present two experiments that provide support for a single clock combined with a nonlinear underlying timescale. When two equal but partly overlapping time intervals had to be estimated, the second estimate was positively correlated with the stimulus onset asynchrony. This effect was also found in a second experiment with unequal intervals that showed evidence of subtraction of intervals. The findings were supported by computational models implemented in a previously validated account of interval timing [Taatgen, N. A., Van Rijn, H., & Anderson, J. R. (2007). An integrated theory of prospective time interval estimation: The role of cognition, attention and learning. Psychological Review, 114(3), 577-598].
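The key claim above is that arithmetic on a nonlinear internal timescale is systematically biased. A toy illustration of why, assuming a logarithmic encoding (the specific function and the size of the bias here are illustrative; the actual model in Taatgen et al., 2007, differs):

```python
import math

def encode(t):
    """Hypothetical compressive (logarithmic) internal timescale."""
    return math.log(1 + t)

def decode(s):
    """Inverse of encode, mapping a clock reading back to seconds."""
    return math.expm1(s)

def estimated_remainder(total, elapsed):
    """Time a second, overlapping interval by subtracting two clock
    readings on the compressed scale, then decoding the difference.
    Algebraically this yields (total - elapsed) / (1 + elapsed),
    a systematic underestimate of the true remainder."""
    return decode(encode(total) - encode(elapsed))

# True remaining interval is 20 - 5 = 15 s, but the decoded
# difference is much shorter.
print(estimated_remainder(20, 5))
```

On a linear timescale the subtraction would be exact; the bias exists only because differences taken on a compressed scale do not decode back to differences in real time, which is what lets the experiments discriminate the two timescale hypotheses.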

5.
6.
A classic question in cognitive psychology concerns the nature of memory search in short-term recognition. Despite its long history of investigation, however, there is still no consensus on whether memory search takes place serially or in parallel or is based on global access. In the present investigation, we formalize a variety of models designed to account for detailed response time distribution data in the classic Sternberg (Science 153: 652-654, 1966) memory-scanning task. The models vary in their mental architectures (serial exhaustive, parallel self-terminating, and global access). Furthermore, the component processes within the architectures that make match/mismatch decisions are formalized as linear ballistic accumulators (LBAs). In fast presentation rate conditions, the parallel and global access models provide far better accounts of the data than does the serial model. LBA drift rates are found to depend almost solely on the lag between study items and test probes, whereas response thresholds change with memory set size. Under slow presentation rate conditions, even simple versions of the serial-exhaustive model provide accounts of the data that are as good as those of the parallel and global access models. We provide alternative interpretations of the results in our General Discussion.

7.
Individuals with agrammatic Broca's aphasia experience difficulty when processing reversible non‐canonical sentences. Different accounts have been proposed to explain this phenomenon. The Trace Deletion account (Grodzinsky, 1995, 2000, 2006) attributes this deficit to an impairment in syntactic representations, whereas others (e.g., Caplan, Waters, Dede, Michaud, & Reddy, 2007; Haarmann, Just, & Carpenter, 1997) propose that the underlying structural representations are unimpaired, but sentence comprehension is affected by processing deficits, such as slow lexical activation, reduction in memory resources, slowed processing and/or intermittent deficiency, among others. We test the claims of two processing accounts, slowed processing and intermittent deficiency, and two versions of the Trace Deletion Hypothesis (TDH), in a computational framework for sentence processing (Lewis & Vasishth, 2005) implemented in ACT‐R (Anderson, Byrne, Douglass, Lebiere, & Qin, 2004). The assumption of slowed processing is operationalized as slow procedural memory, so that each processing action is performed slower than normal, and intermittent deficiency as extra noise in the procedural memory, so that the parsing steps are more noisy than normal. We operationalize the TDH as an absence of trace information in the parse tree. To test the predictions of the models implementing these theories, we use the data from a German sentence—picture matching study reported in Hanne, Sekerina, Vasishth, Burchert, and De Bleser (2011). The data consist of offline (sentence‐picture matching accuracies and response times) and online (eye fixation proportions) measures. From among the models considered, the model assuming that both slowed processing and intermittent deficiency are present emerges as the best model of sentence processing difficulty in aphasia. 
The modeling of individual differences suggests that, if we assume that patients have both slowed processing and intermittent deficiency, they have them in differing degrees.

8.
9.
A hallmark feature of elemental associative learning theories is that multiple cues compete for associative strength when presented with an outcome. Cue competition effects have been observed in humans, both in forward and in backward blocking procedures (e.g., Shanks, 1985) and are often interpreted as evidence for an associative account of human causal learning (e.g., Shanks & Dickinson, 1987). Waldmann and Holyoak (1992), however, demonstrated that cue competition only occurs in predictive, and not diagnostic, learning paradigms. While unexplainable from an associative perspective, this asymmetry readily follows from structural considerations of causal model theory. In this paper, we show that causal models determine the extent of cue competition not only in forward but also in backward blocking designs. Implications for associative and inferential accounts of causal learning are discussed.

10.
How do people learn to allocate resources? To answer this question, 2 major learning models are compared, each incorporating different learning principles. One is a global search model, which assumes that allocations are made probabilistically on the basis of expectations formed through the entire history of past decisions. The 2nd is a local adaptation model, which assumes that allocations are made by comparing the present decision with the most successful decision up to that point, ignoring all other past decisions. In 2 studies, participants repeatedly allocated a capital resource to 3 financial assets. Substantial learning effects occurred, although the optimal allocation was often not found. From the calibrated models of Study 1, a priori predictions were derived and tested in Study 2. This generalization test shows that the local adaptation model provides a better account of learning in resource allocations than the global search model.

11.
This paper discusses differences between prospect theory and cumulative prospect theory. It shows that cumulative prospect theory is not merely a formal correction of some theoretical problems in prospect theory, but it also gives different predictions. Some experiments by Lola Lopes are re-analyzed, and are demonstrated to favor cumulative prospect theory over prospect theory. It turns out that the mathematical form of cumulative prospect theory is well suited for modeling the psychological phenomenon of diminishing sensitivity. © 1997 John Wiley & Sons, Ltd.

12.
The purpose of the present experiments was to investigate the generation of conscious awareness (i.e., verbal report) in an incidental learning situation. While the single-system account assumes that all markers of learning, verbal or nonverbal, index the same underlying knowledge representation, multiple-systems accounts grant verbal report a special status as a marker of learning because they assume that the nonverbal and verbal effects of learning rely on different memory representations. We tested these two accounts in two experiments in which we held the amount of learning in the nonverbal memory system constant while manipulating independent variables aimed at affecting learning in the declarative system. The results of both experiments revealed significant differences in verbal report between experimental conditions, but no significant differences in response times. Overall, these results provide clear evidence in favor of the multiple-systems account.

13.
Acquisition of conditioned responding is thought to be determined by the number of pairings of a conditioned stimulus (CS) and an unconditioned stimulus (US). However, it is possible that acquisition is primarily determined not by the number of trials but rather by quantities that often correlate with the number of trials, such as cumulative intertrial interval (ITI) and the number of sessions. Four experiments examined whether the number of trials has an effect on acquisition of conditioned responding, once cumulative ITI and number of sessions are equated. Results of the experiments with rats and mice favor the hypothesis that over an eightfold range, variation in number of CS-US pairings has little effect. It is suggested that learning curves might more accurately be plotted across cumulative ITI or number of sessions, and not across number of trials. Results pose a challenge to trial-centered accounts of conditioning, as demonstrated by simulations of the Rescorla-Wagner model, a simplified version of Wagner's standard operating procedure model (SOP), and Stout & Miller's sometimes competing retrieval model (SOCR). A time-centered account, rate estimation theory (RET), predicts the main finding but has trouble with other aspects of the learning process more easily accommodated by trial-centered models.
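The Rescorla-Wagner model named above is the canonical trial-centered account: associative strength changes only when a trial occurs, so the learning curve is a function of trial count alone. A minimal single-cue sketch of that update (learning rates folded into one alpha; parameter values are illustrative, not those used in the paper's simulations):

```python
def rescorla_wagner(n_trials, alpha=0.3, lam=1.0, v0=0.0):
    """Associative strength V after repeated CS-US pairings.
    Each trial applies dV = alpha * (lam - V): learning is driven
    entirely by the number of trials, which is exactly the
    trial-centered assumption the experiments challenge."""
    v = v0
    curve = []
    for _ in range(n_trials):
        v += alpha * (lam - v)
        curve.append(v)
    return curve

curve = rescorla_wagner(10)
print(curve)  # negatively accelerated approach toward lam
```

Because nothing in the update references elapsed time, the model predicts identical curves whether ten pairings are massed into one session or spread over many, and that is the prediction the equated-ITI experiments put under pressure.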

14.
Glöckner, A., & Pachur, T. Cognition, 2012, 123(1), 21-32.
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice.
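The adjustable parameters at issue are those of CPT's value and probability-weighting functions. A gains-only sketch of how such a parameterized CPT valuation works, using the standard functional forms with the Tversky and Kahneman (1992) median parameter estimates as defaults (this is an illustrative implementation, not the paper's fitted model):

```python
def v(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma=0.61):
    """Inverse-S probability weighting function for gains."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_gain_value(outcomes, probs, gamma=0.61):
    """CPT value of an all-gain lottery via cumulative decision weights:
    each outcome gets the weight w(P(at least this good)) minus
    w(P(strictly better))."""
    pairs = sorted(zip(outcomes, probs), reverse=True)  # best outcome first
    total, cum = 0.0, 0.0
    for x, p in pairs:
        total += (w(cum + p, gamma) - w(cum, gamma)) * v(x)
        cum += p
    return total

sure = cpt_gain_value([10], [1.0])        # equals v(10), since w(1) = 1
risky = cpt_gain_value([10, 0], [0.5, 0.5])
print(sure, risky)
```

Fitting a participant means searching for the (alpha, lam, gamma) that best reproduce that person's choices; the abstract's question is whether those fitted numbers are stable properties of the person a week later, or mostly noise absorbed by the extra degrees of freedom.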

15.
Word learning as Bayesian inference
The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word's referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with the statistical structure of the observed examples. The theory addresses shortcomings of the two best known approaches to modeling word learning, based on deductive hypothesis elimination and associative learning. Three experiments with adults and children test the Bayesian account's predictions in the context of learning words for object categories at multiple levels of a taxonomic hierarchy. Results provide strong support for the Bayesian account over competing accounts, in terms of both quantitative model fits and the ability to explain important qualitative phenomena. Several extensions of the basic theory are discussed, illustrating the broader potential for Bayesian models of word learning.
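The generalization-from-few-positive-examples behavior described above is usually derived from the "size principle": among hypotheses consistent with the examples, a hypothesis with a smaller extension assigns each example higher likelihood, P(X|h) = (1/|h|)^n. A toy sketch with a uniform prior and a hypothetical three-level taxonomy (the hypothesis names and extensions are invented for illustration):

```python
def posterior(hypotheses, examples):
    """Posterior over candidate word meanings under the size principle,
    with a uniform prior: P(h|X) is proportional to (1/|h|)^n for every
    hypothesis h whose extension contains all n observed examples."""
    n = len(examples)
    scores = {}
    for name, extension in hypotheses.items():
        if all(e in extension for e in examples):
            scores[name] = (1.0 / len(extension)) ** n
        else:
            scores[name] = 0.0  # inconsistent hypotheses are ruled out
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Hypothetical nested extensions at three taxonomic levels.
hyps = {
    "dalmatian": {"dal1", "dal2"},
    "dog": {"dal1", "dal2", "poodle1", "lab1"},
    "animal": {"dal1", "dal2", "poodle1", "lab1",
               "cat1", "cow1", "pig1", "horse1"},
}
print(posterior(hyps, ["dal1"]))                   # broad meanings still viable
print(posterior(hyps, ["dal1", "dal2", "dal1"]))   # narrow meaning dominates
```

With one example every nested hypothesis survives, but after a handful of examples all drawn from the narrowest class, the broader meanings become "suspicious coincidences" and the posterior concentrates on the subordinate category, matching the qualitative pattern the experiments test.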

16.
We explore the adequacy of two types of similarity representation in the context of semantic concepts. To this end, we evaluate different categorization models, assuming either a geometric or a featural representation, using categorization decisions involving familiar and unfamiliar foods and animals. The study aims to assess the optimal stimulus representation as a function of the familiarity of the stimuli. For the unfamiliar stimuli, the geometric categorization models provide the best account of the categorization data, whereas for the familiar stimuli, the featural categorization models provide the best account. This pattern of results suggests that people rely on perceptual information to assign an unfamiliar stimulus to a category but rely on more elaborate conceptual knowledge when assigning a familiar stimulus.

17.
Laboratory studies on a range of animals have identified a bias that seems to violate basic principles of rational behavior: a preference is shown for feeding options that previously provided food when reserves were low, even though another option had been found to give the same reward with less delay. The bias presents a challenge to normative models of decision making (which only take account of expected rewards and the state of the animal at the decision time). To understand the behavior, we take a broad ecological perspective and consider how valuation mechanisms evolve when the best action depends upon the environment being faced. We show that in a changing and uncertain environment, state-dependent valuation can be favored by natural selection: Individuals should allow their hunger to affect learning for future decisions. The valuation mechanism that typically evolves produces the kind of behavior seen in standard laboratory tests. By providing an insight into why learning should be affected by the state of an individual, we provide a basis for understanding psychological principles in terms of an animal's ecology.

18.
Previous research (Greitemeyer & Weiner, 2003) has demonstrated that compliance to commit a transgression for an anticipated reward as opposed to an anticipated punishment results in greater inferences of personal responsibility. The present studies extend these findings to a courtroom context in which punishment decisions are made. In Study 1, a nurse who administered a non-approved drug was perceived as more responsible; and more severe punishment decisions were recommended, given compliance for an offered reward relative to a threatened punishment. These findings subsequently were replicated while varying the consequences of the drug administration (Study 2) and employing an antisocial scenario (Study 3). Legal theory, field theory, and prospect theory are discussed as possible explanations for these phenomena.

19.
Schema theory: a critical appraisal and reevaluation
The authors critically review a number of the constructs and associated predictions proposed in schema theory (R. A. Schmidt, 1975). The authors propose that new control and learning theories should include a reformulated (a) notion of a generalized motor program that is not based on the motor program construct but still accounts for the strong tendency for responses to maintain their relative characteristics; (b) mechanism or processes whereby an abstract movement structure based on proportional principles (e.g., relative timing, relative force) is developed through practice; and (c) explanation for parameter learning that accounts for the benefits of parameter variability but also considers how variability is scheduled. Furthermore, they also propose that new theories of motor learning must be able to account for the consistent findings spawned as a result of the schema theory proposal and must not be simply discounted because of some disfavor with the motor program notion, in general, or schema theory, more specifically.

20.