Similar documents
20 similar documents found (search time: 15 ms)
1.
2.
Decision making is a two-stage process, consisting of, first, consideration set construction and then final choice. Decision makers can form a consideration set from a choice set using one of two strategies: including the options they wish to further consider or excluding those they do not wish to further consider. The authors propose that decision makers have a relative preference for an inclusion (vs. exclusion) strategy when choosing from large choice sets and that this preference is driven primarily by a lay belief that inclusion requires less effort than exclusion, particularly in large choice sets. Study 1 demonstrates that decision makers prefer using an inclusion (vs. exclusion) strategy when faced with large choice sets. Study 2 replicates the effect of choice set size on preference for consideration set construction strategy and demonstrates that the belief that exclusion is more effortful mediates the relative preference for inclusion in large choice sets. Studies 3 and 4 further support the importance of perceived effort, demonstrating a greater preference for inclusion in large choice sets when decision makers are primed to think about effort (vs. accuracy; Study 3) and when the choice set is perceived as requiring more effort because of more information being presented about each alternative (vs. more alternatives in the choice set; Study 4). Finally, Study 5 manipulates consideration set construction strategy, showing that using inclusion (vs. exclusion) in large choice sets leads to smaller consideration sets, greater confidence in the decision process, and a higher quality consideration set.
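The two screening strategies can be sketched as a toy simulation. The utilities, the cut-off values, and the assumption that inclusion applies a stricter criterion than exclusion are all hypothetical illustrations, not parameters taken from the studies; the asymmetric cut-offs simply mirror the reported finding that inclusion yields smaller consideration sets:

```python
import random

def consideration_set(options, strategy, strict=0.7, lenient=0.3):
    """Stage 1 of two-stage choice: screen a large choice set into a
    consideration set. Options map names to hypothetical utilities in [0, 1].
    Inclusion starts empty and admits only clearly good options; exclusion
    starts with everything and drops only clearly unsuitable options."""
    if strategy == "inclusion":
        return {o for o, u in options.items() if u >= strict}
    # exclusion: survivors are everything not clearly below the lenient bar
    return {o for o, u in options.items() if u >= lenient}

random.seed(1)
choice_set = {f"opt{i}": random.random() for i in range(50)}  # a "large" set

by_inclusion = consideration_set(choice_set, "inclusion")
by_exclusion = consideration_set(choice_set, "exclusion")
print(len(by_inclusion), len(by_exclusion))  # inclusion yields the smaller set
```

Under these assumptions every included option also survives exclusion, so the inclusion set is a strict subset of the exclusion survivors, matching the direction of the Study 5 result.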

3.
The present research contrasts two seemingly complementary decision strategies: acceptance and elimination. In acceptance, a choice set is created by including suitable alternatives from an initial set of alternatives, whereas in elimination it is created by removing inappropriate alternatives from that same initial set. The research used realistic career decision-making scenarios and presented to respondents sets of alternatives that varied in their preexperimental strength values. Whereas complementarity of acceptance and elimination is implied by three standard (normative) assumptions of decision theory, we find a systematic discrepancy between the outcomes of these procedures: choice sets were larger in elimination than in acceptance. This acceptance–elimination discrepancy is directly tied to subcomplementarity. The central tenet of the theoretical framework developed here is that acceptance and elimination procedures imply different types of status quo for the alternatives, thereby invoking a different selection criterion for each procedure. A central prediction of the dual-criterion framework is that middling alternatives should be most susceptible to the type of procedure used. The present studies focus on this prediction, which is substantiated by the results showing that middling alternatives yield the greatest discrepancy between acceptance and elimination. The implications of this model and findings for various research domains are discussed.

4.
Context effects refer to the shifts in shares when another alternative is introduced in the choice set. The alternative can be asymmetrically dominated, asymmetrically dominating, totally dominated, or totally dominating. We developed a theoretically derived model based on the shifts in attribute valuation as a potential explanation for all context effects. First, the model is tested using data from previously published studies. As predicted, the results showed a high correlation between shifts in valuation and changes in the choice shares. The model is also tested using two studies that extend the design of the choice sets to include better alternatives in a search context and the removal of an alternative. The strong relation justifies the case for comparative valuation as an underlying mechanism for context effects. Assuming this valuation, the article illustrates how the framework can be used to develop new product strategies taking into account the values of the unchosen alternatives.

5.
In preference aggregation a set of individuals express preferences over a set of alternatives, and these preferences have to be aggregated into a collective preference. When preferences are represented as orders, aggregation procedures are called social welfare functions. Classical results in social choice theory state that it is impossible to aggregate the preferences of a set of individuals under different natural sets of axiomatic conditions. We define a first-order language for social welfare functions and we give a complete axiomatisation for this class, without having the number of individuals or alternatives specified in the language. We are able to express classical axiomatic requirements in our first-order language, giving formal axioms for three classical theorems of preference aggregation by Arrow, by Sen, and by Kirman and Sondermann. We explore to what extent such theorems can be formally derived from our axiomatisations, obtaining positive results for Sen’s Theorem and the Kirman-Sondermann Theorem. For the case of Arrow’s Theorem, which does not apply in the case of infinite societies, we have to resort to fixing the number of individuals with an additional axiom. In the long run, we hope that our approach to formalisation can serve as the basis for a fully automated proof of classical and new theorems in social choice theory.
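The impossibility results being formalised can be motivated by the standard Condorcet counterexample: pairwise majority, the most natural aggregator, can fail to output an order at all. This is only an illustrative sketch in ordinary code, not the paper's first-order axiomatisation:

```python
def prefers(ranking, x, y):
    """True if x is ranked above y in one individual's strict order."""
    return ranking.index(x) < ranking.index(y)

def majority_prefers(profile, x, y):
    """Pairwise majority vote over a profile of strict rankings."""
    margin = sum(1 if prefers(r, x, y) else -1 for r in profile)
    return margin > 0

# the classic Condorcet profile: three voters, three alternatives
profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
cycle = [majority_prefers(profile, x, y)
         for x, y in [("a", "b"), ("b", "c"), ("c", "a")]]
print(cycle)  # [True, True, True]: a > b, b > c, c > a -- no collective order
```

Since the collective preference cycles, majority rule is not a social welfare function in the sense used above, which is the gap the axiomatic impossibility theorems make precise.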

6.
7.
This paper is concerned with procedures which transform valued preference relations on a set of alternatives into crisp relations. We present a simple characterization of a procedure that ranks alternatives in decreasing order of their minimal performance. This is done by means of three axioms that are shown to be independent. Among other results, we characterize in a very similar manner a procedure called ‘leximin’ and investigate two families of procedures whose intersection is the ‘min’ procedure.
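A minimal sketch of the two characterised procedures, using hypothetical performance values in [0, 1]; note how leximin refines the ‘min’ procedure by breaking its ties:

```python
def rank_by_min(alts):
    """Rank alternatives in decreasing order of their minimal performance
    (the 'min' procedure); ties on the minimum are left unbroken."""
    return sorted(alts, key=lambda a: min(alts[a]), reverse=True)

def rank_by_leximin(alts):
    """Leximin: compare performance profiles sorted ascending, then
    lexicographically, so equal minima are settled by the next-worst value."""
    return sorted(alts, key=lambda a: tuple(sorted(alts[a])), reverse=True)

# hypothetical valued preference degrees on three criteria
alts = {"x": [0.9, 0.2, 0.8], "y": [0.5, 0.5, 0.5], "z": [0.6, 0.2, 0.9]}
print(rank_by_min(alts))      # y first; x and z tie on min = 0.2
print(rank_by_leximin(alts))  # ['y', 'x', 'z']: leximin breaks the x/z tie
```

Here x and z share the same worst value (0.2), so ‘min’ cannot separate them, while leximin compares their second-worst values (0.8 vs. 0.6) and ranks x above z.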

8.
This paper brings the intellectual tools of cognitive science to bear on resolving the “paradox of the active user” [Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, MIT Press, Cambridge, MA, USA]—the persistent use of inefficient procedures in interactive tasks by experienced or even expert users when demonstrably more efficient procedures exist. The goal of this paper is to understand the roots of this paradox by finding regularities in these inefficient procedures. We examine three very different data sets. For each data set, we first satisfy ourselves that the preferred procedures used by some subjects are indeed less efficient than the recommended procedures. We then amass evidence, for each set, and conclude that when a preferred procedure is used instead of a more efficient, recommended procedure, the preferred procedure tends to have two major characteristics: (1) the preferred procedure is a well-practiced, generic procedure that is applicable either within the same task environment in different contexts or across different task environments, and (2) the preferred procedure is composed of interactive components that bring fast, incremental feedback on the external problem states. The support amassed for these characteristics leads to a new understanding of the paradox. In interactive tasks, people are biased towards the use of general procedures that start with interactive actions. These actions require much less cognitive effort as each action results in an immediate change to the external display that, in turn, cues the next action. Unfortunately for the users, the bias to use interactive unit tasks leads to a path that requires more effort in the long run. Our data suggest that interactive behavior is composed of a series of distributed choices; that is, people seldom make a once-and-for-all decision on procedures. This series of biased selections of interactive unit tasks often leads to a stable suboptimal level of performance.

9.
Although choice between two alternatives has been widely researched, fewer studies have examined choice across multiple (more than two) alternatives. Past models of choice behavior predict that the number of alternatives should not affect relative response allocation, but more recent research has found violations of this principle. Five pigeons were presented with three concurrently scheduled alternatives. Relative reinforcement rates across these alternatives were assigned 9:3:1. In some conditions three keys were available; in others, only two keys were available. The number of available alternatives did not affect relative response rates for pairs of alternatives; there were no significant differences in behavior between the two- and three-key conditions. For two birds in the three-alternative conditions and three birds in the two-alternative conditions, preference was more extreme for the pair of alternatives with the lower overall pairwise reinforcer rate (3:1) than the pair with the higher overall reinforcer rate (9:3). However, when responding during the changeover was removed, three birds showed the opposite pattern in the three-alternative conditions; preference was more extreme for the pair of alternatives with the higher overall reinforcer rate. These findings differ from past research and do not support established theories of choice behavior.
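The constant-ratio prediction the experiment tests follows from strict matching, under which response shares track reinforcer shares. A sketch with the study's 9:3:1 ratios; bias and sensitivity parameters of the generalized matching law are deliberately omitted, so this is the idealized baseline rather than a fitted model:

```python
def matching_shares(reinforcer_rates):
    """Strict matching: each alternative's predicted response share equals
    its share of obtained reinforcers (idealized, no bias/sensitivity)."""
    total = sum(reinforcer_rates.values())
    return {k: r / total for k, r in reinforcer_rates.items()}

three = matching_shares({"A": 9, "B": 3, "C": 1})  # three-key conditions
two = matching_shares({"A": 9, "B": 3})            # two-key conditions

# constant-ratio prediction: the pairwise preference A/B is unchanged
# by adding or removing the third alternative
print(three["A"] / three["B"], two["A"] / two["B"])  # both ratios are 3
```

The reported data partially violate this independence once changeover responding is handled differently, which is why the abstract concludes against the established accounts.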

10.
It has long been supposed that preference judgments between sets of to-be-considered possibilities are made by means of initially winnowing down the most promising-looking alternatives to form smaller “consideration sets” (Howard, 1963; Wright & Barbour, 1977). In preference choices with more than two options, it is standard to assume that a “consideration set”, based upon some simple criterion, is established to reduce the options available. Inferential judgments, in contrast, have more frequently been investigated in situations in which only two possibilities need to be considered (e.g., which of these two cities is the larger?). Proponents of the “fast and frugal” approach to decision-making suggest that such judgments are also made on the basis of limited, simple criteria. For example, if only one of two cities is recognized and the task is to judge which city has the larger population, the recognition heuristic states that the recognized city should be selected. A multinomial processing tree model is outlined which provides the basis for estimating the extent to which recognition is used as a criterion in establishing a consideration set for inferential judgments between three possible options.
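The recognition-based screening step can be sketched as follows. This is not the multinomial processing tree model itself; the `knowledge_rank` fallback is a hypothetical stand-in for whatever further cue knowledge resolves choices among recognized (or among entirely unrecognized) options:

```python
def recognition_choice(options, recognized, knowledge_rank=None):
    """Recognition heuristic extended to a 3-option inference: recognized
    options form the consideration set; unrecognized ones are screened out.
    Ties are broken by a hypothetical knowledge score (default 0)."""
    knowledge_rank = knowledge_rank or {}
    consider = [o for o in options if o in recognized]
    if not consider:  # nothing recognized: fall back to the full set
        consider = list(options)
    return max(consider, key=lambda o: knowledge_rank.get(o, 0))

# which of three cities is largest? only one is recognized
print(recognition_choice(["Ulm", "Essen", "Hagen"], recognized={"Essen"}))
```

When exactly one option is recognized, the consideration set is a singleton and recognition alone determines the judgment, which is the case the model's recognition parameter is meant to estimate.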

11.
Contrastivism about reasons is the view that ‘reason’ expresses a relation with an argument place for a set of alternatives. This is in opposition to a more traditional theory on which reasons are reasons for things simpliciter. I argue that contrastivism provides a solution to a puzzle involving reason claims that explicitly employ ‘rather than’. Contrastivism solves the puzzle by allowing that some fact might be a reason for an action out of one set of alternatives without being a reason for that action out of a different set of alternatives.

12.
Gerhard Brewka, Synthese (2005) 146(1–2): 171–187
Logic programs under answer set semantics have become popular as a knowledge representation formalism in Artificial Intelligence. In this paper we investigate the possibility of using answer sets for qualitative decision making. Our approach is based on an extension of the formalism, called logic programs with ordered disjunction (LPODs). These programs contain a new connective called ordered disjunction. The new connective allows us to represent alternative, ranked options for problem solutions in the heads of rules: A × B intuitively means: if possible A, but if A is not possible then at least B. The semantics of logic programs with ordered disjunction is based on a preference relation on answer sets. We show that LPODs can serve as a basis for qualitative decision making.

13.
Many have questioned the wisdom of using traditional juries to decide cases involving complex scientific and technical evidence. Alternative decision-makers that have been proposed include: judges; expert arbitrators; special juries composed of people who possess either a minimum level of higher education or knowledge especially relevant to the issues in the particular trial; and panels of experts in the particular field, acting as either a jury or a non-jury tribunal. These alternatives differ from the traditional jury not only in their composition but also, to varying degrees, in terms of the resources available to them and the procedures under which they operate. In this article, we explore the advantages that these alternative decision-makers have over juries and discuss how the same resources and procedures enjoyed by the alternatives could be made available to and enhance the abilities of the traditional jury in cases involving complex evidence.

14.
In choosing between a small, immediate reward and a large, delayed reward, an organism behaves impulsively if it chooses the small reward and shows impulse control if it chooses the large reward. Work with nonhumans suggests that impulsivity and impulse control may be derived from gradients of delayed reinforcement. A model developed by Ainslie and by Rachlin suggests that preference for the rewards should be a function of when the choice is made: small reward with no delay may be preferred to large reward with delay X, but adding delay T to both alternatives should shift preference to the large reward. Three experiments investigated this preference reversal in humans, using termination of 90-dB(A) white noise as the reinforcing event. Experiment 1 showed that under some instructional conditions 90-sec noise off with no delay was preferred over 120-sec noise off after a 60-sec delay, but that preference shifted to the large reward when a 15-sec delay (T) was added to both alternatives. Experiment 2 replicated this preference reversal under two conditions of large, delayed reward, and with three rather than two values of T. Experiment 3 confirmed this effect of T and showed that some humans committed themselves to the large reward when commitment could be made some time before presentation of the reward alternatives. These data support the Ainslie-Rachlin model and extend it to human choice behavior.
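The Ainslie-Rachlin prediction can be reproduced with hyperbolic discounting, V = A/(1 + kD), using the experiment's reward durations. The discount rate k below is a hypothetical value chosen to illustrate the reversal, not one fitted to the reported data:

```python
def hyperbolic_value(amount, delay, k=0.006):
    """Ainslie-Rachlin style hyperbolic discounting: V = A / (1 + k*D).
    k is a hypothetical discount-rate parameter for illustration only."""
    return amount / (1 + k * delay)

small, large = (90, 0), (120, 60)  # (seconds of noise off, delay in s)

# at T = 0 the small immediate reward wins; adding T = 15 s to both
# delays reverses preference to the large delayed reward
for T in (0, 15):
    v_small = hyperbolic_value(small[0], small[1] + T)
    v_large = hyperbolic_value(large[0], large[1] + T)
    print(T, "choose large" if v_large > v_small else "choose small")
```

Because hyperbolic curves from the two rewards cross as a common delay T grows, the model predicts exactly the reversal observed in Experiment 1, whereas exponential discounting (a constant per-second rate) never predicts such crossovers.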

15.
This study explored the potential of a person × situation approach to identifying the characteristics of leaders in a voluntary community organization. A set of variables based on Mischel's “cognitive social learning variables” was operationalized to provide variables which assess the characteristics of individuals in relation to the specific context in which some emerge as leaders. This set of variables was compared with a larger set of traditional demographic and personality variables. Analyses indicated the approximate statistical comparability of the two sets. Advantages of the cognitive social learning approach for understanding and intervening in leader emergence and development in voluntary community organizations are discussed.

16.
Five pigeons were trained on a procedure in which seven concurrent variable-interval schedules arranged seven different food-rate ratios in random sequence in each session. Each of these components lasted for 10 response-produced food deliveries, and components were separated by 10-s blackouts. We varied delays to food (signaled by blackout) between the two response alternatives in an experiment with three phases: In Phase 1, the delay on one alternative was 0 s, and the other was varied between 0 and 8 s; in Phase 2, both delays were equal and were varied from 0 to 4 s; in Phase 3, the two delays summed to 8 s, and each was varied from 1 to 7 s. The results showed that increasing delay affected local choice, measured by a pulse in preference, in the same way as decreasing magnitude, but we found also that increasing the delay at the other alternative increased local preference. This result casts doubt on the traditional view that a reinforcer strengthens a response depending only on the reinforcer's value discounted by any response-reinforcer delay. The results suggest that food guides, rather than strengthens, behavior.

17.
18.
The joint effects of punishment and reinforcement on the pigeon's key-peck response were examined in three choice experiments conducted to compare predictions of Farley and Fantino's (1978) subtractive model with those made by Deluty's (1976) and Deluty and Church's (1978) model of punishment. In Experiment 1, the addition of equal punishment schedules to both alternatives of a concurrent reinforcement schedule enhanced the preference exhibited for the more frequent reinforcement alternative. Experiment 2 demonstrated decreases in the absolute response rate for each member of a concurrent reinforcement schedule when increasing frequencies of punishment were added to each alternative. Experiment 3 found that preference for the denser of two reinforcement schedules diminished when the absolute frequencies of reinforcement were increased by a constant factor and conditions of punishment for both alternatives were held constant. Diminished preferences were obtained regardless of whether the frequency of punishment associated with the denser reinforcement schedule was greater or less than that associated with the lean reinforcement alternative. The results from all three experiments uniquely supported Farley and Fantino's (1978) subtractive model of punishment and reinforcement.

19.
Multicriteria decision-making (MCDM) methods are concerned with the ranking of alternatives based on expert judgements made using a number of criteria. In the MCDM field, the distance-based approach is one popular method for obtaining a final ranking. The technique for order preference by similarity to the ideal solution (TOPSIS) is a commonly used example of this kind of MCDM method. The TOPSIS ranks the alternatives with respect to their geometric distance from the positive and negative ideal solutions. Unfortunately, two reference points are often insufficient, especially for nonlinear problems. As a consequence, the final ranking is prone to errors, including the rank reversal phenomenon. This study proposes a new distance-based MCDM method: the characteristic objects method. In this approach, the preferences of each alternative are obtained on the basis of the distance from the nearest characteristic objects and their values. For this purpose, we have determined the domain and fuzzy number set for all the considered criteria. The characteristic objects are obtained as the combination of the crisp values of all the fuzzy numbers. The preference values of all the characteristic objects are determined on the basis of the tournament method and the principle of indifference. Finally, the fuzzy model is constructed and is used to calculate preference values of the alternatives, making it a multicriteria model that is free of rank reversal. A numerical example is used to illustrate the efficiency of the proposed method with respect to results from the TOPSIS method. The characteristic objects method results are more realistic than the TOPSIS results. Copyright © 2014 John Wiley & Sons, Ltd.
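For reference, the baseline TOPSIS procedure that the proposed method is benchmarked against can be sketched as follows; the decision matrix, weights, and criterion directions are hypothetical inputs, not data from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """Plain TOPSIS: vector-normalise and weight the decision matrix, then
    score each alternative by its relative closeness to the positive ideal
    solution (PIS) versus the negative ideal solution (NIS)."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # ideal points depend on whether each criterion is benefit or cost
    pis = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    nis = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, pis)))
        d_neg = math.sqrt(sum((x - q) ** 2 for x, q in zip(row, nis)))
        scores.append(d_neg / (d_pos + d_neg))  # 1 = at PIS, 0 = at NIS
    return scores

# hypothetical decision matrix: 3 alternatives x 2 benefit criteria
scores = topsis([[7, 9], [8, 7], [9, 6]],
                weights=[0.5, 0.5], benefit=[True, True])
print(scores)  # the first alternative scores highest here
```

Because the scores depend only on distances to two fixed reference points, adding or removing an alternative can move the PIS/NIS and reorder the survivors, which is the rank reversal weakness the characteristic objects method is designed to avoid.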

20.
A great deal has been made of the question of whether nano-materials provide a unique set of ethical challenges. Equally important is the question of whether they provide a unique set of regulatory challenges. In the last 18 months, the US Environmental Protection Agency has begun the process of trying to meet the regulatory challenge of nano using the Toxic Substances Control Act (1976) (TSCA). In this central piece of legislation, ‘newness’ is a critical concept. Current EPA policy, we argue, does not adequately (or ethically) deal with the novelty of nano. This paper is an exploration of how to do a better job of accounting for nanomaterials as ‘new.’ We explore three alternative ways that nanomaterials might be made to fall under the TSCA regulatory umbrella. Since nanomaterials are of interest precisely because of the exciting new properties that emerge at the nano-scale, each of these three alternatives must meet what we call the ‘novelty condition’ and avoid what we call the ‘central paradox’ of existing regulatory policy. Failure to meet either of these conditions is a moral failure. We examine both the strengths and weaknesses of each alternative in order to illuminate the conceptual, practical, and moral challenges of novelty.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号