Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
The Coombs and Huang (1970) distributive theory of perceived risk is reinterpreted as a more robust statistical hypothesis to describe central tendencies of noisy replicates drawn from a homogeneous population. Barron's (1976) sample of 13 business faculty rank-order responses is pooled to obtain a replicated complete 3 × 3 × 3 design, which is analyzed by a new stochastic conjoint measurement (SCJM) approach to axiomatic data analysis. SCJM implements statistical analogues of the deterministic Krantz and Tversky (1971) diagnostics for error-free data. SCJM diagnosis based on a series of one-sided nonparametric two-cell comparisons at the α = 0.04 level supports the hypothesis of interaction between the expected-value and number-of-plays attributes of gambles yet contradicts Barron's odd-even effects hypothesis. SCJM diagnosis with two-cell α < 0.04 supports an additive statistical model.

3.
An axiomatization proposed by Coombs (1974) is shown to be insufficient for the representation of gambles in a risk × expected value space as described by Portfolio Theory. A new axiom system is given which is necessary and sufficient for this representation, and which implies the old axiomatization.

4.
This study assessed how confidence in judgments is affected by the need to make inferences about missing information. Subjects indicated their likelihood of taking each of a series of gambles based on both probability and payoff information or only one of these sources of information. They also rated their confidence in each likelihood judgment. Subjects in the Explicit Inference condition were asked to explicitly estimate the values of missing information before making their responses while subjects in the Implicit Inference condition were not. The manner in which probability information was framed was also manipulated. Experiment 1 employed hypothetical gambles and Experiment 2 employed gambles with real money. Expressed likelihood of taking gambles was higher when probability was phrased in terms of '% chance of winning' rather than '% chance of losing', but this difference was somewhat less with real gambles than with hypothetical gambles. Confidence ratings in each experiment were actually higher on incomplete information trials than on complete information trials in the Explicit Inference condition. Results were related to the general issue of confidence in judgments.

5.
Loglinear Rasch model tests
Existing statistical tests for the fit of the Rasch model have been criticized, because they are only sensitive to specific violations of its assumptions. Contingency table methods using loglinear models have been used to test various psychometric models. In this paper, the assumptions of the Rasch model are discussed and the Rasch model is reformulated as a quasi-independence model. The model is a quasi-loglinear model for the incomplete subgroup × score × item 1 × item 2 × ... × item k contingency table. Using ordinary contingency table methods the Rasch model can be tested generally or against less restrictive quasi-loglinear models to investigate specific violations of its assumptions.

6.
Complete tests of subjectively expected utility (SEU), subjectively expected value (SEV), expected utility (EU) and expected value (EV) theories were made for duplex gambles without measuring subjective probability or subjective utility. All gambles were hypothetical and presented in booklets. The duplex gambles consisted of winning gambles, which offered a chance to win a certain amount of money or to break even; and losing gambles, which offered a chance to lose a certain amount of money or break even. The results indicated that SEU, SEV and EU theories could not account for the strategies of 33%, 53% and 86% of the Ss respectively in the losing form of gambles, while EV theory accounted for 78% of the behavior of Ss. In the winning form of gambles, SEU, SEV and EU theory held for 77%, 65%, and 54% of the Ss respectively, while EV theory held for only 40% of the Ss. Suggestions for further research were made.
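The EV benchmark against which the other theories were tested here is easy to state: a duplex gamble runs two independent components, one that pays a win amount with some probability (else nothing) and one that exacts a loss amount with some probability (else nothing), so its expected value is the probability-weighted win minus the probability-weighted loss. A minimal sketch (the function name and example amounts are illustrative, not the study's actual stimuli):

```python
def duplex_ev(p_win, win_amount, p_lose, lose_amount):
    """Expected value of a duplex gamble: two independent components,
    one paying `win_amount` with probability `p_win` (else $0),
    one costing `lose_amount` with probability `p_lose` (else $0)."""
    return p_win * win_amount - p_lose * lose_amount

# A duplex gamble: 0.4 chance to win $10 combined with 0.2 chance to lose $5.
print(duplex_ev(0.4, 10.0, 0.2, 5.0))  # -> 3.0
```

An EV-maximizing subject simply prefers whichever gamble scores higher on this quantity, with no subjective transformation of probabilities or money.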

7.
Preference reversal is a systematic change in the preference order between options when different response methods are used (e.g., choice vs. judgment). The present study focuses on procedures used to elicit preferences according to an evaluability hypothesis. Two experiments compared joint vs. separate evaluations and explicit vs. non-explicit joint evaluations. Subjects had to express preferences between high-variance gambles (HVGs) and low-variance gambles (LVGs) either by choosing one gamble to play in a lottery or by assigning gambles minimum selling prices. We show that HVGs are preferred in both choice and pricing conditions when gambles are evaluated separately, and LVGs are preferred in both choice and selling conditions when gambles are evaluated in pairs: i.e., when the evaluation mode is held constant, classic preference reversal disappears. These results support the evaluability hypothesis, and suggest that preferences depend on whether subjects are allowed to compare the options they are asked to choose from or judge, independently of the nature of the scale (i.e., attractiveness vs. minimum selling price) they are required to adopt. Copyright © 2004 John Wiley & Sons, Ltd.

8.
The goal of the current study was to explore information search and processing differences between individuals who are less and more numerate in an attempt to better understand the mechanisms that might differentiate the choices they make. We did so using a computerized process-tracing system known as MouseTrace, which presented monetary gambles in an alternative × attribute matrix with outcome (dollar amount) and probability information as attributes. This information was initially occluded but could be revealed by clicking on the cell that contained the desired information. Beginning with nine gambles offering the chance of gaining or losing a specified amount, participants (N = 110) narrowed down the options (to three and then one) using an inclusion or exclusion strategy. Consistent with previous research, inclusion was a more effortful strategy, and individuals who were higher in numeracy were more likely to select prospects with the highest expected value. Process measures revealed these individuals expended more effort (i.e., attended to and sought out more information and processed it in greater depth) and exhibited more compensatory processing than those who were lower in numeracy, but this sometimes depended on whether one was asked to include or exclude. These results serve as further evidence that individuals with higher levels of numeracy often engage in more elaborative processing of the decision task, which tends to lead to more optimal choices. However, they also suggest that individuals are adaptive and that the specific situation can matter. Copyright © 2016 John Wiley & Sons, Ltd.

9.
Implicit within the acceptance of most multidimensional scaling models as accurate representations of an individual's cognitive structure for a set of complex stimuli is the acceptance of the more general Additive Difference Model (ADM). A theoretical framework for testing the ordinal properties of the ADM for dissimilarities data is presented and is illustrated for a set of three-outcome gambles. Paired comparison dissimilarity judgments were obtained for two sets of gambles. Judgments from one set were first analyzed using the ALSCAL individual differences scaling model. Based on four highly interpretable dimensions derived from this analysis, a predicted set of dimensions was obtained for each subject for the second set of gambles. The ordinal properties of the ADM necessary for interdimensional additivity and intradimensional subtractivity were then tested for each subject's second set of data via a new computer-based algorithm, ADDIMOD. The tests indicated that the ADM was rejected. Although violations of the axioms were significantly less than what would be expected by chance, for only one subject was the model clearly supported. It is argued that while multidimensional scaling models may be useful as data reduction techniques, they do not reflect the perceptual processes used by individuals to form judgments of similarity. Implications for further study of multidimensional scaling models are offered and discussed.

12.
Studies examining perfectionism, engagement and burnout in sport have produced different levels of support for the hypotheses of the 2 × 2 model of perfectionism. One explanation for why this is so is that researchers have used different measures of perfectionism when testing the hypotheses. To determine whether this is the case, in the current study we retested the hypotheses of the 2 × 2 model for engagement and burnout using different measures of perfectionism. A sample of 401 adult athletes from various sports and levels completed measures of athlete engagement and burnout, along with two measures of perfectionism. Moderated regression analyses revealed that support for the hypotheses of the 2 × 2 model did indeed differ depending on the measure of perfectionism. This was evident for both burnout (emotional and physical exhaustion and reduced sense of accomplishment) and engagement (dedication and vigor). The findings are aligned with similar work that has found differences in support for the hypotheses of the 2 × 2 model when using other measures of perfectionism for engagement and, importantly, provide the first evidence that this extends to athlete burnout. Researchers will need to consider the influence of the measures of perfectionism used when interpreting, comparing, and summarising future research on the 2 × 2 model for these and other outcomes.

13.
Determining a priori power for univariate repeated measures (RM) ANOVA designs with two or more within-subjects factors that have different correlational patterns between the factors is currently difficult due to the unavailability of accurate methods to estimate the error variances used in power calculations. The main objective of this study was to determine the effect of the correlation between the levels in one RM factor on the power of the other RM factor. Monte Carlo simulation procedures were used to estimate power for the A, B, and AB tests of a 2×3, a 2×6, a 2×9, a 3×3, a 3×6, and a 3×9 design under varying experimental conditions of effect size (small, medium, and large), average correlation (.4 and .8), alpha (.01 and .05), and sample size (n = 5, 10, 15, 20, 25, and 30). Results indicated that the greater the magnitude of the differences between the average correlation among the levels of Factor A and the average correlation in the AB matrix, the lower the power for Factor B (and vice versa). Equations for estimating the error variance of each test of the two-way model were constructed by examining power and mean square error trends across different correlation matrices. Support for the accuracy of these formulae is given, thus allowing for direct analytic power calculations in future studies.
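The core Monte Carlo idea described here can be sketched in miniature. The sketch below simplifies to a one-way RM design under compound symmetry (a single average correlation), and sidesteps any analytic error-variance formula by calibrating the critical F value from simulated null data; all function names, and the choice of a simulated rather than analytic critical value, are ours, not the paper's procedure:

```python
import numpy as np

def rm_anova_f(data):
    """F statistic for a one-way repeated-measures ANOVA.
    `data` is an (n subjects x k levels) array."""
    n, k = data.shape
    grand = data.mean()
    ss_treat = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between levels
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_total = ((data - grand) ** 2).sum()
    ss_resid = ss_total - ss_treat - ss_subj                  # error term
    return (ss_treat / (k - 1)) / (ss_resid / ((n - 1) * (k - 1)))

def mc_power(effects, corr, n, reps=2000, alpha=0.05, seed=1):
    """Monte Carlo power estimate: draw correlated normal responses under
    the null and under the level means in `effects`, using the simulated
    null distribution of F to set the critical value."""
    rng = np.random.default_rng(seed)
    k = len(effects)
    cov = np.full((k, k), corr)        # compound-symmetric correlation matrix
    np.fill_diagonal(cov, 1.0)
    null_f = [rm_anova_f(rng.multivariate_normal(np.zeros(k), cov, size=n))
              for _ in range(reps)]
    crit = np.quantile(null_f, 1 - alpha)
    alt_f = [rm_anova_f(rng.multivariate_normal(np.array(effects), cov, size=n))
             for _ in range(reps)]
    return float(np.mean(np.array(alt_f) > crit))
```

Because the RM error term shrinks as the correlation among levels rises, `mc_power([0, 0, 0.8], corr=0.8, n=15)` comes out well above the same call with `corr=0.4`, which is the qualitative pattern the study's two-way simulations quantify.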

14.
Risks and rewards, or payoffs and probabilities, are inversely related in many choice environments. We investigated people's psychological responses to uncommon combinations of risk and reward that deviate from learned regularities (e.g., options that offer a high payoff with an unusually high probability) as they evaluated risky options. In two experiments (N = 183), participants first priced monetary gambles drawn from environments in which risks and rewards were negatively correlated, positively correlated, or uncorrelated. In later trials, they evaluated gambles with uncommon combinations of risk and reward—that is, options that deviated from the respective environment's risk–reward structure. Pricing, response times, and (in Experiment 2) pupil dilation were recorded. In both experiments, participants took more time when responding to uncommon compared to foreseeable options or when the same options were presented in an uncorrelated risk–reward environment. This result was most pronounced when the uncommon gambles offered higher expected values compared to the other gambles in the set. Moreover, these uncommon, high-value options were associated with an increase in pupil size. These results suggest that people's evaluations of risky options are based not only on the options' payoffs and probabilities but also on the extent to which they fit the risk–reward structure of the environment.

15.
Prior research has found consistent support for the heuristic processing model of cultivation effects, which argues that cultivation effects can be explained by the availability heuristic. The present study represents an experimental test of the heuristic processing model and tests the impact of frequency, recency, and vividness on construct accessibility and social reality beliefs. A total of 213 students participated in a 2 × 2 × 2 prolonged exposure experimental design varying the frequency of exposure to violent television programs, the level of vividness in the programs, and recency of exposure. Dependent measures were accessibility and social reality beliefs. Results showed that reaction times were largely unresponsive to the independent variables. Although there were no main effects for frequency on social reality beliefs, there was a significant interaction between frequency and vividness on beliefs: People watching vivid violent media gave higher estimates of the prevalence of crime and police immorality in the real world in the 3× viewing condition than those in the 1× viewing condition. In concluding, it is argued that this study has important implications for the heuristic processing model, cultivation theory, and research into vividness effects.

16.
A model of non-conscious affect is proposed and an experiment tests predictions about the influence of non-conscious affect on evaluations made of conversational interactants. Participants engaged in a subliminal priming task to induce a positive non-conscious affective response toward one of two target persons. Participants then watched two videotaped interactions (one featured the subliminally primed target person) and rated a target person from each interaction. A 3 × 2 × 2 mixed experimental design crossed Target Primed (Target A, B, or No Prime) and Order of Evaluation (A vs. B first), whereas the third factor (Target Evaluated) was within subjects. The primed target was rated as more likable and attractive yet not more competent. The non-conscious affect was target specific (affecting judgments of the primed target) and diffuse (affecting judgments of a non-primed target).

17.
The present experiment was designed to test whether choice-induced certainty equivalents (CEs) and joint receipt (JR) of gambles exhibit certain properties such as the equality of JR and convolution, monotonicity of convolution, monotonicity of JR over gambles, additivity of JR over gambles and money, segregation of a common consequence, additive segregation when JR is replaced by +, and several other derivative properties. Subjects were partitioned into "gamblers" and "nongamblers" by their performance on screening gambles. Assuming that CE is order preserving, monotonicity of JR and additivity of JR over gambles were both rejected, whereas additivity of JR over money, segregation, and additive segregation were all sustained for gamblers and nongamblers. For gamblers, convolution is not monotonic but is equivalent to JR, and segregation and additive segregation are not equivalent. For nongamblers, convolution is monotonic but is not equivalent to JR, and segregation and additive segregation are probably equivalent.

18.
The log-linear model for contingency tables expresses the logarithm of a cell frequency as an additive function of main effects, interactions, etc., in a way formally identical with an analysis of variance model. Exact statistical tests are developed to test hypotheses that specific effects or sets of effects are zero, yielding procedures for exploring relationships among qualitative variables which are suitable for small samples. The tests are analogous to Fisher's exact test for a 2 × 2 contingency table. Given a hypothesis, the exact probability of the obtained table is determined, conditional on fixed marginals or other functions of the cell frequencies. The sum of the probabilities of the obtained table and of all less probable ones is the exact probability to be considered in testing the null hypothesis. Procedures for obtaining exact probabilities are explained in detail, with examples given.
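The 2 × 2 special case the abstract uses as its reference point can be written out directly: conditional on fixed margins, the probability of each table is hypergeometric, and the two-sided p-value sums the probability of the observed table and of every table no more probable. A minimal sketch (the function name is ours; this is the classical Fisher test, not the paper's more general log-linear procedure):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities, conditional on fixed margins,
    of the observed table and every table that is no more probable."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p(x):  # P(upper-left cell = x | margins)
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    probs = [p(x) for x in range(lo, hi + 1)]
    return sum(q for q in probs if q <= p_obs + 1e-12)

# The "lady tasting tea" table [[3, 1], [1, 3]].
print(fisher_exact_2x2(3, 1, 1, 3))  # -> ≈ 0.4857 (= 34/70)
```

The log-linear generalization described in the abstract replaces the single hypergeometric family with the conditional distribution implied by whichever margins (or other cell-frequency functions) the hypothesized effects leave fixed, but the summation logic is the same.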

19.
We present results from two experiments on the relative importance of, and subjects' differential sensitivity to, vagueness on both probabilities and outcomes. Subjects in these studies made certainty equivalent (CE) judgments for precise and vague gambles. In the first study subjects responded to gain gambles only; in the second they judged both gain and loss gambles. Model-free analyses of the results indicate (a) a higher concern for the precision of the outcomes than that of the probabilities; (b) vagueness seeking for positive outcomes; (c) vagueness avoidance for negative outcomes; and (d) no strong modal attitude toward vagueness on the probability dimension. The greater salience of the outcomes can be explained by the nature of the response mode (CEs). The reflection of attitudes towards outcome vagueness in the two domains can be explained by the distinct goals of the decision makers in the two cases, which cause them to focus on the highest (most desirable) possible gain or the largest (most dreaded) conceivable loss. We propose and fit a new model of decision making with vaguely specified attributes that generalizes the Prospect Theory model for the precise case. The new generalized model combines the two submodels (preference among precise lotteries and effects of vagueness) and allows estimation of the vagueness parameters. These estimated parameters are consistent with, and confirm, the patterns uncovered by the qualitative analysis.
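The precise-case submodel that the authors generalize can be sketched with the standard Tversky–Kahneman (1992) functional forms for gains: a power value function and an inverse-S probability weighting function, with the CE recovered by inverting the value function. The parameter values below are the published median estimates from that literature, used purely for illustration; the vagueness parameters estimated in this paper are not reproduced here.

```python
def pt_certainty_equivalent(x, p, alpha=0.88, gamma=0.61):
    """Certainty equivalent of the simple gamble "win x with probability p,
    else 0" under the prospect-theory gain submodel:
    v(x) = x**alpha and w(p) = p**g / (p**g + (1 - p)**g)**(1 / g)."""
    w = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
    return (w * x**alpha) ** (1 / alpha)  # invert v to get a money amount

ce = pt_certainty_equivalent(100.0, 0.5)
print(round(ce, 2))  # below the gamble's EV of 50: risk aversion for moderate-p gains
```

A vagueness extension of the kind the abstract describes would add parameters shifting the effective outcome (toward the best gain or worst loss) and the effective probability before these two functions are applied.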

20.
Three studies investigate how physiological emotional responses can be combined with symbolic information to predict preferences. The first study used a weighted proportional difference rule to combine explicitly quantified symbolic and emotional information. The proportion of emotion model was more predictive than a simple additive emotional (AE) combination in decisions about selecting dating partners. Study 2 showed that a simple proportion algorithm of emotionally derived weights and a simple AE model predicted preference equally well for decisions between equal expected value (EV) gambles. Study 3 provided additional evidence for decision mechanisms that combine physiological measures within symbolic trade-off algorithms for choices between diamond rings. Self-reported emotion measures proved to be better predictors than physiological measures. The results are discussed in the context of other major models of emotional influence on preference and provide a foundation for future research on emotional decision-making mechanisms. Copyright © 2008 John Wiley & Sons, Ltd.
