Similar Literature
20 similar documents were found.
1.
Original, open‐source computer software was developed and validated against established delay discounting methods in the literature. The software executed approximate Bayesian model selection methods from user‐supplied temporal discounting data and computed the effective delay 50 (ED50) from the best performing model. The software was custom‐designed to enable behavior analysts to conveniently apply recent statistical methods to temporal discounting data with the aid of a graphical user interface (GUI). The results of independent validation of the approximate Bayesian model selection methods indicated that the program provided results identical to those of the original source paper and its methods. Monte Carlo simulation (n = 50,000) confirmed that the true model was selected most often in each setting. Simulation code and data for this study were posted to an online repository for use by other researchers. The model selection approach was applied to three existing delay discounting data sets from the literature in addition to the data from the source paper. Comparisons of model-selected ED50 values were consistent with traditional indices of discounting. Conceptual issues related to the development and use of computer software by behavior analysts, and the opportunities afforded by free and open‐source software, are discussed, and possible expansions of this software are reviewed.
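To make the ED50 step concrete, the sketch below fits Mazur's hyperbolic model to hypothetical indifference points with SciPy and reports ED50 = 1/k, the delay at which the delayed reward has lost half its value. The data, starting value, and bounds are assumptions for illustration; this is not the validated software described above, which additionally performs approximate Bayesian model selection across candidate models.

```python
# Minimal sketch (not the original software): fit a hyperbolic discounting
# model to hypothetical indifference-point data and report ED50 = 1/k.
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)   # days (hypothetical)
indiff = np.array([0.95, 0.85, 0.60, 0.45, 0.35, 0.20])    # proportion of the delayed amount

def hyperbolic(d, k):
    """Mazur (1987) hyperbolic model, value as a proportion of the delayed amount."""
    return 1.0 / (1.0 + k * d)

(k_hat,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01], bounds=(1e-8, 10))
ed50 = 1.0 / k_hat   # delay at which the delayed reward loses half its value
print(f"k = {k_hat:.4f}, ED50 = {ed50:.1f} days")
```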

2.
Randomization statistics offer alternatives to many of the statistical methods commonly used in behavior analysis and the psychological sciences, more generally. These methods are more flexible than conventional parametric and nonparametric statistical techniques in that they make no assumptions about the underlying distribution of outcome variables, are relatively robust when applied to small‐n data sets, and are generally applicable to between‐groups, within‐subjects, mixed, and single‐case research designs. In the present article, we first will provide a historical overview of randomization methods. Next, we will discuss the properties of randomization statistics that may make them particularly well suited for analysis of behavior‐analytic data. We will introduce readers to the major assumptions that undergird randomization methods, as well as some practical and computational considerations for their application. Finally, we will demonstrate how randomization statistics may be calculated for mixed and single‐case research designs. Throughout, we will direct readers toward resources that they may find useful in developing randomization tests for their own data.
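As a concrete illustration of the general recipe, the sketch below runs a within-subject randomization test on hypothetical two-condition data: condition labels are shuffled within each subject and the observed mean difference is compared with the resulting randomization distribution. The data, number of permutations, and two-sided decision rule are assumptions, not a prescription from the article.

```python
# Hedged sketch of a within-subject randomization test: condition labels are
# shuffled independently within each subject, and the observed mean difference
# is compared against the resulting randomization distribution.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: rows = subjects, columns = (condition A, condition B)
scores = np.array([[12, 15], [10, 14], [11, 11], [9, 13], [14, 18], [8, 12]], dtype=float)

observed = np.mean(scores[:, 1] - scores[:, 0])

n_perm = 10000
count = 0
for _ in range(n_perm):
    # Randomly swap (or not) the two condition labels within each subject
    flip = rng.integers(0, 2, size=scores.shape[0]).astype(bool)
    permuted = scores.copy()
    permuted[flip] = permuted[flip, ::-1]
    diff = np.mean(permuted[:, 1] - permuted[:, 0])
    if abs(diff) >= abs(observed):
        count += 1

p_value = (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0
print(f"observed difference = {observed:.2f}, randomization p = {p_value:.4f}")
```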

3.
Most delay discounting studies use tasks that arrange delay progressions in which the spacing between consecutive delays becomes progressively larger. To date, little research has examined delay discounting using other progressions. The present study assessed whether the form or steepness of discounting varied across different delay progressions. Human participants completed three discounting tasks with delay progressions that varied in the time between consecutive delays: a standard progression (increasing duration between delays), a linear progression (equal duration between delays), and an inverse progression (decreasing duration between delays). Steepness of discounting was generally reduced, and remained so, following experience with the inverse progression. Effects of the delay progression on the best fitting equation were order‐dependent. Overall, the hyperbolic model provided better fits, but the exponential model performed better with data from the inverse progression. Regardless, differences in which model fit best were often small. The finding that the best fitting model was dependent, in some cases, on the delay progression suggests that a single quantitative model of discounting may not be applicable to describe discounting across all procedural contexts. Ultimately, changes in steepness of discounting following experience with the inverse progression appeared similar to anchoring effects, whose mechanism will require further study to delineate.
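For reference, these are the standard one-parameter forms compared throughout this literature, with A the delayed amount, D the delay, and k the discount rate:

```latex
% Hyperbolic (Mazur, 1987) and exponential discounting of a delayed amount A at delay D.
\begin{align}
  V_{\text{hyp}}(D) &= \frac{A}{1 + kD} \\
  V_{\text{exp}}(D) &= A\, e^{-kD}
\end{align}
```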

4.
Statistical inference (including interval estimation and model selection) is increasingly used in the analysis of behavioral data. As with many other fields, statistical approaches for these analyses traditionally use classical (i.e., frequentist) methods. Interpreting classical intervals and p‐values correctly can be burdensome and counterintuitive. By contrast, Bayesian methods treat data, parameters, and hypotheses as random quantities and use rules of conditional probability to produce direct probabilistic statements about models and parameters given observed study data. In this work, we reanalyze two data sets using Bayesian procedures. We precede the analyses with an overview of the Bayesian paradigm. The first example reanalyzes data from a recent study of controls, heavy smokers, and individuals with alcohol and/or cocaine substance use disorder, and focuses on Bayesian hypothesis testing for covariates and interval estimation for discounting rates among various substance use disorder profiles. The second example analyzes hypothetical environmental delay‐discounting data. This example focuses on using historical data to establish prior distributions for parameters while allowing subjective expert opinion to govern the prior distribution on model preference. We review the subjective nature of specifying Bayesian prior distributions but also review established methods to standardize the generation of priors and remove subjective influence while still taking advantage of the interpretive advantages of Bayesian analyses. We present the Bayesian approach as an alternative paradigm for statistical inference and discuss its strengths and weaknesses.
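The sketch below illustrates the core Bayesian move described above on hypothetical discounting data: a prior over the discount rate k is combined with the likelihood of observed indifference points (here under a hyperbolic model with an assumed residual SD) to give a posterior over k by simple grid approximation. It illustrates the paradigm only, not the authors' reanalysis or their priors.

```python
# Minimal grid-approximation sketch of the Bayesian idea: a prior over the
# discount rate k is updated by the likelihood of observed indifference points
# under a hyperbolic model (illustration only, not the authors' analysis).
import numpy as np

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)   # hypothetical delays (days)
indiff = np.array([0.92, 0.80, 0.62, 0.41, 0.33, 0.22])    # hypothetical indifference points
sigma = 0.08                                                # assumed residual SD

k_grid = np.linspace(0.0005, 0.2, 2000)
log_prior = -np.log(k_grid)                                 # vague log-uniform-style prior (assumption)

pred = 1.0 / (1.0 + np.outer(k_grid, delays))               # model predictions for every k
log_lik = -0.5 * np.sum(((indiff - pred) / sigma) ** 2, axis=1)

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, k_grid)                              # normalize to a density over k

k_mean = np.trapz(k_grid * post, k_grid)
print(f"posterior mean of k ~ {k_mean:.4f}")
```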

5.
Behavior analysis and statistical inference have shared a conflicted relationship for over fifty years. However, a significant portion of this conflict is directed toward statistical tests (e.g., t‐tests, ANOVA) that aggregate group and/or temporal variability into means and standard deviations and, as a result, remove much of the data important to behavior analysts. Mixed‐effects modeling, a more recently developed statistical technique, addresses many of the limitations of these basic tests by incorporating random effects. Random effects quantify individual subject variability without eliminating it from the model, hence producing a model that can predict both group and individual behavior. We present the results of a generalized linear mixed‐effects model applied to single‐subject data from Ackerlund Brandt, Dozier, Juanico, Laudont, and Mick (2015), in which children chose one of three reinforcers for completing a task. Results of the mixed‐effects modeling are consistent with visual analyses and, importantly, provide a statistical framework to predict individual behavior without requiring aggregation. We conclude by discussing the implications of these results and provide recommendations for further integration of mixed‐effects models in the analyses of single‐subject designs.
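A minimal sketch of the random-effects idea, assuming simulated data and a plain linear mixed model fit with statsmodels (the study above used a generalized mixed model for three-alternative choices, which this does not reproduce): each subject keeps its own intercept, so individual variability stays in the model rather than being averaged away.

```python
# Hedged sketch: a linear mixed-effects model with a per-subject random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(8), 20)                 # 8 hypothetical subjects, 20 sessions each
session = np.tile(np.arange(20), 8)
subj_intercept = rng.normal(0, 2, size=8)[subjects]    # subject-level variability
response = 5 + 0.3 * session + subj_intercept + rng.normal(0, 1, size=subjects.size)

df = pd.DataFrame({"subject": subjects, "session": session, "response": response})

model = smf.mixedlm("response ~ session", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())                                # fixed effect of session + random intercepts
```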

6.
Intertemporal tradeoffs are ubiquitous in decision making, yet preferences for current versus future losses are rarely explored in empirical research. Whereas rational‐economic theory posits that neither outcome sign (gains vs. losses) nor outcome magnitude (small vs. large) should affect delay discount rates, both do, and moreover, they interact: in three studies, we show that whereas large gains are discounted less than small gains, large losses are discounted more than small losses. This interaction can be understood through a reconceptualization of fixed‐cost present bias, which has traditionally described a psychological preference for immediate rewards. First, our results establish present bias for losses—a psychological preference to have losses over with now. Present bias thus predicts increased discounting of future gains but decreased (or even negative) discounting of future losses. Second, because present bias preferences do not scale with the magnitude of possible gains or losses, they play a larger role, relative to other motivations for discounting, for small magnitude intertemporal decisions than for large magnitude intertemporal decisions. Present bias thus predicts less discounting of large gains than small gains but more discounting of large losses than small losses. The present research is the first to demonstrate that the effect of outcome magnitude on discount rates may be opposite for gains and losses and also the first to offer a theory (an extension of present bias) and process data to explain this interaction. The results suggest that policy efforts to encourage future‐oriented choices should frame outcomes as large gains or small losses.
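For orientation, the familiar quasi-hyperbolic (beta-delta) statement of present bias is shown below; the fixed-cost reconceptualization discussed above treats the present-bias component as an additive constant rather than the multiplicative factor beta, so it does not scale with outcome magnitude. This is a generic sketch of the idea, not the authors' exact specification.

```latex
% Quasi-hyperbolic (beta-delta) present bias, shown for reference.
\begin{equation}
  U(x, t) =
  \begin{cases}
    v(x) & t = 0 \\
    \beta\, \delta^{t}\, v(x) & t > 0
  \end{cases}
  \qquad 0 < \beta \le 1
\end{equation}
```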

7.
A novel method for analyzing delay discounting data is proposed. This newer metric, a model‐based Area Under Curve (AUC) combining approximate Bayesian model selection and numerical integration, was compared to the point‐based AUC methods developed by Myerson, Green, and Warusawitharana (2001) and extended by Borges, Kuang, Milhorn, and Yi (2016). Using data from computer simulation and a published study, comparisons of these methods indicated that a model‐based form of AUC offered a more consistent and statistically robust measurement of area than that provided by point‐based methods alone. Beyond providing a form of AUC directly from a discounting model, numerical integration methods permitted a general calculation in cases when the Effective Delay 50 (ED50) measure could not be calculated. This allowed discounting model selection to proceed in conditions where data are traditionally more challenging to model and measure, a situation where point‐based AUC methods are often enlisted. Results from simulation and existing data indicated that numerical integration methods extended both the area‐based interpretation of delay discounting and the discounting model selection approach. Limitations of point‐based AUC as a first‐line analysis of discounting and additional extensions of discounting model selection were also discussed.
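A minimal sketch of the model-based AUC idea, assuming a hyperbolic best-fitting model and hypothetical data: the fitted function is integrated numerically over the studied delay range and normalized so that no discounting corresponds to an area of 1. The fitting and integration calls are illustrative, not the authors' implementation.

```python
# Hedged sketch of a model-based AUC: integrate the best-fitting discounting
# function over the studied delay range and normalize to [0, 1].
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)   # hypothetical delays
indiff = np.array([0.95, 0.82, 0.61, 0.44, 0.31, 0.20])    # proportion of the delayed amount

def hyperbolic(d, k):
    return 1.0 / (1.0 + k * d)

(k_hat,), _ = curve_fit(hyperbolic, delays, indiff, p0=[0.01])

area, _ = quad(hyperbolic, 0, delays.max(), args=(k_hat,))
model_auc = area / delays.max()                             # normalized model-based AUC
print(f"model-based AUC = {model_auc:.3f}")
```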

8.
Basic research on delay discounting, examining preference for smaller–sooner or larger–later reinforcers, has demonstrated a variety of findings of considerable generality. One of these, the magnitude effect, is the observation that individuals tend to exhibit greater preference for the immediate with smaller magnitude reinforcers. Delay discounting has also proved to be a useful marker of addiction, as demonstrated by the highly replicated finding of greater discounting rates in substance users compared to controls. However, some research on delay discounting rates in substance users, particularly research examining discounting of small‐magnitude reinforcers, has not found significant differences compared to controls. Here, we hypothesize that the magnitude effect could produce ceiling effects at small magnitudes, thus obscuring differences in delay discounting between groups. We examined differences in discounting between high‐risk substance users and controls over a broad range of magnitudes of monetary amounts ($0.10, $1.00, $10.00, $100.00, and $1000.00) in 116 Amazon Mechanical Turk workers. We found no significant differences in discounting rates between users and controls at the smallest reinforcer magnitudes ($0.10 and $1.00) and further found that differences became more pronounced as magnitudes increased. These results illustrate a second form of the magnitude effect: differences in discounting between populations can become more evident as a function of reinforcer magnitude.

9.
Impulsive and myopic choices are often explained as due to hyperbolic discounting, meaning that people are impatient for outcomes available immediately, and become increasingly more patient the more the outcome is delayed. Recent research, however, has suggested that much experimental evidence for increasing patience is actually due to subadditive discounting: people are less patient (per-time-unit) over shorter intervals regardless of when they occur. Because previous research into subadditive discounting has used a choice elicitation procedure, the present paper tests whether it generalises to matching. We find strong evidence that it does, but also find weak evidence of increasing patience for matching. We suggest, however, that subadditive discounting alone may be sufficient to account for all of our results. We conclude by questioning the contribution that hyperbolic discounting makes to our understanding of time preference.
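One common formalization of subadditive discounting (after Read, 2001), with D(a, b) the factor by which an outcome is discounted over the interval from a to b: discounting the parts of an interval produces more total discounting than discounting the whole.

```latex
\begin{equation}
  D(0, s)\, D(s, t) \;<\; D(0, t), \qquad 0 < s < t
\end{equation}
```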

10.
Lewis rats have been shown to make more impulsive choices than Fischer 344 rats in discrete-trial choice procedures that arrange fixed (i.e., nontitrating) reinforcement parameters. However, nontitrating procedures yield only gross estimates of preference, as choice measures in animal subjects are rarely graded at the level of the individual subject. The present study was designed to examine potential strain differences in delay discounting using an adjusting-amount procedure, in which distributed (rather than exclusive) choice is observed due to dynamic titration of reinforcer magnitude across trials. Using a steady-state version of the adjusting-amount procedure in which delay was manipulated between experimental conditions, steeper delay discounting was observed in Lewis rats compared to Fischer 344 rats; further, delay discounting in both strains was well described by the traditional hyperbolic discounting model. However, upon partial completion of the present study, a study published elsewhere (Wilhelm & Mitchell, 2009) demonstrated no difference in delay discounting between these strains with the use of a more rapid version of the adjusting-amount procedure (i.e., in which delay is manipulated daily). Thus, following completion of the steady-state assessment in the present study, all surviving Lewis and Fischer 344 rats completed an approximation of this rapid-determination procedure in which no strain difference in delay discounting was observed.
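The sketch below illustrates the logic of an adjusting-amount titration for a single delay condition: the immediate amount moves up or down after each choice, with the step halving each trial, until it converges on an indifference point. The halving rule, trial count, and the simulated hyperbolic chooser standing in for a subject are assumptions for illustration.

```python
# Hedged sketch of an adjusting-amount titration for one delay condition.
def titrate_indifference(delayed_amount, delay, k_true, n_trials=8):
    immediate = delayed_amount / 2.0
    step = delayed_amount / 4.0
    for _ in range(n_trials):
        delayed_value = delayed_amount / (1.0 + k_true * delay)   # simulated subject's valuation
        chose_immediate = immediate > delayed_value
        immediate += -step if chose_immediate else step           # adjust toward indifference
        step /= 2.0                                               # assumed halving rule
    return immediate

indifference = titrate_indifference(delayed_amount=100.0, delay=30.0, k_true=0.02)
print(f"estimated indifference point at a delay of 30: {indifference:.2f}")
```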

11.
Almost all book chapters, review articles, and textbooks in the field of personnel selection suggest that work sample tests are associated with lower levels of ethnic group adverse impact than paper‐and‐pencil tests of cognitive ability. However, the empirical literature is heavily dependent on adverse impact estimates obtained from incumbent samples rather than applicant samples. As such, parameter estimates are subject to range restriction from prior selection and on‐the‐job experiences. Further, an emerging consensus in the selection literature indicates that any method of assessment can be associated with high or low levels of adverse impact – depending on the nature of the construct(s) being measured. To begin to examine these issues, we present two recent sets of applicant data from public sector jobs (for a management and entry‐level job, each with technical and interpersonal skill requirements) and show that adverse impact of work sample exams might be more extensive than realized. We discuss the mismatch between what the field of employee selection “knows” and what is said in articles/summaries about work samples. Employers and other practitioners who depend on advice in academic overview articles may be overly optimistic and eventually disappointed by minimal reduction in adverse impact. Implications for workforce diversity and future research needs are also discussed.

12.
Statistical tests of indirect effects can hardly distinguish between genuine and spurious mediation effects. The present research demonstrates, however, that mediation analysis can be improved by combining a significance test of the indirect effect with assessing the fit of causal models. Testing only the indirect effect can be misleading, because significant results may also be obtained when the underlying causal model is different from the mediation model. We use simulated data to demonstrate that additionally assessing the fit of causal models with structural equation models can be used to exclude subsets of models that are incompatible with the observed data. The results suggest that combining structural equation modeling with appropriate research design and theoretically stringent mediation analysis can improve scientific insights. Finally, we discuss limitations of the structural equation modeling approach, and we emphasize the importance of non‐statistical methods for scientific discovery.
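For reference, the single-mediator model at issue is shown below; the indirect effect is the product ab, and a significance test of ab alone cannot distinguish this model from alternative causal orderings that imply the same covariance structure, which is why the article pairs it with model-fit assessment.

```latex
\begin{align}
  M &= i_1 + aX + e_1 \\
  Y &= i_2 + c'X + bM + e_2 \\
  \text{indirect effect} &= a \times b, \qquad \text{total effect } c = c' + ab
\end{align}
```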

13.
Individual analysis in judgment and decision making research
Many questions in research on judgment and decision making (and in experimental psychology more generally) concern whether an effect truly exists and what explains it; they do not concern whether the effect is significant in some particular population. Such questions can therefore be tested by analyzing individual subjects: if a single subject shows the effect, then the effect exists. On this view, significance can sometimes also be tested across cases or rounds rather than across subjects. The view also implies that effects in the opposite direction may exist in some experiments. This article recommends testing such effects with statistical analyses based on individual subjects and introduces several methods for doing so: probability–probability (P–P) plots, tests of the distribution of p-values, and correction for multiple testing with step-down resampling. These methods address the problems that are unavoidable when the same hypothesis is tested many times. Examples are also given, some of which show effects in the opposite direction and some of which do not.
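A minimal sketch of the individual-subject workflow described above, on hypothetical data: the effect is tested within each subject across that subject's trials, and the collection of per-subject p-values is then checked against the uniform distribution expected if no subject showed the effect. The simulated data, t-test, and Kolmogorov-Smirnov check are illustrative choices, not the article's specific procedures (P-P plots and step-down resampling are not reproduced here).

```python
# Hedged sketch: per-subject tests, then a check of p-value uniformity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_trials = 12, 40
# Hypothetical trial-level difference scores; a few subjects show a real effect
true_effect = np.where(np.arange(n_subjects) < 3, 0.5, 0.0)
data = rng.normal(true_effect[:, None], 1.0, size=(n_subjects, n_trials))

p_values = np.array([stats.ttest_1samp(subject_trials, 0.0).pvalue
                     for subject_trials in data])

# Under the null that no subject shows the effect, these p-values are uniform on (0, 1)
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print(f"subjects with p < .05: {(p_values < 0.05).sum()} of {n_subjects}")
print(f"KS test of p-value uniformity: D = {ks_stat:.3f}, p = {ks_p:.4f}")
```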

14.
A within-subject design, using human participants, compared delay discounting functions for real and hypothetical money rewards. Both real and hypothetical rewards were studied across a range that included $10 to $250. For 5 of the 6 participants, no systematic difference in discount rate was observed in response to real and hypothetical choices, suggesting that hypothetical rewards may often serve as a valid proxy for real rewards in delay discounting research. By measuring discounting at an unprecedented range of real rewards, this study has also systematically replicated the robust finding in human delay discounting research that discount rates decrease with increasing magnitude of reward. A hyperbolic decay model described the data better than an exponential model.

15.
The weighted Euclidean distance model in multidimensional scaling (WMDS) represents individual differences as dimension saliences, which can be interpreted as the orientations of vectors in a subject space. It has recently been suggested that the statistics of directions would be appropriate for carrying out tests of location with such data. The nature of the directional representation in WMDS is reviewed, and it is argued that since dimension saliences are almost always positive, the directional representations will usually be confined to the positive orthant. Conventional statistical techniques are appropriate for angular representations of the individual differences, which will yield angles in the interval (0°, 90°) so long as dimension saliences are nonnegative, a restriction which can be imposed. Ordinary statistical methods are also appropriate with several linear indices which can be derived from WMDS results. Directional statistics may be applied more fruitfully to vector representations of preferences.
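For reference, the weighted Euclidean (INDSCAL-type) distance model that underlies WMDS is shown below: subject k stretches the group space by nonnegative dimension saliences w_ka, which can equivalently be read as the direction of that subject's vector in the subject space.

```latex
\begin{equation}
  d_{ijk} = \left[ \sum_{a=1}^{r} w_{ka}\, (x_{ia} - x_{ja})^{2} \right]^{1/2},
  \qquad w_{ka} \ge 0
\end{equation}
```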

16.
Recent advances in assessment methodology have resulted in a highly efficient procedure for obtaining delay discounting rates for adults: a 5‐trial adjusting delay task (ADT‐5) examining intertemporal choice for hypothetical rewards. The low participant burden of this task makes it potentially useful for children, with whom delay discounting research is relatively limited. However, it is unknown whether results from this task match choice for real rewards. The present study assessed delay discounting for real and hypothetical monetary rewards using a modified ADT‐5 with 9 children admitted to a psychiatric day treatment program. Participants completed up to 3 tasks with each reward type in alternating order. No difference in discounting rate, indexed by log(k), was observed between the first task of each reward type. This finding was replicated across subsequent tasks for the subset of participants (n = 6) who completed all 6 tasks. However, delay discounting of real and hypothetical rewards was not found to be statistically equivalent. These results suggest that a modified ADT‐5 using hypothetical rewards may be a viable option for assessing delay discounting in children with psychiatric diagnoses, but additional research is needed to explicitly examine whether hypothetical and real rewards are discounted equivalently in this population.
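A sketch of the logic behind a 5-trial adjusting-delay task, under assumptions: a binary search over a sorted list of candidate delays homes in on the delay at which the smaller immediate amount is worth the same as the larger delayed amount (the ED50), and under the hyperbolic model k = 1/ED50. The 31-delay list, its range, and the simulated chooser are hypothetical, not the published task parameters.

```python
# Hedged sketch: binary search over candidate delays -> ED50 -> k = 1/ED50.
import numpy as np

candidate_delays = np.geomspace(1, 3650, 31)     # hypothetical 31 delays, 1 day to ~10 years

def adt5_k(prefers_immediate, n_trials=5):
    """prefers_immediate(delay) -> True if the participant takes the immediate amount."""
    lo, hi = 0, len(candidate_delays) - 1
    index = (lo + hi) // 2
    for _ in range(n_trials):
        if prefers_immediate(candidate_delays[index]):
            hi = index          # indifference lies at a shorter delay
        else:
            lo = index          # indifference lies at a longer delay
        index = (lo + hi) // 2
    ed50 = candidate_delays[index]
    return 1.0 / ed50

# Simulated participant with true k = 0.01 choosing half the amount now vs. all later
simulated = lambda delay: 0.5 > 1.0 / (1.0 + 0.01 * delay)
print(f"recovered k ~ {adt5_k(simulated):.4f} (true k = 0.0100)")
```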

17.
Discrete choice experiments—selecting the best and/or worst from a set of options—are increasingly used to provide more efficient and valid measurement of attitudes or preferences than conventional methods such as Likert scales. Discrete choice data have traditionally been analyzed with random utility models that have good measurement properties but provide limited insight into cognitive processes. We extend a well‐established cognitive model, which has successfully explained both choices and response times for simple decision tasks, to complex, multi‐attribute discrete choice data. The fits, and parameters, of the extended model for two sets of choice data (involving patient preferences for dermatology appointments, and consumer attitudes toward mobile phones) agree with those of standard choice models. The extended model also accounts for choice and response time data in a perceptual judgment task designed in a manner analogous to best–worst discrete choice experiments. We conclude that several research fields might benefit from discrete choice experiments, and that the particular accumulator‐based models of decision making used in response time research can also provide process‐level instantiations for random utility models.
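For reference, the random utility baseline against which the accumulator account is compared is shown below in its logit form, together with a common best-worst extension that scores a (best, worst) pair by the utility difference; the latter is one standard specification, not necessarily the one used in the article.

```latex
\begin{equation}
  P(i \mid S) = \frac{\exp(v_i)}{\sum_{j \in S} \exp(v_j)}
  \qquad\text{and}\qquad
  P(\text{best}=i,\ \text{worst}=j \mid S) \propto \exp(v_i - v_j), \quad i \neq j .
\end{equation}
```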

18.
何贵兵, 杨鑫蔚, 蒋多. 《心理学报》 (Acta Psychologica Sinica), 2017, (10): 1334–1343
The greater the social distance between another person and the self, the smaller the utility that the other person's gains or losses bring to the self; this phenomenon is known as social discounting. Although some studies have examined social discounting of monetary outcomes, the social discounting of environmental outcomes, which are public goods, and its influencing factors have not received the attention they deserve. Using days of good versus poor air quality as the outcome and a choice titration procedure, the present study examined social discounting of environmental outcomes under both gain and loss conditions and tested the influence of altruistic personality on social discounting. The results showed that: (1) in both the gain and the loss conditions, an exponential model fit the social discounting of environmental outcomes better than a hyperbolic model; (2) the interaction between gain/loss framing and social distance affected the degree of social discounting, with discounting increasing with social distance more steeply under losses than under gains; and (3) altruistic personality moderated the effect of social distance on social discounting: compared with highly altruistic individuals, the social discounting of less altruistic individuals was more strongly affected by social distance. These findings have important implications for understanding the social discounting of environmental outcomes and pro-environmental decision making.
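For reference, the two functional forms compared in the study are shown below with social distance N in place of delay; the reported result is that the exponential form fit the environmental outcomes better in both the gain and the loss conditions.

```latex
\begin{align}
  v_{\text{exp}}(N) &= V\, e^{-kN} \\
  v_{\text{hyp}}(N) &= \frac{V}{1 + kN}
\end{align}
```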

19.
Existing test statistics for assessing whether incomplete data represent a missing completely at random sample from a single population are based on a normal likelihood rationale and effectively test for homogeneity of means and covariances across missing data patterns. The likelihood approach cannot be implemented adequately if a pattern of missing data contains very few subjects. A generalized least squares rationale is used to develop parallel tests that are expected to be more stable in small samples. Three factors were varied for a simulation: number of variables, percent missing completely at random, and sample size. One thousand data sets were simulated for each condition. The generalized least squares test of homogeneity of means performed close to an ideal Type I error rate for most of the conditions. The generalized least squares test of homogeneity of covariance matrices and a combined test performed quite well also.
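As a simplified illustration of the idea being tested (not the generalized least squares statistic developed in the article): under MCAR, the distribution of an observed variable should not depend on which other variables are missing, so means can be compared across missing-data patterns. The sketch below compares the mean of x1 between cases with x2 observed and cases with x2 missing, on simulated data.

```python
# Simplified illustration of comparing means across missing-data patterns.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(0, 1, n)
x2 = x1 * 0.5 + rng.normal(0, 1, n)
x2[rng.random(n) < 0.25] = np.nan            # hypothetical 25% missingness on x2 (MCAR)

df = pd.DataFrame({"x1": x1, "x2": x2})
observed = df.loc[df["x2"].notna(), "x1"]
missing = df.loc[df["x2"].isna(), "x1"]

t, p = stats.ttest_ind(observed, missing, equal_var=False)
print(f"mean(x1 | x2 observed) = {observed.mean():.3f}, "
      f"mean(x1 | x2 missing) = {missing.mean():.3f}, p = {p:.3f}")
```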

20.
Recent cluster analytic research with alcoholic inpatients has demonstrated the existence of several Millon Clinical Multiaxial Inventory (MCMI) clusters that appear to be consistent across different subject samples. The validity of these data would be strengthened by a statistical demonstration of the similarity of attained clusters across studies, by a demonstration of concordance of subject classification across different clustering techniques on the same data set, and by the inclusion of external, independent measures against which to evaluate the predictive validity of the cluster typology. We found a high level of concordance in subject classification across different clustering methods on the same data set and a high level of agreement with cluster typologies attained in previous studies. Subsequent multivariate analyses employing independent scales measuring various aspects of alcohol use confirmed differences among cluster members on perceived benefits of alcohol use and deleterious effects of alcohol use. The prominent differences in alcohol use along with a rationale for their development are discussed.
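One way to quantify the concordance of subject classification across clustering methods, sketched below with hypothetical labels, is the adjusted Rand index, which equals 1 for identical partitions (even after cluster relabeling) and is near 0 for chance-level agreement; the article's own concordance analysis is not reproduced here.

```python
# Hedged sketch: concordance between two cluster solutions on the same cases.
from sklearn.metrics import adjusted_rand_score

labels_method_a = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2]   # e.g., one clustering method (hypothetical)
labels_method_b = [1, 1, 0, 0, 2, 2, 2, 1, 0, 2]   # e.g., another method (same clusters, renamed)

ari = adjusted_rand_score(labels_method_a, labels_method_b)
print(f"adjusted Rand index = {ari:.2f}")           # 1.00: perfect concordance despite relabeling
```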
