The impact of ordinate scaling on the visual analysis of single-case data
Affiliation:1. Department of Counseling and Human Development, University of Louisville, Louisville, KY, USA;2. Department of Educational Leadership, Evaluation and Organizational Development, University of Louisville, Louisville, KY, USA
Abstract:Visual analysis is the primary method for detecting treatment effects in graphically displayed single-case data and is often referred to as the “gold standard.” Although researchers have developed standards for the application of visual analysis (e.g., Horner et al., 2005), over- and underestimation of effect size magnitude is not uncommon among analysts. Several characteristics have been identified as potential contributors to these errors; however, researchers have largely focused on characteristics of the data themselves (e.g., autocorrelation), paying less attention to characteristics of the graphic display that are largely under the analyst's control (e.g., ordinate scaling). The current study investigated the impact that differences in ordinate scaling, a graphic display characteristic, had on experts' accuracy in judging the magnitude of effect present in single-case percentage data. Thirty-two participants evaluated eight ABAB data sets (two each presenting null, small, moderate, and large effects) along with three iterations of each (32 graphs in total) in which only the ordinate scale was manipulated. Results suggest that raters were less accurate in detecting treatment effects as the ordinate scale was constricted. Additionally, raters were more likely to overestimate the size of a treatment effect when the ordinate scale was constricted.