41.
Uncertain quantities can be described by single-point estimates of lower interval bounds (X1), upper interval bounds (X2), two-bound estimates (separate estimates of X1 and X2), and by ranges (X1–X2). A price estimation task showed that single-bound estimates phrased as “T costs more than X1” and “T costs less than X2” yielded much larger intervals than “minimum X1” and “maximum X2.” This difference can be attributed to exclusive interpretations of X1 and X2 in the first case (X1 and X2 are unlikely values) and inclusive interpretations in the second (X1 and X2 are likely values). This pattern of results was replicated in other domains where participants estimated single targets. When they estimated a distribution of targets, the pattern was reversed: “minimum” and “maximum” values of variable quantities (e.g., flight prices) were found to delimit larger intervals than “more than” and “less than” estimates. Copyright © 2006 John Wiley & Sons, Ltd.
42.
Estimation based on effect sizes, confidence intervals, and meta-analysis usually provides a more informative analysis of empirical results than does statistical significance testing, which has long been the conventional choice in psychology. The sixth edition of the American Psychological Association Publication Manual now recommends that psychologists should, wherever possible, use estimation and base their interpretation of research results on point and interval estimates. We outline the Manual's recommendations and suggest how they can be put into practice: adopt an estimation framework, starting with the formulation of research aims as ‘How much?’ or ‘To what extent?’ questions. Calculate from your data effect size estimates and confidence intervals to answer those questions, then interpret. Wherever appropriate, use meta-analysis to integrate evidence over studies. The Manual's recommendations can help psychologists improve the way they do their statistics and build a more quantitative and cumulative discipline.
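As a concrete illustration of the estimation framework described above, the sketch below computes a standardized effect size (Cohen's d) and an approximate 95% confidence interval for a two-group comparison. The simulated data, group sizes, and the large-sample standard-error formula are illustrative assumptions, not material from the article itself.

```python
import numpy as np
from scipy import stats

# Hypothetical data: scores for two independent groups (values assumed).
rng = np.random.default_rng(1)
treatment = rng.normal(loc=105, scale=15, size=30)
control = rng.normal(loc=100, scale=15, size=30)

# "How much?": point estimate of the standardized mean difference (Cohen's d).
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

# Approximate 95% CI for d from its large-sample standard error.
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
z = stats.norm.ppf(0.975)
print(f"d = {d:.2f}, 95% CI [{d - z * se_d:.2f}, {d + z * se_d:.2f}]")
```

Reporting and interpreting the point estimate together with its interval, rather than a bare p-value, is the kind of practice the abstract recommends.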
43.
44.
A study was conducted to investigate the effects on students' spelling achievement of variations in teacher assessment procedures. Teachers measured student spelling performance at a constant level of task difficulty using different measurement frequencies and different rules to interpret the data. Each teacher wrote two consecutive 3-week goals for improved spelling performance for two sets of 100 spelling words and then measured student performance either daily or weekly by dictating randomly selected words from each 100-word list. Teachers were trained to apply either a predetermined set of decision rules or their own judgment to the data to decide if the spelling program they had implemented for the student was effective. Ineffective programs were changed or modified. Results indicated that daily measurement was significantly more effective than weekly measurement in increasing spelling achievement and that, under certain conditions, decision rules were more effective than teacher judgment in determining when to make program changes or modifications. This research was conducted pursuant to Contract 300-77-0491 between the Bureau of Education for the Handicapped (now called Special Education Programs) and the University of Minnesota Institute for Research on Learning Disabilities.
45.
The authors examined statistical practices in 193 randomized controlled trials (RCTs) of psychological therapies published in prominent psychology and psychiatry journals during 1999-2003. Statistical significance tests were used in 99% of RCTs and 84% discussed clinical significance, but only 46% gave even minimal consideration to statistical power, only 31% interpreted effect size, and only 2% interpreted confidence intervals. In a second study, 42 respondents to an email survey of the authors of RCTs analyzed in the first study indicated that they consider it very important to know the magnitude and clinical importance of an effect, in addition to whether a treatment effect exists. The present authors conclude that published RCTs focus on statistical significance tests ("Is there an effect or difference?") and neglect other important questions: "How large is the effect?" and "Is the effect clinically important?" They advocate improved statistical reporting of RCTs, especially by reporting and interpreting clinical significance, effect sizes, and confidence intervals.
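Since fewer than half of the reviewed trials gave even minimal consideration to statistical power, a prospective power calculation is one way to address that gap at the design stage. The sketch below uses statsmodels to find the sample size needed for a two-group comparison; the effect size, alpha, and power targets are assumed conventional values, not figures from the reviewed trials.

```python
from statsmodels.stats.power import TTestIndPower

# Prospective power analysis for a two-group trial (all inputs are assumed
# conventional planning values, not data from the article).
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # medium effect, Cohen's d
                                   alpha=0.05,
                                   power=0.80,
                                   alternative='two-sided')
print(f"Required participants per group: about {n_per_group:.0f}")
```

With these inputs the calculation returns roughly 64 participants per group, which gives a sense of the scale needed before a trial can interpret a non-significant result meaningfully.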
46.
Undergraduates were exposed to a mixed fixed-ratio differential-reinforcement-of-low-rate schedule. Values of the schedule components were adjusted so that interreinforcer intervals in one component were longer than those in another component. Following this, a mixed fixed-interval 5-s fixed-interval 20-s schedule (Experiment 1) or six fixed-interval schedules in which the values ranged from 5 to 40 s (Experiment 2) were in effect. In both experiments, response rates under the fixed-interval schedules were higher when the interreinforcer intervals approximated those produced under the fixed-ratio schedule, whereas the rates were lower when the interreinforcer intervals approximated those produced under the differential-reinforcement-of-low-rate schedule. The present results demonstrate that the effects of behavioral history were under the control of the interreinforcer intervals as discriminative stimuli.
47.
Sixty-one publications about evoked and event-related potentials (EP and ERP, respectively) in patients with severe Disorders of Consciousness (DoC) were found and analyzed from a quantitative point of view. Most studies are strongly underpowered, resulting in very broad confidence intervals (CI). Results of such studies cannot be correctly interpreted because, for example, a CI wider than 1 (in terms of Cohen's d) indicates that the real effect may be very strong, very weak, or even opposite to the reported effect. Furthermore, strong negative correlations were obtained between sample size and effect size, indicating a possible publication bias. These correlations characterized not only the total data set but also each thematically selected subset. Minimal criteria for a strong EP/ERP study in DoC are proposed: at least 25 patients in each patient group; as reliable a diagnosis as possible; a complete report of all methodological details and all results (including negative results); and the use of appropriate methods of data analysis. Only three of the detected 60 studies (5%) satisfy these criteria. The limitations of the current approach are also discussed.
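To make the point about interval width concrete, the sketch below approximates the width of a 95% confidence interval for Cohen's d at several per-group sample sizes. The assumed observed effect (d = 0.5) and the large-sample standard-error formula are illustrative assumptions, not values reported in the review; with fewer than roughly 25 participants per group the interval spans more than one full standardized unit, consistent with the pattern described above.

```python
import numpy as np
from scipy import stats

# Assumed observed effect size; not a value reported in the review.
d = 0.5
z = stats.norm.ppf(0.975)  # ~1.96 for a 95% interval

# Approximate 95% CI width for Cohen's d with equal group sizes n,
# using the common large-sample standard-error formula.
for n in (10, 25, 50, 100):
    se_d = np.sqrt(2 / n + d ** 2 / (4 * n))
    print(f"n = {n:3d} per group -> 95% CI width ~ {2 * z * se_d:.2f}")
```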