Similar Articles
20 similar articles found (search time: 0 ms)
1.
Optimization-based computer systems are used by many airlines to solve crew planning problems by constructing minimal-cost tours of duty. However, airlines today require not only cost-effective solutions but are also very interested in robust ones. A more robust solution is understood to be one in which disruptions in the schedule (due to delays) are less likely to propagate into the future, causing delays of subsequent flights. Current scheduling systems based solely on cost do not automatically provide robust solutions. These considerations lead to a multiobjective framework, as the maximization of robustness conflicts with the minimization of cost. For example, a crew changing aircraft within a duty period is discouraged if inadequate ground time is provided. We develop a bicriteria optimization framework to generate Pareto optimal schedules for a domestic airline. A Pareto optimal schedule is one in which neither cost nor robustness can be improved without worsening the other. We developed a method to solve the bicriteria problem, implemented it, and tested it with actual airline data. Our results show that a considerable gain in robustness can be achieved with a small increase in cost. The additional cost is mainly due to an increase in overnights, which allows for a reduction in the number of aircraft changes. Copyright © 2003 John Wiley & Sons, Ltd.
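The Pareto optimality criterion used in this abstract can be illustrated with a minimal sketch (the `pareto_front` helper and the (cost, robustness) pairs below are hypothetical, not the authors' method or data):

```python
def pareto_front(schedules):
    """Keep the (cost, robustness) pairs not dominated by any other.

    One schedule dominates another if it has lower-or-equal cost AND
    higher-or-equal robustness, with at least one strict inequality.
    """
    front = []
    for cost, robust in schedules:
        dominated = any(
            c <= cost and r >= robust and (c < cost or r > robust)
            for c, r in schedules
        )
        if not dominated:
            front.append((cost, robust))
    return front

# Hypothetical candidate crew schedules: (cost, robustness score).
candidates = [(100, 0.60), (105, 0.80), (110, 0.79), (120, 0.95)]
# (110, 0.79) is dominated by (105, 0.80): more expensive and less robust.
print(pareto_front(candidates))
```

On the Pareto front, any further gain in robustness requires paying more cost, which is exactly the trade-off the paper quantifies.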

2.
We introduce an approximation set to the value efficient set in multiobjective problems under partial information on the decision maker's preferences, modelled by a vector value function. We show monotonicity and convergence properties based on increasingly precise vector value functions with two components, which improve the approximation and may support possible solution methods. © 1997 John Wiley & Sons, Ltd.

3.
Logistic Approximation to the Normal: The KL Rationale
A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback–Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of them is true. The new constant 1.749, computed assuming the normal distribution is true, yields an approximation with an improved fit in the tails of the distribution compared to the minimax constant of 1.702 widely used in item response theory (IRT). The minimax constant is by definition marginally better in its overall maximal error. It is argued that the KL constant is more statistically appropriate for use in IRT. The author would like to thank Sebastian Schreiber for his generous assistance with this project.
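The trade-off between the two constants is easy to check numerically. A minimal sketch (not from the paper) compares the standard normal CDF with the scaled logistic CDF on a grid:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_cdf(x, c):
    """Logistic CDF with scaling constant c."""
    return 1.0 / (1.0 + math.exp(-c * x))

grid = [i / 1000.0 for i in range(-6000, 6001)]  # x in [-6, 6]

def max_err(c, xs=grid):
    return max(abs(phi(x) - logistic_cdf(x, c)) for x in xs)

def tail_err(c):
    return max_err(c, [x for x in grid if abs(x) >= 2.0])

# The minimax constant 1.702 wins on overall worst-case error
# (the classic bound is about 0.0095) ...
print(max_err(1.702) < 0.01, max_err(1.749) < 0.01)
# ... while the KL constant 1.749 fits the tails better.
print(tail_err(1.749) < tail_err(1.702))
```

The threshold of 2.0 for "the tails" is an illustrative choice; the qualitative conclusion matches the abstract: 1.702 minimizes the overall maximal error, 1.749 improves the tail fit.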

4.
In the organizational behaviour and organizational psychology literature, individual errors are considered either as sources of blame (error-prevention culture) or as sources of learning and something to be encouraged in order to promote innovation (error-management culture). While we can assume that a third perspective exists somewhere in between, error management is usually considered as the best solution. Yet scholars have tended to neglect the planned and directed transition from a pure error-prevention to an error-management culture. We thus examine to what extent and under what conditions an organization can culturally transform the representation of individual errors through its business leaders. To answer this question, we conducted a qualitative study on the case of a French insurance company. We portray a realistic image of the promotion of an error management culture, pointing out certain limitations and constraints, while nonetheless identifying some conditions for successful error reframing.

5.
Combining the GO/NO-GO task paradigm with an error-awareness judgment paradigm, this study examined error-detection performance during error monitoring in 21 children with ADHD and 27 typically developing children. The results showed that: (1) children with ADHD were able to detect their own error responses normally; (2) the error-awareness judgment task elicited a post-error slowing effect in the children with ADHD, both because the task stimulated their deliberate awareness and because it incidentally lengthened the inter-stimulus interval in the GO/NO-GO task. These results suggest that increasing the inter-stimulus interval may prompt children with ADHD to change their post-error response strategy, regulate their error responses, and improve their error monitoring.

6.
韩明秀  贾世伟 《心理科学进展》2016,24(11):1758-1766
The level of awareness in error processing ranges from fully conscious, through intermediate levels, to unconscious. Levels of awareness are typically distinguished using the error-report paradigm. Based on how awareness is partitioned, the relevant research can be divided into binary-awareness studies and graded (multi-level) awareness studies. Existing research has focused mainly on the relationship between the level of awareness and the error-evoked Ne and Pe components. Both lines of research consistently show that the Pe is modulated by the level of awareness, although this has not been consistently supported by fMRI studies. Whether the Ne is elicited independently of awareness remains controversial. Other studies have examined how attention influences error awareness; their results support the view that the Pe covaries with the level of awareness, but how attention affects error awareness is still in question. Based on a review and analysis of these studies, suggestions for future research are proposed.

7.
Recent studies have shown that participants can keep track of the magnitude and direction of their errors while reproducing target intervals (Akdoğan & Balcı, 2017) and producing numerosities with sequentially presented auditory stimuli (Duyan & Balcı, 2018). Although the latter work demonstrated that error judgments were driven by the number rather than the total duration of sequential stimulus presentations, the number and duration of stimuli are inevitably correlated in sequential presentations. This correlation empirically limits the purity of the characterization of “numerical error monitoring”. The current work expanded the scope of numerical error monitoring, as a form of “metric error monitoring”, to numerical estimation based on a simultaneously presented array of stimuli in order to control for temporal correlates. Our results show that numerical error monitoring ability applies to magnitude estimation in these more controlled experimental scenarios, underlining its ubiquitous nature.

8.
A paradoxical implication of Kraemer's expression for the large-sample standard error of Brogden's form of the biserial correlation is identified, and a new expression is given which does not imply the paradox. However, numerical evidence is presented which calls into question the correctness of the expression.

9.
The aim of this study was to gain a better understanding of how the run pattern varies as a consequence of main error correction versus secondary error correction. Twenty-two university students were randomly assigned to one of two training conditions: ‘main error’ (ME) and ‘secondary error’ (SE) correction. The rear-foot strike at touchdown was hypothesized to be the ‘main error’, whereas an incorrect shoulder position (i.e., behind the base of support) was the ‘secondary error’. In order to evaluate any changes in run pattern at the instant of foot touchdown, the ankle, knee and hip joint angles, the height of toe and heel (with respect to the ground), and the horizontal distance from the heel to the projected center of mass on the ground were measured. After the training intervention, the ME group showed a significant improvement in the run pattern at the instant of foot touchdown in all kinematic parameters, whereas no significant changes were found in the SE group. The results support the hypothesis that the main error can have a greater influence on movement patterns than a secondary error. Furthermore, the findings highlight that a correct diagnosis and the correction of the ‘main error’ are fundamental for greater run pattern improvement.

10.
Previous studies showed that random error can explain overconfidence effects typically observed in the literature. One of these studies concluded that, after accounting for random error effects in the data, there is little support for cognitive‐processing biases in confidence elicitation. In this paper, we investigate more closely the random error explanation for overconfidence. We generated data from four models of confidence and then estimated the magnitude of random error in the data. Our results show that, in addition to the true magnitude of random error specified in the simulations, the error estimates are influenced by important cognitive‐processing biases in the confidence elicitation process. We found that random error in the response process can account for the degree of overconfidence found in calibration studies, even when that overconfidence is actually caused by other factors. Thus, the error models say little about whether cognitive biases are present in the confidence elicitation process. Copyright © 2008 John Wiley & Sons, Ltd.
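The core mechanism, apparent overconfidence arising from unbiased random response error alone, can be reproduced in a small simulation. This is a generic illustration in the spirit of such error models, not the paper's four models; the half-range confidence scale and the noise SD of 0.15 are assumptions:

```python
import random

random.seed(42)
N = 100_000
NOISE_SD = 0.15  # assumed magnitude of random response error

records = []
for _ in range(N):
    p = random.uniform(0.5, 1.0)  # true probability of answering correctly
    # Stated confidence = true probability + random error, clipped to the
    # half-range scale [0.5, 1.0]. There is no cognitive bias in this model.
    conf = min(1.0, max(0.5, p + random.gauss(0.0, NOISE_SD)))
    correct = 1 if random.random() < p else 0
    records.append((conf, correct))

# In the highest-confidence bin, accuracy falls short of stated confidence:
# regression toward the mean masquerades as overconfidence.
high = [(c, k) for c, k in records if c >= 0.9]
mean_conf = sum(c for c, _ in high) / len(high)
mean_acc = sum(k for _, k in high) / len(high)
print(f"stated confidence {mean_conf:.3f} vs accuracy {mean_acc:.3f}")
```

The high-confidence items are disproportionately those whose noise term happened to be positive, so their average true accuracy sits below their average stated confidence even though the judgments are unbiased overall.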

11.
Deborah Mayo  Jean Miller 《Synthese》2008,163(3):305-314
We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987; Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest another approach to meta-methodology based on a conglomeration of tools and strategies (from statistical modeling, experimental design, and related fields) that affords forward-looking procedures for learning from error and for controlling error. The resulting “error statistical” appraisal is empirical: methods are appraised by examining their capacities to control error. At the same time, this account is normative, in that the strategies that pass muster are claims about how actually to proceed in given contexts to reach reliable inferences from limited data.

12.
Methods to determine the direction of a regression line, that is, to determine the direction of dependence in reversible linear regression models (e.g., xy vs. yx), have experienced rapid development within the last decade. However, previous research largely rested on the assumption that the true predictor is measured without measurement error. The present paper extends the direction dependence principle to measurement error models. First, we discuss asymmetric representations of the reliability coefficient in terms of higher moments of variables and the attenuation of skewness and excess kurtosis due to measurement error. Second, we identify conditions where direction dependence decisions are biased due to measurement error and suggest method of moments (MOM) estimation as a remedy. Third, we address data situations in which the true outcome exhibits both regression and measurement error, and propose a sensitivity analysis approach to determining the robustness of direction dependence decisions against unreliably measured outcomes. Monte Carlo simulations were performed to assess the performance of MOM-based direction dependence measures and their robustness to violated measurement error assumptions (i.e., non-independence and non-normality). An empirical example from subjective well-being research is presented. The plausibility of model assumptions and links to modern causal inference methods for observational data are discussed.
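The attenuation of skewness due to measurement error mentioned here has a known closed form: for x = t + e with independent, symmetric error e, skew(x) = skew(t) · ρ^(3/2), where ρ = var(t)/var(x) is the reliability. A minimal simulation check (the distributional choices are illustrative, not the paper's data):

```python
import math
import random

random.seed(7)
N = 200_000
DF = 4        # chi-square df; population skewness of t is sqrt(8/DF)
ERR_SD = 1.0  # sd of the normal measurement error

def skew(xs):
    """Moment-based sample skewness."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((v - m) ** 2 for v in xs) / n
    m3 = sum((v - m) ** 3 for v in xs) / n
    return m3 / m2 ** 1.5

# True score t: a chi-square(DF) variable standardized to variance 1.
t = [(random.gammavariate(DF / 2, 2.0) - DF) / math.sqrt(2 * DF) for _ in range(N)]
# Observed score x: true score plus independent normal measurement error.
x = [v + random.gauss(0.0, ERR_SD) for v in t]

reliability = 1.0 / (1.0 + ERR_SD ** 2)             # var(t) / (var(t) + var(e))
predicted = math.sqrt(8 / DF) * reliability ** 1.5  # skew(t) * rho^(3/2)
print(f"observed skew {skew(x):.3f}, predicted {predicted:.3f}")
```

With reliability 0.5, the observed skewness is attenuated to about 0.35 of its true value, which is why ignoring measurement error can flip higher-moment-based direction dependence decisions.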

13.
The error theory is a metaethical theory that maintains that normative judgments are beliefs that ascribe normative properties, and that these properties do not exist. In a recent paper, Bart Streumer argues that it is impossible to fully believe the error theory. Surprisingly, he claims that this is not a problem for the error theorist: even if we can’t fully believe the error theory, the good news is that we can still come close to believing the error theory. In this paper I show that Streumer’s arguments fail. First, I lay out Streumer’s argument for why we can’t believe the error theory. Then, I argue against the unbelievability of the error theory. Finally, I show that Streumer’s positive proposal that we can come close to believing the error theory is actually undermined by his own argument for why we can’t believe the error theory.

14.
This study investigated the association between exercise type and both the inhibition of prepotent responses and error detection. In total, 75 adults (M = 68.88 years) were classified into one of three exercise groups: regular participants in open-skill forms of exercise, regular participants in closed-skill forms of exercise, and those who exercised only irregularly. The participants completed a Stroop task and a task-switching task while event-related brain potentials (ERPs) were recorded. The results revealed that regular exercisers displayed faster reaction times (RTs) in the Stroop task compared with irregular exercisers. The open-skill exercisers exhibited smaller N200 and larger P300a amplitudes in the Stroop task compared with irregular exercisers. Furthermore, the open-skill exercisers showed a tendency toward shorter error-related negativity latencies in the task-switching test. The findings suggest that older adults may gain extra cognitive benefits in areas such as inhibitory functioning and error processing from participating in open-skill forms of physical exercise.

15.
Electrophysiological analysis of error monitoring in schizophrenia
In this study, the authors sought to determine whether abnormalities exhibited by schizophrenia patients in event-related potentials associated with self-monitoring (the error-related negativity, ERN, and the correct response negativity, CRN) persist under conditions that maximize ERN amplitude, and to examine relationships between the ERN and behavior in schizophrenia. Participants performed a flanker task under two contingencies: one encouraging accuracy and another emphasizing speed. Compared with healthy participants, schizophrenia patients showed a reduced ERN in the accuracy condition and an enhanced CRN in the speed condition. The amplitude of a later ERP component, the error positivity, did not differ between groups in either task condition. Reduced self-correction and increased accuracy following errors were associated with larger ERNs in both groups. Thus, ERN generation appears to be abnormal in schizophrenia patients even under conditions demonstrated to maximize ERN amplitude; however, functional characteristics of the ERN appear to be intact.

16.
In daily life, individuals constantly monitor the outcomes of their behavior and adjust in time to adapt to environmental changes. Whether individuals under stress can still effectively monitor their behavior and make adaptive adjustments, however, remains unknown. This study recruited 52 male university students and randomly assigned them to a stress group or a control group. The Trier Social Stress Test (TSST) was used to induce a stress response, combined with the Error Awareness Task (EAT) to explore error monitoring and post-error adjustment under acute stress. The stress indices showed that, after the stress task, salivary cortisol, heart rate, self-reported perceived stress, and negative affect were all significantly higher in the stress group than in the control group, indicating that acute stress was successfully induced. Behaviorally, the stress group showed significantly lower error-awareness accuracy and significantly shorter error-awareness reaction times than the control group. Furthermore, within the stress group, accuracy on trials following aware errors was significantly lower than on trials following unaware errors, and the stress group's accuracy on trials following aware errors was also lower than that of the control group. These results indicate that acute stress reduces the monitoring of error responses; even when errors are detected, behavioral monitoring and regulation are poorer. This study shows that acute stress can impair the behavioral monitoring system, reducing behavioral adaptability.

17.
Kipros Lofitis 《Ratio》2020,33(1):37-45
An error theory about moral reasons is the view that ordinary thought is committed to error, and that the alleged error is the thought that moral norms (expressing alleged moral requirements) invariably supply agents with sufficient normative reasons (for action). In this paper, I sketch two distinct ways of arguing for the error theorist's substantive conclusion that moral norms do not invariably supply agents with sufficient normative reasons. I am primarily interested in the somewhat neglected way, which I call the alternative route. A reason for this is that it seems a genuine question whether the alternative route towards the substantive conclusion need be as troubling to the moralist as the standard route. My hunch is that it is not. Though the alternative error theory denies justification to genuinely moral acts, it also denies it to acts born out of self-interest or immorality. If the alternative theory is true, the moralist can at least hold on to the claim that if genuinely moral considerations fail to provide agents with reasons for action, nothing else (of the sort) does.

18.
Although many common uses of p-values for making statistical inferences in contemporary scientific research have been shown to be invalid, no one, to our knowledge, has adequately assessed the main original justification for their use, which is that they can help to control the Type I error rate (Neyman & Pearson, 1928, 1933). We address this issue head-on by asking a specific question: Across what domain, specifically, do we wish to control the Type I error rate? For example, do we wish to control it across all of science, across all of a specific discipline such as psychology, across a researcher's active lifetime, across a substantive research area, across an experiment, or across a set of hypotheses? In attempting to answer these questions, we show that each one leads to troubling dilemmas wherein controlling the Type I error rate turns out to be inconsistent with other scientific desiderata. This inconsistency implies that we must make a choice. In our view, the other scientific desiderata are much more valuable than controlling the Type I error rate and so it is the latter, rather than the former, with which we must dispense. But by doing so—that is, by eliminating the Type I error justification for computing and using p-values—there is even less reason to believe that p is useful for validly rejecting null hypotheses than previous critics have suggested.

19.
This article discusses alternative procedures to the standard F-test for ANCOVA in the case where the covariate is measured with error. Both a functional and a structural relationship approach are described. Examples of both types of analysis are given for the simple two-group design. Several cases are discussed, and special attention is given to issues of model identifiability. An approximate statistical test based on the functional relationship approach is described. On the basis of Monte Carlo simulation results, it is concluded that this testing procedure is to be preferred to the conventional F-test of the ANCOVA null hypothesis. It is shown how the standard null hypothesis may be tested in a structural relationship approach. It is concluded that some knowledge of the reliability of the covariate is necessary in order to obtain meaningful results.

20.
Errorless learning (EL) is an approach in which errors are eliminated or reduced as much as possible while new information or skills are learned. In contrast, during trial-and-error (or errorful) learning (TEL), errors are not reduced and are often even promoted. There is a complex and conflicting pattern of evidence as to whether EL or TEL results in better memory performance. One major confound in the extant literature is that most EL studies have not controlled for the number of errors made during TEL, resulting in large variability in the number of errors committed. This variability likely explains why studies on the cognitive underpinnings of EL and TEL have produced mixed findings. In this study, a novel object-location learning task was employed to examine EL and TEL in 30 healthy young adults. The number of errors was systematically manipulated, allowing us to investigate the impact of error frequency on learning outcome. The results showed that recall from memory was significantly better during EL. However, the number of errors made during TEL did not influence performance in young adults. Altogether, our novel paradigm is promising for measuring EL and TEL, allowing for more accurate analyses to understand the impact of error frequency on a person’s learning ability and style.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号