Negative and affirmative precepts |
| |
Authors: | Francis C. Wade |
| |
Affiliation: | (1) Marquette University, USA |
| |
Abstract: | That negative precepts play the critical role in the generalization principle is a consequence of the relationship of negative to affirmative precepts, i.e., that the negative precepts give the essential negative condition for observing the affirmative precepts. This relationship is in turn based on the nature of: 1) the negative precept, which obliges to inaction and consequently demands action in order to violate it; 2) the affirmative precept, which obliges to action and can be violated by inaction. Since action requires agency, and agency involves more responsibility than does the non-agency present in violating affirmative precepts, we conclude that violating negative precepts demands more responsibility, and consequently that they oblige more than do affirmative precepts. To emphasize this critical role of agency I shall conclude with an example proposed by Michael Tooley: "Imagine a machine which contains two children, John and Mary. If one pushes a button, John will be killed, but Mary will emerge unharmed. If one does not push the button, John will emerge unharmed, but Mary will be killed. In the first case one kills John, while in the second case one merely lets Mary die. Does one seriously wish to say that the action of intentionally refraining from pushing the button is morally preferable to the action of pushing it, even though exactly one person will perish in either case?" Tooley's judgment on this example indicates that the outcome (in either case one person will perish) is the sole moral determinant (intentions do not enter this case) and that the agency of pushing the button is of no moral significance. Yet, if you, the reader, stood before this machine and tried to decide what you should do, the fact of your agency in pushing the button would control your decision. Consider pushing the button. What reason could you have for that action? That otherwise Mary would die. But who can say that Mary's life is more valuable than John's? That Mary will die is no valid reason for pushing the button. But what of saving Mary's life? You cannot do that without yourself actively killing John. But if you do not, Mary will die. This is true, but she will not die from your agency, and this is critical to your choice. What a machine may or may not do may or may not be under your control. What you do is under your control, and you may not do evil, not even that good may come of it. Consequently, you would be forced to say: the decision of intentionally refraining from pushing the button is morally preferable to the action of pushing it, even though exactly one person perishes in either case. |
| |
Keywords: | |