The theory of moral status I have developed is designed to allow us to make moral judgments about the treatment of various kinds of moral patients. A moral patient's particular grade of moral stature is proposed as a normative factor that determines, at least in part, whether certain kinds of acts or omissions regarding that individual are morally permissible, required, or forbidden.
I have illustrated how my notion of moral stature can be applied in several cases, e.g., human embryos, sentient fetuses, and infants. I have argued that while all of these kinds of moral patients have intrinsic value and are human beings, they have different grades of moral stature, and thus may ethically be treated differently. I have also illustrated how this account applies to various kinds of nonhuman moral patients. Psychological organisms with greater moral stature, e.g., primates, cetaceans, and elephants, deserve greater respect than those with less developed or complex psychological capacities. Organisms that are not sentient, which have only the basic level of intrinsic moral standing, can be considered moral patients, but the degree of respect they command is weaker than that of psychological organisms. Moral agents, particularly autonomous moral agents, that is, persons like you and me, have the highest grade of moral stature. We are on a moral plateau that gives us both rights and responsibilities as members of the human moral community. On my view, our responsibilities to protect vulnerable persons are weightier than comparable responsibilities to protect nonhumans, other things being equal.
But it is necessary to introduce another normative factor to make this account better able to capture and explain our moral intuitions, a factor that I will refer to as moral gravity. The intuitive idea of moral gravity is that some of the things that affect a moral patient's well-being, or interests, are more important than others. The notion of the "gravity" of different kinds of human rights violations can be used to illustrate this idea.
Amnesty International (an organization I have been associated with in various capacities for many years) has a mission statement in which it commits itself to oppose certain kinds of grave abuses of human rights. What are grave abuses? Well, torture, rape, murder, genocide, and arbitrary imprisonment are definitely on this list. Not on the list of grave abuses that Amnesty International works on are, e.g., copyright infringement, infringements of personal privacy, denial of voting rights, and various kinds of labor abuses, for instance, requiring workers to work more than 60 hours a week. The international standards promulgated by the ILO (International Labour Organization) specify that the normal work week should be no more than 48 hours, with a maximum of 12 additional hours of voluntary overtime permitted. When workers are required to work for more than 60 hours per week, it is a human rights abuse. But it is not a grave abuse in Amnesty's view. There are probably several hundred million workers in the world who are regularly required to work more than 60 hours per week (think of young Wall Street lawyers in high-powered law firms, or medical interns at busy urban hospitals, not just low-wage workers in Asian sweatshops). In terms of the numbers of people whose human rights are affected, the abuse of overtime is a major human rights issue. But because the form of abuse is itself not considered "grave," it does not get much attention from human rights organizations like Amnesty International.
It is possible to think of moral gravity as a property of certain kinds of interests. The interests that have the greatest gravity are the ones most closely connected to an individual's survival, well-being, and freedom. These kinds of interests are sometimes called "ultimate interests" because they are typically the sorts of things that people regard as ultimately valuable, or valuable for their own sake. They are interests like: not being killed, not being tortured, not having one's freedoms restricted, not being enslaved, having enough to eat, having a safe place to live, and so forth. There are lots of other interests that people have that do not have much gravity compared to these kinds of ultimate interests. For instance, I am interested in riding bicycles. I like it as a form of exercise and a form of transportation. I really enjoyed living in Copenhagen for a few months last fall because I got to ride my bike every day on very safe bicycle paths, along with lots of other people who, like me, have a liking for biking. I would be unhappy if I could not ride my bicycle, but, as the saying goes, it wouldn't kill me not to. I could live without it.
How then does moral gravity affect our moral judgments? The straightforward answer is that actions which threaten or harm an interest that has greater moral gravity for the moral patient concerned are more serious than those that threaten or harm an interest that has lesser moral gravity. The prohibition against killing other persons is the most serious moral prohibition because the interest we have in staying alive is the ultimate interest with the greatest moral gravity. If we cease to be alive, nothing much else matters. The prohibition against theft of personal property is far less serious than the ones against killing and other forms of physical assault against persons, e.g., rape and torture. That we have an interest in our property is clear, but whether it is an ultimate interest is doubtful. Like other Americans, I own a lot of things. But there is nothing I own that I could not live without. If you steal my bike it harms me, but the interest involved just isn't that grave. I can get another one.
But the gravity of the theft of a bicycle can be different for different people. Consider the classic film The Bicycle Thief in which the theft of a bicycle leads to a cascade of hardships for a whole family in post-war Rome. In this case, the bicycle was essential for a man's job that he needed to feed his family, and so the theft of his bicycle was a graver offense than if someone were to steal my bicycle. So this normative factor is going to be context-sensitive and it is therefore going to be difficult to say general things about it. Nevertheless, I think the intuition we have about it is sufficiently clear. It is possible to extend the idea of moral gravity to other kinds of moral patients.
Nonhuman animals which are sentient have different kinds of interests than human beings do, but it seems reasonable to suppose that they too have ultimate interests, such as avoiding pain and suffering, keeping their freedom, having enough to eat, having a safe place to live, and so forth. Non-psychological organisms, such as plants and bacteria, do not have interests, but may be said to have a "good of their own." A plant needs sunlight, water, and soil in order to grow. These things constitute the good for plants, such that depriving a plant of them would kill it, while doing other things to it, like picking its fruit or pruning its branches, would not. It is possible to think that non-living moral patients, that is, those which have only derived moral status, for instance, works of art, also have goods of their own. It is better for a statue to be intact than to have parts missing or destroyed, for instance. The act of destroying a valuable work of art can be considered a moral offense, but not nearly as grave as killing a sentient life form or a person, because works of art have only derived moral status.
Assuming that the idea of moral gravity is sufficiently clear, we can now attempt to state the way in which moral gravity interacts with moral stature. All normative ethical theories require an account of the ways in which different normative factors interact with one another. This is as true for monistic theories, ones that propose a single fundamental moral principle, as for pluralistic theories, those that propose several fundamental moral principles. Although I am a pluralist, I have been focusing attention on just one moral principle, the Vulnerability-Care Principle (VCP). Nevertheless, it is necessary to provide an account of how the different normative factors associated with the VCP interact with one another because, like other principles that apply generally, this principle can generate conflicts with itself. That is, we can have conflicting moral responsibilities to protect different vulnerable moral patients.
We can think of moral gravity and moral stature as two normative factors that combine to determine the weight of a moral responsibility to protect the vulnerable. In physics, an object's weight is its mass multiplied by the strength of the gravitational field it is in (W = mg), which is a special case of Newton's Second Law:
F = ma: the net force on an object is equal to the mass of the object multiplied by its acceleration.
It would be nice if there were some way to quantify the notions of moral gravity and moral stature so that we could calculate the precise weights of various kinds of moral responsibilities. But alas, I know of no such calculus and doubt that we can ever have one, because gravity and stature are not the only normative factors and the VCP is not the only fundamental moral principle. But we can use the notion of moral weight analogously to the way we use and understand weight in physics.
In this analogy, the moral gravity of an interest or good plays the role of the A factor, and the moral stature of the patient plays the role of the M factor. The moral weight of a responsibility or obligation is then obtained by (figuratively) multiplying A by M. When one does this, one gets linear relationships: holding M (stature) constant, greater A (gravity) yields greater F (moral weight); similarly, holding A (gravity) constant, greater M (stature) yields greater weight, and conversely in each case. The first relationship has been illustrated already with the examples of grave versus non-grave human rights abuses. The responsibility, say, not to kill another person comes out as weightier than the responsibility not to force people to work excessive overtime. The second relationship has been discussed earlier with respect to various duties towards nonhuman animals. For instance, I opined earlier that the duty not to kill a human person is weightier than a comparable duty not to kill a nonhuman psychological organism. I also argued that since not all human beings have the same moral stature, it is a less weighty offense to kill a human embryo than to kill a sentient fetus, which in turn is a less weighty offense than to kill an infant or child, who is assumed, according to the Human Rights Principle, to be a full member of the human moral community. Killing non-psychological organisms is not a very weighty matter because while they are moral patients and have some moral standing, their moral stature is slight compared to that of other kinds of moral patients.
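To make the multiplicative analogy concrete, here is a minimal sketch in code. The numerical stature (M) and gravity (A) values are entirely my own illustrative assumptions; the theory supplies no actual calculus, and the point is only the ordering that multiplication produces.

```python
# Toy model of "moral weight = stature x gravity" (the F = MA analogy).
# All numeric values below are made-up illustrative assumptions, not
# quantities the theory itself provides.

def moral_weight(stature: float, gravity: float) -> float:
    """Weight of a responsibility: stature (M) times gravity (A)."""
    return stature * gravity

# Hypothetical stature (M) scale: persons highest, embryos and
# non-psychological organisms lower.
STATURE = {"person": 1.0, "sentient_fetus": 0.6, "embryo": 0.2,
           "chimpanzee": 0.5, "plant": 0.05}

# Hypothetical gravity (A) scale: ultimate interests highest.
GRAVITY = {"not_being_killed": 1.0, "no_excessive_overtime": 0.2}

# Holding M constant, greater A yields greater weight:
assert (moral_weight(STATURE["person"], GRAVITY["not_being_killed"])
        > moral_weight(STATURE["person"], GRAVITY["no_excessive_overtime"]))

# Holding A constant, greater M yields greater weight:
g = GRAVITY["not_being_killed"]
assert (moral_weight(STATURE["person"], g)
        > moral_weight(STATURE["sentient_fetus"], g)
        > moral_weight(STATURE["embryo"], g)
        > moral_weight(STATURE["plant"], g))
```

Whatever particular numbers one plugged in, any choice respecting the stature and gravity orderings would reproduce the same comparative judgments, which is all the analogy is meant to deliver.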
Using this formula it is fairly easy to generate intuitive judgments about the weight of different moral responsibilities. One can test the theory by making up trolley car examples in which a moral agent (say, yourself) is in a forced choice situation in which you must choose between, say, killing a pet rabbit and killing a person. Tie them both to the tracks and let your moral intuitions tell you which way you would steer the trolley. If you would choose to kill the bunny, then your intuitions agree with what F = MA would predict. If not, then there is something wrong with my theory or something wrong with your moral intuitions, assuming, of course, that other things are equal, that is, that there are no other morally relevant normative factors that might affect the judgment.
For instance, suppose that the person is a mass murderer and that if you don't kill him by running over him with the trolley he will carry out an evil plan in which 1000 innocent people will be killed. I bet you didn't think of that right off the bat, did you? This would, however, introduce another normative factor besides M and A and would complicate the interaction of these factors with our moral judgments. So let's just hold these other possibly relevant normative factors in abeyance for the time being and keep it simple.
When we do this, however, things still get complicated quickly when we vary the M and the A simultaneously. How would you decide a forced choice between, say, depriving a psychological organism of its freedom, say by keeping it in a cage for the amusement of tourists, versus leaving tourists with no way to be amused by seeing animals in cages? Presumably, the interest the animal has in its freedom is pretty grave, that is, it has a high A, while the interest tourists have in being amused is not very grave at all; it has a low A. But tourists, assuming they are fully autonomous moral agents, have greater moral stature than nonhuman animals, so their interests, even their non-grave interests, will count for more.
So what answer do you come to? Is it morally permissible to keep animals in cages at tourist attractions or isn't it? On the basis of these two normative factors alone, I would judge that it isn't. But obviously even this simple case can get more complicated if you begin asking questions about, say, what happens to the animals if you close the zoo, and what happens to the livelihoods of the people who own and operate the zoo. But we have agreed to leave out these other normative factors. If we do, then it would seem to follow that sometimes human interests ought to be sacrificed in order to protect the important interests of nonhuman animals.
This is precisely the result I wished to obtain. It is the result that follows from adopting a biocentric rather than a homocentric theory of moral status. Things get even more complicated when we vary the numbers of individual moral patients involved. Suppose that rather than just one chimpanzee on the trolley tracks we have 100 of them. And suppose on the other side we have one human person. While the lives of chimps are less intrinsically valuable than those of human persons, the lives of 100 chimps are collectively more valuable than the life of one person. If we had 100 people on one of the tracks and 1 on the other, I doubt many of you would choose to steer the trolley towards the 100. But what if there are 100 chimps, or dogs, or parrots? Should we still prefer to save one human life if doing so means ending 100 animal lives? Suppose it is not 100, but 1000 animal lives? Or pick an even higher number of your choice. If there is no threshold at which you would be willing to say it is morally preferable to sacrifice a human being to save some number of sentient animals, then my theory is not for you.
You want a theory that assigns zero weight to our moral responsibilities towards nonhuman animals and nature; that is, you want a homocentric ethics of the traditional kind. On this view, no matter how grave the interest or good of a nonhuman living thing is to its survival or well-being, its interests can never override any human interest, no matter how trivial. One gets this sort of view by assigning zero to the animal's M factor, its moral stature. If you multiply zero by anything you get zero as the moral weight of the responsibility to protect. On the other hand, if you ascribe any non-zero number to an animal's M, its moral stature, and also assign some non-zero number to its A, the gravity of its interest or good, then by adding more individuals we increase the moral weight of the responsibility to protect. At some point, that weight will be greater than the weights generated by conflicting human interests, and the latter will be outweighed by the former. That is, as moral agents, we will be obliged to sacrifice some human interests in order to fulfill an even weightier responsibility to protect some vulnerable nonhuman organisms.
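The contrast between zero stature and aggregation can be sketched numerically. The integer "stature units" below are hypothetical values of my own choosing; the structural point is that with zero stature no count of animals ever matters, while any positive stature yields a finite threshold.

```python
# Sketch of aggregation under the weight model. The integer stature
# units are assumed for illustration; only the contrast between zero
# and non-zero stature is doing any work.

def total_weight(stature: int, gravity: int, count: int) -> int:
    """Weight of the responsibility to protect `count` individuals."""
    return stature * gravity * count

PERSON_STATURE = 100   # assumed units
CHIMP_STATURE = 1      # assumed: non-zero but far smaller
LIFE_GRAVITY = 1       # the ultimate interest in not being killed

one_human_life = total_weight(PERSON_STATURE, LIFE_GRAVITY, 1)

# Homocentric view: stature 0, so no number of animals ever outweighs
# even one human life (or any human interest, however trivial).
assert total_weight(0, LIFE_GRAVITY, 1_000_000) == 0

# Biocentric view: any non-zero stature crosses the threshold at some
# finite count of animals.
n = 1
while total_weight(CHIMP_STATURE, LIFE_GRAVITY, n) <= one_human_life:
    n += 1
print(n)  # → 101: the first count at which the chimps' claim outweighs one human life
```

On these assumed numbers the threshold falls at 101 chimpanzees; different stature assignments move the threshold around, but so long as M is positive, no assignment eliminates it.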
The principle of moral weight here operates to settle conflicts between two different moral responsibilities generated by the VCP. In general, the greater the weight of a moral responsibility, the harder it is to override it in favor of a conflicting moral responsibility. On my view, there are few if any non-overridable moral responsibilities; all or nearly all of them are defeasible under circumstances in which combinations of normative factors give greater weight to some other moral responsibility requiring a different course of action. I do not think that moral stature and gravity are the only normative factors that determine the weight of our moral obligations. There are other normative factors that can affect our "all things considered" evaluation of moral cases.
But I do think that these notions capture something important and useful about the way in which we ordinarily think about responsibilities to protect various categories of moral patients. The notions of moral stature and moral gravity can be part of a new philosophical vocabulary for making cross-species comparisons of our moral responsibilities. They can also, I will argue, be used along with other normative factors to help explain some other moral intuitions we have about our responsibilities to one another as members of the human moral community.