Moral Weight and Prima Facie Duties

Earlier I introduced the concept of moral gravity and illustrated it with some examples of how moral stature interacts with moral gravity to determine the weight of our moral obligations to protect the vulnerable. I now want to develop this idea further into a general schema for thinking about the weight of moral obligations in general.

Intuitively we all think that certain moral obligations are more compelling, that is, weightier, than others. This kind of intuition is particularly important when we are called upon to resolve conflicts among our prima facie duties. The notion of a prima facie duty was introduced by W. D. Ross as a way of reinterpreting the notion of categorical imperatives in Kantian ethics. Although Ross gave various interpretations of this notion, it is now generally understood among moral philosophers that an agent has a prima facie duty to do something just in case he or she has some reason to think that he or she has a moral obligation to act in a certain way, and that reason does not involve an appeal to personal inclination, to self-interest, or to the total consequences of performing that action. He writes:
When a plain man fulfils a promise because he thinks he ought to do so, it seems clear that he does so with no thought of its total consequences, still less with any opinion that these are likely to be the best possible. He thinks in fact much more of the past than of the future. What makes him think it right to act in a certain way is the fact that he has promised to do so – that and, usually, nothing more.
Ross provided examples of six prima facie duties: fidelity, gratitude, justice, beneficence, nonmaleficence, and self-improvement, but suggested that there might be many more. So, for instance, in addition to such standard prima facie duties as “One ought to keep one's promises” or “One ought to refrain from injuring others,” there might be prima facie duties such as “One ought not to discriminate against persons on account of their race,” “One ought not to believe things which are not true,” or “One ought to ensure that those accused of crimes get fair trials.”

In each case of a proposed prima facie duty, moral agents are thought to have a pro tanto reason for acting in the way the description of the duty specifies, just because it is the sort of reason that counts as a moral reason for action. As I will use this expression, one has a pro tanto moral reason to do A when one is aware of a state of affairs S that provides a moral reason for A-ing, even if there are other states of affairs of which one might become aware that would give one a moral (or other) reason not to A.

Prima facie duties are usually contrasted with “all things considered” moral duties, that is, ones in which all of the morally relevant factors have been taken into consideration in a process of moral deliberation before deciding whether the agent does in fact have an actual or operative duty to do something. As Shelly Kagan explains it: “To say that something is your duty all things considered is to say that you are required to do it given all of the relevant factors that are relevant to the case at hand. In contrast, to say of something that it is a prima facie duty is only to note the presence of one or more of the factors that would generate an all things considered duty—in the absence of conflicting factors.”

Ross himself expressed some dissatisfaction with the term “prima facie duty” and suggested that perhaps “conditional duty” or “claim” might better capture his meaning, but ultimately rejected these alternatives. The term “prima facie” is unfortunate because it suggests that the duty is only apparent, and the phrase “all things considered” is rather cumbersome. In my discussion I will use the term “standing moral responsibility” in place of “prima facie duty,” and the term “actual” (or sometimes “operative”) moral obligation for the notion of an all things considered duty.

So then, if moral agents have in general a standing moral responsibility to protect vulnerable moral patients, it will often be the case that different special responsibilities generated by this general standing responsibility come into conflict with one another. How does one resolve these kinds of conflicts? Ross fell back on moral intuition, but perhaps we can do better than that. The idea that there might be a formula of some kind that we could use to actually calculate the weight of our moral responsibilities has appealed to many philosophers, though none has come up with a satisfactory account. I don't think my account is ultimately satisfactory either: it is offered as a heuristic device to aid understanding of the set of normative factors that might potentially influence judgments of this kind.

The basic schema is as follows:

W = A × O × P

where W stands for the weight of a particular prima facie duty or standing moral responsibility, A stands for agent-relative factors, O stands for the kind of obligation involved, that is, the gravity of the interest or value that is being protected by having such an obligation, and P stands for patient-relative moral factors, such as the moral stature of the patients to whom the agent's obligation is directed.

We can illustrate how this schema works using simple thought experiments in which we hold two of the three variables constant and vary the third. For example, if we specify that A is a competent moral agent with no special relationship to the patient involved, and that P is a normal healthy human child, then we can vary the O factor as follows. Suppose O is the obligation not to murder. This is a weighty moral obligation. It is weightier than the obligation not to injure, which is in turn weightier than the obligation to provide some benefit to a moral patient. So, if killing is given the value 100, injuring the value 75, and benefiting the value 25, and we set A and P to 1 each, then the schema predicts that the prima facie responsibility not to kill the child is weightier than the similar responsibility not to injure the child, which is weightier than the responsibility to provide the child with a benefit. This ordering is in turn explained by the fact that the underlying interests involved have greater gravity in relation to the patient's well-being or good. So O is an independent variable that will affect our judgment about the weights of various moral obligations we might have: other things being equal, the graver the interest involved, the weightier the corresponding moral obligation.
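To make the arithmetic explicit, here is a minimal sketch of the basic schema in Python. The numbers are only the illustrative placeholders used above (killing 100, injuring 75, benefiting 25; A and P set to 1), and the function and variable names are my own; this is a toy calculation, not a claim about how the values should actually be fixed.

```python
# A toy rendering of the basic schema W = A x O x P, using the illustrative
# placeholder values from the text (not measured quantities).

def weight(agent_factor, obligation_gravity, patient_stature):
    """Heuristic weight of a prima facie duty (standing moral responsibility)."""
    return agent_factor * obligation_gravity * patient_stature

A = 1  # a competent moral agent with no special relationship to the patient
P = 1  # a normal healthy human child (stature normalized to 1 here)
gravities = {"not to kill": 100, "not to injure": 75, "to benefit": 25}

for duty, O in gravities.items():
    print(f"duty {duty}: W = {weight(A, O, P)}")
# The output ranks the duties: not to kill (100) > not to injure (75) > to benefit (25).
```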

But now let's hold the O factor constant and vary the P factor, that is, the moral stature of the patient involved. Suppose that one patient is the normal healthy child as above and another is a sentient non-human animal, say a rabbit. Let's put the child and the rabbit on the trolley tracks so that the moral agent involved is forced to choose between injuring the child and injuring the rabbit. In this case, we would assign the child a high P value, say 100, and the rabbit a smaller, but not insignificant, value, say 50. Since the A and O factors are held constant, the schema predicts that our choice should be to injure the rabbit: our obligation to refrain from injuring sentient nonhuman animals is less weighty than our corresponding obligation not to injure human beings, because the former have lower moral stature. In a forced choice between these two prima facie responsibilities, then, the schema predicts (roughly) what intuition says, namely that we should choose to save the child.

But this is still far too simple to account for our moral intuitions. We can see this by introducing quantities of moral patients.

W = A × O × (nP)

where n is the number of patients. Suppose that instead of one rabbit on the tracks we have 50 rabbits, and one is still forced to choose between injuring one child and injuring the 50 rabbits. If we use the values for moral stature I suggested earlier, the obvious conclusion would be that the obligation not to injure 50 rabbits is 25 times weightier than the obligation not to injure one child. But is it? We can fudge this by assigning a much lower value to the nonhuman patients involved, say 1 instead of 50, but that seems arbitrary, just an ad hoc way of saving the intuition that human interests matter more than nonhuman interests. As I said before, if your view is that human interests, no matter how trivial, will always outweigh animal interests, then my theory is not for you.
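A quick calculation shows where the trouble comes from. This sketch uses the same placeholder values as before (injuring = 75; child stature = 100, rabbit stature = 50) and simply applies the nP extension; taken literally, the arithmetic delivers exactly the counterintuitive verdict just discussed.

```python
# The extended schema W = A x O x (nP), with the placeholder values from the text.

def weight(A, O, n, P):
    """Heuristic weight with n patients, each of moral stature P."""
    return A * O * (n * P)

A, O_injure = 1, 75

one_child     = weight(A, O_injure, n=1,  P=100)  # 7500
one_rabbit    = weight(A, O_injure, n=1,  P=50)   # 3750: injure the rabbit, spare the child
fifty_rabbits = weight(A, O_injure, n=50, P=50)   # 187500

print(fifty_rabbits / one_child)  # 25.0 -- fifty rabbits come out 25 times weightier
```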

But clearly quantities do matter. We can see this using standard cases in which there is a forced choice between steering the trolley so that it collides with one innocent, helpless person who is tied to the tracks or steering it so that it collides with five innocent, helpless persons who are tied to the other track. Almost everyone who is asked about such cases concludes that it is preferable to injure or kill one person rather than five, other things being equal.

But what if that one person is your own mother? Here is where the notions of derived or observer-relative moral status and moral partiality come into play, raising the stakes and changing the moral calculation. Suppose that on one track lies your own mother, and on the other a complete stranger. The additional moral stature your own mother has in your eyes would give her life greater value than the life of a mere stranger, and so, if forced to choose, most of us would (perhaps reluctantly) steer the trolley into the stranger. Except, of course, if your own mother were an evil-doer who abused you as a child. That kind of consideration lowers your mother's moral stature and makes it more likely that you would choose to sacrifice her for the stranger, who, presumably, is innocent. These kinds of considerations have nothing to do with the intrinsic moral standing of the patient; rather, they are due to the patient's derived moral status. In order to account for such factors, we shall have to make the proposed schema still more complex:

W = A × O × n(P ± D)

where D represents the increment or decrement of moral stature in a moral patient due to their relational or derived moral status.

We probably also have to complicate the A factor, which we have so far taken to represent a normal adult human moral agent. In addition to their standing moral responsibilities, moral agents can also have various kinds of special responsibilities due to the particular roles they occupy, their own past actions, the promises they have made, and so forth. These additional agent-relative factors can also affect the weight of an obligation, sometimes in crucial ways. So, for instance, if a physician and a non-physician are both on the scene when someone suffers a heart attack, most people would, I think, judge that the physician's obligation to attempt to help that person is weightier than the obligation of the non-medical bystander. The obligation would be weightier still if the medical professional were a trained EMT who had in fact been summoned to the scene by a 911 call. Such an individual has a strict duty to provide aid to the patient in distress, which ordinary bystanders do not have. So, then, we need to make the schema look something like this:

W = (A ± R) × O × n(P ± D)

where R stands for agent-relative factors, such as social roles, that function to increase or decrease the agent's level of responsibility.
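Putting the pieces together, here is a sketch of the full schema. The adjustment values for R (the summoned EMT's role) and D (the derived stature of one's own mother) are arbitrary numbers chosen purely to show the direction in which these factors push the weights; the text itself assigns them no magnitudes.

```python
# The full schema W = (A +/- R) x O x n(P +/- D), with arbitrary illustrative numbers.

def weight(A, R, O, n, P, D):
    """Heuristic weight with agent-relative adjustment R and patient-relative adjustment D."""
    return (A + R) * O * (n * (P + D))

# Patient-relative adjustment (D): your own mother vs. a stranger on the tracks.
O_injure = 75
mother   = weight(A=1, R=0, O=O_injure, n=1, P=100, D=20)  # derived stature in your eyes
stranger = weight(A=1, R=0, O=O_injure, n=1, P=100, D=0)
print(mother > stranger)   # True: partiality toward your mother tips the balance

# Agent-relative adjustment (R): a summoned EMT vs. an ordinary bystander at a heart attack.
O_aid = 60  # illustrative gravity for the duty to aid someone whose life is in danger
emt       = weight(A=1, R=0.5, O=O_aid, n=1, P=100, D=0)
bystander = weight(A=1, R=0,   O=O_aid, n=1, P=100, D=0)
print(emt > bystander)     # True: the EMT's role makes the duty to aid weightier
```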

Again, this formula is just illustrative. It is far from clear what the relationship is among these different sorts of normative factors. I have represented them in terms of multiplication and addition, but I could equally well have applied powers or logarithms to the variables and the ways they interact. In fact, it is not possible to determine the precise formula for thinking about these kinds of issues by doing armchair philosophical thought experiments like the ones I have been offering. One would need to develop a range of cases, present them to subjects, measure their responses, plot the data points, and then determine which sort of formula (if any) best fits the curve. That is, it might be doable through empirical research in moral psychology, but even then, it is not going to be easy.

Yet as moral agents we make these kinds of judgments all the time. Assuming that our moral intuitions are not wholly arbitrary, it must be the case that our brains are following some kinds of patterns, or using some kinds of algorithms, to arrive at judgments about the relative weights of our moral duties. But we are obviously a long way from understanding how we make these kinds of complex moral judgments, which is why moral philosophers such as Ross, and I, fall back on the intuitions of expert moral judges as the basis for deciding among conflicting moral responsibilities. Moral intuitions may be slippery and uncertain, but they are still better than simplistic formulas as a guide to moral judgment.

However, even simplistic moral formulas have some value in that they can help us disentangle the different sorts of normative factors that can affect our intuitive judgments about the weight of different duties and responsibilities. The schema presented here is offered only as a first rough approximation, a heuristic, which tells us that there are at least three basic kinds of normative factors that can affect the moral weight of an obligation: those related to the agent, those related to the type of obligation or interest involved, and those related to the patient's moral stature. This already gives us a lot of complexity, but I doubt it is adequate as it stands, since it leaves out what are called contextual factors, for instance, the knock-on consequences of certain actions for other moral patients, and 'threshold effects', that is, cases in which the harm done by fulfilling a prima facie obligation, say the obligation to refrain from restricting the free movement of a person, would be so great as to tip the balance in favor of overriding it, as, for instance, when the person involved is the carrier of a serious infectious disease who might spread it to others. Such thresholds can alter our judgments even when duties derived from basic human rights are involved. So I am not claiming a lot for my schema; only that it is a way of starting to get a handle on these very complex matters.

Moral Partiality and Impartiality

We are now in a position to give an account of the notions of moral partiality and impartiality that links them to the concepts of moral stature and moral weight. The account I will offer allows for some kinds of morally permissible partiality based on a moral patient's observer-relative moral stature in the eyes of particular moral agents. The moral stature of a moral patient consists of its moral standing, based on its intrinsic or observer-independent properties (e.g., whether or not it is alive, sentient, or a moral agent), together with whatever additional increase or decrease in stature is due to the imputation of value or status to it by particular observers.

Warren's four relational criteria for assigning moral status (the Human Rights Principle, the Interspecific Principle, the Ecological Principle, and the Transitivity of Respect Principle) can be used to add moral stature to patients who are otherwise on the same level of moral standing based on their intrinsic properties alone. The fact, for instance, that Fido is someone's beloved pet and is regarded as a "member of the family" gives Fido greater moral stature in her owner's eyes than another dog has, according to the Interspecific Principle. If we, as second-party observers, respect this attribution of additional moral stature, then we ought also to regard Fido as having somewhat greater moral stature than another, unowned and unloved canine, Rex, even though, based on their intrinsic properties alone, Fido and Rex have the same moral standing.

These relational or derived principles for ascribing moral stature can also explain why certain other kinds of moral partiality are permissible, for instance, partiality towards one's own interests, towards those of one's friends, family, and significant others, and perhaps partiality grounded in some other kinds of special relationships among persons. In each of these kinds of cases, however, the partiality is limited or restricted in various ways by the countervailing principle of moral impartiality. The impartiality principle is based on the idea of equality of moral standing: looking only at intrinsic moral properties places different moral patients with the same intrinsic properties on the same moral plateau. So the account of moral status aims to explain both our intuitions about permissible moral partiality and the basis of moral impartiality.

To begin we must get clear on what we mean by moral partiality and impartiality. There is a broad general sense of the terms partiality and impartiality that has little or nothing to do with moral obligations. Bernard Gert has proposed a definition of impartiality in the broad sense: “A is impartial in respect R with regard to group G if and only if A's actions in respect R are not influenced at all by which member(s) of G benefit or are harmed by these actions” (quoted by Troy Jollimore, Stanford Encyclopedia of Philosophy). Jollimore's gloss on this definition is as follows: “Impartiality is probably best characterized in a negative rather than positive manner: an impartial choice is simply one in which a certain sort of consideration (i.e. some property of the individuals being chosen between) has no influence.... Thus, for Gert, impartiality is a property of a set of decisions made by a particular agent, directed toward a particular group. Gert's analysis captures the important fact that one cannot simply ask of a given agent whether or not she is impartial. Rather, we must also specify with regard to whom she is impartial, and in what respect.”

For instance, I have been watching the Beijing Olympics on TV for the past week or so. In Olympic events such as diving or gymnastics, where the contestants' scores are determined by panels of judges, it is obviously important that the judges be impartial, in the sense that they should not allow loyalty to their own countries of origin to determine how they rate an athlete's performance. Their judgments should be based entirely on the observed merits of the performance as assessed against some objective standard of "goodness" and "difficulty", and, specifically, they should not be influenced by their liking or disliking of the particular nations the athletes represent. The IOC has devoted a good deal of thought to the question of how to reduce (if not entirely eliminate) the various kinds of bias that might influence judges' scoring of these events. They use judges from different countries; they throw out the highest and lowest scores; they allow appeals and videotape reviews, and so forth. The goal is to make these competitions as fair as possible by eliminating or reducing possible biases. This broad sense of impartiality applies to all forms of impartial judgment.

What then characterizes moral impartiality? It is impartiality in which judgments about the moral obligations we owe toward various kinds of moral patients are not permitted to be influenced by certain properties of the individuals concerned. For instance, if two children are in need of a life-saving operation but I can afford to pay for only one operation, the impartial attitude would regard it as a matter of indifference which child one saves. In order to be impartial in cases like this, one ought to use some fair decision-making procedure, like flipping a coin, to determine which child gets the operation.

But if one is the parent of one of the children, then we normally think that it is morally permissible to prefer her over the other child. Indeed, if a parent allowed his own child to die while using the resources available to rescue another child, one would normally think that he is either a saint or a callous fool. The special moral relationship between parents and their own children licenses a particular kind of permissible moral partiality in such cases. We are even inclined to say that parents have special moral responsibilities to protect their own children from harm which they do not have in the same way towards other children, or at least, that their special obligations to protect their own children are weightier than any general obligations they might have to protect vulnerable persons in general.

How can we account for this kind of intuition? Most consequentialist theories in normative ethics hold that strict impartiality towards the interests of different persons is an essential feature of the moral point of view. On this kind of impartialist account, the person who flips the coin is doing the only moral thing. This, however, is often treated as an objection to impartialism, since it conflicts with most people's ordinary moral intuitions about this and similar cases. One potent objection against utilitarianism is that it is too demanding of moral agents, since it appears to leave no room for this kind of moral partiality. Deontologists can also be impartialists, but they can instead defend the view that there can be limited partiality towards certain persons, e.g. oneself, one's family members, and other close associates. (See Jollimore, op. cit., for a good discussion of the differences between consequentialist and deontological theories of moral partiality.) But a deontologist who wishes to make room for some kinds of permitted moral partiality must also account for the strong moral intuitions we have about the importance of impartiality in certain contexts of moral judgment.

My approach to normative ethical theory is deontological, but I want to give an account both of our intuitions about permitted moral partiality and of our intuitions about the importance of moral impartiality. My theory of moral stature and its relation to the moral weight of obligations allows me to do this. Recall that, as I defined it, the moral weight of an obligation is a function of both the moral stature of the moral patient(s) to whom it is directed and the moral gravity of the interest(s) of those patients that are at stake. Moral stature is determined both by an individual's intrinsic moral standing and by any observer-relative increments or decrements in that standing based upon the values and preferences of the observer.

Given this account of moral weight we can explain the intuition that a parent should give greater moral weight to his responsibility to provide the operation to his own child than to his responsibility to provide it to another child who is equally in need of it. In this observer's eyes, his child has greater moral stature than the other child does, although the interests that are at stake, staying alive, are equally grave. In deciding to give preference to his own child he is not deciding that the death of his child is a graver harm than the death of the other child; rather, he is imputing greater moral stature to his child, and this factor, not the gravity of the interest involved, is what tips the moral scales in his child's favor and makes his responsibility to save his own child a weightier obligation than his responsibility to save the other child.

Philosophers who defend the impartialist view, however, might object that we should not allow such observer-relative judgments of moral stature to enter into moral deliberation. Rather, they argue, we should attempt to adopt the perspective of an "ideal observer" who is free of any bias or subjective preference that would ascribe greater moral stature to one child rather than the other; from that perspective, both children would be equally harmed by dying. The problem with this argument is that it is difficult to define what an ideal observer is in any non-circular fashion, and even if we could do so, it is not at all clear that any actual moral agents can or should try to function as ideal observers. As Thomas Nagel has argued, the ideal observer has a "view from nowhere". Ideal observers are imaginary moral agents who are not socially situated anywhere in the real world. Real moral agents, like you and me, are always socially situated in some network of interpersonal relationships, and within those relationships we attribute greater moral stature to some groups of moral patients based upon our observer-relative preferences and values. Like most other people, I sometimes prefer my own interests over those of other people. I paid a lot of money to educate my children, but not so much to help educate other people's. I like some people more than others. I like some animals more than others. I care more about my property than I do about yours, and so forth. These are not strange or shameful admissions. This is the way people are, and any ethical theory that ignores these facts is going to be difficult to universalize.

But even so, the degree of partiality that I am permitted is limited. In the case of my child needing a life-saving operation, I am not permitted to murder another child and harvest his organs to transplant into my child to save her life. The reason for this is the Human Rights Principle, which requires that we treat all persons as entitled to equal dignity and rights, and murdering another child would obviously violate his human rights. Recall that in discussing Warren's multicriterial theory of moral status I noted that the seven principles she proposes are lexically ordered: each later principle, it is stipulated, should be applied within the limits of the preceding principles. So given that the three principles based on a moral patient's intrinsic properties (life, sentience, and agency) are listed first, the four relational principles can only be applied within the limits imposed by these more basic principles. One cannot simply decide to prefer bacteria to human beings when we place them both on the trolley tracks. More importantly, the four relational or derived principles of moral status cannot be applied in contradiction to the three intrinsic principles. The four relational principles are themselves also ordered, the Transitivity of Respect Principle being the most restricted because it comes last, and the Human Rights Principle being the least restricted because it is the first of the relational criteria of moral status.

As I argued earlier, the Human Rights Principle is also an observer-relative criterion, and the increment of moral stature it provides to human children who are not yet fully autonomous moral agents is a derived rather than an intrinsic form of moral status. We assign to all human persons the moral status "holder of human rights," and doing this places children on the same moral plateau as fully autonomous adult moral agents. The Principle of Transitivity of Respect, which confers added moral stature on your own child, is restricted in its application so that it cannot overrule the Human Rights Principle. The Human Rights Principle is epistemically objective, while an individual's observer-relative partiality towards his children, his pets, or his property is epistemically subjective. That persons have equal human rights is a social or institutional fact that exists through our collective intentionality, while the individual's own attributions of moral stature are both ontologically and epistemically subjective. In cases in which a subjective attribution of moral stature conflicts with an objective one, based either on a moral patient's intrinsic moral standing or on their epistemically objective derived moral stature, the objective moral status principles should prevail. The notion of human rights, and in particular the core principle of equal dignity (equal moral stature) and the right of nondiscrimination, is designed specifically to rule out certain kinds of partiality directed at groups of moral patients who have been historically oppressed due to some feature of their group identities.

Moral partiality can be either negative or positive, so that relying on epistemically subjective observer-relative attributions of moral stature can either add to or detract from the weight of an obligation owed to groups of moral patients. Historically, many people have accepted some version of the doctrine of human superiority/inferiority, according to which some groups of human beings are inherently or "by nature" inferior to other groups. Members of groups deemed "inferior" have often been dehumanized, that is, their moral stature has been reduced in the eyes of particular moral agents, and this diminution of moral stature has often been a prelude to human rights abuses.

The doctrine of human inferiority has been the source of a great deal of suffering and injustice in human history, but because of the progress of the human rights paradigm we are finally succeeding in getting rid of it once and for all. Still, we need to be constantly reminded that all human persons have equal human rights and that it is not morally permissible to treat some groups differently because of their race, sex, religion, language, nationality, birth, property, or other grounds of invidious discrimination. Everyone should be regarded as having the highest grade of moral stature, as standing on a moral plateau on which all human lives are of equal value. So when the other relational principles of moral status come into conflict with the Human Rights Principle, the Human Rights Principle outweighs them. By creating a moral plateau on which all human persons have the same moral stature, the Human Rights Principle enforces the principle of impartiality: when personal partiality conflicts with the demands of human rights, human rights trump it.

But then what are we to say about our moral responsibilities towards other kinds of moral patients, such as sentient animals, to whom the Human Rights Principle does not apply? Is it impermissible partiality to prefer to save a human being's life over that of a chimpanzee? Suppose that we have on the trolley tracks a severely mentally impaired human infant and a normal healthy chimpanzee. As noted earlier, Peter Singer stirred up a lot of controversy by claiming that as a utilitarian he might well choose to save the chimpanzee. But this is because he ignores or disregards the Human Rights Principle, under which even mentally impaired infants are accorded the same moral stature as fully autonomous moral agents. Since this principle increases the moral stature the human infant would have based on its intrinsic properties alone, it tips the moral scale in favor of saving her.

The preference for human lives is not based on an irrational preference for our own kind, but on hard-won moral wisdom about what is necessary in order to prevent historically prevalent forms of oppression, suffering, and injustice. While chimpanzees are indeed the kinds of creatures that can have rights, they do not at present have them. We can construct a rights regime for chimpanzees, and indeed, Singer and others are trying to do just that with new legislation in Spain (see news story). I have no objection to this at all; I am in fact strongly in favor of doing so for chimpanzees, cetaceans, elephants, and some other cognitively complex animals. But the point here is that no such rights regime presently exists. There is no set of institutional facts that supports the claim that chimpanzees have rights. In this case we are thrown back onto using their intrinsic properties as the basis for ascriptions of moral standing, adding to it whatever observer-relative, epistemically subjective increment of moral stature animal lovers like Singer would want to attribute to these creatures. It is certainly morally permissible to rescue chimps from conditions of abusive captivity and to protect those in the wild who are endangered; indeed, on my account we have strong moral responsibilities to do so. But at this time we cannot say that chimpanzees have rights, only that they can have rights, and, perhaps, that they ought to have them, that is, that we humans ought to construct a rights regime to govern the ways in which we treat these cognitively complex and vulnerable creatures.

If there were such a regime, then it would enable people like Singer to make peremptory demands on other moral agents that could be enforced coercively. There are a few such laws in some jurisdictions, for instance laws forbidding various kinds of cruelty to animals, but they do not yet constitute a global rights regime such as now exists for human rights. But if some animals ought to have rights, I do not think we should call them "human rights". The term "human rights" ought to be reserved for the rights of human beings. I have some problem with the term "animal rights" because humans are animals, and "nonhuman animal rights" sounds odd. I think we ought to be talking about "chimpanzee rights", "elephant rights", "whale rights", and so forth, since in each case the sorts of threats to the grave or vital interests of these cognitively complex sentient organisms vary to some extent. Whales are threatened by pollution in the oceans, but bonobos aren't. It makes no sense to suppose that whales have a right to vote or to equal pay for equal work, or to other things that we regard as human rights. It is better to construct rights regimes designed specifically to thwart the particular threats that creatures of these kinds have been subjected to.

But suppose, for the sake of argument, that Singer and others are successful in constructing these kinds of rights regimes for certain species of nonhuman animals. Suppose that there are now in existence institutional facts that allow us to state as an epistemically objective truth that chimpanzees have certain rights. In this case, I imagine that the Chimpanzee Rights Principle would work in more or less the same way as the Human Rights Principle does. So suppose I have a beloved chimpanzee as a pet, call her Cindy, who needs a life-saving operation, but there is another chimp, call him Sam, who also needs this operation. Would it be morally permissible for me to give Cindy preference over Sam? I would say "yes", because the Interspecific Principle gives Cindy greater moral stature as a member of a mixed community. But it would still be wrong for me to kill Sam to harvest his organs to save Cindy, because of Sam's equal chimpanzee rights. When an epistemically objective rights regime is in place, certain kinds of moral partiality are ruled out of bounds while other kinds are permitted.

We can account for a number of important moral intuitions about these matters by means of the notions of intrinsic and derived moral status, which lead to differences in the moral statures of different groups of moral patients. One interesting feature of this account is that it avoids the objection that in being morally partial to certain moral patients we are counting their interests as more valuable than the interests of comparable others. This is not so, because we accord each individual's interests equal gravity when they are equally central to that individual's survival, well-being, or freedom. What varies is their moral stature, based on observer-relative criteria for assigning moral status. Some of these criteria are epistemically subjective and reflect the agent's own preferences, values, and, indeed, biases, while others are epistemically objective and are derived from institutional facts that set limits on permitted forms of moral partiality. But these additional criteria are ordered so that the Human Rights Principle gets priority.

That human beings naturally exhibit various forms of partiality can be taken as a given. The task for normative ethics is to find ways to limit or control this tendency so as to keep it within morally acceptable limits. This is one reason why many moral philosophers insist that impartiality is essential to the moral point of view: without it, and without the institutional facts embodied in legal norms that enforce impartiality, humans would most likely revert to something like Locke's state of nature, in which there is no impartial judge and each man seeks only to advance his own interests. The mutual benefits obtained through the rule of law are far superior to those we could secure in such a condition, which is why civilized human societies construct ethical codes and enact laws to enforce compliance with them.

But there are also advantages to having a division of moral labor under which people are permitted, and indeed required, to care for the people and things that they are most motivated to care about. Some consequentialists have recognized this fact and have argued that the overall good of society is best promoted by permitting some forms of moral partiality. I think this is often the case, but also that our intuitions about such matters are better explained by the theories of moral stature and moral weight I have developed here.