December 31, 2011

Why I am not a utilitarian

This seems to come up somewhat often, and I figured it would be useful to have my answer(s) gathered in a convenient place. Some of these thoughts are taken from things I’ve previously said on LessWrong; others are new.

Summary: In this post, I explain why I am not a utilitarian. I’ll start by explaining why I’d even consider utilitarianism in the first place, and then I’ll say why I reject it.


NOTE: This post does not end with me advancing my own complete, coherent ethical framework. Sorry. Truth be told, my ethical views are not anywhere near completely fleshed-out. I know the general shape, but beyond that I’m more sure about what I don’t believe—what objections and criticisms I have to other people’s views—than about what I do believe. However, utilitarianism is so prevalent among the rationalist set, and (in my opinion) so wrong, and so dangerous if taken for granted, that saying “I don’t know what the right answer is, but it’s definitely not this” is important and justified.

1. Some quick definitions

People use some of these terms in diverse ways. The way I use them is basically the way that the Stanford Encyclopedia of Philosophy does, and in my experience this usage is prevalent among professional philosophers:

Consequentialism is the view that, ethically, the only thing that determines whether something (an action, a rule of behavior, etc.) is right is its consequences. Consequentialism encompasses a large class of ethical frameworks. Utilitarianism is a subset of consequentialism (there are multiple kinds of utilitarianism as well, but they’re all consequentialist theories).

In addition to consequentialism, there’s deontology (the view that certain acts are obligatory and others are forbidden, regardless of their consequences) and virtue ethics (the view that ethical rightness is about what sort of person you are, and how your actions exemplify that, regardless of their consequences).

I am not a utilitarian, but I am a consequentialist. (Mostly.)

2. Why even consider utilitarianism? Or even consequentialism?

I think that consequentialism, as a foundational idea, a basic approach, is the only one that makes sense. Deontology seems to me to be completely nonsensical as a grounding for ethics. Every seemingly-intelligent deontologist to whom I’ve spoken (which, admittedly, is a small number—a handful of people on LessWrong) has appeared to be spouting utter nonsense.

(This may mean that I am misunderstanding some key aspect of deontology. Perhaps. If you’re an intelligent deontologist, and you think you can do better at explaining deontology than Alicorn of LessWrong—in a way that I or another consequentialist would find at least coherent and not nonsensical (I don’t ask for it to be convincing)—feel free to comment!)

Deontology has its uses (see Nick Bostrom’s “Infinite Ethics”, and “Ends Don’t Justify Means (Among Humans)” by Eliezer Yudkowsky, for examples), but there it’s deployed for consequentialist reasons: we think it’ll give better results.

I’ve seen it said that virtue ethics is descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind, once you’ve decided on your object-level moral views. That seems like a more-or-less reasonable stance to take. As an actual philosophical grounding for morality, virtue ethics seems like nonsense, but perhaps that’s fine, given the above.

Consequentialism actually makes sense. Consequences are the only things that matter? Well, yes. What else could there be?

The SEP lists a number of dimensions along which consequentialism may vary, resulting in a somewhat bewildering variety of specific forms of consequentialism. Classic utilitarianism, specifically, is described as actual, direct, evaluative, hedonistic, maximizing, aggregative (specifically, total), universal, equal-consideration, agent-neutral consequentialism.

I take issue with the “actual”, “direct”, “hedonistic”, “aggregative”, “total”, “equal-consideration”, and “agent-neutral” parts of that. (Though I expect that my issues with “actual” will be shared by a significant portion of those who consider themselves utilitarians, at least within the rationalist memespace, and my issues with “hedonistic” and “direct” may be as well. That leaves “aggregative” + “total”, and “equal-consideration” + “agent-neutral”, as the two aspects most likely to be sources of philosophical conflict.) I’ll deal with the specific objections below.

3. Ok, so why not utilitarianism?


I think intended and foreseeable consequences matter when evaluating the moral rightness of an act, not actual consequences; judging based on actual consequences seems utterly useless, because then you can’t even apply decision theory to the problem of deciding how to act. Judging on actual consequences also utterly fails to accord with my moral intuitions, while judging on intended and foreseeable consequences fits quite well.


I tend toward rule consequentialism rather than act consequentialism; I ask not “what would be the consequences of such an act?”, but “what sort of world would it be like, where [a suitably generalized class of] people acted in this [suitably generalized] way? Would I want to live in such a world?”, or something along those lines. I find act consequentialism to be too often short-sighted, and open to all sorts of dilemmas to which rule consequentialism simply does not fall prey.


I take seriously the complexity of value, and think that hedonistic utilitarianism utterly fails to capture that complexity. I would not want to live in a world ruled by hedonistic utilitarians. I wouldn’t want to hand them control of the future. I generally think that preferences are what’s important, and ought to be satisfied — I don’t think there’s any such thing as intrinsically immoral preferences (not even the preference to torture children), although of course one might have uninformed preferences (no, Mr. Example doesn’t really want to drink that glass of acid; what he wants is a glass of beer, and his apparent preference for acid would dissolve immediately, were he apprised of the facts); and satisfying certain preferences might introduce difficult conflicts (the fellow who wants to torture children — well, if satisfying his preferences would result in actual children being actually tortured, then I’m afraid we couldn’t have that). “I prefer to kill myself because I am depressed” is genuinely problematic, however. That’s an issue that I think about often.

(Philosophically astute readers might note that everything I’ve said so far, while not compatible with classic utilitarianism, is compatible with (at least some forms of) preference utilitarianism. But not so fast…)

“Equal Consideration”

I don’t think it’s obvious that all beings that matter, matter equally. This is because … [redacted].

(Editor’s note: I have a truly marvelous objection to “equal consideration”, which this blog post is already too long to contain. Perhaps another time. Instead, here are my thoughts on what entities are properly the elements of the moral calculus.)

As far as who matters—to a first approximation, I’d say it’s something like “beings intelligent and self-aware enough to consciously think about themselves”. Human-level intelligence and subjective consciousness, in other words. I don’t think animals matter. I don’t think unborn children matter, nor do infants (though there are nonetheless good reasons for not killing them, having to do with bright lines and so forth; similar considerations may protect the severely mentally disabled, though this is a matter which requires much further thought).


I don’t see anything wrong with valuing my mother much more than I value a randomly selected stranger in Mongolia. It’s not just that I do, in fact, value my mother more; I think it’s right that I should. My family and friends more than strangers; members of my culture (whatever that means, which isn’t necessarily “nation” or “country” or any such thing, though these things may be related) more than members of other cultures… this seems correct to me.

Another, somewhat related objection to agent-neutrality is the notion of the separateness of persons. This is a topic of much debate among philosophers, but the gist is: if I torture Bob to save a million people, well, maybe in some aggregate sense that’s great. But it sure is pretty bad for Bob. The fact that a million people are saved as a result doesn’t make the outcome any better for Bob. No amount of aggregation makes that fact about the world any less bad. And so in an important sense, this world is worse than a world where Bob doesn’t get tortured. Is it, in a different sense, better than the world where Bob is fine, but that million people die? Yeah; for those million people, it sure is. But these measures of “how good the world is” cannot be reconciled by merely comparing the number 1 to the number 1,000,000. It is not obvious that the latter world is simply better, period, from some objective, impersonal viewpoint. (It’s not even obvious that such a viewpoint may be coherently postulated.)

* * *

But all of that is still just quibbling. Now we get to the important point.

“Aggregative” + “Total”

I’m deeply skeptical about the possibility or even coherence of aggregating utility across individuals.

The task that faces a utilitarian, before they can decide how to aggregate value across individuals, is to figure out how to determine value (of a consequence, usually) for any given individual. The most common solution to this is von Neumann–Morgenstern (VNM) expected utility.

But a utility function under the VNM utility theorem is defined only up to positive affine transformation. It doesn’t make any sense to say that Bob gets 5 units of utility from getting an ice-cream, while Alice gets 12 units of utility from watching a sunset. Those are not the same units. They are not even defined in absolute terms — only relative to other outcomes for that individual. And since those quantities are not in the same units, you cannot add them, or compare them. It is, simply, mathematical nonsense to claim that giving Bob an ice-cream and having Alice watch a sunset results in 17 units of utility getting added to the group consisting of Bob and Alice; or to say that Alice gets more utility (much less “exactly 7 more units of utility”) from watching a sunset than Bob does from getting an ice-cream.
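To make this concrete, here’s a small numerical sketch (reusing the ice-cream and sunset examples above; the specific numbers are, of course, arbitrary). Rescaling Alice’s utility function by a positive affine transformation yields an equally valid representation of exactly the same preferences — yet it reverses the “aggregate” ranking of outcomes:

```python
# One valid VNM representation of each person's preferences over outcomes.
bob_u   = {"ice_cream": 5.0, "nothing": 0.0}
alice_u = {"sunset": 12.0, "nothing": 0.0}

def rescale(u, a, b):
    """Positive affine transformation: represents the SAME preferences."""
    assert a > 0
    return {outcome: a * v + b for outcome, v in u.items()}

# An equally valid representation of Alice's preferences:
alice_u2 = rescale(alice_u, a=0.1, b=0.0)   # sunset -> 1.2, nothing -> 0.0

# Outcome A: Bob gets an ice-cream.  Outcome B: Alice watches the sunset.
total_A_v1 = bob_u["ice_cream"] + alice_u["nothing"]    # 5.0
total_B_v1 = bob_u["nothing"]   + alice_u["sunset"]     # 12.0 -> B "wins"

total_A_v2 = bob_u["ice_cream"] + alice_u2["nothing"]   # 5.0
total_B_v2 = bob_u["nothing"]   + alice_u2["sunset"]    # ~1.2 -> A "wins"

# Same people, same preferences, same outcomes -- different "aggregate" verdict.
```

Nothing about either representation is more correct than the other; the theorem simply does not supply a privileged scale on which to do the addition.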

It does not seem to me that any notion of utility or other “unitary value” can avoid this problem (without introducing other forms of incoherence or absurdity).

There just does not seem to be any way to go from consistently, coherently assigning a value to consequences for one individual, to consistently and coherently assigning a value to the collective consequences for multiple individuals. At least, I’ve never seen such a way.

There are other, more minor, problems as well. For example, I don’t think my own preferences adhere to the VNM axioms, and so it may not even be possible to construct a utility function for all individuals. (It’s sometimes claimed that VNM-noncompliance is necessarily irrational, but that is not entailed by the theorem. Robyn Dawes, in Rational Choice in an Uncertain World, makes the case for one particular deviation from VNM that seems to be quite sensible and accords well with his (and my) moral intuitions.)
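One well-known pattern of this kind is the Allais paradox (my illustration here, not necessarily the particular deviation Dawes discusses): many people prefer a sure $1M over a gamble of (89% $1M, 10% $5M, 1% nothing), while also preferring (10% $5M) over (11% $1M). A brute-force scan shows that no assignment of utilities reproduces both preferences at once:

```python
def eu(lottery, u):
    """Expected utility of a lottery: a list of (outcome, probability) pairs."""
    return sum(p * u[outcome] for outcome, p in lottery)

# The two Allais choice pairs.
g1a = [("$1M", 1.00)]
g1b = [("$1M", 0.89), ("$5M", 0.10), ("$0", 0.01)]
g2a = [("$1M", 0.11), ("$0", 0.89)]
g2b = [("$5M", 0.10), ("$0", 0.90)]

# Normalize u($0)=0 and u($5M)=1 (harmless, since utility is defined only up
# to positive affine transformation) and scan u($1M) over a fine grid.
consistent = [
    v / 1000
    for v in range(1001)
    for u in [{"$0": 0.0, "$1M": v / 1000, "$5M": 1.0}]
    if eu(g1a, u) > eu(g1b, u) and eu(g2b, u) > eu(g2a, u)
]
print(consistent)   # [] -- no utility function rationalizes the common pattern
```

The scan comes up empty because the two preferences impose contradictory constraints (roughly, u($1M) > 10/11 and u($1M) < 10/11) — so anyone exhibiting this pattern is, by the theorem’s lights, not maximizing any utility function at all.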

Another problem is that all approaches to aggregation of utility (total, average, etc.) tend to run into various paradoxes, repugnant conclusions, and so forth. This has been extensively covered elsewhere, and is not a core component of my objections to utilitarianism, so I won’t go into it here.

4. But… are you really not a utilitarian? Despite all you said, utilitarianism makes so much sense!

Don’t get me wrong: I understand, and share, the basic utilitarian intuition. Punching Alice in the face is bad. Letting Alice watch the sunset is good. Punching Bob and Alice in the face seems like it’s worse than just punching Alice in the face. Letting Alice watch the sunset and buying Bob an ice-cream seems like it’s better than just doing one of those things. And that’s all you really need for utilitarianism to be at least somewhat compelling. Once you get into killing or saving people, it gets even more persuasive.

But the problems of utilitarianism seem, to me, to be insurmountable, and fatal. And utilitarianism fails to accord with my moral intuitions, in many scenarios, hypothetical and real. (For example, I choose SPECKS in Torture vs. Dust Specks. The utilitarian intuition does not move me to choose TORTURE.) So utilitarianism fails both criteria of the curve-fitting approach to finding a workable ethical framework.

5. If not utilitarianism, then what?

I said at the beginning that this post wouldn’t end with a grand solution of my own, and so it won’t. Perhaps in a later post I’ll sketch out some of my constructive thoughts on ethics.

In the meantime, I would love to read solutions to the problems with utilitarianism that I outlined, arguments against my objections, or any other interesting thoughts on the matter!
