First let me explain my reason for publishing. I will speak of the singularity and seed AI, but will not explain those ideas and will not even provide a hyperlink to an explanation because I do not want to promote the ideas: I believe they are promoted enough already.

So the reader who is mystified by my references to Eliezer, the coherent extrapolated volition, etc., has to decide whether to expend the effort to search the internet for the information. If you are under 20 years old, have not completed your formal education, have not figured out where you fit into the labor market (and are not financially independent of the need to work) or have not worked out a satisfactory adult way of life, my advice to you is to postpone learning about the singularity and seed AI. Nothing there cannot wait a few years, and if you have anything like a healthy curiosity about the world around you, you are bound to encounter plenty of reminders of these ideas in the years ahead.

I trust Eliezer to protect the life, flourishing and preferences of every human being, but I consider it a distinct possibility that in the process he will close off something even more important than the fates and wishes of the humans. Although he did acknowledge such a thing in the past (notably in a publication called TMOLFAQ, which he has since unpublished), since 2003 or so Eliezer has not, as far as I know, acknowledged the possibility of the existence of a goal or value that trumps human values, trumps humane values and trumps pan-sentient values.

Since I do not perceive anyone else doing it effectively, until someone more qualified steps up, I will serve as an advocate among the singularitarians for a nonhuman system of value, in the course of which I will say things many people will not have heard before. For example, in a few paragraphs I will say that a human being has zero intrinsic moral value. I realize of course that (if I am effective) I will probably provoke the ill will of true believers of the humanist, egalitarian, progressive persuasion and might consequently incur adverse personal consequences. But the singularitarian project is too important for there not to be an advocate for nonhuman values.

It seems that almost everyone who has the potential to have a persistent or important effect on reality is a humanist, utilitarian or egalitarian of some sort. If it were just that these value systems dominate the political discourse, I might not feel compelled to speak out at this time, but they seem to dominate among singularitarians, too, and that causes me to enter the singularitarian discourse to advocate for values that I believe trump those values.

Later on this page, I do explain what it is about the coherent extrapolated volition that bothers me, but first I hope the reader will allow me to describe, in a very compact way, my system of valuing things. Happily, much or most of what I have to say applies generally to any hyperrational person with enough knowledge of science, although I will not hesitate to refer to the elements of machine learning and AI when doing so will sharpen the exposition.

My first proposition is that subjective experience is irrelevant except as inexpensive-to-get evidence for relevant things. What matters is objective reality. For example, asking a sick person how he feels is a very effective way to evaluate response to medical treatment (and in my opinion to a change in diet, too). Medicine (and practical nutrition) would be much less effective if for some reason it could not rely on subjective experience as a barometer, even though this information has the disadvantage that it is available only to the patient or subject and perhaps to a person (such as a spouse) with a close emotional tie to him. Notwithstanding the aforementioned expected utility of subjective experience (as a barometer), the ultimate purpose of medical treatment in my system of valuing things is to optimize the patient's ability to act effectively in the objective world, not to alleviate suffering.

My second proposition is that a human being has zero intrinsic moral value. A human has moral value only if he proves useful to other things that do have intrinsic moral value. (The species as a whole has no intrinsic moral value either, nor do other sentients.) In other words, in my system of values humans are means to nonhuman ends. Of course this is an extremely dangerous belief, which probably should not be advocated except when it is needed to avoid completely mistaken conclusions, and in my opinion it is needed when thinking about simulation arguments, the social and moral consequences of ultratechnologies, the eventual fate of the universe and other scenarios. If the idea took hold in ordinary legal or political deliberations, in how people treat each other day to day or in ordinary thinking about morals, destructive uncaring behavior, exploitative behavior and oppression would increase drastically, so let us be discreet about to whom we advocate it. It seems that when a group of humans comes to feel morally superior to another group of humans, the probability drastically increases of their coming to behave murderously or exploitatively towards them. Moreover, there seems to be no end to the ways that groups of humans can convince themselves that they are saving the world or otherwise doing good when they are really oppressing or exploiting. This is a cognitive bias all humans have. Since a properly designed seed AI has negligible probability of acquiring a bias of that sort, it is okay to program it with a goal system in which humans have no intrinsic moral value.

Singularitarians and other "culture leaders" have to be very careful. On the one hand, the moral systems that have existed for centuries are inadequate for the singularitarian project and other projects. On the other hand, adopting or subscribing to an untested moral system is dangerous to the subscriber and to anyone who might be affected by the subscriber. It can have severe unforeseen effects.

I do not have time right now, but eventually at this point I will have to explain why it is important to understand utilitarian and egalitarian concepts thoroughly in order to be effective in the mental, social or interpersonal environment, even though utilitarianism and egalitarianism make unsatisfactory ultimate goals for singularitarians, ultratechnologists, ambitious social entrepreneurs and other "culture creators". The best I can do right now is to recommend that the reader endeavor to treat everyone as if they have the rights listed in the U.S. Bill of Rights (which is very much like believing each has intrinsic moral value) in all but quite exceptional circumstances, e.g., the design and implementation of a seed AI or the creation of an intelligence explosion.

I hold that the framework Eliezer calls Bayescrafting is the best framework we currently have for understanding minds and how we increase our knowledge: E. T. Jaynes, causal models, Kolmogorov complexity, Solomonoff induction, etc.

I hold that it is desirable to increase the intelligence of whatever part of reality is under your control. For the purpose of this essay, intelligence will be defined as the ability to achieve goals. (A "general intelligence" can achieve a broad variety of goals; a "specialized intelligence" can achieve only a narrow variety of goals.)

I hold that it is desirable continuously to refine your model of reality and help other intelligent agents do the same.

I hold that what matters about a human or other intelligent agent is his ability to affect reality. If you are frozen, then brought back to life 50 years later with such skill that your ability to affect reality was unimpaired by the experience, it is better to go on with what you were doing before the freeze than to expend resources on such questions as whether your pre-freeze consciousness has been recovered or retained. Moreover, if you are duplicated with such skill that both copies of you have the same degree of ability to affect reality as you had before the duplication, rejoice that the number of intelligent agents in your neighborhood has increased by one. There is no need to expend resources on the implications of the duplication for your consciousness.

Humanity does not yet have a satisfactory explanation of consciousness, and it is prudent for humanity to invest resources in the search for an explanation and, when an explanation is found, to investigate its implications for our system of values, but there is no salient reason to believe that the explanation when it is found will overturn our system of values or force us to abandon our long-range goals.

I propose that causal chains that go on forever or indefinitely are important; causal chains that peter out are unimportant. In fact, the most important thing about you is your ability to have an effect on reality that goes on indefinitely. If I may be allowed a moment of humor or silliness, instead of worrying that the enemy will sap your Precious Bodily Fluids, you should worry that he will sap your Precious Ability to Initiate Indefinitely-Long Causal Chains.

I propose that every ethical intelligent agent should concentrate his resources on the possible futures in which his decisions and his actions will matter most. The more a possible future turns on what you do or how you decide, the more you should focus on it at the expense of other possible futures. I hereby name this principle the relevancy razor. A sketch of an example application follows.
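As a rough illustration of how the relevancy razor might be applied, here is a small sketch in Python. The futures, the probabilities and the "leverage" numbers are all invented for the purpose of the example; the point is only the shape of the calculation: weight each possible future by the product of its probability and how much its outcome turns on your decision, then allocate your attention in proportion to those weights.

    # Hypothetical sketch of the relevancy razor. Every number below is invented
    # for illustration; "leverage" stands for how much the outcome of that future
    # turns on what the agent decides.
    futures = {
        # name: (probability of the future, leverage of your decisions in it)
        "seed AI built by a careful team":  (0.10, 0.90),
        "seed AI built by a careless team": (0.05, 0.20),
        "no intelligence explosion at all": (0.85, 0.01),
    }

    relevance = {name: p * leverage for name, (p, leverage) in futures.items()}
    total = sum(relevance.values())
    attention = {name: r / total for name, r in relevance.items()}

    for name, share in sorted(attention.items(), key=lambda kv: -kv[1]):
        print(f"{share:5.1%}  {name}")

On these made-up numbers the razor directs most of the agent's attention to the first future even though the third is by far the most probable, because the first is the one that turns most on what the agent decides.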

I consider the above a fairly complete definition of the goal that trumps all other goals. Most of the other things that people consider moral principles either flow from the foregoing list or are not really correct moral principles. If I have the time, the energy and the resources, one of the things I would like to do in the years ahead is to put the definition in more technical language -- ideally in the language of mathematics, code and probability theory. That would be hard work.

I have not yet said precisely what my objection to CEV is. I will briefly summarize my two main objections; the topic deserves a much longer answer, but I have not written that yet. First, the author of CEV says that one of his intentions is for everyone's opinion to have weight: he does not wish to disenfranchise anyone. Since most humans care mainly or only about happiness, I worry that this will lead to an intelligence explosion that is mostly or entirely about maximizing happiness, interfering with my plans, which are to exert a beneficial effect on reality that persists indefinitely but has little to do in the long term with whether the humans were happy or sad. Second, there is much ambiguity in CEV that has to be resolved in the process of putting it into a computer program: everything that goes into a computer program has to be specified very precisely. The person who currently has the most influence on how the ambiguities will be resolved has a complex and not easily summarized value system, but utilitarianism and "humanism", which for the sake of this essay will be defined as the idea that humankind is the measure of all things, obviously figure very prominently in it.

Please continue to goal system zero.


Richard Hollerith wrote: "I propose that causal chains that go on forever or indefinitely are important; causal chains that peter out are unimportant."

  1. This makes some sense if there are causal chains of infinite length, but on what grounds do you know that there are? It's a bit like saying "Humans' interests are of no importance next to those of the gods" before you've got some evidence for the existence of the gods.

  2. While my actions now might be able to produce, or alter, a causal chain of infinite length, their influence on that chain is likely to decay over time, probably exponentially. So when deciding what to do now, shouldn't I be applying some sort of discount even if I think all future times are equally important? If so, then it's far from clear that only infinite chains matter – even if there are any.

Gareth McCaughan


Richard Hollerith here. My reply to (1) is that the comparison to the existence of gods is unfair because the existence of infinitely-long causal chains is vastly more probable. Moreover, the system of valuing things under consideration, namely goal system zero, is well defined and does not contradict itself or contradict reality even if it is possible that the intrinsic value cannot be realized. In other words, I am saying that if anything matters, it is those things that persist forever or persist indefinitely. That is a well-defined statement even if nothing matters. "Infinitely-long causal chain" is a shorthand which I explain more fully here -- using Conway's game of life as an illustration.
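For the reader who does not want to follow the link, here is a minimal toy sketch in the same spirit (not the fuller explanation at the link, and assuming nothing beyond the standard Life rules): a glider keeps propagating across the grid indefinitely, whereas an isolated pair of live cells dies out after one generation -- a causal chain that peters out.

    # Conway's Game of Life on an unbounded grid, with live cells stored as a set
    # of (x, y) coordinates. Illustrates a causal chain that persists (the glider)
    # versus one that peters out (two isolated cells).
    from collections import Counter

    def life_step(live):
        """Advance one generation: birth on 3 neighbors, survival on 2 or 3."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    doomed = {(0, 0), (1, 0)}    # two lonely cells

    for _ in range(40):
        glider = life_step(glider)
        doomed = life_step(doomed)

    print(len(glider))   # still 5 live cells, now translated ten squares diagonally
    print(len(doomed))   # 0 -- this causal chain petered out immediately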

The most charitable interpretation I can imagine for (2) is that Gareth is worried that a human or non-human agent's ability to influence an event E decays exponentially with the number of elements (namely, events) in the chain of cause and effect between the agent's action and E. Well, Gareth, when I turned my computer on this morning I initiated a chain of cause and effect that is now (several hours) * (3600 seconds per hour) * (677 million machine cycles per second) events long, and yet the probability that my having turned my computer on would materially help me to publish these words was above 90%. This is an illustration of the general principle that an agent's ability to influence the future does not drop off exponentially with the number of events for the causal chains of the most interest to the agent.
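For concreteness, the arithmetic behind that counterexample, with the elapsed time taken as three hours purely for the sake of having a number (the text above says only "several hours"):

    # Rough count of events in the causal chain started by switching the computer on.
    # The three-hour figure is an assumption made only to complete the multiplication.
    hours = 3
    events = hours * 3600 * 677_000_000   # seconds elapsed times machine cycles per second
    print(f"{events:.1e}")                # roughly 7.3e+12 events

So the chain is roughly seven trillion events long, yet the influence of the initial action on the outcome of interest remained close to certain.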

changed June 2, 2009