The following is a representative example of a hedonistic value system as it applies to the singularity. One benefit of talking about the singularity is that it provides concrete anchors which help people have illuminating discussions about systems of value. Hedonistic systems of value are popular and probably dominant in the singularitarian community, a fact that makes me sad and fearful for the future because it strikes me as more obvious than most things philosophical that hedonistic systems of value are misguided. I pray that the individuals and small teams who will actually make the irreversible decisions that will determine the shape of post-singularity reality will be guided by ends broader, more enduring, and less self-contradictory than hedonistic ends.

This next quote comes from Transtopian Principles 5.1. I picked the quote not because the author is particularly influential or particularly capable as a technologist, but rather because it is a compact or pure statement of hedonistic values: it does not require the reader to labor long to tease out the hedonistic values from a profusion of other ideas.

If one removes all unnecessary philosophical complications, strips away all the layers of sanctimoniousness, naiveté, self-delusion, and false guilt, the answer to one of mankind's oldest and most profound existential questions becomes blindingly obvious: ultimately, it's all about pleasure and pain. Without pleasant emotions and sensations life would be empty and meaningless, if not outright unbearable. There would be no hope, no rewards to look forward to; there would be only monotonous blandness interspersed with various forms of mental and physical suffering. It would truly be a fate worse than death.

Pleasure and happiness aren't merely 'desirable'; they are intrinsically desirable, and their pursuit and maximization is nothing less than a moral and logical imperative. We are very fortunate to live in these technologically advanced times, for science may soon give us the means to actually fulfill this imperative; to take hedonism to its logical conclusion and create what so far has only been a desperate fantasy -- heaven. A mechanized heaven where we ourselves will be the gods, masters of our own minds, bodies, and environment. Pain will be optional, and pleasure guaranteed.

At the moment, however, we are anything but gods. Our biological bodies are weak, as are our minds. They get damaged easily, often beyond repair. In this dangerous and primitive world, mere survival --let alone transcendence of the flesh-- requires a great deal of sacrifice and discipline. We must curb our more impractical hedonistic impulses, avoid or limit potentially harmful activities, and keep our eyes on the prize: the technological paradise that may lie just beyond the horizon. At the same time we should also live intensely and try to seize the day as often as possible, for there may be no tomorrow. This balancing act is the gist of Intelligent Hedonism.

Again, I want to point out that I consider the above a representative example of a very misguided and destructive (and depressingly popular and widespread) system of values.

Question from TGGP: What exactly is wrong with hedonism?

Richard Hollerith answers: What is wrong with hedonism is the same thing that is wrong with thousands of other prospective systems of value: there is no rational reason to believe that it should command our loyalties or that it should be the target of our efforts and our capacity to execute plans over long periods of time. More precisely, I can easily think of a fairly simple system of value which has a much better rationale. I grant that it feels obvious to most people alive today (and most people in the past) that steering towards pleasure and away from pain is the meaning of life, but that is because most people have not applied the machinery of reason to the question of the meaning of life: either they do not have access to the machinery of reason or they were never sufficiently motivated to apply the machinery to the question.

By way of defining what I mean above by the machinery of reason, let me quote George Spencer-Brown:

To arrive at the simplest truth, as Newton knew and practiced, requires years of contemplation. Not activity. Not reasoning. Not calculating. Not busy behavior of any kind. Not reading. Not talking. Not making an effort. Not thinking. Simply bearing in mind what it is that one needs to know.

The best way I know of to arrive at a rational understanding of the meaning of life and to make a wise choice of terminal values is to spend hundreds of hours in the frame of mind described in the above quote. (I estimate I have spent thousands.) The frame of mind required is the same as that required to understand physics, higher math, or one of the bodies of knowledge and craft that professional programmers use to make computers serve economically or socially useful functions.

Actually, the meaning of life is probably simpler than any of those three things, so understanding it requires fewer hours. The probable reason fewer people understand the meaning of life than understand physics or computer programming is that more people with the necessary access to the machinery of reason are motivated to devote many thousands of hours to achieving a rational understanding of physics or computer programming than of the meaning of life. I grant that many people (e.g., professors of ethics in universities) devote a lot of time to thinking about the meaning of life, but I maintain that either they do not have access to the machinery of reason or their primary motivation (which they might not be fully conscious of) is to influence others or to achieve a reputation for wisdom and thereby get ahead in life, rather than to achieve a deep understanding.

My tentative hypothesis, by the way, is that trying to make a living or secure social standing from thinking about the meaning of life makes it less likely that the thinking will turn out really well, even after we take into account the fact that the thinking requires thousands of hours of dedicated effort and that, for most people, the need to make a living and to improve or maintain one's social standing must compete with that time requirement. (Spinoza's life is evidence for this tentative hypothesis.) The same of course is not true of physicists and computer programmers: in those fields, the professionals have a clear advantage. My tentative theory is that thinking about the meaning of life is different because the natural human motivational system, or the natural human tendency to perceive reality in a biased (particularly a self-interested) fashion, is much more likely to interfere with thinking about the meaning of life.

Response from TGGP: The reason thousands of prospective systems of value (I would say "all of them") give no rational reason to command our loyalties is the is-ought gap. A system of value contains some normative notion of "good" which cannot be derived from positive facts. There is similarly no rational reason why I should want to have any enduring effect on the universe. Nor is there any reason to think that life or the universe has "meaning".

Hollerith replies: Eliezer's public writings describe (among other things) a model of mind or intelligence. This model applies both to prospective AGI designs and to the human mind -- that is to say, the model is nice and general. The Bayesian stance, decision theory, Solomonoff induction, and minimum-length encoding (as a formalization of Occam's razor) are part of this model. Parenthetically, I recommend this model in general and Eliezer's writings on it in particular to any person who wishes to become really well educated.

This model contains the notion of inductive bias: machine-learning researchers know that any algorithm an agent might use to learn about (to create and refine a model of) its environment must have a few assumptions built into its very design. For example, one assumption built into the design of every sufficiently general algorithm I know about is "Occam's razor": the assumption that a model whose description is shorter than that of some other model is more likely to describe reality than that other model. There are other inductive biases that show up in almost every sufficiently technical description of a sufficiently general intelligence, but I am pressed for time right now, so I will not go into them.
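To make the Occam's-razor bias concrete, here is a minimal sketch in Python of how a length-based prior might work. The hypothesis names, their description lengths, and the likelihood numbers are invented for illustration and are not part of the original discussion; the sketch simply gives each candidate model a prior weight of two to the power of minus its description length in bits (the minimum-length-encoding idea mentioned above) and combines that prior with a likelihood in the usual Bayesian way.

```python
# Illustrative sketch (assumed details, not from the original text) of an
# Occam's-razor inductive bias: shorter models get exponentially more prior
# weight, as in minimum-length encoding / a Solomonoff-style prior.

from math import fsum

# (hypothesis name, description length in bits, likelihood of the observed data)
# These particular numbers are made up for the example.
hypotheses = [
    ("simple model", 10, 0.4),
    ("complex model", 25, 0.6),  # fits the data slightly better...
]

def occam_prior(length_bits: float) -> float:
    """Unnormalized prior: each extra bit of description halves the weight."""
    return 2.0 ** (-length_bits)

# Posterior is proportional to prior times likelihood (Bayes' theorem); normalize.
unnormalized = {name: occam_prior(bits) * lik for name, bits, lik in hypotheses}
total = fsum(unnormalized.values())
posterior = {name: w / total for name, w in unnormalized.items()}

for name, p in posterior.items():
    print(f"{name}: posterior = {p:.6f}")

# Despite its better fit, the complex model is heavily penalized for its extra
# 15 bits of description length (a factor of 2**15, i.e. 32768).
```

The point of the sketch is only that the bias is built in before any data arrive: the agent cannot learn it from experience, because it needs it in order to learn anything at all.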

Let us review what I just said. When you go technical on the problem of how it is that a collection of quarks can ever know anything about reality, you learn that the collection of quarks must, at the time of its creation or incarnation, have a few preconceptions about reality: if it does not have these preconceptions, it just sits there and never does anything. In other words, the very act of designing an intelligence, of bringing an intelligence into being, requires making at least a few assumptions about reality.

I am here to suggest to you that these few assumptions about reality, meager and few as they are, are enough information to inform the long-range plans and efforts of a creature like you or me or anyone else who might be reading these words. The inductive biases, in other words, entail what, from the day-to-day perspective on life, look like a small set of simple normative beliefs or "normative imperatives". This small set of simple imperatives (that is, what I have been calling the inductive biases, though "inductive and behavioral biases" is probably a more appropriate term) interacts with objective reality, which is of course very complex, to yield very complex plans.

To summarize my system of value: to the best of my ability I will endeavor to act as if I were one of these idealized minds, these perfect Bayesian reasoners (and deciders). I will strive to do what I would have done if I were an ideal engine of rationality, with just a few simple well-chosen inductive biases.

As you can see, the strategy of my reply was to attack the idea that the is-ought gap can never be crossed. I have briefly outlined how I intend to cross it.

Response from TGGP: Also, it is quite a strange "machinery of reason" that includes "not reasoning" and "not thinking".

Hollerith replies: I agree, but I do not want to spend the time to fix it up, so I beg the reader to translate in his own mind "machinery of reason" into some more suitable phrase.

Here is as good a place as any to announce that I have other things going on in my life in Summer 2008 which will tend to make it impossible for me to devote as much time as I would have liked to these discussions. Meeting in person with one or two others to talk about these topics (rationality, the meaning of life, terminal values) does not conflict with or distract from the other things going on in my life the way that writing about them does, so I am still willing to meet in person in the Bay Area to further the discussion. Also, I will continue to have time to reply to direct questions of the sort that TGGP asked me in the last 24 hours.

Rejoinder from TGGP: You are correct that an agent without some normative presuppositions will just sit there. You then imagine some ideal Bayesian AI/agent and assume its norms are given. However, they must have been given by its designer (we will assume intentionally, since it is ideal and not random or a mistake). What determines what a designer would intend to put into their AI? The normative presuppositions of the designer! You are suggesting which normative presuppositions one ought to have by reference to those possessed by an imaginary agent which trace back to those of an imaginary designer, whose norms in turn we do not know. We know they created the AI, but for all we know the reason they created it was something bizarre like the desire to have it remove any evidence of its existence including in the mind of its creator. What rational reason is there to share such a norm? Your attack is doomed to fail. The is-ought gap is indeed uncrossable.

changed April 14, 2008