
November 21, 2007


Far too few people take the time to wonder what the purpose and function of happiness is.

Seeking happiness as an end in itself is usually extremely destructive. Like pain, pleasure is a method for getting us to seek out or avoid certain behaviors, and many of these behaviors had consequences whose properties could be easily understood in terms of the motivators. (Things are more complicated now that we're not living in the same world we evolved in.)

Instead of reasoning about goals, most people just produce complex systems of rationalizations to justify their desires. That's usually pretty destructive, too.

Far too few people take the time to wonder what the purpose and function of happiness is.

You're talking as if this purpose were a property of happiness itself, rather than something that we assign to it. As a matter of historical fact, the evolutionary function of happiness is quite clear. The meaning that we assign to happiness is an entirely separate issue.

Seeking happiness as an end in itself is usually extremely destructive.

Because, er, it makes people unhappy?

One *often* sees ethicists arguing that all desires are in principle reducible to the desire for happiness? How often? If you're talking about philosopher ethicists, in general you see them arguing against this view.

You're talking as if this purpose were a property of happiness itself, rather than something that we assign to it.

There are all sorts of realities that you cannot dictate at will. Purpose can be defined evolutionarily, and function is not a property that we assign. Do you 'assign' the function of your pancreas to it, or does it simply carry it out on its own?

Because, er, it makes people unhappy?

No, because it makes them dead. Or destroys them in a host of more subtle ways.

Yes, J, I very often see this. By sheer coincidence, for example, I was reading this by Shermer just now, and came across:

"I believe that humans are primarily driven to seek greater happiness, but the definition of such is completely personal and cannot be dictated and should not be controlled by any group. (Even so-called selfless acts of charity can be perceived as directed toward self-fulfillment--the act of making someone else feel good, makes us feel good. This is not a falsifiable statement, but it is observable in people's actions and feelings.) I believe that the free market--and the freer the better--is the best system yet devised for allowing all individuals to achieve greater levels of happiness."

Michael Shermer may or may not believe that all values reduce to happiness, but he is certainly "arguing as if" they do. Not every mistake has to be made primarily by professional analytic philosophers for it to be worth discussing.

Actually, seeking merely subjective happiness without any other greater purpose does often tend to make people unhappy. Or even if they manage to become somewhat happy, they will usually become even happier if they seek some other purpose as well.

One reason for this is that part of what makes people happy is their belief that they are seeking and attaining something good; so if they think they are seeking something better than happiness, they will tend to be happier than if they were seeking merely happiness.

Of course this probably wouldn't apply to a pleasure machine; presumably it is possible in principle to maximize subjective happiness without seeking any other goal. But like Eliezer, I wouldn't see this as particularly desirable.

I don't think a 'joy of scientific achievement' pill is possible. One could be made that would bathe you in it for a while, but your mental tongue would probe for your accomplishment and find nothing. Maybe the pill could just turn your attention away and carry you along, but I doubt it. Vilayanur Ramachandran gave a TED talk about the cause of people's phantom limbs - your brain detects an inconsistency and gets confused and induces pain. Something similar might prevent an 'achievement' pill from having the intended effect.

I'm nervous about the word happiness because I suspect it's a label for a basket of slippery ideas and sub-idea feelings. Still, something I don't understand about your argument is that when you demonstrate that for you happiness is not a terminal value you seem to arbitrarily stop the chain of reasoning. Terminating your inquiry is not the same as having a terminal value.

If you say you value something and I know that not everyone values that thing, I naturally wonder *why* you value it. You say it's a terminal value, but when I ask myself why you value it if someone else doesn't, I say to myself "it must make him happy to value that". In that sense, happiness may be a word we use as a terminal value by definition, not by evidence-- a convention like saying QED at the end of a proof. In the old days the terminal value was often "God wills it so", but with the rise of humanism in the Renaissance, the pursuit of happiness was born.

In the case where someone seems to be working against what they *say* makes them happy, that just means there are different kinds or facets or levels of happiness. Happiness is complex, but if there are no reasons beyond the final reason for taking an action, then as a conceptual convention the final reason must be happiness.

Now I will argue a little against that. What I've said up to now is based on the assumption that humans are teleonomic creatures with free will. But I think we are actually NOT such creatures. We do not exist to fulfill a purpose. So the concept of happiness, defined as it is, is a story that is pasted onto us, by us, so that we can pretend to have an ethereal conscious existence. I propose that the truth is ultimately that we do what we do because of the molecules and energy state that we possess, within the framework of our environment and the laws of physics.

I could say that eating makes me happy and that's why I do it, or I could say that the deeper truth is that my brain is constructed to feel happy about eating. I eat because of that mechanism, not because of the "happiness", which doesn't actually exist. We make up the story of happiness not because it makes us happy to do so, but because we are compelled to do so by our physical nature.

In the words of Jessica Rabbit, I'm just drawn that way.

I normally wouldn't take the scientific happiness pill because I seem to be constructed to enjoy feeling that my state of mind is substantially a product of my ongoing thoughts, not chemicals. To inject chemicals to change my thoughts is literally a form of suicide, to me. It takes the unique thought pattern that is ME, kills it, and replaces it with a thought pattern identical in some ways to anyone else who takes the pill. People alive are unique; death is the ultimate conformity and conformity a kind of death.

But the happiness illusion is complex enough that I may under some circumstances say yes to that pill and have that little suicide.

Eliezer, the exchange with Greg Stock reminds me strongly of Nozick's experience machine argument, and your position agrees with Nozick's conclusion.

One does, in real life, hear of drugs inducing a sense of major discovery, which disappears when the drug wears off. Sleep also has a reputation for producing false feelings of discovery. Some late-night pseudo-discovery is scribbled down, and in the morning it turns out to be nothing (if it's even legible).

I have sometimes wondered to what extent mysticism and "enlightenment" (satori) are centered around false feelings of discovery.

An ordinary, commonly experienced, non-drug-induced false feeling with seeming cognitive content is deja vu.

Eliezer, if we reduce every desire to "happiness" then haven't we just defined away the meaning of the word? I mean, love and the pursuit of knowledge and watching a scary movie are all rather different experiences. To say that they are all about happiness-- well then, what wouldn't be? If everything is about happiness, then happiness doesn't signify anything of meaning, does it?

James, are you purposefully parodying the materialist philosophy based on the disproved Newtonian physics?

Constant-- deja vu is not always necessarily contentless. See the work of Ian Stevenson.
Mystical experiences are not necessarily centered around anything false-- see "The Spiritual Brain", by Beauregard (the neuroscientist who has studied these phenomena more than any other researcher.)


There is potentially some confusion on the term 'value' here. Happiness is not my ultimate (personal) end. I aim at other things which in turn bring me happiness and as many have said, this brings me more happiness than if I aimed at it. In this sense, it is not the sole object of (personal) value to me. However, I believe that the only thing that is good for a person (including me) is their happiness (broadly construed). In that sense, it is the only thing of (personal) value to me. These are two different senses of value.

Psychological hedonists are talking about the former sense of value: that we aim at personal happiness. You also mentioned that others ('psychological utilitarians', to coin a term) might claim that we only aim at the sum of happiness. I think both of these are false, and in fact probably no-one solely aims at these things. However, I think that the most plausible ethical theories are variants of utilitarianism (and fairly sophisticated ones at that), which imply that the only thing that makes an individual's life go well is that individual's happiness (broadly construed).

You could quite coherently think that you would fight to avoid the pill and also that if it were slipped in your drink that your life would (personally) go better. Of course the major reason not to take it is that your real scientific breakthroughs benefit others too, but I gather that we are supposed to be bracketing this (obvious) possibility for the purposes of this discussion, and questioning whether you would/should take it in the absence of any external benefits. I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better). Such conflicts happen all the time.

My experience in philosophy is that it is fairly common for philosophers to espouse psychological hedonism, though I have never heard anyone argue for psychological utilitarianism. You appear to be arguing against both of these positions. There is a historical tradition of arguing for (ethical) utilitarianism. Even there, the trend is strongly against it these days and it is much more common to hear philosophers arguing that it is false. I'm not sure what you think of this position. From your comments above, it looks like you think it is false, but that may just be confusion about the word 'value'.

If I admitted that I found the idea of being a "wirehead" very appealing, would you think less of me?

So how about anti depressants (think SSRI à la Prozac)? They might not be Huxley's soma or quite as convincing as the pill described in the post, but still, they do simulate something that may be considered happiness. And I'm told it also works for people who aren't depressed. Or for that matter, a whole lot of other drugs such as MDMA.

Thinking about it, "simulate" is entirely the wrong word, really. If they really work, they achieve something along the lines of happiness and do not just simulate it. Sorry about the double post.

Toby, I think you should probably have mentioned Derek Parfit as a reference when stating that "I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better). Such conflicts happen all the time.", as the claim needs substantial background to be obvious, but as I'm mentioning him here you don't need to any more.

Robin Hanson seems to take the simulation argument seriously. If it is the case that our reality is simulated, then aren't we already in a holodeck? So then what's so bad about going from this holodeck to another?

I agree with Eliezer here. Not all values can be reduced to desire for happiness. For some of us, the desire not to be wireheaded or drugged into happiness is at least as strong as the desire for happiness. This shouldn't be a surprise since there were and still are psychoactive substances in our environment of evolutionary adaptation.

I think we also have a more general mechanism of aversion towards triviality, where any terminal value that becomes "too easy" loses its value (psychologically, not just over evolutionary time). I'm guessing this is probably because many of our terminal values (art, science, etc.) exist because they helped our ancestors attract mates by signaling genetic superiority. But you can't demonstrate genetic superiority by doing something easy.

Toby, I read your comment several times, but still can't figure out what distinction you are trying to draw between the two senses of value. Can you give an example or thought experiment, where valuing happiness in one sense would lead you to do one thing, and valuing it in the other sense would lead you to do something else?

Michael, do you have a more specific reference to something Parfit has written?

So then what's so bad about going from this holodeck to another?

The premise of the simulation argument is that this whole universe, including us, is simulated: we ourselves are part of the simulation. Since we are, and we know we are conscious, we know that simulated beings can be (and very likely are) conscious if they seem so. If they are, then they are "real" in an important sense, maybe the most important sense. They are not mere mindless wallpaper.

I think in order to make the simulation argument work, the simulation needs to be unreal, the inhabitants other than the person being fooled must have no inner reality of their own. Because if they have an inner reality, then in an important sense they are real and so the point of the thought experiment is lost.

I fail to understand how the "mindless wallpaper" of the next level of simulation must be "unreal" while our simulated selves "are and we know we are conscious". They cannot be unreal merely because they are simulations because in the thought-experiment we ourselves are simulations but, according to you, still real.

I fail to understand how the "mindless wallpaper" of the next level of simulation must be "unreal" while our simulated selves "are and we know we are conscious". They cannot be unreal merely because they are simulations because in the thought-experiment we ourselves are simulations but, according to you, still real.

No, you completely misunderstood what I said. I did not say that the "mindless wallpaper" (scare quotes) of the next level must be unreal. I said that in order for the philosophical thought experiment to make the point it's being used to make, the mindless wallpaper (no scare quotes - this is the actual term Eliezer used) needs to be assumed mindless. In real life, I fully expect a simulated person to have an internal self, to be real in the sense of having consciousness. But what I fully expect is totally irrelevant.

We're talking philosophical stories. Are you familiar with the story about another planet that has a substance XYZ that is just like water but has a different chemical composition from water? Well, in real life, I fully expect that there is no such substance. But in order for the thought experiment to make the philosophical point it's being used to make we need to grant that there is such a substance. Same thing with the mindless wallpaper. We must assume mindlessness, or else the thought experiment just doesn't work.

If you want to be totally stubborn on this point, then fine, we just need to switch to a different thought experiment to make the same point. The drug that induces the (mistaken) feeling that the drugged person has achieved a scientific discovery doesn't suffer from that problem. Of course, if you want to be totally stubborn about the possibility of such a drug, we'll just have to come up with another thought experiment.

TGGP, the presumption is that the sex partners in this simulation have behaviors driven by a different algorithm, not software based on the human mind, software which is not conscious but is nonetheless capable of fooling a real person embedded in the simulation. Like a very advanced chatbot.

"Simulation" is a silly term. Whatever is, is real.

""Simulation" is a silly term. Whatever is, is real."

This is true, but "simulation" is still a useful word; it's used to refer to a subset of reality which attempts to resemble the whole thing (or a subset of it), but is not causally closed. "Reality", as we use the word, refers to the whole big mess which *is* causally closed.

Wei, yes my comment was less clear than I was hoping. I was talking about the distinction between 'psychological hedonism' and 'hedonism', and I also mentioned the many-person versions of these theories ('psychological utilitarianism' and 'utilitarianism'). Let's forget about the many-person versions for the moment and just look at the simple theories.

Hedonism is the theory that the only thing good for each individual is his or her happiness. If you have two worlds, A and B, and Mary's happiness is higher in world A, then world A is better for Mary. This is a theory of what makes someone's life go well, or to put it another way, about what is of objective value in a person's life. It is often used as a component of an ethical theory such as utilitarianism.

Psychological hedonism is the theory that people ultimately aim to increase their happiness. Thus, if they can do one of two acts, X and Y, and realise that X will increase their happiness more than Y, they will do X. This is not a theory of what makes someone's life go well, or a theory of ethics. It is merely a theory of psychological motivation. In other words, it is a scientific hypothesis which says that people are wired up so that they are ultimately pursuing their own happiness.

There is some connection between these theories, but it is quite possible to hold one and not the other. For example, I think that hedonism is true but psychological hedonism is false. I even think this can be a good thing since people get more happiness when not directly aiming at it. Helping your lover because you love them leads to more happiness than helping them in order to get more happiness. It is also quite possible to accept psychological hedonism and not hedonism. You might think that people are motivated to increase their happiness, but that they shouldn't be. For example, it might be best for them to live a profound life, not a happy one.

Each theory says that happiness is the ultimate thing of value in a certain sense, but these are different senses. The first is about what I would call actual value: it is about the type of value that is involved in a 'should' claim. It is normative. The second is about what people are actually motivated to do. It is involved in 'would' claims.

Eliezer has shown that he does care about some of the things that make him happy over and above the happiness they bring, however he asked:

'The question, rather, is whether we *should* care about the things that make us happy, apart from any happiness they bring.'

Whether he *would* do something and whether he *should* are different things, and I'm not satisfied that he has answered the latter.

Toby, what are your grounds for thinking that (ethical) hedonism is true, other than that happiness appears to be something that almost everyone wants? Is it something you just find so obvious you can't question it, or are there reasons that you can describe? (The obvious reason seems to me to be "We can produce something that's at least roughly right this way, and it's nice and simple". Something along those lines?)

g, you have suggested a few of my reasons. I have thought quite a lot about this and could write many pages, but I will just give an outline here.

(1) Almost everything we want (for ourselves) increases our happiness. Many of these things evidently have no intrinsic value themselves (such as Eliezer's ice-cream case). We often think we want them intrinsically, but on closer inspection, if we really ask whether we would want them if they didn't make us happy we find the answer is 'no'. Some people think that certain things resist this argument by having some intrinsic value even without contributing to happiness. I am not convinced by any of these examples and have an alternative explanation of my opponents' views: they are having difficulty really imagining the case without any happiness accruing.

(2) I think that our lives cannot go better based on things that don't affect our mental states (such as based on what someone else does behind closed doors). If you accept this, that our lives are a function of our mental states, then happiness (broadly construed) seems the best explanation of what it is about our mental states that makes a possible life more valuable than another.

(3) I have some sympathy with preference accounts, but they are liable to count too many preferences, leading to double counting (my wife and I each prefer the other's life to go better even if we never find out, so do we count twice as much as single people?) and preferences based on false beliefs (wanting to drive a Ferrari because of a false belief that they are safer). Once we start ruling out the inappropriate preference types and saying that only the remaining ones count, it seems to me that this just leads back to hedonism.

Note that I'm saying that I think happiness is the only factor in determining whether a life goes well in a particular sense; this needn't be the same as the most interesting life or the most ethical life. Indeed, I think the most ethical life is the one that leads to the greatest sum of happiness across all lives (utilitarianism). I'm not completely convinced of any of this, but am far more convinced than I am by any rival theories.

Toby, how do you get around the problem that the greatest sum of happiness across all lives probably involves turning everyone into wireheads and putting them in vats? Or in an even more extreme scenario, turning the universe into computers that all do nothing but repeatedly run a program that simulates a person in an ultimate state of happiness. Assuming that we have access to limited resources, these methods seem to maximize happiness for a given amount of resources.

I'm sure you agree that this is not something we do want. Do you think that it is something we should want, or that the greatest sum of happiness across all lives can be achieved in some other way?

In a slogan, one wants to be both happy and worthy of happiness. (One needn't incorporate Kant's own criteria of worthiness to find his formulation useful.)

No slogans :)


Drake, what do you mean by "worthy of happiness"? How does that formulation differ, for example, from my desire to both be happy and continue to exist as myself? (It seems to me like the latter desire also explains the pro-happiness, anti-blissing-out attitude.)

To the stars!

"The pills make it clearer."

You said it big man.

I value many things intrinsically! This may make me happy or not, but I don't rely on feelings of possible happiness when I make decisions. I see intrinsic value in happiness itself, but also as a means to other values, such as art, science, beauty, complexity, truth, etc., which I often value even more than happiness. But sentient life may be the highest value. Why would we accept happiness as our highest terminal value when it is just a way to make living organisms do certain things? Of course it feels good and is important, but it is still rather arbitrary. I think these things are rather important if we don't want to end up wireheaded. Complexity/beauty may be my second highest value after sentience; happiness may only come third, then maybe truth and logic... Well, I will write more about this later...

According to the theory of evolution, organisms can be expected to have approximately one terminal value - which is - very roughly speaking - making copies of their genomes. There /is/ intragenomic conflict, of course, but that's a bit of a detail in this context.

Organisms that deviate very much from this tend to be irrational, malfunctioning or broken.

The idea that there are some values not reducible to happiness does not prove that there are "a lot of terminal values".

Happiness was never God's utility function in the first place. Happiness is just a carrot.

A common misconception, Tim. See Evolutionary Psychology.

It seems like a vague reply - since the supposed misconception is not specified.

The "Evolutionary Psychology" post makes the point that values reside in brains, while evolutionary causes lie in ancestors. So, supposedly, if I attribute goals to a petunia, I am making a category error.

This argument is very literal-minded. When biologists talk about plants having the goal of spreading their seed about, it's intended as shorthand. Sure they /could/ say that the plant's ancestors exhibited differential reproductive success in seed distribution, and that explains the observed seed distribution adaptations, but it's easier to say that the plant wants to spread its seeds about. Everyone knows what you really mean - and the interpretation that the plant has a brain and exhibits intentional thought is ridiculous.

Richard Dawkins faced a similar criticism a lot, with his "selfish" genes. The number of times he had to explain that this was intended as a metaphor was enormous.

Happiness is just a carrot.

And reproductive fitness is just a way to add intelligent agents to a dumb universe that began with a big bang. Now that the intelligent agents are here, I suspect the universe no longer needs reproductive fitness.

Tim, if you understand that the "values" of evolution qua optimization process are not the values of the organisms it produces, what was the point of your 12:20 PM comment of March 6? "Terminal values" in the post refers to the terminal values of organisms. It is, as Eliezer points out, an empirical fact that people don't consciously seek to maximize fitness or any one simple value. Sure, that makes us "irrational, malfunctioning or broken" by the metaphorical standards of some metaphorical personification of evolution, but I should think that's rather beside the point.

Brains are built by genes. Those brains that reflect the optimisation target of the genes are the ones that will become ancestors. So it is reasonable - on grounds of basic evolutionary biology - to expect that human brains will generate behaviour resulting in the production of babies - thus reflecting the target of the optimisation process that constructed them.

In point of fact, human brains /do/ seem to be pretty good at making babies. The vast majority of their actions can be explained on these grounds.

That is not to say that people will necessarily *consciously* seek to maximize their expected fitness. People lie to themselves about their motives all the time - partly in order to convincingly mislead others. Consciousness is more like the PR department than the head office.

Of course, not all human brains are maximizing their expected fitness very well. I'm not claiming that nuns and priests are necessarily maximizing their expected fitness. The plasticity of brains is useful - but it means that they can get infected by memes, which may not have their owner's best interests at heart. Such infected minds sometimes serve the replication of their memes - rather than their owner's genes. Such individuals may well have a complex terminal value, composed of many parts. However, those are individuals who - from the point of view of their genes - have had their primary utility function hijacked - and thus are malfunctioning or broken.

Is maximizing your expected reproductive fitness your primary goal in life, Tim?

When you see others maximizing their expected reproductive fitness, does that make you happy? Do you approve? Do you try to help them when you can?

More details of my views on the subject can be found here.

Biology doesn't necessarily predict that organisms should help each other, or that the success of others should be viewed positively - especially not if the organisms are rivals and compete for common resources.

More details of my views on the subject can be found here.

With the rise of "open source biology" in the coming decades, you'll probably be able to sequence your own non-coding DNA and create a pack of customized cockroaches. Here are your Nietzschean uebermensch: they'll share approx. 98% of your genome and do a fine job of maximizing your reproductive fitness.

Customized cockroaches are far from optimal for Tim because Tim understands that the most powerful tool for maximizing reproductive fitness is a human-like consciousness. "Consciousness" is Tim's term; I would have used John Stewart's term, "skill at mental modelling." Thanks for the comprehensive answer to my question, Tim!

Re: genetic immortality via customized cockroaches:

Junk DNA isn't immortal. It is overwritten by mutations, LINEs and SINEs, etc. In a geological eyeblink, the useless chromosomes would be simply deleted - rendering the proposal ineffective.

