
November 15, 2007

Comments

The distinction between instrumental values and terminal values is useful in thinking about political and economic issues (the 2 areas I've thought about so far…)
I’m running into a problem with ‘terminal’ values, and I wonder if this isn’t typical.
A terminal value implies the future in a way that an instrumental value does not. The instrumental value is for an action carried out in a finite time and leads to an outcome in the foreseeable future. A terminal value posits all futures—this is an endless recursive algorithm. (At least I don't have an end to the future in my thinking now).
When I ask myself, “How do I want things to be in the future?” I can carry this question out only so far, but my concept of the future goes well beyond any currently imaginable scenarios.

Eliezer, what's this with your recent bias against boredom?
Are you sure it's rational or efficient or even simply useful in any way to cultivate a constant (and possibly boring) battle against boredom?

Douglas, in *principle* you ought to consider the entire state of the future universe when you set a terminal value. "I want my sister not to be killed in the next few weeks by flesh-eating bacteria" is a vague goal. "My sister not being killed by flesh-eating bacteria because the world fell into a black hole and tidal effects killed her" is not an adequate alternative.

In practice we set terminal values as if they're independent of everything else. I assume that giving my sister penicillin will not have any side effects I haven't considered. As far as I know she isn't allergic to penicillin. If it will bankrupt me then that's something I will consider. I assume the drug company is not sending its profits to support al qaeda unless somebody comes out and claims it is and the mass media take the claim seriously. I assume the drug company won't use my money to lobby for things I'd disapprove of. I completely ignore the fact that my sister's kidneys will remove the penicillin and she'll repeatedly dose her toilet with a dilute penicillin solution that will encourage the spread of penicillin-resistant bacteria. If I did think about that I might want her to save her urine so it could be treated to destroy the penicillin before it's thrown away.

In practice people think about what they want, and they think about important side effects they have learned to consider, and that's all. If we actually had a holistic view of things we would be very different people.

What is the difference between moral terminal values and terminal values in general? At first glance, the former considers other beings, whereas the latter may only consider oneself -- can someone make this more precise?

> What is the difference between moral terminal values and terminal values in general? At first glance, the former considers other beings, whereas the latter may only consider oneself -- can someone make this more precise?

Huh? Considering only oneself is less general than considering everything.

In moral arguments, some disputes are about instrumental consequences, and some disputes are about terminal values. If your debating opponent says that banning guns will lead to lower crime, and you say that banning guns will lead to higher crime, then you agree about a superior instrumental value (crime is bad), but you disagree about which intermediate events lead to which consequences. ...
This important distinction often gets flushed down the toilet in angry arguments. People with factual disagreements and shared values, each decide that their debating opponents must be sociopaths.

I don't think it's possible to find a truer statement about political debates on the internet.

I've lost count of how many exchanges I've been in that have gone like this:

me: Plan X would better reduce environmental impact at lower cost.
them: So, in other words, you think the whole global warming thing is a myth?

***

And then, of course, people sometimes can't keep straight *which* consequence you're debating:

me: The method you've described does not show a viable way to produce intellectual works for-profit without IP.
them: I disagree with your claim that no one has ever produced any intellectual works without IP protection.

I’m running into a problem with ‘terminal’ values . . .

A terminal value posits all futures — this is an endless recursive algorithm. (At least I don't have an end to the future in my thinking now).

I believe this is a real problem, and my way of resolving it is to push my terminal values indefinitely far into the future, so for example in my system for valuing things, only causal chains of indefinite length have nonzero intrinsic importance or value. To read a fuller account, click on my name.

I simply want to express my great appreciation for Eliezer's substantial efforts to share his observations of the journey, his willingness (in principle) to update his beliefs, and his presently ongoing integration of the epistemologically undeniable "subjective" with the hardcore reductionist "objective." I'm joyfully anticipating what comes next!

is that washington avenue in south beach? are there many publix stores outside of florida?

Peter de Blanc
"Huh? Considering only oneself is less general than considering everything."

Certainly. But can you give a succinct way of distinguishing moral terminal values from other terminal values?

> Certainly. But can you give a succinct way of distinguishing moral terminal values from other terminal values?

No. What other sorts of terminal values did you have in mind?

Good post!

Peter de Blanc
> No. What other sorts of terminal values [other than moral] did you have in mind?

Well, one could have a terminal value of making themselves happy at all costs, without any regard for whether it harms others. A sadist could have the terminal value of causing pain to others. I wouldn't call those moral. I'm interested in a succinct differentiation between moral and other terminal values.

Josh, I would say that making oneself happy is a morality, and so is causing pain to others. It sure isn't our morality. If you could find a short definition of our morality, I would be totally amazed.

J Thomas--"in *principle* you ought to consider the entire state of the future universe when you set a terminal value."
Yes, and in practice we don't. But as I look further into the future to see the consequences of my terminal value(s), that's when the trouble begins.

igor--I want to defend Eliezer's bias against boredom. It seems that many of the 'most moral' terminal values (total freedom, complete knowledge, endless bliss...) would end up in a condition of hideous boredom.
Maybe that's why we don't achieve them.

Richard- I read your post. I agree with the conclusions to a large extent, but totally disagree with the premises. (For example, I think the only valuable thing is subjective experience.) Isn't that amazing?

I have a question about this picture.

Imagine you have something like a chess playing program. It's got some sort of basic position evaluation function, then uses some sort of look ahead to assign values to the instrumental nodes based on the terminal nodes you anticipate along the path. But unless the game actually ends at the terminal node, it's only "terminal" in the sense that that's where you choose to stop calculating. There's nothing really special about them.

Human beings are different from the chess program in that for us the game never ends; there are no "true" terminal nodes. As you point out, we care what happens after we are dead. So wouldn't it be true that in a sense there's nothing but instrumental values, that a "terminal value" just means a point at which we've chosen to stop calculating, rather than saying something about the situation itself?
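A minimal Python sketch of the distinction being drawn here; is_game_over, evaluate, and successors are hypothetical stand-ins for a real engine's rule set and static evaluation function:

```python
def search_value(position, depth, is_game_over, evaluate, successors):
    """Depth-limited lookahead over a game tree.

    Positions where the game has actually ended are terminal in the strong
    sense; positions reached at depth == 0 are only "terminal" because we
    chose to stop calculating there. (For brevity this ignores the min/max
    alternation between the two players.)
    """
    if is_game_over(position):      # true terminal node: win, loss, or draw
        return evaluate(position)
    if depth == 0:                  # pseudo-terminal: the calculation horizon
        return evaluate(position)   # a heuristic guess, not a real outcome
    # Instrumental nodes take their value from the nodes they lead to.
    return max(search_value(p, depth - 1, is_game_over, evaluate, successors)
               for p in successors(position))
```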

The very first "compilation" I would suggest to your choice system would be to calculate the "Expected Utility of Success" for each Action.

1) It is rational to be prejudiced against Actions with a large difference between their "Expected Utility of Success" and their "Expected Utility", even if that action might have the highest "Expected Utility". People with a low tolerance for risk (constitutionally) would find the possible downside of such actions unacceptable.

2) Knowing the "Expected Utility of Success" gives information for future planning if success is realized. If success might be "winning a Hummer SUV in a raffle in December", it would probably be irrational to construct a "too small" car port in November, even with success being non-certain.
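A minimal numerical sketch of the suggestion above, with made-up actions and utilities; "Expected Utility of Success" is read here as the utility conditional on the action succeeding:

```python
# Each hypothetical action maps to (P(success), utility if it succeeds,
# utility if it fails). The numbers are invented purely for illustration.
actions = {
    "risky_raffle": (0.01, 50_000.0, -10.0),
    "safe_option":  (0.95,    200.0, -50.0),
}

for name, (p, u_success, u_failure) in actions.items():
    expected_utility = p * u_success + (1 - p) * u_failure
    eu_of_success = u_success                  # utility conditional on success
    gap = eu_of_success - expected_utility     # what a risk-averse chooser watches
    print(f"{name}: EU={expected_utility:.1f}  EU|success={eu_of_success:.1f}  gap={gap:.1f}")
```

With these invented numbers the risky raffle has the higher expected utility but an enormous gap, which is exactly the case point 1 warns about.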

Eliezer, I have a question.

In a simple model, how best to avoid the failure mode of taking a course of action with an unacceptable chance of leading to catastrophic failure? I am inclined to compute separately, for each action, its probability of leading to a catastrophic failure, and immediately exclude from further consideration those actions that cross a certain threshold.

Is this how you would proceed?
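A minimal sketch of the veto rule being proposed, with invented numbers: drop any action whose estimated probability of catastrophe exceeds a threshold, then pick the best survivor by expected utility:

```python
CATASTROPHE_THRESHOLD = 0.01

candidates = {
    # hypothetical action: (P(catastrophic failure), expected utility)
    "aggressive_plan":   (0.05,  120.0),
    "conservative_plan": (0.001,  80.0),
}

# Keep only actions under the catastrophe threshold, then maximize EU.
admissible = {name: eu for name, (p_cat, eu) in candidates.items()
              if p_cat <= CATASTROPHE_THRESHOLD}
best = max(admissible, key=admissible.get) if admissible else None
print(best)  # -> conservative_plan: the higher-EU plan was vetoed outright
```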

it's only "terminal" in the sense that that's where you choose to stop calculating..

No, the way Eliezer is using "terminal value", only the positions that are wins, losses or draws are terminal values for the chess-playing agent.

So wouldn't it be true that a "terminal value" just means a point at which we've chosen to stop calculating, rather than saying something about the situation itself?

Neither. A terminal value says something about the preferences of the intelligent agent.

And Eliezer asked us to imagine for a moment a hypothetical agent that never "stops calculating" until the rules of the game say the game is over. That is what the following text was for.

This is a mathematically simple sketch of a decision system. It is not an efficient way to compute decisions in the real world.

Suppose, for example, that you need a sequence of acts to carry out a plan? The formalism can easily represent this by letting each Action stand for a whole sequence. But this creates an exponentially large space, like the space of all sentences you can type in 100 letters. As a simple example, if one of the possible acts on the first turn is "Shoot my own foot off", a human planner will decide this is a bad idea generally - eliminate all sequences beginning with this action. But we've flattened this structure out of our representation. We don't have sequences of acts, just flat "actions".

So, yes, there are a few minor complications. Obviously so, or we'd just run out and build a real AI this way. In that sense, it's much the same as Bayesian probability theory itself.

But this is one of those times when it's a surprisingly good idea to consider the absurdly simple version before adding in any high-falutin' complications.
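For concreteness, here is a minimal rendering of the "absurdly simple" flat decision system the quoted passage describes: each flat Action gets an expected utility by summing over Outcomes, and the agent takes the argmax. The probability table and utilities below are invented for illustration:

```python
utilities = {"foot_intact": 100.0, "foot_shot_off": -1000.0}

# P(outcome | action) for each flat action
p_outcome_given_action = {
    "shoot_own_foot": {"foot_intact": 0.010, "foot_shot_off": 0.990},
    "do_nothing":     {"foot_intact": 0.999, "foot_shot_off": 0.001},
}

def expected_utility(action):
    # Sum P(outcome | action) * U(outcome) over all outcomes.
    return sum(p * utilities[outcome]
               for outcome, p in p_outcome_given_action[action].items())

best_action = max(p_outcome_given_action, key=expected_utility)
print(best_action)  # -> do_nothing
```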

Terminal values sound, essentially, like moral axioms - they are, after all, terminal. (If they had a basis in a specific future, it would be a question of what, specifically, about that future is appealing - and that quality would, in turn, become a new terminal value.) When treating morality as a logical system, it would simplify your language in explaining yourself somewhat, I think, to describe them as such - particularly since once you have done so, Godel's theorem goes a long way towards explaining why you can't rationalize a conceptual terminal value down any further. (They are very interesting axioms, since we can only consistently treat them conceptually and as variables, but nevertheless axiomatic in nature.)

Speaking of people coming to think of B as a good thing itself, many of those in favour of banning guns treat gun abolition as a terminal value in its own right - challenging those in favour of gun freedoms to prove that guns reduce crime, rather than asserting that they increase it. That is, they treat the abolition of guns as a positive thing in its own right, and only the improvement of another positive thing, say, by reducing crime, can balance the inherent evil of permitting people to own guns.

Adirian, re gun control, are you sure? I haven't studied people's attitudes to that issue, but what you describe sounds very strange and quite unlike the thought processes of the only pro-gun-control person whose thought processes I know really well, namely me. Allowing people to do things is (in itself) just about always positive; gun control is desirable (if it is) because of effects such as (allegedly) reducing gun crime, reducing accidents involving guns, making it less likely that people will think of killing people as a natural way to deal with conflicts, etc.

At least, that's how I think, and so far as I can tell from the few gun control discussions I've been in it's also how other people who are in favour of gun control think. I'd guess (though obviously I could be very wrong) that anyone who thinks of either gun abolition or gun ownership as a terminal value or disvalue is doing so as a cognitive shorthand, having already come to some strong opinion on the likely consequences of having more guns or fewer guns.

I'm sure there are plenty of people for whom guns produce a positive or negative visceral reaction (e.g., because they're seen as representing gratuitous violence, or freedom, or power over potential attackers, or something). I don't think that's the same thing as treating gun abolition or gun ownership as a terminal value; it's just another source of bias which, if they're wise, they'll try to overcome when thinking about the issue. (Few people are wise.)

It's hardly surprising if pro-gun-control people prefer to frame the issue by challenging their opponents to show that guns reduce crime, or if anti-gun-control people prefer to frame it by challenging theirs to show that guns increase crime. Everyone likes to put the burden of proof on their opponents. (Remark: "Burden of proof" is a rather silly phrase. What's really involved in saying that the burden of proof lies on the advocates of position X is the claim that the probability of X, prior to any nonobvious arguments that might be offered, is low. This is a nice example of something Eliezer has pointed out a few times: we tend to phrase what we say about reasoning in quasi-moral terms -- A "owes" B some evidence, B has "justified" her position, etc. -- when it is generally more useful to think in terms of probability-updating. Or belief-updating or something, if for some reason you don't like using the term "probability" for these things. End of remark.)

I don't understand your appeal to Goedel's theorem. Thinking of ethics as (like) a logical system and applying Goedel might lead to some conclusion like "There will always be situations for which your principles yield no clear answer", though actually I don't see why anyone would expect the conditions of Goedel's theorem to hold in this context so I'm not even convinced of that; but once you decide to think of terminal values as axioms you've *already* explained (kinda) "why you can't rationalize a conceptual terminal value down any further".
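A minimal sketch of the probability-updating reading of "burden of proof" from the remark above: the side carrying the burden simply starts from a low prior, so the same strength of evidence leaves it less convinced. The numbers are illustrative only:

```python
def update(prior, likelihood_ratio):
    """Posterior probability via Bayes' rule in odds form."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(update(0.1, 4))  # low prior: evidence of strength 4 only reaches ~0.31
print(update(0.5, 4))  # even prior: the same evidence reaches 0.80
```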

It is a terminal value, however - you are regarding B as something in its own right, something other than a stage from which to get to C. To exactly the extent that you permit your visceral reaction to the guns themselves to shape your opinion, you are treating the abolition or freedom to use guns as an end, rather than a means. (To reduce crime or promote freedom generally, respectively.) Remember that morality itself is the use of bias - on deciding between two ethical structures which is the better based on subjectively defined values - so to say that something is bias in a moral framework means that it is being treated as a moral axiom, a terminal value.

Your commentary means one of two things - either you don't believe ethics is a rational system to which logic can be applied, or you don't accept that axioms have a place in ethics. Addressing the latter, it is certain that they do, as in any rational system. At the very least you must accept the axioms of definition - among which will be those axioms, those values, by which you judge the merits of any given situation or course of action. "Death is bad" can be an axiom or a derived value - but in order to be derived, you must posit an axiom by which it can be derived, say, that "Thinking is good," and then reason from there, by stating, for example, that death stops the process of thinking. Which applies no matter which direction you come from - from the side of the axioms, trying to discover what situations are best, or from the side of the derived values, trying to figure out what axioms led to their derivation.

Regarding the latter argument - then you take ethics itself as a thing which cannot further be defined, and so claim that morality is itself the terminal value, the axiom. Which I don't think would be your position.

I think there's a distinction that I'm trying to make and you're trying to elide, between actually thinking something's a terminal value and behaving sometimes as if it is. Obviously all of us, all of the time, have all sorts of things that we treat as values without thinking through their consequences, and typically they fluctuate according to things like how hungry we are. If all you meant is that some people have an "eww" reaction to guns then sure, I agree (though I find it odd that you chose to remark on that and not on the equally clear fact that some people have an "ooo" reaction to guns) and we're merely debating about words.

I have literally no idea on what basis you say that I either don't believe ethics is a rational system to which logic can be applied or don't accept that axioms have a place in ethics. For what it's worth, I think any given system of ethics (including the One True System Of Ethics if there is one) is a somewhat-rational system to which logic can be applied, and that there's a place for first principles, but that ethics isn't all that much like mathematical logic and that terms like "axiom" are liable to mislead. And I certainly don't think that any real person's ethics are derived from any manageable set of clearly statable axioms. (One can go the other way and find "axioms" that do a tolerable job of generating ethics, but that doesn't mean that those axioms actually did generate anyone's ethics.)

I also have no idea how you get from "axioms have no place in ethics" to "morality itself is a terminal value and an axiom". Unless all you mean is that whatever ethics anyone adopts, you can just take *absolutely everything they think about right and wrong* as axioms, which is possibly true but useless.

Our behavior is nothing more than the expression of our thoughts. If we behave as though something is a terminal value - we are doing nothing more than expressing our intents and regards, which is to say, we THINK of it as a terminal value. There is no distinction between physical action and mental thought, or between what is in our heads and what comes out of our mouths - our mind moves our muscles, and our thoughts direct our voice. There is no "actual thought" and - what? Nonactual thought? As if your body operated of its own will, acting against what your actual thoughts are. The mind is responsible for what the body does. I'm not eluding the distinction. I'm denying it.

Your language explains precisely why I said that you don't believe ethics is rational. Somewhat-rational means irrational - that is, something that is rational only some of the time is, in fact, irrational. Either a thing is rational, and logic can reasonably and consistently be applied to it - or it isn't. There isn't "mathematical logic" and then "otherwise logic." Many have been going to great lengths to explain, among other things, how Bayesian Reasoning - derived entirely from a pretty little formula which is quite mathematical - is meaningful in daily thinking. There is just logic. It's the same logic in mathematics as it is in philosophy. It is only the axioms - the definitions - which vary.

Because axioms exist where rationality begins - that is their purpose. They are the definitions, the borders, from which rationality starts.

Incidentally, if you don't think ethics is like mathematical logic, and you've been reading and agreeing with anything Eliezer posts on the subject, you should take a foundations of mathematics course. He is going to great lengths to describe ethics in a way that is extremely mathematical, if the language has been stripped away for legibility. (For example, he explains infinite recursion, rather than using the word.) Which may, of course, be why he avoids the use of the word "axiom," and instead simply explains it.
I'd also recommend a classical philosophy course - because the very FIELD of ethics is derived from precisely the thing you are suggesting is ridiculous, the search for mathematical, for logical, expressions of morality. The root of which I think it is clear is the value code upon which an individual builds their morality - a thing without rational value in itself, save as a definition, save as an axiom.

That is almost what I meant by axioms. Values. Terminal values, specifically. And also the basis of any individual's ethical code. The entire point of my post was linguistics - hence the sentence that axioms would be a simpler way of explaining terminal values. What I meant by "morality itself is a terminal value and an axiom," however, is akin to what you suggest - it is that if morality is treated as an irrational entity, as you seem wont to do, then yes, absolutely everything someone thinks about right and wrong must be treated in a rational ethical system as an axiom. Which is, as you say, possibly true - but thoroughly worthless.

Adirian, I have done post-doctoral research in pure mathematics; I don't need a course in the foundations of mathematics. But thanks for the suggestion. And I've read plenty of philosophy, and so far as I can judge I've understood it well. Of course none of that means that I'm not the idiot you clearly take me for, but as it happens I don't think I am :-).

I didn't say "eluding", I said "eliding". "Denying" is fine, too. I understand why you think the distinction is unreal. I disagree, not because I imagine that there's some fundamental discontinuity between thought and action, but (ironically, in view of the other stuff going on in this discussion) because our thoughts are logically (and often not quite so logically) connected to one another in ways that our actions and feelings aren't. If on one occasion my visceral response when thinking about guns is "eww, killing and violence and stuff" and on another it's "ooo, power and freedom and stuff" then I'm not guilty of any inconsistency, whereas anything that seriously purports to be a moral system rather than just a vague fog of preferences needs to choose, or at least to assign consistent weights to those considerations.

"Somewhat rational" does not mean "irrational". There are three different ways in which something can be said to be rational. (1) That reason can be applied to it. Duh, reason can be applied to *everything*. (2) That it's prosecuted by means of reason. Ethical thought sometimes proceeds by means of reason, and sometimes not. Hence, "somewhat rational". (3) That applying reason to it doesn't show up inconsistencies. Perhaps some people have (near enough) perfectly consistent ethical positions. Certainly most people don't. It's not unheard of for philosophers to advocate embracing that inconsistency. But generally there's some degree of consistency, and sufficiently gross inconsistencies can prompt revision. Hence, again, "somewhat rational".

I haven't suggested that looking for logical expressions of morality is "ridiculous", and once again I have literally no idea where you get that idea from. You have repeatedly made claims about what I think and why, and you've been consistently wrong. You might want to reconsider whatever methods you're using for guessing. (I apologize if I've done likewise to you, though I don't think I have.)

I feel like I ought to make my ritual attempt to fly the deontology flag on this site by reference to the possibility of attaching do/don't do evaluations directly to actions without reference to any outcome-evaluations at all.

Yet... the end of this post might actually be the most interesting argument I've heard in a while for the existence and permanence of what Rawls calls "the fact of reasonable pluralism" -- Eliezer offers us the useful notion that interconnections between our values are so computationally messy that there is just no way to reconcile them all and come to agreement on actual social positions without artificially constraining the decision-space.

I think that part of the problem here is that humans are actually structured in a manner that leads to instrumental values fairly easily becoming terminal values, especially in the case of intense instrumental values. Furthermore, we place a terminal value on this fact about ourselves, at least with regard to positive instrumentalities becoming positive terminal values. A big part of liberalism is essentially the decision not to let negative instrumental values become negative terminal values.

I have difficulty interpreting the following paragraphs, could you expand on them? Are you equating sociopathy with differing terminal values?

"In moral arguments, some disputes are about instrumental consequences, and some disputes are about terminal values. If your debating opponent says that banning guns will lead to lower crime, and you say that banning guns lead to higher crime, then you agree about a superior instrumental value (crime is bad), but you disagree about which intermediate events lead to which consequences. But I do not think an argument about female circumcision is really a factual argument about how to best achieve a shared value of treating women fairly or making them happy.

This important distinction often gets flushed down the toilet in angry arguments. People with factual disagreements and shared values, each decide that their debating opponents must be sociopaths. As if your hated enemy, gun control / rights advocates, really wanted to kill people, which should be implausible as realistic psychology."

This post crystallizes some arguments I've been trying to make in decision theory. Certain representations of decision theory suggest that propositions (or "events") get values, but I've thought that only "states" (maximal descriptions of the complete state of the world) should get values. Their position, as far as I can tell, comes down to thinking that since every proposition has an expected value, we can use this as the value of the proposition. Thinking of this as a type error cuts right through that. (ps, I'm a philosopher too, arguing against some other philosophers - I don't think there's a disciplinary boundary issue here, though perhaps some disciplines are more likely to think of these things one way than another)
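A minimal sketch of the type distinction being argued for here (names and numbers are invented, not anyone's actual formalism): utilities attach to complete states, while an event, as a set of states, only gets an expected value relative to a probability distribution:

```python
from typing import FrozenSet

State = str                  # a maximal description of how the world is
Event = FrozenSet[State]     # a proposition = the set of states where it holds

utility = {"sunny_rich": 10.0, "sunny_poor": 4.0, "rainy_rich": 6.0, "rainy_poor": 0.0}
prob    = {"sunny_rich": 0.2, "sunny_poor": 0.3, "rainy_rich": 0.1, "rainy_poor": 0.4}

def expected_value(event: Event) -> float:
    """Conditional expectation of utility given the event."""
    p_event = sum(prob[s] for s in event)
    return sum(prob[s] * utility[s] for s in event) / p_event

sunny: Event = frozenset({"sunny_rich", "sunny_poor"})
print(expected_value(sunny))  # 6.4 here, but it changes whenever `prob` changes,
                              # unlike utility["sunny_rich"], which is fixed
```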

Me: "in *principle* you ought to consider the entire state of the future universe when you set a terminal value."

Douglas: 'Yes, and in practice we don't. But as I look further into the future to see the consequences of my terminal value(s), that's when the trouble begins.'


Me: Doctor, it hurts when I do this.

Doctor: Then don't do that.

""Somewhat rational" does not mean "irrational". There are three different ways in which something can be said to be rational. (1) That reason can be applied to it. Duh, reason can be applied to *everything*. (2) That it's prosecuted by means of reason. Ethical thought sometimes proceeds by means of reason, and sometimes not. Hence, "somewhat rational". (3) That applying reason to it doesn't show up inconsistencies. Perhaps some people have (near enough) perfectly consistent ethical positions. Certainly most people don't. It's not unheard of for philosophers to advocate embracing that inconsistency. But generally there's some degree of consistency, and sufficiently gross inconsistencies can prompt revision. Hence, again, "somewhat rational"."

The second is the only sense in which "somewhat rational" makes sense, but it was not the context of the argument, which was, after all, about moral systems, and not moral thoughts - as for the third, inconsistent consistency, I think you will agree, is not consistency at all.

Since we're having a conversation, I might hazard a suggestion that it is what you are saying that is giving me the impression of what it is you think. And I stated my reasons in each case why I thought you were thinking as you were - if you wish to address me, address the reasons I gave, so I might know in what way I am failing to understand what it is you are attempting to communicate.

Adirian, I've been trying to address the reasons you've given, in so far as you've given them. But for the most part what you've said about my opinions seems to consist of total non sequiturs, which doesn't give me much to work on in ways more productive than saying "whatever you're doing, you're getting this all wrong".

If you don't think it's reasonable to call a system of ethics "somewhat rational" when some of its bits are the way they are because of chains of reasoning and others aren't, and when the person or society whose system of ethics it is sometimes treats inconsistencies as meaning that revision is needed and sometimes not, then clearly we have a terminological disagreement. Fair enough.

Since there are insanely many slightly different outcomes, a terminal value is also too big to be considered in full. So it's useless to pose the question of distinguishing between terminal values and instrumental values, since you can't reason about specific terminal values anyway. All the things you can reason about are instrumental values.
