
January 16, 2009


I find this to be a very well-written, very informative post. I'd like to ask for help with the implementation of the ideas it presents. As an example (not meant to have any political implications):

Yesterday, Eric Holder, the nominee for Attorney General, said that water-boarding is torture, and that the United States would not engage in torture, which he said is illegal. It is his responsibility to enforce the laws. He is appointed by the President, who has the responsibility (among others) of ensuring public safety. Water-boarding is said to be extremely effective, with CIA volunteers resisting an average of only 14 seconds, and it is reported that valuable terrorist information has been obtained by using the method. Its use has given the USA a bad image, and authoritative sources have said that it is inhumane. It can be an example of the suffering of a few preventing the suffering of many, widely discussed in an earlier post.

Given the quote from this post: "Others care more about my basic goals than about how exactly I achieve them", how can I use the near-far system to come to a proper conclusion about such an issue (or any other issue where the near and far aspects seem to be in conflict)?

Why is this a feature and not a bug? In some situations rationality is weak, and thoughts diverge from reality more easily. Some emotions work by promoting or demoting certain thoughts (perceptions, expectations, plans), and you can move towards (or away from) those thoughts either by developing a situation where you have corresponding experiences and realistic intentions, or by thinking pie-in-the-sky, where a limited connection with reality can't stop you. For example, we are more afraid of discomfort than we actually suffer from it, we expect to be happier after a good event than we actually become, and we expect to grieve more than we actually do. In each of these cases, weak far thoughts are affected by a given emotion more than strong near thoughts.

If image is sufficient, so that it's enough for you to sometimes talk about pie-in-the-sky without actually succeeding, social emotions achieve their objective by hijacking only weakly rational thoughts, and there is no need to make them stronger. It looks like the incentive for evolution to specifically create such schizophrenic emotions only comes with the ability of organisms to communicate declarative thoughts. Maybe there is design in this disconnect of social emotions, but maybe it's just a bug, as with other emotions.

This view also suggests that once an organism obtains the ability to communicate declarative thoughts (or to methodically process them, e.g. by writing them down or remembering them better, to make realistic plans based on them later), the balance of morality shifts. The effect of all emotions on behavior becomes stronger, to different degrees for different emotions.

retired, reading your newspaper and thinking about whether torture is ever acceptable is far thinking, while considering torturing a suspect in front of you to get info that would make your day is near thinking.

Vladimir, yes language could have made our thoughts more visible, increasing image pressures on mind design.

Robin, I guess that's true, but I wasn't talking about that.

Robin: "retired, reading your newspaper and thinking about whether torture is ever acceptable is far thinking, while considering torturing a suspect in front of you to get info that would make your day is near thinking."

My question is whether understanding that mechanism has any practical value in the resolution of a conflict that may exist between the conclusions reached by near and far thinking. Or is it only descriptive? Can I learn to use it productively, or is underlying conflict inevitable in some situations?

I warm to Vlad's position: this is a side effect of evolution, and insofar as it hampers us today, it's a bug. But it seems that in the past it would have been a feature; then, the near would have been much more important to us.

It's interesting to examine the wetware here - check out a brain pic. My impression is we shouldn't be surprised that these two systems are disjointed and uncoordinated in themselves.

It seems like doing laundry: even stacked on top of each other, my washer ain't my dryer, and I tend to use them in a definite order (Nobody dries their clothes before washing them, altho' I can easily move clothes from washer to dryer) or completely independently - but the units are quite separate.

Now Robin's response to Vlad about language strikes me with a stick. It appears that it would have been quite beneficial for humans to have improved the linkage between these two areas, but instead we learned to talk, thus directing more power into the social communication & near system.

It also seems to explain why we have such difficulty speaking precisely (requires more "sparse" abstraction and more heavily using the "far," which we ain't good at).

In the end it does seem like a brain region co-ordination problem. I would love to see the experiments in the Science article performed on people in MRIs. Then we could see where the two systems are in use and how strongly they link/interact/activate in different tasks.

Robin's arguing that since we need hypocrisy, and since for hypocrisy to be successful we have to believe the lies we're telling, popping the hypocrisy onto this disjointed system is functionally convenient. Ok, I'm buying that.

Most people reading this are going to be frustrated by this problem and want to consider engineering solutions, or maybe developing some awesome Zen meditation that allows us to practice forcibly linking these regions.

However, then we might be less socially successful, since to live together nicely we unfortunately appear to need to lie well to both ourselves and other people. I think we're stuck, no?

This is very interesting and provocative.

If I understand correctly, this theory would suggest that individuals in groups (such as this one) that treat hypocrisy and irrationality as fiercely antisocial vices will make better decisions, but present a worse image to others.

Evolutionary psychology is tricky stuff; just-so stories are both convincing and easy to create. More experiments will be valuable.

The broad idea seems promising, but the applications are unpersuasive. "[W]e value particular foreign-born associates, but oppose foreign immigration." We probably think our valued associates are not typical of immigrants. If we had detailed information about every immigrant, we might well judge the great majority of them to be undesirable as associates. "[W]e say we want to lose weight, but actually don't exercise more or eat less." This is probably just mental laziness or weakness of will. The more detailed knowledge I have of my present physical state, the more I judge that I ought to stop eating and go work out. I don't do it simply for lack of virtue. "[W]e say we care about distant future folk, but don't save money for them." Again, why think that a more detailed knowledge of alternative possible futures would change our abstract judgment that we ought to show concern for the interests of future people? We're just taking the easy, selfish course; we know abstractly *and would know in detail* (if we bothered to gather the detailed information) that this is wrong.

Maybe your theory should be that the one mental module generates decisions that are in one's short-term self-interest, the other decisions in accord with utilitarian moral philosophy, serving the interests of everyone (including one's own *future* self). Might *this* be the real inherent contradiction?

An example of the first tradeoff:

Rationally evaluate when I should attack my enemies. Convince my potential enemies that if they attack my family I will seek revenge regardless of the cost to me.

In some minds the first criterion dominates and the person is a "coward"; in others the second dominates and the person is always starting fights.

Perhaps this tradeoff explains why humans have such difficulty ignoring sunk costs. I wonder if the economics students who have the most difficulty understanding why businesses should ignore sunk costs are the most likely to be violent?

Thomas Schelling in "Strategy of Conflict" claims that deterrence necessitates convincing potential opponents that you will retaliate regardless of whether it is rational to. As he points out, once the enemy has attacked, retaliating is less rational than rethinking how to proceed from that point; therefore, if you are going to deter an attack, you must convince any potential attackers that you are crazy enough to retaliate no matter the consequences, or you must have in place preparations that will retaliate automatically. I haven't reread this section yet, and I only read it 15 years ago, so I might be misremembering details, but I disliked his conclusions enough that I paid close attention at the time.

Also you might see Jane Jacobs's "Systems of Survival" where she contends that there are two distinct ethical systems: exchange (appropriate for business and economics) and guardian (appropriate for military and defense). She makes a good argument that these two, antithetical systems are both necessary for a successful society, but that many social problems are caused by using one where the other is more appropriate, or worse by creating mixtures that cannot work. I mention this because "sunk costs" and "retaliation" cannot be effectively compared to each other. I do wonder whether the excessive honoring of sunk costs may be the result of inappropriately applied guardian morality. (If I can find my copy I'll see if Jacobs addressed that point and I just don't remember.)

James, yes that is an example of the first tradeoff.

bill, there are many proposals for how the brain divides into two systems; the far/near divide proposal seems to me to be based on much more diverse and compelling evidence than most other such proposals.

Philo, it seems to me you are just illustrating how our brains are practiced at making up excuses for the contradictions we consistently generate. What else is "weakness of will" but just a name for a puzzling inconsistency?

Johnicholas, yes if a group can see hypocrisy and shame it that should reduce hypocrisy and lower that group's image in the view of outsiders. In that sense we are indeed stuck, as frelkins says.

Jacobs's system is not a "brain" divide, but a cultural difference; and I mentioned it mainly as a counter to James Miller's comparisons of sunk costs and retaliation (revenge). In fact, the lists of different values she presents for exchange and guardian moralities both have near and far aspects/consequences.

Certainly an interesting and promising theory! Two questions present themselves: how to test it; and can we think of any apparent counterexamples? A counterexample would be something that would activate the "near" module but where we are more socially deceptive than truthful, or vice versa.

Well, what about love? Most people would say that their loved ones are "near and dear", that they feel tremendous closeness to them. And yet as we have discussed, this is an area where it seems that we are often more manipulative than truthful, and act more in accordance with social norms than our own self-interest. We talked about romantic love recently, but as another example, happiness studies show that child care is actually perceived as onerous and unpleasant, while most people will claim that it is the happiest and most joyful part of their lives.

If right, this seems to me to elegantly explain running away from problems by declaring them impossible: it's staying in FAR mode, whether out of a desire not to look stupid or because abstract thinking seems more appropriate to the problem (or both), when you would need to think NEAR to make progress.

I wonder whether thinking about it like this can help me in those times when I know I really want to think about a problem in detail, but my mind just keeps rehashing the intuitions I've come up with in the past...

Certainly it seems to explain why "not running away from the problem" seems like a particular specific thing you can do differently.

This reminds me more than a little bit of the work on picoeconomics that somebody linked to previously on this blog: [Breakdown of Will (pdf)](http://www.picoeconomics.com/aBreakdown_Will.pdf)

While I agree with some of what Robin is getting at here, I am not so sure that all of Robin's examples match up with a "near/decision vs far/image" tradeoff; some seem to match a willpower tradeoff instead. For example, I don't think most people say they want to lose weight because it's good to be perceived as wanting to be thinner, but because they actually want the benefits attendant on being thinner (whether that be health, attractiveness, or whatever).

Hal, the theory isn't of an exact correspondence, but just that farness is a good heuristic for when image is more important. People claim parenting is joyful in far mode, thinking about the future, but not so much in near mode, about this moment when the kid is in front of you. In far mode we say we would go to the ends of the Earth for our love, but in near mode we don't.

Chad, will power makes no sense without a conflict between different internal systems.

Rosa, the question is why we don't notice in far mode that there are costs of losing weight. Sure we might not notice all the details, but why do we get it so wrong?

I was reading this and thinking "Click" - I'm proud to say that I got the punchline before reading it, though not before starting the post, alas. But I can also think of a couple of points that seem anomalous in this light, e.g:

1) (Good / professional / publishable) authors have to force highly detailed visualizations in order to write.

2) The outside view is less optimistic than the inside view and much more accurate.

Should one essay a more detailed model to account for such relatively anomalous points? Though they may not be quite anomalous, for example, you could suggest: "Authors, though biased, are less biased than people having fully abstract discussions in bars" and this could be tested.

This seems like an important schema, but not everything seems to quite fit it; and I'll have to let that bake and see if I notice a pattern to the exceptions.

Robin: So which of near or far thinking is our "true" thinking? Perhaps neither; perhaps we really contain an essential contradiction, which we don't want to admit, much less resolve.

- the grim truth of the matter is that we probably contain lots of contradictions, especially with regard to our preferences.

We in this rationalist community seem to fall into the trap of thinking that our minds implement some abstract set of preferences, though imperfectly. A more accurate model might be to think of the mind as an input/output machine with the property that in some contexts some of its behaviors can be approximated as "implementing preferences". Globally, though, there is absolutely no reason why our behaviors and opinions should conform to anything consistent. An optimist would call that "part of being human". A pessimist/realist would call it "cognitive bias".

The human mind

Eliezer, those are indeed good items to ponder.

retired, I believe I may be slipping off topic, but it is my understanding that the issue of torture is not as simple a cost/benefit trade-off as your post implies. Specifically, torture does not tend to produce accurate information, but rather tends to cause the subject to say whatever he/she thinks will end the pain. This, of course, is usually whatever the interrogator *already wants* to hear. This, along with its moral problems, explains why torture has been used primarily by self-interested regimes, concerned more for their image than for their decisions.

I seem to have drifted back within striking distance of the topic towards the end there. Odd, that.

Of course, I do not have double-blind tests confirming these claims, nor do I ever expect them to be done, and I suppose it is entirely possible that these claims were made by commentators trying to make a difficult political issue more one-sided.


Preference between concrete and abstract thinking is probably a personal, not a universal, trait. Most people specialize in concrete thinking, some people specialize in abstract thinking, and maybe 5% can easily jump between different abstraction levels.

In software engineering, concrete-thinking people like bottom-up design, while abstract-thinking people like top-down design. An abstraction-level jumper (like the architect) is needed for balance. Otherwise you either suffer from beautiful but bad abstractions or spend too much time arguing about nitty-gritty details.
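This contrast could be sketched in code; the following toy Python example is my own illustration (all names are hypothetical, not anything from the comment). A top-down designer fixes an abstract interface before any details exist; a bottom-up designer writes small concrete pieces first and only later wraps them to fit an abstraction.

```python
from abc import ABC, abstractmethod

# Top-down: fix the abstraction first; concrete details come later.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

# Bottom-up: start from small working details...
def save_to_dict(store: dict, key: str, value: str) -> None:
    store[key] = value

def load_from_dict(store: dict, key: str) -> str:
    return store[key]

# ...and only later wrap them to satisfy the abstraction.
# The "architect" is whoever keeps the two efforts meeting in the middle.
class DictStorage(Storage):
    def __init__(self) -> None:
        self._store: dict = {}

    def save(self, key: str, value: str) -> None:
        save_to_dict(self._store, key, value)

    def load(self, key: str) -> str:
        return load_from_dict(self._store, key)
```

Pure top-down risks the "beautiful but bad abstractions" failure (an interface nothing can implement well); pure bottom-up risks endless nitty-gritty details with no shared shape.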


I believe it's only a 'bug' from a macro (entire race) perspective. From the perspective of the individual it is very rational to separate image from reality. For the entire race this is of course a disadvantage. It seems like a classic prisoner's dilemma to me. So in that sense we aren't 'stuck', as prisoner's dilemmas can be overcome (though maybe only by re-designing our minds?) provided the cost of coordination isn't too high. It's difficult to imagine this happening with natural selection as it currently exists, since we depend so heavily on deception in order to attract mates.

Actually I think this model fits in with the sex games people play pretty well.

Perhaps the best solution would allow the 'image' part of our minds to fade away when dealing with people who we also expect to cooperate with us by revealing their 'true' natures. Or do we already do this? We're often much more polite to strangers than close friends.

FWIW, I think this is one of the best posts I've read on OB.

@Mike Blume, et al:

In the first comment on this post, I asked: how can I use the near-far system to come to a proper conclusion about such an issue (or any other issue where the near and far aspects seem to be in conflict)? Hanson's response did not address the question at all, and I asked again: My question is whether understanding (the near-far) mechanism has any practical value in the resolution of a conflict that may exist between the conclusions reached by near and far thinking. Or is it only descriptive? Can I learn to use it productively, or is underlying conflict inevitable in some situations?

So, again, is near-far only descriptive, or is it a mechanism that can be consciously controlled to improve decisions? If cognition is dichotomous, can one choose to be in only one branch, or is it hard-wired? If one believes he is using only the near or only the far to address a topic, is it self-deception? Is conflict between near and far thinking a source of "existential angst", and is it inevitable?

Interestingly, abstract thinking may prevent concrete thinking, and vice versa. How an issue was framed when we first approached it may influence our thinking about it in the future.

For example, in Finland there seems to be a conflict between regular people, who deem modern architecture ugly, and architects, who claim that people's taste is just uneducated.

It seems that regular people view buildings more abstractly than architects do, and trained architects are no longer able to view buildings in this manner. They always resort to talking about individual characteristics of the building, never about the building as a whole.

retired: It's a hard-wired product of evolution that rationalists must make conscious allowances/adjustments for. And while it may get bandied about amongst jargoneers under the moniker "construal level theory," the idea is also out there in the popular culture. See: "Stumbling on Happiness."

"It can make sense to have specialized mental systems for these different approaches, i.e., systems best at reasoning from detailed representations, versus systems best at reasoning from sparse abstractions. "

I would question this hypothesis. I find it perfectly reasonable to expect that the same basic mental architecture grows quite easily from abstract to concrete performance as information improves, along a continuous spectrum. Perhaps an unconvincing analogy, but OOP has the same basic architecture regardless of where in the super/sub class hierarchy you are operating.
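The OOP analogy might be made concrete with a small sketch (my own hypothetical example): the same dispatch machinery serves every level of the hierarchy, and moving from superclass to subclass adds detail rather than swapping in a different mechanism.

```python
# Abstract level: sparse description, little stored detail.
class Shape:
    def describe(self) -> str:
        return "a shape"

# More concrete: one extra piece of information.
class Polygon(Shape):
    def __init__(self, sides: int) -> None:
        self.sides = sides

    def describe(self) -> str:
        return f"a polygon with {self.sides} sides"

# Most concrete: full detail, but the same architecture throughout.
class Square(Polygon):
    def __init__(self, side_len: float) -> None:
        super().__init__(4)
        self.side_len = side_len

    def describe(self) -> str:
        return f"a square of side {self.side_len}"

# The caller's machinery is identical at every abstraction level.
def report(s: Shape) -> str:
    return s.describe()
```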

My general impression is that concrete thinking is akin to a system with more extensive 'training' on more 'data'. Thus, any notion of pulling out abstract thought and plugging in concrete thought seems nonsensical aside from the process of training or learning to get from one to the other. It is also nonsensical to approach a new subject in the concrete-thinking mode. One starts at abstract, and moves to concrete.

Likewise, I think one could easily concrete-think on distant-future topics, but you are not guaranteed to have 'trained' your mind to operate on 'data' (which can be your own prior conclusions) that is provably connected to reality. In order to concrete-think about the future all you have to do is practice. However, it is quite easy to build a castle on sand and be unaware of it: to have a detailed, but erroneous, account of any subject. I hope there is no irony there. :p

Robin: Chad, will power makes no sense without a conflict between different internal systems.

My point was not that there isn't a conflict between two internal systems in willpower tradeoffs. I was trying to point out that calling all such conflicts "better decision vs better image" seemed like an overgeneralization.

And in any case, if you haven't read "A Breakdown of Will" before, I highly recommend it (http://www.picoeconomics.com/aBreakdown_Will.pdf) -- the most thought-provoking writing I've come across recently (OB aside, of course).

This is fascinating! I like how this theory fits with other, more specific examples of bias, such as _Stumbling On Happiness_, and voting as signalling tribal identification rather than being accurate. And the research on deliberation where people's views become more extreme after talking to like-minded people. If deliberation were an exchange of details using the "near" system, it should make people's views more informed, but if it is an exercise in proving one's values by using the "far" system, it makes sense that it drives views to the extreme.

Retired U, I'd suggest that it is a waste of time to worry about what your policy should be towards torture. You probably aren't wondering if you should start (or stop!) torturing people. You probably aren't even in a position to materially influence whether anyone else is torturing people. The belief that you should spend time on this issue is exactly the sort of self-serving bias that this blog is intended to eliminate. IMO.

"the sort of self-serving bias"

I think you must mean self-deluding; worrying about something you can't even affect doesn't fit any use of "self-serving" I've ever come across.

@Hal Finney:

I didn't say I was concerned about the torture issue; I said it was an example of an issue where near and far thinking might give conflicting results. I asked how one should deal with issues that have such conflict. Your ad hominem comment does not advance the so-called "intentions" of this blog. IMO.

Chad, thank you for posting the link to A Breakdown of Will. Thank you for posting it twice, because I only bothered to follow the URL after you reiterated. Interesting stuff.

Retired U: "So, again, is near-far only descriptive, or is it a mechanism that can be consciously controlled to improve decisions?"

Most people have near-far conflicts they have trouble resolving consciously. I have known for a long time that I should exercise more. But, near-far is a description of our thinking that might lead to useful progress, where we learn better tricks for resolving these conflicts, and also identify more cases where either near or far thinking tends to be in error.

"To be consistent, estimates made by sparse approaches should equal the average of estimates made when both sparse and detail approaches contribute."

Isn't this goal recursive? Why not just say "estimates made by sparse approaches should attempt to approximate estimates made by detailed approaches"?
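Read literally, the quoted consistency condition resembles the law of total expectation: an estimate made from sparse information should equal the average of the estimates one would make once the hidden details are filled in. A minimal Python simulation of that reading (my own illustration, not from the post; the setup is hypothetical):

```python
import random

random.seed(0)

# A quantity X depends on a coarse feature c (known in "far" mode)
# and a fine detail d (known only in "near" mode): X = c + d.
samples = [(c, random.gauss(0.0, 1.0))
           for c in (1.0, 2.0, 3.0) for _ in range(20000)]

# Detailed ("near") estimate: knows both c and d, so estimates X exactly.
detailed = [c + d for c, d in samples]

# Sparse ("far") estimate: knows only c, so its best estimate of X is
# E[X | c] = c, since the unseen detail d averages to zero.
sparse = [c for c, _ in samples]

# Consistency: averaged over the details it cannot see, the sparse
# estimate should agree with the mean of the detailed estimates.
avg_detailed = sum(detailed) / len(detailed)
avg_sparse = sum(sparse) / len(sparse)
print(round(avg_detailed, 2), round(avg_sparse, 2))
```

On this reading the condition is not recursive: it just says the far estimate should be an unbiased average over the near estimates it abstracts away, not that it must reproduce any particular one of them.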

What of the intermediary decision zone between near and far? E.g., we often make quite detailed plans for vacations, and decisions about them, while having only fuzzy, limited details and vague notions of what we think we would like. Compare to chess or Go strategy with their opening, middle game, and end game, or even to Delany's concept of thinking borrowed from earth science: simplex, complex, multiplex.

A lovely post!
My own conclusion is that "getting things right" is much more important than "being seen as nice". Decisions are more important than image. And then I think both systems (near and far) can be used. A map of the underground is a wonderful example of "sparse thinking", and it is at the same time wonderfully useful in the near decision of which platform to stand on.
Thus I would agree with Hal that the issue of torture (in this context) is image, and it should be rejected as a near solution to the question Retired posed on how to use the model in practice. When Hal ends his post with "IMO" this should be regarded as a social (far) signal of near/decisional modesty. When Retired ends his post with "IMO" this should be regarded as a social signal of social immodesty.
In other words my answer to your question, Retired, is to throw away the harvesting of social benefits.

Thanos, I expect we have a continuum of systems for varying levels of detail. I've tried to write everything to be consistent with that.

"I expect we have a continuum of systems for varying levels of detail."

Is this supportable? We expect memory and expertise to be encoded in a connectionist fashion. How unfortunate it would be to have to continuously transfer these memories to new systems (read: new areas of the brain) as more detail is made available or our interest in the subject increases. A property of all learning is starting with little detail and going to more detail. Our minds will be optimized for that pattern. In your description this learning process cuts across systems every time.

My opinion here is that near/far (really just concrete/abstract) is really a matter of the amount of resources devoted, which in turn is a function of the details available to consume and whether the problem merits the effort. However, it's all applied on the same basic machinery. If abstract thinking is as far as you get on a subject, it is either because:
a) You are unwilling to process the details available.
b) There are no details available.
c) You are unwilling to make up details.
d) The subject is too complex for you (your concrete conclusions repeatedly fail verification)

The bias towards image-making, rather than accurate projection of current behavior, in far-off descriptions of self is real, but seems to require a different explanation. We admit that syncing these up is a hard problem, and that may be precisely why the bias exists. Eating pizza today does not directly falsify the goal to be skinny.

This sounds like an example: http://www.economist.com/science/displaystory.cfm?story_id=12971028
