
October 10, 2008

Comments

This is an unusually high-quality post, even for you, Eliezer; congrats!

It seems that it takes an Eliezer-level rationalist to make an explicit account of what any ten-year-old can do intuitively. For those not quite Eliezer-level or not willing to put in the effort, this is really frustrating in the context of an argument or debate.

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them; but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

It is even possible that some overoptimistic transhumanists/singularitarians are better off, by their own standards, remaining deluded about the potential dangers of technology. You have the luxury of being intelligent enough to be able to utilize your correct belief about how precarious our continued existence is becoming. For many people, such a belief is of no practical benefit yet is psychologically detrimental.

This creates a "tragedy of the commons" type problem in global catastrophic risks: each individual is better off living in a fool's paradise, but we'd all be much better off if everyone faced up to the dangers of future technology.

Many in this world retain beliefs whose flaws a ten-year-old could point out

Very true. Case in point: the belief that "minimum description length" or "Solomonoff induction" can actually predict anything. Choose a language that can describe MWI more easily than Copenhagen, and they say you should believe MWI; choose a language that can describe Copenhagen more easily than MWI, and they say you should believe Copenhagen. I certainly could have told you that when I was ten...
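
To make the complaint concrete, a toy sketch in Python (made-up bit-string codes standing in for the two languages; "description length" here is just the length of the code string assigned to each hypothesis):

# Toy illustration only: which hypothesis gets the shorter description
# depends entirely on which encoding you chose.

language_A = {"MWI": "0", "Copenhagen": "110010"}  # MWI is a primitive here
language_B = {"MWI": "110010", "Copenhagen": "0"}  # Copenhagen is a primitive here

def preferred(language):
    """Return the hypothesis with the shorter description in this language."""
    return min(language, key=lambda h: len(language[h]))

print(preferred(language_A))  # -> MWI
print(preferred(language_B))  # -> Copenhagen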

Bo, the point is that what's most difficult in these cases isn't the thing that the 10-year-old can do intuitively (namely, evaluating whether a belief is credible, in the absence of strong prejudices about it) but something quite different: noticing the warning signs of those strong prejudices and then getting rid of them or getting past them. 10-year-olds aren't specially good at that. Most 10-year-olds who believe silly things turn into 11-year-olds who believe the same silly things.

Eliezer talks about allocating "some uninterrupted hours", but for me a proper Crisis of Faith takes longer than that, by orders of magnitude. If I've got some idea deeply embedded in my psyche but am now seriously doubting it (or at least considering the possibility of seriously doubting it), then either it's right after all (in which case I shouldn't change my mind in a hurry) or I've demonstrated my ability to be very badly wrong about it despite thinking about it a lot. In either case, I need to be very thorough about rethinking it, both because that way I may be less likely to get it wrong and because that way I'm less likely to spend the rest of my life worrying that I missed something important.

Yes, of course, a perfect reasoner would be able to sit down and go through all the key points quickly and methodically, and wouldn't take months to do it. (Unless there were a big pile of empirical evidence that needed gathering.) But if you find yourself needing a Crisis of Faith, then ipso facto you *aren't* a perfect reasoner on the topic in question.

Wherefore, I at least don't have the *time* to stage a Crisis of Faith about every deeply held belief that shows signs of meriting one.

I think there would be value in some OB posts about resource allocation: deciding which biases to attack first, how much effort to put into updating which beliefs, how to prioritize evidence-gathering versus theorizing, and so on and so forth. (We can't Make An Extraordinary Effort every single time.) It's a very important aspect of practical rationality.

Some interesting, useful stuff in this post. Minus the status-cocaine of declaring that you're smarter than Robert Aumann about his performed religious beliefs and the mechanics of his internal mental state. In that area, I think Michael Vassar's model for how nerds interpret the behavior of others is your God. There are probably some 10-year-olds who can see through it (look, everybody, the emperor has no conception that people can believe one thing and perform another). Unless this is a performance on your part too, and there's shimshammery all the way down!

"How do I know if long-held belief X is false?"

Eliezer, I guess if you are already asking this question, you are well on your way. The real problem arises when you didn't even manage to pinpoint the possibly false belief. And yes, I was a religious person for many years before realizing that I was on the wrong path.

Why didn't I question my faith? Well, it was so obviously true to me. The thing is: did you ever question heliocentrism? No? Why not? When you ask the question "How do I know if heliocentrism is false?", you are already on your way. The thing is, your brain needs a certain amount of evidence to pinpoint the question.

How did I overcome my religion? I noticed that something was wrong with my worldview, like seeing a déjà vu in the Matrix every now and then. This was on an intellectual level, not as a visible thing, but much more subtle and less obvious, so you really have to be attentive to notice it, to notice that there is a problem in the pattern. Things aren't the way they should be.

But over time I became more and more aware that the pieces weren't fitting together. But to get from there to the conclusion that my basic assumptions were wrong was really not easy. If you live in the Matrix and see strange things happening, how will you arrive at the conclusion that this is because you are in a simulation?

Your posts on rationality were a big help, though. They always say: "Jesus will make you free." Unfortunately, that didn't work out for me. Well, I am finally free after a decade of false belief, and in all the time I was a believer I was never as happy as I am now.

Good post but this whole crisis of faith business sounds unpleasant. One would need Something to Protect to be motivated to deliberately venture into this masochistic experience.

All these posts present techniques for applying a simple principle: check every step on the way to your belief. They adapt this principle to be more practically useful, allowing a person to start on the way lacking necessary technical knowledge, to know which errors to avoid, which errors come with being human, where not to be blind, which steps to double-check, what constitutes a step and what a map of a step, and so on. All the techniques should work in background mode, gradually improving the foundations, propagating the consequences of the changes to more and more dearly held beliefs, shifting the focus of inquiry.

A crisis of faith finds a target to attack; it boosts the priority of checking the foundations of a specific belief. I'm not sure how useful forcing this process could be; major shifts in defining beliefs take time, and probably deservedly so. The effects of a wrong belief should be undone by the holes in the network supporting those beliefs, not by an executive decision declaring the belief wrong. Even though the executive decision is based on the same grounds, it's hard to move more than one step of inferential distance without shooting yourself in the foot, before you train yourself to intuitively perceive the holes, or rather the repaired fabric. So I guess the point of the exercise is in making the later gradual review more likely to seriously consider the evidence, to break the rust, not in changing the outlook overnight. Changing the outlook is the natural conclusion of a long road; it doesn't take you by surprise. One day you just notice that the old outlook is dead, and so you leave it in the past.

Fact check: MDL is not Bayesian. Done properly, it doesn't even necessarily obey the likelihood principle. Key term: normalized maximum likelihood distribution.

My father is an atheist with Jewish parents, and my mother is a (non-practicing) Catholic. I was basically raised "rationalist", having grown up reading my father's issues of Skeptical Inquirer magazine. I find myself in the somewhat uncomfortable position of admitting that I acquired my belief in "Science and Reason" in pretty much the same way that most other people acquire their religious beliefs.

I'm pretty sure that, like everyone else, I've got some really stupid beliefs that I hold too strongly. I just don't know which ones they are!

Great post. I think that this sort of post on rationality is extremely valuable. While one can improve everyday judgment and decision making by learning about rationality from philosophy, econ and statistics, I think that these informal posts can also make a significant difference to people.

The recent posts on AI theorists and EY's biography were among my least favorite on OB. If you have a choice, please spend more time on either technical sequences (e.g. stuff on concepts/concept space, evolutionary bio, notion of bias in statistics) or stuff on rationality like this.

A good reminder. I've recently been studying anarcho-capitalism. It's easy to get excited about a new, different perspective that has some internal consistency and offers alternatives to obvious existing problems. Best to keep these warnings in mind when evaluating new systems, particularly when they have an ideological origin.

"Try to think the thought that hurts the most."

This is exactly why I like to entertain religious thoughts. My background, training, and inclination are to be a thoroughgoing atheist materialist, so I find that trying to make sense of religious ideas is good mental exercise. Feel the burn!

In that vein, here is an audio recording of Robert Aumann speaking on "The Personality of God".

Also, the more seriously religious had roughly the same idea, or maybe it's the opposite idea. The counterfactuality of religious ideas is part of their strength, apparently.

Here's a doubt for you: I'm a nerd, I like nerds, I've worked on technology, and I've loved techie projects since I was a kid. Grew up on SF, all of that.

My problem lately is that I can't take Friendly AI arguments seriously. I do think AI is possible, that we will invent it. I do think that at some point in the next hundreds of years, it will be game over for the human race. We will be replaced and/or transformed.

I kind of like the human race! And I'm forced to conclude that a human race without that tiny fraction of nerds could last a good long time yet (tens of thousands of years) and would change only slowly, through biological evolution. They would not do much technology, since it takes nerds (in the broadest sense) to do this. But, they would still have fulfilling, human, lives.

On the other hand, I don't think a human race with nerds can forever avoid inventing a self-destructive technology like AI. Much as I have been brought up to think of politicians and generals as destroyers, and scientists and other nerds as creators, I have to admit that it's the other way around, ultimately.

The non-nerds can't destroy the human race. Only we nerds can do that.

That's my particular crisis of faith. Care to take a side?

I'd be interested in a list of questions you had decided to have a crisis of faith over. If I get round to it I might try and have one over whether a system can recursively self-improve in a powerful way or not.

A lot of truths in EY's post. Though I also agree with Hopefully Anon's observations -- as is so often the case, Eliezer reminds me of Descartes -- brilliant, mathematical, uncowed by dogma, has his finger on the most important problems, is aware of how terrifyingly daunting those problems are, and thinks he has a universal method to solve them.

Thank you for this post, Eliezer. I must painfully question my belief that a positive Singularity is likely to occur in the foreseeable future.

Before I post my little comment, being a believer in faith: do you not think your identity stands because there are theists, and as such you are an atheist? If this is your permanent identity, then is it not necessary that theists must exist? Thanks.

Nazir Ahmad Bhat, you are missing the point. It's not a question of identity, like which ice cream flavor you prefer. It's about truth. I do not believe there is a teapot orbiting around Jupiter, for the various reasons explained on this site (see _Absence of evidence is evidence of absence_ and the posts on Occam's Razor). You may call this a part of my identity. But I don't need people to believe in a teapot. Actually, I want everyone to know as much as possible. Promoting false beliefs is harming people, like slashing their tires. You don't believe in a flying teapot: do you need other people to?

Nazir, must there be atheists in order for you to believe in a god? The "identity" of those who believe that the world is round does not depend on others believing that the world is flat, or vice versa. Truth does not require disagreement.

Excellent post, Eliezer. Along with your comments on MR about the financial crisis, definitely good stuff worth reading.

I would submit that, for you, the belief you are unable to question is materialistic reductionism. I would suggest reading Irreducible Mind, which will acquaint you with a great deal of evidence that reality is different from the current model of it you hold in your mind. I would suggest that you begin with chapter 3, which presents a vast body of observational and research evidence from medicine that simply doesn't fit into your current belief system. Start with the introduction, read the entire introduction (which is very good and fits with many of the more conceptual posts you have made here about avoiding pitfalls along the path of rationality), and then read chapter 3 about the empirical findings on the relationship between mind and body.

And this is why the mainstream believes in black holes, dark matter, dark energy and invisible unicorns.

Matthew C.,

You've been suggesting that for a while:

http://www.overcomingbias.com/2007/01/godless_profess.html#comment-27993437
http://www.overcomingbias.com/2008/09/psychic-powers.html#comment-130445874

Those who have read it (or the hundreds of pages available on Google Books, which I have examined) don't seem to be impressed.

Why do you think it's better than Broderick's book? If you want to promote it more effectively in the face of silence (http://www.overcomingbias.com/2007/02/what_evidence_i.html), why not pay for a respected reviewer's time and a written review (in advance, so that you're not accused of bribing to ensure a favorable view)? Perhaps from a statistician?

Do these methods actually work? There were a few posts here on how more evidence and bias awareness don't actually change minds or reduce bias, at least not without further effort. Can a practical "Deduce the Truth in 30 Days" guide be derived from these methods, and change the world?

A fifty-fifty chance of choosing your previous belief does not constitute a reasonable test. If your belief is unreasonable, why would treating it as equally plausible as the alternative be valid?

The trick is to suspend belief and negate the biasing tendencies of belief when you re-evaluate, not to treat all potentials as equal.
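
A toy numerical sketch of how much that starting point matters (made-up numbers; suppose the evidence you can actually articulate has a likelihood ratio of 4:1 against your old belief B):

# Toy numbers only: the same evidence yields very different conclusions
# depending on the prior you start the re-evaluation from.

def posterior(prior_B, likelihood_ratio_against_B):
    """P(B | evidence), for evidence with the given likelihood ratio against B."""
    odds_B = prior_B / (1 - prior_B)
    post_odds = odds_B / likelihood_ratio_against_B
    return post_odds / (1 + post_odds)

print(posterior(0.99, 4))  # keep the old (possibly biased) prior -> about 0.96
print(posterior(0.50, 4))  # reset to fifty-fifty                 -> 0.20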

Eliezer:

If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it.

I think you should try applying your own advice to this belief of yours. It is usually true, but it is certainly not always true, and reeks of irrational bias.

My experience with my crisis of faith seems quite opposite to your conceptions. I was raised in a fundamentalist family, and I had to "make an extraordinary effort" to keep believing in Christianity from the time I was 4, when I started reading through the Bible and finding things that were wrong, to the time I finally "came out" as a non-Christian around the age of 20. I finally gave up being Christian only when I was worn out and tired of putting forth such an extraordinary effort.

So in some cases your advice might do more harm than good. A person who is committed to making "extraordinary efforts" concerning their beliefs is more likely to find justifications to continue to hold onto their belief, than is someone who is lazier, and just accepts overwhelming evidence instead of letting it kick them into an "extraordinary effort." In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough.

MichaelG:

On the other hand, I don't think a human race with nerds can forever avoid inventing a self-destructive technology like AI.

The idea is that if we invent Friendly AI *first*, it will become powerful enough to keep later, Unfriendly ones in check (either alone, or with several other FAIs working together with humanity). You don't need to avoid inventing one forever: it's enough to avoid inventing one as the first thing that comes up.

If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it.

It is easy to construct at least two kinds of cases where this is false:
  • You have a set of beliefs optimized for co-occurrence, and you are replacing one of these beliefs with a more-true belief. In other words, the new true belief will cause you harm because of other untrue (or less true) beliefs that you still hold.
  • If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners'-dilemma situation.

If you still aren't convinced whether you are always better-off with a true belief, ask yourself whether you have ever told someone else something that was not quite true, or withheld a truth from them, because you thought the full truth would be harmful.

I was raised in a Christian family, fairly liberal Church of England, and my slide into agnosticism started when I was about 5-7 and asked if Santa Claus and God were real. I refused to get confirmed and stopped going to church when I was 13ish, I think.

In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough.

The trouble is that you cannot break new ground this way. You can't do Einstein-like feats. You should follow the direction of the wind, but engage nitrous to follow that direction, occasionally stopping and sticking a finger out the window to make sure you are going in the right direction.

If an entire community can be persuaded to adopt a false belief, it may enable them to overcome a tragedy-of-the-commons or prisoners'-dilemma situation.

In a PD, agents hurt *each other*, not *themselves*. Obviously, false beliefs in my enemy can help me.

Study this deranged rant. Its ardent theism is expressed by its praise of the miracles God can do, if he chooses.

And yet,... There is something not quite right here. Isn't it merely cloakatively theistic? Isn't the ringing denunciation of "Crimes against silence" militant atheism at its most strident?

So here is my idea: Don't try to doubt a whole core belief. That is too hard. Probe instead for the boundary. Write a little fiction, perhaps a science fiction of first contact, in which you encounter a curious character from a different culture. Write him a borderline belief, troublingly odd to both sides in a dispute about which your own mind is made up. He sits on one of our culture's fences. What is his view like from up there?

Is he "really" on your side, or "really" on the other side. Now there is doubt you can actually be curious about. You have a thread to pull on; what unravels if you tug?

If a belief is true you will be better off believing it, and if it is false you will be better off rejecting it.

I think evolution facilitated self-delusion precisely because that is not the case.

I was a Fred Phelps style ultra-Calvinist and my transition involved scarcely any effort.

Also, anti-reductionist, that's the first comment you've made I felt was worth reading. You may take it as an insult but I felt compelled to give you kudos.

I suspect that there are many people in this world who are, by their own standards, better off remaining deluded. I am not one of them; but I think you should qualify statements like "if a belief is false, you are better off knowing that it is false".

Of course I deliberately did not qualify it. Frankly, if you're still qualifying the statement, you're not the intended audience for a post about how to make a convulsive effort to be rational using two dozen different principles.

Eliezer, what do you mean here? Do you mean:

(A1) Individuals in the reference class really are always better off with the truth, with sufficient probability that the alternative does not bear investigating;

(A2) Humans are so unreliable as judges of what we would and would not benefit from being deceived about that the heuristic "we're always better off with the truth" is more accurate than the available alternatives;

(B) Individuals must adopt the Noble Might-be-truth "I'm always better off with the truth" to have a chance at the Crisis of Faith technique?

Eliezer: The position that people may be better off deluded in some situations is VERY compelling. If your audience is people who are literally NEVER better off deluded then I sincerely doubt that it includes you or anyone else. Obviously not every belief need receive all appropriate qualifications every single time, but when someone else points out a plausible qualification you should, as a rationalist, acknowledge it.

I'm very open to Anna's (A1), especially given the special difficulties of this sort of investigation, but only with respect to themselves. I would expect someone as smart as me who knew me well enough to some day come upon a situation where I should, by my values, be deceived, at least for some period.

@Anna:

I mean that you've given up trying to be clever.

@Vassar:

The position that people may be better off deluded in some situations is VERY compelling.

The position that people may be optimally deluded, without a third alternative, is much less compelling.

The position that realistic human students of rationality can be trying to do their best (let alone do the impossible), while trying to deliberately self-delude, strikes me as outright false. It would be like trying to win a hot-dog eating contest while keeping a golf ball in your mouth.

It is this outright falsity that I refer to when I say that by the time you attempt to employ techniques at this level, you should already have given up on trying to be clever.

As someone once said to Brennan:

She reared back in mock-dismay. "Why, Brennan, surely you don't expect me to just tell you!"

Brennan gritted his teeth. "Why not?"

"What you're feeling now, Brennan, is called curiosity. It's an important emotion. You need to learn to live with it and draw upon its power. If I just give you the information, why, you won't be curious any more." Her eyes turned serious. "Not that you should prefer ignorance. There is no curiosity that does not want an answer. But, Brennan, tradition doesn't say I have to hand you knowledge on a silver platter."

It's easy to visualize Jeffreyssai deciding to not say something - in fact, he does that every time he poses a homework problem without telling the students the answer immediately. Can you visualize him lying to his students? (There are all sorts of clever-sounding reasons why you might gain a short-term benefit from it. Don't stop thinking when you come to the first benefit.) Can you imagine Jeffreyssai deliberately deciding that he himself is better off not realizing that X is true, therefore he is not going to investigate the matter further?

Clearly, if everyone was always better off being in immediate possession of every truth, there would be no such thing as homework. But the distinction between remaining silent, and lying, and not wanting to know the truth even for yourself, suggests that there is more at work here than "People are always better off being in immediate possession of every truth."

The problem with the idea that sometimes people are better off not knowing is that it has no practical impact on how an ideal rationalist should behave, even assuming it's true. By the time you've learned something you'd be better off not knowing, it's too late to unlearn it. Humans can't really do doublethink, and especially not at the precision that would be required to be extremely rational while using it.

it has no practical impact on how an ideal rationalist should behave

With respect to themselves, not necessarily to others. Withholding information or even lying can be rational.

In other words, you are advocating a combative, Western approach; I am bringing up a more Eastern approach, which is not to be so attached to anything in the first place, but to bend if the wind blows hard enough.

The trouble is that you cannot break new ground this way. You can't do Einstein-like feats. You should follow the direction of the wind, but engage nitrous to follow that direction, occasionally stopping and sticking a finger out the window to make sure you are going in the right direction.

I think Einstein is a good example of both bending with the wind (when he came up with relativity), and of not bending with the wind (when he refused to accept quantum mechanics).

By "bending with the wind" I don't mean "bending with public opinion". I mean not being emotionally attached to your views.

In a PD, agents hurt *each other*, not *themselves*.

In a PD, everyone having accurate information about the payoff matrix leads to a worse outcome for everyone than some false payoff matrices you could misinform them with. That is the point.
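
A toy sketch of that claim (standard PD payoffs, plus the assumption that each player simply plays whatever strategy dominates under the payoff matrix they believe, while reality pays out the true matrix):

# Toy illustration only. Payoffs are (row player, column player);
# C = cooperate, D = defect.

TRUE = {  # standard prisoner's dilemma: defection dominates
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
FALSE = {  # misreported payoffs under which cooperation dominates
    ("C", "C"): (3, 3), ("C", "D"): (2, 1),
    ("D", "C"): (1, 2), ("D", "D"): (0, 0),
}

def dominant_strategy(payoffs):
    """Return the row player's dominant strategy, if one exists."""
    best_vs = {opp: max(("C", "D"), key=lambda me: payoffs[(me, opp)][0])
               for opp in ("C", "D")}
    strategies = set(best_vs.values())
    return strategies.pop() if len(strategies) == 1 else None

for believed, label in ((TRUE, "true"), (FALSE, "false")):
    s = dominant_strategy(believed)  # both players reason symmetrically
    outcome = TRUE[(s, s)]           # but reality pays out the true matrix
    print(f"believing the {label} matrix -> both play {s}, true payoffs {outcome}")

# believing the true matrix -> both play D, true payoffs (1, 1)
# believing the false matrix -> both play C, true payoffs (3, 3)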

In retrospect, I was certainly leaving christianity the day I decided that if god existed, he could not possibly object to me honestly trying to determine The Truth. Doubts stopped feeling like sins.

I think your something-to-protect must be the accuracy of your map, not anything on the map. (at least for a moment)

If someone says you need to fire a revolver at your something-to-protect, you will raise objection based on strongly held beliefs about the effects of revolvers. It's so hard to risk those beliefs because, with your current belief set, someone who lacked those beliefs would lose their something-to-protect. You can't stop believing as long as you believe the cost for disbelieving is any kind of hell.

I was about to say I was lucky to have such a god, but no: I constructed a god just nice enough to let me relieve that cognitive tension.

It's relatively easy to invent and defend very contrarian ideas when you start off joking. This could be a technique if you're confident you can later break a good idea out of the silly-idea prison.

In a PD, everyone having accurate information about the payoff matrix leads to a worse outcome for everyone than some false payoff matrices you could misinform them with. That is the point.

Do you agree that in a PD, it is not the case for any individual that that individual is harmed by that individual's knowledge? Your point goes through if we somehow think of the collective as a single "you" with beliefs and preferences, but that raises all sorts of issues and anyway isn't what Eliezer was talking about.

I think Einstein is a good example of both bending with the wind (when he came up with relativity)

I'm not sure what you mean by bending with the wind. I thought it was the evidence that provided the air pressure, but there was no evidence to support Einstein's theory above the theories of the day. He took an idea and ran with it to its logical conclusions. Then the evidence came; he was running ahead of the evidential wind.

If the wind is following occams or something internal, then it can be blowing in the wrong direction...

If the wind is following occams or something internal, then it can be blowing in the wrong direction...

Isn't that the subject of The Ritual?

One more argument against deceiving epistemic peers when it seems to be in their interest is that if you are known to have the disposition to do so, this will cause others to trust your non-deceptive statements less; and here you could recommend that they shouldn't trust you less, but then we're back into doublethink territory.

Phil Goetz, who I was replying to, was saying that type of thought should be unnecessary, if you don't hang on to your ideas tightly.

Not hanging on to ideas tightly is great for engineers and experimental scientists. It doesn't matter to a chemist if MWI or Bohm is right. He can use either, switching back and forth between the viewpoints as he sees fit.

A theoretical quantum physicist, though, has to have some way of determining which face of the knowledge mine to work at; he has to pick one or the other. If it is not a strong reason, then he might split his work and get less far with either.

For this sort of person it makes sense to pick one direction and run with it, getting invested in it etc. At least until he comes across reasons that maybe he should take the opposite direction or neither direction, then the crisis of faith might be needed.

Next to impossible to imagine that an optimal teacher always reveals info or ever self-deceives. Whether they ever lie seems to pretty clearly depend on time horizons, cognitive and inferential gaps, knowledge of the students etc. Since remaining silent communicates info, it seems that it can *be* a lie too, or a truth that one might not want to reveal.

Realistic students of human rationality? I sincerely doubt that they can aspire to be the best or close to it while self-deceiving, but there could, it seems to me, be people with different personalities, personalities which preclude becoming the best, and which call for self-deception on some occasions.

Delusion fans, fill in X in this form: "I am better off deluding myself into believing X."

"I am better off deluding myself into believing my cherished late spouse was faithful to me."

Not universally true, but plausibly true in some cases.
